
COLLOQUIUM

Wednesdays 13:00 - 14:00

Up Next

Vlad Landa

On

Accurate classification of grape seeds to the variety level using machine learning

March 6, 2024

Grapevine (Vitis vinifera L.) currently includes thousands of cultivars. Discrimination between these varieties was historically done by ampelography and, in recent decades, mostly by genetic analysis. However, when aiming to identify archaeobotanical remains, which are mostly charred and have extremely low genomic preservation, the genomic approach is rarely successful. As a result, variety-level identification of most grape remains is currently out of reach. Because grape pips are highly polymorphic, several attempts have been made to utilize their morphological diversity as a classification tool, mostly using 2D image-analysis techniques. Here, we present a highly accurate varietal classification tool using an innovative and accessible 3D seed-scanning approach. The suggested classification methodology is machine-learning based, applied with the Iterative Closest Point (ICP) registration algorithm and the Linear Discriminant Analysis (LDA) technique. This methodology achieved classification accuracy of 91% to 93% on average when trained on fresh or charred seeds to test fresh or charred seeds, respectively. We show that when classifying 8 groups, enhanced accuracy can be achieved using a "tournament" approach. Future development of this new methodology can lead to an effective seed-classification tool, significantly advancing archaeobotany as well as general taxonomy.
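As a rough illustration of the classification stage described above, the sketch below trains scikit-learn's LDA on synthetic shape descriptors standing in for ICP-aligned 3D seed scans; the feature dimensions, variety count, and noise levels are all placeholder assumptions, not the paper's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_varieties, seeds_per_variety, n_features = 4, 50, 20

# each variety gets its own mean shape descriptor, plus per-seed noise
means = rng.normal(0, 3, size=(n_varieties, n_features))
X = np.vstack([m + rng.normal(0, 1, size=(seeds_per_variety, n_features))
               for m in means])
y = np.repeat(np.arange(n_varieties), seeds_per_variety)

# train on half of the seeds, test on the held-out half
idx = rng.permutation(len(y))
train, test = idx[:100], idx[100:]
lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
accuracy = lda.score(X[test], y[test])
print(f"classification accuracy: {accuracy:.2f}")
```

With well-separated synthetic varieties the accuracy is near perfect; the paper's 91-93% figures come from real fresh and charred seeds, where the classes overlap far more.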


Location of Seminar:

Moreshet Building, room 58.3.35.

Schedule 

Archive 

Inbar Wald

On

Measuring the effects of luck in games

February 21, 2024

Popular sports are a mixture of skill and luck.
Games based solely on skill, where the result is pre-determined, are not attractive to play or watch, while entirely random games tend to be even worse.
We attempt to precisely estimate the effects of hidden luck elements in general games, as even "pure skill" games often harbor undetectable luck factors.
We use a deceptively complicated method that works exceptionally well with the new definition of luck we provide.
The implications of these estimations reach into practical and social questions, such as determining the optimal game count for esports and legal considerations in discerning the gambling nature of certain games.
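The esports question above can be made concrete with a small Monte Carlo sketch (an illustration, not the talk's method): model each game as a biased coin flip and see how the probability that the stronger player wins a series grows with the series length.

```python
import random

def series_win_prob(p, n_games, trials=20000, seed=1):
    """Estimate the chance the stronger player wins a majority of n_games."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if sum(rng.random() < p for _ in range(n_games)) > n_games / 2:
            wins += 1
    return wins / trials

p = 0.6  # assumed per-game edge of the stronger player
probs = {n: series_win_prob(p, n) for n in (1, 5, 25)}
print(probs)
```

The more luck a single game contains (the closer p is to 0.5), the longer the series must be before skill dominates the outcome.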

Rafi Shalom

On

Formal Methods in Software Engineering, Security and Computer Science

February 14, 2024

Formal methods are applicable in various domains in a way that combines the rigor of mathematical theory with fully implemented and evaluated tools.

 

We focus on reactive synthesis of specifications reducible to GR(1). Reactive synthesis is an automated procedure to obtain a correct-by-construction reactive system from its temporal logic specification. GR(1) is an assume-guarantee fragment of Linear Temporal Logic (LTL) that has an efficient symbolic synthesis algorithm. We present two works about explaining and repairing unrealizable specifications, i.e., specifications that have no correct implementation.

 

One means to deal with unrealizability is the computation of an unrealizable core, which can be viewed as a fault-localization approach.
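A minimal illustration of the idea (not QuickCore): a deletion-based minimization loop that shrinks a constraint set as long as it remains "unrealizable". The toy realizability check over numeric bounds below is a stand-in for a real GR(1) realizability checker.

```python
def unrealizable(constraints):
    """Toy check: bounds of the form ('>=', v) / ('<=', v) contradict."""
    lows = [v for op, v in constraints if op == ">="]
    highs = [v for op, v in constraints if op == "<="]
    return bool(lows) and bool(highs) and max(lows) > min(highs)

def minimal_core(constraints):
    """Try dropping each constraint once; keep the drop if still unrealizable."""
    core = list(constraints)
    for c in list(core):
        trial = [d for d in core if d != c]
        if unrealizable(trial):
            core = trial
    return core

spec = [(">=", 5), ("<=", 3), (">=", 1), ("<=", 9)]
print(minimal_core(spec))
```

Each removal attempt costs one realizability check, which is exactly why naive core computation is expensive when each check is itself a costly synthesis call.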

 

Existing solutions, however, are computationally costly. We present QuickCore, the first domain-specific core-computation algorithm for this setting, which considerably accelerates unrealizable core computations.

Another means to handle unrealizability is to automatically infer additional assumptions that result in a realizable specification, namely, specification repair. We present two novel symbolic algorithms for repairing unrealizable GR(1) specifications. The first algorithm infers new assumptions based on the justice violations transition system (JVTS), a symbolic, compact representation of counter-strategies. The second algorithm infers new assumptions directly from the specification, and it is the first complete GR(1) repair algorithm. These are the first symbolic algorithms for this problem, and they are highly efficient and scalable compared with previous approaches.

We implemented our work on top of Spectra, an open-source language and synthesis environment.

We validated the correctness of our implementation and evaluated our ideas on benchmarks from the literature.

 

Finally, we will present one slide on using formal methods for each of the following: synthesis from component and connector specifications, attack graph minimization, and ontology alignment.

A short Bio:

Dr. Rafi Shalom is a researcher at Accenture Labs Israel; Accenture is a Fortune Global 500 company. He is also employed part-time as research staff at TAU, and has many years of teaching experience.

He received his Computer Science PhD from TAU in 2021, and conducted postdoctoral studies at TAU.

Eliya Nachmani

On

Towards Realistic Immersive 3D Audio Generation

February 7, 2024

Recent advancements in audio and language processing have yielded significant progress in audio analysis and synthesis. In the realm of audio analysis, researchers are addressing the crucial challenges of Automatic Speech Recognition (ASR), Sound Localization, Event Detection, Emotion Recognition, Speaker Diarization, and Speaker Identification. Meanwhile, in the synthesis domain efforts are focused on Speech Synthesis, Speech Separation, Audio Vocoders, and Speech-Bots. Despite the progress made, there remains a significant void in the advancement of neural audio generative models that possess the capability to understand audio landscapes and skillfully create or improve new auditory surroundings. In this talk, I will address two pivotal research directions aimed at closing this gap: 

 

(i) The development of an oracle-powered speechbot involves achieving a profound understanding of the acoustic environment and integrating comprehensive world knowledge. I'll present Spectron, a speechbot that leverages a Large Language Model (LLM) to perform question answering (QA) and speech continuation.

 

(ii) The second challenge revolves around audio separation for a multitude of sources. While the current audio separation literature predominantly focuses on isolating single-source domains such as speech or sound events, real-world scenarios demand the separation of diverse sources such as speech, noise, and acoustic events. I will present a solution capable of separating numerous speakers from a single microphone recording, as well as a theoretical upper bound for single-channel speech separation.

 

Concluding the discussion, I will outline future research directions, focusing on the evolution of multi-agent speechbots, the advancement of generative audio models within the 3D domain, and the fusion of synthetic sounds into real-world environments.

 

 

Short Bio:

Eliya Nachmani currently serves as a research scientist at Google Research, specializing in machine learning for audio processing. Prior to his role at Google, he conducted research at Facebook AI Research (FAIR) and pursued his Ph.D. at Tel-Aviv University. Eliya holds a Master of Science in Electrical Engineering from Tel-Aviv University and a Bachelor of Science in Electrical Engineering from the Technion.

Varun Narayanan

On

A Complete Characterization of Broadcast and Pseudo-Signatures from Correlations

January 31, 2024

Unconditionally secure broadcast is feasible among parties connected by pairwise secure links only if there is a strict two-thirds majority of honest parties when no additional resources are available. This limitation may be circumvented when the parties have recourse to additional resources such as correlated randomness. Fitzi, Wolf, and Wullschleger (CRYPTO 2004) attempted to characterize the conditions on correlated randomness shared among three parties which would enable them to realize broadcast. Due to a gap in their impossibility argument, it turns out that their characterization is incorrect. Using a novel construction, we show that broadcast is feasible under a considerably larger class of correlations. In fact, we realize pseudo-signatures, which are information-theoretic counterparts of digital signatures, using which unconditionally secure broadcast may be obtained. We also obtain a matching impossibility result, thereby characterizing the class of correlations on which three-party broadcast (and pseudo-signatures) can be based. Our impossibility proof, which extends the well-known argument of Fischer, Lynch and Merritt (Distr. Comp., 1986) to the case where parties observe correlated randomness, may be of independent interest.

 

This is joint work with Vinod Prabhakaran, Neha Sangwan, and Shun Watanabe, presented at Eurocrypt 2023.

 

About me: Varun Narayanan completed his Ph.D. in computer science from TIFR. He is a researcher working in cryptography and information theory. Following his Ph.D., Varun was a post-doctoral fellow in the computer science department at Technion. He is currently a post-doc at UCLA, hosted by Rafail Ostrovsky.

Dmitry Itsykson

On

Lower bounds for regular resolution over parities

January 24, 2024

The Boolean satisfiability problem SAT is one of the most important NP-complete problems. Nowadays, SAT solvers handle formulas arising in practice with thousands of variables. The fastest SAT solvers are based on CDCL (conflict-driven clause learning). It is known that for unsatisfiable formulas, CDCL solvers construct resolution refutations. Thus, formulas that do not have short resolution refutations are hard for CDCL-based SAT solvers. For example, it is known that unsatisfiable linear systems over GF(2) require resolution refutations of exponential size; hence GF(2) linear systems are hard for CDCL solvers.

The proof system resolution over parities (Res(⊕)) extends the resolution proof system by incorporating linear algebra over GF(2). Proving a superpolynomial lower bound on the size of Res(⊕) refutations remains a highly challenging open question. In the talk, we will discuss the importance of this problem and present our recent results about the exponential lower bound for regular resolution over parities.

No specific prior knowledge is required. An accessible introduction to propositional proof complexity theory will be given.

The talk is based on joint work with Klim Efremenko and Michal Garlik.
https://eccc.weizmann.ac.il/report/2023/187/

Aryeh Kontorovich

On

Estimating the distribution in various norms

January 17, 2024

We consider the problem of estimating a discrete distribution in various natural norms that also come up in common applications: ℓ₁, ℓ₂, ℓ∞. We will discuss the distinct behaviors of all of these estimates and their minimax risk rates, as well as the best-case and worst-case distributions for estimation in each of the norms.
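A quick numerical sketch of the setting, assuming nothing beyond the standard empirical estimator: Monte Carlo risk of the maximum-likelihood estimate under the three norms for a uniform distribution, with the closed-form l2 risk (1 - sum p_i^2)/n as a sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, trials = 5, 1000, 2000
p = np.full(K, 1 / K)

errs = {"l1": [], "l2sq": [], "linf": []}
for _ in range(trials):
    p_hat = rng.multinomial(n, p) / n        # empirical (ML) estimate
    diff = p_hat - p
    errs["l1"].append(np.abs(diff).sum())
    errs["l2sq"].append((diff ** 2).sum())
    errs["linf"].append(np.abs(diff).max())

risk = {norm: float(np.mean(v)) for norm, v in errs.items()}
print(risk)  # l2sq should be close to (1 - sum(p**2)) / n = 0.0008
```

The three norms scale differently with the alphabet size K, which is one reason their minimax behaviors, best-case, and worst-case distributions diverge.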

Some applications will be presented.

Joint work with Amichai Painsky.

Eran Omri

On

What I Learned on my Sabbatical and what I Still Don't Know about Maliciously Secure Distributed Private Data Analysis

January 10, 2024

In this talk I will tell you about my sabbatical year at Georgetown University. I will briefly survey some of the projects that went on while I was at GU, taking the point of view of the imitation game, borrowed from the Turing test, as it carries over into several research fields.
I will end the talk with ongoing work on distributed differential privacy, regarding the search for an adequate notion of security in the malicious setting. I will present some ideas and try to explain where they fall short.

Avigail Stekel

On

Evidence From Computational Linguistics for the Concept of Biconsonantal Etymons in Hebrew

Semitic morphology derives triconsonantal (3C) roots, which make up the majority of words in Biblical Hebrew as well as in other Semitic languages such as Modern Hebrew and Arabic, from biconsonantal (2C) etymons. The rules for reducing the 3C roots to their 2C etymons are provided in detail. We use BHSA, a manually annotated corpus of the Hebrew Bible, and Word2Vec, a method for converting words to vectors representing their semantic meaning, to study the hypothesis of evolution from 2C etymons to 3C roots in Biblical Hebrew. Namely, we show that words in Hebrew with different roots that might have originated from the same 2C etymon form a denser cluster than random sets of words of the same size. These differences are statistically significant and strongly support the hypothesis of evolution from 2C etymons to 3C roots.
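The cluster-density test described above can be sketched as a permutation test; the toy vectors below stand in for Word2Vec embeddings trained on the BHSA corpus, and the artificially tight cluster is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = rng.normal(0, 1, size=(200, 50))                     # toy word vectors
cluster_idx = np.arange(8)                                   # putative 2C-etymon set
vocab[cluster_idx] = rng.normal(0, 0.2, size=(8, 50)) + 2.0  # make it tight

def mean_pairwise_dist(vecs):
    d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
    return d[np.triu_indices(len(vecs), k=1)].mean()

observed = mean_pairwise_dist(vocab[cluster_idx])
random_dists = [mean_pairwise_dist(vocab[rng.choice(200, 8, replace=False)])
                for _ in range(500)]
p_value = float(np.mean([d <= observed for d in random_dists]))
print(round(float(observed), 3), p_value)
```

The p-value is the fraction of random same-size word sets at least as dense as the candidate cluster, mirroring the significance test described in the abstract.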

Elina Opalinsky

On

Selective Dynamic Compression

July 12, 2023

Dynamic compression methods continuously update the model of the underlying text file to be compressed according to the already processed part of the file, assuming that such a model accurately predicts the distribution in the remaining part. Since this premise is not necessarily true, we suggest updating the model only selectively. We give empirical evidence that this hardly affects the compression efficiency, while it obviously may save processing time.
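A toy version of the idea (an illustrative assumption, not the paper's scheme): an order-0 adaptive model whose counts are updated only a fraction of the time, compared with the always-updated model by estimated code length.

```python
import math
import random
from collections import Counter

def code_length(text, update_prob=1.0, seed=0):
    """Bits to encode text with an order-0 adaptive model whose counts
    are updated only with probability update_prob per symbol."""
    rng = random.Random(seed)
    counts = Counter({c: 1 for c in set(text)})   # Laplace-smoothed start
    total, bits = len(counts), 0.0
    for c in text:
        bits += -math.log2(counts[c] / total)
        if rng.random() < update_prob:            # selective update
            counts[c] += 1
            total += 1
    return bits

text = "abracadabra " * 50
full = code_length(text, update_prob=1.0)
selective = code_length(text, update_prob=0.25)
print(round(full), round(selective))
```

Because the selectively updated counts are still an unbiased sample of the symbol stream, the estimated code lengths end up close while the selective model performs a quarter of the updates.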

Milana Frenkel-Morgenstern

On

Advantages of Deep Learning and Graph Theory in Diagnostics and Personalized Therapies of Complex Diseases

July 05, 2023

Circulating cell-free DNA (cfDNA) is free-floating small fragments of DNA in blood plasma that are not associated with cell fragments. The plasma or serum cfDNA pattern is indicative of an apoptotic and/or necrotic origin. We have analyzed cfDNA cell secretion, mutations, chromosomal aberrations, epigenetic markers, and pathogen metagenomics using protein-protein interaction (PPI) networks, statistics, and deep learning techniques. In glioblastoma, we used the PPI graphs to identify protein fusions that can be targeted by tyrosine kinase inhibitors, suggesting targeted therapy. We also used deep learning and ChiTaH, our "reference-based" computational method, to build a novel liquid-biopsy platform in rheumatoid arthritis to reduce joint destruction. We suggest that computational methods may improve diagnostics and may serve patients with various complex diseases by providing tailored therapies.

Saed Asaly

On

Applying Machine Learning (ML) Techniques with Ground- and Space-Based Measurements for Natural Hazard Prediction

June 21, 2023

Natural hazards such as solar flares, earthquakes, and flash floods can cause significant damage and loss of life, making the development of accurate and reliable methods for predicting these events critically important. This PhD thesis presents a comprehensive study on the application of machine learning techniques and diverse data sources for predicting natural hazards. Through the analysis of three articles, we investigate the potential of these techniques to improve the accuracy and robustness of natural hazard prediction models.

The first article proposes a new methodology for predicting solar flare events up to 3 days before they occur by applying the Support Vector Machine (SVM) technique to Total Electron Content (TEC) maps extracted from Global Navigation Satellite System (GNSS) ionospheric path delays. The second article tests the ability to predict earthquake events using ionospheric TEC anomalies by applying the SVM technique to GPS ionospheric TEC time series data. The third article explores the potential of using machine learning, along with Precipitable Water Vapor (PWV) derived from GNSS tropospheric path delays and lightning activity data, to accurately classify flash flood events, achieving strong performance across multiple score metrics on the testing set.

The research conducted in this thesis provides new insights into the use of machine learning techniques and diverse data sources for predicting natural hazards. The results demonstrate the potential of these techniques to improve the accuracy and robustness of natural hazard prediction models, but also highlight the limitations of the research, such as the small number of data points and the narrow focus on specific regions or types of hazards. This thesis provides a strong foundation for future research to build upon, to improve the performance of the algorithms, and to explore the use of additional data sources for predicting natural hazards more effectively.
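The SVM setup of the first two articles can be sketched roughly as follows, with synthetic "TEC anomaly" feature vectors as placeholders for features extracted from real GNSS data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
quiet = rng.normal(0.0, 1.0, size=(n, 10))    # background ionosphere features
active = rng.normal(1.5, 1.0, size=(n, 10))   # pre-event anomaly pattern
X = np.vstack([quiet, active])
y = np.array([0] * n + [1] * n)

idx = rng.permutation(2 * n)
train, test = idx[:300], idx[300:]
clf = SVC(kernel="rbf").fit(X[train], y[train])
acc = clf.score(X[test], y[test])
print(f"test accuracy: {acc:.2f}")
```

Real TEC-derived features are far noisier and less balanced than this synthetic split, which is precisely the data-scarcity limitation the thesis notes.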

Keren Nivasch

On

Keyboard Layout Optimization and Adaptation

June 14, 2023

Since the keyboard is the most common method for text input on computers today, the design of the keyboard layout is very significant. Despite the fact that the QWERTY keyboard layout was designed more than 100 years ago, it is still the predominant layout in use today. There have been several attempts to design better layouts, both manually and automatically. In this paper we improve on previous work on automatic keyboard layout optimization by using a deep neural network to assist a genetic search algorithm, which enables the use of a sophisticated keyboard evaluation function that would otherwise take a prohibitive amount of time. We also show that a better choice of crossover routine greatly improves the genetic search. Finally, we conduct layout adaptation experiments with 300 participants to examine how users with different levels of experience adapt to new keyboard layouts.
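A much-simplified stand-in for the search described above: a mutation-only evolutionary search over key assignments with a plain distance cost (the paper instead uses a genetic algorithm with crossover, guided by a deep-network evaluation function).

```python
import random

KEYS = [(r, c) for r in range(3) for c in range(9)]   # 27 toy key positions
LETTERS = list("abcdefghijklmnopqrstuvwxyz ")          # 26 letters + space
TEXT = "the quick brown fox jumps over the lazy dog " * 20

def cost(layout):
    """Total Manhattan travel distance to type TEXT on this layout."""
    pos = {ch: KEYS[i] for i, ch in enumerate(layout)}
    total = 0
    for a, b in zip(TEXT, TEXT[1:]):
        (r1, c1), (r2, c2) = pos[a], pos[b]
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

rng = random.Random(0)
layout = LETTERS[:]                 # start from the alphabetical layout
initial = cost(layout)
best = initial
for _ in range(3000):               # mutation-only evolutionary search
    i, j = rng.randrange(27), rng.randrange(27)
    layout[i], layout[j] = layout[j], layout[i]
    c = cost(layout)
    if c < best:
        best = c
    else:
        layout[i], layout[j] = layout[j], layout[i]   # revert bad swap
print(initial, best)
```

A realistic evaluation function (finger alternation, same-finger penalties, row weights) is far more expensive than this distance sum, which is what motivates the paper's learned surrogate.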

Noa de Roos

On

Applications of Stair Convexity

June 07, 2023

In our research we consider two problems in discrete geometry: the weak epsilon-nets problem and the stabbing simplices problem. Both problems have been extensively investigated. We improve their bounds and investigate cases that have not been explored yet. In the first problem, which we study in the planar case, our objective is to construct a set of points $S\subset\mathbb{R}^2$ such that for any $k$ points in $\mathbb{R}^2$, there exists a convex set that contains at least $\varepsilon|S|$ points of $S$ and does not contain any of the $k$ points. We want $\varepsilon=\varepsilon(k)$ as large as possible. In the second problem, the construction we want is a set of $n$ points $S\subset\mathbb{R}^d$ such that for any $k$ points in $\mathbb{R}^d$ there are at most $c_{d,k}n^{d+1}-o(n^{d+1})$ simplices spanned by $S$ that contain at least one of the $k$ points. We search for a construction that gives $c_{d,k}$ as small as possible. Here too, we focus on the case $d=2$. For these constructions we use a tool called \textit{stair-convexity}, a modification of the notion of convexity. In this thesis, we present a construction which proves $\varepsilon(2)\geq 4/7$, and constructions which provide strong evidence for $\varepsilon(3)\geq 0.47778\ldots$, $\varepsilon(4)\geq 0.415\ldots$, and $\varepsilon(5)\geq 0.377\ldots$. For the second problem, we provide strong evidence for $c_{2,2}\leq 0.05297$ and $c_{2,3}\leq 0.06557$. Finally, we consider the problem of constructing, for every fixed $k$ and every $n$, an $n$-point set $S\subseteq\mathbb{R}^2$ in general position and a $k$-point set $K\subseteq\mathbb{R}^2$ such that $K$ stabs as many stair-triangles spanned by $S$ as possible. The corresponding problem for regular triangles instead of stair-triangles has not, as far as we know, been studied before for $k>1$.


Dean Doron

On

Almost Chor–Goldreich Sources and Adversarial Random Walks

May 24, 2023

In this talk we consider the following adversarial, non-Markovian random walk on "good enough" expanders: starting from some fixed vertex, walk according to the instructions X = X1,…,Xt, where each Xi is only somewhat close to having a little entropy, conditioned on any prefix. The Xi-s are not independent, meaning that the distribution of the next step depends not only on the walk's current node, but also on the path it took to get there.

We show that such walks (or certain variants of them) accumulate most of the entropy in X.

 

We call such X-s “almost Chor–Goldreich (CG) Sources”, and our result gives deterministic condensers with constant entropy gap for such sources, which were not known to exist even for standard CG sources, and even for the weaker model of Santha–Vazirani sources. As a consequence, we can simulate any randomized algorithm with small failure probability using almost CG sources with no multiplicative slowdown. This result extends to randomized protocols as well, and any setting in which we cannot simply cycle over all seeds, and a “one-shot” simulation is needed.

 

Joint work with Dana Moshkovitz, Justin Oh, and David Zuckerman.


Ayal Klein

On

Representing Natural Language with Natural Language

May 17, 2023

Natural Language Processing (NLP) tasks often require an explicit representation of the information conveyed by text, particularly in challenging scenarios. However, existing semantic representation approaches that rely on comprehensive linguistically-based formalisms are difficult to obtain at scale and to transfer to new languages, as they require expert annotators. In this talk, I will present my research on developing a comprehensive semantic representation that is intuitive and accessible to non-experts.

 

The proposed approach captures the meaning of sentences using natural-language question-answer pairs that correspond to the elementary information units expressed by the sentence. To accomplish this, we developed specialized annotation tasks and question templates targeting various linguistic constructions, such as verb-argument relations, nominalizations, discourse relations, adjectives, and complex noun phrases. Attaining our QA-based representation through crowdsourcing platforms has yielded high-quality annotations and strong models.

 

 

I will present several works that leverage our QA-based representation, and conclude by discussing our current efforts to make it available for new languages. Overall, this research provides a promising solution for addressing the challenge of developing a comprehensive semantic representation that is natural and intuitive for non-experts.

Or Haim Anidjar

On

The Intersection Between Speech Recognition & Natural Language Processing

May 10, 2023

You are the CEO of a successful, global company.

Thousands of meeting hours are recorded every year. As a result, tasks, targets, and summaries get lost due to a lack of documentation.

Thus, you are interested in an automatic solution that enables:

•A searchable database of specific employees' voiceprints

•Determining who spoke when, and about what

•Handling new customers as they are added to meetings

In addition, as your company is a global one, different languages are spoken during meetings all over the world. How would this affect your approach? Does it require any changes?

Shula Shazman

On

A Recommendation System for Selecting an Intermittent Fasting Method to Improve Health in Type 2 Diabetes

May 3, 2023

This study describes a machine learning approach to reveal the rules for optimal treatment to improve Type 2 Diabetes (T2D) risk parameters per individual. The results suggest a promising recommendation system to support physicians in giving personalized-medicine advice to their patients. The input to the system is the details of the query person, including age, gender, Body Mass Index (BMI), weight, basal fasting glucose level, fasting glucose level after intervention, basal fasting insulin level, and fasting insulin level after intervention. The system combines several decision tree classifiers and is based on data collected from 13 randomized clinical trial studies of different intermittent fasting interventions in humans. The recommendation system can successfully predict the most efficient intermittent fasting approach to help an individual reduce his or her T2D risk parameters, with an AUC of 0.85 and 80% accuracy. The ability to predict whether a specific intermittent fasting intervention is successful in improving T2D risk parameters was already achieved and published in a previous study of mine: "A Machine Learning Approach to Select the Type of Intermittent Fasting in Order to Improve Health by Effects on Type 2 Diabetes", Bioinformatics 2020, DOI: 10.5220/0008950201310137. However, the new recommendation system presented in this study makes it possible to rank the different intermittent fasting methods and choose the most efficient method to recommend.
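A minimal sketch of the ensemble idea with synthetic patient records; the features, the toy decision rule generating the labels, and the two "methods" are illustrative assumptions only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
# columns: age, BMI, fasting glucose (mg/dL), fasting insulin (uU/mL)
X = np.column_stack([
    rng.integers(30, 75, n),
    rng.uniform(20, 40, n),
    rng.uniform(90, 180, n),
    rng.uniform(3, 25, n),
])
# toy ground truth: method 1 suits high-BMI patients, method 0 the rest
y = (X[:, 1] > 30).astype(int)

trees = []
for s in range(5):
    idx = rng.integers(0, n, n)          # bootstrap sample per tree
    trees.append(DecisionTreeClassifier(max_depth=3, random_state=s)
                 .fit(X[idx], y[idx]))

def recommend(patient):
    votes = [t.predict([patient])[0] for t in trees]
    return max(set(votes), key=votes.count)

print(recommend([55, 35.0, 150.0, 12.0]))
```

The real system is trained on clinical-trial outcomes rather than a hand-written rule, but the voting-over-trees structure is the same.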

Hila Dahari

On

That’s Not my Signature! Fail-Stop Signatures, Reloaded

March 29, 2023

Snowden's revelations kick-started a community-wide effort to develop cryptographic tools against mass surveillance. In this work, we propose to add another primitive to that toolbox: Fail-Stop Signatures (FSS) [EUROCRYPT '89]. FSS are digital signatures enhanced with a forgery-detection mechanism that can protect a PPT signer from more powerful attackers. Despite the fascinating concept, research in this area stalled after the '90s. However, the ongoing transition to post-quantum cryptography, with its hiccups due to the novelty of the underlying assumptions, has become the perfect use case for FSS.

Our work aims to reboot research on FSS following Rogaway's call [ASIACRYPT 2015] for more effective means to resist mass surveillance. The first part of the paper includes a systematization of knowledge and provides formal definitions of FSS while classifying and improving the minimal assumptions required to build them. We also show that FSS are equivalent to standard, provably secure digital signatures that do not require rewinding or programming random oracles.

The second part of our work contains new FSS constructions for the post-quantum setting. We start with a toy example of lattice-based FSS as a proof-of-concept for alternative post-quantum FSS. We then continue with an FSS version of SPHINCS, which required building FSS versions of all its building blocks: WOTS, XMSS, and FORS. We identify fundamental issues when deriving a large number of private keys from a single seed and when building FSS for Hash-and-Sign-based signatures. We provide generic solutions based on extending the FSS model to allow for fine-grained assumptions.   

 

This is a joint work with Cecilia Boschini, Moni Naor, and Eyal Ronen.

Bar Alon

On

MPC for Tech Giants (GMPC):
Enabling Gulliver and the Lilliputians to Cooperate Amicably

March 22, 2023


In the current digital world, large organizations (sometimes referred to as tech giants) provide service to extremely large numbers of users. The service provider is often interested in computing various data analyses over the private data of its users, who in turn have their own incentives to cooperate, but do not necessarily trust the service provider.

 

In this work, we introduce the Gulliver multi-party computation model (GMPC) to realistically capture the above scenario. The GMPC model considers a single highly powerful party, called the server or Gulliver, that is connected to n users over a star topology network (alternatively formulated as a full network, where the server can block any message). The users are significantly less powerful than the server, and, in particular, should have both computation and communication complexities that are polylogarithmic in n.
Protocols in the GMPC model should be secure against malicious adversaries that may corrupt a subset of the users and/or the server. 

 

Designing protocols in the GMPC model is a delicate task, since users can only hold information about polylog(n) other users (and, in particular, can only communicate with polylog(n) other users). In addition, the server can block any message between any pair of honest parties. Thus, reaching an agreement becomes a challenging task. Nevertheless, we design generic protocols in the GMPC model, assuming that at most \alpha<1/8 fraction of the users may be corrupted (in addition to the server). Our main contribution is a variant of Feige's committee election protocol [FOCS 1999] that is secure in the GMPC model. Given this tool, we show:

  1. Assuming fully homomorphic encryption (FHE), any computationally efficient function with O(polylog(n))-size output can be securely computed in the GMPC model.

  2. Any function that can be computed by a circuit of O(polylog(n)) depth, O(polylog(n)) size, and bounded fan-in and fan-out can be securely computed in the GMPC model without assuming FHE.

  3. In particular, sorting can be securely computed in the GMPC model without assuming FHE. This has important applications for the shuffle model of differential privacy, and resolves an open question of Bell et al. [CCS 2020].

Gabriel Hartmann

On

Meta-Reinforcement Learning Using Model Parameters

January 25, 2023

In reinforcement learning, an agent typically attempts to achieve a high reward by training in a single environment. In meta-reinforcement learning, an agent is trained in multiple, different environments and attempts to learn a meta-policy that can efficiently adapt to new environments.
In this talk, I will present a new meta-reinforcement learning agent that utilizes the idea that a neural network, trained to predict environment dynamics, encapsulates the environment information. Therefore, the neural network's parameters can serve as the context for the reinforcement learning agent. This method is constructed in two phases: in the first phase, a multi-environment parameterized dynamic model is learned. In the second phase, the model parameters of this dynamic model are used as context for the multi-environment policy of a multi-environment reinforcement learning agent. The performance of this method is demonstrated in simulated experiments and compared to existing methods.
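The core idea can be sketched in a few lines if we substitute a linear least-squares dynamics model for the neural network (an illustrative simplification): fit a dynamics model per environment and use its flattened parameters as a context vector identifying the environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect(env_matrix, steps=200):
    """Roll out random actions in s' = A s + a; return (state-action, next-state) data."""
    s = rng.normal(size=2)
    X, Y = [], []
    for _ in range(steps):
        a = rng.normal(size=2)
        s_next = env_matrix @ s + a
        X.append(np.concatenate([s, a]))
        Y.append(s_next)
        s = s_next
    return np.array(X), np.array(Y)

def context(env_matrix):
    """Fit the dynamics by least squares; the flattened parameters are the context."""
    X, Y = collect(env_matrix)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W.ravel()

env_a = np.array([[0.9, 0.0], [0.0, 0.9]])
env_b = np.array([[0.5, 0.3], [-0.3, 0.5]])
c_a1, c_a2, c_b = context(env_a), context(env_a), context(env_b)
print(np.linalg.norm(c_a1 - c_a2), np.linalg.norm(c_a1 - c_b))
```

Two rollouts in the same environment yield nearly identical context vectors, while a different environment yields a clearly different one, which is what lets a policy conditioned on this context adapt across environments.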

Steven B. Damelin

On


Whitney Extensions and the Alignment of Data Problem in R^n with Applications to Optimal Transport, Clustering, Manifold Learning, Vision and Neuroscience

January 18, 2023

Manifold learning, alignment of data, clustering, and optimal transport are popular areas of research with important applications, for example, in vision, signal processing, medicine, and neuroscience. In this talk I will discuss recent work on a new way to view the alignment-of-data problem in R^n and optimal transport via Whitney extension maps, with applications to manifold learning, clustering, vision, medicine, and neuroscience.

 

Collaborators: Joe Kileel, University of Texas at Austin; Keaton Hamm, University of Texas at Arlington; Soheil Kolouri, Vanderbilt University; N. Charalambides, University of Michigan; and B. Swartz, University of Michigan.

 

References:

 

1. S. B. Damelin, On the Whitney near extension problem, BMO, alignment of data, best approximation in algebraic geometry, manifold learning and their beautiful connections: A modern treatment, John Wiley & Sons, to appear. https://arxiv.org/abs/2103.09748

 

2. S. B. Damelin and W. Miller, The Mathematics of Signal Processing, Cambridge Texts in Applied Mathematics (No. 48), February 2012.

Slides are available here.

Uri Stemmer

On

Differential Privacy 101: How to Approximate the Median of the Data

January 11, 2023

In this talk I will survey the following task: Given n numbers (each representing the information of one individual, say blood sugar level), return a privacy preserving approximation for the median of these numbers. This is arguably one of the most fundamental and basic tasks in the literature of private data analysis. I will survey our current understanding of this task, focusing on the question of how many input points are needed (i.e., what should n be) in order to achieve both privacy and utility simultaneously. As we will see, the answer to this question is surprisingly complex.
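One standard baseline for this task (not necessarily the algorithm surveyed in the talk) is the exponential mechanism, scoring each candidate value by how evenly it splits the data:

```python
import math
import random

def dp_median(data, domain, epsilon, seed=0):
    """Exponential mechanism for the median; the utility's sensitivity is 1."""
    rng = random.Random(seed)
    # utility: minus how unbalanced the data is around candidate m
    scores = [-abs(sum(x < m for x in data) - sum(x > m for x in data))
              for m in domain]
    weights = [math.exp(epsilon * s / 2) for s in scores]
    r = rng.random() * sum(weights)
    acc = 0.0
    for m, w in zip(domain, weights):
        acc += w
        if r <= acc:
            return m
    return domain[-1]

data = [12, 15, 20, 22, 30, 35, 41]
print(dp_median(data, domain=range(0, 51), epsilon=2.0))
```

Note the sample-complexity question the talk studies: how large the data set must be before this noisy selection is both private and a useful approximation of the true median.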


Based on a sequence of works, joint with Amos Beimel, Mark Bun, Edith Cohen, Haim Kaplan, Katrina Ligett, Xin Lyu, Yishay Mansour, Moni Naor, Jelani Nelson, Kobbi Nissim, Tamás Sarlós, and Salil Vadhan.

Eli Bagno

On

A q,r-analogue of poly Stirling numbers of the first and second kinds with combinatorial applications

January 4th, 2023

Abstract.  

The Stirling number of the first kind counts the number of permutations that decompose into a prescribed number of cycles. Likewise, the Stirling number of the second kind counts the number of set partitions having a prescribed number of blocks.
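For reference, the two classical type-A recurrences behind these counts can be sketched in a few lines of Python (a standard illustration, not material from the talk):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1(n, k):
    """Unsigned Stirling numbers of the first kind c(n, k):
    permutations of n elements with exactly k cycles.
    Recurrence: c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k):
    set partitions of an n-set into exactly k blocks.
    Recurrence: S(n, k) = S(n-1, k-1) + k * S(n-1, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)
```

Summing c(n, k) over k recovers n!, and summing S(n, k) over k gives the Bell numbers; the type B and q,r-poly generalizations of the talk refine exactly these recurrences.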

This setting is called "type A" in the literature, since it involves the Coxeter groups of type A, a.k.a. the symmetric groups. The Coxeter group of type B, the group of signed permutations, provides a different context for Stirling numbers, and much effort has been devoted to extending the concepts involved in the combinatorics of the Stirling numbers to that context.

This talk deals with several generalizations of Stirling numbers of the first and second kinds to the setting of Coxeter groups of type B, in both their analytical and combinatorial aspects.

In the analytical part, we generalize results of Comtet and Lancaster on equivalent characterizations of Stirling numbers (recursions, generating functions and more) to the q,r-poly-Stirling numbers in the type B setting.

In the combinatorial part, we apply the approach of Cai and Reddy, using restricted growth words to represent set partitions of type B, and define a new parameter on restricted growth words of type B that lets us realize combinatorially some of the results obtained by analytic methods.

 

A new approach to Stirling numbers of the first kind of type B, via a new definition of restricted growth words for cycles, will also be presented and will lead to a new parameter on those words. 

 

Based on a joint work with David Garber and Takao Komatsu.

Denil Sharipov

On

Polynomial Formulations as a Barrier for Reduction-Based Hardness Proofs

December 28th, 2022

Abstract.  

The Strong Exponential Time Hypothesis (SETH) asserts that for every \varepsilon>0 there exists k such that k-SAT requires time (2-\varepsilon)^n. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-k-SAT, and Set Cover.
In this paper, we show that fine-grained reductions implying even \lambda^n-hardness of these problems from SETH for any \lambda>1, would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade open question).
We also extend this barrier result to the class of parameterized problems. Namely, for every \lambda>1 we conditionally rule out fine-grained reductions implying SETH-based lower bounds of \lambda^k for a number of problems parameterized by the solution size k.
Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).

Anat Paskin-Cherniavsky & Divya Ravi

On

 On the Bottleneck Complexity of MPC with Correlated Randomness

December 21st, 2022

Abstract.  

In their PKC'21 paper, Orlandi et al. initiated the study of the bottleneck complexity of MPC in the preprocessing model, where parties are given correlated randomness ahead of time.
They present two constructions of bottleneck-efficient MPC protocols, whose bottleneck complexity is independent of the number of parties:

1. A protocol for computing abelian programs, based only on one-way functions.
2. A protocol for selection functions, based on any linearly homomorphic encryption scheme.

Compared with previous bottleneck-efficient constructions, these protocols can be based on a wider range of assumptions and avoid the use of fully homomorphic encryption. We present their results and briefly discuss some of the follow-up work, done in collaboration between Ariel University and Aarhus University.

Boaz Ben-Moshe

On

Maximizing the Throughput of Nanosatellites via Ad-hoc Multi Ground Stations

December 7th, 2022

Abstract.  

Nanosatellites are becoming a preferred platform for testing innovative technologies and conducting academic research in space. The growing number of nanosatellites produces a massive amount of data that needs to be transmitted to dedicated ground stations. Nanosatellites commonly have a relatively narrow bandwidth and their expected visibility is about 1% of the time (assuming a single ground station and a LEO). Thus, the effective throughput of nanosatellites is limited.

This research presents the concept of multi ground stations, such as TinyGS, which schedules hundreds of amateur (low-cost) ground stations to “listen” to the “closest” nanosatellite (using LoRa). In particular, the communication performance of our picosatellite (named SATLLA-2B) demonstrates the collaborative potential of the suggested model.

 

Therefore, we present a resource allocation optimization problem to maximize the communication performance of the suggested model. More formally, given a set (S) of n satellites, each of which transmits a unique message on a unique frequency, and a set (R) of k receivers, each of which can listen to only a single frequency, the objective is to maximize the expectation (E) of the total number of unique messages received by all the k receivers.

A theoretical optimization model will be presented, followed by a generic heuristic framework for optimizing the allocation problem. The suggested multi ground station concept can be generalized from RF communication to Free Space Optics (laser communication).

This talk will first present the needed background on nanosatellites; then several optimization problems related to the multi ground station setting will be discussed. Finally, the problems will be generalized to large satellite constellations such as SpaceX's Starlink.

Note: this talk requires no background in satellite communication. During the talk an introduction to nanosatellites will be given, including an introduction to the SATLLA project (as of Q1 2023, the SATLLA project is moving to an all-open-source methodology).

 

Joint work with Rony Ronen, Zachi Ben Shitrit, Michael Britvin and Assaf Hiya from the Nanosatellite Research Center, Department of Computer Science, Ariel University, Israel.

Edward A. Hirsch

On

The Power of the Binary Value Principle

November 23rd, 2022

Abstract.  

The (extended) Binary Value Principle (eBVP) is the equation $\sum x_i2^{i-1} + k = 0$ for $k>0$ and Boolean variables $x_i$. It has received considerable attention recently in (propositional) proof complexity because it allows one to characterize relations between strong (semi)algebraic proof systems by proving polynomial simulations and exponential lower bounds on proof size.

After a brief introduction to the field of proof complexity, I will talk about polynomial simulations that emerge in the presence of eBVP and about some proof size lower bounds. Results (stated informally):

Theorem 1. Strong algebraic systems + eBVP polynomially simulate strong semialgebraic systems.

Theorem 2. The square root rule can be polynomially simulated in strong algebraic systems + eBVP.

Theorem 3. There are no polynomial-size refutations of eBVP in strong algebraic systems (up to the Shub and Smale conjecture for the strongest ones).
Moreover, there are no polynomial-size derivations of any CNFs from eBVP.

The talk is based mainly on
[AH22] Alekseev, Hirsch: The Power of the Binary Value Principle,
arXiv:2210.17429, October 2022.
[AGHT20] Alekseev, Grigoriev, Hirsch, Tzameret: Can a Natural Number be Negative?
Semi-Algebraic Proofs, IPS Lower Bounds and the $\tau$-Conjecture. STOC-2020.

Gabriel Nivasch

On

Fusible Numbers and Peano Arithmetic

November 16th, 2022

Abstract. Fusible numbers are a well-ordered set of rational numbers that naturally arise from a mathematical riddle. Since the order type of the set of fusible numbers is $\varepsilon_0$, it turns out that several first-order properties of fusible numbers are unprovable in Peano Arithmetic. In particular, consider the recursively defined algorithm "M(x): if x<0 return -x, else return M(x-M(x-1))/2". Then M terminates on all real inputs, although PA cannot prove that M terminates on all natural inputs. We also explore some generalizations of fusible numbers that generate larger countable ordinals. Joint work with Alexander Bufetov, Jeff Erickson, Fedor Pakhomov, and Junyan Xu.
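The algorithm M from the abstract can be run directly with exact rational arithmetic. Below is a minimal Python transcription; note that, consistent with the unprovability results, the recursion depth explodes already for small natural inputs (x = 3 is far beyond feasible), so only small rational arguments are practical:

```python
from fractions import Fraction

def M(x):
    """The recursion from the abstract, verbatim:
    M(x) = -x if x < 0, else M(x - M(x-1)) / 2.
    Uses Fraction so all intermediate values stay exact."""
    x = Fraction(x)
    if x < 0:
        return -x
    return M(x - M(x - 1)) / 2
```

For example, M(0) = 1/2 and M(1) = 1/8, while evaluating M at larger integers quickly becomes astronomically expensive.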

Harel Berger

On

Robust Android Malware Detection

November 9th, 2022

Abstract. A growing number of malware detection methods are heavily based on Machine Learning (ML) and Deep Learning techniques. However, these classifiers are often vulnerable to evasion attacks, in which an adversary manipulates a malicious instance so that it evades detection.

This study offers a framework that enhances the effectiveness of ML-based malware detection systems for Android application packages (APKs), following previous work in the PDF domain. It analyzes different aspects of defenses based on retraining against problem-space and feature-space evasion attacks. Several key insights were drawn during this research: the first is a predictor system that tries to predict whether an evasion attack will succeed; the second is the effect of merging two types of feature sets to address evasion attacks of multiple types.

Dana Shapira

On

Weighted Adaptive Coding

July 20th, 2022

Abstract. Huffman coding is known to be optimal under certain constraints, yet its dynamic version, which constantly alters the Huffman tree as a function of the already processed characters, may be even more efficient in practice. A new forward-looking variant of Huffman compression has been proposed recently, which provably always performs better than static Huffman coding by at least m-1 bits, where m denotes the size of the alphabet, and has a better worst case size than the standard dynamic Huffman coding. This talk introduces a new generic coding method, extending the known static and dynamic variants and including them as special cases. In fact, the generalization is applicable to all statistical methods, including arithmetic coding. This leads to the formalization of a new double-pass coding method that is adaptive in the sense that it uses changing statistics depending on the current position within the processed file, yet it behaves like static coding, as it assumes knowledge of the distribution of the entire file; this is contrary to online variants that rely only on the text seen so far and adapt the model dynamically. We call the new method positional coding, and its compression performance, using global statistics, is provably always at least as good as that of the best dynamic variants known to date. Moreover, we present empirical results that show improvements by positional coding and its extensions over static and dynamic Huffman and arithmetic coding, even when the encoded file includes the model description.
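For orientation, the static baseline against which these variants are measured can be sketched as follows; this is the standard two-pass Huffman construction, not the talk's new positional method:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Static Huffman coding: count symbol frequencies over the whole
    text, then repeatedly merge the two lightest subtrees, prefixing
    their codewords with '0' and '1'.  Returns {symbol: bitstring}.
    (Degenerate case: a text with a single distinct symbol gets the
    empty codeword in this minimal sketch.)"""
    # Heap entries are (weight, tiebreak, codes) so dicts are never compared.
    heap = [(w, i, {c: ""}) for i, (c, w) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in c1.items()}
        merged.update({c: "1" + code for c, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

The dynamic, forward-looking and positional variants discussed in the talk change when and from which statistics such a tree is (re)built, not the basic merge step itself.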

Aviram Zilberman

On

Heterogeneous SDN Controller Placement Problem:
The Wi-Fi and 4G LTE-U Case

July 20th, 2022

Abstract.   The widespread deployment of 5G wireless networks has brought about a radical change in the nature of mobile data consumption. It requires the deployment of Long Term Evolution to the unlicensed spectrum (LTE-U) to alleviate the shortage of available licensed spectrum while Wi-Fi is the most dominant radio access technology in the unlicensed spectrum. This has led to suggestions concerning the coexistence of multi radio access technologies such as Wi-Fi and LTE-U. Clearly, Software Defined Networks (SDN) can utilize the LTE-U radio bandwidth efficiently. The SDN paradigm decouples the network's control logic from the underlying routers and switches, which promotes centralization of network control. The controller placement problem is crucial to SouthBound interface optimization and involves the location of the controllers, their number and the assignment function of switches to controllers. In this paper, a novel strategy is presented to address the wireless controller placement problem, which improves the throughput, link failure probability and the transparency on the SouthBound interface in cases where hybrid providers choose either LTE-U or Wi-Fi as link layer technologies. The problem of determining the placement of LTE-U and Wi-Fi based controllers is modeled and two heuristic solutions are proposed. Simulations show that the proposed algorithms provide a fast and efficient solution.

Johannes Carmesin

On

Local Separators

March 2nd, 2022

Abstract. Tree-decompositions and corresponding methods to split graphs along separators are a central tool in algorithmic as well as structural graph theory. It is a natural step to consider local separators of graphs, vertex sets that split graphs only locally.

Similar to the fact that tree-decompositions can be viewed as a set of non-crossing genuine separators, a set of non-crossing local separators can be subsumed as a decomposition of the graph along a genuine decomposition graph. This extends tree-decompositions by allowing genuine decomposition graphs. In recent years we have used these ideas to solve the following problems:

- an extension of Whitney's planarity criterion from the plane to arbitrary surfaces
- a duality theorem characterising graphs containing bounded subdivisions of wheels
- a local strengthening of the block-cutvertex theorem
- a local strengthening of the 2-separator theorem
- a criterion for detecting normal subgroups in finite groups
- a decomposition theorem for graphs without cycles of intermediate length

We expect that these methods will be applied further in parallel
computing for large networks.

 

Maya Stein

On

Towards a Pósa-Seymour conjecture for hypergraphs

March 23rd, 2022

Abstract. A central problem in extremal graph theory is to study degree conditions that force a graph G to contain a copy of some large or even spanning graph F. One of the classical results in this area is Dirac's theorem on Hamilton cycles, and its extension, the Pósa-Seymour conjecture on powers of Hamilton cycles, which has been proved for large graphs by Komlós, Sárközy and Szemerédi. Another classical result is the latter authors' theorem on spanning trees in graphs of large minimum degree.

 

Extensions of these results to hypergraphs, using codegree conditions and tight (powers of) cycles, have been studied by various authors. We give an overview of the known results, and show a generalization of the Komlós-Sárközy-Szemerédi theorem to hypergraphs. We also find a codegree condition which is sufficient for ensuring arbitrary powers of tight Hamilton cycles, for any uniformity. This can be seen as an approximate hypergraph version of the Pósa-Seymour conjecture. En route to our result, we show that the same codegree conditions are sufficient for finding a copy of every spanning hypergraph of bounded tree-width which admits a tree decomposition where every vertex is in a bounded number of bags.

 

This is joint work with Nicolás Sanhueza-Matamala and Matias Pavez-Signé.

 

Ari Trachtenberg

On

Autopsy of a Scientific Death:  
Automated Exposure Notification for COVID-19

March 30th, 2022

Abstract. Way back at the start of the pandemic, Automated Exposure Notification emerged as a promising technology for tracking and isolating those exposed to COVID-19.  One homegrown solution, initially designed at Boston University (BU), sought to connect ubiquitous Bluetooth-capable phones in a privacy-preserving manner that notified close contacts of potential exposure to a sick individual.  After much work with many excellent collaborators and at record pace, the resulting system was elegant, relatively accurate, efficient, and provided concrete privacy guarantees.  Indeed, large countries and many companies the world over implemented some form or another of the system in a futile attempt to control the spread of the infernal disease.  Alas, despite many millions of dollars and thousands of person-hours for design and implementation, the various systems did not appear to provide significant benefit in early identification of new cases.

We will begin with an explanation of the problem space and the system we eventually designed and implemented, focusing on some of the technical and systems challenges.  We will then describe some of the potential reasons why our fantastic system was (i) not very effective, and (ii) could not be published in the peer-reviewed literature:

* R. Canetti, A. Trachtenberg, and M. Varia, Anonymous Collocation Discovery: Taming the Coronavirus While Preserving Privacy. CoRR abs/2003.13670 (2020)

* R. Canetti, Y. T. Kalai, A. Lysyanskaya, R. L. Rivest, A. Shamir, E. Shen, A. Trachtenberg, M. Varia, and D. J. Weitzner, Privacy-Preserving Automated Exposure Notification, Cryptology ePrint Archive, Report 2020/863, 2020.

Stefan Glock

On

Hypergraph matchings with(out) conflicts

April 6th, 2022

Abstract. A celebrated theorem of Pippenger, and Frankl and Rödl, states that every regular hypergraph with small codegrees has an almost-perfect matching. We develop an extension of this that takes `conflicts' into account, where conflicts are encoded via a collection of subsets of edges. We say that a matching is conflict-free if no element of the given collection is a subset of the matching. Under natural assumptions on the given collection of conflicts, we prove that a conflict-free almost-perfect matching exists. This has many applications, one of which yields new asymptotic results for so-called `high-girth' Steiner systems. Our main tool is a random greedy algorithm which we call the `conflict-free matching process'. In the talk, I will motivate this concept of conflict-free matchings by illustrating that several earlier results, seemingly unrelated, can be derived from it, as well as presenting some new applications. I will explain how the algorithm works and sketch how to analyse it.

This is joint work with Felix Joos, Jaehoon Kim, Marcus Kühn and Lyuben Lichev.

Ron Aharoni

On

Connectivity and colorings

April 13th, 2022

Abstract. This is mostly a survey talk, on the connection between a parameter called "connectivity of a complex" (a complex is a closed down hypergraph; its edges are called "simplices") and partitions of the ground set into simplices. The best known such partitions are colorings of graphs, in which the complex consists of the independent sets in a given graph. The basic theorem in the area is (roughly) that the coloring number of a complex (minimal number of simplices needed to cover the ground set)  is at most the number of vertices divided by the connectivity. "Unroughing": 

 

$\lceil \max_S |S| / \mathrm{conn}(C|_S) \rceil$, where the maximum is taken over all sets $S$ of vertices and $C|_S$ denotes the complex restricted to $S$.

Ronen Hershman

On

Cognitive Control and Pupil Dilation

April 26th, 2022

Abstract. Pupil dilation is an unobtrusive and (relatively) high temporal resolution measure that is thought to change as a result of task difficulty. By using CHAP, open-source software (written in Matlab) that we developed for processing and analyzing pupillometry data, and by using our noise-based algorithm for eye-blink detection, as well as temporal analysis, we found that pupillometry seems to be a more sensitive measurement of cognitive conflicts in cognitive control tasks. Specifically, we found that measuring changes in pupil size could reveal both task and information conflicts when manual responses were required. In further studies we found that task conflict depends on the readability of a stimulus, in addition to its meaningfulness, so the more a stimulus is non-readable and meaningless, the less task conflict there will be. Recently, task conflict has attracted attention, and researchers tend to suggest that it is a major component of cognitive control interference. Hence, researchers who run cognitive control tasks, and specifically researchers who want to eliminate task conflict from these tasks, should consider using meaningless stimuli. Our findings provide evidence for the temporal development of the various conflicts involved in cognitive control tasks and improve our understanding of the temporal development of these conflicts.

Merav Allouche

On

Assisting Children with Special Needs in
Their Daily Interaction with Other People

April 27th, 2022

Abstract. Children and adults with special needs may find it difficult to recognize danger and threats as well as socially complex situations. They are thus at risk of becoming victims of exploitation and violence. In addition, they may find themselves unintentionally insulting their friends and relatives. The ultimate aim of this thesis is to help children with special needs better understand the environment around them and interact effectively with other people. In order to accomplish this, we propose developing an assisting agent to help people with special needs  recognize risky or insulting situations. The assisting agent will detect these situations,  signal the user accordingly (by text, speech, or other forms of signaling), and suggest an appropriate response.

Revital Marbel

On

Dynamic Network Formation for FSO Satellite Communication

April 27th, 2022

Abstract. Satellite network optimization is essential, particularly since the cost of manufacturing, launching and maintaining each satellite is significant. Moreover, classical communication optimization methods, such as Minimum Spanning Tree, cannot be applied directly in dynamic scenarios where the satellite constellation is constantly changing. Motivated by the rapid growth of the Starlink constellation that, as of Q4 2021, consists of over 1,600 operational LEO satellites with thousands more expected in the coming years, this paper focuses on the problem of constructing an optimal inter-satellite (laser) communication network. More formally, given a large set of LEO satellites, each equipped with a fixed number of laser links, we direct each laser module on each satellite such that the underlying laser network will be optimal with respect to a given objective function and communication demand. In this work, we present a novel heuristic to create an optimal dynamic optical communication network using an Ant Colony algorithm. This method takes into account both the time it takes to establish an optical link (acquisition time) and the bounded number of communication links, as each satellite has a fixed number of optical communication modules installed. Based on a large number of simulations, we conclude that, although the underlying bounded-degree-spanning-tree problem is NP-hard (even in static cases), the suggested ant-colony heuristic is able to compute cost-efficient solutions in semi-real-time.

Rony Ronen

On

Optimization of ground stations allocation in networks of Nanosatellites

May 11th, 2022

Abstract. The rapid development of small electronic components has accelerated the development of a variety of advanced technologies. These developments are now being used in the field of space technology and, in particular, in the development of nanosatellites. In addition, the entry of civil industry into the space market has significantly reduced the cost of developing and launching nanosatellites into orbit. Nanosatellites are now preferred by academia and industry for testing innovative technologies and conducting academic research. Ariel University is a partner in nanosatellite research and development and has already launched two nanosatellites into orbit. These satellites produce massive amounts of data that need to be transmitted to the satellite operator via terrestrial ground stations. However, environmental dynamics, weather conditions, and other constraints pose a fascinating challenge to the efficient use of ground stations to receive satellite messages. Therefore, we would like to present a resource allocation optimization problem to maximize the expectation of the total number of unique messages received by ground stations.

 

Formally, given a set (S) of n satellites, each of which transmits a unique message on a unique frequency (simultaneously), and a set (R) of k receivers, each of which can only listen to a single frequency, the objective is to maximize the expectation (E) of the total number of unique messages received by all k receivers. We will present a model of the problem and several approaches to solving it.
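As a toy illustration of this objective (with hypothetical per-receiver reception probabilities, not the talk's actual model or algorithms), a simple greedy baseline assigns receivers one at a time to the frequency with the largest marginal gain in expected unique messages:

```python
def greedy_allocation(p):
    """p[r][s] = assumed probability that receiver r decodes satellite
    s's message if tuned to s's frequency.  With independent receptions,
    E[#unique messages] = sum_s (1 - prod_{r assigned to s} (1 - p[r][s])).
    Greedily assign each receiver to the satellite whose term grows most."""
    n = len(p[0])            # number of satellites
    miss = [1.0] * n         # prod of (1 - p) over receivers assigned to s
    assignment = []
    for probs in p:
        # Marginal gain of adding this receiver to satellite s.
        gains = [miss[s] * probs[s] for s in range(n)]
        best = max(range(n), key=gains.__getitem__)
        miss[best] *= 1.0 - probs[best]
        assignment.append(best)
    expected = sum(1.0 - m for m in miss)
    return assignment, expected
```

Because the expectation is a monotone submodular function of the receiver-to-frequency assignment, such greedy baselines are a natural starting point before the more refined approaches the talk presents.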

Yitzhak Spielberg

On

The Concept of criticality in artificial intelligence

May 11th, 2022

Abstract. When people operate in the world or engage in a learning process they encounter different situations. There are situations where the action choice has only a minor influence on their lives, for example, when a person needs to choose between multiple dinner variants, and there are other situations where this influence is much larger (critical situations), for example, when a person needs to choose his life partner. Naturally, people adjust their physical and mental states to the type of situation they encounter, in accordance with common sense, which says that critical situations require more intense thought and, in general, a higher investment of resources. Clearly, AI agents during training or upon deployment also encounter states with different criticality levels. Thus, the central theme of this thesis is how AI agents can adjust their mode of learning/operation to the criticality levels of the states they visit.

In this thesis, I introduce a novel concept in Artificial Intelligence: the concept of criticality. Since criticality is a rather abstract notion, instead of providing a definition that holds for all AI domains, this thesis suggests different definitions of criticality for different AI domains. In addition to introducing and discussing how the novel concept can manifest itself in a few chosen AI domains, the major focus of the thesis is to show that it can be utilized to help AI agents learn more quickly and operate more safely and efficiently. For this purpose the thesis presents three applications of criticality: two in the context of Reinforcement Learning (chapters 2 and 3) and one in the context of AI Safety (chapter 4).

Whereas the first four chapters discuss applications of the novel concept in multiple domains of AI, chapter 5 does not deal with criticality in the narrow sense of the term but revolves around a metric closely related to criticality, namely, task difficulty. In the context of AI in education, the main question of that chapter is whether revealing the task difficulty improves the student's learning experience. Task difficulty is related to criticality because it calls for an adjustment of the mode of operation: a difficult task requires the student to exhibit maximal mental and intellectual effort. Therefore, in the context of education, task difficulty can be considered an aspect of criticality.

Richard Mycroft

On

Spanning trees and tree-like graphs in dense directed graphs

May 18th, 2022

Abstract. There are numerous recent developments and enticing open problems concerning the question of which oriented trees (or other tree-like structures) can be found in dense directed graphs or tournaments. I will give an overview of some of these questions, before describing recent work with Tassio Naia, in which we show that every oriented tree whose maximum degree is not too large is contained in every directed graph of large minimum semidegree on the same number of vertices. Further development of our methods yields similar results for a wide range of `tree-like' structures.

Andrew Treglown

On

Tilings in graphs

May 25th, 2022

Abstract. Given two graphs H and G, an H-tiling in G is a collection of vertex disjoint copies of H in G. Thus, an H-tiling is simply a generalisation of the notion of a matching (which corresponds to the case when H is an edge). An H-tiling in G is perfect if every vertex of G is covered by the H-tiling. Over the last 60 years there have been numerous results on perfect H-tilings. In this talk we give a high-level overview of some of the key ideas that permeate the topic. In particular, we will discuss some typical behaviour of extremal examples, and also some complexity questions.

Yuval Khachatryan

On

Cyclic descent extensions and permutations of fixed length

June 1st, 2022

Abstract. Cyclic descents were studied by Klyachko, Cellini and many others.
A major problem in this field is whether a given set of permutations admits a cyclic descent extension (CDE).
Recently, an algebraic characterization of Schur-positive sets which admit a CDE was given by Adin, Hegedus and Roichman.

In this talk, we study an important Schur-positive subset of the symmetric group - the set of all permutations with a fixed Coxeter length.
We obtain sufficient conditions on the positive integer $m$, under which a CDE exists for permutations of length $m$ in $S_n$, for almost all $n$.
The results involve surprising connections to generalized pentagonal numbers.

Based on results from a PhD thesis prepared under the supervision of Yuval Roichman and Ron Adin.

Moti Geva

On

Ensuring Quality of Service During Bandwidth DDoS Attacks  

Abstract. Distributed Denial of Service (DDoS) attacks are aimed at exhausting various resources of victim hosts, thereby preventing legitimate usage of their computational capabilities. DDoS attacks are often launched by organized crime, hacktivists, or other (un)usual suspects, making this type of cyber crime a major concern for many organizations around the world. The Internet is a best-effort packet-switching network. Everyday usage shows that the Internet is able to work properly, and successfully deliver information across the globe in an instant. However, bandwidth distributed denial of service (BW-DDoS) attacks bring to light the limitations of best-effort networks. In this talk we present our research on the mitigation of BW-DDoS attacks. BW-DDoS is aimed at exhausting network resources, commonly routers' queue space, and preventing access to the victim server. BW-DDoS attacks have a devastating effect on protocols employing congestion control, as their performance is sharply degraded as a result of losses and delays. The BW-DDoS mitigation techniques introduced in this talk refrain from making changes to Internet infrastructure equipment, i.e. routers; hence they focus on adjusting end-host behavior and the configuration of routers.

Liat Cohen

On

Algorithms for Approximation of Random Variables  

July 13th, 2022

Abstract. In planning algorithms, scheduling, and other domains, there is often a need to run long computations on random variables and to store intermediate results. A source of computational complexity often neglected in the analysis of such algorithms is that the support (the number of non-zero probability values) of the random variables needed as intermediate results may grow exponentially along the computation. To avoid exponential memory and time complexities, we suggest shrinking the support of those variables in ways that yield good approximations. In this work, we introduce algorithms and methods for approximating random variables in which the support size can be contained at the cost of an error.
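One simple flavor of such support reduction, shown purely for illustration and not necessarily among the talk's algorithms, merges the lowest-probability atom into its nearest remaining neighbor until the support fits a budget:

```python
def shrink_support(pmf, max_support):
    """pmf: dict mapping value -> probability (a discrete random variable).
    Repeatedly move the mass of the smallest-probability atom onto the
    nearest remaining value, until at most max_support atoms remain.
    Total probability mass is preserved; each merge perturbs the
    distribution by shifting a small amount of mass a short distance."""
    pmf = dict(pmf)
    while len(pmf) > max_support:
        v = min(pmf, key=pmf.get)                 # smallest atom
        mass = pmf.pop(v)
        nearest = min(pmf, key=lambda u: abs(u - v))
        pmf[nearest] += mass
    return pmf
```

Different applications call for different merge rules (e.g. bounding the error in the CDF rather than in individual atoms), which is exactly the trade-off between accuracy and run time that the abstract alludes to.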

We introduce several approximation algorithms that differ in run-time performance, accuracy, and the type of approximation needed for the application of interest.

As a motivating example that demonstrates the importance of the problem and also accentuates the NP-hardness of the problem, we study the use-case of estimating the probability that a plan makespan meets a given deadline; we call this problem the Deadline Problem.

bottom of page