Past Seminars

2024

Approximate counting of permutation patterns
Wed, November 20
Omri Ben-Eliezer (Technion)

A copy of a permutation pattern (say, 132) in a sequence of numbers is any subsequence whose values have the same relative order as in the pattern. (Say, for 132, the first element is smallest, the second is largest, and the third is in-between.)
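
To make the definition concrete, here is a minimal brute-force counter, a sketch for illustration only (the talk's algorithm is far more efficient): it ranks the values of every length-k subsequence and compares them against the pattern, assuming distinct values.

```python
from itertools import combinations

def count_pattern(seq, pattern):
    """Count subsequences of seq whose relative order matches pattern.

    Brute force, O(n^k) for a length-k pattern; assumes distinct values.
    """
    k = len(pattern)
    count = 0
    for idxs in combinations(range(len(seq)), k):
        values = [seq[i] for i in idxs]
        # Rank each value within the subsequence (1 = smallest).
        ranks = [sorted(values).index(v) + 1 for v in values]
        if ranks == list(pattern):
            count += 1
    return count

# The subsequence (2, 4, 3) of the sequence below is a copy of 132.
print(count_pattern([2, 4, 3, 1], (1, 3, 2)))  # -> 1
```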

Counting permutation patterns has a surprisingly rich set of connections and applications in ranking, statistics, combinatorics, fine-grained complexity, and parametrized complexity, especially for patterns of fixed small length k. Here are three examples:
(i) Counting 4-cycles in sparse graphs is equivalent to counting 4-patterns [Dudek and Gawrychowski, 2020]. 
(ii) Many fundamental tests in nonparametric statistics amount to counting k-patterns for k up to 5. 
(iii) The study of twin-width in parametrized complexity originated in a breakthrough FPT algorithm of Guillemot and Marx [2013] for permutation pattern detection, which runs in linear time when k is fixed.

In this talk I will describe an algorithm for approximately counting all k-patterns for k up to 5 in near-linear time, deterministically, to within a (1+ε)-multiplicative error. This algorithm gives the first known (conditional) separation between exact and approximate counting in this domain. Interestingly, our algorithm leverages sublinear techniques from distribution testing, which to our knowledge have not been used in a pattern counting context before.

Joint work with Slobodan Mitrović (UC Davis) and Pranjal Srivastava (MIT).

Semester-Start Social
Wed, November 13
Theory Group Members

Following the longstanding tradition (started last semester), we will start the semester with a social theory lunch, where students, postdocs, and faculty members in the theory group get to chat over lunch and then briefly introduce themselves and their interests to the group in a round table. (A gong may or may not be used to enforce time limits 🙂)

When: Wednesday 13/11, 13:30-14:30. (Note unusual time.)
Where: Location to be announced.
What: Food (provided by the organizers) and mingling (provided by you).
Who: You, and your friends interested in theory.

Chag Sameach, and wishing us all a fruitful and quiet semester,
David and Omri (the organizers)

P.S. For new joiners: the theory seminar (a.k.a. theory lunch) is a weekly get-together on Wednesdays at 13:00, where the theory group members get lunch and attend a talk on recent results.

A stronger bound for linear 3-LCC
Wed, August 21
Tal Yankovitz (Tel Aviv University)
A q-locally correctable code (LCC) C: {0,1}^k → {0,1}^n is a code in which it is possible to correct every bit of a (not too) corrupted codeword by making at most q queries to the word. The cases in which q is constant are of special interest, and so are the cases in which C is linear.
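
As a toy illustration of local correction (using the classic 2-query Hadamard code rather than the 3-query linear codes from the talk), one can recover any codeword bit from a mildly corrupted word with two random queries; a minimal sketch:

```python
import random

def hadamard_encode(msg_bits):
    """Hadamard code: one parity <m, a> (mod 2) per vector a in {0,1}^k."""
    k = len(msg_bits)
    return [sum(m & ((a >> i) & 1) for i, m in enumerate(msg_bits)) % 2
            for a in range(2 ** k)]

def locally_correct(word, a, k):
    """2-query local correction of position a.

    By linearity, C(m)[a] = C(m)[a XOR r] + C(m)[r] (mod 2) for any r.
    If word is delta-close to a codeword, each query hits a corruption
    with probability at most delta, so this is correct w.p. >= 1 - 2*delta.
    """
    r = random.randrange(2 ** k)
    return word[a ^ r] ^ word[r]

word = hadamard_encode([1, 0, 1])
word[5] ^= 1                                   # corrupt one position
votes = [locally_correct(word, 5, 3) for _ in range(99)]
print(max(set(votes), key=votes.count))        # the original bit, w.h.p.
```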

In a breakthrough result, Kothari and Manohar (STOC 2024) showed that for linear 3-LCCs n = 2^{Ω(k^{1/8})}. In this work we prove that n = 2^{Ω(k^{1/4})}. As Reed-Muller codes yield 3-LCCs with n = 2^{O(k^{1/2})}, this brings us closer to closing the gap. Moreover, in the special case of design-LCCs (into which Reed-Muller codes fall), the bound we get is n = 2^{Ω(k^{1/3})}.

On Sparse Partitions
Wed, July 24
Arnold Filtser (Bar-Ilan university)
A partition P of a metric space (X, d_X) is (σ, τ, Δ)-sparse if each cluster has diameter at most Δ, and every ball of radius Δ/σ intersects at most τ clusters. In this talk, we will explore the construction and different applications of sparse partitions in their various forms over the years. As time allows, we will discuss applications to: Universal TSP, Steiner point removal, universal Steiner tree, and facility location.
Improved Approximations for Vector Bin Packing via Iterative Randomized Rounding
Wed, July 17
Ariel Kulik (Technion)

The talk will focus on the d-dimensional vector bin packing problem. The input for the problem is a set of items, each associated with a d-dimensional weight vector. The objective is to partition the items into a minimal number of bins, such that the total weight of items in a bin is at most one in every dimension. The problem is a basic generalization of the classic Bin Packing problem and has various applications, such as virtual machine assignment in cloud environments.
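
For intuition about the problem statement (a simple baseline, not the technique from the talk), here is a minimal first-fit heuristic that places each item into the first bin where it fits in every dimension:

```python
def first_fit_vbp(items):
    """First-fit heuristic for d-dimensional vector bin packing.

    items: list of d-dimensional weight vectors with entries in [0, 1].
    Returns bins (lists of item indices) whose load is at most 1 in
    every dimension.
    """
    bins, loads = [], []
    for idx, w in enumerate(items):
        for b, load in enumerate(loads):
            if all(l + x <= 1.0 for l, x in zip(load, w)):  # fits in b?
                bins[b].append(idx)
                loads[b] = [l + x for l, x in zip(load, w)]
                break
        else:                        # no open bin fits: open a new one
            bins.append([idx])
            loads.append(list(w))
    return bins

items = [(0.6, 0.2), (0.5, 0.5), (0.3, 0.7), (0.4, 0.4)]
print(first_fit_vbp(items))  # -> [[0, 2], [1, 3]]
```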

In the talk, we will show how iterative randomized rounding, a simple and intuitive algorithmic technique, can be used to attain improved approximation for the d-dimensional vector bin packing problem and other related problems. The technique works in iterations, where each iteration solves a configuration LP relaxation for the residual instance (from previous iterations) and samples a small number of configurations based on the solution for the configuration LP. The main challenge in the analysis is to show that the value of the configuration LP decreases over time.

Learning probability distributions; what can, what can't be done.
Wed, July 10
Shai Ben-David (University of Waterloo & Vector Institute for AI)

Characterizing learnability by a combinatorial dimension is a hallmark of machine learning theory.
Starting from the fundamental characterization of binary classification PAC by the VC-dimension, through the characterization of online mistake bound learnability by the Littlestone dimension, all the way to recent papers characterizing multi-class learning and differential privacy.

However, for some basic learning setup no such characterizations have been provided.

I will focus on one of these setups — unsupervised learning probability distributions. 

I will start by showing that with no prior assumptions density estimation cannot have success guarantees.

Next, I will describe a positive learnability result — settling the question of the sample complexity of learning mixtures of Gaussians. 

Finally, I will turn to the question of characterizing learnable classes of distributions.

I will show that there can be no combinatorial characterization of the family of learnable classes of distributions, as well as similar impossibility results for other learnability setups (some of these going back to work of Giora Benedek and Alon Itai). This settles open problems that have been open for more than 30 years.

The first part is based on joint work with Hasan Ashiani, Nick Harvey, Chris Law, Abas Merhabian and Yaniv Plan and the second part on work with my student Tosca Lechner.

Fast Approximate Counting of Cycles
Wed, June 26
Tomer Even (Technion)
We consider the problem of approximate counting of triangles and longer fixed length cycles in undirected and directed graphs. We provide an algorithm which is faster as $t$, the number of copies of the searched subgraph, increases. Our running time, which significantly improves upon the state of the art (Tětek [ICALP’22]), is the same as that of multiplying an $n\times n/t$ matrix by an $n/t \times n$ matrix, up to  polylogarithmic factors. Finally, we show that under popular fine-grained hypotheses, this running time is optimal.
The talk is based on a joint work with Keren Censor-Hillel and Virginia Vassilevska Williams. To appear in ICALP 2024.
Approximate Matrix Multiplication via Spherical Convolutions
Wed, June 19
Omri Weinstein (Hebrew U)

We develop a new framework, Polyform, for fast approximate matrix multiplication (AMM), through sums of sparse polynomial multiplications, using (variants of) FFT. Using this framework, we obtain new data-dependent speed-accuracy tradeoffs, which often improve on the worst-case accuracy of randomized sketching algorithms. Meanwhile, Polyform can be viewed as a cheap alternative bilinear operator to matrix multiplication in Deep Neural Networks, which is our motivating application.

Our algorithm involves unexpected connections to Additive Combinatorics and Spherical Harmonics. The former relates the error of Polyform to the purely existential problem of finding sumsets which minimize intersections with arithmetic progressions. In the latter, we show that optimizing the polynomial’s coefficients leads to a low-rank SDP problem generalizing Spherical Codes, and provide a novel projection-free approximation algorithm for this problem, running in close to input-sparsity time ~O(nnz(A,B)). Meanwhile, our experiments demonstrate that, when using SGD to optimize the coefficients of Polyform in state-of-the-art Transformer models (ViT), our algorithm provides a significant reduction in FLOPs (2x-5x) compared to vanilla MM, with little to no accuracy loss.

Explicit Codes for Poly-Size Circuits and Functions that are Hard to Sample on Low Entropy Distributions
Wed, June 5
Ronen Shaltiel (Haifa U)

In the talk I will survey a research direction which combines coding-theory and computational complexity. This direction was introduced by Lipton, and refined by Guruswami and Smith. The goal is to construct error correcting codes which correct errors that are introduced by computationally bounded channels. This is in contrast to Hamming’s standard model which assumes a bound on the fraction of bits that a channel may flip, but allows the channel to be computationally unbounded.

In recent work with Jad Silbak we give explicit constructions of codes with a rate that beats the best possible rate of codes in Hamming’s standard model and matches the capacity of Shannon’s binary symmetric channel. These results are achieved against channels with small space, and channels that can be implemented by poly-size circuits (under a suitable hardness assumption).

My plan is to explain the model, survey the new results, and try to show some of the components that are used in these works.

Joint work with Jad Silbak.

Social Event (Food with no talk)
Wed, May 29
Theory Members

This week’s seminar will take on a more social aspect, and members will be encouraged to introduce themselves and mingle.

Toward Better Depth Lower Bounds: A KRW-like theorem for Strong Composition
Wed, April 3
Or Meir (Haifa University)

One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits. Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995) suggested approaching this problem by proving that depth complexity of a composition of two functions is roughly the sum of their individual depth complexities. They showed that the validity of this conjecture would imply the desired lower bounds.

The intuition that underlies the KRW conjecture is that composition should behave like a “direct-sum problem”, in a certain sense, and therefore the depth complexity of the composition should be the sum of the individual depth complexities. Nevertheless, there are two obstacles toward turning this intuition into a proof: first, we do not know how to prove that the composition must behave like a direct-sum problem; second, we do not know how to prove that the complexity of the latter direct-sum problem is indeed the sum of the individual complexities.

In this talk, we will describe a new work that tackles the second obstacle. We will consider a notion of “strong composition” that is forced to behave like a direct-sum problem, and see that a variant of the KRW conjecture holds for this notion. This result demonstrates that the first obstacle above is the crucial barrier toward resolving the KRW conjecture. Along the way, we will discuss some general techniques that might be of independent interest.

Exploring the Generalization Ability of (Convex) Optimization Algorithms
Wed, March 27
Tomer Koren (Tel-Aviv university)

In machine learning, there has been considerable interest over the past decade in understanding the ability of optimization algorithms to generalize—namely, to produce solutions (models) that extend well to unseen data—particularly in the context of overparameterized problems. I will survey several recent theoretical studies that explore generalization in classical convex optimization, reveal intriguing behavior of common optimization methods, and shed some light on the concept of generalization in high dimension.

Deterministic Online embedding of metrics, ***Unusual time: 13:30***
Wed, March 20
Ilan Newman (Haifa University)

A finite metric space $(X,d)$ on a set of points $X$ is just the shortest path metric $d$ on a positively weighted graph $G=(X,E)$. In the online setting, the vertices of the input finite metric space $(X,d)$ are exposed one by one, together with their distances $d(\cdot,\cdot)$ to the previously exposed vertices. The goal is to embed (map) $X$ into a given host metric space $(H,d_H)$ (finite or not) so as to distort the distances as little as possible (the distortion is the worst-case ratio by which a distance is expanded, assuming no distance is ever contracted).
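
A small sketch of the distortion computation for a concrete offline embedding (all names hypothetical): lay the 4-cycle metric out on the real line and measure the worst expansion after rescaling so that no pair contracts.

```python
from itertools import combinations

def distortion(points, d, f, d_host):
    """Distortion of the embedding f: rescale so that no pair is
    contracted, then report the worst-case expansion ratio."""
    ratios = [d_host(f(x), f(y)) / d(x, y)
              for x, y in combinations(points, 2)]
    return max(ratios) / min(ratios)

# Shortest-path metric of the 4-cycle C4, embedded into the real line.
cycle = [0, 1, 2, 3]
d_c4 = lambda u, v: min((u - v) % 4, (v - u) % 4)
line = lambda u: float(u)          # cut the cycle open onto a path
d_line = lambda a, b: abs(a - b)
print(distortion(cycle, d_c4, line, d_line))  # 3.0: pair (0, 3) stretches
```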

I will start with a short survey of the main existing results on offline embedding into $\ell_1, \ell_2, \ell_\infty$ spaces. Then I will present some results on online embedding: mainly lower bounds (on the best distortion) for small-dimensional spaces, and some upper bounds for 2-dimensional Euclidean space.

As an intriguing question: the “rigid” $K_5$ metric space is a metric space on $G=K_5$, in which each edge should be thought of as a unit interval $[0,1]$. The points are the $5$ fixed vertices of $K_5$ (which are exposed first), in addition to $n$ points that are placed arbitrarily anywhere on the edges and exposed one by one. It is easy to show that an offline embedding into the $2$-dimensional Euclidean plane results in distortion $\Omega(n)$. What can be achieved in the online case? It was “believed” that an exponential distortion could be proven. We show that the distortion is bounded by $O(n^2)$.
———-

The talk is based on joint work with Noam Licht and Yuri Rabinovich

A unified characterization of private learning
Wed, March 13
Hilla Schefler (Technion)

Differential Privacy (DP) is a mathematical framework for ensuring the privacy of individuals in a dataset. Roughly speaking, it guarantees that privacy is protected in data analysis by ensuring that the output of an analysis does not reveal sensitive information about any specific individual, regardless of whether their data is included in the dataset or not.

This talk presents a unified framework for characterizing both pure and approximate differentially private learnability under the PAC model. The framework uses the language of graph theory: for a concept class H, we define the contradiction graph G of H. Its vertices are realizable datasets, and two datasets S, S′ are connected by an edge if they contradict each other (i.e., there is a point x that is labeled differently in S and S′). Our main finding is that the combinatorial structure of G is deeply related to learning H under DP. Learning H under pure DP is captured by the fractional clique number of G. Learning H under approximate DP is captured by the clique number of G. Consequently, we identify graph-theoretic dimensions that characterize DP learnability: the clique dimension and fractional clique dimension.
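
A minimal sketch of the contradiction graph for a tiny concept class (thresholds over a four-point domain; all names here are hypothetical, just to make the definition concrete):

```python
from itertools import combinations, product

DOMAIN = range(4)
# Concept class H: threshold functions h_t(x) = 1 iff x >= t.
H = [lambda x, t=t: int(x >= t) for t in range(5)]

def realizable(S):
    """A dataset S (tuples (x, label)) is realizable if some h in H
    labels all of its points correctly."""
    return any(all(h(x) == y for x, y in S) for h in H)

def contradict(S1, S2):
    """S1 and S2 contradict if some point x is labeled differently."""
    return any(x1 == x2 and y1 != y2 for x1, y1 in S1 for x2, y2 in S2)

m = 2  # dataset size
vertices = [S for S in combinations(product(DOMAIN, (0, 1)), m)
            if realizable(S)]
edges = [(S1, S2) for S1, S2 in combinations(vertices, 2)
         if contradict(S1, S2)]
print(len(vertices), "vertices,", len(edges), "edges")
```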

Faster matrix game solvers via ball oracle acceleration
Wed, March 6
Yair Carmon (Tel-Aviv university)

We design a new stochastic first-order algorithm for approximately solving matrix games as well as the more general problem of minimizing the maximum of smooth convex functions. Our central tool is ball oracle acceleration: a technique for minimizing any convex function with a small number of calls to a ball oracle that minimizes the same function restricted to a small ball around the query point. To design an efficient ball oracle for our problems of interest we leverage stochastic gradient descent, softmax smoothing, rejection sampling, and sketching. For a large number of general smooth functions, our algorithm obtains optimal gradient query complexity. For matrix games, it improves over all previous runtime bounds in a range of problem parameters.

Based on joint work with Arun Jambulapati, Yujia Jin, and Aaron Sidford.

Distance Sensitivity Oracles
Wed, February 28
Sarel Cohen (Tel-Aviv Academic College)

An f-edge fault-tolerant distance sensitivity oracle (f-DSO) is a data structure that, when queried with two vertices (s, t) and a set F of at most f edges of a graph G with n vertices, returns an estimate d̃(s,t,F) of the distance d(s,t,F) from s to t in G − F. The oracle has stretch α if the estimate satisfies d(s,t,F) ≤ d̃(s,t,F) ≤ α · d(s,t,F). Over the last two decades, extensive research has focused on developing efficient f-DSOs, aiming to optimize preprocessing time, space consumption, and query time, as well as to improve the stretch (approximation guarantee). Efforts have also been made to derandomize the construction and query algorithms of these data structures. While multiple constructions of f-DSOs are already known, surprisingly, there is still no optimal data structure that supports multiple failures. In this talk, we will cover several recent f-DSO data structures and discuss open questions in this field.
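
To make the interface concrete, here is a naive baseline sketch (hypothetical, with stretch 1): no preprocessing at all, and every query simply runs Dijkstra on G − F. Real f-DSOs exist precisely to avoid this expensive query time.

```python
import heapq

class NaiveDSO:
    """Trivial f-DSO: no preprocessing; a query runs Dijkstra on G - F.

    Exact answers (stretch 1), but slow queries -- real f-DSOs trade
    preprocessing time and space for fast queries.
    """

    def __init__(self, n, weighted_edges):
        self.n = n
        self.adj = [[] for _ in range(n)]
        for u, v, w in weighted_edges:
            self.adj[u].append((v, w))
            self.adj[v].append((u, w))

    def query(self, s, t, F):
        failed = {frozenset(e) for e in F}
        dist = [float("inf")] * self.n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist[u]:
                continue
            for v, w in self.adj[u]:
                if frozenset((u, v)) in failed:   # skip failed edges
                    continue
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return float("inf")

dso = NaiveDSO(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 5)])
print(dso.query(0, 3, F=[(1, 2)]))  # -> 5 (the direct edge survives)
```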

IOPs with Inverse Polynomial Soundness Error
Wed, February 21
Gal Arnon (Weizmann institute)

We show that every language in NP has an Interactive Oracle Proof (IOP) with inverse polynomial soundness error and small query complexity. This achieves parameters that surpass all previously known PCPs and IOPs. Specifically, we construct an IOP with perfect completeness, soundness error 1/n, round complexity O(log log n), proof length poly(n) over an alphabet of size O(n), and query complexity O(log log n). This is a step forward in the quest to establish the sliding-scale conjecture for IOPs (which would additionally require query complexity O(1)).

Our main technical contribution is a high-soundness, small-query proximity test for the Reed–Solomon code. We construct an IOP of proximity for Reed–Solomon codes, over a field F with evaluation domain L and degree d, with perfect completeness, soundness error (roughly) max{1−δ, O(ρ^{1/4})} for δ-far functions, round complexity O(log log d), proof length O(|L|/ρ) over F, and query complexity O(log log d); here ρ = (d+1)/|L| is the code rate. En route, we obtain a new high-soundness proximity test for bivariate Reed–Muller codes.

The IOP for NP is then obtained via a high-soundness reduction from NP to Reed–Solomon proximity testing with rate ρ = 1/poly(n) and distance δ = 1 − 1/poly(n) (and applying our proximity test). Our constructions are direct and efficient, and hold the potential for practical realizations that would improve the state-of-the-art in real-world applications of IOPs.

Based on joint work with Alessandro Chiesa and Eylon Yogev

Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension
Wed, February 14
Idan Mehalel (Technion)

Suppose that n forecasting experts (or functions) provide daily rain/no-rain predictions, and the best among them is mistaken on at most k days. For how many days will a person who may observe the predictions, but has no prior weather-forecasting knowledge, mispredict?
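
For background, a minimal sketch of the classical deterministic approach (weighted majority; not the optimal randomized learner from the talk): predict with the weighted majority vote and halve the weight of every expert that errs, which yields O(k + log n) mistakes.

```python
def weighted_majority(predictions, outcomes):
    """Deterministic weighted majority (Littlestone-Warmuth style).

    predictions[t][i]: expert i's 0/1 forecast on day t.
    If the best expert errs at most k times, the number of mistakes is
    O(k + log n).
    """
    n = len(predictions[0])
    w = [1.0] * n
    mistakes = 0
    for preds, outcome in zip(predictions, outcomes):
        vote_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = int(vote_for_1 >= sum(w) / 2)
        mistakes += int(guess != outcome)
        # Halve the weight of every expert that was wrong today.
        w = [wi * (0.5 if p != outcome else 1.0)
             for wi, p in zip(w, preds)]
    return mistakes

preds = [(0, 1, 1), (1, 1, 0), (0, 0, 1)]
outcomes = (1, 1, 0)
print(weighted_majority(preds, outcomes))  # -> 0 on this toy stream
```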

In this talk, we will discuss how such classical problems can be reduced to calculating the (average) depth of binary trees, by using newly introduced complexity measures (aka dimensions) of the set of experts/functions. All of those measures are variations of the classical Littlestone dimension (Littlestone ’88).

For the forecasting problem outlined above, Cesa-Bianchi, Freund, Helmbold, and Warmuth [’93, ’96] provided a nearly optimal bound for deterministic learners, and left the randomized case as an open problem. We resolve this problem by providing an optimal randomized learner, and showing that its expected mistake bound equals half of the deterministic bound of Cesa-Bianchi et al., up to negligible additive terms.

For general (possibly infinite) function classes, we show that the optimal expected regret (= #mistakes − k) when learning a function class with Littlestone dimension d is of order d + √(kd).

Based on a joint work with Yuval Filmus, Steve Hanneke, and Shay Moran.

Algorithmic Cheap Talk
Wed, February 7
Konstantin Zabarnyi (Technion)

The literature on strategic communication originated with the influential cheap talk model, which precedes the Bayesian persuasion model by three decades. This model describes an interaction between two agents: a sender and a receiver. The sender knows some state of the world that the receiver does not know. The sender tries to influence the receiver’s action by communicating a cheap talk message to the receiver regarding the state of the world.

Our paper initiates the algorithmic study of cheap talk in a finite environment (i.e., a finite number of states and receiver’s possible actions). We first prove that approximating the sender-optimal or the social welfare-maximizing cheap talk equilibrium up to a certain additive constant or multiplicative factor is NP-hard. Fortunately, we identify three naturally restricted cases with efficient algorithms for finding a sender-optimal equilibrium. These include a state-independent sender’s utility structure, a constant number of states, or a receiver having only two possible actions.

This is a joint work with Yakov Babichenko, Inbal Talgam-Cohen, and Haifeng Xu

Almost Logarithmic Approximation for Cutwidth and Pathwidth
Wed, January 31
Dor Katzelnick (Technion)

We study several graph layout problems with a min-max objective. Here, given a graph, we wish to find a linear ordering of the vertices that minimizes some worst-case objective over the natural cuts in the ordering, namely the cuts that separate an initial segment of the vertices from the rest. A prototypical problem here is cutwidth, where we want to minimize the maximum number of edges crossing such a cut. The only known algorithm here is by [Leighton-Rao, J. ACM ’99], based on recursively partitioning the graph using balanced cuts. This achieves an O(log^{3/2} n) approximation using the O(log^{1/2} n) approximation of [Arora-Rao-Vazirani, J. ACM ’09] for balanced cuts.
We depart from the above approach and give an improved O(log^{1+o(1)} n) approximation for cutwidth. Our approach also gives a similarly improved O(log^{1+o(1)} n) approximation for finding the pathwidth of a graph. Previously, the best known approximation for pathwidth was O(log^{3/2} n).
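
To pin down the objective, a tiny brute-force sketch (exponential, for illustration only): the cutwidth of an ordering is the largest number of edges crossing any prefix cut, and the graph's cutwidth minimizes this over all orderings.

```python
from itertools import permutations

def cutwidth_of_ordering(order, edges):
    """Largest number of edges crossing a prefix cut of the ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return max(
        sum(1 for u, v in edges
            if min(pos[u], pos[v]) < i <= max(pos[u], pos[v]))
        for i in range(1, len(order))
    )

def cutwidth(vertices, edges):
    """Exact cutwidth by brute force over all n! orderings."""
    return min(cutwidth_of_ordering(p, edges)
               for p in permutations(vertices))

# A 4-cycle has cutwidth 2: every nontrivial prefix cut of any
# ordering is crossed by at least two of its edges.
print(cutwidth([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 2
```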

Talk is based on a joint work with Nikhil Bansal and Roy Schwartz.

The Sample Complexity Of ERMs In Stochastic Convex Optimization
Wed, January 24
Daniel Carmon (Technion)

Stochastic convex optimization is one of the most well-studied models for learning in modern machine learning. Nevertheless, a central fundamental question in this setup remained unresolved:
How many data points must be observed so that any empirical risk minimizer (ERM) shows good performance on the true population?
This question was proposed by Feldman, who proved that Ω(d/ε + 1/ε²) data points are necessary (where d is the dimension and ε > 0 is the accuracy parameter). Proving an ω(d/ε + 1/ε²) lower bound was left as an open problem. In this work we show that in fact Õ(d/ε + 1/ε²) data points are also sufficient. This settles the question and yields a new separation between ERMs and uniform convergence. This sample complexity holds for the classical setup of learning bounded convex Lipschitz functions over the Euclidean unit ball. We further generalize the result and show that a similar upper bound holds for all symmetric convex bodies. The general bound is composed of two terms: (i) a term of the form Õ(d/ε) with an inverse-linear dependence on the accuracy parameter, and (ii) a term that depends on the statistical complexity of the class of linear functions (captured by the Rademacher complexity). The proof builds a mechanism for controlling the behavior of stochastic convex optimization problems.

Online edge coloring
Wed, January 17
David Wajc (Technion)

Vizing’s Theorem provides an algorithm that edge-colors any graph of maximum degree Δ using Δ+1 colors, which is necessary for some graphs, and at most one more than necessary for any graph. In online settings, the trivial greedy algorithm requires 2Δ−1 colors, and Bar-Noy, Motwani and Naor in the early 90s showed that this is best possible, at least in the low-degree regime. In contrast, they conjectured that for graphs of superlogarithmic-in-n maximum degree, much better can be done, and that even (1+o(1))Δ colors suffice online. In this talk I will outline the history of this conjecture and its recent resolution, together with extensions of a flavor resembling classic and recent results on *list* edge-coloring and “local” edge-coloring.
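
For concreteness, a minimal sketch of the greedy baseline mentioned above: give each arriving edge the smallest color free at both endpoints; since each endpoint blocks at most Δ − 1 colors, 2Δ − 1 colors always suffice.

```python
from collections import defaultdict

def greedy_online_edge_coloring(edge_stream):
    """Online greedy edge coloring with at most 2*Delta - 1 colors.

    When edge (u, v) arrives, each endpoint already touches at most
    Delta - 1 colored edges, so some color in {0, ..., 2*Delta - 2}
    is free at both endpoints.
    """
    used = defaultdict(set)          # vertex -> colors on its edges
    coloring = {}
    for u, v in edge_stream:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

print(greedy_online_edge_coloring([(0, 1), (1, 2), (2, 0), (2, 3)]))
# -> {(0, 1): 0, (1, 2): 1, (2, 0): 2, (2, 3): 0}
```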

Talk based in part on joint works with many wonderful and colorful collaborators, including Sayan Bhattacharya, Joakim Blikstad, Ilan R. Cohen, Fabrizio Grandoni, Seffi Naor, Binghui Peng, Amin Saberi, Aravind Srinivasan, Ola Svensson and Radu Vintan.

2023

TBD
Wed, October 25
David Wajc

TBD

TBD
Wed, July 5
Jonathan Mosheiff

TBD

TBD
Wed, June 28
Yuval Rabani

TBD

Incompressibility and Next-Block Pseudoentropy
Wed, June 21
Noam Mazor (Tel-Aviv university)

A distribution is k-incompressible (Yao [FOCS ’82]) if no efficient compression scheme compresses it to less than k bits. While being a natural measure, its relation to other computational analogs of entropy, such as pseudoentropy (Hastad, Impagliazzo, Levin, and Luby [SICOMP ’99]), and to other cryptographic hardness assumptions, was unclear.
We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k−2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP ’13]. We deduce that the existence of a samplable distribution X that is (H(X) + 2)-incompressible implies the existence of one-way functions.

Joint work with Iftach Haitner and Jad Silbak.

Approximate All-Pairs Shortest Paths: Recent Advances and Open Questions
Wed, June 14
Michal Dory (Haifa university)

The All-Pairs Shortest Paths (APSP) problem is one of the most fundamental problems in graph algorithms. It is well-known that APSP can be solved in O(n^3) time in weighted graphs, and in O(n^ω) time in unweighted graphs, where ω < 2.376 is the exponent of matrix multiplication. However, these complexities may be high for large graphs, and it is desirable to find faster algorithms. One approach for obtaining faster algorithms is to compute approximations for APSP, rather than computing the distances exactly. In this talk, I will give an overview of algorithms for approximate APSP, and discuss recent progress and interesting open questions in this area. The talk will overview some classic results based on [Aingworth, Chekuri, Indyk, and Motwani, 1999] and [Dor, Halperin and Zwick, 2000] as well as recent results based on a joint work with Sebastian Forster, Yasamin Nazari and Tijn de Vos.
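
For reference, the classic cubic-time baseline mentioned above, as a short sketch (Floyd-Warshall):

```python
def floyd_warshall(n, weighted_edges):
    """Exact APSP in O(n^3) time: the classic baseline for this talk."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:       # undirected weighted graph
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):                   # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

print(floyd_warshall(3, [(0, 1, 2), (1, 2, 2), (0, 2, 5)]))
# -> [[0, 2, 4], [2, 0, 2], [4, 2, 0]]
```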

List Agreement Testing and High Dimensional Expansion
Wed, June 7
Roy Gotlib (Bar-Ilan university)

One of the key components in PCP constructions are agreement tests.
In agreement testing the tester is given access to subsets of fixed size of some set, each equipped with an assignment.
The tester is then tasked with testing whether these local assignments agree with some global assignment over the entire set.
One natural generalization of this concept is the case where, instead of a single assignment to each local view, the tester is given access to $\ell$ different assignments for every subset.
The tester is then tasked with testing whether there exist $\ell$ global functions that agree with all of the assignments of all of the local views.

In this talk I will present a recent result that shows that if the subsets form a (sufficiently good) high dimensional expander then they support list agreement testing under mild assumptions on the local assignments.

In addition, I will also show that those mild assumptions are necessary for list agreement.

I will not assume any prior knowledge of high dimensional expanders.

Based on a joint work with Tali Kaufman

Almost Chor–Goldreich Sources and Adversarial Random Walks
Wed, May 31
Dean Doron

In this talk we consider the following adversarial, non-Markovian, random walk on “good enough” expanders: starting from some fixed vertex, walk according to the instructions X = X_1,…,X_t, where each X_i is only somewhat close to having a little entropy, conditioned on any prefix. The X_i-s are not independent, meaning that the distribution of the next step depends not only on the walk’s current node, but also on the path it took to get there.
We show that such walks (or certain variants of them) accumulate most of the entropy in X.

We call such X-s “almost Chor–Goldreich (CG) sources”, and our result gives deterministic condensers with constant entropy gap for such sources, which were not known to exist even for standard CG sources, and even for the weaker model of Santha–Vazirani sources.
As a consequence, we can simulate any randomized algorithm with small failure probability using almost CG sources with no multiplicative slowdown. This result extends to randomized protocols as well, and to any setting in which we cannot simply cycle over all seeds and a “one-shot” simulation is needed.

Joint work with Dana Moshkovitz, Justin Oh, and David Zuckerman

A sheaf-theoretic approach to constructing locally testable codes
Wed, May 24
Uriya First

I will discuss a new approach towards constructing good locally testable codes (LTCs) with better qualities than the recent constructions of good LTCs. This approach continues the trend of using high dimensional expanders (HDXs) for constructing LTCs, but introduces a new ingredient: a sheaf on the HDX at hand. We show that if one could find a single example of a sheaved HDX satisfying some local expansion conditions and a cohomological condition, both of which can be checked in finite (constant) time, then this example could be propagated into an infinite family of good LTCs. We also propose a heuristic method for constructing the initial sheaved HDX. The LTCs arising from our framework are 2-query LTCs which also admit a stronger testability property called (T)-testability.
This is a joint work with Tali Kaufman.

CS Colloquia: Testing Quantum Systems in the High-complexity Regime
Wed, May 17
Thomas Vidick

From carefully crafted quantum algorithms to information-theoretic security in cryptography, a quantum computer can achieve impressive feats with no classical analogue. Can their correct realization be verified? When the power of the device greatly surpasses that of the user, computationally as well as cryptographically, what means of control remain available to the user? Recent lines of work in quantum cryptography and complexity develop approaches to this question based on the notion of an interactive proof. Generally speaking, an interactive proof models any interaction whereby a powerful device aims to convince a restricted user of the validity of an agreed-upon statement — such as that the machine generates perfect random numbers or executes a specific quantum algorithm. Two models have emerged in which large-scale verification has been shown possible: either by placing reasonable computational assumptions on the quantum device, or by requiring that it consists of isolated components across which Bell tests can be performed. In the talk I will discuss recent advances on the verification power of interactive proof systems between a quantum device and a classical user, focusing on the certification of quantum randomness from a single device, under cryptographic assumptions.

TBD
Wed, May 10
Amnon Ta-Shma (Tel-Aviv university)

TBD

Algorithms Should Have Bullshit Detectors! (or Polynomial Time Byzantine Agreement with Optimal Resilience)
Wed, February 15
Seth Pettie (University of Michigan)

One thing that distinguishes (theoretical) computer science from other scientific disciplines is its full-throated support of a fundamentally adversarial view of the universe. Malicious adversaries, with unbounded computational advantages, attempt to foil our algorithms at every turn and destroy their quantitative guarantees. However, there is one strange exception to this world view and it is this: the algorithm must accept its input as sacrosanct, and may never simply reject its input as illegitimate. But what if some inputs really are illegitimate? Is building a “bullshit detector” for algorithms a good idea?

To illustrate the power of the Bullshit Detection worldview, we give the first polynomial-time protocol for Byzantine Agreement that is resilient to f<n/3 corruptions against an omniscient, computationally unbounded adversary in an asynchronous message-passing model. (This is the first improvement to Ben-Or and Bracha’s exponential time protocols from the 1980s that are resilient to f<n/3 corruptions.) The key problem is to design a coin-flipping protocol in which corrupted parties (who chose the outcomes of their coins maliciously) are eventually detected via statistical tests.

We will also discuss other algorithmic contexts in which bullshit detection might be useful.

Hitting Minors, Planarization, and Kernelization
Wed, January 25
Michal Wlodarczyk (Ben Gurion University)

The concept of a graph minor is fundamental in topological graph theory. First, I will describe the cornerstones of this theory from the lens of parameterized complexity. Next, I will survey more recent results concerning minor-hitting problems, focusing on three algorithmic paradigms: approximation, kernelization, and parameterized algorithms. Here, an important special case is the Vertex Planarization problem (remove as few vertices as possible to make a given graph planar) – this problem is equivalent to hitting all K_5 and K_{3,3} minors in a given graph. Finally, I will talk about our recent result: an O(1)-approximate kernel for Vertex Planarization, being a combination of approximation and kernelization. This is a joint work with Bart. M. P. Jansen. No prior background on graph minors or kernelization is required.

Verifying The Unseen: Interactive Proofs for Label-Invariant Distribution Properties
Wed, January 18
Tal Herman (Weizmann institute)

Given i.i.d. samples from an unknown distribution over a large domain [N], approximating several basic quantities, including the distribution’s support size, its entropy, and its distance from the uniform distribution, requires Θ(N/log N) samples [Valiant and Valiant, STOC 2011].

Suppose, however, that we can interact with a powerful but untrusted prover, who knows the entire distribution (or a good approximation of it). Can we use such a prover to approximate (or rather, to approximately verify) such statistical quantities more efficiently? We show that this is indeed the case: the support size, the entropy, and the distance from the uniform distribution, can all be approximately verified via a 2-message interactive proof, where the communication complexity, the verifier’s running time, and the sample complexity are Õ(√N). For all these quantities, the sample complexity is tight up to polylog(N) factors (for any interactive proof, regardless of its communication complexity or verification time).

More generally, we give a tolerant interactive proof system with the above sample and communication complexities for verifying a distribution’s proximity to any label-invariant property (any property that is invariant to re-labeling of the elements in the distribution’s support). The verifier’s running time in this more general protocol is also Õ(√N), under a mild assumption about the complexity of deciding, given a compact representation of a distribution, whether it is in the property or far from it.

A Graph Theoretic Approach for Resilient Distributed Algorithms
Thu, January 12
Merav Parter (Weizmann Institute)

Following the immense recent advances in distributed networks, the explosive growth of the Internet, and our increased dependency on these infrastructures, guaranteeing the uninterrupted operation of communication networks has become a major objective in network algorithms. The modern instantiations of distributed networks, such as the Bitcoin network and cloud computing, introduce new security challenges that deserve urgent attention in both theory and practice.

In this talk, I will present a unified framework for obtaining fast, resilient and secure distributed algorithms for fundamental graph problems. Our approach is based on a graph-theoretic perspective in which common notions of resilient requirements are translated into suitably tailored combinatorial graph structures. We will discuss recent developments along the following two lines of research:

– Initiating and establishing the theoretical exploration of security in distributed graph algorithms. Such a notion has been addressed before mainly in the context of secure multi-party computation (MPC). The heart of our approach is to develop new graph theoretical infrastructures to provide graphical secure channels between nodes in a communication network of an arbitrary topology.

– Designing distributed algorithms that can handle various adversarial settings, such as node crashes and Byzantine attacks. We will mainly provide general compilation schemes that are based on exploiting the high connectivity of the graph. Our key focus will be on the efficiency of the resilient algorithms in terms of the number of communication rounds.

Finally, I will highlight future directions that call for strengthening the connections between the areas of fault tolerant network design, distributed graph algorithms and information theoretic security.

Efficient approximation for budgeted matroid independent set, budgeted matching, and budgeted matroid intersection
Wed, January 11
Ilan Doron-Arad (Technion)

Based on joint works with Ariel Kulik and Hadas Shachnai

Abstract: We consider the budgeted matroid independent set problem. The input is a ground set, where each element has a cost and a non-negative profit, along with a matroid over the elements and a budget. The goal is to select a subset of elements which maximizes the total profit subject to the matroid and budget constraints. Several well known special cases, where we have, e.g., a uniform matroid and a budget, or no matroid constraint (i.e., the classic knapsack problem), admit a fully polynomial-time approximation scheme (FPTAS). In contrast, already a slight generalization to the multi-budgeted matroid independent set problem has a PTAS but does not admit an efficient polynomial-time approximation scheme (EPTAS). This implies a PTAS for our problem, which is the best known result prior to our work.

Our main contribution is an EPTAS for the budgeted matroid independent set problem, and a generalization of the technique for obtaining an EPTAS for budgeted matching and budgeted matroid intersection. A key idea of the scheme is to find a representative set for the instance, whose cardinality depends solely on 1/ε, where ε > 0 is the accuracy parameter of the scheme. Our scheme enumerates over subsets of the representative set and extends each subset using Lagrangian relaxation techniques.

For a single matroid, the representative set is identified via a matroid basis minimization, which can be solved by a simple greedy approach. For matching and matroid intersection, matroid basis minimization is used as a baseline for a more sophisticated approach.

Mutual Empowerment between Circuit Obfuscation and Circuit Minimization
Wed, January 4
Ilya Volkovich (Boston College)

We study close connections between Indistinguishability Obfuscation (IO) and the Minimum Circuit Size Problem (MCSP), and argue that algorithms for one of MCSP or IO would empower the other one. Some of our main results are:

· If there exists a perfect (respectively, imperfect) IO that is computationally secure against non-uniform polynomial-size circuits, then we obtain fixed-polynomial lower bounds against NP (respectively, MA).

· In addition, computationally secure IO against non-uniform polynomial-size circuits implies super-polynomial lower bounds against NEXP.

· If MCSP is in BPP, then statistical security and computational security for IO are equivalent.

To the best of our knowledge, this is the first consequence of strong circuit lower bounds from the existence of an IO. The results are obtained via a construction of an optimal universal distinguisher, computable in randomized polynomial time with access to the MCSP oracle, that distinguishes any two circuit-samplable distributions with advantage equal to the statistical distance between these two distributions, minus a negligible error term. This is our main technical contribution. As another application, we get a simple proof of the result by Allender and Das (Inf. Comput., 2017) that SZK is contained in BPP^MCSP.

2022

Error Correcting codes that Achieve BSC Capacity Against Channels that are Poly-Size Circuits
Wed, December 14
Ronen Shaltiel (Haifa university)

Guruswami and Smith (J. ACM 2016) considered codes for channels that are computationally bounded and flip at most a p-fraction of the bits of the codeword. This class of channels is significantly stronger than Shannon’s binary symmetric channel (which flips each bit independently with probability p) but weaker than Hamming’s channels (which may flip at most a p-fraction of the bits, and are computationally unbounded).
The goal of this area is to construct codes against channels that are computationally bounded (e.g., bounded memory channels, or channels that are poly-size circuits). In this talk I will explain this direction, focusing on a recent result by Shaltiel and Silbak (FOCS 2022) that considers channels that can be implemented by poly-size circuits.
The main result of this work is a code that:

– Achieves the optimal rate of 1−H(p) (matching the capacity of binary symmetric channels, and beating the capacity of Hamming channels).
– Has poly-time encoding and decoding algorithms, after a randomized joint pre-processing stage (this is often referred to as a “Monte-Carlo construction”).
Our techniques rely on ideas from coding theory, pseudorandomness and cryptography.
This is a joint work with Jad Silbak.

Streaming algorithms for submodular maximization with a cardinality constraint
Wed, December 7
Moran Feldman (Haifa university)

Motivated by machine learning applications, much research over the last decade was devoted to solving submodular maximization problems under Big Data computational models. Perhaps the most basic such problem is the problem of maximizing a submodular function subject to a cardinality constraint. A recent series of papers has studied this problem in the data stream model, and in particular, fully determined the approximation ratio that can be obtained for it by (semi-)streaming algorithms, both when the objective function is monotone and when it is non-monotone. In this talk we will survey the main ideas behind these tight results.
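
To give a flavor of such algorithms, here is a simplified single-pass thresholding sketch (in the spirit of this literature, not any specific paper's algorithm): keep an arriving element when its marginal gain covers its fair share of a guessed optimum value.

```python
def threshold_stream(stream, f, k, opt_guess):
    """One pass over the stream for max f(S) s.t. |S| <= k, f submodular.

    Adds an element when its marginal gain covers a fair share of the
    guessed optimum; with a good guess this keeps roughly half of OPT.
    Streaming algorithms run many guesses of OPT in parallel.
    """
    S = []
    for e in stream:
        if len(S) < k:
            gain = f(S + [e]) - f(S)
            if gain >= (opt_guess / 2 - f(S)) / (k - len(S)):
                S.append(e)
    return S

# Toy monotone submodular function: coverage (size of the union).
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(threshold_stream(["d", "b", "a", "c"], f, k=2, opt_guess=4))
# -> ['d', 'b'], covering {1, 2, 3, 4}
```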

Based on joint works with Naor Alaluf, Alina Ene, Huy Nguyen, Ashkan Norouzi-Fard, Andrew Suh, Ola Svensson and Rico Zenklusen.

Random walks on rotating expanders
Wed, November 30
Gil Cohen (Tel-Aviv university)

Random walks on expanders are extremely useful in TOC. Unfortunately, though, they have an inherent cost. E.g., the spectral expansion of a Ramanujan graph deteriorates exponentially with the length of the walk (when compared to a Ramanujan graph of the same degree). In this talk, we will see how this exponential cost can be reduced to linear by applying a permutation after each random step. These permutations are tailor-made to the graph at hand, requiring no randomness to generate. Our proof is established using the powerful framework of finite free probability and interlacing families that was introduced, around ten years ago, by Marcus, Spielman, and Srivastava in their seminal sequence of works on the existence of bipartite Ramanujan graphs of every size and every degree, and in their solution to the Kadison-Singer problem.

Joint work with Gal Maor.

APMF < APSP? Gomory-Hu Tree in Subcubic Time
Wed, November 23
Amir Abboud (Weizmann institute)

The All-Pairs Max-Flow problem (APMF) asks to compute the maximum flow (or equivalently, the minimum cut) between all pairs of nodes in a graph. The naive solution of making n^2 calls to a (single-pair) max-flow algorithm was beaten in 1961 by a remarkable algorithm of Gomory and Hu that only makes n-1 calls. Within the same time bound, their algorithm also produces a cut-equivalent tree (a.k.a. GH-Tree) that preserves all pairwise minimum cuts exactly. This gives a cubic upper bound for APMF assuming that single-pair max-flow can be solved optimally, and the only improvements since 1961 have been on getting us closer to this assumption; new algorithms that break the cubic barrier were only known for special graph classes or with approximations.

The All-Pairs Shortest-Paths problem (APSP) is similar, but asks to compute the distance rather than the connectivity between all pairs of nodes. Its time complexity also appears similar, with classical cubic time algorithms that have only been broken in special cases or with approximations. Meanwhile, in the past 10 years, the conjecture that APSP requires cubic time has played a central role in fine-grained complexity, leading to cubic conditional lower bounds for many other fundamental problems that appear even easier than APMF. However, a formal reduction from APSP to APMF has remained elusive.

This talk will survey recent progress (based on joint works with Robert Krauthgamer and Ohad Trabelsi), starting with partially successful attempts at reducing APSP to APMF, going through algorithmic progress on APMF in limited settings, and leading up to a very recent paper (also with Jason Li, Debmalya Panigrahi, and Thatchaphol Saranurak) where we break the 60-year old cubic barrier for APMF, suggesting a separation between APMF and APSP.

Dimension-free relations in Communication Complexity
Wed, November 16
Lianna Hambardzumyan (Hebrew university)

In this talk we will discuss dimension-free relations between basic communication and query complexity measures and various matrix norms. Dimension-free relations are inequalities that bound a parameter as a function of another parameter without dependency on the number of input bits. This is in contrast to the more common framework in communication complexity where polylogarithmic dependencies are tolerated. Dimension-free bounds are closely related to structural results, where one seeks to describe the structure of Boolean matrices and functions that have “low complexity”. We prove such bounds for several communication and query complexity measures as well as various matrix and operator norms. We also propose several conjectures, and establish connections to graph theory, operator theory, and harmonic analysis.
The talk is based on joint work with Hamed Hatami and Pooya Hatami.

Optimal Weak to Strong Learning
Wed, November 9
Kasper Green Larsen (Aarhus university)

The classic algorithm AdaBoost allows one to convert a weak learner, that is, an algorithm that produces a hypothesis which is slightly better than chance, into a strong learner, achieving arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner.
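
For reference, a compact sketch of the classic AdaBoost loop (with one-dimensional threshold stumps standing in as the weak learner; illustrative only, not the new algorithm from the talk):

```python
import math

def adaboost(X, y, rounds):
    """AdaBoost over threshold stumps h(x) = s if x >= t else -s.

    X: list of floats; y: labels in {-1, +1}.  Each round fits the
    stump with the lowest weighted error, then reweights the sample so
    misclassified points get more attention.
    """
    n = len(X)
    D = [1.0 / n] * n                    # distribution over the sample
    ensemble = []                        # (alpha, threshold, sign)
    stumps = [(t, s) for t in sorted(set(X)) for s in (+1, -1)]
    h = lambda t, s, x: s * (1 if x >= t else -1)
    for _ in range(rounds):
        # Weak learner: stump with the smallest weighted error under D.
        t, s = min(stumps, key=lambda ts: sum(
            d for d, x, label in zip(D, X, y)
            if h(ts[0], ts[1], x) != label))
        preds = [h(t, s, x) for x in X]
        err = sum(d for d, p, label in zip(D, preds, y) if p != label)
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, s))
        D = [d * math.exp(-alpha * label * p)
             for d, label, p in zip(D, y, preds)]
        total = sum(D)
        D = [d / total for d in D]       # renormalize
    return lambda x: 1 if sum(a * h(t, s, x)
                              for a, t, s in ensemble) >= 0 else -1

clf = adaboost([0.0, 1.0, 2.0, 3.0], [-1, -1, 1, 1], rounds=5)
print([clf(x) for x in (0.5, 2.5)])  # -> [-1, 1]
```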

This work was accepted at NeurIPS’22 and is joint work with Martin Ritzert from Aarhus University.

The power of the Binary Value Principle in proof complexity
Wed, November 2
Yaroslav Alekseev (St. Petersburg university)

The (extended) Binary Value Principle (eBVP: $k + x_0 + 2x_1 + \dots + 2^n x_n = 0$ for $k>0$ and $x_i^2=x_i$) has received a lot of attention recently: several lower bounds have been proved for it (Alekseev et al. 2020, Alekseev 2021, Part and Tzameret 2021), and a polynomial simulation of a strong semialgebraic proof system in IPS+eBVP has been shown (Alekseev et al. 2020).

In this talk, we consider Polynomial Calculus with the algebraic version of Tseitin’s extension rule. We show that in this context eBVP still allows us to simulate similar semialgebraic systems. We also prove that it allows us to simulate the Square Root Rule (Grigoriev and Hirsch 2003), which is absolutely unclear in the context of ordinary Polynomial Calculus.

On the other hand, we demonstrate that eBVP probably does not help in proving exponential lower bounds for Boolean tautologies: we show that an Extended Polynomial Calculus derivation of any such tautology from eBVP must be of exponential size.

Learning dimensions
Wed, October 26
Amir Yehudayoff (Technion)

We shall survey and discuss several dimensions in the theory of machine learning.

Hardness of Approximation in P via Short Cycle Removal: Cycle Detection, Distance Oracles, and Beyond
Wed, August 10
Seri Khoury, UC Berkeley

Triangle finding is at the base of many conditional lower bounds in P, mainly for distance computation problems, and the existence of many $4$- or $5$-cycles in a worst-case instance had been the obstacle towards resolving major open questions.

We present a new technique for efficiently removing almost all short cycles in a graph without unintentionally removing its triangles. Consequently, triangle finding problems do not become easy even in almost $k$-cycle free graphs, for any constant $k \geq 4$. This allows us to establish new hardness of approximation results for distance related problems, such as distance oracles, dynamic shortest paths, and more.

Based on a joint work with Amir Abboud, Karl Bringmann, and Or Zamir (STOC 2022).

Location changed to Amado 814
APMF < APSP? Gomory-Hu Tree in Subcubic Time
Wed, June 29
Amir Abboud, Weizmann institute

The All-Pairs Max-Flow problem (APMF) asks to compute the maximum flow (or equivalently, the minimum cut) between all pairs of nodes in a graph. The naive solution of making n^2 calls to a (single-pair) max-flow algorithm was beaten in 1961 by a remarkable algorithm of Gomory and Hu that only makes n-1 calls. Within the same time bound, their algorithm also produces a cut-equivalent tree (a.k.a. GH-Tree) that preserves all pairwise minimum cuts exactly. This gives a cubic upper bound for APMF assuming that single-pair max-flow can be solved optimally, and the only improvements since 1961 have been on getting us closer to this assumption; new algorithms that break the cubic barrier were only known for special graph classes or with approximations.

The All-Pairs Shortest-Paths problem (APSP) is similar, but asks to compute the distance rather than the connectivity between all pairs of nodes. Its time complexity also appears similar, with classical cubic time algorithms that have only been broken in special cases or with approximations. Meanwhile, in the past 10 years, the conjecture that APSP requires cubic time has played a central role in fine-grained complexity, leading to cubic conditional lower bounds for many other fundamental problems that appear even easier than APMF. However, a formal reduction from APSP to APMF has remained elusive.

This talk will survey recent progress (based on joint works with Robert Krauthgamer and Ohad Trabelsi), starting with partially successful attempts at reducing APSP to APMF, going through algorithmic progress on APMF in limited settings, and leading up to a very recent paper (also with Jason Li, Debmalya Panigrahi, and Thatchaphol Saranurak) where we break the 60-year old cubic barrier for APMF, suggesting a separation between APMF and APSP.

Communication complexity-based Lower Bounds for Proof Complexity of Natural Formulas
Wed, June 22
Artur Ryazanov, St. Petersburg university

A canonical communication problem Search(ϕ) is defined for every unsatisfiable CNF ϕ: an assignment to the variables of ϕ is distributed among the communicating parties, and they are to find a clause of ϕ falsified by this assignment. Lower bounds on the randomized k-party communication complexity of Search(ϕ) in the number-on-forehead (NOF) model imply tree-size lower bounds, rank lower bounds, and size-space tradeoffs for the formula ϕ in the semantic proof system T^cc(k,c) that operates with proof lines that can be computed by a k-party randomized communication protocol using at most c bits of communication [Göös, Pitassi, 2014]. All known lower bounds on Search(ϕ) (e.g., [Impagliazzo, Pitassi, Urquhart, 1994]; [Beame, Pitassi, Segerlind, 2007]; [Göös, Pitassi, 2014]) are realized on ad-hoc formulas ϕ (i.e., formulas introduced specifically for these lower bounds). We introduce a new communication complexity approach that allows establishing proof complexity lower bounds for natural formulas.
First, we demonstrate our approach for two-party communication and apply it to the proof system Res(⊕) that operates with disjunctions of linear equalities over F_2 [Itsykson, Sokolov, 2014]. Let a formula PM_G encode that a graph G has a perfect matching. If G has an odd number of vertices, then PM_G has a polynomial-size tree-like Res(⊕)-refutation [Itsykson, Sokolov, 2014]. It was unknown whether this is the case for graphs with an even number of vertices. Using our approach we resolve this question and show a lower bound of 2^{Ω(n)} on the size of tree-like Res(⊕)-refutations of PM_{K_{n+2,n}}.
Then we apply our approach to k-party communication complexity in the NOF model and obtain an Ω((1/k) · 2^{n/2k − 3k/2}) lower bound on the randomized k-party communication complexity of Search(BPHP^{M}_{2^n}) with respect to a natural partition of the variables, where BPHP^{M}_{2^n} is the bit pigeonhole principle and M = 2^n + 2^{n(1−1/k)}. In particular, our result implies that the bit pigeonhole principle requires exponential tree-like Th(k) proofs, where Th(k) is the semantic proof system operating with polynomial inequalities of degree at most k and k = O(log^{1−ε} n) for some ε > 0. We also show that BPHP^{2^n+1}_{2^n} superpolynomially separates tree-like Th(log^{1−ε} m) from tree-like Th(log m), where m is the number of variables in the refuted formula.

The talk is based on joint work with Dmitry Itsykson.

Binary Codes with Resilience Beyond 1/4 via Interaction
Wed, June 15
Klim Efremenko, Ben-Gurion University

In the reliable transmission problem, a sender, Alice, wishes to transmit a bit-string x to a remote receiver, Bob, over a binary channel with adversarial noise. The solution to this problem is to encode x using an error-correcting code. Since the relative distance of binary codes has long been known to be at most 1/2, reliable transmission is possible only if the channel corrupts (flips) at most a 1/4-fraction of the communicated bits.
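
The 1/4 barrier follows from a standard back-of-the-envelope argument, sketched here in LaTeX: if two codewords are at distance at most n/2, an adversary flipping n/4 bits can make them indistinguishable.

\Delta(c_1, c_2) \le \frac{n}{2}
\;\Longrightarrow\;
\exists\, y \in \{0,1\}^n :\;
\Delta(y, c_1) \le \frac{n}{4} \ \text{ and } \ \Delta(y, c_2) \le \frac{n}{4},

so a one-way receiver seeing y cannot tell whether c_1 or c_2 was sent. The two-way codes of the talk circumvent exactly this argument.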

We revisit the reliable transmission problem in the two-way setting, where both Alice and Bob can send bits to each other. Our main result is the construction of two-way error-correcting codes that are resilient to a constant fraction of corruptions strictly larger than 1/4. Moreover, our code has a constant rate and requires Bob to only send one short message.

Curiously, our new two-way code requires a fresh perspective on classical error-correcting codes: While classical codes have only one distance guarantee for all pairs of codewords (i.e., the minimum distance), we construct codes where the distance between a pair of codewords depends on the “compatibility” of the messages they encode. We also prove that such codes are necessary for our result.

Joint work with Gillat Kol, Raghuvansh R. Saxena and Zhijun Zhang

Nonlinear Repair Schemes of Reed-Solomon Codes
Wed, June 8
Itzhak Tamo

The problem of repairing linear codes, particularly Reed Solomon (RS) codes, has attracted a lot of attention in recent years due to its importance in distributed storage systems. In this problem, a failed code symbol (node) needs to be repaired by downloading as little information as possible from a subset of the remaining nodes. There are examples of RS codes with efficient repair schemes, and some are even optimal. However, these schemes fall short in several aspects; for example, they require a considerable field extension degree, and in particular, they do not work over prime fields. In this work, we explore the power of nonlinear repair schemes of RS codes and show that such schemes are crucial over prime fields, and in some cases, they outperform all linear schemes.

Based on joint work with Roni Con.

From Selection theorems to Weak Epsilon-Nets in Higher Dimensions (and back?)
Wed, June 1
Natan Rubin

Given a finite point set $P$ in $R^d$ and $\eps>0$, we say that a point set $N$ in $R^d$ is a weak $\eps$-net if it pierces every convex set $K$ with $|K\cap P|\geq \eps |P|$.

Let $d\geq 3$. We show that for any finite point set in $R^d$ and any $\eps>0$, there exists a weak $\eps$-net of cardinality $O(1/\eps^{d-1/2+\delta})$, where $\delta>0$ is an arbitrarily small constant.

This is the first improvement of the bound of $O^*(1/\eps^d)$ that was obtained in 1993 by Chazelle, Edelsbrunner, Grigni, Guibas, Sharir, and Welzl for general point sets in dimension $d\geq 3$. The study of weak epsilon-nets is closely related to the fundamental problem of finding a point that hits many simplices in a dense $(d+1)$-uniform geometric hypergraph.

Temporal Path Finding in the Presence of Delays
Wed, May 18
Hendrik Molter

Consider planning a trip in a train network. In contrast to, say, a road network, the edges are temporal, i.e., they are only available at certain times. Another important difficulty is that trains, unfortunately, sometimes get delayed. This is especially bad if it causes one to miss subsequent trains. The best way to prepare against this is to have a connection that is robust to some number of (small) delays. An important factor in determining the robustness of a connection is how far in advance delays are announced and whether deviating from the initial route is possible.
We analyse three settings:
– All delays are known before departure.
– Delays may occur without prior warning.
– Deviating from the route is not possible. (In this case it does not make any difference whether delays are known before departure or occur during the travel.)
We provide a (parameterized) complexity analysis of all three cases.

Based on joint work with Eugen Füchsle, Rolf Niedermeier, and Malte Renken.

On the Size of Succinct Non-interactive Arguments in the Random Oracle Model
Wed, May 11
Eylon Yogev

Are all SNARG constructions in the random oracle model inherently large? The answer to this question is still open, but I will survey recent work that makes significant progress towards a solution.

In particular, we will see a new construction that achieves a smaller argument size. This is the first progress on the Micali construction since it was introduced over 25 years ago.
Our construction relies on a strong soundness notion for PCPs and a weak binding notion for commitments. We hope that our work paves the way for understanding if a linear argument size is achievable in the ROM.

The talk is based on joint work with Alessandro Chiesa.

Efficient Multiparty Interactive Coding for Insertions, Deletions and Substitutions
Wed, April 13
Ran Gelles

Interactive coding allows two or more parties to carry out a distributed computation over a communication network that may be noisy. The ultimate goal is to develop efficient coding schemes that tolerate a high level of noise while increasing the communication by only a constant factor (i.e., constant rate). In this work we provide computationally efficient, constant-rate schemes that conduct any computation on arbitrary networks, and succeed with high probability in the presence of adversarial noise that can insert, delete, or alter communicated messages. Our schemes are the first computationally efficient schemes in the multiparty setting that are resilient to adversarial noise, and the first to resist insertions and deletions in this setting.
We first show a scheme that resists a fraction $\epsilon / m$ of adversarial noise, where $m$ is the number of links in the network and $\epsilon$ is some small constant. This scheme makes two assumptions: (a) every two parties hold a common random string, and (b) the noise is oblivious, i.e., it is fixed at the onset of the execution and is independent of the inputs and randomness held by the parties.
We then show how to eliminate both assumptions, albeit at the cost of slightly decreasing the resilience of our scheme to $\epsilon / (m \log m)$.

Public-Key Quantum Money with a Classical Bank
Wed, April 6
Omri Shmueli

Quantum money is a main primitive in quantum cryptography that enables a bank to distribute to parties in the network, called wallets, unclonable quantum banknotes that serve as a medium of exchange between wallets. While quantum money suggests a theoretical solution to some of the fundamental problems in currency systems, it still requires a strong model to be implemented: quantum computation and a quantum communication infrastructure. A central open question in this context is whether we can have a quantum money scheme that uses “minimal quantumness”, namely, local quantum computation and only classical communication.

Public-key semi-quantum money (Radian and Sattath, AFT 2019) is a quantum money scheme where the algorithm of the bank is completely classical, and quantum banknotes are publicly verifiable on any quantum computer. In particular, such a scheme relies on local quantum computation and only classical communication. The only known construction of public-key semi-quantum money is based on quantum lightning (Zhandry, EUROCRYPT 2019), which in turn is based on a computational assumption that is now known to be broken.

In this work, we construct public-key semi-quantum money, based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning With Errors problem. The technical centerpiece of our construction is a new 3-message protocol, where a classical computer can delegate to a quantum computer the generation of a quantum state that is both unclonable and publicly verifiable.

Locality-Preserving Hashing for Shifts with Connections to Cryptography
Wed, March 30
Ohad Klein

We discuss the following problem, its solution, and its applications to Cryptography:
Alice receives a non-periodic string (such as ABCDEF), while Bob receives a string (such as CDEFAB), obtained by applying a hidden cyclic shift to Alice’s string.
Alice and Bob query their strings in a small number of positions (sublinear in the amount of shifting) and then exchange a single short message.
How can they detect the shift with minimal error probability?
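
For contrast, the trivial zero-error baseline reads both strings in full and tries every rotation; the point of the talk is to do much better, with sublinear queries and one short message. A brute-force sketch of the baseline in Python (illustrative only, not the protocol from the talk):

def find_shift(a, b):
    # Return r such that b equals a rotated left by r.
    # Unique when a is non-periodic; O(n^2) time, reads everything.
    n = len(a)
    for r in range(n):
        if b == a[r:] + a[:r]:
            return r

print(find_shift("ABCDEF", "CDEFAB"))  # -> 2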

Based on Joint works with Elette Boyle, Itai Dinur, Niv Gilboa, Yuval Ishai, Nathan Keller.

On the Complexity of Two-Party Differential Privacy
Wed, March 23
Eliad Tsfadia

In distributed differential privacy, the parties perform analysis over their joint data while preserving the privacy of both datasets. Interestingly, for a few fundamental two-party functions such as inner product and Hamming distance, the accuracy of the distributed solution lags far behind what is achievable in the client-server setting. McGregor, Mironov, Pitassi, Reingold, Talwar, and Vadhan [FOCS ’10] proved that this gap is inherent, showing upper bounds on the accuracy of (any) distributed solution for these functions. These limitations can be bypassed when settling for computational differential privacy, where the data is differentially private only in the eyes of a computationally bounded observer, using public-key cryptography primitives.

We prove that the use of public-key cryptography is necessary for bypassing the limitation of McGregor et al., showing that a non-trivial solution for the inner-product, or the Hamming distance, implies the existence of a key-agreement protocol. Our bound implies a combinatorial proof for the fact that non-Boolean inner product of independent (strong) Santha-Vazirani sources is a good condenser. We obtain our main result by showing that the inner-product of a (single, strong) SV source with a uniformly random seed is a good condenser, even when the seed and source are dependent.

TBD
Wed, January 26
Ran Gelles

TBD

TBD
Wed, January 19
Omri Shmueli

TBD

TBD
Wed, January 12
Ohad Klein

TBD

TBD
Wed, January 5
Eliad Tsfadia

TBD

2021

Round&Round: An Improved Algorithm for 2-Dimensional Vector Bin Packing
Wed, December 29
Ariel Kulik

We study the 2-Dimensional Vector Bin Packing Problem (2VBP), a generalization of classic Bin Packing that is widely applicable in resource allocation and scheduling. In 2VBP we are given a set of items, where each item is associated with a two-dimensional volume vector. The objective is to partition the items into a minimal number of subsets (bins), such that the total volume of items in each subset is at most 1 in each dimension.
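
For intuition about the problem (not the Round&Round algorithm itself), here is a simple first-fit baseline in Python; such greedy heuristics give far weaker guarantees than the LP-based approach of the talk.

def first_fit_2vbp(items):
    # items: list of (v1, v2) volume vectors with 0 <= vi <= 1.
    # Greedily place each item into the first bin where it fits.
    bins = []  # each bin: [load1, load2, contents]
    for v1, v2 in items:
        for b in bins:
            if b[0] + v1 <= 1 and b[1] + v2 <= 1:
                b[0] += v1; b[1] += v2; b[2].append((v1, v2))
                break
        else:
            bins.append([v1, v2, [(v1, v2)]])
    return bins

print(len(first_fit_2vbp([(0.6, 0.2), (0.5, 0.5), (0.3, 0.7)])))  # -> 2 bins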

We give an asymptotic 25/18 + eps ≈ 1.389 approximation for the problem, thus improving upon the best known asymptotic ratio of 1 + ln(3/2) + eps ≈ 1.406 due to Bansal, Elias and Khan. Our algorithm applies a novel Round&Round approach which iteratively solves a configuration-LP relaxation for the residual instance and samples a small number of configurations based on the solution for the configuration-LP. For the analysis we derive an iteration-dependent upper bound on the solution size for the configuration-LP, which holds with high probability. We obtain the improved ratio by exploiting key structural properties of 2VBP instances in a fractional manner, using new ideas and techniques which are of independent interest.

Secret Sharing, Slice Formulas, and Monotone Real Circuits
Wed, December 22
Benny Applebaum

A secret-sharing scheme allows distributing a secret $s$ among $n$ parties such that only some predefined “authorized” sets of parties can reconstruct the secret $s$, and all other “unauthorized” sets learn nothing about $s$. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $2^{n-o(n)}$, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) reduced the share size to $2^{0.994n+o(n)}$, and this was further improved by several follow-ups, culminating in an upper bound of $1.5^{n+o(n)}$ (Applebaum and Nir, CRYPTO 2021). Following these advances, it is natural to ask whether these new approaches can lead to a truly sub-exponential upper bound of $2^{n^{1-\epsilon}}$ for some constant $\epsilon>0$, or even all the way down to polynomial upper bounds.
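
As a toy illustration of the notion (unrelated to the constructions in the paper), the single access structure “all $n$ parties” is realized with 1-bit shares by additive sharing over GF(2):

import secrets
from functools import reduce
from operator import xor

def share(s, n):
    # n-out-of-n sharing of a bit s: any n-1 shares are uniformly random,
    # while the XOR of all n shares equals s.
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, s))
    return shares

def reconstruct(shares):
    return reduce(xor, shares, 0)

The question above is how small the shares can be when every monotone collection of authorized sets must be supported.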

In this paper, we relate this question to the complexity of computing monotone Boolean functions by monotone real circuits (MRCs) — a computational model that was introduced by Pudlák (J. Symb. Log., 1997) in the context of proof complexity. We introduce a new notion of “separable” MRCs that lies between monotone real circuits and monotone real formulas (MRFs). As our main results, we show that recent constructions of general secret-sharing schemes implicitly give rise to separable MRCs for general monotone functions of similar complexity and that some monotone functions (in monotone NP) cannot be computed by sub-exponential size separable MRCs. Interestingly, it seems that proving similar lower-bounds for general MRCs is beyond the reach of current techniques.

We use this connection to obtain lower-bounds against a natural family of secret-sharing schemes, as well as new non-trivial upper-bounds for MRCs. Specifically, we conclude that recent approaches for secret-sharing schemes cannot achieve sub-exponential share size and that every monotone function can be realized by an MRC (or even MRF) of complexity $1.5^{n+o(n)}$. To the best of our knowledge, this is the first improvement over the trivial $2^{n-o(n)}$ upper-bound. Along the way, we show that the recent constructions of general secret-sharing schemes implicitly give rise to Boolean formulas over slice functions and prove that such formulas can be simulated by separable MRCs of similar size. On a conceptual level, our paper continues the rich line of study that relates the share size of secret-sharing schemes to monotone complexity measures.

Based on joint work with Amos Beimel, Oded Nir, Naty Peter, and Toniann Pitassi.

Extending Generalization Theory Towards Addressing Modern Challenges in Machine Learning
Wed, December 15
Shay Moran

Recent years have witnessed tremendous progress in the field of Machine Learning (ML).
However, many of the recent breakthroughs demonstrate phenomena that lack explanations, and sometimes even contradict conventional wisdom.
One main reason for this is that classical ML theory adopts a worst-case perspective which seems too pessimistic to explain practical ML:
in reality data is rarely worst-case, and experiments indicate that often much less data is needed than predicted by traditional theory.

In this talk we will discuss two variations of classical PAC learning theory.
These variants are based on a distribution- and data-dependent perspective
which complements the distribution-free worst-case perspective of classical theory,
and is suitable for exploiting specific properties of a given learning task.

Based on two joint works with Noga Alon, Olivier Bousquet, Steve Hanneke,
Ron Holzman, Ramon van Handel, and Amir Yehudayoff.

Extremely Efficient Distance Computations using Distributed Sparsity-Aware Algorithms
Wed, December 8
Dean Leitersdorf

Given a distributed network, represented by a graph of computation nodes connected by communication edges, a fundamental problem is computing distances between nodes. Our recent line of work shows that while exactly computing all distances (All-Pairs Shortest Paths, APSP) in certain distributed settings currently has O(n^{1/3}) complexity, it is possible to find very good distance approximations even with O(1) complexity. This talk will present a unified view of our various papers developing these interesting advances in distributed computing.

The central theme uniting these developments is designing sparsity-aware algorithms, and then applying them to problems on general, non-sparse, graphs.

Primarily, the main tools used are a series of distributed sparse matrix multiplication algorithms we develop. We then use these tools in novel manners to develop a toolkit for basic distance computations, such as computing for each node the distances to the O(n^{2/3}) nodes closest to it. Next, we use these tools to solve end-goal distance problems, in O(log^2 n) rounds.

Subsequently, we adapt the load-balancing techniques used in the matrix multiplication algorithms to show combinatorial algorithms which directly compute APSP approximations, without passing through other intermediate distance tools. This allows us to show O(1) round algorithms.

Finally, the talk concludes with observing the connection of the above addressed works to other developments in the realm of distributed computing, specifically MST computation and subgraph existence problems.

On Parameterized Analysis and the Disjoint Paths Problem
Wed, November 17
Meirav Zehavi

Parameterized Analysis leads both to a deeper understanding of intractability results and to practical solutions for many NP-hard problems. Informally speaking, Parameterized Analysis is a mathematical paradigm to answer the following fundamental question: What makes an NP-hard problem hard? Specifically, how do different parameters (being formal quantifications of structure) of an NP-hard problem relate to its inherent difficulty? Can we exploit these relations algorithmically, and to what extent? Over the past three decades, Parameterized Analysis has grown to be a mature field of outstandingly broad scope with significant impact from both theoretical and practical perspectives on computation.

In this talk, I will first give a brief introduction to the field of Parameterized Analysis. Additionally, I will zoom in on a specific result, namely, the first single-exponential time parameterized algorithm for the Disjoint Paths problem on planar graphs. An efficient algorithm for the Disjoint Paths problem in general, and on “almost planar” graphs in particular, is a critical part of the quest for the establishment of an Efficient Graph Minors Theory. As the Graph Minors Theory is the origin of Parameterized Analysis and is ubiquitous in the design of parameterized algorithms, making this theory efficient is a holy grail in the field.

Settling the complexity of Nash equilibrium in congestion games
Wed, November 10
Yakov Babichenko

We consider
(i) the problem of finding a (possibly mixed) Nash equilibrium in congestion games, and
(ii) the problem of finding an (exponential precision) fixed point of the gradient descent dynamics of a smooth function f:[0,1]^n -> R.
We prove that these problems are equivalent.
Our result holds for various explicit descriptions of f, ranging from (almost general) arithmetic circuits, to degree-5 polynomials.
By a very recent result of [Fearnley, Goldberg, Hollender, Savani ’21] this implies that these problems are PPAD \cap PLS-complete.
As a corollary, we also obtain the following equivalence of complexity classes:
CCLS = PPAD \cap PLS.

Joint work with Aviad Rubinstein.

On edit distance oracles and planar distance oracles
Wed, November 3
Oren Weimann

In this talk, I will discuss two highly related problems:

(1) Edit distance oracles: preprocess two strings S and T of total length n into a data structure that can quickly report the optimal edit distance between any substring of S and any substring of T. I will describe a data structure with query time Õ(1), and construction time and space n^(2+o(1)). This is optimal up to subpolynomial factors since computing just the edit distance between S and T requires quadratic time (assuming the strong exponential time hypothesis).
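
For reference, the quadratic-time computation that the conditional lower bound refers to is the textbook dynamic program, sketched here in Python with a rolling row:

def edit_distance(S, T):
    # Classical O(|S| * |T|) DP: insertions, deletions, substitutions, unit cost.
    m, n = len(S), len(T)
    dp = list(range(n + 1))          # row 0: distances from the empty prefix of S
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i       # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # delete S[i-1]
                        dp[j - 1] + 1,                   # insert T[j-1]
                        prev + (S[i - 1] != T[j - 1]))   # substitute / match
            prev = cur
    return dp[n]

print(edit_distance("kitten", "sitting"))  # -> 3

The oracle above answers any substring-vs-substring query in Õ(1) time after n^(2+o(1)) preprocessing, i.e., essentially the cost of a single DP table.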

(2) Planar distance oracles: preprocess a directed weighted planar graph into a data structure that can quickly report the distance between any two vertices. Using the concept of Voronoi diagrams, dramatic progress has been made on planar distance oracles in the past few years. I will describe an oracle of almost linear n^{1+o(1)} size that answers queries in Õ(1) time. This is again optimal up to subpolynomial factors. However, the construction time is currently O(n^{1.5}), which is not known to be optimal.

I will describe how ideas developed for planar distance oracles propagate to edit distance oracles. The structure of the edit distance graph allows for a simpler presentation of some of the techniques involved, and is further exploited to obtain a faster construction time. It is not uncommon that techniques originally developed for edit distance are later extended and generalized to planar graphs. Here, ideas propagate in the opposite direction; techniques originally developed for planar graphs are specialized to edit distance.

Based on joint works with Panos Charalampopoulos, Pawel Gawrychowski and Shay Mozes (STOC 2019, ICALP 2021)

2020

The Metric Relaxation for 0-Extension Admits an Ω(log^{2/3} k) Gap
Wed, December 30
Nitzan Tur, Technion

We consider the 0-Extension problem, where we are given an undirected graph G=(V,E) equipped with non-negative edge weights w: E→ℝ+, a collection T = {t_1,…,t_k} ⊆ V of k special vertices called terminals, and a semi-metric D over T.

The goal is to assign every non-terminal vertex to a terminal while minimizing the sum over all edges of the weight of the edge multiplied by the distance in D between the terminals to which the endpoints of the edge are assigned.
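
In symbols (my notation, matching the prose above): writing f : V → T for the assignment, with f(t) = t for every terminal t, the goal is

\min_{f : V \to T,\; f(t) = t \;\forall t \in T} \;\sum_{(u,v) \in E} w(u,v)\, D\bigl(f(u), f(v)\bigr).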

0-Extension admits two known algorithms, achieving approximations of O(log k) [Călinescu-Karloff-Rabani SICOMP ’05] and O(log k / log log k) [Fakcharoenphol-Harrelson-Rao-Talwar SODA ’03].

Both known algorithms are based on rounding a natural linear programming relaxation called the metric relaxation, in which D is extended from T to the entirety of V. The current best known integrality gap for the metric relaxation is Ω(√(log k)).

In this work we present an improved integrality gap of Ω(log^{2/3} k) for the metric relaxation. Our construction is based on the randomized extension of one graph by another, a notion that captures lifts of graphs as a special case and might be of independent interest.

Inspired by algebraic topology, our analysis of the gap instance is based on proving no continuous section (in the topological sense) exists in the randomized extension.

Masters seminar
Maximizing the Correlation: Extending Grothendieck’s Inequality to Large Domains
Wed, December 23
Dor Katzelnick, Technion

Correlation Clustering is an elegant model where given a graph with edges labeled + or −, the goal is to produce a clustering that agrees the most with the labels: + edges should reside within clusters and − edges should cross between clusters.

In this work we study the MaxCorr objective, aiming to find a clustering that maximizes the difference between edges classified correctly and incorrectly. We focus on the case of bipartite graphs and present an improved approximation of 0.254, improving upon the known approximation of 0.219 given by Charikar and Wirth (FOCS’04) and going beyond the 0.2296 barrier imposed by their technique.

Our algorithm is inspired by Krivine’s method for bounding Grothendieck’s constant, and we extend this method to allow for more than two clusters in the output.

Moreover, our algorithm leads to two additional results: (1) the first known approximation guarantees for MaxCorr where the output is constrained to have a bounded number of clusters; and (2) a natural extension of Grothendieck’s inequality to large domains.

Masters seminar
A (1 − 1/e − ε)-Approximation for the Monotone Submodular Multiple Knapsack Problem
Wed, December 9
Yaron Fairstein, Technion

We study the problem of maximizing a monotone submodular function subject to a Multiple Knapsack constraint (SMKP). The input is a set I of items, each associated with a non-negative weight, and a set of bins, each having a capacity. Also, we are given a submodular, monotone and non-negative function f over subsets of the items. The objective is to find a subset of items A ⊆ I and a packing of the items in the bins, such that f(A) is maximized. SMKP is a natural extension of both Multiple Knapsack and the problem of monotone submodular maximization subject to a knapsack constraint.

Our main result is a nearly optimal polynomial time (1 − 1/e − ε)-approximation algorithm for the problem, for any ε > 0. Our algorithm relies on a refined analysis of techniques for constrained submodular optimization combined with a sophisticated application of tools used in the development of approximation schemes for packing problems.

Joint work with Ariel Kulik, Seffi Naor, Danny Raz, and Hadas Shachnai.

Bayesian Persuasion under Ex Ante and Ex Post Constraints
Wed, December 2
Konstantin Zabarnyi, Technion

Bayesian persuasion, as introduced by Kamenica and Gentzkow in 2011, is the study of information sharing policies among strategic agents. A prime example is signaling in online ad auctions: what information should a platform signal to an advertiser regarding a user when selling the opportunity to advertise to her? Practical considerations such as preventing discrimination, protecting privacy or acknowledging limited attention of the information receiver impose constraints on information sharing. Despite their importance in real-life applications, such constraints are not usually taken into account in Bayesian persuasion literature. In our work, we propose and analyze a simple way to mathematically model such constraints as restrictions on Receiver’s admissible posterior beliefs.

We consider two families of constraints — ex ante and ex post, where the latter limits each instance of Sender-Receiver communication, while the former more general family can also pose restrictions in expectation. For both families, we show existence of a simple optimal signaling scheme in the sense of a small number of signals; our bounds for signal numbers are tight. We then provide an additive bi-criteria FPTAS for an optimal constrained signaling scheme when the number of states of nature is constant; we improve the approximation to single-criteria under a Slater-like regularity condition. The FPTAS holds under standard assumptions, and more relaxed assumptions yield a PTAS. Finally, we bound the ratio between Sender’s optimal utility under convex ex ante constraints and the corresponding ex post constraints. We demonstrate how this bound can be applied to find an approximately welfare-maximizing constrained signaling scheme in ad auctions.
Joint work with Yakov Babichenko (Technion IE&M) and Inbal Talgam-Cohen (Technion CS).
Masters seminar
High quality sets of questions in the “20 Questions” game
Wed, November 25
Idan Mehalel, Technion

In the “20 questions” game, Alice picks a probability distribution over the set {1,…,n}, and draws a secret element x from it. Bob’s goal is to construct a strategy to reveal x using only “Yes or No” questions, while minimizing the expected number of questions asked. Bob’s success is guaranteed if he uses a Huffman code, but then he might ask any possible question.
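
Bob's Huffman benchmark is easy to compute; a short Python sketch (illustrative) that returns the expected number of questions of an optimal unrestricted strategy:

import heapq

def expected_questions(probs):
    # Expected codeword length of a Huffman code = expected number of
    # Yes/No questions asked by an optimal (unrestricted) strategy.
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b               # one extra question along both subtrees
        heapq.heappush(heap, a + b)
    return total

print(expected_questions([0.5, 0.25, 0.25]))  # -> 1.5

The catch motivating the paper is that the questions produced this way can be arbitrary subsets of {1,…,n}.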

In the paper “Twenty (simple) questions”, Dagan et al. showed that the minimal number of questions that allows constructing optimal strategies for all distributions, denoted q(n), is much smaller than the number of all possible questions. They provide lower and upper bounds on q(n) for any n, but calculate it exactly only for some values of n.

We give a formula for the value of q(n) for any n, and present a new explicit lower bound on q(n). We also generalize some results of Dagan et al. to the “d-ary 20 questions” game, in which every question has d>2 possible answers, instead of only “Yes” and “No”.

Masters seminar
Efficient List-Decoding with Constant Alphabet and List Sizes
Wed, November 18
Zeyu Guo, University of Haifa

The notion of list-decoding, introduced by Elias in the 1950s, is a natural generalization of unique decoding, where the decoder is allowed to output a list of codewords that contains the transmitted codeword instead of a single codeword. Besides being a fundamental concept in coding theory, list decoding has found diverse applications in theoretical computer science.

Error-correcting codes of rate R that are list-decodable up to relative radius 1 − R − ε are said to achieve the list-decoding capacity. The first explicit capacity-achieving list-decodable codes, known as folded Reed-Solomon (FRS) codes, were constructed by Guruswami and Rudra in their celebrated paper [Guruswami and Rudra, 2005]. A disadvantage of FRS codes, however, is that the alphabet size is not constant and has to grow with the block length of the codes.

In this talk, I will present our recent explicit and efficient algebraic construction of capacity-achieving list-decodable codes whose list size and alphabet size are both constant, depending only on the gap to capacity ε.

The construction is based on algebraic-geometric (AG) codes. Our main observation is that the output list of AG codes with subfield evaluation points is contained in an affine shift of the image of a block-triangular-Toeplitz (BTT) matrix, and that the list size can be reduced to a constant by restricting the message space to a BTT evasive subspace.

This is joint work with Noga Ron-Zewi. No prior knowledge of algebraic-geometric codes is assumed.

Analysis of Two-variable Recurrence Relations with Application to Parameterized Approximations
Wed, November 11
Ariel Kulik, Technion

In this paper we introduce randomized branching as a tool for parameterized approximation and develop the mathematical machinery for its analysis. Our algorithms improve the best known running times of parameterized approximation algorithms for Vertex Cover and 3-Hitting Set for a wide range of approximation ratios. One notable example is a simple parameterized random 1.5-approximation algorithm for Vertex Cover, whose running time of O*(1.01657^k) substantially improves the best known running time of O*(1.0883^k) [Brankovic and Fernau, 2013]. For 3-Hitting Set we present a parameterized random 2-approximation algorithm with running time O*(1.0659^k), improving upon the best known O*(1.29^k) algorithm of [Brankovic and Fernau, 2012].

The running times of our algorithms are derived from an asymptotic analysis of a wide class of two-variable recurrence relations. We show an equivalence between these recurrences and a stochastic process, which we analyze using the Method of Types, by introducing an adaptation of Sanov’s theorem to our setting. We believe our novel analysis of recurrence relations, which is of independent interest, is a main contribution of this paper.

Polynomial protocols for range proofs
Wed, November 4
Ariel Gabizon, Aztec
In a polynomial protocol a prover sends messages that are polynomials, and the verifier is allowed to check polynomial identities between these polynomials. The prover complexity is measured as the sum of the degrees of the polynomials sent. The motivation for the definition is to capture prover complexity in zero-knowledge proof systems based on polynomial commitment schemes.
We will present and illustrate this notion, and present an open question on improved protocols for “range proofs”: given a committed polynomial f and a subset H of the field, we wish to prove that f’s values on H are in a bounded range [1,…,M].
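
One standard way to phrase such a range constraint as a polynomial identity (an illustration of the flavor, not necessarily the protocol discussed in the talk): f takes values in [1,…,M] on H exactly when

\prod_{j=1}^{M} \bigl(f(X) - j\bigr) \;=\; q(X)\cdot Z_H(X),
\qquad\text{where } Z_H(X) = \prod_{h \in H}(X - h),

for some quotient polynomial q sent by the prover. The naive identity forces q to have degree roughly M·|H|, which is what one hopes to improve.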

We will also attempt to give intuition as to why such range proofs are a crucial component in practical zero-knowledge systems.

(Joint work with Zachary J. Williamson)

On computing multilinear polynomials using depth four circuits of bounded individual degree
Wed, October 28
Suryajith Chillara, University of Haifa

The complexity of Iterated Matrix Multiplication is a central theme in Computational Complexity theory, as the problem is closely related to the problem of separating various complexity classes within P. In this talk, we will discuss the complexity of computing an entry in the product of d many n×n generic matrices (denoted IMM_{n,d}) by depth four circuits of bounded individual degree.

We show that for all r ≤ n^a, for a fixed constant a, any “syntactic” depth four circuit of bounded individual degree r computing IMM_{n,d} must have size n^{Ω(√d)} when d ≤ n^b (for a fixed constant b ≤ a). This improves upon a previous result of Kayal, Saha and Tavenas [STOC 2016 & Theory of Computing, 2018], who proved a lower bound of (n/r^{1.1})^{Ω(√d/r)} for the same.

This talk assumes no relevant background. We will introduce the model of computation, the motivation to study computation of multilinear polynomials by circuits of bounded individual degree and briefly survey relevant results to this restricted model of computation.

Online talk
Strong Average-Case Circuit Lower Bounds from Non-trivial Derandomization
Wed, January 29
Lijie Chen

We prove that for all constants a, NQP = NTIME[n^{polylog(n)}] cannot be (1/2 + 2^{-log^a n})-approximated by 2^{log^a n}-size ACC^0 circuits. Previously, it was even open whether E^NP can be (1/2 + 1/sqrt(n))-approximated by AC^0[2] circuits. As a straightforward application, we obtain an infinitely-often non-deterministic pseudorandom generator for poly-size ACC^0 circuits with seed length 2^{log^eps n}, for all eps > 0.

More generally, we establish a connection showing that, for a typical circuit class C, non-trivial nondeterministic CAPP algorithms imply strong (1/2 + 1/n^{omega(1)}) average-case lower bounds for nondeterministic time classes against C circuits. The existence of such (deterministic) algorithms is much weaker than the widely believed conjecture PromiseBPP = PromiseP.

Our new results build on a line of recent works, including [Murray and Williams, STOC 2018], [Chen and Williams, CCC 2019], and [Chen, FOCS 2019]. In particular, they strengthen the corresponding (1/2 + 1/polylog(n))-inapproximability average-case lower bounds in [Chen, FOCS 2019]. The two important technical ingredients are techniques from Cryptography in NC^0 [Applebaum et al., SICOMP 2006], and Probabilistically Checkable Proofs of Proximity with NC^1-computable proofs.

This is joint work with Hanlin Ren from Tsinghua University.

Foundations of Distributed Computing in the 2020s
Wed, January 22
Jukka Suomela

In this talk I will review some major advances in the theory of distributed computing in the past decade and discuss key research challenges for the 2020s, with the main focus on distributed graph algorithms. I will present promising new approaches for doing research in the field, and I will also highlight examples of seemingly elementary questions that are still beyond the reach of our current techniques.

Average sensitivity of graph algorithms
Wed, January 15
Nithin Varma

In modern applications of graph algorithms, where the graphs of interest are large and dynamic, it is unrealistic to assume that an input representation contains the full information of a graph being studied. For example, consider a social network, where a vertex corresponds to a user and an edge corresponds to a friendship relation. It is reasonable to assume that users do not always update new friendships on the social network, and that sometimes they do not fully disclose their friendship relations for security or privacy reasons. This motivates the design of graph algorithms that, in spite of being given only a (large) subgraph as input, output solutions that are close to the solutions output when the whole graph is available. In this talk, I will introduce a notion of sensitivity of graph algorithms that formalizes this desirable feature. After discussing the basic properties of our sensitivity definition, I will give an overview of the key ideas used in the design of algorithms with low sensitivity for the maximum matching problem, the minimum cut problems, and the balanced cut problem.

Nearly Instance-Optimal Mechanisms in Differential Privacy
Wed, January 8
Hilal Asi

We develop differentially private mechanisms that achieve nearly instance-optimal losses, achieving lower loss than all appropriately unbiased mechanisms for any possible instance. We show that our mechanisms, with a modest increase in sample size (logarithmic or constant), are instance-optimal for a large family of functions. In contrast to existing mechanisms, which use the global or local sensitivity of the function being estimated, and so are necessarily instance suboptimal, the key to our construction is to use the inverse of the sensitivity. This allows a simple instance-optimal algorithm, and we develop several representative private mechanisms, including for the median and regression problems.
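
A LaTeX sketch of the inverse-sensitivity idea as I understand it (a paraphrase; details may differ from the paper): for a dataset x and candidate output t, let the inverse sensitivity be the distance to the nearest dataset where f attains t, and sample t with exponential-mechanism weights,

\mathrm{len}_f(x; t) \;=\; \min_{x' \,:\, f(x') = t} d(x, x'),
\qquad
\Pr\bigl[M(x) = t\bigr] \;\propto\; \exp\!\Bigl(-\tfrac{\varepsilon}{2}\,\mathrm{len}_f(x; t)\Bigr),

so outputs that are far from being attainable near x are exponentially down-weighted.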

2019

An Adaptive Step Toward the Multiphase Conjecture
Mon, December 23
Omri Weinstein

In 2010, Patrascu proposed the Multiphase problem as a candidate for proving polynomial lower bounds on the operational time of dynamic data structures. Patrascu conjectured that any data structure for the Multiphase problem must make n^ε cell-probes in either the update or query phase, and showed that this would imply similar unconditional lower bounds on many important dynamic data structure problems. Alas, there has been almost no progress on this conjecture in the decade since its introduction.

We show an ~Ω(√n) cell-probe lower bound on the Multiphase problem for data structures with general (adaptive) updates, and queries with unbounded but “layered” adaptivity. This result captures all known set-intersection data structures and significantly strengthens previous Multiphase lower bounds, which only captured non-adaptive data structures. Our main technical result is a communication lower bound on a 4-party variant of Patrascu’s Number-On-Forehead Multiphase game, using information complexity techniques.

Joint work with Young Kun Ko.