Theory of Computation Seminar
We meet every Tuesday at Schreiber 309.
Light lunch is served from 12:45, and the talk starts at 13:10 and lasts 1-2 hours (with a 10-minute break).
The seminar also has a mailing list. To join or leave the list, fill out the form on this page, or contact Dean Doron, the seminar coordinator.
Schedule
Archive of Previous Talks
Spring 2018

March 6, 2018
Jad Silbak (TAU)
Computational Two-Party Correlation
We prove a dichotomy theorem for two-party protocols, and show that for every poly-time two-party protocol with single-bit output, at least one of the following holds:
1. The protocol can be used to construct a key-agreement protocol.
2. For every constant ρ > 0, the parties' output is ρ-uncorrelated: let (X, Y, T) denote the parties' outputs and the protocol's transcript, respectively. A protocol is ρ-uncorrelated if there exists an efficient "decorrelator" algorithm Decor that, when given a random transcript T, produces two numbers P_A and P_B such that no efficient algorithm can distinguish (U_{P_A}, U_{P_B}, T) (where U_p denotes a biased coin with bias p) from (X, Y, T) with distinguishing advantage larger than ρ.
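Written out, the indistinguishability requirement in the second case reads as follows (a sketch in my own notation, following the symbols of the abstract and omitting the precise quantification over protocols):

```latex
(P_A, P_B) \leftarrow \mathsf{Decor}(T), \qquad
\Big|\, \Pr\big[D(U_{P_A}, U_{P_B}, T) = 1\big] \;-\; \Pr\big[D(X, Y, T) = 1\big] \,\Big| \;\le\; \rho
```

for every efficient distinguisher $D$, where $U_p$ denotes an independent coin of bias $p$.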
March 13, 2018
Karthik C.S. (Weizmann)
A Framework for Parameterized Hardness of Approximation
In this talk we will see a framework to show inapproximability of parameterized problems. This framework generalizes the 'Distributed PCPs' framework recently introduced by Abboud et al. [FOCS'17]. By applying the gadget reductions given by Chalermsook et al. [FOCS'17] to this framework, we settle the inapproximability of parameterized dominating set under various time hypotheses. Joint work with Bundit Laekhanukit and Pasin Manurangsi. Preprint available at https://eccc.weizmann.ac.il/report/2017/186/
March 20, 2018
William Hoza (UT Austin)
Typically-Correct Derandomization for Small Time and Space
Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. An immediate corollary is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also discuss additional complexity-theoretic applications of our technique.
March 27, 2018
Daniel Grier (MIT)
New Hardness Results for the Permanent Using Linear Optics
One of the great accomplishments in complexity theory was Valiant's 1979 proof that the permanent of a matrix is #P-hard to compute. Subsequent work simplified Valiant's ideas and even began to recast them as problems in quantum computing. In 2011, this culminated in a striking proof by Aaronson, based solely on quantum linear optics, of the #P-hardness of the permanent. Although this simplified (at least for physicists) aspects of Valiant's proof by offloading its difficulty onto central and well-known theorems in linear optics, the question remained: what else was gained by converting Valiant's combinatorial proof into a linear optical one? In this talk I'll give one possible answer to this question: namely, that these quantum techniques are useful for proving hardness of permanents for matrices that obey classical Lie group structure. In particular, I will prove that computing the permanent of real orthogonal matrices is #P-hard. The hardness result translates to permanents of orthogonal matrices over finite fields of any characteristic not equal to 2 or 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. The talk will begin with an overview of linear optics and the extremely nice correspondence between linear optical circuits and matrix permanents. Along the way, we will reconstruct Aaronson's proof, and then show how to modify it to obtain our results. Some knowledge of quantum computing will be useful, but not strictly required. No prior knowledge of linear optics is necessary.
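The characteristic-2 degeneracy mentioned in the abstract is easy to verify directly: over GF(2) the sign (-1)^sgn(σ) equals 1, so the permanent and determinant sums coincide term by term. A minimal brute-force sketch (toy code, not from the talk):

```python
from itertools import permutations
import random

def parity(p):
    """Parity of a permutation, counted via inversions."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j]) % 2

def perm(A):
    """Permanent via the permutation-sum formula (no signs)."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += prod
    return total

def det(A):
    """Determinant via the Leibniz formula (with signs)."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        prod = (-1) ** parity(p)
        for i in range(n):
            prod *= A[i][p[i]]
        total += prod
    return total

# In characteristic 2 we have -1 == 1, so per(A) == det(A) mod 2 for every matrix.
random.seed(0)
for _ in range(200):
    n = random.randint(1, 4)
    A = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    assert perm(A) % 2 == det(A) % 2
print("per(A) = det(A) over GF(2) on all sampled matrices")
```

This is why the hardness result cannot extend to characteristic 2: there the permanent inherits the determinant's polynomial-time algorithm.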
April 10, 2018
Nikolaos Makriyannis (TAU)
Tighter Bounds on Multi-Party Coin Flipping via Augmented Weak Martingales and Differentially Private Sampling
In his seminal work, Cleve [STOC '86] proved that any $r$-round coin-flipping protocol can be efficiently biased by $\Theta(1/r)$. This lower bound was met for the two-party case by Moran, Naor, and Segev [Journal of Cryptology '16], and for the three-party case (up to a $\polylog$ factor) by Haitner and Tsfadia [SICOMP '17], and was approached for $n$-party protocols when $n \leq \log\log r$. However, the best bias for $n$-party coin-flipping protocols remains $O(n/\sqrt{r})$, achieved by the majority protocol of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript '85]. In this talk, we will present a tighter lower bound on the bias of coin-flipping protocols, showing that, for every constant $\eps > 0$, an $r^{\eps}$-party $r$-round coin-flipping protocol can be efficiently biased by $\tilde{\Omega}(1/\sqrt{r})$. As far as we know, this is the first improvement of Cleve's bound, and it is only a (multiplicative) factor of $n = r^{\eps}$ away from the aforementioned upper bound of Awerbuch et al. We prove the above bound using two new results that we believe are of independent interest. The first result is that a sequence of (``augmented'') weak martingales has a large gap: with constant probability there exist two adjacent variables whose gap is at least the ratio between the gap between the first and last variables and the square root of the number of variables. This generalizes the result of Cleve and Impagliazzo [Manuscript '93], who showed that the above holds for strong martingales, and in some settings allows efficient algorithms to exploit this gap. The second result is a new sampling algorithm that uses a differentially private mechanism to minimize the effect of data divergence. Joint work with Amos Beimel, Iftach Haitner and Eran Omri.
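The martingale gap statement can be roughly formalized as follows (my notation, not verbatim from the paper; the exact constants and the definition of "augmented" are in the work itself): for a suitable augmented weak martingale $X_0, \dots, X_n$,

```latex
\Pr\left[\ \exists\, i < n \ :\ |X_{i+1} - X_i| \ \ge\ \frac{|X_n - X_0|}{\sqrt{n}}\ \right] \ \ge\ \Omega(1).
```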
April 17, 2018
Dana Moshkovitz (UT Austin)
What Cannot Be Learned With Bounded Memory
How does computational learning change when one cannot store all the examples one sees in memory? This question has seen a burst of interest in the past couple of years, leading to the surprising theorem that there exist simple concepts (parities) that require an extraordinary amount of time to learn unless one has quite a lot of memory. In this work we show that in fact most concepts cannot be learned without sufficient memory. This subsumes the aforementioned theorem and implies similar results for other concepts of interest. The new results follow from a general combinatorial framework that we developed to prove lower bounds for space-bounded learning. Joint work with Michal Moshkovitz, Hebrew University.
April 24, 2018
Nir Bitansky (TAU)
On Round-Optimal Zero-Knowledge Protocols and Compressing Collisions
The round complexity of zero-knowledge protocols is a longstanding open question in cryptography. Its study over the past three decades has revealed connections to other fundamental concepts such as non-black-box reductions and succinct proof systems. In this talk, I will first discuss the history of the question and try to give you a sense of why it is challenging. In the second part, I will present a recent result that resolves the question under a new hardness assumption that generalizes the classical notion of collision-resistant hash functions. Roughly, the assumption asserts the existence of shrinking hash functions such that no polynomial-time algorithm, with advice of size k, can output much more than k colliding inputs. I will motivate this assumption, discuss some of its additional applications, and future directions. The talk is based on joint works with Zvika Brakerski, Yael Kalai, Omer Paneth and Vinod Vaikuntanathan. https://eprint.iacr.org/2017/488 https://eprint.iacr.org/2016/213 I will not assume any prior knowledge in cryptography.
May 1, 2018
Izhar Oppenheim (BGU)
New Constructions of Local Spectral High Dimensional Expanders
A simplicial complex X will be called a local spectral high dimensional expander if the one-skeletons of all the links of X have good spectral gaps (including the one-skeleton of X itself). This property is desirable because it leads to several global expansion phenomena. In my talk, I will explain and try to motivate this definition and then present an elementary construction of local spectral high dimensional expanders. The construction is elementary in the sense that describing it requires only basic matrix theory (previously known constructions were much less elementary). This talk is based on joint work with Tali Kaufman.
May 8, 2018
Leonard Schulman (Caltech)
Explicit Binary Tree Codes with Polylogarithmic Size Alphabet
This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. For every constant delta smaller than 1, we give an explicit binary tree code with distance delta and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). As part of the analysis, we prove a bound on the number of positive integer roots a real polynomial can have in terms of its sparsity with respect to the Newton basis, a result of independent interest. Joint work with G. Cohen and B. Haeupler.
May 15, 2018
Dor Minzer (TAU)
Hardness of 2-to-2 Games via Expansion on the Grassmann Graph
The PCP theorem is a vital tool in almost all hardness of approximation results. Despite considerable efforts, however, tight hardness results for some problems remain elusive via the PCP theorem and its accompanying set of techniques. If one is willing to assume stronger conjectures, the Unique-Games Conjecture can often be used to obtain improved hardness results. The Unique-Games problem asks, given a system of linear equations over a finite field in which each equation contains 2 variables, to distinguish between the case that it is nearly satisfiable (i.e., all but 1% of the equations can be satisfied) and the case in which almost no equations can be satisfied (i.e., at most 1%). The Unique-Games Conjecture, posed by Khot in 2002, postulates that this problem is NP-hard in the worst case. Since then it has become a central open problem in theoretical computer science. A recent line of study pursued a new attack on a closely related conjecture, called the 2-to-2 Games Conjecture. The approach is based on a mathematical object called the Grassmann Graph, and relies on the study of (edge) expansion properties on it. Recently this line of study was concluded, thereby proving the 2-to-2 Games Conjecture.
May 22, 2018
Shai Evra (HUJI)
Bounded Degree High Dimensional Expanders
In the recent theory of high dimensional expanders, the following open problem was raised by Gromov: Are there bounded degree high dimensional expanders? For the definition of high dimensional expanders, we follow the pioneers of this field and consider the notions of coboundary expanders (Linial-Meshulam) and topological expanders (Gromov). In a recent work, building on an earlier work of Kaufman-Kazhdan-Lubotzky in dimension 2, we were able to prove the existence of bounded degree expanders according to Gromov, in every dimension. In this talk I will present an outline of the proof of this result, focusing on a new local-to-global criterion of high-dimensional expansion. This is joint work with Tali Kaufman.
May 29, 2018
Roei Tell (Weizmann)
Proving that promise-BPP = promise-P Implies Super-Polynomial Circuit Lower Bounds for "Almost NP"
This talk is focused on an exciting new development in the line of work that studies the relationship between derandomization and circuit lower bounds. The new development concerns the "high-end" side of this relationship, relating extremely strong derandomization results to extremely strong circuit lower bounds. Specifically, the focus is the new result that polynomial-time derandomization of promise-BPP (i.e., any proof that prBPP = prP) implies super-polynomial circuit lower bounds for "almost NP" (i.e., implies that NTIME[n^{\omega(1)}] is not contained in P/poly); this is a dramatic improvement on a previous celebrated result of Impagliazzo, Kabanets, and Wigderson (2002). In the talk we will first survey the new landscape of the relationship between derandomization and circuit lower bounds that follows, and highlight a few very puzzling questions that this new landscape poses. We will (hopefully) see overviews of two proofs of the new result: one that relies on a circuit lower bound of Santhanam (2009), and another that follows from the recent results of Murray and Williams (2018). The talk is based on a paper of mine that is available online.
June 5, 2018
Irit Dinur (Weizmann)
Double Samplers and How to Get Them from High Dimensional Expanders
June 12, 2018
Dean Doron (TAU)
Near-Optimal Erasure List-Decodable Codes
For every small ε, we explicitly construct binary erasure list-decodable codes that can be list-decoded from a 1-ε fraction of erasures, with near-optimal list size poly(log(1/ε)) and near-optimal rate O(ε^(1+γ)), where γ > 0 is any constant. This is the first construction to break the ε^2 rate barrier, solving a longstanding open problem raised by Guruswami and Indyk, and it does so with a remarkably small list size (we remark that Guruswami showed such a result is impossible using linear codes, and indeed our code is not linear). Equivalently, in terms of dispersers, we construct ε-error one-bit strong dispersers with near-optimal seed length and near-optimal entropy loss. The main ingredient in the construction is a new (and nearly optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length n and tiny min-entropy O(log log n) and the other source has length O(log n) and arbitrarily small constant min-entropy rate.