[Image: Blackboard, Winslow Homer (1877)]

Theory of Computation Seminar

We meet every Tuesday at Schreiber 309.
A light lunch is served from 12:45; the talk starts at 13:10 and lasts 1-2 hours (with a 10-minute break).

The seminar also has a mailing list. To join or leave the list, fill out the form on this page or contact Dean Doron, the seminar coordinator.

Schedule

Archive of Previous Talks

Spring 2018

  • June 12, 2018 Dean Doron (TAU) Near-Optimal Erasure List-Decodable Codes For every small ε, we explicitly construct binary erasure list-decodable codes that can be list-decoded from a 1-ε fraction of erasures, with near-optimal list size poly(log(1/ε)) and near-optimal rate ε^(1+γ), where γ>0 is an arbitrarily small constant. This is the first construction to break the ε^2 rate barrier, solving a long-standing open problem raised by Guruswami and Indyk, and it does so with a remarkably small list size (we remark that Guruswami showed such a result is impossible using linear codes, and indeed our code is not linear). Equivalently, in terms of dispersers, we construct ε-error one-bit strong dispersers with near-optimal seed length and near-optimal entropy loss. The main ingredient in the construction is a new (and nearly optimal) unbalanced two-source extractor. The extractor extracts one bit with constant error from two independent sources, where one source has length n and tiny min-entropy O(loglog n), and the other source has length O(log n) and arbitrarily small constant min-entropy rate.
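    A minimal Python sketch of the erasure list-decoding definition used above (the toy code and brute-force decoder are illustrative only; the talk's construction is explicit and far more efficient):

      def erasure_list_decode(code, received):
          # Return every codeword consistent with the unerased positions;
          # erased coordinates are marked None. List decoding from a 1-eps
          # fraction of erasures means this list stays small even when
          # almost all coordinates are erased.
          return [c for c in code
                  if all(r is None or r == b for r, b in zip(received, c))]

      # Toy code {000, 111} with two of three coordinates erased:
      print(erasure_list_decode([(0, 0, 0), (1, 1, 1)], (None, 0, None)))
      # -> [(0, 0, 0)]: list size 1 despite a 2/3 erasure fraction.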
  • June 5, 2018 Irit Dinur (Weizmann) Double samplers and how to get them from high dimensional expanders
  • May 29, 2018 Roei Tell (Weizmann) Proving that promiseBPP=promiseP implies superpolynomial circuit lower bounds for "almost NP" This talk is focused on an exciting new development in the line of work that studies the relationship between derandomization and circuit lower bounds. The new development concerns the "high-end" side of this relationship, relating extremely strong derandomization results to extremely strong circuit lower bounds. Specifically, the focus is the new result that polynomial-time derandomization of promiseBPP (i.e., any proof that prBPP=prP) implies super-polynomial circuit lower bounds for "almost NP" (i.e., implies that NTIME[n^(ω(1))] is not contained in P/poly); this is a dramatic improvement on a previous celebrated result of Impagliazzo, Kabanets, and Wigderson (2002). In the talk we will first survey the new landscape of the relationship between derandomization and circuit lower bounds that follows, and highlight a few very puzzling questions that this new landscape poses. We will (hopefully) see overviews of two proofs of the new result: one that relies on a circuit lower bound of Santhanam (2009), and another that follows from the recent results of Murray and Williams (2018). The talk is based on a paper of mine that is available online.
  • May 22, 2018 Shai Evra (HUJI) Bounded degree high dimensional expanders In the recent theory of high dimensional expanders, the following open problem was raised by Gromov: are there bounded degree high dimensional expanders? For the definition of high dimensional expanders, we follow the pioneers of this field and consider the notions of coboundary expanders (Linial-Meshulam) and topological expanders (Gromov). In a recent work, building on an earlier work of Kaufman-Kazhdan-Lubotzky in dimension 2, we were able to prove the existence of bounded degree topological expanders, in the sense of Gromov, in every dimension. In this talk I will present an outline of the proof of this result, focusing on a new local-to-global criterion for high-dimensional expansion. This is a joint work with Tali Kaufman.
  • May 15, 2018 Dor Minzer (TAU) Hardness of 2-to-2 Games via Expansion on the Grassmann Graph The PCP theorem is a vital tool in almost all hardness of approximation results. Despite considerable efforts, however, tight hardness results for some problems remain elusive via the PCP theorem and its accompanying body of techniques. If one is willing to assume stronger conjectures, the Unique-Games Conjecture can often be used to obtain improved hardness results. The Unique-Games problem asks, given a system of linear equations over a finite field in which each equation contains 2 variables, to distinguish between the case that it is nearly satisfiable (i.e., all but 1% of the equations can be satisfied) and the case in which almost no equations can be satisfied (i.e., at most 1%). The Unique-Games Conjecture, posed by Khot in 2002, postulates that this problem is NP-hard. Since then it has become a central open problem in theoretical computer science. A recent line of study pursued a new attack on a closely related conjecture, called the 2-to-2 Games Conjecture. The approach is based on a mathematical object called the Grassmann graph, and relies on the study of (edge) expansion properties on it. Recently this line of study was concluded, thereby proving the 2-to-2 Games Conjecture.
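    To make the Unique-Games promise problem concrete, here is a brute-force Python sketch (the toy instance and all names are mine) that computes the maximal satisfiable fraction of a system of two-variable linear equations modulo q:

      from itertools import product

      def ug_value(n, q, equations):
          # equations: list of (i, j, c) encoding x_i - x_j = c (mod q).
          # Exhaustive search over all assignments -- exponential in n, so
          # this only illustrates the definition, not an algorithm.
          best = 0
          for x in product(range(q), repeat=n):
              sat = sum((x[i] - x[j]) % q == c for i, j, c in equations)
              best = max(best, sat)
          return best / len(equations)

      # An inconsistent cycle mod 2: at most 2 of the 3 equations can hold.
      print(ug_value(3, 2, [(0, 1, 1), (1, 2, 1), (2, 0, 1)]))  # 0.666...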
  • May 8, 2018 Leonard Schulman (Caltech) Explicit Binary Tree Codes with Polylogarithmic Size Alphabet This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. For every constant δ < 1 we give an explicit binary tree code with distance δ and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n). As part of the analysis, we prove a bound on the number of positive integer roots a real polynomial can have in terms of its sparsity with respect to the Newton basis---a result of independent interest. Joint work with G. Cohen and B. Haeupler.
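    The tree-code distance property can be stated as a small checker; this Python sketch (the toy encoding and parameters are mine) computes the distance of an online encoding by brute force and shows why the trivial encoding fails:

      from itertools import product

      def tree_code_distance(encode, n):
          # For every pair of messages first differing at position i, measure
          # the fraction of coordinates in [i, n) where the encodings differ;
          # the tree-code distance is the worst such fraction.
          worst = 1.0
          for u in product((0, 1), repeat=n):
              for v in product((0, 1), repeat=n):
                  if u == v:
                      continue
                  i = next(k for k in range(n) if u[k] != v[k])
                  eu, ev = encode(u), encode(v)
                  diff = sum(eu[k] != ev[k] for k in range(i, n))
                  worst = min(worst, diff / (n - i))
          return worst

      # The identity encoding is online, but its distance decays like 1/n --
      # exactly why constant-distance tree codes are nontrivial to build.
      print(tree_code_distance(lambda u: u, 4))  # 0.25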
  • May 1, 2018 Izhar Oppenheim (BGU) New Constructions of local spectral high dimensional expanders A simplicial complex X will be called a local spectral high dimensional expander if the one-skeletons of all the links of X have good spectral gaps (including the one-skeleton of X itself). This property is desirable because it leads to several global expansion phenomena. In my talk, I will explain and try to motivate this definition and then present an elementary construction of local spectral high dimensional expanders. The construction is elementary in the sense that describing it requires only basic matrix theory (earlier constructions were known, but they are much less elementary). This talk is based on a joint work with Tali Kaufman.
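    The local spectral condition is easy to check numerically on small examples; a tiny numpy sketch (the example complex and names are mine) computing the second eigenvalue of a one-skeleton:

      import numpy as np

      def second_eigenvalue(adj):
          # Second-largest eigenvalue of the random-walk matrix D^{-1}A;
          # the further below 1, the better the spectral gap / expansion.
          walk = adj / adj.sum(axis=1)[:, None]
          return np.sort(np.linalg.eigvals(walk).real)[::-1][1]

      # One-skeleton of the boundary of a tetrahedron: the complete graph K4.
      k4 = np.ones((4, 4)) - np.eye(4)
      print(second_eigenvalue(k4))  # -1/3, i.e. spectral gap 4/3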
  • April 24, 2018 Nir Bitansky (TAU) On Round-Optimal Zero-Knowledge Protocols and Compressing Collisions The round complexity of zero-knowledge protocols is a long-standing open question in cryptography. Its study over the past three decades has revealed connections to other fundamental concepts such as non-black-box reductions and succinct proof systems. In this talk, I will first discuss the history of the question and try to give you a sense of why it is challenging. In the second part, I will present a recent result that resolves the question under a new hardness assumption that generalizes the classical notion of collision-resistant hash functions. Roughly, the assumption asserts the existence of shrinking hash functions such that no polynomial-time algorithm, with advice of size k, can output much more than k colliding inputs. I will motivate this assumption and discuss some of its additional applications and future directions. The talk is based on joint works with Zvika Brakerski, Yael Kalai, Omer Paneth, and Vinod Vaikuntanathan: https://eprint.iacr.org/2017/488 and https://eprint.iacr.org/2016/213. I will not assume any prior knowledge in cryptography.
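    Roughly, the assumption concerns the following game; the toy hash and brute-force multi-collision finder below (both hypothetical, for illustration only) show what "outputting many colliding inputs" means:

      from collections import defaultdict
      from itertools import product

      def largest_collision_set(h, n):
          # Brute force: bucket all n-bit inputs by hash value and return the
          # largest bucket. The assumption says (roughly) that an efficient
          # algorithm with k bits of advice cannot output much more than k
          # colliding inputs; brute force, of course, takes exponential time.
          buckets = defaultdict(list)
          for x in product((0, 1), repeat=n):
              buckets[h(x)].append(x)
          return max(buckets.values(), key=len)

      toy_hash = lambda x: sum(x) % 4          # a toy shrinking hash
      print(len(largest_collision_set(toy_hash, 8)))  # 72 colliding inputs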
  • April 17, 2018 Dana Moshkovitz (UT Austin) What Cannot Be Learned With Bounded Memory How does computational learning change when one cannot store all the examples one sees in memory? This question has seen a burst of interest in the past couple of years, leading to the surprising theorem that there exist simple concepts (parities) that require an extraordinary amount of time to learn unless one has quite a lot of memory. In this work we show that in fact most concepts cannot be learned without sufficient memory. This subsumes the aforementioned theorem and implies similar results for other concepts of interest. The new results follow from a general combinatorial framework that we developed to prove lower bounds for space bounded learning. Joint work with Michal Moshkovitz, Hebrew University.
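    For contrast with the lower bounds, here is the memory-hungry upper bound for the parity example: a Python sketch (all names mine) that learns an unknown parity by Gaussian elimination over GF(2), storing about n examples, i.e., Ω(n^2) bits of memory:

      import random

      def learn_parity(examples, n):
          # examples: pairs (x, b) with b = <secret, x> mod 2. Row-reduce the
          # augmented system over GF(2), then back-substitute.
          pivots = {}
          for x, b in examples:
              row = list(x) + [b]
              for col in range(n):
                  if row[col] == 0:
                      continue
                  if col in pivots:
                      row = [a ^ p for a, p in zip(row, pivots[col])]
                  else:
                      pivots[col] = row
                      break
          secret = [0] * n
          for col in sorted(pivots, reverse=True):
              rhs = pivots[col][n]
              for j in range(col + 1, n):
                  rhs ^= pivots[col][j] & secret[j]
              secret[col] = rhs
          return secret

      n = 16
      secret = [random.randint(0, 1) for _ in range(n)]
      examples = []
      for _ in range(n + 20):
          x = [random.randint(0, 1) for _ in range(n)]
          examples.append((x, sum(a & s for a, s in zip(x, secret)) % 2))
      print(learn_parity(examples, n) == secret)  # True w.h.p. (full rank)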
  • April 10, 2018 Nikolaos Makriyannis (TAU) Tighter Bounds on Multi-Party Coin Flipping via Augmented Weak Martingales and Differentially Private Sampling In his seminal work, Cleve [STOC '86] proved that any $r$-round coin-flipping protocol can be efficiently biased by $\Theta(1/r)$. This lower bound was met for the two-party case by Moran, Naor, and Segev [Journal of Cryptology '16], and for the three-party case (up to a polylog factor) by Haitner and Tsfadia [SICOMP '17], and was approached for $n$-party protocols when $n < \log\log r$. For $n > \log\log r$, however, the best bias for $n$-party coin-flipping protocols remains $O(n/\sqrt{r})$, achieved by the majority protocol of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript '85]. In this talk, we will present a tighter lower bound on the bias of coin-flipping protocols, showing that, for every constant $\varepsilon > 0$, an $r^{\varepsilon}$-party $r$-round coin-flipping protocol can be efficiently biased by $\tilde{\Omega}(1/\sqrt{r})$. As far as we know, this is the first improvement of Cleve's bound, and it is only a multiplicative factor of $n = r^{\varepsilon}$ away from the aforementioned upper bound of Awerbuch et al. We prove the above bound using two new results that we believe are of independent interest. The first result is that a sequence of ("augmented") weak martingales has a large gap: with constant probability there exist two adjacent variables whose gap is at least the gap between the first and last variables divided by the square root of the number of variables. This generalizes a result of Cleve and Impagliazzo [Manuscript '93], who showed that the above holds for strong martingales, and in some settings allows efficient algorithms to exploit this gap. The second result is a new sampling algorithm that uses a differentially private mechanism to minimize the effect of data divergence. Joint work with Amos Beimel, Iftach Haitner, and Eran Omri.
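    A quick Monte Carlo illustration of the martingale-gap statement (the choice of a ±1 random walk is mine; the talk's result covers general augmented weak martingales):

      import random

      def gap_event_frequency(r, trials=10_000):
          # For a +-1 random-walk martingale X_0..X_r, estimate how often
          # some adjacent pair satisfies |X_{i+1} - X_i| >= |X_r - X_0| / sqrt(r).
          hits = 0
          for _ in range(trials):
              x = [0.0]
              for _ in range(r):
                  x.append(x[-1] + random.choice((-1.0, 1.0)))
              total = abs(x[-1] - x[0])
              max_adjacent = max(abs(x[i + 1] - x[i]) for i in range(r))
              hits += max_adjacent >= total / r ** 0.5
          return hits / trials

      print(gap_event_frequency(400))  # a constant bounded away from 0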
  • March 27, 2018 Daniel Grier (MIT) New Hardness Results for the Permanent Using Linear Optics One of the great accomplishments in complexity theory was Valiant's 1979 proof that the permanent of a matrix is #P-hard to compute. Subsequent work simplified Valiant's ideas and even began to recast them as problems in quantum computing. In 2011, this culminated in a striking proof by Aaronson, based solely on quantum linear optics, of the #P-hardness of the permanent. Although this simplified (at least for physicists) aspects of Valiant's proof by off-loading its difficulty onto central and well-known theorems in linear optics, the question remained: what else was gained by converting Valiant's combinatorial proof into a linear optical one? In this talk I'll give one possible answer to this question--namely, that these quantum techniques are useful for proving hardness of permanents of matrices that obey classical Lie group structure. In particular, I will prove that computing the permanent of real orthogonal matrices is #P-hard. The hardness result translates to permanents of orthogonal matrices over finite fields of any characteristic other than 2 or 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan. The talk will begin with an overview of linear optics and the extremely nice correspondence between linear optical circuits and matrix permanents. Along the way, we will reconstruct Aaronson's proof, and then show how to modify it to obtain our results. Some knowledge of quantum computing will be useful, but not strictly required. No prior knowledge of linear optics is necessary.
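    The characteristic-2 degeneracy mentioned above is easy to see computationally; this naive, exponential-time Python sketch (names mine) checks that perm(A) ≡ det(A) mod 2:

      from itertools import permutations

      def perm(A):
          # Permanent: the determinant's permutation sum without the signs.
          n = len(A)
          total = 0
          for sigma in permutations(range(n)):
              p = 1
              for i in range(n):
                  p *= A[i][sigma[i]]
              total += p
          return total

      def det(A):
          # Determinant via the signed permutation sum (sign = parity of
          # inversions); for illustration only -- Gaussian elimination is
          # the efficient route.
          n = len(A)
          total = 0
          for sigma in permutations(range(n)):
              inversions = sum(sigma[i] > sigma[j]
                               for i in range(n) for j in range(i + 1, n))
              p = (-1) ** inversions
              for i in range(n):
                  p *= A[i][sigma[i]]
              total += p
          return total

      A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
      # Over GF(2) the signs vanish, so the permanent equals the determinant
      # and is easy to compute -- hence no hardness in characteristic 2.
      print(perm(A) % 2 == det(A) % 2)  # True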
  • March 20, 2018 William Hoza (UT Austin) Typically-Correct Derandomization for Small Time and Space Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. An immediate corollary is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also discuss additional complexity-theoretic applications of our technique.
  • March 13, 2018 Karthik C.S. (Weizmann) A Framework for Parameterized Hardness of Approximation In this talk we will see a framework to show inapproximability of parameterized problems. This framework generalizes the 'Distributed PCPs' framework recently introduced by Abboud et al. [FOCS'17]. By applying the gadget reductions given by Chalermsook et al. [FOCS'17] to this framework, we settle the inapproximability of parameterized dominating set under various time hypotheses. Joint work with Bundit Laekhanukit and Pasin Manurangsi. Preprint available at https://eccc.weizmann.ac.il/report/2017/186/
  • March 6, 2018 Jad Silbak (TAU) Computational Two-Party Correlation We prove a dichotomy theorem for two-party protocols, and show that for every poly-time two-party protocol with single-bit output, at least one of the following holds:
    • The protocol can be used to construct a key-agreement protocol.
    • For every constant ρ > 0, the parties' output is ρ-uncorrelated: let (X, Y, T) denote the parties' outputs and the protocol's transcript, respectively. A protocol is ρ-uncorrelated if there exists an efficient "decorrelator" algorithm Decor that, given a random transcript T, produces two numbers p_A, p_B such that no efficient algorithm can distinguish (U_{p_A}, U_{p_B}, T) from (X, Y, T) with distinguishing advantage larger than ρ (where U_p denotes a biased coin with bias p).
    Namely, if the protocol cannot be used to construct key agreement, then its output distribution (X, Y, T) is trivial: it can be simulated non-interactively by the parties given public randomness (used to sample T). (The precise statement also has qualifiers of the form "on infinitely many choices of the security parameter".) We use the above characterization to prove that an (α = 24ε^2)-correct ε-differentially private symmetric protocol for computing XOR implies the existence of a key-agreement protocol. This dependency between α and ε is tight, since a Θ(ε^2)-correct ε-differentially private protocol for computing XOR is known to exist unconditionally (see the randomized-response sketch below). It also improves, in the (ε, α)-dependency aspect, upon Goyal et al. [ICALP '16], who showed that, for some constant c > 0, a c-correct ε-differentially private protocol for computing XOR implies oblivious transfer. Our result extends to a weaker notion of differential privacy in which privacy is only required to hold against an external observer. Interestingly, the reductions used to prove the above results are non-black-box. Joint work with Iftach Haitner, Eran Omri, Kobbi Nissim, and Ronen Shaltiel.
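    The unconditional protocol referred to above is, up to constants, plain randomized response; a minimal Python sketch (parameter choices mine) showing both the ε-differential privacy of each message and the Θ(ε^2) correctness advantage:

      import math, random

      def randomized_response(bit, eps):
          # Send the true bit with probability e^eps / (1 + e^eps); each
          # message on its own is eps-differentially private.
          keep = math.exp(eps) / (1 + math.exp(eps))
          return bit if random.random() < keep else 1 - bit

      def xor_protocol(x, y, eps):
          # Symmetric protocol: both parties publish a randomized response,
          # and the common output is the XOR of the two noisy bits.
          return randomized_response(x, eps) ^ randomized_response(y, eps)

      def advantage(eps, trials=200_000):
          correct = 0
          for _ in range(trials):
              x, y = random.randint(0, 1), random.randint(0, 1)
              correct += xor_protocol(x, y, eps) == x ^ y
          return correct / trials - 0.5  # scales like eps^2 for small eps

      print(advantage(0.5))  # roughly 0.03, i.e. Theta(eps^2)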