[Image: Blackboard, Winslow Homer (1877)]

Theory of Computation Seminar

We meet every Tuesday at Schreiber 309.
A light lunch is served from 12:45; the talk starts at 13:10 and lasts 1-2 hours (with a 10-minute break).

The seminar also has a mailing list. To join or leave the list, fill out the form on this page, or contact Dean Doron, the seminar coordinator.

Schedule

Archive of Previous Talks

Spring 2019

  • March 5, 2019 Elette Boyle (IDC) Homomorphic Secret Sharing A homomorphic secret-sharing (HSS) scheme is a secret-sharing scheme that allows locally mapping shares of a secret x to compact shares of a function of x. The talk will survey the current state of the art on HSS, covering efficient constructions, applications in cryptography and complexity theory, and open questions. (A toy code sketch of the local-evaluation property appears after this list.)
  • March 12, 2019 Noga Ron-Zewi (Haifa University) Improved List Decoding of Algebraic Codes We show that Folded Reed-Solomon codes achieve list-decoding capacity with constant list sizes, independent of the block length. Prior work yielded list sizes that are polynomial in the block length, and relied on elaborate subspace-evasive machinery to reduce the list sizes to constant. We further show that multiplicity codes exhibit similar behavior, and use this to obtain capacity-achieving locally list-decodable codes with query complexity significantly lower than was known before. (The list-decoding capacity benchmark is recalled after this list.) Based on joint work with Swastik Kopparty, Shubhangi Saraf, and Mary Wootters. Website: https://sites.google.com/site/nogaronzewi1/
  • March 19, 2019 Eylon Yogev (Technion) The Power of Distributed Verifiers in Interactive Proofs We explore the power of interactive proofs with a distributed verifier. In this setting, the verifier consists of n nodes and a graph G that defines their communication pattern. The prover is a single entity that communicates with all nodes by short messages. The goal is to verify that the graph G belongs to some language in a small number of rounds and with a small communication bound, i.e., a small proof size. In this work, we provide a new general framework for distributed interactive proofs that allows one to translate standard interactive protocols (i.e., with a centralized verifier) to ones where the verifier is distributed, with a proof size that depends on the computational complexity of the verification algorithm run by the centralized verifier. As our main result, we show that every (centralized) computation performed in time O(n) on a RAM can be translated into a three-round distributed interactive protocol with O(log n) proof size. This implies that many graph problems for sparse graphs have succinct proofs (e.g., testing planarity). We extend this compiler to handle languages that can be computed in small space or by small-depth circuits as well. We further demonstrate the power of our compiler for specific problems such as Graph Non-Isomorphism, Clique, and Leader Election (for the latter two problems we get O(1) proof size). Joint work with Merav Parter and Moni Naor. Website: https://www.eylonyogev.com
  • March 26, 2019 Or Meir (Haifa University) The KRW Conjecture: Results and Open Problems Proving super-logarithmic lower bounds on the depth of circuits is one of the main frontiers of circuit complexity. In 1991, Karchmer, Raz and Wigderson observed that we could resolve this question by proving the following conjecture: given two Boolean functions f, g, the depth complexity of their composition is about the sum of their individual depth complexities (a formal statement appears after this list). While we cannot prove the conjecture yet, there has been some exciting progress toward such a proof, some of it in the last few years. In this talk, I will survey the known results and propose future directions for research on the KRW conjecture. In particular, I will present some open problems which I believe are both tractable and important for making further progress on the conjecture. Website: http://cs.haifa.ac.il/~ormeir/
  • April 2, 2019 Alex Bredariol Grilo (CWI and QuSoft) Stoquastic PCP vs. Randomness The derandomization of MA, the probabilistic version of NP, is a long-standing open question. In this work, we connect this problem to a variant of another major problem: the quantum PCP conjecture. Our connection goes through the surprising quantum characterization of MA by Bravyi and Terhal. They proved the MA-completeness of the problem of deciding whether the ground energy of a uniform stoquastic local Hamiltonian is zero or inverse polynomial. We show that the gapped version of this problem, i.e., deciding if a given uniform stoquastic local Hamiltonian is frustration-free or has energy at least some constant ϵ, is in NP. Thus, if there exists a gap-amplification procedure for uniform stoquastic local Hamiltonians (in analogy to the gap-amplification procedure for constraint satisfaction problems in the original PCP theorem), then MA = NP (and vice versa). Furthermore, if this gap-amplification procedure exhibits some additional (natural) properties, then P = RP. We feel this work opens up a rich set of new directions to explore, which might lead to progress on both quantum PCP and derandomization. As a small side result, we also show that deciding if a commuting stoquastic Hamiltonian is frustration-free is in NP. Joint work with Dorit Aharonov.
  • April 30, 2019 Guy Rothblum (Weizmann Institute) Gentle Measurement of Quantum States and Differential Privacy We prove a new connection between gentle measurement (where one wants to measure n quantum states in a way that damages the states only a little) and differential privacy (where one wants to query a database about n users in a way that reveals only a little about any individual user; the formal definition is recalled after this list). The connection is bidirectional, though with a loss of parameters in going from DP to gentle measurement. Exploiting this connection, we present a new algorithm for approximating the outcomes of many measurements on a collection of quantum states, a task called "shadow tomography". The new algorithm has the advantages of being online (the measurements can be chosen adaptively) and gentle. Joint work with Scott Aaronson.
  • May 7, 2019 Yannai A. Gonczarowski (Tel Aviv University) The Menu-Size of Approximately Optimal Auctions We consider a monopolist who wishes to sell n goods to a single additive buyer, where the buyer's valuations for the goods are drawn according to independent distributions. It is well known that, unlike in the single-item case, the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries, that is, offering the buyer a choice between a continuum of lottery tickets. It is also known that simple auctions with a finite bounded number of menu entries (lotteries for the buyer to choose from) can extract a constant fraction of the optimal revenue, and that for bounded distributions it is possible to extract an arbitrarily high fraction of the optimal revenue via a finite bounded menu size. Nonetheless, the question of whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size when the valuation distributions possibly have unbounded support (or via a finite bounded menu size when the support of the distributions is bounded by an unknown bound) had remained open since the seminal paper of Hart and Nisan (2013), as had the question of any lower bound on the menu size that suffices for extracting an arbitrarily high fraction of the optimal revenue when selling a fixed number of goods, even for two goods and even for i.i.d. bounded distributions. In this talk, we resolve both of these questions. We first show that for every n and every ε>0, there exists a menu-size bound C=C(n,ε) such that auctions of menu size at most C suffice for extracting a (1-ε) fraction of the optimal revenue from any valuation distributions, and give an upper bound on C(n,ε), even when the valuation distributions are unbounded and nonidentical. We then give two lower bounds, which hold even for bounded i.i.d. distributions: one on the dependence on n when ε=1/n and n grows large, and the other on the dependence on ε when n is fixed and ε grows small. Finally, we apply these upper and lower bounds to derive results regarding the deterministic communication complexity required to run an auction that achieves such an approximation. Based upon: * The Menu-Size Complexity of Revenue Approximation, joint work with Moshe Babaioff and Noam Nisan, STOC 2017 * Bounding the Menu-Size of Approximately Optimal Auctions via Optimal-Transport Duality, STOC 2018
  • May 14, 2019 Orr Paradise (Weizmann Institute) Smooth and Strong PCPs Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and any incorrect claim is rejected with high probability regardless of the given alleged proof. But what about *incorrect proofs* of *correct claims*? In a *strong* PCP, an alleged proof is rejected with probability proportional to its distance from a correct proof. For applications to hardness of approximation, it is useful to have strong PCPs in which each bit in the proof is equally likely to be queried. This natural property is called *smoothness*. (Hedged formalizations of both properties appear after this list.) Recently, Dinur, Gur and Goldreich (ITCS, 2019) showed that any NP language has strong PCPs of polynomial length. However, their construction is inherently non-smooth. We construct polynomial-length PCPs for NP that are simultaneously smooth and strong. This is achieved by refining the proof of the PCP theorem of Arora and Safra (JACM, 1998) and Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), and enhancing it with new tools. In this talk, I will introduce smooth and strong PCPs, and explain the main result and some of the ideas in its proof. I will also discuss its applications to hardness of approximation. No prior knowledge of PCPs is assumed. Based on "Smooth and Strong PCPs", Orr Paradise, 2019. https://eccc.weizmann.ac.il/report/2019/023/
  • May 21, 2019 Adi Livnat (Haifa University) The role of sex in evolution and the nature of mutation The computational worldview is relevant to multiple fields of investigation outside of computer science, including evolutionary theory. Evolution is a creative process that generates organisms as well as behavioral programs, which can be thought of as highly sophisticated pieces of natural machinery and algorithms. But how does evolution work? For nearly a century it has been thought that genetic change occurs by accident as a disruption to the genome; that each change can be thought of as beneficial or deleterious largely in and of itself; and that natural selection favors beneficial accidental changes. However, this framework has left open multiple fundamental problems. For example, what is the role of the sexual shuffling of the genes in evolution? How can we understand the many puzzling modern findings on the empirical nature of mutation? In this talk, I will first give a brief overview of a series of papers in collaboration with computer scientists Christos Papadimitriou, Umesh Vazirani, Nick Pippenger and Erick Chastain, and evolutionary biologists Marc Feldman, Liudmyla Vasylenko and Jonathan Dushoff, examining the problem of the role of sex from a new angle. I will show that sex favors genes that perform well across a wide variety of different genetic combinations, and that this insight leads to connections between evolutionary theory, machine learning, and game theory. Second, new empirical evidence shows that mutation is a complex event brought about by biological machinery. I will offer a new conceptual framework with which to interpret this evidence, which extends the relevance of the algorithmic lens for our understanding of nature.
  • May 28, 2019 Eliad Tsfadia (Tel Aviv University) A Tight Parallel-Repetition Theorem for Random-Terminating Interactive Arguments Soundness amplification is a central problem in the study of interactive protocols. While the "natural" parallel-repetition transformation is known to reduce the soundness error of some special cases of interactive arguments, namely three-message protocols and public-coin protocols, it fails to do so in the general case. The only known round-preserving approach that applies to the general case of interactive arguments is Haitner's "random-terminating" transform [FOCS '09, SICOMP '13]. Roughly speaking, a protocol π is first transformed into a slightly modified protocol π̃, referred to as the random-terminating variant of π, and then parallel repetition is applied. Haitner's analysis shows that the parallel repetition of π̃ does reduce the soundness error, but only at a weak exponential rate: if π has m rounds and soundness error 1-ε, then π̃^k, the k-fold parallel repetition of π̃, has soundness error (1-ε)^(εk/m^4). Since the security of many cryptographic protocols (e.g., binding) depends on the soundness of a related interactive argument, improving the above analysis is a key challenge in the study of cryptographic protocols. In this work we introduce a different analysis of Haitner's method, proving that parallel repetition of random-terminating protocols reduces the soundness error at a much stronger exponential rate: the soundness error of π̃^k is (1-ε)^(k/m), only an m factor from the optimal rate of (1-ε)^k achievable in public-coin and three-message protocols (a numeric comparison of these bounds appears after this list). We prove the tightness of our analysis by presenting a matching protocol. Joint work with Itay Berman and Iftach Haitner.
  • June 4, 2019 Shir Peleg (Tel Aviv University) Sylvester-Gallai type theorem for quadratic polynomials One formulation of the famous Sylvester-Gallai theorem states that if a finite set L of linear forms satisfies that for every two forms L1, L2 in L there is a subset I of L, such that L1, L2 are not in I and whenever L1 and L2 vanish the product of the forms in I also vanishes, then the linear span of L has dimension at most 3. This version of the Sylvester-Gallai theorem was used to devise deterministic algorithms for testing depth-3 polynomial identities. In this work we prove a similar statement in which the functions under consideration are quadratic polynomials. Specifically, we prove that if a finite set Q of irreducible quadratic polynomials satisfies that for every two polynomials Q1, Q2 in Q there is a subset I of Q, such that Q1, Q2 are not in I and whenever Q1 and Q2 vanish the product of the polynomials in I also vanishes, then the linear span of the polynomials in Q has dimension O(1). This extends an earlier result by [Shp18]. (The classical point-line statement and a formalization of the vanishing condition appear after this list.) This result brings us one step closer to obtaining a deterministic polynomial-time algorithm for the polynomial identity testing question of certain depth-4 identities. This is joint work with Amir Shpilka.
  • June 4, 2019 Karthik C.S. (Weizmann Institute) Inapproximability of Clustering in L_p metrics The two most popular objectives optimized by clustering algorithms are k-means and k-median. The k-means (resp. k-median) problem in the L_p metric is specified by n input points, and the goal is to partition the input point set into k clusters such that the k-means (resp. k-median) objective is minimized (both objectives are written out after this list). The best-known inapproximability factors in the literature for these NP-hard problems in L_p metrics were well below 1.01. In this talk, we take a significant step toward improving the hardness of approximating these problems in various L_p metrics. Our techniques are inspired by the tools and perspectives developed in fine-grained complexity in the last couple of years. The talk is based on joint work with Vincent Cohen-Addad.
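
For the March 5 talk (Boyle, Homomorphic Secret Sharing): the following is a minimal Python sketch of the local-evaluation property, using plain two-party additive secret sharing over a prime field, which is homomorphic for public linear functions. It is a toy illustration only; the parameters and function names are ours, and it is not any of the constructions surveyed in the talk.

```python
import random

P = 2**61 - 1  # prime modulus; a toy parameter, not from the talk

def share(x):
    """Split secret x into two additive shares over Z_P."""
    s0 = random.randrange(P)
    s1 = (x - s0) % P
    return s0, s1

def local_eval(s, a, b_part):
    """Each party locally applies the public linear map f(x) = a*x + b,
    carrying its own additive portion of the constant b."""
    return (a * s + b_part) % P

def reconstruct(y0, y1):
    """Recombine the locally evaluated shares."""
    return (y0 + y1) % P

# Shares of x = 42 are mapped locally to shares of f(x) = 3x + 10:
a, b = 3, 10
s0, s1 = share(42)
y0 = local_eval(s0, a, b // 2)          # party 0, no communication
y1 = local_eval(s1, a, b - b // 2)      # party 1, no communication
assert reconstruct(y0, y1) == a * 42 + b
```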
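
For the March 12 talk (Ron-Zewi, list decoding): the benchmark the result attains is the list-decoding capacity theorem. A standard statement, recalled here for context rather than quoted from the talk:

```latex
% For every rate $R \in (0,1)$ and every $\varepsilon > 0$, there exist codes
% of rate $R$ that are list-decodable from a $1 - R - \varepsilon$ fraction
% of errors with list size $O(1/\varepsilon)$; conversely, decoding beyond
% radius $1 - R$ requires super-polynomial list sizes.
\[
  \rho \;=\; 1 - R - \varepsilon,
  \qquad
  \text{list size} \;\le\; O\!\left(\tfrac{1}{\varepsilon}\right).
\]
```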
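
For the March 26 talk (Meir, KRW): writing D(·) for depth complexity, the composition and the conjecture described in the abstract can be stated as follows.

```latex
% Block composition of f : {0,1}^m -> {0,1} with g : {0,1}^n -> {0,1}:
\[
  (f \diamond g)(x_1, \ldots, x_m) \;=\; f\bigl(g(x_1), \ldots, g(x_m)\bigr),
  \qquad x_i \in \{0,1\}^n .
\]
% The KRW conjecture: composition is essentially additive in depth,
\[
  \mathsf{D}(f \diamond g) \;\approx\; \mathsf{D}(f) + \mathsf{D}(g).
\]
% Iterating composition under this conjecture yields an explicit function with
% super-logarithmic depth complexity, which would separate P from NC^1.
```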
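
For the April 30 talk (Rothblum, gentle measurement and DP): the standard definition of differential privacy referenced in the abstract, recalled for convenience.

```latex
% A randomized algorithm M is (\varepsilon, \delta)-differentially private if
% for every pair of databases D, D' differing in one user's data, and every
% set S of outputs:
\[
  \Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon} \cdot \Pr[\,M(D') \in S\,]
  \;+\; \delta .
\]
```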
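
For the May 14 talk (Paradise, smooth and strong PCPs): one natural way to formalize the two properties in the abstract, for a verifier V querying an alleged proof π, with Π(x) the set of correct proofs for input x. The phrasing is ours, not quoted from the paper.

```latex
% Strong: rejection probability scales with the distance to a correct proof,
% where \delta denotes relative Hamming distance:
\[
  \Pr[\,V^{\pi}(x) \text{ rejects}\,] \;\ge\; c \cdot
  \min_{\pi^{*} \in \Pi(x)} \delta(\pi, \pi^{*})
  \qquad \text{for some constant } c > 0 .
\]
% Smooth: every location of the proof is equally likely to be queried:
\[
  \Pr[\,V \text{ queries location } i\,] \text{ is identical for all } i .
\]
```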
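
For the May 28 talk (Tsfadia, parallel repetition): a back-of-the-envelope comparison of the three rates, with illustrative numbers of our choosing, not taken from the paper.

```latex
% Take m = 10 rounds and soundness error 1 - \varepsilon = 1/2, and ask how
% many repetitions k drive the soundness error below 2^{-80}:
\[
  \text{Haitner: } (1-\varepsilon)^{\varepsilon k/m^4} = 2^{-k/20000}
  \;\Rightarrow\; k \ge 1.6 \times 10^{6},
\]
\[
  \text{new bound: } (1-\varepsilon)^{k/m} = 2^{-k/10}
  \;\Rightarrow\; k \ge 800,
  \qquad
  \text{optimal: } (1-\varepsilon)^{k} = 2^{-k}
  \;\Rightarrow\; k \ge 80 .
\]
```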
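
For the June 4 talk (Peleg, Sylvester-Gallai): the classical theorem and a formalization of the vanishing condition in the abstract, writing Z(·) for the zero set.

```latex
% Classical Sylvester--Gallai: a finite set of points in the plane, not all
% collinear, has a line passing through exactly two of them. Equivalently:
% if every line through two of the points contains a third, then all the
% points are collinear.
%
% The abstract's condition on a finite set \mathcal{L} of linear forms:
\[
  \forall L_1, L_2 \in \mathcal{L}
  \;\; \exists\, I \subseteq \mathcal{L} \setminus \{L_1, L_2\} :
  \quad
  Z(L_1) \cap Z(L_2) \;\subseteq\; Z\Bigl(\prod_{L \in I} L\Bigr),
\]
% with the global conclusion \dim(\operatorname{span}\mathcal{L}) \le 3;
% the talk proves the analogous O(1) bound for irreducible quadratics.
```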
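
For the June 4 talk (Karthik C.S., clustering): one common way of writing the two objectives, for input points x_1, ..., x_n and centers c_1, ..., c_k in the L_p metric.

```latex
\[
  \text{k-means:} \quad
  \min_{c_1, \ldots, c_k} \; \sum_{j=1}^{n} \; \min_{1 \le i \le k}
  \, \lVert x_j - c_i \rVert_p^{2} ,
\]
\[
  \text{k-median:} \quad
  \min_{c_1, \ldots, c_k} \; \sum_{j=1}^{n} \; \min_{1 \le i \le k}
  \, \lVert x_j - c_i \rVert_p .
\]
```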