Blackboard, Winslow Homer (1877)

Theory of Computation Seminar

We meet every Tuesday in Schreiber 309.
A light lunch is served from 12:45, and the talk starts at 13:10 and lasts 1-2 hours (with a 10-minute break).

The seminar also has a mailing list. To join or leave the list, fill out the form on this page, or contact Noam Parzanchevski, the seminar coordinator.

Schedule

Archive of Previous Talks

Winter 2019-2020

  • October 29, 2019 Ori Sberlo (TAU) On the performance of Reed-Muller codes under random erasures and random errors. The Reed-Muller code is constructed by taking the evaluation vectors of bounded-degree multivariate polynomials over GF(2). Despite being intensively studied for over half a century, many of its properties still remain a mystery. A classic setting in coding theory is that of random erasures and random errors, in which every bit is randomly and independently flipped/erased with some probability. Understanding the behavior of RM codes in these two natural settings is still a major open problem for a wide range of parameters. This is also intimately related to the weight distribution of RM codes (the weight distribution is the function that, given a positive integer W, outputs the number of codewords with exactly W 1's; a toy computation appears after this list). I will present several results regarding these two problems and explain the key ideas. Joint work with Amir Shpilka.
  • November 5, 2019 Roni Con (TAU) Explicit and Efficient Constructions of Coding Schemes for the Binary Deletion Channel. In the binary deletion channel with parameter $p$ (BDC$_p$) every bit is deleted independently with probability $p$ (a minimal simulation of the channel appears after this list). Mitzenmacher and Drinea proved the existence of a code of rate $(1-p)/9$ for the BDC$_p$, yet their construction is not explicit. Guruswami and Li gave an explicit construction of a code of rate $(1-p)/120$ for the same channel. In this talk we shall describe an explicit family of codes of rate $(1-p)/16$, for every $p$, that has efficient encoding and decoding schemes, taking us closer to the non-explicit result of Mitzenmacher and Drinea. Joint work with Amir Shpilka.
  • November 19, 2019 Ronen Shaltiel (Haifa University) Quasilinear time list-decodable codes for space bounded channels. We consider codes for space-bounded channels. This is a model for communication under noise that was studied by Guruswami and Smith (J. ACM 2016) and lies between the Shannon (random) and Hamming (adversarial) models. In this model, a channel is a space-bounded procedure that reads the codeword in one pass and modifies at most a $p$ fraction of its bits (a toy instance of such a channel appears after this list). Guruswami and Smith, and later work by Shaltiel and Silbak (RANDOM 2016), gave constructions of list-decodable codes with rate approaching $1-H(p)$ against channels with space $s=c \log n$, with encoding/decoding time $\mathrm{poly}(2^s)=\mathrm{poly}(n^c)$. In this paper, we construct codes that handle channels of space $s=n^{\Omega(1)}$ at the same rate, with encoding and decoding algorithms that run in quasilinear time. Along the way, we construct linear codes (for Hamming channels) which are efficiently list-decodable from a fraction of errors approaching half, and have large dual distance of $n^{\Omega(1)}$. In the talk I will survey past work on different kinds of channels and some of the general ideas that go into handling computationally bounded channels. This is joint work with Swastik Kopparty and Jad Silbak.
  • November 26, 2019 Klim Efremenko (Ben Gurion University) Interactive Coding Over the Noisy Broadcast Channel. A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability, independently for each recipient (a one-round simulation appears after this list). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only $O(n \log \log n)$ rounds. The problem was resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager's protocol is essentially optimal. We revisit the above noisy broadcast problem and show that $O(n)$ rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players receives an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with only a constant blowup of the communication.
  • December 3, 2019 Or Meir (Haifa University) Query-to-Communication Lifting Using Low-Discrepancy Gadgets. Lifting theorems are theorems that relate the query complexity of a function $f:\{0,1\}^n \to \{0,1\}$ to the communication complexity of the composed function $f \circ g^n$, for some “gadget” $g:\{0,1\}^b \times \{0,1\}^b \to \{0,1\}$ (the composition is spelled out in a sketch after this list). Such theorems allow transferring lower bounds from query complexity to communication complexity, and have seen numerous applications in recent years. In addition, such theorems can be viewed as a strong generalization of a direct-sum theorem for the gadget g. We prove a new lifting theorem that works for all gadgets g that have logarithmic length and exponentially small discrepancy, for both deterministic and randomized communication complexity. Thus, we significantly increase the range of gadgets for which such lifting theorems hold. Our result has two main motivations: First, allowing a larger variety of gadgets may support more applications. In particular, our work is the first to prove a randomized lifting theorem for logarithmic-size gadgets, thus improving some applications of the theorem. Second, our result can be seen as a strong generalization of a direct-sum theorem for functions with low discrepancy. Joint work with Arkadev Chattopadhyay, Yuval Filmus, Sajin Koroth, and Toniann Pitassi.
  • December 10, 2019 Roei Tell (Weizmann) On Exponential-Time Hypotheses, Derandomization, and Circuit Lower Bounds. The Exponential-Time Hypothesis (ETH) and its various extensions (rETH, NETH, MAETH, #ETH...) have been widely influential across complexity theory. In this work we study their connections to derandomization and to circuit lower bounds. We prove a sequence of results, each of independent interest for long-standing questions, which together demonstrate that the connections between ETHs, derandomization, and circuit lower bounds are remarkably strong.
  • December 17, 2019 Amir Yehudayoff (Technion) Anti-concentration in most directions. We shall discuss anti-concentration properties of the inner product between two independent random vectors in Euclidean space (a small Monte Carlo illustration appears after this list). Based on work with Anup Rao.
  • December 24, 2019 Uma Girish (Princeton) Quantum versus Randomized Communication Complexity, with Efficient Players. We study a new type of separation between quantum and classical communication complexity, a separation obtained using quantum protocols in which all parties are efficient, in the sense that they can be implemented by small quantum circuits with oracle access to their inputs. More precisely, we give an explicit partial Boolean function that can be computed in the quantum-simultaneous-with-entanglement model of communication, whereas every interactive randomized protocol has exponentially larger cost. Furthermore, all the parties in the quantum protocol can be implemented by quantum circuits of small size with black-box access to the inputs. Our result qualitatively matches the strongest known separation between quantum and classical communication complexity and is obtained using a quantum protocol where all parties are efficient. Our proof technique is new in the context of communication complexity and is based on techniques from the recent oracle separation of BQP and PH.
  • January 7, 2020 Zachi Tamo (TAU) Combinatorial list-decoding of Reed-Solomon codes beyond the Johnson radius. List-decoding of Reed-Solomon (RS) codes beyond the so-called Johnson radius (recalled after this list) has been one of the main open questions since the work of Guruswami and Sudan. It is now known, by the work of Rudra and Wootters using techniques from high-dimensional probability, that over large enough alphabets most RS codes are indeed list-decodable beyond this radius. In this paper we take a more combinatorial approach which allows us to determine the precise relation (up to the exact constant) between the decoding radius and the list size. We prove a generalized Singleton bound for a given list size, and conjecture that the bound is tight for most RS codes over large enough finite fields. We also show that the conjecture holds true for list sizes 2 and 3, and as a by-product show that most RS codes with a rate of at least 1/9 are list-decodable beyond the Johnson radius. Lastly, we give the first explicit construction of such RS codes. The main tools used in the proof are a new type of linear dependency between codewords of a code that are contained in a small Hamming ball, and the notion of cycle space from graph theory. Joint work with Chong Shangguan.
  • January 14, 2020 Gil Cohen (TAU) Pseudo-random pseudo-distributions. In this talk, we will discuss a new type of pseudo-random object called a "pseudo-random pseudo-distribution". This object was introduced in the context of the BPL vs. L problem, and I will sketch a space-efficient construction of such an object for read-once branching programs that has near-optimal dependence on the error parameter. The talk is a distillation of joint work with Mark Braverman and Sumegha Garg (the paper is available online: https://eccc.weizmann.ac.il/report/2017/161/).
  • January 16, 2020 Jonathan Shafer (Berkeley) Learning vs. Verifying. This talk will address the following question: In what cases does learning a good hypothesis require more resources than verifying a hypothesis proposed by someone else?** If verification is significantly cheaper than learning, that could have important practical implications for delegation of machine learning tasks in commercial settings, and might also have consequences for verification of scientific publications, and for AI safety. Two results will be discussed: (1) There exists a learning problem where verification requires quadratically fewer random samples than learning. (2) The broad class of Fourier-sparse functions (which includes decision trees, for example) can be efficiently verified using random samples only, even though it is widely believed to be impossible to efficiently learn this class from random samples alone. * Jonathan is a PhD student at UC Berkeley. This is joint work with Shafi Goldwasser (UC Berkeley), Guy Rothblum (WIS), and Amir Yehudayoff (Technion). ** "Learning" -- as defined in Valiant's Probably Approximately Correct (PAC) framework.
  • January 21, 2020 Benny Applebaum (TAU) Sampling Graphs without Forbidden Subgraphs and Unbalanced Expanders with Negligible Error. Suppose that you wish to sample a random graph $G$ over $n$ vertices and $m$ edges conditioned on the event that $G$ does not contain a ``small'' $t$-size graph $H$ (e.g., a clique) as a subgraph. Assuming that most such graphs are $H$-free, the problem can be solved by a simple rejection-sampling algorithm (that tests for $t$-cliques; sketched after this list) with an expected running time of $n^{O(t)}$. Is it possible to solve the problem in running time that does not grow polynomially with $n^t$? In this paper, we introduce the general problem of sampling a ``random looking'' graph $G$ with a given edge density that avoids some arbitrary predefined $t$-size subgraph $H$. As our main result, we show that the problem is solvable with respect to some specially crafted $k$-wise independent distribution over graphs. That is, we design a sampling algorithm for $k$-wise independent graphs that supports efficient testing for subgraph-freeness in time $f(t)\cdot n^c$, where $f$ is a function of $t$ and the constant $c$ in the exponent is independent of $t$. Our solution extends to the case where both $G$ and $H$ are $d$-uniform hypergraphs. We use these algorithms to obtain the first probabilistic construction of constant-degree polynomially-unbalanced expander graphs whose failure probability is \emph{negligible} in $n$ (i.e., $n^{-\omega(1)}$). In particular, given constants $d>c$, we output a bipartite graph that has $n$ left nodes and $n^c$ right nodes with right-degree $d$, so that any right set of size at most $n^{\Omega(1)}$ expands by a factor of $\Omega(d)$. This result is extended to the setting of unique expansion as well. We observe that such a negligible-error construction can be employed in many useful settings, and present applications in coding theory (batch codes and LDPC codes), pseudorandomness (low-bias generators and randomness extractors) and cryptography. Notably, we show that our constructions yield a collection of polynomial-stretch locally-computable cryptographic pseudorandom generators based on Goldreich's one-wayness assumption, resolving a central open problem in parallel cryptography (cf. Applebaum-Ishai-Kushilevitz, FOCS 2004; and Ishai-Kushilevitz-Ostrovsky-Sahai, STOC 2008). Joint work with Eliran Kachlon.
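
Illustrative Sketches

For the October 29 talk: a minimal brute-force computation of the weight distribution of the Reed-Muller code RM(r, m), directly from the definition in the abstract (evaluation vectors of degree-at-most-r multivariate polynomials over GF(2)). This is an editorial toy, feasible only for tiny parameters; the function name and interface are illustrative.

```python
from itertools import combinations, product
from collections import Counter

def rm_weight_distribution(r, m):
    """Weight distribution of RM(r, m) by brute-force enumeration."""
    points = list(product([0, 1], repeat=m))       # all 2^m evaluation points
    # Multilinear monomials of degree <= r, each identified by its variable set.
    monomials = [s for d in range(r + 1) for s in combinations(range(m), d)]
    dist = Counter()
    # One codeword per GF(2) coefficient vector over the monomials.
    for coeffs in product([0, 1], repeat=len(monomials)):
        word = [sum(c for c, s in zip(coeffs, monomials)
                    if all(x[i] for i in s)) % 2
                for x in points]
        dist[sum(word)] += 1                       # weight = number of 1's
    return dict(dist)

print(rm_weight_distribution(1, 3))  # {0: 1, 4: 14, 8: 1} for RM(1,3)
```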
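For the November 5 talk: the channel model itself in a few lines. Each bit is deleted independently with probability $p$, and the receiver sees only the surviving bits, with no indication of where the deletions occurred; this loss of synchronization is what makes the channel hard. Names are illustrative.

```python
import random

def bdc(bits, p, rng=random):
    """Binary deletion channel: each bit survives independently w.p. 1 - p."""
    return [b for b in bits if rng.random() >= p]

print(bdc([1, 0, 1, 1, 0, 0, 1, 0], p=0.25))  # e.g. [1, 0, 1, 0, 0, 1]
```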
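For the November 19 talk: a toy instance of the model only, not a channel from the paper. A space-bounded channel is any one-pass procedure with $s$ bits of state that modifies at most a $p$ fraction of the codeword; the example below spends its entire state on a single flip counter.

```python
def space_bounded_channel(codeword, p):
    """One-pass channel whose entire state is an O(log n)-bit flip counter."""
    budget = int(p * len(codeword))     # may modify at most a p fraction
    out = []
    for bit in codeword:                # reads the codeword in one pass
        if budget > 0:
            out.append(1 - bit)         # flip while budget remains
            budget -= 1
        else:
            out.append(bit)             # pass through unchanged
    return out

print(space_bounded_channel([0] * 12, p=0.25))  # first 3 bits flipped
```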
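For the November 26 talk: one round of the noisy broadcast channel. The broadcast bit is flipped independently with a fixed constant probability for each recipient; names and the noise parameter are illustrative.

```python
import random

def noisy_broadcast_round(bit, n_receivers, eps, rng=random):
    """Each receiver independently gets the bit flipped with probability eps."""
    return [bit ^ (rng.random() < eps) for _ in range(n_receivers)]

print(noisy_broadcast_round(1, n_receivers=6, eps=0.1))  # e.g. [1, 1, 0, 1, 1, 1]
```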
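For the December 3 talk: the composed function $f \circ g^n$ made concrete. Alice holds $x_1,\dots,x_n$, Bob holds $y_1,\dots,y_n$, and together they must compute $f(g(x_1,y_1),\dots,g(x_n,y_n))$. The inner-product gadget below is our illustrative choice of a gadget with exponentially small discrepancy; the theorem in the talk applies to all such gadgets of logarithmic length.

```python
def ip_gadget(x, y):
    """g(x, y) = <x, y> mod 2 on b-bit blocks (a low-discrepancy gadget)."""
    return sum(a & b for a, b in zip(x, y)) % 2

def composed(f, xs, ys):
    """(f o g^n): apply the gadget blockwise, then the outer function f."""
    return f([ip_gadget(x, y) for x, y in zip(xs, ys)])

xor = lambda bits: sum(bits) % 2  # example outer function f
print(composed(xor, [[1, 0, 1], [0, 1, 1]], [[1, 1, 0], [1, 0, 1]]))  # 0
```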
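For the December 17 talk: a quick Monte Carlo illustration of what anti-concentration in a direction means. For a random unit vector $u$ and a uniform $x \in \{-1,1\}^n$, we estimate the largest probability mass that the inner product $\langle u, x \rangle$ places on any short interval; all parameters are illustrative.

```python
import math, random
from collections import Counter

def max_interval_mass(n, eps=0.05, trials=20000, rng=random):
    """Estimate the largest mass <u, x> puts on any eps-length interval."""
    g = [rng.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(v * v for v in g))
    u = [v / norm for v in g]                 # one random direction
    bins = Counter()
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        bins[round(sum(a * b for a, b in zip(u, x)) / eps)] += 1
    return max(bins.values()) / trials

print(max_interval_mass(n=50))  # small for "most" directions u
```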
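For the January 7 talk: the two radii in the abstract, stated for a Reed-Solomon code of rate $R$. These are standard statements recalled for orientation; the exact form of the bound proved in the paper may differ.

```latex
% Johnson radius of a rate-R RS code (relative distance 1 - R):
% list-decoding with polynomial list size is guaranteed up to
\[
  \rho_{\mathrm{Johnson}} = 1 - \sqrt{R}.
\]
% Generalized Singleton bound for list size L; for L = 1 it recovers
% the unique-decoding radius (1 - R)/2:
\[
  \rho \le \frac{L}{L+1}\,(1 - R).
\]
```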
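For the January 21 talk: the baseline rejection sampler that the abstract starts from, for the special case where the forbidden subgraph $H$ is a $t$-clique. The exhaustive test over all $t$-subsets is exactly what drives the $n^{O(t)}$ expected running time; names and parameters are illustrative.

```python
import itertools, random

def sample_clique_free(n, m, t, rng=random):
    """Rejection sampling: redraw G(n, m) until it contains no t-clique."""
    all_pairs = list(itertools.combinations(range(n), 2))
    while True:
        edges = set(rng.sample(all_pairs, m))
        # Exhaustive test over all t-subsets: the n^{O(t)} bottleneck.
        if not any(all(pair in edges
                       for pair in itertools.combinations(s, 2))
                   for s in itertools.combinations(range(n), t)):
            return edges

print(sorted(sample_clique_free(n=8, m=10, t=3)))  # a triangle-free graph
```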