Theory of Computation Seminar
We meet every Tuesday at Schreiber 309.
Light lunch is served from 12:45, and the talk starts at 13:10
and lasts 1-2 hours (with a 10-minute break).
The seminar also has a mailing list. In order to join or leave the list, fill the form on this page, or contact Dean Doron, the seminar coordinator.
Schedule
Archive of Previous Talks
Winter 2016-2017

January 24, 2017
Nathan Keller (BIU)
A structure theorem for biased Boolean functions with a small total influence
The influence of the k-th coordinate on a Boolean function f:{0,1}^n -> {0,1} is the probability that flipping x_k changes the value f(x). The total influence I(f) is the sum of the influences of all coordinates. The classical `Junta Theorem' of Friedgut (1998) asserts that if I(f) <= K, then f can be \epsilon-approximated by a function depending on only exp(O(K/\epsilon)) coordinates (a `junta'). The theorem found numerous applications in combinatorics and theoretical computer science.
For a biased function with E[f]=\mu, the edge isoperimetric inequality on the cube implies that I(f) >= 2\mu \log(1/\mu).
Kahn and Kalai (2006) asked, in the spirit of the Junta theorem, whether any f such that I(f) is within a constant factor of the minimum can be (\epsilon \mu)-approximated by a DNF of `small' size (i.e., a union of a small number of subcubes). We answer the question in the affirmative, proving the following structure theorem: if I(f) is within a constant factor of 2\mu \log(1/\mu), then f can be (\epsilon \mu)-approximated by a DNF of small size.
Joint work with David Ellis and Noam Lifshitz.
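The definitions above are easy to check by brute force on small examples (the code below is our own illustration, not from the talk; the log in the isoperimetric bound is taken base 2 here). Note that the subcube AND_3 meets the bound with equality:

```python
from itertools import product

def influences(f, n):
    """Per-coordinate influences and total influence I(f) of f: {0,1}^n -> {0,1}."""
    inf = [0.0] * n
    for x in product((0, 1), repeat=n):
        for k in range(n):
            y = list(x)
            y[k] ^= 1                      # flip the k-th coordinate
            if f(x) != f(tuple(y)):
                inf[k] += 1.0 / 2 ** n
    return inf, sum(inf)

AND = lambda x: int(all(x))
_, total = influences(AND, 3)
# AND_3 is a subcube with mu = 1/8; its total influence 0.75 meets the
# edge-isoperimetric bound 2 * mu * log2(1/mu) = 2 * (1/8) * 3 = 0.75 exactly.
print(total)   # 0.75
```

For contrast, parity on 3 bits has total influence 3: every flip changes its value.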
January 17, 2017
Ofer Grossman (MIT)
Bipartite Perfect Matching in Pseudo-Deterministic NC
Pseudo-deterministic algorithms are randomized search algorithms that, on different executions on the same input, output the same solution with high probability.
We will discuss how pseudo-deterministic algorithms bridge the gap between randomized search and decision problems for problems in P and in NC. Next, we will show a pseudo-deterministic NC algorithm for bipartite matching. Finally, we will show how pseudo-determinism can be used to save on the random bits used by classical randomized algorithms, and apply the method to obtain an algorithm for RNC depth-first search using only O(log^2 n) random bits. This is joint work with Shafi Goldwasser. 
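The notion can be seen in a toy search problem (our own illustration: the polynomial, field size, and helper names are not from the talk). A randomized procedure tests nonzeroness; fixing coordinates one at a time against that test yields a canonical answer, so different runs agree with high probability:

```python
import random

F = 10 ** 9 + 7   # a large prime field: Schwartz-Zippel keeps test errors tiny

def is_nonzero(p, n, trials=40):
    """One-sided randomized test: does the n-variate low-degree black-box
    polynomial p evaluate to nonzero somewhere over the field?"""
    return any(p([random.randrange(F) for _ in range(n)]) % F != 0
               for _ in range(trials))

def canonical_nonzero_point(p, n):
    """Pseudo-deterministic search for a Boolean nonzero point of a nonzero
    multilinear p: fix coordinates left to right, preferring 0, keeping the
    restriction nonzero. All randomness lives inside is_nonzero, so with
    high probability every execution returns the SAME point."""
    point = []
    for i in range(n):
        for v in (0, 1):
            restricted = (lambda rest, v=v, pre=tuple(point):
                          p(list(pre) + [v] + rest))
            if is_nonzero(restricted, n - i - 1):
                point.append(v)
                break
    return point

# p = x0 * (1 - x1): the canonical Boolean nonzero point (1, 0) is found every run.
print(canonical_nonzero_point(lambda x: x[0] * (1 - x[1]), 2))   # [1, 0]
```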
January 10, 2017
Ron Rothblum (MIT)
How to Prove the Correctness of Computations
Efficient proof verification is at the heart of the study of computation. Seminal results such as the IP=PSPACE Theorem [LFKN92,Shamir92] and the PCP theorem [AS92,ALMSS92] show that even highly complicated statements can be verified extremely efficiently.
We study the complexity of proving statements using interactive protocols. Specifically, what statements can be proved by a polynomial-time prover to a super-efficient verifier? Our main results show that these proof systems are remarkably powerful: it is possible to prove the correctness of general computations such that (1) the prover runs in polynomial time, (2) the verifier runs in linear time (and in some settings in sublinear time) and (3) the interaction consists of only a constant number of communication rounds (and in some settings just a single round).
These proof systems are motivated by, and have applications to, delegating computations in a cloud computing environment and guaranteeing that they were performed correctly.
In the first part of the talk I will give a high level overview of these results, and in the second part we will discuss the technical details of one of these works in depth. 
December 27, 2016
Gil Cohen (Princeton)
Correlation Breakers, Independence-Preserving Mergers, and their Applications
Pivotal to recent exciting progress in randomness extractors is a pair of new pseudorandom objects: correlation breakers and independence-preserving mergers. In this talk, we discuss these objects, their constructions, and applications.

December 20, 2016
Shachar Lovett (UC San Diego)
Robust sensitivity
The sensitivity conjecture is a famous open problem in the theory of Boolean functions. Let f be a Boolean function defined on the hypercube. The sensitivity of a node x is the number of its neighbours in the hypercube on which f takes the opposite value to the one it takes on x. The sensitivity conjecture speculates that if all nodes have low sensitivity, then the function f must be simple; concretely, that all its Fourier mass is supported on levels of low Hamming weight.
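For intuition, sensitivity is easy to compute by brute force on small functions (the examples and helper names below are our own illustration):

```python
from itertools import product

def sensitivity(f, x):
    """Number of hypercube neighbours of x on which f disagrees with f(x)."""
    return sum(f(tuple(b ^ (j == k) for j, b in enumerate(x))) != f(x)
               for k in range(len(x)))

def max_sensitivity(f, n):
    """Maximum sensitivity of f over all nodes of the n-dimensional hypercube."""
    return max(sensitivity(f, x) for x in product((0, 1), repeat=n))

OR = lambda x: int(any(x))
MAJ3 = lambda x: int(sum(x) >= 2)
print(max_sensitivity(OR, 4))    # 4 -- attained at the all-zeros point
print(max_sensitivity(MAJ3, 3))  # 2 -- no point is sensitive to all 3 flips
```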
Recently, Gopalan et al. [CCC 2016] conjectured a robust analogue of the sensitivity conjecture: if most of the nodes have low sensitivity, then most of the Fourier mass is supported on levels of low Hamming weight. They also proved it under the stronger assumption that all nodes have low sensitivity. In this work, we prove this conjecture, with near-tight quantitative bounds.
Joint work with Avishay Tal (IAS) and Jiapeng Zhang (UCSD). 
December 20, 2016
Scott Aaronson (UT Austin)
Complexity-Theoretic Foundations of Quantum Supremacy Experiments
In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. I'll discuss general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for *some* task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible.
First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Martinis group at Google. We show that there's a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work (for example, on BosonSampling), the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. In an attempt to refute our hardness assumption, we also give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and depth d in polynomial space and d^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Other, more abstract results deal with the SampBPP versus SampBQP question: for example, whether one can separate these classes relative to an oracle in P/poly, and whether one can separate them under the sole assumption that the polynomial hierarchy is infinite.
Joint work with Lijie Chen (Tsinghua University). Coming soon to an ECCC and arXiv near you. 
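The depth-halving recursion behind such a Savitch-style simulation can be illustrated on a toy circuit (our own sketch: layers are given as dense matrices, and the one-qubit example is ours, not the paper's algorithm):

```python
# Savitch-style amplitude computation: to evaluate <y|C|x> for a circuit C
# split into layers, halve the depth and sum over the intermediate state z.
# Recursion depth is log d with a 2^n-way sum at each level, giving roughly
# d^O(n) time while storing only one root-to-leaf path: polynomial space.

def amp(layers, x, y, lo, hi):
    """Amplitude <y| L_{hi-1} ... L_{lo} |x>, with layers as dim x dim matrices."""
    if hi - lo == 1:
        return layers[lo][y][x]          # row y, column x of a single layer
    mid = (lo + hi) // 2
    dim = len(layers[lo])
    return sum(amp(layers, x, z, lo, mid) * amp(layers, z, y, mid, hi)
               for z in range(dim))

# Example: two Hadamard layers on one qubit compose to the identity.
s = 2 ** -0.5
H = [[s, s], [s, -s]]
print(amp([H, H], 0, 0, 0, 2))   # ~1.0
print(amp([H, H], 0, 1, 0, 2))   # ~0.0
```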
December 13, 2016
Dana Moshkovitz (UT Austin)
Candidate Hard Unique Game
We propose a candidate reduction for ruling out polynomial-time algorithms for unique games, either under plausible complexity assumptions, or unconditionally for Lasserre semidefinite programs with a constant number of rounds.
We analyze the completeness and Lasserre solution of our construction, and provide a soundness analysis in a certain setting of interest. The analysis goes through a new theorem on leakage resilience in two-prover games. Addressing general settings is tightly connected to a question on Gaussian isoperimetry. 
December 06, 2016
Amir Shpilka (TAU)
Proof complexity lower bounds from algebraic circuit complexity
Proof complexity studies the complexity of mathematical proofs, with the aim of exhibiting (true) statements whose proofs are always necessarily long. One well-known proof system is Hilbert's Nullstellensatz, which shows that if a family F = {f_1, ..., f_m} of n-variate polynomials has no common solution to the system f_1 = ... = f_m = 0, then there is a proof of this fact of the following form: there are polynomials G = {g_1, ..., g_m} such that f_1 g_1 + ... + f_m g_m = 1 is an identity. From the perspective of computer science, it is most natural to ask how succinctly one can express the proof G.
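As a toy illustration (our own instance, not from the talk): the polynomials f_1 = xy, f_2 = x(1-y), f_3 = 1-x have no common zero, and the constant polynomials g_1 = g_2 = g_3 = 1 certify this, since f_1 g_1 + f_2 g_2 + f_3 g_3 = 1 identically:

```python
import random

fs = [lambda x, y: x * y,          # f1
      lambda x, y: x * (1 - y),    # f2
      lambda x, y: 1 - x]          # f3
gs = [lambda x, y: 1] * 3          # the Nullstellensatz certificate G

# xy + x(1-y) + (1-x) = 1 as a polynomial identity; spot-check at random points.
for _ in range(100):
    x, y = random.randrange(10 ** 6), random.randrange(10 ** 6)
    assert sum(f(x, y) * g(x, y) for f, g in zip(fs, gs)) == 1
print("certificate verified")
```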
Substantial work on the Nullstellensatz system has measured the complexity of G in terms of their degree or sparsity, and obtained the desired lower bounds for these measures. Grochow and Pitassi have recently argued that it is natural to measure the complexity of G by the size needed to express them as algebraic circuits. They called the resulting system the Ideal Proof System (IPS), and showed that it captures the power of well-known strong proof systems such as the Frege proof system, as well as showing that certain natural lower bounds for the size of IPS proofs would imply VP ≠ VNP, an algebraic analogue of P ≠ NP. This is in contrast to other proof systems, where direct ties to computational lower bounds are often lacking.
Motivated by their work, we study the IPS proof system further. We first show that weak subsystems of IPS can be quite powerful. We then consider lower bounds for subclasses of IPS. We obtain certain extensions of existing lower bound techniques, obtaining "functional lower bounds" as well as "lower bounds for multiples". Using these extensions, we show that variants of the subset-sum axiom require super-polynomially large proofs of their unsatisfiability when the algebraic circuits are measured as read-once algebraic branching programs, sums of powers of low-degree polynomials, or multilinear formulas. No specific knowledge of proof complexity or algebraic circuit complexity will be assumed.
Joint work with Michael Forbes, Iddo Tzameret, and Avi Wigderson. 
November 22, 2016
Rohit Gurjar (TAU)
Linear matroid intersection is in quasi-NC
Given two matroids on the same ground set, the matroid intersection problem asks to find a common independent set of maximum size. We show that the linear matroid intersection problem is in quasi-NC^2.
That is, it has uniform circuits of quasi-polynomial size n^{O(\log n)} and O(\log^2 n) depth.
This generalizes the analogous result for the bipartite perfect matching problem.
We do this by an almost complete derandomization of the Isolation lemma for matroid intersection.
Our result also implies a black-box singularity test for symbolic matrices of the form
A_0+A_1 z_1 +A_2 z_2+ ... +A_m z_m, where the matrices A_1,A_2,\dots,A_m are of rank 1. 
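The Isolation Lemma used above states that, for any family of subsets of an n-element ground set, independent uniform weights from {1, ..., 2n} make the minimum-weight set unique with probability at least 1/2. A quick empirical check (the family and names are our own illustration):

```python
import random
from itertools import combinations

def isolating(weights, family):
    """True if the weight assignment yields a unique minimum-weight set."""
    totals = [sum(weights[e] for e in S) for S in family]
    return totals.count(min(totals)) == 1

n = 6
family = list(combinations(range(n), 3))   # all 3-element subsets of {0..5}
trials = 2000
hits = sum(isolating([random.randrange(1, 2 * n + 1) for _ in range(n)], family)
           for _ in range(trials))
print(hits / trials)   # the lemma guarantees >= 1/2; empirically much higher
```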
November 15, 2016
Dean Doron (TAU)
Explicit two-source extractors for near-logarithmic min-entropy
In this talk, we show an explicit construction of extractors for two independent sources of near-logarithmic min-entropy. Previous constructions required either polylog(n) min-entropy or more than two sources.
The result extends the breakthrough result of Chattopadhyay and Zuckerman and also uses non-malleable extractors. The main new ingredient is a somewhere-random condenser with a small entropy gap, used as a sampler.
Our construction can be seen as an efficient reduction to constructing non-malleable extractors, so using very recent constructions of non-malleable extractors (by Cohen and Li), we will see how to obtain an explicit two-source extractor for O(log n loglog n) min-entropy and constant error. 
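For contrast, the classical inner-product two-source extractor of Chor and Goldreich needs the two min-entropies to sum to more than n, far from the near-logarithmic regime of this talk (the toy flat sources below are our own illustration):

```python
import random

def ip_extract(x, y):
    """Inner product mod 2: the classical Chor-Goldreich two-source extractor."""
    return sum(a & b for a, b in zip(x, y)) & 1

n = 16
def sample():
    # Two independent flat sources with min-entropy 12 each, so k1 + k2 > n:
    # the first 4 bits of x and the last 4 bits of y are fixed to zero.
    x = [0] * 4 + [random.randrange(2) for _ in range(n - 4)]
    y = [random.randrange(2) for _ in range(n - 4)] + [0] * 4
    return ip_extract(x, y)

ones = sum(sample() for _ in range(20000))
print(ones / 20000)   # close to 1/2: the extracted bit is nearly unbiased
```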
November 08, 2016
Dor Minzer (TAU)
Grassmann Graphs, Vertex Cover and 2-to-2 Games
In this talk we discuss recent progress on hardness of approximation of the Vertex-Cover problem and $2$-to-$2$ games (a variant of Khot's well-known Unique-Games Conjecture).
We present a candidate reduction from the $3$-Lin problem to the $2$-to-$2$ Games problem and present a hypothesis regarding Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction.
In particular, our reduction implies that, assuming the hypothesis, it is NP-hard to distinguish between the case where an $n$-vertex graph has an independent set of size roughly $(1 - 1/\sqrt{2})n$, and the case where every independent set has size $o(n)$. Consequently, it is NP-hard to approximate the Vertex Cover problem within a factor of $\sqrt{2} - o(1)$.
The talk is mostly based on joint work with Subhash Khot and Muli Safra;
nevertheless, we will also mention results from a more recent extension, which is joint work with Irit Dinur, Subhash Khot, Guy Kindler and Muli Safra. 
November 01, 2016
Benny Applebaum (TAU)
Algebraic Attacks against Random Local Functions and Their Countermeasures
Suppose that you have $n$ truly random bits $x=(x_1,\ldots,x_n)$ and you wish to use them to generate $m\gg n$ pseudorandom bits $y=(y_1,\ldots, y_m)$ using a local mapping, i.e., each $y_i$ should depend on at most $d=O(1)$ bits of $x$. In the polynomial regime of $m=n^s$, $s>1$, the only known solution, originating from (Goldreich, ECCC 2000), is based on \emph{Random Local Functions}: compute $y_i$ by applying some fixed (public) $d$-ary predicate $P$ to a random (public) tuple of distinct inputs $(x_{i_1},\ldots,x_{i_d})$. In this talk, we will try to understand, for any value of $s$, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate.
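A sketch of such a generator (our own illustration; the particular predicate, x1+x2+x3+x4*x5 mod 2, is one much-studied choice, often attributed to Mossel, Shpilka and Trevisan, and all names here are ours):

```python
import random

def random_local_function(n, m, d=5, seed=1):
    """Goldreich-style generator: output bit i applies a fixed d-ary predicate
    to a public random d-tuple of the n input bits."""
    rng = random.Random(seed)                          # the tuples are public
    tuples = [rng.sample(range(n), d) for _ in range(m)]
    P = lambda b: b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])   # fixed 5-ary predicate
    return lambda x: [P([x[i] for i in t]) for t in tuples]

n, s = 64, 2
G = random_local_function(n, n ** s)   # stretch n random bits to n^2 output bits
y = G([random.randrange(2) for _ in range(n)])
print(len(y))   # 4096
```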
Time permitting, I will describe a recent construction of Pseudorandom Functions with low complexity based on Goldreich's function.
Based on joint works with Lovett (STOC 2016, ECCC TR15-172) and Raykov (TCC 2016-B, ECCC TR16-082).