Theory of Computation Seminar
We meet every Tuesday at Schreiber 309.
Light lunch is served from 12:45, and the talk starts at 13:10
and lasts 1-2 hours (with a 10-minute break).
The seminar also has a mailing list. To join or leave the list, fill out the form on this page, or contact Dean Doron, the seminar coordinator.
Schedule
Archive of Previous Talks
Winter 2017-2018
February 6, 2018
Christos Papadimitriou (Columbia)
A computer scientist thinks about the Brain
Understanding the ways in which the Brain gives rise to the Mind (memory, behavior, cognition, intelligence, language) is the most challenging problem facing science today. As the answer seems likely to be at least partly computational, computer scientists should work on this problem, except that there is no obvious place to start. I shall recount recent work on a model for the formation and association of memories in humans, and reflect on why it may provide clues about the way we make language.

January 16, 2018
Omri Weinstein (Columbia)
Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds
We prove the first super-logarithmic lower bounds on the cell probe complexity of dynamic *boolean* (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds.
We introduce a new approach and use it to prove a ~log^{1.5}(n) lower bound on the operational time of a wide collection of boolean data structure problems, most notably on the query time of dynamic range counting over F_2 ([Patrascu07]). Proving an omega(log n) lower bound for this problem was explicitly posed as one of five important open problems in the late Mihai Patrascu's obituary [Tho13]. This result also implies the first omega(log n) lower bound for the classical 2D range counting problem, one of the most basic data structure problems in computational geometry and spatial databases. Our technical centerpiece is a new way of "weakly" simulating dynamic data structures using efficient one-way communication protocols with a small advantage over random guessing. This simulation involves a surprising excursion to low-degree (Chebyshev) polynomials which may be of independent interest, and offers a new algorithmic angle on the "cell sampling" method of Panigrahy et al. [PTW10].
Joint work with Kasper Green Larsen and Huacheng Yu.
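For readers unfamiliar with the problem: dynamic range counting over F_2 maintains an array of bits under point updates and asks for the parity of a prefix. The textbook O(log n)-per-operation upper bound, which this lower bound nearly matches, is a Fenwick (binary indexed) tree. A minimal Python sketch of that classical upper bound, for orientation only (the class name is made up; this is not part of the talk's techniques):

```python
class ParityFenwick:
    """Fenwick tree over F_2: point flips and prefix-parity queries in O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def flip(self, i):
        """Flip the bit at position i (1-indexed)."""
        while i <= self.n:
            self.tree[i] ^= 1
            i += i & (-i)  # move to the next node covering position i

    def prefix_parity(self, i):
        """Parity (XOR) of the bits in positions 1..i."""
        p = 0
        while i > 0:
            p ^= self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return p

fw = ParityFenwick(8)
fw.flip(3)
fw.flip(5)
print(fw.prefix_parity(4))  # -> 1 (only position 3 is set in 1..4)
print(fw.prefix_parity(8))  # -> 0 (positions 3 and 5 cancel mod 2)
```

Range parity of [a, b] follows as prefix_parity(b) XOR prefix_parity(a-1).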
January 9, 2018
Irit Dinur (Weizmann)
Agreement testing and high dimensional expanders
I will describe a connection between probabilistically checkable proofs and high dimensional expansion through the notion of agreement testing.
Agreement tests are a fundamental component in nearly all PCPs, and include as special cases well known tests such as line vs. line and plane vs. plane low degree tests, as well as direct product tests.
I will describe a recent result showing that every high dimensional expander gives rise to an agreement testing theorem.
Relying on the existence of bounded-degree high dimensional expanders, we get a very strong derandomization of direct product and direct sum testing.
Based on joint work with Tali Kaufman. 
January 2, 2018
Or Meir (Haifa University)
Prediction from Partial Information and Hindsight, with Application to Circuit Lower Bounds
Consider a random sequence of n bits that has entropy at least n-k, where k << n. A commonly used observation is that an average coordinate of this random sequence is close to being uniformly distributed, that is, the coordinate ``looks random''. In this work, we prove a stronger result that says, roughly, that the average coordinate looks random to an adversary that is allowed to query about n/k other coordinates of the sequence, even if the adversary is non-deterministic.
As an application of this result, we prove a new result on depth-3 circuits, which recovers as a direct corollary
the known lower bounds for the parity and majority functions,
as well as a lower bound on sensitive functions due to Boppana.
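The "looks random" observation rests on a one-line entropy calculation; a sketch of the standard argument in LaTeX notation (an illustration, not taken from the abstract):

```latex
% Subadditivity: if X = (X_1,\dots,X_n) \in \{0,1\}^n has H(X) \ge n - k, then
n - k \;\le\; H(X) \;\le\; \sum_{i=1}^{n} H(X_i),
% so the average coordinate entropy satisfies
\frac{1}{n}\sum_{i=1}^{n} H(X_i) \;\ge\; 1 - \frac{k}{n},
% and a bit with entropy close to 1 is close to uniformly distributed.
```

The talk's contribution is that this closeness to uniform survives even conditioning on about n/k adversarially queried coordinates.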

December 26, 2017
Omri Ben-Eliezer (TAU)
Removal lemma for ordered graphs and matrices
The triangle removal lemma, proved by Ruzsa and Szemeredi in 1976, states that if a graph contains a small number of triangles then it can be made triangle-free by a small number of edge deletions. The triangle removal lemma found applications in several areas of computer science and mathematics. In a series of works, culminating in a result of Alon and Shapira from 2005, it was shown that in fact, any hereditary graph property P satisfies a removal lemma of this type: if most subgraphs of G of a certain fixed size satisfy P, then we can make G satisfy P using a small number of edge insertions/deletions.
However, these results only applied to unordered graphs, as the proof methods relied heavily on the fact that in such graphs there is no order on the vertices. For ordered structures, such as vertex-ordered graphs and matrices (whose rows and columns are ordered), no removal lemmas have been known. In this talk we describe how to settle this issue, establishing an "order preserving" removal lemma for all hereditary properties of ordered graphs and matrices (and more generally, for all 2D structures that are somewhat well-behaved). The result has direct implications in property testing, showing that any hereditary property of these ordered structures is constant-query testable with one-sided error.
Based on joint work with Noga Alon and Eldar Fischer.
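To make the "few triangles, few deletions" intuition concrete, here is a toy Python illustration of the easy direction: greedily deleting one edge per surviving triangle uses at most as many deletions as there are triangles. (The removal lemma itself is the much stronger quantitative statement for dense graphs; the example graph below is made up.)

```python
from itertools import combinations

def triangles(edges, n):
    """List all triangles in the graph on vertices 0..n-1."""
    E = set(map(frozenset, edges))
    return [t for t in combinations(range(n), 3)
            if all(frozenset(p) in E for p in combinations(t, 2))]

def make_triangle_free(edges, n):
    """Greedy: while a triangle survives, delete one of its edges.
    Returns the number of deletions, at most the initial triangle count."""
    E = set(map(frozenset, edges))
    deletions = 0
    while True:
        ts = triangles([tuple(e) for e in E], n)
        if not ts:
            return deletions
        a, b, _ = ts[0]
        E.discard(frozenset((a, b)))
        deletions += 1

# Two edge-disjoint triangles sharing vertex 2: (0,1,2) and (2,3,4).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
print(make_triangle_free(edges, 5))  # -> 2
```

Note this greedy bound is trivial; the lemma's strength is that o(n^3) triangles already force only o(n^2) deletions, far fewer than one per triangle.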

December 19, 2017
Ben Lee Volk (TAU)
Unbalancing Sets and Lower Bounds for Multilinear Circuits
We will see a quadratic circuit lower bound for the model of syntactically multilinear arithmetic circuits, which is a natural computational model for computing multilinear polynomials. Along the way, we'll discuss a solution for a "discrepancy maximization" type problem in extremal set theory, known as Galvin's problem.
(Joint work with Noga Alon and Mrinal Kumar)

December 12, 2017
Noga Ron-Zewi (Haifa University)
High-rate Locally List-Recoverable Codes & Other Beasts
We give the first construction of high-rate locally list-recoverable codes. List-recovery has been an extremely useful building block in coding theory, and our motivation is to use these codes as such a building block. In particular, our construction gives the first capacity-achieving locally list-decodable codes; the first capacity-achieving globally list-decodable codes with a nearly linear time list decoding algorithm; and a randomized construction of binary codes on the Gilbert-Varshamov bound that can be uniquely decoded in near-linear time, with higher rate than was previously known. Our approach is combinatorial, and relies on a list recovery algorithm for high-rate tensor codes.
Joint work with Brett Hemenway and Mary Wootters.

December 5, 2017
Guy Rothblum (Weizmann)
The Key to Differential Privacy's Success
Differential privacy provides a rigorous and robust privacy guarantee to individuals in the context of statistical data analysis. It has sparked a revolution in the study of privacy-preserving data analysis, impacting fields from computer science to statistics to legal scholarship. A key factor underlying this success is robustness under composition: when multiple differentially private algorithms are run on the same individual's data, privacy degrades smoothly and gradually.
It is hard to overstate the importance of robustness under composition. In reality, individuals' data are involved in multiple datasets and analyses. Privacy that does not compose offers only questionable protection. No less important, composition makes Differential Privacy programmable: differentially private algorithms for small or common tasks can be used as subroutines in larger, more complex algorithms, and inherit the subroutines' privacy guarantees (up to some degradation).
Composition has been key to differential privacy's success, and understanding how privacy degrades under composition is a core issue in the study of privacy-preserving data analysis. The differential privacy community has attempted to achieve a more comprehensive understanding of composition. Recent works have made significant advances in this study, touching on issues in complexity theory and probability theory, and shedding new light on the behavior of differentially private algorithms.
This talk will survey the study of composition in differentially private data analysis, from basic definitions and guarantees all the way to the most recent developments and open questions in this exciting frontier.
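As a concrete illustration of how privacy degrades under composition (background, not material from the talk): basic composition says k runs of an eps-DP algorithm are (k*eps)-DP, while the advanced composition theorem of Dwork, Rothblum and Vadhan trades a small delta' failure probability for a roughly sqrt(k) dependence. A minimal Python sketch of the two bookkeeping formulas:

```python
import math

def basic_composition(eps, k):
    """k runs of an eps-DP mechanism are (k * eps)-DP."""
    return k * eps

def advanced_composition(eps, k, delta_prime):
    """Advanced composition (Dwork-Rothblum-Vadhan): k runs of an eps-DP
    mechanism are (eps', delta')-DP with
    eps' = eps * sqrt(2k ln(1/delta')) + k * eps * (e^eps - 1)."""
    return (eps * math.sqrt(2 * k * math.log(1 / delta_prime))
            + k * eps * (math.exp(eps) - 1))

eps, k = 0.1, 100
print(basic_composition(eps, k))                      # linear in k
print(advanced_composition(eps, k, 1e-6))             # ~sqrt(k) for small eps
```

For eps = 0.1 and k = 100, the advanced bound (about 6.3) is noticeably below the basic bound of 10, at the cost of a 10^-6 failure probability.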

November 28, 2017
Nikhil Mande (TIFR)
Weights at the Bottom Matter When the Top is Heavy
We exhibit a natural function F_n on n variables, computable by a linear-size decision list of "Equalities", but whose sign rank is exp(Omega(n^{1/4})). The simplicity of F_n yields the following two consequences.
1) F_n can be computed by linear-size depth-2 threshold formulas when the weights of the threshold gates are unrestricted (THR o THR), but any THR o MAJ circuit computing it requires size exp(Omega(n^{1/4})). This provides the first exponential separation between the boolean circuit complexity classes THR o MAJ and THR o THR, answering a question of Amano and Maruoka [MFCS'05] and Hansen and Podolskii [CCC'10].
In contrast, Goldmann, Hastad and Razborov [Computational Complexity'92] had shown more than 25 years ago that functions efficiently computable by MAJ o THR circuits can also be efficiently computed by MAJ o MAJ circuits. Thus, while weights at the bottom layer do not matter when the top is light, we discover that they do when the top is heavy.
2) The function F_n lies in the communication complexity class P^MA under the natural partition of the inputs. Since F_n has large sign rank, this implies P^MA is not a subset of UPP, strongly resolving a conjecture of Goos, Pitassi and Watson [ICALP'16].
In order to prove our main result, we view F_n as an XOR function, and develop a technique to lower bound the sign rank of such functions. This requires novel arguments that allow us to conclude that polynomials of low Fourier weight cannot approximate certain functions, even if they are allowed unrestricted degree.
This is based on joint work with Arkadev Chattopadhyay (TIFR, Mumbai).

November 14, 2017
Itzhak Tamo (TAU)
Optimal repair of polynomials: Achieving the cut-set bound
It is a well known fact that any k evaluation points of a polynomial P(x) of degree less than k suffice to
calculate (recover) the evaluation at any other point. Motivated by applications to error correcting codes for
storage systems, we consider the following question.
Is it possible to recover P(\alpha) for some \alpha by observing some partial information of d >= k evaluation points P(\alpha_i)? More precisely, the observed partial information is f_i(P(\alpha_i)) for some function f_i which is not necessarily injective, and the goal is to perform the recovery while minimizing the total amount of observed partial information.
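The "well known fact" the abstract starts from is Lagrange interpolation: k evaluations of a polynomial of degree less than k determine its value everywhere. A minimal Python sketch over a prime field GF(p) (the polynomial, prime, and evaluation points below are arbitrary examples):

```python
def recover_at(points, alpha, p):
    """Given k pairs (a_i, P(a_i)) of a polynomial of degree < k over GF(p),
    recover P(alpha) by Lagrange interpolation."""
    total = 0
    for i, (ai, yi) in enumerate(points):
        num, den = 1, 1
        for j, (aj, _) in enumerate(points):
            if i != j:
                num = num * (alpha - aj) % p   # numerator of the i-th basis poly
                den = den * (ai - aj) % p      # its denominator
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

p = 101
P = lambda x: (3 * x * x + 7 * x + 2) % p  # degree < 3, so 3 evaluations suffice
pts = [(1, P(1)), (2, P(2)), (5, P(5))]
print(recover_at(pts, 10, p), P(10))  # both 69
```

The talk's question is what happens when each node reveals only f_i(P(\alpha_i)) rather than the full evaluation, the regime relevant to repair bandwidth in storage codes.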
November 7, 2017
Roei Tell (Weizmann)
Quantified derandomization of linear threshold circuits: A potential path towards separating TC0 from NEXP?
Following Williams' separation of ACC from NEXP, a prominent current challenge in complexity theory is the attempt to prove lower bounds for TC0, the class of constant-depth, polynomial-size circuits with majority gates. An appealing approach to prove such lower bounds is to construct a nontrivial derandomization algorithm for TC0.
In the talk we will see two results. The first is a *quantified* derandomization algorithm for TC0 circuits with a super-linear number of wires; the algorithm distinguishes between circuits with extremely high acceptance probability and circuits with extremely low acceptance probability (instead of distinguishing between acceptance probabilities 1/3 and 2/3, as in standard derandomization). The second result is that even a very *modest* improvement in the parameters of the foregoing
algorithm would yield a nontrivial algorithm for *standard* derandomization of *all of TC0*, and would consequently separate TC0 from NEXP.

October 24, 2017
Scott Aaronson (UT Austin)
New Results on Learning and Reconstruction of Quantum States
Given an unknown D-dimensional quantum state rho, as well as M two-outcome measurements E_1,...,E_M, how many copies of rho do we need, if we want to learn the approximate probability that E_i accepts rho for *every* i? In this talk, I'll prove the surprising result (I didn't believe it myself at first) that one can achieve this using a number of copies that's polylogarithmic in both M and D. So, e.g., one can learn whether *every* size-n^3 quantum circuit accepts or rejects an n-qubit state, given only poly(n) copies of the state. To prove this will require first surveying previous results on measuring quantum states and succinctly describing them, including my 2004 postselected learning theorem, and my 2006 "Quantum OR Bound" (with an erroneous proof fixed in 2016 by Harrow, Lin, and Montanaro).
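For intuition about the setup (this is the naive baseline, not the talk's algorithm): estimating each Tr(E_i rho) from fresh copies costs a number of copies growing with M, whereas the result above needs only polylog(M, D). A toy numpy simulation of the naive strategy, with made-up dimensions and random rank-2 projectors as the two-outcome measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, copies_per_E = 4, 5, 2000  # illustrative sizes only

# Random pure state rho = |psi><psi|
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

def random_projector(dim, rank):
    """A Haar-ish random rank-`rank` projector, playing the role of E_i."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    v = q[:, :rank]
    return v @ v.conj().T

Es = [random_projector(D, 2) for _ in range(M)]

# Naive strategy: estimate Tr(E_i rho) by measuring fresh copies for each i.
ests = []
for E in Es:
    true_p = float(np.real(np.trace(E @ rho)))     # acceptance probability
    outcomes = rng.random(copies_per_E) < true_p   # simulated Bernoulli outcomes
    ests.append((true_p, float(outcomes.mean())))

for true_p, est in ests:
    print(round(true_p, 2), round(est, 2))
```

Each empirical frequency lands within sampling error of Tr(E_i rho); the talk's point is that, remarkably, all M probabilities can be learned from polylogarithmically many copies rather than fresh batches per measurement.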
As time permits, I'll also discuss new joint work with Xinyi Chen, Elad Hazan, and Ashwin Nayak, which takes my 2006 result on PAC-learnability of quantum states, and extends it to the setting of online learning. Here we show that, given a sequence of T two-outcome measurements on an n-qubit state, even if the sequence is chosen adversarially, one can still learn to predict the outcomes of those measurements with total regret O(n) (in the "realizable" case) or O(sqrt(Tn)) (in the "non-realizable" case).