Theory of Computation Seminar
School of Computer Science

Date  Speaker  Title 
2011/11/1  Bill Gasarch  [+] Lower bounds on Tree Resolution Proofs for the Pigeon Hole Principle 
Resolution is a common technique to prove that a formula is not satisfiable. In many cases it gives SHORT proofs of unsatisfiability. Does it always? Unlikely: if it did, then NP=coNP. I will present a very nice proof by Beyersdorff, Galesi, and Lauria that a restricted form of resolution yields proofs of size $2^{\Omega(n\log n)}$ for the pigeonhole principle. (The result is not new.)
I will then present some intriguing open problems (that are MINE). NO prior knowledge of anything is needed. Really! In particular, you do not need to know what resolution is ahead of time (I didn't when I first read this paper).
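As a concrete reference point, the pigeonhole principle itself is easy to write down as a CNF formula. The sketch below (my own illustration, not part of the talk) generates its clauses; variable (i, j) stands for "pigeon i sits in hole j":

```python
from itertools import combinations

def php_clauses(n):
    """CNF clauses for the pigeonhole principle with n+1 pigeons and
    n holes.  A literal is a (variable, polarity) pair, where the
    variable (i, j) means "pigeon i sits in hole j".  The formula is
    unsatisfiable for every n >= 1."""
    pigeons, holes = range(n + 1), range(n)
    clauses = []
    # Every pigeon occupies at least one hole.
    for i in pigeons:
        clauses.append([((i, j), True) for j in holes])
    # No hole holds two pigeons.
    for j in holes:
        for i1, i2 in combinations(pigeons, 2):
            clauses.append([((i1, j), False), ((i2, j), False)])
    return clauses
```

These unsatisfiable formulas are exactly the hard instances for which tree resolution needs the proof size discussed above.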
2011/11/8  Shai Vardi  [+] Local Computation Algorithms 
For input $x$, let $F(x)$ denote the set of outputs that are the ``legal'' answers to a computational problem $F$. Suppose $x$ and members of $F(x)$ are so large that there is no time to read them in their entirety. We propose a model of {\em local computation algorithms} which, for a given input $x$, support queries by a user to values of specified locations $y_i$ in a legal output $y \in F(x)$. When more than one legal output $y$ exists for a given $x$, the local computation algorithm should output in a way that is consistent with at least one such $y$. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique which under certain conditions can be applied to construct local computation algorithms that run in {\em polylogarithmic} time and space. The technique is based on known constructions of small sample spaces of $k$-wise independent random variables and Beck's analysis in his algorithmic approach to the Lov{\'{a}}sz Local Lemma. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying $k$-SAT formulas.
Joint work with Ronitt Rubinfeld and Ning Xie.
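To make the consistency requirement concrete, here is a hypothetical toy sketch of a local algorithm for maximal independent set: fix random vertex priorities and answer membership queries by recursing only into lower-priority neighbors. This illustrates the model only; it is not the polylogarithmic-time construction of the talk (which relies on $k$-wise independence and Beck's analysis):

```python
import random

def local_mis_oracle(adj, seed=0):
    """Membership oracle for one fixed maximal independent set of the
    graph `adj` (dict: vertex -> neighbor list).  Vertex v joins the
    MIS iff no lower-priority neighbor joins, so each query explores
    only the part of the graph it needs, yet all answers are
    consistent with the same greedy MIS."""
    rng = random.Random(seed)
    prio = {v: rng.random() for v in adj}
    memo = {}

    def in_mis(v):
        if v not in memo:
            # Recursion always moves to strictly smaller priority,
            # so it terminates without cycles.
            memo[v] = all(not in_mis(u) for u in adj[v] if prio[u] < prio[v])
        return memo[v]

    return in_mis
```

Repeated queries share the memo table, so answers about different vertices never contradict each other.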
 
2011/11/15  Ronen Shaltiel  [+] Dispersers for affine sources with subpolynomial entropy 
We construct an explicit disperser for affine sources over $\F_2^n$ with entropy $k = 2^{\log^{0.9} n} = n^{o(1)}$. This is a polynomial time computable function $D:\F_2^n \to \set{0,1}$ such that for every affine space $V$ of $\F_2^n$ that has dimension at least $k$, $D(V)=\set{0,1}$. This improves the best previous construction of Ben-Sasson and Kopparty that achieved $k = \Omega(n^{4/5})$ and is the first pseudorandom object for affine sources with entropy less than $\sqrt{n}$. Our technique follows a high level approach that was developed in the context of dispersers for two independent sources in two papers by Barak, Kindler, Shaltiel, Sudakov and Wigderson, and Barak, Rao, Shaltiel and Wigderson. We design a "challenge-response game" for affine sources that enables the disperser to locate high entropy blocks in the source even though it only receives a single sample from the source distribution. In order to implement the game we rely on tools and ideas of Rao in the context of extractors for low-weight affine sources. The basic approach gives a disperser for affine sources with entropy larger than $\sqrt{n}$. We use the challenge-response game to perform a recursive win-win analysis that enables us to handle affine sources with entropy smaller than $\sqrt{n}$.
 
2011/11/22  Ilan Newman  [+] Every Property of Hyperfinite Graphs is Testable 
A property testing algorithm for a property $\Pi$ in the bounded degree graph model \cite{GR97} is an algorithm that, given access to the adjacency list representation of a graph $G=(V,E)$ with maximum degree at most $d$, accepts $G$ with probability at least $2/3$ if $G$ has property $\Pi$, and rejects $G$ with probability at least $2/3$ if it differs on more than $\epsilon dn$ edges from every $d$-degree-bounded graph with property $\Pi$. A property is \emph{testable} if for every $\epsilon,d$ and $n$ there is a property testing algorithm $A_{\epsilon,n,d}$ that makes at most $q(\epsilon,d)$ queries to an input graph of $n$ vertices, that is, a nonuniform algorithm that makes a number of queries that is independent of the graph size.
A $k$-disc around a vertex $v$ of a graph $G=(V,E)$ is the subgraph induced by all vertices of distance at most $k$ from $v$.
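The $k$-disc is exactly what a constant-query tester can explore from a vertex, and it is computed by a depth-bounded breadth-first search; a small sketch (assuming an adjacency-list dictionary `adj`):

```python
from collections import deque

def k_disc(adj, v, k):
    """Vertex set of the k-disc around v: all vertices at distance at
    most k, found by breadth-first search cut off at depth k."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand past distance k
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)
```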
Our results also give a partial explanation of the success of motifs in the analysis of complex networks.
This is a joint work with Christian Sohler.
 
2011/11/29  Zohar Karnin  [+] Deterministic construction of a high dimensional l_p section in l_1^n for any p<2, with applications to Compressed Sensing 
In Compressed Sensing there is a need for linear compression of sparse vectors. Such compression algorithms are characterized by three main parameters: the compression ratio, the runtime, and the quality guarantee. Quality guarantee is achieved when the kernel of the compression matrix A is a high dimensional subspace in which the l_1 and l_2 norms behave similarly.
Since p < 2 we get a slightly weaker form of quality guarantee. However, unlike previous related results, our construction is deterministic, with a highly sparse A, having at most O(1) nonzero entries in each row. As a result, we get a shorter runtime for both the compression and decompression algorithms while keeping the other parameters (i.e., compression ratio and quality guarantee) almost the same as in previous constructions.
 
2011/12/6  Michael Langberg  [+] A Unified Framework for Approximating and Clustering Data 
One of the great challenges in computational theory is the extraction of patterns from massive and high-dimensional data sets, e.g., clustering. The field of geometric clustering is an extensively studied one with a wide spectrum of ``real world'' applications.
A ``coreset'' for a given clustering optimization problem is a ``compressed'' representation of the data at hand, in the sense that a solution for the clustering problem with the (small) coreset as input would yield an approximate solution to the problem with the original (larger) input. Using traditional techniques, a coreset usually implies provable linear time algorithms for the corresponding clustering problem.
In this talk I will present a unified and abstract framework that yields the efficient construction of succinct coresets for several clustering problems. Representing the data by a set F of positive functions over a ground set X, our framework forges a link between the combinatorial complexity of the function family F at hand (measured in terms of classical VC dimension) and the paradigm of coresets. Our coresets are obtained via randomized sampling according to a delicately designed sampling distribution. Examples in which our framework has been successful (and improves over previous works) include the k-median, the k-line median, projective clustering, linear regression, low-rank approximation, and subspace approximation problems.
This talk is based on joint works with Dan Feldman and Leonard Schulman.
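The sampling idea can be seen in the simplest possible setting. The sketch below is a generic importance-sampling illustration (not the paper's delicately designed distribution): draw points with probabilities of your choosing and reweight each draw so that the weighted cost is an unbiased estimate of the true 1-median cost:

```python
import random

def sampled_cost(points, center, probs, m, seed=0):
    """Estimate sum_i |points[i] - center| from m weighted samples.

    Drawing index i with probability probs[i]/total and weighting it
    by total/(m*probs[i]) makes the weighted sum an unbiased estimator
    of the true cost, for ANY positive sampling distribution; a good
    (sensitivity-based) distribution additionally makes the variance
    small, which is what a coreset construction needs."""
    rng = random.Random(seed)
    idx = rng.choices(range(len(points)), weights=probs, k=m)
    total = sum(probs)  # choices() normalizes; fold that into the weight
    return sum(abs(points[i] - center) * total / (m * probs[i]) for i in idx)
```

With weights roughly proportional to each point's cost, the per-sample values are nearly equal and the estimate concentrates quickly.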
 
2011/12/13  Gil Cohen  [+] On the Degree of Univariate Polynomials Over the Integers 
We study the following problem raised by von zur Gathen and Roche [Combinatorica '97]: What is the minimal degree of a nonconstant polynomial $f:\{0,\ldots,n\}\to\{0,\ldots,m\}$? The above-mentioned authors proved that for $m=1$, $\deg(f) = n-o(n)$.
In this talk we present lower bounds for larger values of $m$ (even $m$ exponential in $n$). We complement these bounds by proving the existence of polynomials of not too high degree.
Our proofs use a variety of techniques that we believe will find other applications as well. One technique shows how to handle a certain set of diophantine equations by working modulo a well chosen set of primes (i.e. a Boolean cube of primes). Another technique shows how to use lattice theory and Minkowski's theorem to prove the existence of a polynomial with certain properties.
One motivation for studying this problem for $m>1$ comes from the work of Shpilka and Tal [CCC '11] who showed that in order to understand the Fourier spectrum of symmetric Boolean functions (which has been shown to relate to the problem of learning symmetric juntas) one needs to study polynomials $f:\{0,...,n\}\to \{0,1,2\}$ and prove lower bounds on their degree, which is exactly the question that we study here for the case $m=2$.
Joint work with Amir Shpilka and Avishay Tal.
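The degree of a function given by its values on $\{0,\ldots,n\}$ can be read off from finite differences, which is a convenient way to experiment with the question above; a small sketch:

```python
def degree(values):
    """Degree of the unique polynomial interpolating
    (0, values[0]), ..., (n, values[n]): repeatedly take finite
    differences; the degree is the last level at which a nonzero
    entry survives (-1 for the identically-zero function)."""
    d, diffs, k = -1, list(values), 0
    while any(diffs):
        d = k
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        k += 1
    return d
```

For instance, the alternating values 0,1,0,1,... on $\{0,\ldots,n\}$ force degree exactly $n$, in the spirit of the $n-o(n)$ lower bound for $m=1$.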
 
2011/12/20  Benny Sudakov (UCLA)  [+] Nearly Complete Graphs Decomposable into Large Induced Matchings and their Applications 
We describe two constructions of (very) dense graphs which are edge disjoint unions of large {\em induced} matchings. These constructions disprove (in a strong form) a conjecture of Meshulam, substantially improve a result of Birk, Linial and Meshulam on communicating over a shared channel, and (slightly) extend the analysis of H{\aa}stad and Wigderson of the graph test of Samorodnitsky and Trevisan for linearity. We can also use them to settle a combinatorial question of Vempala regarding a candidate rounding scheme for the directed Steiner tree problem.
Joint work with N. Alon and A. Moitra  
2011/12/27  Avi Wigderson  [+] Local Correction of Codes and Euclidean Incidence Geometry 
A classical theorem in Euclidean geometry asserts that if a set of points has the property that every line through two of them contains a third point, then they must all be on the same line. We prove several approximate versions of this theorem (and related ones), which are motivated from questions about locally correctable codes and matrix rigidity. The proofs use an interesting combination of combinatorial, algebraic and analytic tools.
Joint work with Boaz Barak, Zeev Dvir and Amir Yehudayoff  
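The classical statement is easy to sanity-check by brute force on small point sets; the sketch below (an illustration of the exact statement, not of the approximate versions in the talk) uses exact integer arithmetic:

```python
from itertools import combinations

def collinear(p, q, r):
    """Exact 2-D collinearity test via the cross product."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1)

def sylvester_gallai_check(points):
    """Return (hypothesis_holds, all_collinear): whether every line
    through two points contains a third, and whether the whole set is
    collinear.  The classical theorem says the first implies the
    second for finite sets in the Euclidean plane."""
    pts = list(points)
    hyp = all(
        any(collinear(p, q, r) for r in pts if r not in (p, q))
        for p, q in combinations(pts, 2)
    )
    coll = len(pts) < 3 or all(collinear(pts[0], pts[1], r) for r in pts[2:])
    return hyp, coll
```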
2012/1/3  Scott Aaronson (MIT)  [+] Quantum Money from Hidden Subspaces 
Joint work with Paul Christiano. Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is  
2012/1/10  Gabriel Nivasch (EPFL)  [+] Upper bounds for centerflats 
For every fixed d and every n, we construct an n-point set G in R^d such that every line in R^d is contained in a halfspace that contains only 2n/(d+2) points of G (up to lower-order terms).
Apparently, the point set G satisfies the following more general property: For every k, every k-flat in R^d is contained in a halfspace that contains only (k+1) n / (d+k+1) points of G (up to lower-order terms).
In 2008, Bukh, Matousek, and Nivasch conjectured the following generalization of Rado's centerpoint theorem: For every n-point set S in R^d and every k, there exists a k-flat f in R^d such that every halfspace that contains f contains at least (k+1) n / (d+k+1) points of S (up to lower-order terms). (The centerpoint theorem is obtained by setting k=0.) Such a flat f would be called a "centerflat".
Thus, our upper bound construction shows that the leading constant (k+1)/(k+d+1) in the above conjecture is tight (certainly for k = 1, and apparently for all k).
The set G of our construction is the "stretched grid"  a point set which has been previously used by Bukh et al. for other related purposes.
Joint work with Boris Bukh.
 
2012/1/17  Dana Moshkovitz (MIT)  [+] The Sliding Scale Conjecture From Intersecting Curves 
The Sliding Scale Conjecture was posed by Bellare, Goldwasser, Lund and Russell in 1993 and has been open since. It says that there are PCPs with a constant number of queries, polynomial alphabet and polynomially small error. We show that the conjecture can be proved assuming a certain geometric conjecture about curves over finite fields. The geometric conjecture states that there are small families of low degree curves that behave, both in their distribution over points and in the intersections between pairs of curves from the family, similarly to the family of all low degree curves.
 
2012/1/24  Klim Efremenko  [+] From Irreducible Representations to Locally Decodable Codes 
Locally Decodable Code (LDC) is a code that encodes a message in a way that one can decode any particular symbol of the message by reading only a constant number of locations, even if a constant fraction of the encoded message is adversarially corrupted.
We show that if there exists an irreducible representation (\rho, V) of a group G and q elements g_1, g_2, ..., g_q in $G$ such that there exists a linear combination of the matrices \rho(g_i) that is of rank one, then we can construct a $q$-query Locally Decodable Code C: V -> F^G. We show the potential of this approach by constructing constant-query LDCs of subexponential length matching the parameters of the best known constructions.
No prior knowledge in representation theory will be assumed.
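As a warm-up for what "locally decodable" means, here is the classical 2-query Hadamard code (a standard textbook example, not the representation-theoretic construction of the talk):

```python
import itertools

def hadamard_encode(x):
    """Hadamard code over F_2: one codeword bit <x, r> mod 2 for every
    r in {0,1}^n (exponentially long, but locally decodable)."""
    n = len(x)
    return {r: sum(a & b for a, b in zip(x, r)) % 2
            for r in itertools.product((0, 1), repeat=n)}

def decode_bit(codeword, n, i, rng):
    """2-query local decoder for x_i: query positions r and r + e_i
    and XOR the answers.  Each query alone is uniformly distributed,
    which is why a small fraction of adversarial corruptions can be
    tolerated with high probability."""
    r = tuple(rng.randint(0, 1) for _ in range(n))
    r_flip = tuple(b ^ (j == i) for j, b in enumerate(r))
    return codeword[r] ^ codeword[r_flip]
```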
 
2012/1/31  Alexey Glibichuk  [+] Incidences over the field of prime order 
http://tq.cs.tau.ac.il/seminar/Glibichuk_Abstract.pdf
In 1983 J. Beck proved the following incidence theorem in R^2. Let P be a set of points in R^2. Then either P contains c|P| points on a straight line, or the pairs of points of P determine at least c|P|^2 distinct lines, where c is an absolute constant. We write |S| for the number of elements of a finite set S. Beck's original paper was followed in the same issue of the same journal by a result of Szemeredi and Trotter stating that the number of incidences between m straight lines and n points in R^2 is $O(m+n+(mn)^{2/3})$. The Szemeredi-Trotter theorem implies Beck's theorem as a corollary. The analog of Beck's theorem for the finite field model was not known for a long time.
A breakthrough was made by Jean Bourgain, Nets Katz and Terence Tao.  
2012/2/6  Roy Schwartz  [+] Submodular Maximization 
The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on submodular maximization problems are mostly combinatorial in nature. We show that continuous methods can be further used to obtain improved results in other settings.  
2012/3/13  Amir Shpilka  [+] On Identity Testing of Tensors, Low-rank Recovery and Compressed Sensing 
We study the problem of designing efficient, deterministic, black-box polynomial identity testing algorithms for tensors, and obtain an explicit construction of a quasipolynomial-sized hitting set. We also consider the question of recovering the tensor deterministically, and connect this problem to the question of low-rank recovery (in the noiseless case). We then show a connection between low-rank recovery and the task of sparse (vector) recovery: any sparse-recovery algorithm yields a low-rank recovery scheme with equivalent parameters. We obtain this reduction using linear-algebraic techniques, and not using convex optimization, which is more commonly seen in compressed sensing and low-rank-recovery algorithms. Finally, we shall make a connection to rank-metric codes, as studied in coding theory.
This is a joint work with Michael Forbes  
2012/3/20  Ran Raz  [+] The Surprise Examination Paradox and the Second Incompleteness Theorem 
We give a new proof for Gödel's second incompleteness theorem, based on Kolmogorov complexity, Chaitin's incompleteness theorem, and an argument that resembles the surprise examination paradox.
We then go the other way around and suggest that the second incompleteness theorem gives a possible resolution of the surprise examination paradox. Roughly speaking, we argue that the flaw in the derivation of the paradox is that it contains a hidden assumption that one can prove the consistency of the mathematical theory in which the derivation is done; which is impossible by the second incompleteness theorem.
I will start with a short introduction to the known proofs of Gödel's incompleteness theorems and their relations to the liar paradox, Kolmogorov complexity and Berry's paradox.
No knowledge of logic will be assumed.
Joint work with Shira Kritchman  
2012/3/27  Eden Chlamtac  [+] Smaller integrality gaps and improved approximations using hierarchies of convex programs 
We consider natural convex relaxations of integer programs, such as Linear Programs (LP) and Semidefinite Programs (SDP), and examine how well they approximate various problems in combinatorial optimization.
We will look at different hierarchies, and some universal properties of the LP and SDP relaxations derived from them. Moreover, we will see how, for certain NP-hard optimization problems, we can achieve improved approximations using such strengthened relaxations while maintaining polynomial running time overall.
 
2012/4/3  Alon Rosen  [+] Pseudorandom Functions and Lattices 
We give direct constructions of pseudorandom function families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively small low-depth arithmetic or boolean circuits (e.g., in NC$^{1}$ or even TC$^{0}$). In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new ``derandomization'' technique for the learning with errors problem which, in effect, generates the error terms deterministically.
Joint work with Abhishek Banerjee and Chris Peikert.
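The flavor of the derandomization can be seen in a toy "learning with rounding" sample, where the random LWE error is replaced by deterministic rounding (toy parameters for illustration only, not a secure instantiation or the talk's exact construction):

```python
def lwr_sample(a, s, q, p):
    """Learning-with-rounding sample: instead of adding a random error
    to <a, s> mod q (as in LWE), deterministically round it to the
    nearest multiple of q/p.  The "error" is thereby generated
    deterministically, which is the spirit of the derandomization."""
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    return (inner * p + q // 2) // q % p  # round(inner * p / q) mod p
```

Because the output is a deterministic function of a and s, the same (a, s) always yields the same sample, unlike a fresh LWE sample.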
 
2012/4/17  Noga Zewi  [+] An additive combinatorics approach to the log-rank conjecture in communication complexity 
For a {0,1}-valued matrix M let CC(M) denote the deterministic communication complexity of the boolean function associated with M. The log-rank conjecture of Lovasz and Saks [FOCS 1988] states that CC(M) <= log^c(rank(M)) for some absolute constant c, where rank(M) denotes the rank of M over the field of real numbers.
Our proof is based on the study of the "approximate duality conjecture" which was recently suggested by Ben-Sasson and Zewi [STOC 2011] and studied there in connection to the PFR conjecture. First we improve the bounds on approximate duality assuming the PFR conjecture. Then we use the approximate duality conjecture (with improved bounds) to get the aforementioned upper bound on the communication complexity of low-rank matrices; this part uses the methodology suggested by Nisan and Wigderson [Combinatorica 1995].
Joint work with Eli BenSasson and Shachar Lovett.
 
2012/4/24  Elad Haramaty  [+] Optimal testing of multivariate polynomials over small prime fields 
We consider the problem of testing if a given function f is close to an n-variate degree d polynomial over the finite field of q elements. The natural, low-query test for this property would be to pick the smallest dimension t = t(q,d) ~ d/q such that every function of degree greater than d reveals this feature on some t-dimensional affine subspace, and to test that f, when restricted to a random t-dimensional affine subspace, is a polynomial of degree at most d on this subspace. Such a test makes only q^t queries, independent of n. Previous works, by Alon et al., by Kaufman and Ron, and by Jutla et al., showed that this natural test rejects functions that are Omega(1)-far from degree d polynomials with probability at least Omega(q^{-t}) (the results of Kaufman and Ron hold for all fields, while the others hold only for fields of prime order). Thus, to get a constant probability of detecting functions that are at constant distance from the space of degree d polynomials, the tests made q^{2t} queries. Kaufman and Ron also noted that when q is prime, q^t queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. It was unclear whether the soundness analysis of these tests was tight, and this question relates closely to the task of understanding the behavior of the Gowers Norm. This motivated the work of Bhattacharyya et al., who gave an optimal analysis for the case of the binary field and showed that the natural test actually rejects functions that are Omega(1)-far from degree d polynomials with probability Omega(1).
In this work we give an optimal analysis of this test for all fields, showing that the natural test does indeed reject functions that are Omega(1)-far from degree d polynomials with Omega(1) probability. Our analysis thus shows that this test is optimal (matches known lower bounds) when q is prime. Our approach extends the proof technique of Bhattacharyya et al.; however, it has to overcome many technical barriers in the process. The natural extension of their analysis leads to an O(q^d) query complexity, which is worse than that of Kaufman and Ron for all q except 2! The main technical ingredient in our work is a tight analysis of the number of "hyperplanes" (affine subspaces of codimension 1) on which the restriction of a degree d polynomial has degree less than d. We show that the number of such hyperplanes is at most O(q^{t(q,d)}), which is tight to within constant factors.
Joint work with Amir Shpilka and Madhu Sudan  
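For the special case t = 1 the natural test is easy to state in full: restrict f to a random line and check the degree of the univariate restriction. A brute-force sketch for small prime q (interpolating over all q points of the line; an illustration of the test's structure, not an optimized implementation):

```python
def poly_mul(a, b, q):
    """Multiply coefficient lists of polynomials mod q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def interp_degree(values, q):
    """Degree of the unique polynomial of degree < q through
    (t, values[t]), t = 0..q-1, via Lagrange interpolation mod a
    prime q (inverses by Fermat's little theorem)."""
    coeffs = [0] * q
    for t in range(q):
        num, denom = [1], 1
        for u in range(q):
            if u != t:
                num = poly_mul(num, [(-u) % q, 1], q)
                denom = denom * (t - u) % q
        scale = values[t] * pow(denom, q - 2, q) % q
        coeffs = [(c + scale * m) % q for c, m in zip(coeffs, num)]
    return max((k for k, c in enumerate(coeffs) if c), default=-1)

def line_test(f, n, d, q, rng):
    """One iteration of the natural test with t = 1: restrict f to a
    random line s -> a + s*b and accept iff the restriction has
    degree at most d."""
    a = [rng.randrange(q) for _ in range(n)]
    b = [rng.randrange(q) for _ in range(n)]
    values = [f([(ai + s * bi) % q for ai, bi in zip(a, b)]) for s in range(q)]
    return interp_degree(values, q) <= d
```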
2012/5/1  Ankit Gupta  [+] Reconstruction of Depth-4 Multilinear Circuits with Top Fan-in 2 
A reconstruction algorithm for a multivariate polynomial f in F[x_1, …, x_n] is given black-box access to f and is required to efficiently output a (succinct) representation of f in some suitable model of computation. The algorithm can make adaptive queries to the black-box to evaluate f on inputs of its choice.
The simplest representation of a polynomial is as a sum of monomials, i.e., as a ΣΠ (depth-2) circuit. The reconstruction problem in this case is referred to as the “interpolation problem” and admits efficient algorithms (Klivans & Spielman). Many interesting polynomials, e.g., the determinant, however, have exponentially long representations as sums of monomials, whereas as arithmetic formulas or circuits they can be represented with quasipolynomial, or even polynomial, complexity. Unfortunately, there are very strong hardness results for the reconstruction of general circuit classes. Hence positive results for this problem are known only for restricted circuit classes.
In this talk I’ll talk about a recent work in which we present a randomized algorithm for reconstructing multilinear ΣΠΣΠ(2) circuits, i.e., multilinear depth-4 circuits with fan-in 2 at the top ‘+’ gate. The algorithm is given black-box access to a polynomial f in F[x_1,…,x_n] computable by a multilinear ΣΠΣΠ(2) circuit of size s and outputs an equivalent multilinear ΣΠΣΠ(2) circuit, runs in time poly(n,s), and works over any field F.
This is the first reconstruction result for any model of depth-4 arithmetic circuits. Prior to our work, reconstruction results for bounded-depth circuits were known only for depth-2 arithmetic circuits (Klivans & Spielman), ΣΠΣ(2) circuits (depth-3 arithmetic circuits with top fan-in 2) (Shpilka), and ΣΠΣ(k) with k=O(1) (Karnin & Shpilka). Moreover, the running times of these algorithms have a polynomial dependence on |F| and hence do not work for infinite fields such as the rationals.
 
2012/5/6  Yonatan Goldhirsh  [+] Testing Formula Satisfaction 
Property Testing deals with the following question: Distinguish using as few queries to the input as possible, ideally with a number of queries independent of the input size, between inputs that satisfy a given property and inputs that are far from any possible input satisfying the property. In the massively parametrized model, a fixed part of the input is given to the algorithm in advance, on which the algorithm has to be exact (i.e. the approximation of "not being far from a satisfying input" can only be made for the input part not given in advance).
In this talk we consider properties defined by readonce Boolean formulas, where the formula is given in advance. Such formulas are representable as trees with their nodes labeled by the corresponding gates, and the part of the input not given in advance is the assignment, i.e. the values at the leaves.
For all properties given by And/Or gates, and in fact all properties where the gates are of bounded arity, we give a test whose number of queries does not depend on the input size at all. In essence this means that untestability cannot be "broken down" to small gates. On the other hand, for nonBoolean alphabets there is in fact a simple binary gate that can be used for formulas that are not testable with a constant number of queries.
We also discuss some related questions, such as instances where we can estimate the actual distance of the input from satisfying the formula, and a test for And/Or formulas whose number of queries is only quasipolynomial in the approximation parameter.
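The tree representation in question is straightforward; a minimal evaluator for read-once And/Or formulas (my own sketch of the setup; the tests themselves are the content of the talk):

```python
def eval_formula(node, assignment):
    """Evaluate a read-once Boolean formula given as a tree.  A node
    is either ('leaf', i), reading position i of the assignment, or
    (gate, children) with gate in {'and', 'or'}: bounded-arity gates,
    which is the testable case discussed above."""
    if node[0] == 'leaf':
        return assignment[node[1]]
    vals = [eval_formula(child, assignment) for child in node[1]]
    return all(vals) if node[0] == 'and' else any(vals)
```

The property being tested is then "there exists an assignment agreeing with the input on which the root evaluates to True", with the tree fixed in advance.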
 
2012/5/15  Michael Pinsker  [+] Schaefer's theorem for graphs, or: why to consult the infinite at times 
I will present a complexity classification result for a certain class of computational problems called *GraphSAT problems*. A GraphSAT problem is given by a fixed finite set \Psi of quantifier-free first-order formulas in the language of graphs. The input of the problem consists of a set V of variables and a conjunction \Phi of statements (``constraints'') about these variables, where each statement is taken from \Psi; the question to decide is whether or not \Phi is satisfiable in a graph.
The complexity of a GraphSAT problem depends on \Psi: the larger \Psi, the more complicated instances of the problem can get, and the more complex the problem is. We prove that either \Psi is contained in one out of 17 classes of graph formulas and the corresponding problem can be solved in polynomial time, or the problem is NP-complete.
The perhaps surprising method of proof is to assign to every \Psi an infinite structure definable in the random graph, and then throw in some model theory, universal algebra, and structural Ramsey theory to analyze these structures.
This is a joint work with Manuel Bodirsky.
 
2012/5/29  Zvika Brakerski  [+] Efficient Interactive Coding Against Adversarial Noise 
We study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.
We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blowup in the communication complexity ($\cc$). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(\cc)}$. Prior works could not achieve efficient simulation in the adversarial case.
Joint work with Yael Tauman Kalai.
 
2012/6/3  Amir Yehudayoff  [+] Some reconstruction problems 
The talk will introduce some natural data-reconstruction problems, with possible connections to learning theory, cryptography, computational biology and more. A key example is the following. Assume V is a set of k n-bit binary vectors. Let p be a distribution on V. Assume we get noisy samples from V by first choosing v according to p and then flipping each entry in v with probability 0.49, independently. The goal is to reconstruct V,p from such noisy samples.
We will show how to learn V,p in time quasipolynomial in k. The algorithm uses a new structure called a partial-ID graph, which captures impostor relations in a given set of vectors.
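The degenerate case k = 1 is already instructive: coordinate-wise majority over the noisy samples recovers the single hidden vector. A sketch (using a milder flip probability than the 0.49 above, purely to keep the experiment small; the hard part of the talk is handling k > 1):

```python
import random

def noisy_samples(v, flip_p, m, rng):
    """Draw m samples of the bit vector v, flipping each bit
    independently with probability flip_p."""
    return [[b ^ (rng.random() < flip_p) for b in v] for _ in range(m)]

def reconstruct_single(samples):
    """For k = 1, coordinate-wise majority vote recovers v: each
    sample bit is correct with probability 1 - flip_p > 1/2, so the
    majority over enough samples concentrates on the truth."""
    m = len(samples)
    return [int(sum(col) * 2 > m) for col in zip(*samples)]
```

With flip probability 0.49 the same estimator still works, but the number of samples needed grows like 1/(0.5 - 0.49)^2 per coordinate.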
 
2012/6/12  Ariel Gabizon  [+] Extractors for Polynomial Sources over Constant-Size Fields of Small Characteristic 
Let F be the field of q elements, where q=p^l for a prime p. Informally speaking, a polynomial source is a distribution over F^n sampled by low degree multivariate polynomials. In this paper, we construct extractors for polynomial sources over fields of constant size q assuming p<
Joint work with Eli Ben-Sasson  
2012/6/19  Andrej Bogdanov  [+] Pseudorandomness for linear-length branching programs and stack machines 
We give a simple distribution that is pseudorandom against (1) oblivious branching programs over a binary alphabet; (2) arbitrary branching programs over a sufficiently large alphabet; (3) logspace randomized Turing Machines extended with a stack that make a constant number of passes over their random tape. This distribution can be sampled by a pseudorandom generator with small linear stretch.
These results are consequences of the following property of the distribution over {0, 1}^n: Given any pair of subsets I, J of size 0.99n each and boolean functions f, g: {0, 1}^{0.99n} -> {0, 1}, the joint distribution (f(x_I), g(x_J)) is statistically close in the case where x is uniformly random and in the case where x is chosen from the pseudorandom distribution.
Based on joint work with Periklis Papakonstantinou and Andrew Wan.
* Special joint meeting with the Greater Tel Aviv Area Cryptography Seminar 
Date  Speaker  Title 
2010/10/19  Moni Naor  [+] Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation 
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years, to when Knuth analyzed linear probing, the tradeoff between these aspects of performance is still not completely understood. In this talk I will describe recent work that provides the construction of the first dynamic dictionary that enjoys the best of both worlds: it stores n elements using only (1 + o(1))B bits, where B is the information-theoretic lower bound for representing a set of size n taken from a universe of size u, and guarantees constant-time operations in the worst case with high probability. This problem was open even in the amortized setting. One of the main ingredients of our construction is a permutation-based variant of cuckoo hashing. An application of the above result is an approximate set membership data structure (AKA Bloom filter) that is within 1+o(1) of optimal size and works in worst-case O(1) time. Joint work with Yuri Arbitman and Gil Segev  
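For readers unfamiliar with the ingredient, here is a textbook-style cuckoo hashing sketch: two possible positions per key, so lookups probe at most two cells, with insertion by eviction. This illustrates the classical scheme only; the succinct, permutation-based variant from the talk is substantially more involved:

```python
class CuckooTable:
    """Minimal two-table cuckoo hash table (a classroom sketch, not
    the paper's construction): each key lives in one of two hashed
    positions, giving worst-case two-probe lookups."""

    def __init__(self, size=64):
        self.size = size
        self.t = [[None] * size, [None] * size]

    def _pos(self, key, which):
        # Two independent-ish hash functions via salted tuple hashing.
        return hash((which, key)) % self.size

    def lookup(self, key):
        return any(self.t[w][self._pos(key, w)] == key for w in (0, 1))

    def insert(self, key, max_kicks=32):
        if self.lookup(key):
            return True
        w = 0
        for _ in range(max_kicks):
            p = self._pos(key, w)
            if self.t[w][p] is None:
                self.t[w][p] = key
                return True
            key, self.t[w][p] = self.t[w][p], key  # evict the occupant
            w = 1 - w  # place the evicted key in its other table next
        return False  # a real implementation would rehash here
```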
2010/10/26  Noga Zewi  [+] From affine to two-source extractors via approximate duality 
Two-source and affine extractors and dispersers (known also as bipartite Ramsey graphs) are fundamental objects studied in the context of derandomization. We show how to construct two-source extractors and dispersers for arbitrarily small min-entropy rate in a black-box manner from affine extractors with sufficiently good parameters. We present two black-box constructions. Both constructions work for min-entropy rate less than half. One of them can potentially reach arbitrarily small min-entropy rate provided the affine extractor used to construct it outputs, on affine sources of min-entropy rate half, a relatively large number of output bits and has sufficiently small error. Our analysis relies on the study of approximate duality, a concept related to the polynomial Freiman-Ruzsa conjecture (PFR) from additive combinatorics. For more details see: ECCC 2010-144. Joint work with Eli Ben-Sasson.  
2010/11/02  Ronen Shaltiel  [+] Typically-correct derandomization 
A main goal of complexity theory is to show that every randomized polynomial time algorithm can be simulated by a deterministic one. A typically-correct simulation of a randomized algorithm is a deterministic algorithm (preferably of the same complexity) that simulates the randomized algorithm correctly on "most inputs". This relaxed goal (introduced by Goldreich and Wigderson) is sometimes easier to obtain than standard derandomization. Specific examples are:
The results follow from general approaches to achieve typicallycorrect derandomization in various algorithmic settings. I will discuss these approaches and also, whether the relaxed goal of typically correct derandomization implies circuit lower bounds.
The talk is based on the following papers that can be downloaded from my webpage.
 
2010/11/09  Noam Livne  [+] Sequential Rationality in Cryptographic Protocols 
Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers. We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable ``empty threats'' of non-sequential solution concepts. To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria (Dodis-Halevi-Rabin, Crypto '00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.  
2010/11/16  Oded Goldreich  [+] In a World of P=BPP 
We show that proving results such as BPP=P essentially necessitates the construction of suitable pseudorandom generators (i.e., generators that suffice for such derandomization results). In particular, the main incarnation of this equivalence refers to the standard notion of uniform derandomization and to the corresponding pseudorandom generators (i.e., the standard uniform notion of ``canonical derandomizers''). This equivalence bypasses the question of which hardness assumptions are required for establishing such derandomization results, which has received considerable attention in the last decade or so (starting with Impagliazzo and Wigderson [JCSS, 2001]). We also identify a natural class of search problems that can be solved by deterministic polynomial-time reductions to BPP. This result is instrumental to the construction of the aforementioned pseudorandom generators (based on the assumption BPP=P), which is actually a reduction of the ``construction problem'' to BPP.  
2010/11/23  Alex Samorodnitsky  [+] Maximal eigenvalues of subsets of the cube and some applications 
Given a number k, we are interested in the maximal eigenvalue that an induced k-vertex subgraph of the discrete cube can have. We will give some partial answers and describe some connections of this question to bounds on binary codes and to random walks in the cube. In particular, we will give a (sketchy) proof of the first LP bound for binary codes using eigenvalues.  
2010/11/30  Eli Ben-Sasson  [+] Limits on the rate of locally testable affine-invariant codes 
A linear code is said to be affine-invariant if the coordinates of the code can be viewed as a vector space and the code is invariant under affine transformations of the coordinates. A code is said to be locally testable if proximity of a received word to the code can be tested by querying the received word in a few coordinates. Locally testable codes have played a critical role in the construction of probabilistically checkable proofs, and most such codes originate from simple affine-invariant codes (in particular the Reed-Muller codes). Furthermore, it turns out that the local testability of these codes can really be attributed to their affine-invariance. It was hoped that by studying broader classes of affine-invariant codes, one may find nicer, or simpler, locally testable codes, and in particular improve (significantly) on the rate achieved by Reed-Muller codes. In this work we show that low rate is an inherent limitation of affine-invariant codes. We show that any k-query affine-invariant binary code is contained in a 2^{k}-query testable Reed-Muller code. In fact, our result shows that any affine-invariant code that has a k-local constraint (i.e., a weight-k codeword in its dual), a necessary condition for k-query local testability, is contained in a Reed-Muller code that is 2^{k}-locally characterized (i.e., its dual is spanned by words of weight at most 2^{k}). While the structure of affine-invariant codes has been explored quite extensively in the recent past (Kaufman and Sudan [2008], Grigorescu et al. [2008, 2009]), relating the locality of constraints in such codes to the rate has been a non-trivial challenge. Such questions lead to the task of showing that a class of systems of multivariate polynomial equations has no common zeroes over finite fields. Our result effectively shows that a certain class of such systems (subclasses of diagonal systems) has no common zeroes. Joint work with Madhu Sudan.  
2010/12/07  Amir Yehudayoff  [+] Pseudorandom generators for regular branching programs 
We give new pseudorandom generators for regular read-once branching programs of small width. A branching program is regular if the in-degree of every vertex in it is (0 or) 2. For every width d and length n, our pseudorandom generator uses a seed of length O( (log(d) + log log(n) + log(1/ε)) log(n) ) to produce n bits that cannot be distinguished from a uniformly random string by any regular width-d, length-n read-once branching program, except with probability ε. We also give a result for general read-once branching programs, in the case that there are no vertices that are reached with small probability. We show that if a (possibly non-regular) branching program of length n and width d has the property that every vertex in the program is traversed with probability at least p on a uniformly random input, then the error of the generator above is at most 2ε/p^{2}. Joint work with Mark Braverman, Anup Rao and Ran Raz. ECCC version  
2010/12/21  Venkat Guruswami  [+] Bypassing UGC: Inapproximability of Subspace Approximation 
The Unique Games Conjecture (UGC) has emerged in recent years as the starting point for optimal inapproximability results for several problems. However, these problems are only known to be Unique-Games-hard but not Unique-Games-"complete", leaving open the possibility that while many of the implications of the UGC are true, the conjecture itself is false. The question of whether inapproximability results shown under the UGC can be shown without appealing to the conjecture is thus a well-motivated one. In this work, we bypass the UGC assumption in inapproximability results for two geometric problems, subspace approximation and Grothendieck problems in \ell_p-norms, obtaining a tight NP-hardness result in each case. The talk will focus on the \ell_p-subspace approximation problem (as it is the easier reduction). Here, the input consists of a set S of points in R^n and a parameter k (possibly depending on n). The goal is to find a k-dimensional subspace H of R^n that minimizes the sum of the p'th powers of the Euclidean distances of the points in S from H. This problem generalizes various special cases for different values of p, such as low-rank matrix approximation (p=2) or computing the radii of point sets (p = \infty).
For finite p > 2, we prove that this problem is NP-hard to approximate within any factor less than the p'th moment E[g^p] of the standard normal variable g ~ N(0,1). Previously, the same inapproximability factor was shown assuming the UGC. A matching SDP-based algorithm is also known (when k=n-1).
This is the *first* approximation threshold, proven under P \neq NP, that involves the Gaussian random variable in a fundamental way. Note that the problem statement itself has no mention of Gaussians. The talk will explain the appearance of the Gaussian distribution in a dictatorship test that quantitatively utilizes the Central Limit Theorem, which is then combined with a smooth version of Label Cover to give the inapproximability result. [No background knowledge of UGC-based results is needed for the talk.] Joint work with Prasad Raghavendra, Rishi Saket, and Yi Wu.
 
2010/12/28  Iftach Haitner  [+] Efficiency Improvements in Constructing Pseudorandom Generators from One-Way Functions 
We give a new construction of pseudorandom generators from any one-way function. The construction achieves better parameters and is simpler than that given in the seminal work of Hastad, Impagliazzo, Levin and Luby [SICOMP '99]. The key to our construction is a new notion of next-block pseudoentropy, which is inspired by the notion of ``inaccessible entropy'' recently introduced in [Haitner, Reingold, Vadhan and Wee, STOC '09]. An additional advantage over all previous constructions is that our pseudorandom generators are highly parallelizable and invoke the one-way function in a non-adaptive manner. Using [Applebaum, Ishai and Kushilevitz, SICOMP '06], this implies the existence of pseudorandom generators in NC^{0} based on the existence of one-way functions in NC^{1}.  
2011/01/11  Raghu Meka  [+] Pseudorandom Generators from Invariance Principles 
Invariance principles or limit theorems have recently found several important applications in theoretical computer science. In this talk I'll present some recent results with the broad theme of constructing pseudorandom generators from invariance principles. The first set of results concerns the natural question of constructing pseudorandom generators (PRGs) for low-degree polynomial threshold functions (PTFs). We give a PRG with seed length log n/eps^{O(d)} fooling degree-d PTFs with error at most eps. Previously, no nontrivial constructions were known even for quadratic threshold functions and constant error eps. For the class of degree-1 threshold functions, or halfspaces, we construct PRGs with much better dependence on the error parameter eps, obtaining a PRG with seed length O(log n + log^2(1/eps)). The second set of results concerns "discrete central limit theorems" and fooling linear forms over 0-1 inputs in total variation distance.  
2011/01/18  Dana Moshkovitz  [+] Hardness of Approximately Solving Linear Equations Over Reals 
We consider the problem of approximately solving a system of homogeneous linear equations over the reals, where each equation contains at most three variables. Since the all-zero assignment always satisfies all the equations exactly, we restrict the assignments to be ``non-trivial''. Here is an informal statement of our result: it is NP-hard to distinguish whether there is a non-trivial assignment that satisfies a $1-\delta$ fraction of the equations, or every non-trivial assignment fails to satisfy a constant fraction of the equations with a ``margin'' of $\Omega(\sqrt{\delta})$. Unlike the well-studied case of linear equations over finite fields, for equations over the reals the best approximation algorithm known (SDP-based) is the same no matter whether the number of variables per equation is two or three.
Our result is motivated by the following potential approach to proving the Unique Games Conjecture: 1. Prove the NP-hardness of solving approximate linear equations over the reals, for the case of three variables per equation (we prove this result). 2. Prove the NP-hardness of the problem for the case of two variables per equation, possibly via a reduction from the three-variable case. 3. Prove the Unique Games Conjecture. An interesting feature of our result is that it shows an NP-hardness result that matches the performance of a non-trivial SDP algorithm. Indeed, the Unique Games Conjecture predicts that an SDP-based algorithm is optimal for a huge class of problems (e.g., all CSPs, by Raghavendra's result).
 
2011/02/22  Yuval Ishai  [+] On the Circuit Complexity of Universal Hashing and Cryptography 
Universal hash functions are fundamental algorithmic constructs with widespread use throughout computer science. It was believed that the computation of such functions requires circuits of superlinear size. We show that this is not the case by giving an explicit construction of universal hash functions mapping n bits to n bits that can be computed by circuits of size O(n). We were led to this result through the study of methods to reduce the computational complexity of cryptographic primitives and protocols. Standard constructions of cryptographic primitives involve a large multiplicative computational overhead that grows with the desired level of security. We give evidence for the possibility of implementing cryptographic primitives (such as encryption, authentication, signatures, or secure two-party computation) while incurring only a *constant* computational overhead compared to insecure implementations of the same tasks. Based on joint work with Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai from STOC 2008.
 
2011/03/01  Klim Efremenko  [+] Local List-Decoding with a Constant Number of Queries 
Locally decodable codes of subexponential length were recently constructed in [E]. That result showed that these codes can handle up to a one-third fraction of errors. In this talk we show that the same codes can be locally unique-decoded from error rate up to one half, and locally list-decoded from error rate 1-α for any α>0, with only a constant number of queries and a constant alphabet size. This gives the first subexponential-length codes that can be locally list-decoded with a constant number of queries. We will also show a generic, simple way to amplify the error-tolerance of any locally decodable code.  
2011/03/08  Bo'az Klartag  [+] The communication complexity of the Vector in Subspace problem 
Suppose Alice has a unit vector v in R^{n}, Bob has a subspace E of half the dimension in R^{n}, and they are guaranteed that either v lies in E, or else v lies in the orthogonal complement of E. Both Alice and Bob have infinite computing power, but the communication channel between them is unfortunately very slow and expensive. How many bits have to be exchanged in order to decide whether v lies in E or not? Raz gave an O(sqrt n) protocol which, for each possible instance of the problem, outputs the correct answer with probability 99%. In this lecture we will discuss a lower bound of Omega(n^{1/3}) on the communication complexity. Joint work with O. Regev.  
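To make the promise concrete: if a single party held both inputs, membership could be decided by projecting v onto an orthonormal basis of E, since the squared projection norm is 1 when v lies in E and 0 when v is orthogonal to E. A small sketch of that one-party decision rule (helper names are mine; it of course sidesteps the communication question, which is the whole point of the talk):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lies_in_subspace(v, basis):
    """Decide the promise problem given both inputs: v is a unit
    vector and basis is an orthonormal basis of E. ||proj_E(v)||^2
    equals 1 if v is in E, and 0 if v is in E's orthogonal
    complement, so thresholding at 1/2 decides the promise."""
    proj_sq = sum(dot(v, b) ** 2 for b in basis)
    return proj_sq > 0.5

# Example in R^4 with E = span(e1, e2):
E = [[1, 0, 0, 0], [0, 1, 0, 0]]
print(lies_in_subspace([0.6, 0.8, 0, 0], E))   # v in E: True
print(lies_in_subspace([0, 0, 0.6, 0.8], E))   # v in complement: False
```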
2011/03/15  Michael Viderman  [+] Towards lower bounds on locally testable codes 
Locally testable codes (LTCs) are error-correcting codes for which membership in the code of a given word can be tested by reading a constant number of symbols. The main open problem in the area of locally testable codes is whether there exists an asymptotically good family of LTCs, and to resolve this question it suffices to consider the case of query complexity 3. We argue that to refute the existence of such an asymptotically good family, one should prove that the number of dual codewords of weight at most 3 is superlinear in the block length of the 3-query LTC.
We then show that every LTC must have linearly many redundant low-weight codewords in its dual. We actually prove the stronger claim that the test itself must use a linear number of redundant dual codewords (beyond the minimum number of basis elements required to characterize the code).
Based on two recent works: the first is joint with Eli Ben-Sasson, Venkatesan Guruswami, Tali Kaufman and Madhu Sudan, and the second is joint with Eli Ben-Sasson.
 
2011/03/22  Scott Aaronson  [+] A Linear-Optical Proof that the Permanent is #P-Complete. 
I'll show how the model of "quantum computing with linear optics", and in particular a universality result due to Knill, Laflamme, and Milburn, can be used to give a completely new proof of Valiant's famous theorem that the permanent is #P-complete. This proof is arguably simpler than the standard one, requiring far fewer "unexplained gadgets." It thereby provides another illustration of quantum computing as a powerful "higher-level programming language" for reasoning about classical counting classes. (An earlier example was the quantum-based proof that PP is closed under intersection.) If time permits, I'll also talk about linear-optical quantum computing more generally, and especially some open questions arising from my recent work on the subject with Alex Arkhipov.
 
2011/03/29  Benny Applebaum  [+] Pseudorandom Generators with Long Stretch and Low Locality from Random Local One-Way Functions 
We continue the study of pseudorandom generators (PRG) in NC^{0} that stretch n bits of input into m bits of output. While it is known that such generators are likely to exist for the case of small sublinear stretch m=n+n^{1-\eps}, it remains unclear whether achieving larger stretch, such as m=2n or even m=n+n^{2}, is possible. The existence of such PRGs, which was posed as an open question in previous works (e.g., [Cryan and Miltersen, MFCS 2001], [Mossel, Shpilka and Trevisan, FOCS 2003], and [Applebaum, Ishai and Kushilevitz, FOCS 2004]), has recently gained additional motivation due to several interesting applications. We obtain NC^{0} constructions of linear-stretch PRGs and polynomial-stretch weak PRGs, where in the latter case the distinguishing advantage is 1/poly(n) rather than negligible. These constructions are based on the one-wayness of ``random'' NC^{0} functions, a variant of an assumption made by Goldreich (ECCC 2000). We also show that our constructions give rise to strong inapproximability results for the densest-subgraph problem in d-uniform hypergraphs for constant d. This allows us to improve the previous bounds of Feige (STOC 2002) and Khot (FOCS 2004) from a constant inapproximability factor to n^{eps}-inapproximability, at the expense of relying on stronger assumptions.
 
2011/04/5  Adi Akavia  [+] Explicit Constructions Fooling Bohr Sets and Arithmetic Progressions 
In this talk I present novel results showing that there are explicit constructions (i.e., constructions that are deterministic and run in time polynomial in the output size) of the following pseudorandom sets:
 
2011/05/03  Irit Dinur  [+] On the structure of NPhard 3SAT instances, and an analogous question for locally testable codes 
The PCP theorem says that it is NPhard to approximate the fraction of satisfiable clauses in a 3SAT instance. What is the structure of such 3SAT instances? Here by the "structure" of a 3SAT instance we refer to the constraint (hyper)graph whose vertices are variables and whose hyperedges are the clauses.
We study this question in an analogous setting of so-called locally testable codes (LTCs). LTCs are error-correcting codes that come with a constraint (hyper)graph. We prove a structural theorem on LTCs: every LTC can be decomposed into a constant number of "basic" codes whose constraint graph is an expander.
Based on joint work with Tali Kaufman  
2011/05/04  Kevin Matulef  [+] Property Testing Lower Bounds via Communication Complexity 
We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. In this talk we will illustrate our technique by proving lower bounds for testing juntas, k-linearity, monotonicity, and more. Based on joint work with Eric Blais and Joshua Brody.
 
2011/05/17  Will Perkins  [+] The Bohman-Frieze Process 
The phase transition of the Erdos-Renyi random graph process, during which a giant component emerges, is understood in great detail. But how robust is it? We describe the phase transition of the Bohman-Frieze process, a simple modification of the Erdos-Renyi process that adds dependence between the edges and is biased in favor of joining isolated vertices. We show that the phase transitions of the two processes are qualitatively similar, sharing several critical exponents related to the size and structure of the connected components in the barely sub- and supercritical phases. The results extend to a wider class of processes, called 'bounded-size Achlioptas processes'.  
2011/05/24  Ishay Haviv  [+] Linear Index Coding via Semidefinite Programming 
In the index coding problem, introduced by Birk and Kol (INFOCOM, 1998), the goal is to broadcast an n-bit word to n receivers, where the receivers have side information represented by a graph G and each of them is interested in one bit of the word. The objective is to minimize the length of the transmitted message. For linear index coding, the minimum possible length is known to be equal to a graph parameter called minrank (Bar-Yossef et al., FOCS, 2006).
We show a polynomial-time algorithm that, given an n-vertex graph G with minrank k, finds a linear index code for G of length O(n^{f(k)}), where f(k) depends only on k. For example, for k=3 we obtain f(3) ~ 0.2574. Our algorithm employs a semidefinite program (SDP) introduced by Karger, Motwani and Sudan (J. ACM, 1998) for graph coloring. A crucial component of our analysis is bounding the objective value of the SDP in terms of the minrank.
As a side effect of our analysis, we show an exact expression for the maximum possible value of the Lov\'{a}sz theta-function of a graph with minrank k. This compares two classical upper bounds on the Shannon capacity of a graph.
Joint work with Eden Chlamtac, Tel Aviv University  
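A toy special case may help fix the setting (my own illustration, not the algorithm of the talk): when G is the complete graph, every receiver already knows all bits except its own, the minrank is 1, and broadcasting a single parity bit suffices in place of all n bits.

```python
from functools import reduce
from operator import xor

def encode(bits):
    """Linear index code for the complete side-information graph:
    the broadcast is the single parity bit of all n input bits."""
    return reduce(xor, bits)

def decode(message, known_bits):
    """A receiver XORs the broadcast with the n-1 bits it already
    knows, which cancels them and recovers its own bit."""
    return message ^ reduce(xor, known_bits, 0)

bits = [1, 0, 1, 1, 0]
msg = encode(bits)                      # one bit broadcast, not five
for i, b in enumerate(bits):
    assert decode(msg, bits[:i] + bits[i + 1:]) == b
```

For general graphs the side information is partial, and finding a short linear code (one of length governed by the minrank) is exactly the hard problem the talk addresses.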
2011/05/31  Or Meir  [+] IP = PSPACE vs. error correcting codes 
The IP theorem, which asserts that IP = PSPACE (Lund et al. and Shamir, in J. ACM 39(4)), is one of the major achievements of complexity theory. The known proofs of the theorem are based on the arithmetization technique, which transforms a quantified Boolean formula into a related polynomial. The intuition that underlies the use of polynomials is commonly explained by the fact that polynomials constitute good error-correcting codes. However, the known proofs seem tailored to the use of polynomials, and do not generalize to arbitrary error-correcting codes. In this work, we show that the IP theorem can be proved by using general error-correcting codes. We believe that this establishes a rigorous basis for the aforementioned intuition, and sheds further light on the IP theorem.

Date  Speaker  Title 
2009/10/20  Ronen Eldan  [+] A Polynomial Number of Random Points does not Determine the Volume of a Convex Body 
We prove that there does not exist an algorithm that, with non-negligible probability, approximates the volume of a convex body up to a constant factor, using a polynomially long sequence of random points uniformly distributed in the body.  
2009/10/27  Or Sattath  [+] A Quantum Lovasz Local Lemma 
The Lovasz Local Lemma is a powerful tool in probability theory to show the existence of combinatorial objects meeting a prescribed collection of ``weakly dependent'' criteria. We present a quantum generalization of this result, which replaces probabilities with a weaker notion of relative dimension and allows one to lower-bound the intersection size of vector spaces under certain independence conditions. Our result immediately applies to the Bounded-Occurrence k-QSAT problem: we show, for instance, that if each qubit appears in at most 2^k/(e \cdot k) projectors, there is a joint satisfiable state.
We then apply our results to the recently studied model of RANDOM k-QSAT, with the density (ratio of projectors to qubits) as a control parameter. Recent works have shown that the satisfiable region extends up to a density of $1$ in the large-k limit. Using a hybrid approach building on work by Laumann et al., we greatly extend the known satisfiable region for RANDOM k-QSAT to a density of Omega(2^k/k^2). Since our tool allows us to show the existence of joint satisfying states without the need to construct them, we are able to penetrate into regions where the satisfying states are conjectured to be entangled; the need for explicit constructions had limited previous approaches to product states. The necessary details from quantum computation and information will be given. This is joint work with Andris Ambainis (University of Latvia) and Julia Kempe (Tel-Aviv University)  
2009/11/03  Ricky Rosen  [+] Strong Parallel Repetition Theorem for Free Projection Games 
The parallel repetition theorem states that for any two-prover one-round game with value at most $1-\eps$ (for $\eps<1/2$), the value of the game repeated $n$ times in parallel is at most $(1-\eps^3)^{\Omega(n/\log s)}$, where $s$ is the size of the answer set \cite{RazParallelRepetitionThm}, \cite{Holenstein}.
For {\em Projection Games} the bound on the value of the game repeated $n$ times in parallel was improved to $(1-\eps^2)^{\Omega(n)}$ \cite{Rao08} and was shown to be tight \cite{RazCounterexample/focs/Raz08}. In this paper we show that if the questions are taken according to a product distribution, then the value of the repeated game is at most $(1-\eps^2)^{\Omega(n/\log s)}$, and if in addition the game is a {\em Projection Game} we obtain a {\em strong parallel repetition} theorem, i.e., a bound of $(1-\eps)^{\Omega(n)}$.  
2009/11/10  Vadim Lyubashevsky  [+] Public-Key Cryptographic Primitives Provably as Secure as Subset Sum 
We propose a semantically-secure public-key encryption scheme that is directly based on the hardness of the subset sum problem. The subset sum assumption required for the security of our scheme is weaker than that of existing subset-sum-based encryption schemes, namely the lattice-based schemes of Ajtai and Dwork (STOC '97), Regev (STOC '03, STOC '05), and Peikert (STOC '09). Additionally, our proof of security is simpler and rather direct. We also present a natural variant of our scheme that is secure against key-leakage attacks, as well as an oblivious transfer protocol that is secure against semi-honest adversaries. This is joint work with Adriana Palacio and Gil Segev.  
2009/11/17  Elad Haramati  [+] On the Structure of Cubic and Quartic Polynomials 
In our work we study the structure of polynomials of degree three and four that have high bias or high Gowers norm. In particular, we obtain the following results. 1. We give a canonical representation for degree-three or degree-four polynomials that have a significant bias (i.e., they are not equidistributed). This result generalizes the corresponding results from the theory of quadratic forms. It also significantly improves the results of Green and Tao, and of Kaufman and Lovett, for such polynomials.
2. For the case of degree-four polynomials with high Gowers norm, we show that (a subspace of codimension O(1) of) F^n can be partitioned into subspaces of dimension Omega(n) such that on each of the subspaces the polynomial is equal to some degree-three polynomial. It was shown by Green and Tao, and by Lovett, Meshulam and Samorodnitsky, that a quartic polynomial with a high Gowers norm is not necessarily correlated with any cubic polynomial. Our result shows that a slightly weaker statement does hold. The proof is based on finding a structure in the space of partial derivatives of the underlying polynomial.  
2009/11/24  Prahladh Harsha  [+] Composition of low-error 2-query PCPs using decodable PCPs 
Proof composition is an essential ingredient in all constructions of probabilistically checkable proofs (PCPs). In this talk, I'll present a new, generic composition method for low-error two-query PCPs.
Earlier composition methods were either inapplicable in the low-error regime or non-modular (i.e., very much tailored to the specific PCPs that were being composed), if not both. Furthermore, until recently, composition in the low-error regime suffered from incurring an extra consistency query, resulting in PCPs that are not two-query and hence much less useful for hardness-of-approximation reductions. I'll then illustrate how the new composition theorem can be used to give a considerably simpler and modular proof of the recent breakthrough result of Moshkovitz and Raz [FOCS 2008]: the construction of 2-query low-error PCPs of nearly linear length. One of the critical components in the new composition is a new variant of PCP called a "decodable PCP". A decodable PCP, besides being locally checkable as a standard PCP, is also locally decodable, in the sense that the original NP witness can be locally decoded from it. [Joint work with Irit Dinur]  
2009/12/01  Avinatan Hassidim  [+] Quantum Money 
2009/12/08  Or Meir  [+] Combinatorial PCPs with efficient verifier 
2009/12/15  Guy Kindler  [+] A Quantitative version of the Gibbard-Satterthwaite Theorem 
Consider an election between q alternatives, where each of the voters ranks the different alternatives, and the winner is determined according to some predefined function of this voting profile. Such a social choice function is called manipulable if a situation might occur where a voter who knows the rankings given by other voters can change her ranking in a way that does not reflect her true preference, but which leads to an outcome that is more desirable to her. Gibbard and Satterthwaite proved that any social choice function where more than two alternatives can be selected is manipulable, unless it is a dictatorship (where the outcome of the election depends only on the choices of one voter). But how often must one encounter preference profiles for which some voters have an incentive not to vote truthfully? In the case where the social choice function is neutral, namely when it is invariant under changing the names of the alternatives, we prove a lower bound on the fraction of manipulable preference profiles which is inverse polynomial in the number of voters and alternatives. This means that by trying out different rankings randomly, voters have a non-negligible chance of finding a beneficial manipulative ranking.
Joint work with Marcus Isaksson and Elchanan Mossel.  
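A tiny concrete instance of manipulability (my own toy example, not from the talk): under the plurality rule with alphabetical tie-breaking, a voter in a three-way tie can misreport her ranking and obtain an outcome she truly prefers.

```python
def plurality_winner(profile):
    """Plurality rule: the candidate with the most first-place votes
    wins; ties are broken alphabetically (a toy social choice
    function, neutral only up to the tie-breaking)."""
    scores = {}
    for ranking in profile:
        scores[ranking[0]] = scores.get(ranking[0], 0) + 1
    return min(scores, key=lambda cand: (-scores[cand], cand))

# True preferences of three voters over alternatives a, b, c:
truthful = [['b', 'c', 'a'], ['c', 'b', 'a'], ['a', 'b', 'c']]
assert plurality_winner(truthful) == 'a'      # three-way tie, 'a' wins

# Voter 1 (true preference c > b > a) misreports b as her top choice:
manipulated = [['b', 'c', 'a'], ['b', 'c', 'a'], ['a', 'b', 'c']]
assert plurality_winner(manipulated) == 'b'   # she prefers b to a
```

The quantitative theorem of the talk says that for neutral functions such beneficial misreports are not rare: an inverse-polynomial fraction of profiles admits one.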
2009/12/22  Amir Yehudayoff  [+] Sum-of-squares problem in complexity theory 
The talk will have three parts. In the first part, we will give a brief introduction to algebraic complexity theory and to the permanent polynomial. In the second part, we will address the computational complexity of the permanent, which is a major open problem in the theory of computing. We will, in fact, consider a restricted version of this question, namely, the computational complexity of the permanent in a non-commutative world (in which the variables can be thought of as representing matrices). We will relate the non-commutative complexity of the permanent to the so-called sum-of-squares problem, a well-studied mathematical problem that is interesting in its own right. In the third part, we will continue to study a special case of the sum-of-squares problem, the integer sum-of-squares problem, for which we will sketch the first asymptotic superlinear lower bound. Joint work with Pavel Hrubes and Avi Wigderson.
 
2009/12/29  Ariel Gabizon  [+] Simple Affine Extractors using Dimension Expansion 
Let F be the finite field of q elements. An (n,k)-affine extractor is a coloring of F^n with 2 colors, such that every affine subspace of dimension at least k in F^n is colored in a balanced way. Roughly speaking, the problem of explicitly constructing affine extractors gets harder as the field size q gets smaller and easier as k gets larger.
In this work, we construct affine extractors whenever q = \Omega( (n/k)^2 ), provided that F has characteristic \Omega(n/k). For a wide range of parameters this is a new result, but even in ranges of parameters where we do not improve or match known results, we believe our construction and proof have the advantage of being very simple. Our proof uses a result of Hou, Leung, and Xiang about the dimension of products of subspaces. I will present this result of [HLX] in the hope that it could be useful elsewhere in combinatorics and theoretical computer science. Joint work with Matt DeVos  
2010/01/05  Shachar Lovett  [+] Polynomial Threshold Functions: Structure, Approximation and Pseudorandomness 
We study the computational power of polynomial threshold functions (PTFs), that is, threshold functions of real polynomials over the Boolean cube. We provide new results bounding the computational power of this model.
Our first result shows that low-degree PTFs cannot approximate any function with many influential variables. We provide a couple of examples where this technique yields tight approximation bounds. Our second result relates to constructing pseudorandom generators fooling low-degree PTFs, where we extend previously known cases in which k-wise independence fools PTFs. Both results are based on a new tool: a regularity theorem for PTFs. This tool has also been developed in several other parallel works, where it has been useful in other domains as well. Based on joint work with Ido Ben-Eliezer and Ariel Yadin.  
2010/01/12  Lee-Ad Gottlieb  [+] A nonlinear approach to dimension reduction 
The celebrated Johnson-Lindenstrauss lemma says that every n points in Euclidean space can be represented with O(log n) dimensions with only a minor distortion of pairwise distances. It has been conjectured that a much-improved dimension reduction representation is achievable for many interesting data sets, by bounding the target dimension in terms of the intrinsic dimension of the data (for example, by replacing the log n term with the doubling dimension). This question appears to be quite challenging, requiring new (nonlinear) embedding techniques.
We make progress towards resolving this question by presenting two dimension reduction theorems with a similar flavor to the conjecture. For some intended applications, these results can serve as an alternative to the conjectured embedding. [Joint work with Robert Krauthgamer.]  
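For context, the classical Johnson-Lindenstrauss guarantee that the abstract starts from can be demonstrated with a random Gaussian projection in a few lines (a minimal illustration of the linear JL map, not the paper's nonlinear embeddings; all names and parameter choices below are ours):

```python
import numpy as np
from itertools import combinations

def jl_project(points, target_dim, seed=0):
    """Map the rows of `points` to R^target_dim with a random Gaussian matrix.

    The 1/sqrt(target_dim) scaling preserves squared norms in expectation."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    proj = rng.standard_normal((d, target_dim)) / np.sqrt(target_dim)
    return points @ proj

# 50 points in R^500 mapped down to R^40; pairwise distances survive
# up to a small multiplicative distortion (with high probability).
X = np.random.default_rng(1).standard_normal((50, 500))
Y = jl_project(X, 40)
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(50), 2)]
print(round(min(ratios), 2), round(max(ratios), 2))  # both close to 1
```

The conjecture discussed in the talk asks whether the target dimension 40 here could instead depend only on the intrinsic (e.g., doubling) dimension of the point set rather than on log n.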
2010/01/19  Iftach Haitner  [+] A Parallel Repetition Theorem for Any Cryptographic Protocol 
Whether or not parallel repetition improves security is a fundamental question in the study of protocols. While parallel repetition improves the security of 3-message protocols and of public-coin protocols, Bellare, Impagliazzo and Naor (FOCS '97) gave an example of a protocol for which parallel repetition does not improve the security at all.
We show that by slightly modifying any protocol, in a way that preserves its other properties, we get a protocol for which parallel repetition does improve the security (to any degree). In the second part of the talk (if time permits), I will present our recent results on basing cryptography on minimal hardness assumptions, where we give simpler and more efficient (in some cases tight) constructions of pseudorandom generators, statistically hiding commitments and universal one-way hash functions based on one-way functions.  
2010/02/23  Zohar Karnin  [+] Explicit Dimension Reduction and its Applications 
We construct a small set of explicit linear transformations mapping R^n to R^{O(\log n)}, such that the L_2 norm of any vector in R^n is distorted by at most 1 + o(1) in at least a 1 - o(1) fraction of the transformations in the set.
Although the trade-off between the distortion and the success probability is suboptimal compared with probabilistic arguments, we are nevertheless able to apply our construction to a number of problems. In particular, we use it to construct an \epsilon-sample (or pseudorandom generator) for spherical digons in S^{n-1} (the unit sphere in R^n), for \epsilon = o(1). This construction leads to an oblivious derandomization of the Goemans-Williamson MAX CUT algorithm and similar approximation algorithms (i.e., we construct a small set of hyperplanes, such that for any instance we can choose one of them to generate a good solution). We also construct an \epsilon-sample for linear threshold functions on S^{n-1}, for \epsilon = o(1).  
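The randomized step being derandomized here is the Goemans-Williamson hyperplane rounding. A minimal sketch of that step only (the SDP solve is omitted, and the toy "embedding" below is a placeholder we made up, not a real SDP solution):

```python
import numpy as np

def hyperplane_round(vectors, rng):
    """Goemans-Williamson rounding step: label each vertex by the side of a
    uniformly random hyperplane that its unit vector falls on."""
    r = rng.standard_normal(vectors.shape[1])  # normal of a random hyperplane
    return np.sign(vectors @ r)                # +1 / -1 side labels

def cut_value(labels, edges):
    return sum(1 for u, v in edges if labels[u] != labels[v])

# Toy instance: a 4-cycle. A real use would round the unit vectors returned by
# the MAX CUT SDP; here the identity matrix stands in as a placeholder.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vectors = np.eye(4)
labels = hyperplane_round(vectors, np.random.default_rng(0))
cut = cut_value(labels, edges)
```

The abstract's \epsilon-sample replaces the random choice of r by one hyperplane chosen from a small explicit set, which is what makes the rounding oblivious and deterministic.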
2010/03/02  Ran Raz  [+] Two "simple" problems that imply strong circuit lower bounds 
I will present two families of mathematical problems that are very simple to describe, that seem very natural to study from geometric, algebraic or combinatorial points of view, and that are seemingly unrelated to theoretical computer science, yet whose solution would give exceptionally strong results in theoretical computer science; namely, super-polynomial lower bounds for the size of general arithmetic circuits and formulas.
More specifically, I will discuss 'elusive functions and lower bounds for arithmetic circuits', an approach to proving exponential lower bounds for circuit size; and 'tensor-rank and lower bounds for arithmetic formulas', an approach to proving super-polynomial lower bounds for formula size.  
2010/03/09  Oliver Friedmann  [+] An Exponential Lower Bound for the Latest Deterministic Strategy Iteration Algorithms 
We present a new exponential lower bound for the two most popular deterministic variants of the strategy improvement algorithm for solving parity, mean payoff, discounted payoff and simple stochastic games. The first variant improves every node in each step, maximizing the current valuation locally, whereas the second variant computes the optimal improvement in each step. We outline families of games on which both variants require exponentially many strategy iterations.
 
2010/03/16  Dan Gutfreund  [+] Derandomizing Arthur-Merlin Games and Approximate Counting Implies Exponential-Size Lower Bounds 
We show that if Arthur-Merlin protocols can be derandomized, then there is a Boolean function computable in deterministic exponential time with access to an NP oracle that cannot be computed by Boolean circuits of exponential size. This in turn implies the existence of very strong pseudorandom generators. We also show that the same conclusion holds if approximately counting the number of accepting paths of a nondeterministic Turing machine up to multiplicative factors can be done in nondeterministic polynomial time. In other words, giving nondeterministic fully polynomial-time approximation schemes for #P-complete problems requires proving exponential-size circuit lower bounds.
A key ingredient in our proof is a connection between computational learning theory and exponential-size lower bounds. We show that the existence of deterministic learning algorithms with certain properties implies exponential-size lower bounds, where the complexity of the hard function is related to the complexity of the learning algorithm. Joint work with Akinori Kawachi.  
2010/03/23  Eden Chlamtac  [+] New Approximation Algorithms for Densest k-Subgraph 
We consider the Densest k-Subgraph (DkS) problem: given a graph G and a parameter k, find a subgraph of G on k vertices with the maximum number of edges. This is a well-known optimization problem, and a natural optimization version of the decision problem k-CLIQUE.
The previous best known approximation ratio, by Feige, Kortsarz and Peleg in 2001, was O(n^{1/3-epsilon}) for epsilon estimated at around 1/60. We improve on this result, giving an O(n^{1/4+delta}) approximation for any delta > 0. In essence, our algorithm relies on the following principle: in a random graph with average degree n^alpha, a planted k-subgraph can be detected if it has average degree (strictly) greater than k^alpha. We show that this principle largely carries over to provide results in the same spirit for worst-case (non-random) instances. Joint work with Aditya Bhaskara, Moses Charikar, Uriel Feige and Aravindan Vijayaraghavan.  
2010/04/13  Gillat Kol  [+] Locally Testable Codes Analogues to the Unique Games Conjecture Do Not Exist 
The Unique Games Conjecture (UGC) is an important open problem in the research of PCPs and hardness of approximation. The conjecture is a strengthening of the PCP Theorem, predicting the existence of a special type of PCP verifiers: 2-query verifiers that only make unique tests. Moreover, the UGC predicts that such PCP verifiers can have almost-perfect completeness and low soundness. The computational complexity notion of a PCP is closely related to the combinatorial notion of a Locally Testable Code (LTC). LTCs are error-correcting codes with codeword testers that make only a constant number of queries to the tested word. All known PCP constructions use LTCs as building blocks. Furthermore, to obtain PCPs with certain properties, one uses LTCs with corresponding properties.
In light of the strong connection between PCPs and LTCs, one may conjecture the existence of LTCs with properties similar to the ones required by the UGC. In this work we show that such LTCs do not exist. That is, we consider 2-query LTCs with codeword testers that only make unique tests. Roughly speaking, we show that any such LTC with relative distance close to 1, almost-perfect completeness and low soundness is of constant size. The result is intended to point out the limitations of some of the current techniques for constructing PCPs. In particular, it suggests that in order to prove the UGC using corresponding codes with local testability features, one must consider a different definition (than the one commonly used) of a family of LTCs. Our result also shows limitations of unique tests (compared, for example, to projection tests).  
2010/04/27  Oded Regev  [+] Tight Bound for the Gap Hamming Distance Problem 
We consider the Gap Hamming Distance problem in communication complexity. Here, Alice receives an $n$-bit string $x$, and Bob receives an $n$-bit string $y$.
They are promised that the Hamming distance between $x$ and $y$ is either at least $n/2+\sqrt{n}$ or at most $n/2-\sqrt{n}$, and their goal is to decide which is the case. We show a tight lower bound of $\Omega(n)$ on the randomized communication complexity of the problem. The bound is based on a new geometric statement regarding correlations in Gaussian space, related to a result of C. Borell from 1985. Partly based on a joint paper with Amit Chakrabarti. NOTE: Thanks to special permission by the organizers (no bribes involved!), this talk will be about 75 minutes long, but the audience should feel free to leave at any time during the talk.  
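The promise problem itself is trivial given both strings; the content of the result is that deciding it with $x$ and $y$ held by different parties requires $\Omega(n)$ communication. A small sketch of the problem statement (function names are ours):

```python
from math import sqrt

def hamming(x, y):
    """Number of positions where the two bit strings differ."""
    return sum(a != b for a, b in zip(x, y))

def gap_hamming(x, y):
    """Decide the promise problem: is the Hamming distance at least
    n/2 + sqrt(n) ('FAR') or at most n/2 - sqrt(n) ('CLOSE')?"""
    n = len(x)
    d = hamming(x, y)
    if d >= n / 2 + sqrt(n):
        return "FAR"
    if d <= n / 2 - sqrt(n):
        return "CLOSE"
    raise ValueError("inputs violate the promise")

x = [0] * 16                               # n = 16, so the thresholds are 8 +/- 4
print(gap_hamming(x, [1] * 12 + [0] * 4))  # distance 12 >= 12 -> FAR
print(gap_hamming(x, [1] * 4 + [0] * 12))  # distance 4 <= 4 -> CLOSE
```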
2010/05/04  Amir Shpilka  [+] On the Complexity of Boolean Functions in Different Characteristics 
Every Boolean function on n variables can be expressed as a unique multivariate polynomial modulo p for every prime p. In this work, we study how the degree of a function in one characteristic affects its complexity in other characteristics. We establish the following general principle: functions with low degree modulo p must have high complexity in every other characteristic q.
More precisely, we show the following results about Boolean functions f: {0,1}^n -> {0,1} which depend on all n variables, and distinct primes p, q: 1. If f has degree o(\log n) modulo p, then it must have degree n^{1-o(1)} modulo q. Thus a Boolean function has degree o(\log n) in only one characteristic. This result is essentially tight, as there exist functions that have degree \log n in every characteristic. 2. If f has degree d = o(\log n) modulo p, it cannot be computed correctly on more than a 1 - p^{-O(d)} fraction of the hypercube by polynomials of degree n^{1/2 - \eps} modulo q. As a corollary of the above results, it follows that if f has degree o(\log n) modulo p, then it requires super-polynomial size AC^0[q] circuits. This gives a lower bound for a broad and natural class of functions. This is joint work with Shachar Lovett (Weizmann) and Parikshit Gopalan (MSR).  
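The phenomenon in item 1 can already be seen on parity: it has degree 1 modulo 2 but full degree modulo every other prime. A small sketch (our own illustration, not the paper's machinery) that computes the degree of the unique multilinear representation modulo p by Mobius inversion:

```python
from itertools import product

def degree_mod_p(f, n, p):
    """Degree of the unique multilinear polynomial agreeing with
    f: {0,1}^n -> {0,1} over Z_p. By Mobius inversion, the coefficient of
    prod_{i in S} x_i is sum over subsets T of S of (-1)^{|S|-|T|} f(1_T)."""
    deg = -1  # convention for the zero polynomial
    for mask in product([0, 1], repeat=n):
        support = [i for i in range(n) if mask[i]]
        coeff = 0
        for sub in product([0, 1], repeat=len(support)):  # subsets T of S
            point = [0] * n
            for i, bit in zip(support, sub):
                point[i] = bit
            coeff += (-1) ** (len(support) - sum(sub)) * f(tuple(point))
        if coeff % p != 0:
            deg = max(deg, len(support))
    return deg

parity = lambda x: sum(x) % 2
print(degree_mod_p(parity, 3, 2))  # 1  (x1 + x2 + x3 over F_2)
print(degree_mod_p(parity, 3, 3))  # 3  (full degree in any other characteristic)
```

Over the integers, parity on 3 variables is x1+x2+x3 - 2(x1x2+x1x3+x2x3) + 4x1x2x3; reducing modulo 2 kills everything but the linear terms, while modulo 3 the top coefficient 4 survives.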
2010/05/11  Jelani Nelson  [+] Applications of FT-mollification 
In this talk I will describe a general method, "FT-mollification", for approximating an arbitrary bounded multivariate function by a smooth approximation while maintaining quantitatively good control on high-order partial derivatives. I will then describe a few of its known applications, including: (1) An optimal space upper bound for estimating moments in the data stream model.
(2) A simple proof of a classical result in approximation theory, Jackson's inequality, and one of its extensions to higher dimensions by Newman and Shapiro (1963). (3) An optimal derandomization of the Berry-Esseen theorem by bounded independence. (4) Proofs that bounded independence fools degree-2 threshold functions and intersections of halfspaces. (5) An improved bound for agnostically learning halfspaces. Based on joint works with Ilias Diakonikolas, Daniel Kane, and David Woodruff.  
2010/05/25  Ilya Volkovich  [+] Polynomial-Time Deterministic Identity Testing of Depth-4 Multilinear Circuits with Bounded Top Fan-in 
Polynomial Identity Testing (PIT) is one of the central problems in algebraic complexity theory and algorithm design: given an arithmetic circuit $C$ over a field $\F$ with input variables $x_1,x_2,\ldots,x_n$, we need to determine whether $C$ computes the identically zero polynomial.
In this paper we study the PIT problem for multilinear depth-$4$ circuits with fan-in $k$ at the top $+$ gate. This model is known as multilinear $\Spsp(k)$ circuits. We give the first polynomial-time deterministic identity testing algorithm for such circuits. The running time of our algorithm is $\poly(n) \cdot s^{O(k^2)}$, where $n$ is the number of variables, $s$ is the size of the circuit and $k$ is the fan-in of the top gate. Note that for polynomially large $s$ and constant $k$, the running time of the algorithm is polynomial. The importance of this model arises from \cite{AgrawalVinay08}, where it was shown that derandomizing polynomial identity testing for general $\Spsp$ circuits implies a derandomization of PIT for general arithmetic circuits. We obtain our results by suggesting a natural generalization of the approach used to design most of the PIT algorithms for depth-$3$ $\Sps(k)$ circuits, so as to deal with depth-$4$ $\Spsp(k)$ circuits. In fact, we prove a structural theorem on multilinear $\Spsp(k)$ circuits that compute the zero polynomial. More specifically: let $C$ be a multilinear $\Spsp(k)$ circuit that computes the zero polynomial and satisfies some additional properties. We show that any polynomial computed by a *subcircuit* of $C$ can contain only a ``small'' number of (nonzero) monomials.  
2010/06/01  Gil Segev  [+] Public-key Cryptosystems Resilient to Key Leakage 
Most of the work in the analysis of cryptographic schemes has traditionally focused on abstract adversarial models that do not capture "side-channel attacks". Such attacks exploit various forms of unintended leakage of sensitive information, which is inherent to almost all physical implementations. Inspired by extremely devastating real-life attacks, in this talk I will describe a recent approach for modeling and combating side-channel attacks. I will focus on a framework for modeling a wide range of attacks that rely on partial leakage of secret keys. In particular, I will present a new, simple, and efficient construction of a public-key encryption scheme that is resilient to leakage of almost its whole secret key, as well as a generic method for constructing leakage-resilient encryption schemes that can be based on a variety of number-theoretic assumptions. Based on joint work with Moni Naor.
 
2010/06/08  Shachar Lovett  [+] A lower bound for dynamic approximate membership data structures 
Date  Speaker  Title 
2008/11/04  Vadim Lyubashevsky  [+] Towards Practical Lattice-Based Cryptography 
Recently, there has been a lot of theoretical work directed towards constructing cryptographic protocols based on the hardness of lattice problems. Because the security of lattice-based schemes can be based on the hardness of worst-case problems, and because it is conjectured that these protocols are immune against quantum attacks, lattice-based cryptography holds great appeal. The main disadvantage is that these schemes generally involve operations on, and storage of, large n x n matrices. This results in the protocols being rather inefficient and unsuitable for practical use.
In this talk, I will describe how the inherent inefficiency of lattice constructions can be overcome by considering algebraic lattices with additional structure: the lattices will correspond to ideals in polynomial rings (we call them ideal lattices). I will show how to construct cryptographic primitives such as collision-resistant hash functions and signature schemes whose security is based on the hardness of worst-case problems for ideal lattices. The hash functions that we build have almost linear running time, and turn out to be rather efficient in practice. The signature scheme also has running time that is almost linear (up to polylogarithmic factors), and so it is more efficient (at least asymptotically) than signatures whose security is based on factoring and discrete log. The talk will not be very technical, and should be pretty self-contained. Since this is a relatively new field, I will also be presenting a lot of open problems.  
2008/11/11  Oded Regev  [+] Unique Games with Entangled Provers are Easy 
We consider one-round games between a classical verifier and two provers who share entanglement. We show that when the constraints enforced by the verifier are `unique' constraints (i.e., permutations), the value of the game can be well approximated by a semidefinite program. Essentially the only algorithm known previously was for the special case of binary answers, as follows from the work of Tsirelson in 1980. Among other things, our result implies that the variant of the unique games conjecture in which we allow the provers to share entanglement is false. Our proof is based on a novel `quantum rounding technique', showing how to take a solution to an SDP and transform it into a strategy for entangled provers.
NOTE: This talk requires absolutely no prior knowledge of quantum computation. Joint work with Julia Kempe and Ben Toner.  
2008/11/25  Guy Kindler  [+] Can cubic tiles be sphere-like? 
The unit cube tiles R^d by Z^d, in the sense that its translations by vectors from Z^d cover R^d. It is natural to ask for the minimal surface area of a body that tiles R^d by Z^d. The volume of any such body should clearly be at least 1, and therefore its surface area must be at least that of a unit-volume ball, which is of order sqrt(d). The surface area of the cube, however, is of order d, and no better tiling was known. In this work we use a random construction to show that the optimal surface area is indeed of order sqrt(d); namely, there exist bodies that tile R^d as a cube would, but have sphere-like surface areas.
Tiling problems have been studied for well over a century, but this particular tiling problem was also recently considered in computer science because of its relations to the Unique Games conjecture and to the Parallel Repetition problem. Indeed, our current result follows from the same idea that was used recently by Raz in his counterexample to the strong Parallel Repetition conjecture. Joint work with Ryan O'Donnell, Anup Rao, and Avi Wigderson.  
2008/12/02  Ronen Shaltiel  [+] Unconditional weak derandomization of weak algorithms: explicit versions of Yao's Lemma 
A simple averaging argument shows that, given a randomized algorithm $A$ and a function $f$ such that for every input $x$, $\Pr[A(x)=f(x)] \ge 1-\rho$ (where the probability is over the coin tosses of $A$), there exists a \emph{nonuniform} deterministic algorithm $B$ ``of roughly the same complexity'' such that $\Pr[B(x)=f(x)] \ge 1-\rho$ (here the probability is over a uniformly chosen input $x$). This implication is often referred to as ``the easy direction of Yao's lemma'' and can be thought of as ``weak derandomization'', in the sense that $B$ is deterministic but only succeeds on most inputs. The implication follows because there is a fixed value $r'$ for the random coins of $A$ such that ``hardwiring $r'$ into $A$'' produces a deterministic algorithm $B$.
However, this argument does not give a way to \emph{explicitly construct} $B$. We consider the task of proving \emph{uniform versions} of the implication above; that is, how to \emph{explicitly construct} a deterministic algorithm $B$ when given a randomized algorithm $A$. We prove such derandomization results for several classes of randomized algorithms, including: randomized communication protocols, randomized decision trees (here we improve a previous result by Zimand), randomized streaming algorithms, and randomized algorithms computed by polynomial-size constant-depth circuits. Our proof uses an approach suggested by Goldreich and Wigderson and ``extracts randomness from the input''. We show that specialized (seedless) extractors can produce randomness that is in some sense not correlated with the input. Our analysis can be applied to \emph{any} class of randomized algorithms, as long as one can explicitly construct the appropriate extractor. Some of our derandomization results follow by constructing a new notion of seedless extractors that we call ``extractors for recognizable distributions'', which may be of independent interest.  
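The averaging step referred to above is a one-line calculation. Writing $\mathrm{err}(r) = \Pr_x[A(x;r) \neq f(x)]$ for a fixed choice of coins $r$:

```latex
\mathbb{E}_{r}\bigl[\mathrm{err}(r)\bigr]
  = \mathbb{E}_{r}\,\mathbb{E}_{x}\bigl[\mathbf{1}\{A(x;r) \neq f(x)\}\bigr]
  = \mathbb{E}_{x}\,\Pr_{r}\bigl[A(x;r) \neq f(x)\bigr]
  \le \rho,
```

so some fixed $r'$ satisfies $\mathrm{err}(r') \le \rho$, and $B(x) = A(x;r')$ is the promised deterministic algorithm. The difficulty the talk addresses is that this $r'$ is only shown to exist, not constructed.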
2008/12/09  Robert Krauthgamer  [+] On embedding edit distance into L_1 
The edit distance (aka Levenshtein distance) between two strings is the number of character insertions, deletions and substitutions required to transform one string to the other. A very powerful paradigm for solving computational problems on the metric space induced by the edit distance is to embed this metric into L_1, using a lowdistortion map (if possible).
I will first present a low-distortion embedding of edit distance on permutations (aka the Ulam metric) [based on joint work (i) with Moses Charikar, and (ii) with Parikshit Gopalan and T.S. Jayram]. I will then discuss lower bounds on the distortion required to embed edit distance on 0-1 strings [joint work with Yuval Rabani] and, more briefly, on permutations [joint work with Alex Andoni].  
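For reference, the metric being embedded is computable by the textbook dynamic program (a standard sketch, unrelated to the embeddings discussed in the talk):

```python
def edit_distance(s, t):
    """Levenshtein distance via the classical dynamic program,
    keeping only a single rolling row of the DP table."""
    n = len(t)
    dp = list(range(n + 1))        # distances from "" to each prefix of t
    for i in range(1, len(s) + 1):
        prev, dp[0] = dp[0], i     # prev = old dp[j-1]; dp[0] = dist(s[:i], "")
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # delete s[i-1]
                        dp[j - 1] + 1,                  # insert t[j-1]
                        prev + (s[i - 1] != t[j - 1]))  # substitute or match
            prev = cur
    return dp[n]

print(edit_distance("kitten", "sitting"))  # 3
```

The quadratic cost of this computation is part of why low-distortion embeddings into L_1, where distances are cheap to estimate, are so attractive.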
2008/12/16  Avinatan Hassidim  [+] Derandomizing Algorithms on Product Distributions 
Getting the deterministic complexity closer to the best known randomized complexity is an important goal in algorithms and communication protocols. In this work, we investigate the case where, instead of one input, the algorithm/protocol is given multiple inputs sampled independently from an arbitrary unknown distribution. We show that in this case a strong and generic derandomization result can be obtained by a simple argument.
Our method relies on extracting randomness from ``same-source'' product distributions, which are distributions generated from multiple independent samples from the same source. The extraction process succeeds even for arbitrarily low min-entropy, and is based on the order of the values and not on the values themselves. (This may be seen as a generalization of the classical method of von Neumann, extended by Elias, for extracting randomness from a biased coin.) The tools developed in the paper are generic, and can be used elsewhere. We present applications to streaming algorithms and to implicit probe search \cite{FiatNaor93}. We also refine our method to handle product distributions where the i'th sample comes from one of several arbitrary unknown distributions. This requires creating a new set of tools, which may also be of independent interest. Joint work with Ariel Gabizon.  
2008/12/23  Guy Rothblum  [+] When and How Can Data be Efficiently Released with Privacy? 
We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a ``sanitization'' of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. We focus on the case where the process is non-interactive; once the sanitization has been released, the original data and the curator play no further role.
Blum et al. [STOC '08] showed a remarkable result: for any collection of counting predicate queries, the exponential mechanism of McSherry and Talwar [FOCS '07] can be used to (inefficiently) generate a synthetic data set that maintains approximately correct fractional counts for all of the queries, while ensuring a strong privacy guarantee known as differential privacy. In this work we investigate the computational complexity of such non-interactive privacy mechanisms, mapping the boundary between feasibility and infeasibility. We show: 1. For any polynomial-size query set C and polynomial-size data universe, there is an efficient algorithm for generating a privacy-preserving synthetic dataset that maintains approximate fractional counts, even when the size of the original dataset is sub-polynomial in C. 2. When either the query set or the data universe is super-polynomial in size, assuming one-way functions exist, there is no efficient general method for releasing a privacy-preserving synthetic dataset that maintains approximate fractional counts. In particular, this is a case where the exponential mechanism cannot, in general, be implemented efficiently. 3. Turning to the potentially easier problem of privately generating an arbitrary data structure (not necessarily synthetic data) from which it is possible to approximate counts for a given concept, there is a tight connection between hardness of sanitization and the existence of traitor tracing schemes: methods of content distribution in which keys are assigned to subscribers in a way that, given any useful ``pirate'' decoding box constructed by a coalition of malicious subscribers, it is possible to identify at least one of them. Joint work with Cynthia Dwork, Moni Naor, Omer Reingold and Salil Vadhan.  
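For concreteness, the exponential mechanism of McSherry and Talwar referenced above, in its simplest form, samples an output with probability proportional to exp(eps * score / 2). A minimal sketch assuming a sensitivity-1 score function; the data and all names below are made up:

```python
import math
import random

def exponential_mechanism(candidates, score, eps, rng):
    """Sample r with probability proportional to exp(eps * score(r) / 2),
    which is eps-differentially private when score has sensitivity 1."""
    weights = [math.exp(eps * score(r) / 2) for r in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(candidates, weights=weights, k=1)[0], probs

# Toy example: privately release the (approximately) most common item,
# scoring each candidate by its hypothetical count in the dataset.
counts = {"a": 30, "b": 28, "c": 5}
items = list(counts)
choice, probs = exponential_mechanism(items, counts.get, eps=0.5,
                                      rng=random.Random(0))
print(choice, [round(p, 3) for p in probs])
```

High-scoring candidates dominate the distribution, yet every candidate retains nonzero probability; the inefficiency the abstract discusses arises when the candidate set (e.g., all synthetic datasets) is exponentially large.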
2008/12/30  Amnon Ta-Shma  [+] Short seed extractors against quantum storage 
2009/01/06  Adi Akavia  [+] Very Local Self-Correcting of Homomorphism and MPC Codes 
Blum, Luby, and Rubinfeld gave a local self-correcting algorithm for homomorphism codes $Hom(G,H)$, for any finite abelian groups $G$ and $H$.
The BLR algorithm, given an entry location $s$ and oracle access to a corrupted codeword $w$ close to a codeword $C$, outputs the value of $C$ on entry $s$ while reading only a constant number of entries in $w$. Although the number of *entries* read by the BLR algorithm is constant, the number of read *bits* is $O(\log |H|)$, as each alphabet symbol is represented by $\log |H|$ bits. This number of read bits may be quite large with respect to the information rate; in particular, for $Hom(Z_N,Z_N)$ the number of read bits is larger than the information rate. In this work we present a local self-correcting algorithm for homomorphism codes $Hom(Z_N^k,Z_N)$ that reads $O(k)$ *bits* to output a single bit of the value of the closest codeword on the given entry (in a non-standard binary representation); the algorithm is restricted to codewords $C$ and entries $s$ corresponding to invertible group elements, and extends to other codes $Hom(G,H)$. Our algorithm improves over prior works as long as $k = O(\log N)$. In particular, for $Hom(Z_N,Z_N)$ our algorithm reduces the number of read bits from $O(\log N)$ to $O(1)$. In addition, we use our algorithm to obtain the first local self-correcting algorithm for the MPC codes of Akavia-Goldwasser-Safra (FOCS '03), and for their generalizations defined here. In terms of techniques, we develop a new approach for local self-correcting in which we first reduce the self-correcting problem to a property testing problem concerning the location of significant Fourier transform coefficients (a.k.a. SFT-testing), and then give an efficient algorithm solving SFT-testing with $O(k)$ read bits. The SFT-testing algorithm may be of independent interest.  
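The homomorphism identity behind BLR-style correction is worth seeing concretely: any codeword $C \in Hom(Z_N, Z_N)$ satisfies $C(s) = C(s+r) - C(r)$, so two random queries plus a majority vote recover $C(s)$. A sketch of the entry-level corrector (not the bit-efficient version the talk presents), with parameters we made up:

```python
import random

def self_correct(w, N, s, trials=50, rng=None):
    """Recover C(s) for the codeword C in Hom(Z_N, Z_N) closest to the
    corrupted word w, using C(s) = C(s + r) - C(r): each trial reads two
    random entries, and a majority vote suppresses the corrupted ones."""
    rng = rng or random.Random(0)
    votes = {}
    for _ in range(trials):
        r = rng.randrange(N)
        guess = (w[(s + r) % N] - w[r]) % N
        votes[guess] = votes.get(guess, 0) + 1
    return max(votes, key=votes.get)

N, c = 101, 7
codeword = [c * x % N for x in range(N)]   # the homomorphism x -> 7x mod 101
corrupted = list(codeword)
for x in (3, 20, 55, 70, 90):              # corrupt ~5% of the entries
    corrupted[x] = (corrupted[x] + 1) % N
print(self_correct(corrupted, N, s=3))     # 21 = 7*3 mod 101, though w[3] is wrong
```

Each trial is wrong only when one of its two random queries hits a corrupted entry, so with corruption rate delta each vote errs with probability at most 2*delta and the majority is correct with overwhelming probability.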
2009/01/13  Dana Moshkovitz  [+] Two-Query PCP with Sub-Constant Error 
We show that the NP-complete language 3-Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query.
Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size. There were also PCP Theorems with sub-constant error and almost-linear size, but with a constant number of queries larger than 2. As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following: 1) 3-Sat cannot be efficiently approximated to within a factor of 7/8+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 7/8+epsilon for any constant epsilon>0, under polynomial reductions (Hastad). 2) 3-Lin cannot be efficiently approximated to within a factor of 1/2+o(1), unless P=NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness factor was 1/2+epsilon for any constant epsilon>0, under polynomial reductions (Hastad). 3) A PCP Theorem with amortized query complexity 1+o(1) and amortized free-bit complexity o(1). Previously, the best known amortized query complexity and free-bit complexity were 1+epsilon and epsilon, respectively, for any constant epsilon>0 (Samorodnitsky and Trevisan). One of the new ideas that we use is a new technique for doing the composition step in the (classical) proof of the PCP Theorem without increasing the number of queries to the proof. We formalize this as a composition of new objects that we call Locally Decode/Reject Codes (LDRCs). The notion of LDRCs was implicit in several previous works, and we make it explicit in this work. We believe that the formulation of LDRCs and their construction are of independent interest. This is joint work with Ran Raz.  
2009/01/20  Klim Efremenko  [+] 3-Query Locally Decodable Codes of Subexponential Length 
Locally Decodable Codes (LDCs) allow one to decode any particular symbol of the input message by making a constant number of queries to a codeword, even if a constant fraction of the codeword is damaged. In a recent work \cite{Yekhanin08}, Yekhanin constructs a $3$-query LDC with subexponential codeword length $\exp(\exp(O(\frac{\log n}{\log\log n})))$. However, this construction requires the conjecture that there are infinitely many Mersenne primes. In this paper we give an unconditional $3$-query LDC construction with a shorter codeword length of $\exp(\exp(O(\sqrt{\log n \log\log n})))$. We also give a $2^r$-query LDC with length $\exp(\exp(O(\sqrt[r]{\log n \log\log^{r-1} n})))$. The main ingredient in our construction is the existence of super-polynomial size set-systems with restricted intersections, due to \cite{Grolmusz00}, which exist only over composite numbers.
 
2009/01/27  Benny Applebaum  [+] Public Key Cryptography from Different Assumptions 
We construct a new public-key encryption scheme based on two assumptions: (1) It is hard to distinguish between a random set of m sparse linear equations over n variables in GF(2) (e.g., O(n) equations, each involving 10 variables), and a random set of such equations that have a solution satisfying a 1-\mu fraction of them, for some small \mu.
(2) It is hard to distinguish between a random unbalanced bipartite graph of small degree (e.g., 10), and a similar graph with a randomly planted non-expanding set of vertices. Most, if not all, previous constructions of public-key encryption used hardness assumptions with significant algebraic structure. Our new assumptions, positing indistinguishability from uniform of certain natural distributions on instances of NP-complete problems, seem relatively unstructured and qualitatively different from previous ones. We give some evidence for these assumptions by studying their resistance to certain natural algorithms, and by relating them to variants of more widely studied assumptions such as the hardness of the ``search version'' of learning parity with noise and the planted clique problem. Joint work with Boaz Barak and Avi Wigderson.  
2009/03/03  Elad Eban  [+] Interactive Proofs For Quantum Computations 
The widely held belief that BQP strictly contains BPP raises fundamental questions: upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked [Vaz07]: if computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [GMR85] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine with access to a few qubits. Our main theorem can be roughly stated as: "Any language in BQP has a QPIP, and moreover, a fault-tolerant one" (providing a partial answer to a challenge posed in [Aar07]). We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP, however, is not fault-tolerant. Our second protocol uses a polynomial-codes QAS due to Ben-Or, Crepeau, Gottesman, Hassidim, and Smith [BOCG+06], combined with quantum fault tolerance and secure multiparty quantum computation techniques. A slight modification of our constructions makes the protocol "blind": the quantum computation and input remain unknown to the prover.
 
2009/03/17  Andrew Wan  [+]Quasi-Cryptography 
We propose the study of quasi-cryptographic primitives and protocols.
These are relaxed versions of standard cryptographic primitives and protocols in which the adversary may be given more resources than some of the honest parties. The purpose of this study is to obtain a better understanding of some of the obstacles to basing cryptography on NP-hardness, as well as the relations between various cryptographic primitives. We make a first step in this direction by investigating the simplest quasi-cryptographic primitive: quasi-one-way functions. Extending an approach proposed by Gutfreund, Shaltiel, and Ta-Shma (CCC 2005), we construct an ``infinitely often'' version of this primitive, partially answering an open question asked by these authors. On the negative side, we show that constructing a length-doubling quasi-pseudorandom generator from a quasi-one-way function (even a quasi-one-way permutation) is impossible via fully black-box reductions. This contrasts with the standard cryptographic setting, where such constructions are possible. (Joint work with Andrej Bogdanov and Kunal Talwar)  
2009/03/24  Or Sattath  [+]The Pursuit For Uniqueness: Extending the Valiant-Vazirani Theorem to the Probabilistic and Quantum Settings 
In 1985, Valiant and Vazirani showed [VV85] that solving NP problems under the promise that "yes" instances have only one witness is powerful enough to solve the entire class NP (under randomized reductions).
We are interested in extending this result to the quantum setting. We prove extensions to the classes Merlin-Arthur (MA) and Quantum-Classical Merlin-Arthur (QCMA) [AN02]. Our results have implications for the complexity of approximating the ground state energy of a quantum local Hamiltonian with a unique ground state and an inverse-polynomial spectral gap. We show that estimating, within polynomial accuracy, the ground state energy of poly-gapped 1D local Hamiltonians is QCMA-hard, under randomized reductions. This is in strong contrast to the case of constant-gapped 1D Hamiltonians, which is in NP [Has07]. Moreover, it shows that unless QCMA can be reduced to NP by randomized reductions, there is no classical description of the ground state of every poly-gapped local Hamiltonian that allows the efficient calculation of expectation values. Finally, we conjecture that an analogous result for the class Quantum Merlin-Arthur (QMA) is impossible. This is formulated by separating the two relevant classes, QMA and its "unique" version UQMA, using the definition by Aaronson and Kuperberg [AK06] of quantum oracles.  
2009/03/31  Eden Chlamtac  [+]Using SDP Hierarchies for Approximation Algorithms 
Semidefinite programming (SDP) is a convex optimization technique that has become one of the main tools in approximation algorithms in recent years.
For a large class of combinatorial optimization problems, the following approach gives the best possible approximation (under certain complexity-theoretic assumptions): phrase the combinatorial optimization problem as an Integer Program (IP), take a natural SDP relaxation of the IP, find the relaxed (SDP) optimum, and round it back to an integer solution. One possibility for improving this approach is to take tighter SDP relaxations (i.e., relaxations for which the SDP optimum is closer to the IP optimum). A systematic way of producing tighter relaxations was proposed by Sherali and Adams, Lovasz and Schrijver, and more recently Lasserre, who give various Linear Programming (LP) and SDP hierarchies of relaxations, where within a given hierarchy the relaxations become progressively tighter (converging to the original IP). Unfortunately, starting with the work of Arora et al. (2002), there has been a series of negative results showing that for various problems such relaxations do not become significantly tighter up to a very high level in the hierarchy, at which point we do not believe the relaxed optimum can be found in polynomial time. In contrast, we give a positive result. Namely, we examine the Maximum Independent Set problem for $3$-uniform hypergraphs which contain linear-size independent sets. We show that if the size of the maximum independent set in an $n$-vertex $3$-uniform hypergraph is $\gamma n$, then one can find an independent set of nontrivial size using $O(1/\gamma^2)$ levels of an SDP hierarchy. On the other hand, we give a simple lower bound of $1/\gamma$ on the number of levels required, thus demonstrating an infinite sequence of improved approximations as one uses progressively higher-level relaxations. This is joint work with Gyanit Singh.  
2009/04/21  Enav Weinreb  [+]On the Complexity of Communication Complexity 
We consider the following question: given a two-argument Boolean function $f$, represented as an $N\times N$ binary matrix, how hard is it to determine the communication complexity of $f$? We address two aspects of this question. On the computational side, we prove that, under appropriate cryptographic assumptions (such as the intractability of factoring), the deterministic communication complexity of $f$ is hard to approximate to within some constant. Under stronger (yet arguably reasonable) assumptions, we obtain even stronger hardness results that match the best known approximation.
On the analytic side, we present a family of (two-argument) functions for which determining the deterministic communication complexity (or even obtaining nontrivial lower bounds on it) implies proving circuit lower bounds for some related functions. Such connections between circuit complexity and communication complexity were known before (Karchmer & Wigderson, 1988) only in the more involved context of relations (search problems) but not in the context of functions (decision problems). This result, in particular, may explain the difficulty of analyzing the communication complexity of certain functions, such as the ``clique vs. independent-set'' family of functions introduced by Yannakakis (1988). Joint work with Eyal Kushilevitz.  
2009/05/05  Iddo Tzameret  [+]The proof complexity of polynomial identities 
2009/05/12  Eli Ben-Sasson  [+]Affine Dispersers from Subspace Polynomials 
An affine disperser over a field F for sources of dimension d is a function f: F^n -> F that is non-constant (i.e., takes more than one value) on every d-dimensional affine subspace of F^n.
Affine dispersers have been considered in the context of deterministic extraction of randomness from structured sources of imperfect randomness, and in particular generalize the notion of dispersers for bit-fixing sources. Previously, explicit constructions of affine dispersers were known for every d = \Omega(n), due to [Barak, Kindler, Shaltiel, Sudakov and Wigderson] and [Bourgain]. (The latter in fact gives stronger objects called affine extractors.) In this work we give the first explicit affine dispersers for sublinear dimension. Specifically, our dispersers work even when d > n^{4/5}. The main novelty in our construction lies in the method of proof, which relies on elementary properties of subspace polynomials. In contrast, the previous works mentioned above relied on sum-product theorems for finite fields. Joint work with Swastik Kopparty.  
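As a toy illustration of the definition (and not of the construction in the talk), one can brute-force-check a tiny candidate over F_2: in F_2^3, every 2-dimensional affine subspace is the solution set of a single linear equation a·x = b with a != 0, and the function f(x1,x2,x3) = x1*x2 XOR x3, chosen here purely for illustration, turns out to be non-constant on all of them. A minimal sketch:

```python
from itertools import product

def f(x):
    # Candidate disperser on F_2^3: f(x1, x2, x3) = x1*x2 XOR x3.
    return (x[0] & x[1]) ^ x[2]

def is_affine_disperser_dim2(f, n=3):
    """Check that f is non-constant on every 2-dimensional affine
    subspace of F_2^n (for n = 3, these are exactly the solution
    sets of a single equation a.x = b over F_2 with a != 0)."""
    points = list(product([0, 1], repeat=n))
    for a in points:
        if a == (0,) * n:
            continue
        for b in (0, 1):
            plane = [x for x in points
                     if sum(ai * xi for ai, xi in zip(a, x)) % 2 == b]
            if len({f(x) for x in plane}) == 1:   # constant on this plane
                return False
    return True

print(is_affine_disperser_dim2(f))  # True
```

This exhaustive check is only feasible for tiny n; the point of the talk is explicit constructions that work for dimension as small as n^{4/5}.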
2009/05/19  Ronald de Wolf  [+]Error-correcting data structures 
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of static data structures and error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable error-correcting codes (LDCs) for s-bit strings.
Unfortunately, all known LDC constructions with a constant number of probes have superpolynomial length, and this is conjectured to be unavoidable. This means that the model does not allow very efficient error-correcting data structures. In the last part of the talk (which is joint work with Victor Chen and Elena Grigorescu) we show how to relax the requirements in a way that enables data structures that are simultaneously close to optimal in time as well as space, and that still work (most of the time) with a constant fraction of errors.  
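To make the model concrete, here is a deliberately naive sketch, unrelated to the LDC-based constructions of the talk: protect the characteristic vector of a subset with a simple repetition code, so a membership query probes a constant number of cells and majority-decodes them, surviving a bounded number of adversarial bit flips. The parameter r = 5 is arbitrary, and the linear space blowup is exactly the kind of inefficiency the space/probe tradeoffs above quantify.

```python
def encode(subset, n, r=5):
    # Characteristic vector of the subset, each bit repeated r times.
    bits = [1 if i in subset else 0 for i in range(n)]
    return [b for b in bits for _ in range(r)]

def query(mem, i, r=5):
    # Probe the r copies of bit i and majority-decode them.
    copies = mem[i * r:(i + 1) * r]
    return sum(copies) * 2 > r

mem = encode({2, 5}, n=8)
mem[10] = 1 - mem[10]                # adversarial flip inside block 2
print(query(mem, 2), query(mem, 3))  # True False
```

Each query tolerates up to 2 corrupted copies per block; a global constant fraction of errors, as in the model above, requires genuinely local decoding, which is where LDCs enter.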
2009/05/26  Gil Segev  [+]Chosen-Ciphertext Security via Correlated Products 
2009/06/02  Joshua Brody  [+]A Multi-Round Communication Lower Bound for the Gap Hamming problem 
The Gap-Hamming-Distance problem arose in the context of proving space lower bounds for a number of key problems in the data stream model. In this problem, Alice and Bob have to decide whether the Hamming distance between their input strings is large (i.e., at least n/2 + \sqrt{n}) or small (at most n/2 - \sqrt{n}). The gap between "large" and "small" instances is crucial for capturing the approximation allowed in a data stream problem.
Thus far, for randomized communication, an \Omega(n) lower bound was known only for one-round protocols. We prove an \Omega(n) lower bound for randomized protocols that use any constant number of rounds. As a consequence, we prove that, for example, \epsilon-approximating the number of distinct elements in a stream requires \Omega(\epsilon^{-2}) space, even with a constant number of passes over the stream. We obtain similar results for approximating frequency moments and for approximating the empirical entropy of a stream. Joint work with Amit Chakrabarti.  
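The promise problem itself is easy when both strings sit in one place; the difficulty above is entirely about communication. A minimal sketch of the underlying decision task (the threshold values are as in the definition above):

```python
import math

def gap_hamming(x, y):
    """Decide the Gap-Hamming promise problem: True ("large") if the
    Hamming distance is >= n/2 + sqrt(n), False ("small") if it is
    <= n/2 - sqrt(n); inputs violating the promise are rejected."""
    n = len(x)
    d = sum(a != b for a, b in zip(x, y))
    if d >= n / 2 + math.sqrt(n):
        return True
    if d <= n / 2 - math.sqrt(n):
        return False
    raise ValueError("inputs violate the gap promise")

print(gap_hamming([0] * 16, [1] * 16))  # True: distance 16 >= 8 + 4
print(gap_hamming([0] * 16, [0] * 16))  # False: distance 0 <= 8 - 4
```

With both inputs local this takes one linear scan; the theorem says that when x and y are split between two parties, \Omega(n) bits must be exchanged even over constantly many rounds.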
2009/06/09  Ishay Haviv  [+]Rounding Parallel Repetitions of Unique Games 
We show a connection between the semidefinite relaxation of unique games and their behavior under parallel repetition. Specifically, denoting by val(G) the value of a two-prover unique game G, and by sdpval(G) the value of a natural semidefinite program to approximate val(G), we prove that for every large enough $\ell$, if sdpval(G) is at least $1-\delta$, then the value of the $\ell$-times parallel repeated version of G is at least $sdpval(G)^{s \ell}$, where $s=O(\log(k/\delta))$ for general unique games and $s=O(1)$ for XOR games (i.e., k=2). By a result of Feige and Lovasz (STOC '92), our bounds are tight up to a factor of $O(\log(1/\delta))$ in the exponent.
For games where there is a significant gap between the quantities val(G) and sdpval(G), our result implies that $val(G^{\ell})$ may be much larger than $val(G)^{\ell}$, giving a counterexample to the strong parallel repetition conjecture. In a recent breakthrough, Raz showed such an example using the max-cut game on odd cycles. Our results are based on a generalization of his techniques. Joint work with Boaz Barak, Moritz Hardt, Anup Rao, Oded Regev and David Steurer.  
2009/06/16  Itai Arad  [+]The Detectability Lemma and Quantum Gap Amplification 
Gap amplification of probabilities by random walks on expander graphs is a central notion in the study of CSPs. In particular, it is at the core of Dinur's proof of the PCP theorem, is used in derandomizations, and more. We prove a quantum version of gap amplification; to this end we apply the classical result in conjunction with purely quantum insights into the behavior of sums of non-commuting matrices. The main problem in the area, whether a quantum PCP theorem holds, is left open; we will discuss the difficulties involved and its importance.
Joint work with Dorit Aharonov, Umesh Vazirani and Zeph Landau. 
Date  Speaker  Title 
2008/06/03  Amir Shpilka  [+]Reconstruction of depth-3 arithmetic circuits 
Depth-3 arithmetic circuits compute polynomials that can be represented as sums of products of linear functions. In spite of its simple structure, we are far from understanding this model. In this talk we will focus on the reconstruction problem for depth-3 circuits that have a constant number of multiplication gates. That is, we have access to some unknown polynomial that can be represented as a sum of a constant number of products of linear functions, and by asking for its values on (a small number of) inputs of our choice we would like to find a depth-3 circuit computing it. We will show how to reconstruct such depth-3 circuits in time exp(polylog(s)), where s is the size of a depth-3 circuit computing the unknown polynomial. Our techniques rely on a theorem on the structure of depth-3 circuits that compute the zero polynomial, and on a zero-testing algorithm that it implies.  
2008/05/27  Alex Samorodnitsky  [+]Low degree tests at large distances 
Consider the following problem: given a Boolean function, we need to decide whether it is "highly structured", which in our case means representable by a low-degree polynomial, or "pseudorandom", which we take to mean being far from all low-degree polynomials. Our decision has to be based on a very restricted randomized local view of the function.
This is a 'property testing' question with some existing and some potential connections to probabilistically checkable proofs and derandomization. We will discuss some recent developments regarding this question. In particular, it turns out that an attempt to measure pseudorandomness of a function analytically, via its 'Gowers norm', does not work so well. Based in part on joint work with Shachar Lovett, Roy Meshulam, and Luca Trevisan  
2008/05/20  Wim van Dam  [+]Classical and Quantum Algorithms for Exponential Congruences 
We discuss classical and quantum algorithms for solvability testing and finding integer solutions x,y of equations of the form af^x + bg^y = c over finite fields GF(q). A quantum algorithm with time complexity q^(3/8) (log q)^O(1) is presented. While still superpolynomial in log q, this quantum algorithm is significantly faster than the best known classical algorithm, which has time complexity q^(9/8) (log q)^O(1). Thus it gives an example of a natural problem where quantum algorithms provide about a cubic speedup over classical ones. This is joint work with Igor E. Shparlinski.
Cf. arXiv:0804.1109  
2008/05/13  Daniel Gottesman  [+]The threshold for fault-tolerance 
Quantum computers are likely to be much more vulnerable to noise than classical computers, so it is likely they will need to be encoded using quantum error-correcting codes, with computations performed using fault-tolerant protocols. One critical result states that there is a threshold error rate such that arbitrarily long reliable computations are possible if the error rate per gate and time step in the computer is below the threshold. I will give an overview of recent work on the threshold, with a slight emphasis on work by Aliferis, Preskill, and myself showing a higher threshold under more general conditions than previous proofs.
 
2008/05/06  Shay Solomon  [+]Shallow, Low, and Light Trees, and Tight Lower Bounds for Euclidean Spanners 
We show that for every $n$-point metric space $M$ there exists a spanning tree $T$ with unweighted diameter $O(\log n)$ and weight $\omega(T) = O(\log n) \cdot \omega(MST(M))$.
Moreover, there is a designated point $rt$ such that for every point $v$, $dist_T(rt,v) \le (1+\epsilon) \cdot dist_M(rt,v)$, for an arbitrarily small constant $\epsilon > 0$. We extend this result, and provide a tradeoff between unweighted diameter and weight, and prove that this tradeoff is \emph{tight up to constant factors} in the entire range of parameters. These results enable us to settle a longstanding open question in Computational Geometry. In STOC'95 Arya et al. devised a construction of Euclidean Spanners with unweighted diameter $O(\log n)$ and weight $O(\log n) \cdot \omega(MST(M))$. Ten years later in SODA'05 Agarwal et al. showed that this result is tight up to a factor of $O(\log \log n)$. We close this gap and show that the result of Arya et al. is tight up to constant factors. Joint work with Yefim Dinitz and Michael Elkin.  
2008/04/08  Troy Lee  [+]Product theorems via semidefinite programming 
The tendency of semidefinite programs to compose perfectly under product has been exploited many times in complexity theory: for example, by Lovasz to determine the Shannon capacity of the pentagon; to show a direct sum theorem for nondeterministic communication complexity and a direct product theorem for discrepancy; and in interactive proof systems to show parallel repetition theorems for restricted classes of games.
Despite all these examples of product theorems, some going back nearly thirty years, it was only recently that Mittal and Szegedy began to develop a general theory to explain when and why semidefinite programs behave perfectly under product. This theory captured many examples in the literature, but there were also some notable exceptions which it could not explain: namely, an early parallel repetition result of Feige and Lovasz, and a direct product theorem for the discrepancy method of communication complexity by Lee, Shraibman, and Spalek. We extend the theory of Mittal and Szegedy to explain these cases as well. Indeed, to the best of our knowledge, our theory captures all examples of semidefinite product theorems in the literature. Joint work with Rajat Mittal.  
2008/04/01  Dana Ron  [+]Approximating the Distance to Monotonicity in High Dimensions 
In this work we study the problem of approximating the distance of a function over [n]^d to monotonicity. Namely, we are interested in randomized sublinear algorithms that approximate the Hamming distance between a given function and the closest monotone function. We allow both an additive error, parameterized by \delta, and a multiplicative error.
Previous work on distance approximation to monotonicity focused on the one-dimensional case, and the only explicit extension to higher dimensions had a multiplicative approximation factor exponential in the dimension d. Building on Goldreich et al. (Combinatorica, 2000) and Dodis et al. (Proceedings of RANDOM, 1999), in which there are better implicit results for the case n=2, we describe a reduction from the case of functions over the d-dimensional hypercube to lower-dimensional hypercubes. Using this reduction and a known distance approximation algorithm for the one-dimensional case, we obtain a distance approximation algorithm for functions over the d-dimensional hypercube, with any range R, which has a multiplicative approximation factor of O(d log R). For the case of a binary range, we present constant-approximation algorithms for low dimensions. Combining these algorithms with the reduction described above, we obtain a variety of distance approximation algorithms for Boolean functions over the d-dimensional hypercube, which suggest a tradeoff between quality of estimation and efficiency of computation. In particular, the multiplicative error ranges between O(d) and O(1). This is joint work with Shahar Fattal.  
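The quantity being approximated is easy to state exactly in one dimension: a monotone (non-decreasing) Boolean function on the line [n] is necessarily of the form 0^k 1^(n-k), so the distance to monotonicity is the minimum number of disagreements over the switch point k. A minimal exact sketch (linear time with full access; the talk's algorithms only approximate this, but with sublinear query access):

```python
def dist_to_monotone(f):
    """Exact Hamming distance from a Boolean function on [n] (given as
    a 0/1 list) to the nearest monotone non-decreasing function: the
    cost of the best threshold function 0^k 1^(n-k)."""
    n = len(f)
    ones_prefix = [0]
    for b in f:
        ones_prefix.append(ones_prefix[-1] + b)
    total_ones = ones_prefix[-1]
    # Disagreements with 0^k 1^(n-k): ones in f[:k] plus zeros in f[k:].
    return min(ones_prefix[k] + (n - k) - (total_ones - ones_prefix[k])
               for k in range(n + 1))

print(dist_to_monotone([1, 0, 0, 1]))  # 1: flipping the leading 1 suffices
```

In d dimensions no such closed form exists, which is what makes the reduction to lower-dimensional hypercubes above necessary.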
2008/03/25  Ronen Shaltiel  [+]Hardness amplification proofs require majority 
A function $f$ is $\delta$-hard for some class of circuits if every circuit in the class errs on a $\delta$-fraction of the inputs. For various classes of constant-depth, small-size circuits there are explicit functions that are $\delta$-hard for ``small'' values of $\delta$.
For stronger circuit classes there are many ``hardness amplification theorems'' that show how to transform any $\delta$-hard function $f$ into a ``harder'' function $g$ that is $(1/2-\epsilon)$-hard (for very small $\epsilon$). These theorems are proved using black-box reductions that transform a circuit that computes $g$ correctly on a $1/2+\epsilon$ fraction of the inputs into a circuit that computes $f$ correctly on a $1-\delta$ fraction of the inputs. We show that such reductions cannot have low complexity: 1. Any such reduction ``can be used'' to compute the majority function on $1/\epsilon$ bits. 2. Any such reduction must make $\Omega(\log(1/\delta)/\epsilon^2)$ oracle queries. By the Natural Proofs paradigm of Razborov and Rudich (coupled with a cryptographic construction of Naor and Reingold), ``current techniques'' cannot prove lower bounds against circuit classes that can compute the majority function. Loosely speaking, this means that ``current techniques can only amplify hardness that we don't have''. The main difficulty in the proofs is that reductions that come up in proofs of hardness amplification theorems are allowed to be ``non-uniform'', and we develop techniques to bound such reductions. The talk will be self-contained and I will give a brief survey of the area. Joint work with Emanuele Viola.  
2008/03/18  Irit Dinur  [+]Decodability of Group Homomorphisms beyond the Johnson Bound 
Given a pair of finite groups G and H, the set of homomorphisms from G to H forms an error-correcting code where codewords differ in at least half the coordinates. We show that for every pair of *abelian* groups G and H, the resulting code is (locally) list-decodable from a fraction of errors arbitrarily close to its distance. At the heart of this result is the following combinatorial statement: for every pair of abelian groups G and H, if the maximum fraction of agreement between two distinct homomorphisms from G to H is Lambda, then for every epsilon > 0 and every function f: G -> H, the number of homomorphisms that have agreement Lambda + epsilon with f is at most poly(1/epsilon). We thus give a broad class of codes whose list-decoding radius exceeds the ``Johnson bound''.
Joint work with Elena Grigorescu, Swastik Kopparty, and Madhu Sudan.  
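The parameter Lambda, the maximum pairwise agreement between distinct homomorphisms, can be computed directly for small cyclic groups. A minimal sketch (cyclic groups only, chosen for illustration; a homomorphism Z_n -> Z_m is x -> c*x mod m for any c with c*n = 0 mod m):

```python
from fractions import Fraction

def homomorphisms(n, m):
    """All homomorphisms Z_n -> Z_m, each given by its multiplier c."""
    return [c for c in range(m) if (c * n) % m == 0]

def agreement(n, m, c1, c2):
    """Fraction of points of Z_n on which the two homomorphisms agree."""
    return Fraction(sum((c1 * x) % m == (c2 * x) % m for x in range(n)), n)

n, m = 6, 3
homs = homomorphisms(n, m)
lam = max(agreement(n, m, a, b) for a in homs for b in homs if a != b)
print(homs, lam)  # [0, 1, 2] 1/3
```

Here the code Hom(Z_6, Z_3) has distance 1 - Lambda = 2/3, and the theorem above says the list of homomorphisms with agreement Lambda + epsilon with any f stays polynomially small in 1/epsilon.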
2008/03/11  Avraham Ben-Aroya  [+]A combinatorial construction of almost-Ramanujan graphs using the zigzag product 
Reingold, Vadhan and Wigderson [FOCS'00] introduced the graph zigzag product. This product combines a large graph and a small graph into one graph, such that the resulting graph inherits its size from the large graph, its degree from the small graph and its spectral gap from both. Using this product they gave the first fully explicit combinatorial construction of expander graphs. They showed how to construct $D$-regular graphs having spectral gap $1-O(D^{-\frac{1}{3}})$. In the same paper, they posed the open problem of whether a similar graph product could be used to achieve the almost-optimal spectral gap $1-O(D^{-\frac{1}{2}})$.
In this paper we propose a generalization of the zigzag product that combines a large graph and several small graphs. The new product gives a better relation between the degree and the spectral gap of the resulting graph. We use the new product to give a fully explicit combinatorial construction of $D$-regular graphs having spectral gap $1-D^{-\frac{1}{2}+o(1)}$. Joint work with Amnon Ta-Shma.  
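The quantities involved can be made concrete on a toy graph. A minimal, illustrative sketch (unrelated to the zigzag construction itself): estimate the second-largest eigenvalue lambda of the normalized adjacency matrix of a D-regular graph by power iteration orthogonal to the all-ones eigenvector, then compare against the Ramanujan benchmark 2*sqrt(D-1)/D that the "almost-optimal" gap above approaches.

```python
import math
import random

def second_eigenvalue(adj, iters=100):
    """Estimate the second-largest absolute eigenvalue of the normalized
    adjacency matrix of a regular graph (adjacency-list form) by power
    iteration on the subspace orthogonal to the all-ones eigenvector."""
    random.seed(0)
    n, d = len(adj), len(adj[0])
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                         # project out 1-vector
        v = [sum(v[u] for u in adj[i]) / d for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    w = [sum(v[u] for u in adj[i]) / d for i in range(n)]
    return abs(sum(a * b for a, b in zip(w, v)))          # Rayleigh quotient

# K_6 is 5-regular with normalized second eigenvalue -1/5, so lambda = 0.2,
# well inside the Ramanujan benchmark 2*sqrt(D-1)/D = 0.8 for D = 5.
k6 = [[j for j in range(6) if j != i] for i in range(6)]
print(round(second_eigenvalue(k6), 3))
```

Complete graphs are trivially excellent expanders but have huge degree; the point of the constructions above is to keep lambda near 2*sqrt(D-1)/D for constant D on arbitrarily many vertices.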
2008/03/04  Zeev Dvir  [+]Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits 
We show that lower bounds for bounded depth arithmetic circuits imply derandomization of polynomial identity testing for bounded depth arithmetic circuits. More formally, if there exists an explicit polynomial f(x_1,...,x_m) that cannot be computed by a depth d arithmetic circuit of small size, then there exists an efficient deterministic algorithm to test whether a given depth d-8 circuit is identically zero or not (assuming the individual degrees of the tested circuit are not too high). In particular, if we are guaranteed that the tested circuit computes a multilinear polynomial, then we can perform the identity test efficiently. The above results are obtained using the arithmetic Nisan-Wigderson generator of [Kabanets-Impagliazzo 04] together with a new theorem on bounded depth circuits, which is the main technical contribution of our work.
Joint work with Amir Shpilka and Amir Yehudayoff.  
2008/02/26  Amir Yehudayoff  [+]Multilinear Formulas, Maximal-Partition Discrepancy and Mixed-Source Extractors 
In this paper we study multilinear formulas, monotone arithmetic circuits, maximal-partition discrepancy, best-partition communication complexity and mixed two-source extractors. We will first define maximal-partition discrepancy, and construct a function f that has exponentially small maximal-partition discrepancy. We will then review several applications of this construction: 1. The best-partition communication complexity of f is \Theta(n).
2. It can be used to extract a linear number of almost perfect random bits from a mixed two-source of randomness. 3. We use f to prove lower bounds for several models of arithmetic circuits, for example monotone arithmetic circuits and orthogonal multilinear formulas. We will focus mainly on the monotone model. Joint work with Ran Raz.  
2008/02/19  Adi Shraibman  [+]Disjointness is hard in the multiparty number-on-the-forehead model 
We show that disjointness requires randomized communication Omega(n^{1/2k} / ((k-1) 2^{k-1} 2^{2^{k-1}})) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound was Omega((log n)/(k-1)). By results of Beame, Pitassi, and Segerlind, this implies 2^{n^{Omega(1)}} lower bounds on the size of tree-like Lovasz-Schrijver proof systems needed to refute certain unsatisfiable CNFs, and superpolynomial lower bounds on the size of any tree-like proof system whose terms are degree-d polynomial inequalities for d = log log n - O(log log log n).
Joint work with Troy Lee from Rutgers University.  
2008/02/12  Carlos Mochon  [+]Quantum coin flipping (by telephone) 
Coin flipping by telephone (Blum '81) is one of the most basic tasks of two-party secure computation. In a quantum setting, it is possible to realize (weak) coin flipping with information-theoretic security.
Quantum coin flipping has been a longstanding open problem, and its solution uses an innovative formalism developed by Alexei Kitaev for mapping quantum games into convex optimization problems. The optimizations are carried out over duals to the cone of operator monotone functions, though the mapped problem can also be described in a very simple language that requires no knowledge of quantum information. Time permitting, I will discuss both Kitaev's formalism, and the solution that leads to quantum weak coin flipping with arbitrarily small bias.  
2008/01/15  Shachar Lovett  [+]Unconditional pseudorandom generators for low degree polynomials 
We give an explicit construction of pseudorandom generators against low-degree polynomials over finite fields. We show that the sum of $2^d$ small-biased generators with error $\epsilon^{2^{O(d)}}$ is a pseudorandom generator against degree-$d$ polynomials with error $\epsilon$. This gives a generator with seed length $2^{O(d)} \log{(n/\epsilon)}$. Our construction follows the recent breakthrough result of Bogdanov and Viola \cite{BV}. Their work shows that the sum of $d$ small-biased generators is a pseudorandom generator against degree-$d$ polynomials, assuming the Inverse Gowers Conjecture. However, this conjecture is only proven for $d=2,3$. The main advantage of our work is that it does not rely on any unproven conjectures.
 
2008/01/08  Guy Rothblum  [+]Delegating Computation: Interactive Proofs for Mortals 
The main challenge we consider in this work is: "For which computations can a prover supply you with the output of the computation and prove to you that the output is correct, where you use significantly fewer resources to verify the correctness of the output than it would take to run the computation, and without the prover working too hard?" We address this problem in the Interactive Proof setting. We construct, for many efficiently computable languages, public-coin interactive proofs where the verifier's running time is quasi-linear, the prover's running time is polynomial, and the communication is polylogarithmic. In particular, we give such a proof system for any language in (uniform) NC.
Our results allow us to make progress on other related questions, such as the length of zero-knowledge proofs for NP languages, the expressive power of public-coin interactive proofs with log-space verifiers, and constructing short, efficiently verifiable, non-interactive certificates of correctness for computations without resorting to the random oracle model. Previously, the question of verifying the correctness of computations was addressed in the PCP model by BFLS, in the argument model under computational assumptions by Kilian, and in the random oracle model by Micali. In contrast, our work is in the (single prover) interactive proof model, and makes no assumptions on the computational power of the dishonest prover. Joint work with Shafi Goldwasser and Yael Kalai.  
2008/01/01  Gil Segev  [+]Securing Vote Storage Mechanisms: Deterministic History-Independent Strategies for Storing Information on Write-Once Memories 
Motivated by the challenging task of designing "secure" vote storage mechanisms, we deal with information storage mechanisms that operate in extremely hostile environments. In such environments, the majority of existing techniques for information storage and for security are susceptible to powerful adversarial attacks. In this setting, we propose a mechanism for storing a set of at most $K$ elements from a large universe of size $N$ on write-once memories in a manner that does not reveal the insertion order of the elements. We consider a standard model for write-once memories, in which the memory is initialized to the all-$0$'s state, and the only operation allowed is flipping bits from $0$ to $1$. Whereas previously known constructions were either inefficient (requiring $\Theta(K^2)$ memory), randomized, or employed cryptographic techniques which are unlikely to be available in hostile environments, we eliminate each of these undesirable properties. The total amount of memory used by the mechanism is linear in the number of stored elements and polylogarithmic in the size of the universe of elements. In addition, we consider one of the classical distributed computing problems: conflict resolution in multiple-access channels. By establishing a tight connection with the basic building block of our mechanism, we construct the first deterministic and non-adaptive conflict resolution algorithm whose running time is optimal up to polylogarithmic factors.
Joint work with Tal Moran and Moni Naor.  
2007/12/25  Uri Stav  [+]Linear and nonlinear Index Coding 
The following source coding problem was introduced by Birk and Kol: a sender holds a word x in {0,1}^n, and wishes to broadcast a codeword to n receivers, R_1,...,R_n. The receiver R_i is interested in x_i, and has prior side information comprising some subset of the n bits. This corresponds to a directed graph G on n vertices, where (i,j) is an edge iff R_i knows the bit x_j. An index code for G is an encoding scheme which enables each R_i to always reconstruct x_i given his side information. An optimal index code is an index code with minimal word length. In the talk I will survey the motivation, basic properties and several recent results on index coding. I will focus on a joint work with Eyal Lubetzky in which we show that linear index coding is not necessarily optimal (thus disproving an earlier conjecture of Bar-Yossef, Birk, Jayram and Kol). This is achieved by an explicit construction of a side information graph, which extends Alon's variant of the celebrated Ramsey construction of Frankl and Wilson.
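A minimal sketch of the extreme case of the definitions above (illustration only, not the construction from the talk): when the side-information graph is the complete graph, i.e., each receiver already knows every bit but its own, a single broadcast bit, the XOR of all input bits, is a linear index code of length 1, versus length n with no side information.

```python
from functools import reduce
from operator import xor

def encode(x):
    # Side-information graph = complete graph: R_i knows every x_j, j != i.
    # Broadcasting the XOR of all bits is then an index code of length 1.
    return reduce(xor, x)

def decode(i, code, known):
    # R_i XORs the broadcast bit with the n-1 bits it already holds.
    return code ^ reduce(xor, (b for j, b in known.items() if j != i), 0)

x = [1, 0, 1, 1]
c = encode(x)
side = {j: x[j] for j in range(len(x))}
print([decode(i, c, side) for i in range(len(x))] == x)  # True
```

This toy code happens to be linear over GF(2); the point of the talk is a side-information graph for which no linear code achieves the optimal length.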
 
2007/12/18  Paul Valiant  [+]Testing Symmetric Properties of Distributions 
We introduce the notion of a \emph{Canonical Tester} for a class of properties, that is, a tester strong and general enough that ``a property is testable if and only if the Canonical Tester tests it''.
We construct a Canonical Tester for the class of symmetric properties of one or two distributions satisfying a certain weak continuity condition. Analyzing the performance of the Canonical Tester on specific properties resolves several open problems, establishing lower bounds that match known upper bounds: we show that distinguishing between entropy $<\alpha$ or $>\beta$ on distributions over $[n]$ requires $n^{\alpha/\beta - o(1)}$ samples, and distinguishing whether a pair of distributions has statistical distance $<\alpha$ or $>\beta$ requires $n^{1-o(1)}$ samples. Our techniques also resolve a conjecture about a property that our Canonical Tester does not apply to: distinguishing identical distributions from those with statistical distance $>\beta$ requires $\Omega(n^{2/3})$ samples.  
2007/12/11  Iftach Haitner  [+]Statistically-Hiding Commitment from Any One-Way Function 
2007/12/04  Or Meir  [+]Combinatorial Construction of Locally Testable Codes 
2007/11/27  Ofer Neiman  [+]Local Embedding of Metric Spaces 
In many application areas, complex data sets are often represented by some metric space, and metric embedding is used to provide a more structured representation of the data. In many of these applications much greater emphasis is put on preserving the {\em local} structure of the original space than on maintaining its complete structure. In this paper we initiate the study of {\em local embeddings} of metric spaces and provide embeddings with distortion depending solely on the local structure of the space.
Joint work with Ittai Abraham and Yair Bartal  
2007/11/20  Avinatan Hassidim  [+]Secure Multi-Party Computation with (only) an Honest Majority 
Secret sharing and multiparty computation (also called ``secure function evaluation'') are fundamental primitives in modern cryptography, allowing a group of mutually distrustful players to perform correct, distributed computations under the sole assumption that some number of them will follow the protocol honestly. This paper investigates how much trust is necessary -- that is, how many players must remain honest -- in order for distributed quantum computations to be possible.
We present a verifiable quantum secret sharing (VQSS) protocol, and a general secure multiparty quantum computation (MPQC) protocol, which can tolerate any $\lfloor \frac{n-1}{2}\rfloor$ cheaters among $n$ players. Previous protocols for these tasks tolerated $\lfloor \frac{n-1}{4}\rfloor$ and $\lfloor \frac{n-1}{6}\rfloor$ cheaters, respectively. The threshold we achieve is tight -- even in the classical case, ``fair'' multiparty computation is not possible if any set of $n/2$ players can cheat. Our protocols rely on approximate quantum error-correcting codes, which can tolerate a larger fraction of errors than traditional, exact codes. We introduce new families of authentication schemes and approximate codes tailored to the needs of our protocols, as well as new state purification techniques along the lines of those used in fault-tolerant quantum circuits. Joint work with Michael Ben-Or, Claude Crépeau, Daniel Gottesman and Adam Smith  
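As background for readers unfamiliar with the classical primitive that VQSS generalizes, threshold secret sharing can be sketched as follows (a minimal Shamir-style illustration over a small prime field, added editorially; it is not the quantum protocol from the talk):

```python
import random

P = 2**13 - 1  # small Mersenne prime; the field for this illustration

def share(secret, t, n):
    # A random degree-(t-1) polynomial with constant term = secret;
    # any t of the n shares determine the secret, fewer reveal nothing.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 recovers the polynomial's constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, t=3, n=7)
assert reconstruct(shares[:3]) == 1234
assert reconstruct(random.sample(shares, 3)) == 1234
```

The quantum setting replaces the polynomial with quantum error-correcting codes, and "verifiable" means the protocol also catches a dealer or players who deviate.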
2007/11/13  Ronen Shaltiel  [+]Low-end uniform hardness versus randomness tradeoffs for AM 
In 1998, Impagliazzo and Wigderson proved a hardness vs. randomness tradeoff for BPP in the {\em uniform setting}, which was subsequently extended to give optimal tradeoffs for the full range of possible hardness assumptions by Trevisan and Vadhan (in a slightly weaker setting). In 2003, Gutfreund, Shaltiel and Ta-Shma proved a uniform hardness vs. randomness tradeoff for AM, but that result only worked on the ``high-end'' of possible hardness assumptions. In this work, we give uniform hardness vs. randomness tradeoffs for AM that are near-optimal for the full range of possible hardness assumptions. Following Gutfreund et al. we do this by constructing a hitting-set generator (HSG) for AM with ``resilient reconstruction''. Our construction is a recursive variant of the Miltersen-Vinodchandran HSG, the only known HSG construction with this required property. The main new idea is to have the reconstruction procedure operate implicitly and locally on superpolynomially large objects, using tools from PCPs (low-degree testing, self-correction) together with a novel use of extractors that are built from Reed-Muller codes for a sort of locally-computable error reduction. As a consequence we obtain gap theorems for AM (and AM $\cap$ coAM) that state, roughly, that either AM (or AM $\cap$ coAM) protocols running in time $t(n)$ can simulate all of EXP (``Arthur-Merlin games are powerful''), or else all of AM (or AM $\cap$ coAM) can be simulated in nondeterministic time $s(n)$ (``Arthur-Merlin games can be derandomized''), for a near-optimal relationship between $t(n)$ and $s(n)$. As in the paper of Gutfreund et al., the case of AM $\cap$ coAM yields a particularly clean theorem that is of special interest due to the wide array of cryptographic and other problems that lie in this class.
The talk will be self-contained and I will survey previous work on hardness versus randomness tradeoffs. Joint work with Chris Umans  
2007/11/06  Stephanie Wehner  [+]Clifford algebra and uncertainty relations 
2007/10/30  Shmuel Marcovitch  [+]Is Communication Complexity Physical? 
2007/10/23  Oded Schwartz  [+]Which comes first: Max-Cut or Unique-Game? 
This is a public lecture on the PhD thesis. Thesis title: "Expansion and Approximability". In this thesis we study expanding graphs, the approximability of optimization problems, and issues that combine both notions. This talk presents recent results with Muli Safra that demonstrate the implications of expansion for the complexity of approximating optimization problems.
Talk: Which comes first: Max-Cut or Unique-Game? Abstract: Finding tight thresholds for the approximability of NP-hard optimization problems has been an ongoing successful study in complexity theory. When NP-hardness cannot be obtained, one can reduce from problems whose complexity is not yet settled (even assuming P does not equal NP). A recently popular such problem is the (in)approximability of almost completely satisfiable Unique-Game. Khot et al. showed a reduction from this problem to the problem of approximating Max-Cut better than the approximation of Goemans & Williamson. We conjecture an efficient reduction in the other direction. We show that the Parallel Repetition technique can yield this reduction, if a "perfect" analysis is provided. We obtain such an analysis only for instances with good expansion properties, and up to a constant soundness. Our analysis is constructive (contrary to that of Raz, and the simplified one by Holenstein), and we discuss possible algorithmic consequences of the constructiveness. Joint work with Muli Safra 
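For context on the Max-Cut side (an elementary baseline added editorially, not the SDP-based Goemans-Williamson algorithm the abstract refers to): a uniformly random partition cuts each edge with probability 1/2, so its expected cut is at least half the optimum:

```python
import random

def random_cut(n, edges):
    # Place each vertex on a random side of the partition; each edge
    # crosses the cut with probability 1/2, so E[cut] = |E|/2 >= OPT/2.
    side = [random.randint(0, 1) for _ in range(n)]
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# Triangle graph: the optimum cut has 2 of the 3 edges.
triangle = [(0, 1), (1, 2), (0, 2)]
side, cut = random_cut(3, triangle)
```

Repeating the random cut and keeping the best assignment realizes the classical 0.5-approximation; Goemans & Williamson improve the ratio to roughly 0.878 via semidefinite programming and random-hyperplane rounding, and the Khot et al. reduction shows that beating their ratio is as hard as Unique-Game.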
Date  Speaker  Title 
2007/06/05  Avner Magen  [+]Tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver hierarchy 
2007/05/15  Ben Toner  [+]De Finetti theorems for conditional probability distributions 
2007/05/08  Guy Kindler  [+]Understanding parallel repetition requires understanding foams 
2007/05/01  Ronald de Wolf  [+]On computation and communication with small bias 
2007/04/17  Oded Schwartz  [+]An Elementary Construction of Constant-Degree Expanders 
2007/04/10  Vinod Vaikuntanathan  [+]Chosen ciphertext security, (almost) for free 
2007/03/27  Prahladh Harsha  [+]The Communication Complexity of Correlation 
2007/03/20  Benny Applebaum  [+]On Pseudorandom Generators with Linear Stretch in NC0 
2007/03/13  Dan Gutfreund  [+]Verifying and decoding in constant depth 
2007/03/06  Danny Harnik  [+]On the Compressibility of NP Instances and Cryptographic Applications 
2007/02/27  Avraham BenAroya  [+]Quantum expanders and the quantum entropy difference problem 
2007/01/16  Stephen D. Miller  [+]A spectral analysis of the Pollard Rho algorithm for discrete logarithms 
2007/01/09  Zeev Dvir  [+]Rank Extractors 
2007/01/02  Andrej Bogdanov  [+]Hardness Amplification for Errorless Heuristics 
2006/12/26  Ben Riva  [+]Bare-handed Electronic Voting 
2006/12/12  Tal Moran  [+]Receipt-Free Universally-Verifiable Voting With Everlasting Privacy 
2006/12/05  Noam Livne  [+]All Natural NPC Problems Have AverageCase Complete Versions 
2006/11/28  Gil Segev  [+]Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared Key Models 
2006/11/21  Enav Weinrab  [+]Private Approximation of Search Problems 
2006/11/14  Guy Kindler  [+]Bounded functions with small tails are Juntas 
2006/11/07  Ishay Haviv  [+]Tensorbased Hardness of the Shortest Vector Problem 
2006/10/31  Manor Mendel  [+]Ramsey partitions and proximity data structures 
Date  Speaker  Title 
2006/06/13  Anup Rao  [+]Extractors for a Constant Number of Polynomially Small Min-Entropy Independent Sources 
2006/06/06  Ronen Shaltiel  [+]2-source dispersers for entropy $n^{o(1)}$ and Ramsey graphs beating the Frankl-Wilson constructions 
2006/05/09  Dorit Aharonov  [+]A polynomial quantum algorithm for approximating the Jones polynomial 
2006/04/25  Oded Regev  [+]Bounded-error quantum state identification with applications 
2006/04/04  Iddo Tzameret  [+]The Strength of Multilinear Proofs 
2006/03/21  Iftach Haitner  [+]On the power of the randomized iterate 
2006/03/07  Yuval Emek  [+]A Tight Upper Bound on the Probabilistic Embedding of Series-Parallel Graphs 
2006/01/31  Adam Smith  [+]Calibrating Noise to Sensitivity in Private Data Analysis 
2006/01/24  Sofya Raskhodnikova  [+]Sublinear Algorithms for Approximating String Compressibility and the Distribution Support Size 
2006/01/17  Ishay Haviv  [+]Hardness of the Covering Radius Problem on Lattices 
2006/01/10  Eyal Kaplan  [+]Derandomized Constructions of $k$-Wise (Almost) Independent Permutations 
2006/01/03  Ricky Rosen  [+]Lattice Problems and Norm Embeddings 
2005/12/27  Adi Akavia  [+]On the (Im)Possibility of Basing One-Way Functions on NP-Hardness 
2005/12/20  Ofer Neiman  [+]Metric Embedding with Relaxed Guarantees 
2005/12/13  Guy Kindler  [+]The noisy broadcast problem 
2005/12/06  Alex Samorodnitsky  [+]Gowers Uniformity, influence of variables, and PCPs. 
2005/11/29  Ronen Gradwohl  [+]Random Selection with an Adversarial Majority 
2005/11/22  Ariel Gabizon  [+]Deterministic Extractors for Affine Sources over Large Fields 
2005/11/15  Eldar Fischer  [+]Testing and estimation of dense graph properties 
2005/11/08  Tamir Tuller  [+]Hardness of reconstructing evolutionary trees 
2005/11/01  Dana Moshkovitz  [+]Sub-Constant Error Low Degree Test of Almost-Linear Size 
Website written partially by Niko Farhi