Theory of Computation Seminar 

School of Computer Science
Tel Aviv University




We meet every Tuesday at 13:10 in Schreiber 309. Talks usually last 60-90 minutes.
To join (or leave) the mailing list, or for any other information, please send an email to Amnon Ta-Shma.

To view an abstract, click on the event.

If you are using Google Calendar, you can subscribe to the theory seminar calendar by clicking the "+ Google Calendar" at the bottom of the calendar. The theory seminar calendar will then appear as "Theory seminar" under "Other calendars". You may unsubscribe, at any time, by clicking "Settings" at the bottom of the calendar list on the left side of your calendar page.



Schedule for 2011/2012

Date Speaker Title
2011/11/1 Bill Gasarch [+]  Lower bounds on Tree Resolution Proofs for the Pigeon Hole Principle

Resolution is a common technique to prove that a formula is not satisfiable. In many cases it gives SHORT proofs of such. Does it always?

Unlikely: if it did, then NP = coNP. I will present a very nice proof by Beyersdorff, Galesi, and Lauria that a restricted form of resolution requires proofs of size 2^{Omega(n log n)} for the Pigeonhole Principle. (The result is not new.)

I will then present some intriguing open problems (that are MINE).

NO prior knowledge of anything is needed. Really! In particular you do not need to know what resolution is ahead of time (I didn't when I first read this paper.)

2011/11/8 Shai Vardi [+]  Local Computation Algorithms

For input $x$, let $F(x)$ denote the set of outputs that are the ``legal'' answers to a computational problem $F$. Suppose $x$ and members of $F(x)$ are so large that there is no time to read them in their entirety. We propose a model of {\em local computation algorithms} which, for a given input $x$, support queries by a user to values of specified locations $y_i$ in a legal output $y \in F(x)$. When more than one legal output $y$ exists for a given $x$, the local computation algorithm should output in a way that is consistent with at least one such $y$. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction.

We develop a technique which under certain conditions can be applied to construct local computation algorithms that run in {\em polylogarithmic} time and space. The technique is based on known constructions of small sample spaces of $k$-wise independent random variables and Beck's analysis in his algorithmic approach to the Lov{\'{a}}sz Local Lemma. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying $k$-SAT formulas.
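
To make the query model concrete, here is a hedged toy sketch (not the construction of the paper): a local query for the greedy maximal independent set determined by random vertex ranks, where deciding a single vertex recurses only into lower-ranked neighbors. The graph, seed and names below are hypothetical, and the sketch is written in Python only for illustration.

    import random

    # Hypothetical toy graph given as an adjacency list (not from the paper).
    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

    rng = random.Random(42)
    rank = {v: rng.random() for v in graph}   # shared randomness, fixed once
    memo = {}                                 # keeps answers consistent across queries

    def in_mis(v):
        """Answer a single query "is v in the greedy MIS?" by recursing only
        into neighbors of smaller rank, without ever reading the whole graph."""
        if v not in memo:
            memo[v] = all(not in_mis(u) for u in graph[v] if rank[u] < rank[v])
        return memo[v]

    print({v: in_mis(v) for v in graph})      # answers are consistent with one legal MIS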

Joint work with Ronitt Rubinfeld and Ning Xie.

2011/11/15 Ronen Shaltiel [+]  Dispersers for affine sources with sub-polynomial entropy

We construct an explicit disperser for affine sources over $\F_2^n$ with entropy $k = 2^{\log^{0.9} n} = n^{o(1)}$. This is a polynomial time computable function $D:\F_2^n \to \{0,1\}$ such that for every affine space $V$ of $\F_2^n$ that has dimension at least $k$, $D(V)=\{0,1\}$. This improves the best previous construction of Ben-Sasson and Kopparty that achieved $k = \Omega(n^{4/5})$ and is the first pseudorandom object for affine sources with entropy less than $\sqrt{n}$.

Our technique follows a high level approach that was developed in the context of dispersers for two independent sources in two papers by Barak, Kindler, Shaltiel, Sudakov and Wigderson, and Barak, Rao, Shaltiel and Wigderson. We design a "challenge-response game" for affine sources that enables the disperser to locate high entropy blocks in the source even though it only receives a single sample from the source distribution. In order to implement the game we rely on tools and ideas of Rao in the context of extractors for low-weight affine sources. The basic approach gives a disperser for affine sources with entropy larger than $\sqrt{n}$. We use the challenge-response game to perform a recursive win-win analysis that enables us to handle affine sources with entropy smaller than $\sqrt{n}$.

2011/11/22 Ilan Newman [+]  Every Property of Hyperfinite Graphs is Testable

A property testing algorithm for a property $\Pi$ in the bounded degree graph model [GR97] is an algorithm that, given access to the adjacency list representation of a graph $G=(V,E)$ with maximum degree at most $d$, accepts $G$ with probability at least $2/3$ if $G$ has property $\Pi$, and rejects $G$ with probability at least $2/3$ if it differs on more than $\epsilon d n$ edges from every $d$-degree bounded graph with property $\Pi$. A property is \emph{testable}, if for every $\epsilon,d$ and $n$, there is a property testing algorithm $A_{\epsilon,n,d}$ that makes at most $q(\epsilon,d)$ queries to an input graph of $n$ vertices, that is, a non-uniform algorithm that makes a number of queries that is independent of the graph size.

A $k$-disc around a vertex $v$ of a graph $G=(V,E)$ is the subgraph induced by all vertices of distance at most $k$ from $v$.
We show that the structure of a planar graph on large enough number of vertices, $n$, and with constant maximum degree $d$, is determined, up to the modification (insertion or deletion) of at most $\epsilon d n$ edges, by the frequency of $k$-discs for certain $k=k(\epsilon,d)$ that is independent of the size of the graph.
We can replace planar graphs by any hyperfinite class of graphs, which includes, for example, every graph class that does not contain a set of forbidden minors. We use this result to obtain new results and improve upon existing results in the area of property testing. In particular, we prove that:
- graph isomorphism is testable for every class of hyperfinite graphs,
- every graph property is testable for every class of hyperfinite graphs,
- every hyperfinite graph property is testable in the bounded degree graph model,
- A large class of graph parameters is approximable for hyperfinite graphs

Our results also give a partial explanation of the success of motifs in the analysis of complex networks.
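
To make the role of k-disc statistics concrete, here is a hedged sketch (not the algorithm of the paper) of estimating the k-disc frequency vector of a bounded-degree graph by sampling a few random vertices and exploring their distance-k neighborhoods; the "canonical form" used to compare discs below is a deliberately crude one, chosen only for illustration.

    import random
    from collections import Counter

    def k_disc_type(graph, v, k):
        """A crude signature of the k-disc around v: the sorted degree sequence
        of each BFS layer (an illustration, not a true canonical form)."""
        seen, frontier, layers = {v}, [v], []
        for _ in range(k + 1):
            layers.append(tuple(sorted(len(graph[u]) for u in frontier)))
            nxt = []
            for u in frontier:
                for w in graph[u]:
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        return tuple(layers)

    def disc_frequencies(graph, k, samples, rng=random.Random(0)):
        """Estimate the distribution of k-disc types from a few random vertices."""
        verts = list(graph)
        counts = Counter(k_disc_type(graph, rng.choice(verts), k) for _ in range(samples))
        return {t: c / samples for t, c in counts.items()}

    # Hypothetical bounded-degree example: a cycle on 8 vertices.
    cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
    print(disc_frequencies(cycle, k=1, samples=20))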

This is a joint work with Christian Sohler.

2011/11/29 Zohar Karnin [+]  Deterministic construction of a high dimensional l_p section in l_1^n for any p<2, with applications to Compressed Sensing

In Compressed Sensing there is a need for linear compression of sparse vectors. Such compression algorithms are characterized by three main parameters:
* Compression ratio (length of the compressed vector)
* Compression and decompression run-time
* Quality guarantee - noise-robustness of the compression and decompression algorithms

Quality guarantee is achieved when the kernel of the compression matrix A is a high dimensional subspace in which the l_1 and l_2 norms behave similarly.

In this work we construct, for any 1 < p < 2, a compression matrix A whose kernel is a high dimensional subspace of l_1^n in which the l_1 and l_p norms behave similarly. Since p < 2 we get a slightly weaker form of quality guarantee. However, unlike previous related results, our construction is deterministic, with a highly sparse A, having at most O(1) non-zero entries in each row. As a result, we get a shorter run-time for both the compression and decompression algorithms while keeping the other parameters (i.e., compression ratio and quality guarantee) almost the same as in previous constructions.

2011/12/6 Michael Langberg [+]  A Unified Framework for Approximating and Clustering Data

One of the great challenges in computational theory is the extraction of patterns from massive and high-dimensional data sets, e.g., clustering. The field of geometric clustering is an extensively studied one with a wide spectrum of ``real world'' applications.

A ``coreset'' for a given clustering optimization problem is a ``compressed'' representation of the data at hand, in the sense that a solution for the clustering problem with the (small) coreset as input would yield an approximate solution to the problem with the original (larger) input. Using traditional techniques, a coreset usually implies provable linear time algorithms for the corresponding clustering problem.

In this talk I will present a unified and abstract framework that yields the efficient construction of succinct coresets for several clustering problems. Representing the data by a set F of positive functions over a ground set X, our framework forges a link between the combinatorial complexity of the function family F at hand (measured in terms of classical VC dimension) and the paradigm of coresets. Our coresets are obtained via randomized sampling according to a delicately designed sampling distribution. Examples in which our framework has been successful (and improves over previous works) include the k-median, the k-line median, projective clustering, linear regression, low rank approximation, and subspace approximation problems.
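
As a hedged illustration of the sampling paradigm only (and not the precise sensitivity-based distribution of the paper), here is a toy sketch of building a weighted coreset for a 1-dimensional k-median-style cost by importance sampling with reweighting; the importance scores, data and parameters below are made up.

    import random

    def kmedian_cost(points, centers, weights=None):
        """Weighted cost: sum of weighted distances to the nearest center."""
        w = weights or [1.0] * len(points)
        return sum(wi * min(abs(p - c) for c in centers) for p, wi in zip(points, w))

    def sampled_coreset(points, m, rng=random.Random(0)):
        """Sample m points with probability proportional to a crude importance
        score, and reweight so the coreset cost is an unbiased estimate."""
        mu = sum(points) / len(points)
        scores = [abs(p - mu) + 1e-9 for p in points]
        total = sum(scores)
        probs = [s / total for s in scores]
        idx = rng.choices(range(len(points)), weights=probs, k=m)
        return [points[i] for i in idx], [1.0 / (m * probs[i]) for i in idx]

    data_rng = random.Random(1)
    points = [data_rng.uniform(0, 10) for _ in range(1000)]
    coreset, w = sampled_coreset(points, m=50)
    print(kmedian_cost(points, [3.0]), kmedian_cost(coreset, [3.0], w))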

This talk is based on joint works with Dan Feldman and Leonard Schulman.

2011/12/13 Gil Cohen [+]  On the Degree of Univariate Polynomials Over the Integers

We study the following problem raised by von zur Gathen and Roche [Combinatorica '97]: What is the minimal degree of a nonconstant polynomial $f:\{0,\ldots,n\}\to\{0,\ldots,m\}$? The above mentioned authors proved that for $m=1$, $\deg(f) = n-o(n)$.

In this talk we present lower bounds for larger values of $m$ (even $m$ exponential in $n$). We complement these bounds by proving the existence of not too high-degree polynomials.

Our proofs use a variety of techniques that we believe will find other applications as well. One technique shows how to handle a certain set of diophantine equations by working modulo a well chosen set of primes (i.e. a Boolean cube of primes). Another technique shows how to use lattice theory and Minkowski's theorem to prove the existence of a polynomial with certain properties.

One motivation for studying this problem for $m>1$ comes from the work of Shpilka and Tal [CCC '11] who showed that in order to understand the Fourier spectrum of symmetric Boolean functions (which has shown to relate to the problem of learning symmetric juntas) one needs to study polynomials $f:\{0,...,n\}\to \{0,1,2\}$ and prove lower bounds on their degree, which is exactly the question that we study here for the case $m=2$.

Joint work with Amir Shpilka and Avishay Tal.

2011/12/20 Benny Sudakov (UCLA) [+]  Nearly Complete Graphs Decomposable into Large Induced Matchings and their Applications

We describe two constructions of (very) dense graphs which are edge disjoint unions of large {\em induced} matchings. These constructions disprove (in a strong form) a conjecture of Meshulam, substantially improve a result of Birk, Linial and Meshulam on communicating over a shared channel, and (slightly) extend the analysis of H{\aa}stad and Wigderson of the graph test of Samorodnitsky and Trevisan for linearity. We can also use them to settle a combinatorial question of Vempala regarding a candidate rounding scheme for the directed Steiner tree problem.

Joint work with N. Alon and A. Moitra

2011/12/27 Avi Wigderson [+]  Local Correction of Codes and Euclidean Incidence Geometry

A classical theorem in Euclidean geometry asserts that if a set of points has the property that every line through two of them contains a third point, then they must all be on the same line. We prove several approximate versions of this theorem (and related ones), which are motivated from questions about locally correctable codes and matrix rigidity. The proofs use an interesting combination of combinatorial, algebraic and analytic tools.

Joint work with Boaz Barak, Zeev Dvir and Amir Yehudayoff

2012/1/3 Scott Aaronson (MIT) [+]  Quantum Money from Hidden Subspaces

Joint work with Paul Christiano

Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is
(1) public-key---meaning that anyone can verify a banknote as genuine, not only the bank that printed it, and
(2) cryptographically secure, under a "classical" hardness assumption that has nothing to do with quantum money.
Our scheme is based on hidden subspaces, encoded as the zero-sets of random multivariate polynomials. A main technical advance is to show that the "black-box" version of our scheme, where the polynomials are replaced by classical oracles, is unconditionally secure. Previously, such a result had only been known relative to a quantum oracle (and even there, the proof was never published).
Even in Wiesner's original setting---quantum money that can only be verified by the bank---we are able to use our techniques to patch a major security hole in Wiesner's scheme. We give the first private-key quantum money scheme that allows unlimited verifications and that remains unconditionally secure, even if the counterfeiter can interact adaptively with the bank.
Our money scheme is simpler than previous public-key quantum money schemes, including a knot-based scheme of Farhi et al. The verifier needs to perform only two tests, one in the standard basis and one in the Hadamard basis---matching the original intuition for quantum money, based on the existence of complementary observables.
Our security proofs use a new variant of Ambainis's quantum adversary method, and several other tools that might be of independent interest.

2012/1/10 Gabriel Nivasch (EPFL) [+]  Upper bounds for centerflats

For every fixed d and every n, we construct an n-point set G in R^d such that every line in R^d is contained in a halfspace that contains only 2n/(d+2) points of G (up to lower-order terms).

Apparently, the point set G satisfies the following more general property: For every k, every k-flat in R^d is contained in a halfspace that contains only (k+1) n / (d+k+1) points of G (up to lower-order terms).

In 2008, Bukh, Matousek, and Nivasch conjectured the following generalization of Rado's centerpoint theorem: For every n-point set S in R^d and every k, there exists a k-flat f in R^d such that every halfspace that contains f contains at least (k+1) n / (d+k+1) points of S (up to lower-order terms). (The centerpoint theorem is obtained by setting k=0.) Such a flat f would be called a "centerflat".

Thus, our upper bound construction shows that the leading constant (k+1)/(k+d+1) in the above conjecture is tight (certainly for k = 1, and apparently for all k).

The set G of our construction is the "stretched grid" -- a point set which has been previously used by Bukh et al. for other related purposes.

Joint work with Boris Bukh.

2012/1/17 Dana Moshkovitz (MIT) [+]  The Sliding Scale Conjecture From Intersecting Curves

The Sliding Scale Conjecture was posed by Bellare, Goldwasser, Lund and Russell in 1993 and has been open since. It says that there are PCPs with a constant number of queries, a polynomial-size alphabet and polynomially small error. We show that the conjecture can be proved assuming a certain geometric conjecture about curves over finite fields. The geometric conjecture states that there are small families of low degree curves that behave, both in their distribution over points and in the intersections between pairs of curves from the family, similarly to the family of all low degree curves.

2012/1/24 Klim Efremenko [+]  From Irreducible Representations to Locally Decodable Codes

Locally Decodable Code (LDC) is a code that encodes a message in a way that one can decode any particular symbol of the message by reading only a constant number of locations, even if a constant fraction of the encoded message is adversarially corrupted.
In this talk we will show a connection between LDCs and representation theory.

We show that if there exists an irreducible representation (\rho, V) of a group G and q elements g_1,g_2,..., g_q in $G$ such that there exists a linear combination of matrices \rho(g_i) that is of rank one, then we can construct a $q$-query Locally Decodable Code C:V-> F^G. We show the potential of this approach by constructing constant query LDCs of sub-exponential length matching the parameters of the best known constructions.
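
For background only (this is the textbook example, not the construction of the talk), here is a hedged sketch of the 2-query locally decodable Hadamard code over F_2: each message bit can be decoded by reading two positions of a corrupted codeword and XORing them, with a small majority vote for robustness. All names and parameters are illustrative.

    import random

    def dot(a, x):
        """Inner product over F_2 of bit-vectors encoded as integers."""
        return bin(a & x).count("1") % 2

    def hadamard_encode(x, n):
        """Codeword of length 2^n: position a holds <a, x> mod 2."""
        return [dot(a, x) for a in range(2 ** n)]

    def local_decode_bit(word, i, n, trials=15, rng=random.Random(0)):
        """Decode bit i with 2 queries per trial: read positions a and a^e_i
        and XOR them; a majority vote over trials handles a few corruptions."""
        votes = sum(word[a] ^ word[a ^ (1 << i)]
                    for a in (rng.randrange(2 ** n) for _ in range(trials)))
        return int(votes > trials // 2)

    n, x = 4, 0b1010                          # hypothetical 4-bit message
    word = hadamard_encode(x, n)
    word[3] ^= 1                              # corrupt one codeword position
    decoded = sum(local_decode_bit(word, i, n) << i for i in range(n))
    print(decoded == x)                       # True with high probability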

No prior knowledge in representation theory will be assumed.

2012/1/31 Alexey Glibichuk [+]  Incidences over the field of prime order

http://tq.cs.tau.ac.il/seminar/Glibichuk_Abstract.pdf

In 1983 J. Beck proved the following incidence theorem in R^2. Let P be a set of points in R^2. Then either P contains c|P| points on a straight line, or the pairs of points of P determine at least c|P|^2 distinct lines, where c is an absolute constant. We write |S| for the number of elements of a finite set S. Beck's original paper was followed in the same issue of the same journal by a result of Szemeredi and Trotter stating that the number of incidences between m straight lines and n points in R^2 is $O(m+n+(mn)^{2/3})$. The Szemeredi-Trotter theorem implies Beck's theorem as a corollary. The analogue of Beck's theorem in the finite field model was not known for a long time.

A breakthrough was made by Jean Bourgain, Nets Katz and Terence Tao.

2012/2/6 Roy Schwartz [+]  Submodular Maximization

The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization.
In addition to the fact that it is common for utility functions in economics and algorithmic game theory to be submodular, such functions also play a major role in combinatorics, graph theory and combinatorial optimization.
A partial list of well known problems captured by submodular maximization includes: Max-Cut, Max-DiCut, Max-k-Cover, Generalized-Assignment, several variants of Max-SAT and some welfare and scheduling problems.

Classical works on submodular maximization problems are mostly combinatorial in nature.
Recently, however, many results based on continuous algorithmic tools have emerged.
The main bottleneck in the continuous approach is how to approximately solve a non-convex relaxation for the submodular problem at hand.
A simple and elegant method, called ``continuous greedy'', successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives.
We present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for various applications.
Some notable immediate implications are information-theoretic tight approximations for Submodular Max-SAT and Submodular-Welfare with k players, for any number of players k, and an improved (1/e)-approximation for maximizing a non-monotone submodular function subject to a matroid or O(1)-knapsack constraints.

We show that continuous methods can be further used to obtain improved results in other settings.
Perhaps the most basic submodular maximization problem is the problem of Unconstrained Submodular Maximization, which captures some well studied problems, such as: Max-Cut, Max-DiCut, and some variants of maximum facility location and Max-SAT.
Exploiting some symmetry properties of the problem, we present a simple information-theoretic tight (1/2)-approximation algorithm, which unlike previous known algorithms keeps a fractional inner state, i.e., it is based on a continuous approach.
We note that our algorithm can be further simplified to obtain a purely combinatorial algorithm which runs only in linear time.
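
As a hedged sketch of a simple combinatorial algorithm in this spirit, the code below implements the randomized "double greedy" for unconstrained submodular maximization, which achieves 1/2 in expectation; the cut-function example is made up, and this is not claimed to be the exact algorithm of the talk.

    import random

    def double_greedy(ground, f, rng=random.Random(0)):
        """Randomized double greedy for unconstrained submodular maximization:
        maintain a lower set X and an upper set Y and decide each element with
        probability proportional to its clipped marginal gains."""
        X, Y = set(), set(ground)
        for e in ground:
            a = max(f(X | {e}) - f(X), 0.0)    # gain of adding e to X
            b = max(f(Y - {e}) - f(Y), 0.0)    # gain of removing e from Y
            p = 1.0 if a + b == 0 else a / (a + b)
            if rng.random() < p:
                X.add(e)
            else:
                Y.discard(e)
        return X                               # X == Y at this point

    # Hypothetical example: the cut function of a small graph is submodular.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    cut = lambda S: float(sum((u in S) != (v in S) for u, v in edges))
    print(double_greedy(range(4), cut))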

2012/3/13 Amir Shpilka [+]  On Identity Testing of Tensors, Low-rank Recovery and Compressed Sensing

We study the problem of designing efficient, deterministic, black-box polynomial identity testing algorithms for tensors, and obtain an explicit construction of a quasi-polynomial sized hitting set. We also consider the question of recovering the tensor deterministically, and connect this problem to the question of Low-rank-recovery (in the noiseless case). We then show a connection between low-rank recovery and the task of sparse (vector) recovery: any sparse-recovery algorithm yields a low-rank recovery scheme, with equivalent parameters. We obtain this reduction using linear-algebraic techniques, and not using convex optimization, which is more commonly seen in compressed sensing and low-rank-recovery algorithms. Finally, we shall make a connection to rank-metric codes, as studied in coding theory.

This is a joint work with Michael Forbes

2012/3/20 Ran Raz [+]  The Surprise Examination Paradox and the Second Incompleteness Theorem

We give a new proof for Godel's second incompleteness theorem, based on Kolmogorov complexity, Chaitin's incompleteness theorem, and an argument that resembles the surprise examination paradox.

We then go the other way around and suggest that the second incompleteness theorem gives a possible resolution of the surprise examination paradox. Roughly speaking, we argue that the flaw in the derivation of the paradox is that it contains a hidden assumption that one can prove the consistency of the mathematical theory in which the derivation is done; which is impossible by the second incompleteness theorem.

I will start with a short introduction to the known proofs for Godel's incompleteness theorems and their relations to the liar paradox, Kolmogorov complexity and Berry's paradox.

No knowledge in logic will be assumed.

Joint work with Shira Kritchman

2012/3/27 Eden Chlamtac [+]  Smaller integrality gaps and improved approximations using hierarchies of convex programs

We consider natural convex relaxations of integer programs, such as Linear Programs (LP) and Semi-Definite Programs (SDP), and examine how well they approximate various problems in combinatorial optimization.
The "integrality gap" -- the worst-case gap between the optimum of a convex relaxation and that of the integer program it approximates -- can sometimes be reduced by considering a hierarchy of relaxations.
These hierarchies give a systematic recipe to strengthen any convex relaxation repeatedly by one of several methods.

We will look at different hierarchies, and some universal properties of the LP and SDP relaxations derived from them. Moreover, we will see how, for certain NP-hard optimization problems, we can achieve improved approximations using such strengthened relaxations while maintaining polynomial running time overall.

2012/4/3 Alon Rosen [+]  Pseudorandom Functions and Lattices

We give direct constructions of pseudorandom function families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively small low-depth arithmetic or boolean circuits (e.g., in NC$^{1}$ or even TC$^{0}$). In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new ``derandomization'' technique for the learning with errors problem which, in effect, generates the error terms deterministically.
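
As a hedged toy illustration of the "deterministic error" idea (in the spirit of learning with rounding, where a rounded inner product replaces the random error term of LWE), consider the sketch below; the moduli and dimensions are toy-sized and insecure, and this is not the exact construction of the paper.

    import random

    q, p, n = 257, 4, 8                        # toy, insecure parameters
    rng = random.Random(0)
    s = [rng.randrange(q) for _ in range(n)]   # secret vector

    def round_down(x):
        """Deterministically round an element of Z_q to Z_p; the 'error'
        is implicit in the rounding rather than sampled at random."""
        return (x * p) // q

    def sample():
        a = [rng.randrange(q) for _ in range(n)]
        inner = sum(ai * si for ai, si in zip(a, s)) % q
        return a, round_down(inner)            # no explicit noise term is added

    print(sample())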

Joint work with Abhishek Banerjee and Chris Peikert.

2012/4/17 Noga Zewi [+]  An additive combinatorics approach to the log-rank conjecture in communication complexity

For a {0,1}-valued matrix M let CC(M) denote the deterministic communication complexity of the boolean function associated with M. The log-rank conjecture of Lovasz and Saks [FOCS 1988] states that CC(M) <= log^c(rank(M)) for some absolute constant c, where rank(M) denotes the rank of M over the field of real numbers.
We show that CC(M) <= c rank(M)/log rank(M) for some absolute constant c, assuming a well-known conjecture from additive combinatorics, known as the Polynomial Freiman-Ruzsa (PFR) conjecture.

Our proof is based on the study of the "approximate duality conjecture" which was recently suggested by Ben-Sasson and Zewi [STOC 2011] and studied there in connection to the PFR conjecture. First we improve the bounds on approximate duality assuming the PFR conjecture. Then we use the approximate duality conjecture (with improved bounds) to get the aforementioned upper bound on the communication complexity of low-rank matrices, and this part uses the methodology suggested by Nisan and Wigderson [Combinatorica 1995].

Joint work with Eli Ben-Sasson and Shachar Lovett.

2012/4/24 Elad Haramaty [+]  Optimal testing of multivariate polynomials over small prime fields

We consider the problem of testing whether a given function f is close to an n-variate degree d polynomial over the finite field of q elements. The natural, low-query test for this property would be to pick the smallest dimension t = t(q,d) ~ d/q such that every function of degree greater than d reveals this feature on some t-dimensional affine subspace, and to test that f, when restricted to a random t-dimensional affine subspace, is a polynomial of degree at most d on this subspace. Such a test makes only q^t queries, independent of n. Previous works, by Alon et al., Kaufman and Ron, and Jutla et al., showed that this natural test rejects functions that are Omega(1)-far from degree-d polynomials with probability at least Omega(q^{-t}) (some of these results hold for all fields, while others hold only for fields of prime order). Thus to get a constant probability of detecting functions that are at constant distance from the space of degree d polynomials, the tests made q^{2t} queries. Kaufman and Ron also noted that when q is prime, then q^t queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. It was unclear whether the soundness analysis of these tests was tight, and this question relates closely to the task of understanding the behavior of the Gowers Norm. This motivated the work of Bhattacharyya et al., who gave an optimal analysis for the case of the binary field and showed that the natural test actually rejects functions that are Omega(1)-far from degree-d polynomials with probability at least Omega(1).

In this work we give an optimal analysis of this test for all fields, showing that the natural test does indeed reject functions that are Omega(1)-far from degree d polynomials with Omega(1) probability. Our analysis thus shows that this test is optimal (matches known lower bounds) when q is prime. Our approach extends the proof technique of Bhattacharyya et al., however it has to overcome many technical barriers in the process. The natural extension of their analysis leads to an O(q^d) query complexity, which is worse than that of Kaufman and Ron for all q except 2! The main technical ingredient in our work is a tight analysis of the number of “hyperplanes” (affine subspaces of co-dimension 1) on which the restriction of a degree d polynomial has degree less than d. We show that the number of such hyperplanes is at most O(q^{t_{q,d}}), which is tight to within constant factors.
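
To make the natural test concrete in the simplest case q = 2 (the binary-field setting of Bhattacharyya et al. mentioned above), here is a hedged sketch: over F_2 a function has degree at most d iff it sums to zero over every (d+1)-dimensional affine subspace, so one run of the test picks a random point and d+1 random directions and checks this sum. The example polynomial is made up.

    import random
    from itertools import product

    def degree_test(f, n, d, rng=random.Random(0)):
        """One run of the natural F_2 test: check that f sums to 0 over the
        affine subspace b + span(d+1 random directions)."""
        b = [rng.randrange(2) for _ in range(n)]
        dirs = [[rng.randrange(2) for _ in range(n)] for _ in range(d + 1)]
        total = 0
        for coeffs in product([0, 1], repeat=d + 1):
            point = [(bi + sum(c * v[i] for c, v in zip(coeffs, dirs))) % 2
                     for i, bi in enumerate(b)]
            total ^= f(point)
        return total == 0                      # always True when deg(f) <= d

    # Hypothetical degree-2 polynomial over F_2^4.
    f = lambda x: (x[0] * x[1] + x[2] + x[3]) % 2
    print(all(degree_test(f, 4, 2) for _ in range(50)))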

Joint work with Amir Shpilka and Madhu Sudan

2012/5/1 Ankit Gupta [+]  Reconstruction of Depth-4 Multilinear Circuits with Top Fan-in 2

A reconstruction algorithm for a multivariate polynomial f in F[x_1, …, x_n] is given blackbox access to f and is required to efficiently output a (succinct) representation of f in some suitable model of computation. The algorithm can make adaptive queries to the blackbox to evaluate f on inputs of its choice.

The simplest representation of a polynomial is as a sum of monomials, i.e., as a SP (depth-2) circuit. The reconstruction problem in this case is referred to as the “interpolation problem” and admits efficient algorithms (Klivans & Spielman). Many interesting polynomials, e.g., determinant, however, have exponentially long representations as sums of monomials, whereas as arithmetic formulas or circuits, they can be represented with quasipolynomial, or even polynomial, complexity. Unfortunately, there are very strong hardness results for the reconstruction of general circuits classes. Hence positive results for this problem are known only for restricted circuit classes.

In this talk I’ll present a recent work in which we give a randomized algorithm for reconstructing multilinear SPSP(2) circuits, i.e., multilinear depth-4 circuits with fan-in 2 at the top ‘+’ gate. The algorithm is given blackbox access to a polynomial f in F[x_1,…,x_n] computable by a multilinear SPSP(2) circuit of size s and outputs an equivalent multilinear SPSP(2) circuit, runs in time poly(n,s), and works over any field F.

This is the first reconstruction result for any model of depth-4 arithmetic circuits. Prior to our work, reconstruction results for bounded depth circuits were known only for depth-2 arithmetic circuits (Klivans & Spielman), SPS(2) circuits (depth-3 arithmetic circuits with top fan-in 2) (Shpilka), and SPS(k) with k=O(1) (Karnin & Shpilka). Moreover, the running times of these algorithms have a polynomial dependence on |F| and hence do not work for infinite fields such as the rationals.

2012/5/6 Yonatan Goldhirsh [+]  Testing Formula Satisfaction

Property Testing deals with the following question: Distinguish using as few queries to the input as possible, ideally with a number of queries independent of the input size, between inputs that satisfy a given property and inputs that are far from any possible input satisfying the property. In the massively parametrized model, a fixed part of the input is given to the algorithm in advance, on which the algorithm has to be exact (i.e. the approximation of "not being far from a satisfying input" can only be made for the input part not given in advance).

In this talk we consider properties defined by read-once Boolean formulas, where the formula is given in advance. Such formulas are representable as trees with their nodes labeled by the corresponding gates, and the part of the input not given in advance is the assignment, i.e. the values at the leaves.

For all properties given by And/Or gates, and in fact all properties where the gates are of bounded arity, we give a test whose number of queries does not depend on the input size at all. In essence this means that untestability cannot be "broken down" to small gates. On the other hand, for non-Boolean alphabets there is in fact a simple binary gate that can be used for formulas that are not testable with a constant number of queries.

We also discuss some related questions, such as instances where we can estimate the actual distance of the input from satisfying the formula, and a test for And/Or formulas whose number of queries is only quasi-polynomial in the approximation parameter.

2012/5/15 Michael Pinsker [+]  Schaefer's theorem for graphs, or: why to consult the infinite at times

I will present a complexity classification result for a certain class of computational problems called *Graph-SAT problems*. A Graph-SAT problem is given by a fixed finite set \Psi of quantifier-free first-order formulas in the language of graphs. The input of the problem consists of a set V of variables and a conjunction \Phi of statements (``constraints'') about these variables, where each statement is taken from \Psi; the question to decide is whether or not \Phi is satisfiable in a graph.

The complexity of Graph-SAT problems depends on \Psi - the larger \Psi, the more complicated instances of the problem can get, and the more complex is the problem. We prove that either \Psi is contained in one out of 17 classes of graph formulas and the corresponding problem can be solved in polynomial time, or the problem is NP-complete.

The perhaps surprising method of proof is to assign to every \Psi an infinite structure definable in the random graph, and then throw in some model theory, universal algebra, and structural Ramsey theory to analyze these structures.

This is a joint work with Manuel Bodirsky.

2012/5/29 Zvika Brakerski [+]  Efficient Interactive Coding Against Adversarial Noise

We study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors.

We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity (CC). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(CC)}$. Prior works could not achieve efficient simulation in the adversarial case.

Joint work with Yael Tauman Kalai.

2012/6/3 Amir Yehudayoff [+]  Some reconstruction problems

The talk will introduce some natural data-reconstruction problems, with possible connections to learning theory, cryptography, computational biology and more.

A key example is the following. Assume V is a set of k n-bit binary vectors. Let p be a distribution on V. Assume we get noisy samples from V by first choosing v according to p and then flipping each entry in v with probability 0.49, independently. The goal is to reconstruct V,p from such noisy samples.

We will show how to learn V,p in time quasi-polynomial in k. The algorithm uses a new structure called a partial-ID graph, which captures "impostor" relations in a given set of vectors.
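
As a toy illustration of the sample model only (not the partial-ID-graph algorithm of the talk), here is a hedged sketch of the degenerate case k = 1: with a single hidden vector, a per-coordinate majority vote over enough noisy samples recovers it, since each bit is flipped with probability strictly below 1/2. All parameters are made up.

    import random

    rng = random.Random(0)
    n, noise, samples = 20, 0.49, 200_000
    v = [rng.randrange(2) for _ in range(n)]       # hidden vector (k = 1 case)

    def noisy_sample():
        """Flip each coordinate of v independently with probability `noise`."""
        return [b ^ (rng.random() < noise) for b in v]

    counts = [0] * n
    for _ in range(samples):
        for i, bit in enumerate(noisy_sample()):
            counts[i] += bit

    recovered = [int(c > samples / 2) for c in counts]
    print(recovered == v)                          # True with high probability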

2012/6/12 Ariel Gabizon [+]  Extractors for Polynomials Sources over Constant-Size Fields of Small Characteristic

Let F be the field of q elements, where q = p^l for a prime p. Informally speaking, a polynomial source is a distribution over F^n sampled by low degree multivariate polynomials. In this paper, we construct extractors for polynomial sources over fields of constant size q, assuming the characteristic p is sufficiently small relative to q. For instance, suppose a distribution X over F^n has support size q^k and is sampled by polynomials of individual degree d and total degree D.
When p, D and the "entropy rate" k/n are constant, we get an extractor over constant-size fields with constant error. The only previous construction by Dvir, Gabizon and Wigderson required a field of size polynomial in n.

Joint work with Eli Ben-Sasson

2012/6/19 Andrej Bogdanov [+]  Pseudorandomness for linear-length branching programs and stack machines

We give a simple distribution that is pseudorandom against (1) Oblivious branching programs over a binary alphabet; (2) Arbitrary branching programs over a sufficiently large alphabet; (3) Log-space randomized Turing Machines, extended with a stack, that make a constant number of passes over their random tape. This distribution can be sampled by a pseudorandom generator with small linear stretch.

These results are consequences of the following property of the distribution over {0, 1}^n: Given any pair of subsets I, J of size 0.99n each and boolean functions f, g: {0, 1}^{0.99n} -> {0, 1}, the joint distribution (f(x|I), g(x|J)) is statistically close in the case x is uniformly random and in the case x is chosen from the pseudorandom distribution.

Based on joint work with Periklis Papakonstantinou and Andrew Wan.

* Special joint meeting with the Greater Tel Aviv Area Cryptography Seminar


Schedule for 2010/2011

Date Speaker Title
2010/10/19 Moni Naor [+]  Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation

The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years, when Knuth analyzed linear probing, the trade-off between these aspects of performance is still not completely understood.

In this talk I will describe recent work that provides the construction of the first dynamic dictionary that enjoys the best of both worlds: it stores n elements using only (1 + o(1))B bits, where B is the information-theoretic lower bound for representing a set of size n taken from a universe of size u, and guarantees constant-time operations in the worst case with high probability. This problem was open even in the amortized setting. One of the main ingredients of our construction is a permutation-based variant of cuckoo hashing.
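
For background, here is a hedged sketch of plain cuckoo hashing (two tables, two hash functions, two-probe lookups, insertions that evict and relocate); it is neither the permutation-based variant nor the backyard construction of the talk, and the hash functions and sizes below are toy choices.

    import random

    class CuckooSet:
        """Minimal cuckoo hash set: each key lives in one of two tables, at a
        position determined by one of two (toy) hash functions."""

        def __init__(self, size=64, seed=0):
            rng = random.Random(seed)
            self.size = size
            self.salts = (rng.getrandbits(30), rng.getrandbits(30))
            self.tables = [[None] * size, [None] * size]

        def _h(self, which, key):
            return hash((self.salts[which], key)) % self.size

        def lookup(self, key):
            """Worst-case two probes."""
            return any(self.tables[t][self._h(t, key)] == key for t in (0, 1))

        def insert(self, key, max_kicks=100):
            """Place the key, evicting occupants back and forth; a full
            implementation rehashes when the eviction chain gets too long."""
            t = 0
            for _ in range(max_kicks):
                pos = self._h(t, key)
                key, self.tables[t][pos] = self.tables[t][pos], key
                if key is None:
                    return
                t ^= 1
            raise RuntimeError("eviction chain too long; rehash needed")

    s = CuckooSet()
    for k in range(30):
        s.insert(k)
    print(all(s.lookup(k) for k in range(30)), s.lookup(999))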

An application of the above result is an approximate set membership data structure (AKA Bloom Filter) that is within 1+o(1) of optimal size and works in worst case O(1) time.

Joint work with Yuri Arbitman and Gil Segev

2010/10/26 Noga Zewi [+]  From affine to two-source extractors via approximate duality

Two-source and affine extractors and dispersers (known also as bipartite Ramsey graphs) are fundamental objects studied in the context of derandomization. We show how to construct two-source extractors and dispersers for arbitrarily small min-entropy rate in a black-box manner from affine extractors with sufficiently good parameters. We present two black-box constructions. Both constructions work for min-entropy rate less than half.

One of them can potentially reach arbitrarily small min-entropy rate provided the affine extractor used to construct it outputs, on affine sources of min-entropy rate half, a relatively large number of output bits and has sufficiently small error. Our analysis relies on the study of approximate duality, a concept related to the polynomial Freiman-Ruzsa conjecture (PFR) from additive combinatorics.

For more details see: ECCC 2010-144

Joint work with Eli Ben-Sasson.

2010/11/02 Ronen Shaltiel [+]  Typically-correct derandomization

A main goal of complexity theory is to show that every randomized polynomial time algorithm can be simulated by a deterministic one. A typically-correct simulation of a randomized algorithm is a deterministic algorithm (preferably of the same complexity) that simulates the randomized algorithm correctly on "most inputs". This relaxed goal (introduced by Goldreich and Wigderson) is sometimes easier to obtain than standard derandomization. Specific examples are:

  1. We show that every randomized algorithm that can be implemented by polynomial size constant depth circuits has a polynomial time typically-correct simulation. In contrast, the best known always-correct simulations do not run in polynomial time.
  2. We show that every randomized communication protocol has a typically-correct simulation by a deterministic protocol with a linear loss in communication complexity. We also have similar results for decision trees and streaming algorithms.
  3. For general randomized polynomial time algorithms we have conditional results, giving a typically-correct simulation in polynomial time. The assumptions we use are incomparable to those used in hardness versus randomness tradeoffs (which achieve always-correct derandomization). Our results improve previous results of Goldreich and Wigderson.

The results follow from general approaches to achieve typically-correct derandomization in various algorithmic settings. I will discuss these approaches and also, whether the relaxed goal of typically correct derandomization implies circuit lower bounds.

The talk is based on the following papers that can be downloaded from my web-page.

  • Ronen Shaltiel. Weak derandomization of weak algorithms: explicit versions of Yao's lemma, CCC 2009.
  • Jeff Kinne, Dieter van-Melkebeek and Ronen Shaltiel. Pseudorandom generators, typically-correct derandomization, and circuit lower bounds, RANDOM 2009.
  • Ronen Shaltiel. Typically-correct derandomization. SIGACT news complexity theory column 66, SIGACT News: 41(2):57-72, 2010.

2010/11/09 Noam Livne [+]  Sequential Rationality in Cryptographic Protocols

Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers.

We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable ``empty threats'' of non-sequential solution concepts.

To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria (Dodis-Halevi-Rabin, Crypto 00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.

2010/11/16 Oded Goldreich [+]  In a World of P=BPP

We show that proving results such as BPP=P essentially necessitates the construction of suitable pseudorandom generators (i.e., generators that suffice for such derandomization results). In particular, the main incarnation of this equivalence refers to the standard notion of uniform derandomization and to the corresponding pseudorandom generators (i.e., the standard uniform notion of ``canonical derandomizers''). This equivalence bypasses the question of which hardness assumptions are required for establishing such derandomization results, which has received considerable attention in the last decade or so (starting with Impagliazzo and Wigderson [JCSS, 2001]).

We also identify a natural class of search problems that can be solved by deterministic polynomial-time reductions to BPP. This result is instrumental to the construction of the aforementioned pseudorandom generators (based on the assumption BPP=P), which is actually a reduction of the ``construction problem'' to BPP.

2010/11/23 Alex Samorodnitsky [+]  Maximal eigenvalues of subsets of the cube and some applications

Given a number k, we are interested in the maximal eigenvalue an induced k-vertex subgraph of the discrete cube can have.

We will give some partial answers and describe some connections of this question to bounds on binary codes and to random walks in the cube. In particular we will give a (sketchy) proof of the first LP bound for binary codes using eigenvalues.
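
A hedged numerical illustration of the quantity in question, with made-up small parameters: build the adjacency matrix of the n-dimensional Boolean cube, take a k-vertex subset, and compute the largest eigenvalue of the induced subgraph.

    import numpy as np

    def hypercube_adjacency(n):
        """Adjacency matrix of the n-dimensional discrete cube (2^n vertices)."""
        N = 2 ** n
        A = np.zeros((N, N))
        for v in range(N):
            for i in range(n):
                A[v, v ^ (1 << i)] = 1
        return A

    def max_eigenvalue_induced(A, subset):
        """Largest eigenvalue of the subgraph induced on `subset`."""
        return float(np.linalg.eigvalsh(A[np.ix_(subset, subset)])[-1])

    A = hypercube_adjacency(4)
    subcube = list(range(8))                   # a 3-dimensional subcube of the 4-cube
    print(max_eigenvalue_induced(A, subcube))  # 3.0, since the induced 3-cube is 3-regular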

2010/11/30 Eli Ben-Sasson [+]  Limits on the rate of locally testable affine-invariant codes

A linear code is said to be affine-invariant if the coordinates of the code can be viewed as a vector space and the code is invariant under an affine transformation of the coordinates. A code is said to be locally testable if proximity of a received word to the code can be tested by querying the received word in a few coordinates. Locally testable codes have played a critical role in the construction of probabilistically checkable proofs and most such codes originate from simple affine invariant codes (in particular the Reed-Muller codes). Furthermore it turns out that the local testability of these codes can really be attributed to their affine-invariance. It was hoped that by studying broader classes of affine-invariant codes, one may find nicer, or simpler, locally testable codes, and in particular improve (significantly) on the rate achieved by Reed-Muller codes.

In this work we show that low rate is an inherent limitation of affine-invariant codes. We show that any k-query affine-invariant binary code is contained in a 2k-query testable Reed-Muller code. In fact our result shows that any affine-invariant code that has a k-local constraint (i.e., a weight k codeword in its dual), a necessary condition for k-query local testability, is contained in a Reed-Muller code that is 2k-locally characterized (i.e., its dual is spanned by words of weight at most 2k).

While the structure of affine-invariant codes has been explored quite extensively in the recent past (Kaufman and Sudan [2008], Grigorescu et al. [2008, 2009]), relating the locality of constraints in such codes to the rate has been a non-trivial challenge. Such questions lead to the task of showing that a class of systems of multivariate polynomial equations have no common zeroes over finite fields. Our result effectively shows that a certain class of such systems (subclasses of diagonal systems) has no common zeroes.

Joint work with Madhu Sudan.

2010/12/07 Amir Yehudayoff [+]  Pseudorandom generators for regular branching programs

We give new pseudorandom generators for regular read-once branching programs of small width. A branching program is regular if the in-degree of every vertex in it is (0 or) 2. For every width d and length n, our pseudorandom generator uses a seed of length O( (log(d) + loglog(n) + log(1/ε) ) log(n) ) to produce n bits that cannot be distinguished from a uniformly random string by any regular width d length n read-once branching program, except with probability ε.

We also give a result for general read-once branching programs, in the case that there are no vertices that are reached with small probability. We show that if a (possibly non-regular) branching program of length n and width d has the property that every vertex in the program is traversed with probability at least p on a uniformly random input, then the error of the generator above is at most 2ε/p^2.

Joint work with Mark Braverman, Anup Rao and Ran Raz. ECCC version

2010/12/21 Venkat Guruswami [+]  Bypassing UGC: Inapproximability of Subspace Approximation

The Unique Games conjecture (UGC) has emerged in recent years as the starting point for optimal inapproximability results for several problems. However, these problems are only known to be Unique-Games-hard but not Unique-Games-"complete", leaving open the possibility that while many of the implications of the UGC are true, the conjecture itself is false. The question of whether inapproximability results shown under the UGC can be shown without appealing to the conjecture is thus a well-motivated one. In this work, we bypass the UGC assumption in inapproximability results for two geometric problems - subspace approximation and Grothendieck problems in \ell_p-norms - obtaining a tight NP-hardness result in each case.

The talk will focus on the \ell_p-subspace approximation problem (as it is the easier reduction). Here, the input consists of a set S of points in R^n and a parameter k (possibly depending on n). The goal is to find a k-dimensional subspace H of R^n that minimizes the sum of the p'th powers of the Euclidean distances of the points in S from H. This problem is a generalization of various special cases for different values of p such as low-rank matrix approximation (p=2) or computing the radii of point sets (p = \infty).
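
As a hedged sketch of the special case p = 2 mentioned above (classical low-rank approximation, solved exactly by the SVD, and not the hardness result of the talk), the snippet below computes the cost of the best k-dimensional subspace in two equivalent ways; the data are random and purely illustrative.

    import numpy as np

    def best_subspace_cost(S, k):
        """For p = 2 the best k-dimensional subspace through the origin is
        spanned by the top k right singular vectors; the cost equals the sum
        of the remaining squared singular values."""
        _, sing, Vt = np.linalg.svd(S, full_matrices=False)
        H = Vt[:k]                                 # orthonormal basis of the subspace
        proj = S @ H.T @ H                         # projections of the points onto it
        cost = float(np.sum(np.linalg.norm(S - proj, axis=1) ** 2))
        return cost, float(np.sum(sing[k:] ** 2))  # the two numbers agree

    rng = np.random.default_rng(0)
    S = rng.normal(size=(100, 5))                  # 100 points in R^5
    print(best_subspace_cost(S, k=2))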

For finite p > 2, we prove that this problem is NP-hard to approximate within any factor less than the p'th moment E[|g|^p] of the standard normal variable g ~ N(0,1). Previously, the same inapproximability factor was shown assuming the UGC. A matching SDP based algorithm is also known (when k=n-1).

This is the *first* approximation threshold, proven under P \neq NP, that involves the Gaussian random variable in a fundamental way. Note that the problem statement itself has no mention of Gaussians. The talk will explain the appearance of the Gaussian distribution in a dictatorship test that quantitatively utilizes the Central Limit Theorem, which is then combined with a smooth version of Label Cover to give the inapproximability result. [No background knowledge of UGC-based results is needed for the talk.]

Joint work with Prasad Raghavendra, Rishi Saket, and Yi Wu.

2010/12/28 Iftach Haitner [+]  Efficiency Improvements in Constructing Pseudorandom Generators from One-way Functions

We give a new construction of pseudorandom generators from any one-way function. The construction achieves better parameters and is simpler than that given in the seminal work of Hastad, Impagliazzo, Levin and Luby [SICOMP '99]. The key to our construction is a new notion of next-block pseudoentropy, which is inspired by the notion of ``inaccessible entropy'' recently introduced in [Haitner, Reingold, Vadhan and Wee STOC '09]. An additional advantage over all previous constructions is that our pseudorandom generators are highly parallelizable and invoke the one-way function in a non-adaptive manner. Using [Applebaum, Ishai and Kushilevitz SICOMP '06], this implies the existence of pseudorandom generators in NC0 based on the existence of one-way functions in NC1.

2011/01/11 Raghu Meka [+]  Pseudorandom Generators from Invariance Principles

Invariance principles or limit theorems have recently found several important applications in theoretical computer science. In this talk I'll present some recent results with the broad theme of constructing pseudorandom generators from invariance principles.

The first set of results concerns the natural question of constructing pseudorandom generators (PRGs) for low-degree polynomial threshold functions (PTFs). We give a PRG with seed-length log n/eps^{O(d)} fooling degree d PTFs with error at most eps. Previously, no nontrivial constructions were known even for quadratic threshold functions and constant error eps. For the class of degree 1 threshold functions or halfspaces, we construct PRGs with much better dependence on the error parameter eps and obtain a PRG with seed-length O(log n + log^2(1/eps)).

The second set of results concerns "discrete central limit theorems", and fooling linear forms over 0-1 inputs in total variation distance.

2011/01/18 Dana Moshkovitz [+]  Hardness of Approximately Solving Linear Equations Over Reals

We consider the problem of approximately solving a system of homogeneous linear equations over reals, where each equation contains at most three variables. Since the all-zero assignment always satisfies all the equations exactly, we restrict the assignments to be ``non-trivial". Here is an informal statement of our result: it is NP-hard to distinguish whether there is a non-trivial assignment that satisfies $1-\delta$ fraction of the equations or every non-trivial assignment fails to satisfy a constant fraction of the equations with a ``margin" of $\Omega(\sqrt{\delta})$. Unlike the well-studied case of linear equations over finite fields, for equations over reals, the best approximation algorithm known (SDP-based) is the same no matter whether the number of variables per equation is two or three.

Our result is motivated by the following potential approach to proving The Unique Games Conjecture:

1. Prove the NP-hardness of solving approximate linear equations over reals, for the case of three variables per equation (we prove this result).
2. Prove the NP-hardness of the problem for the case of two variables per equation, possibly via a reduction from the three variable case.
3. Prove the Unique Games Conjecture.

An interesting feature of our result is that it shows an NP-hardness result that matches the performance of a non-trivial SDP-based algorithm. Indeed, the Unique Games Conjecture predicts that an SDP-based algorithm is optimal for a huge class of problems (e.g. all CSPs by Raghavendra's result).

This is joint work with Subhash Khot.

2011/02/22 Yuval Ishai [+]  On the Circuit Complexity of Universal Hashing and Cryptography

Universal hash functions are fundamental algorithmic constructs with widespread use throughout computer science. It was believed that the computation of such functions requires circuits of super-linear size. We show that this is not the case by giving an explicit construction of universal hash functions mapping n bits to n bits that can be computed by circuits of size O(n).
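
For context, here is a hedged sketch of a standard universal family mapping n bits to n bits, h_A(x) = Ax over F_2 with a uniformly random matrix A; it is universal, but its description and naive evaluation cost are quadratic in n, which is exactly the super-linear overhead that the construction described above avoids. Parameters are toy-sized.

    import random

    n = 8
    rng = random.Random(0)
    A = [rng.getrandbits(n) for _ in range(n)]     # random n x n matrix over F_2, one row per int

    def h(x):
        """h_A(x) = A x over F_2: output bit i is the parity of row_i AND x."""
        return sum((bin(row & x).count("1") & 1) << i for i, row in enumerate(A))

    x, y = 0b10110011, 0b01010101
    print(h(x), h(y), h(x ^ y) == h(x) ^ h(y))     # the map is F_2-linear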

We were led to this result through the study of methods to reduce the computational complexity of cryptographic primitives and protocols. Standard constructions of cryptographic primitives involve a large multiplicative computational overhead that grows with the desired level of security. We give evidence for the possibility of implementing cryptographic primitives (such as encryption, authentication, signatures, or secure two-party computation) while incurring only a *constant* computational overhead compared to insecure implementations of the same tasks.

Based on joint work with Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai from STOC 2008.

2011/03/01 Klim Efremenko [+]  Local List-Decoding with a Constant Number of Queries

Recently, [E] constructed locally decodable codes of sub-exponential length, and showed that these codes can handle up to a one-third fraction of errors. In this talk we show that the same codes can be locally unique-decoded from error rate up to one half and locally list-decoded from error rate 1-α for any α>0, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential-length codes that can be locally list-decoded with a constant number of queries. We will also show a generic, simple way to amplify the error tolerance of any locally decodable code.

2011/03/08 Bo'az Klartag [+]  The communication complexity of the Vector in Subspace problem

Suppose Alice has a unit vector v in R^n, Bob has a subspace E of R^n of half the dimension, and they are guaranteed that either v lies in E, or else v lies in the orthogonal complement of E. Both Alice and Bob have infinite computing power, but the communication channel between them is unfortunately very slow and expensive. How many bits have to be exchanged in order to decide whether v lies in E or not? Raz gave an O(sqrt n) protocol which, for each possible instance of the problem, outputs the correct answer with probability 99%. In this lecture we will discuss a lower bound of Omega(n^{1/3}) on the communication complexity. Joint work with O. Regev.

2011/03/15 Michael Viderman [+]  Towards lower bounds on locally testable codes

Locally testable codes (LTCs) are error-correcting codes for which membership, in the code, of a given word can be tested by reading a constant number of symbols.

The main open problem in the area of locally testable codes (LTCs) is whether there exists an asymptotically good family of LTCs and to resolve this question it suffices to consider the case of query complexity 3. We argue that to refute the existence of such an asymptotically good family one should prove that the number of dual codewords of weight at most 3 is super-linear in the blocklength of the 3-query LTC.

Then we show that every LTC must have linearly many redundant low-weight codewords in its dual. We actually prove the stronger claim that the actual test itself must use a linear number of redundant dual codewords (beyond the minimum number of basis elements required to characterize the code).

Based on two recent works: the first is joint with Eli Ben-Sasson, Venkatesan Guruswami, Tali Kaufman and Madhu Sudan and the second is joint with Eli Ben-Sasson.

2011/03/22 Scott Aaronson [+]  A Linear-Optical Proof that the Permanent is #P-Complete

I'll show how the model of "quantum computing with linear optics" -- and in particular, a universality result due to Knill, Laflamme, and Milburn -- can be used to give a completely new proof of Valiant's famous theorem that the permanent is #P-complete. This proof is arguably simpler than the standard one, requiring far fewer "unexplained gadgets." It thereby provides another illustration of quantum computing as a powerful "higher-level programming language" for reasoning about classical counting classes. (An earlier example was the quantum-based proof that PP is closed under intersection.)

If time permits, I'll also talk about linear-optical quantum computing more generally, and especially some open questions arising from my recent work on the subject with Alex Arkhipov.

2011/03/29 Benny Applebaum [+]  Pseudorandom Generators with Long Stretch and Low Locality from Random Local One-Way Functions

We continue the study of pseudorandom generators (PRG) in NC0 that stretch n bits of input into m bits of output. While it is known that such generators are likely to exist for the case of small sub-linear stretch m = n + n^{1-\eps}, it remains unclear whether achieving larger stretch such as m = 2n or even m = n + n^2 is possible. The existence of such PRGs, which was posed as an open question in previous works (e.g., [Cryan and Miltersen, MFCS 2001], [Mossel, Shpilka and Trevisan, FOCS 2003], and [Applebaum, Ishai and Kushilevitz, FOCS 2004]), has recently gained an additional motivation due to several interesting applications.

We obtain NC0 constructions of linear-stretch PRGs and polynomial-stretch weak-PRGs, where in the latter case the distinguishing advantage is 1/poly(n) rather than negligible. These constructions are based on the one-wayness of ``random'' NC0 functions -- a variant of an assumption made by Goldreich (ECCC 2000). We also show that our constructions give rise to strong inapproximability results for the densest-subgraph problem in d-uniform hypergraphs for constant d. This allows us to improve the previous bounds of Feige (STOC 2002) and Khot (FOCS 2004) from a constant inapproximability factor to n^{\eps}-inapproximability, at the expense of relying on stronger assumptions.

2011/04/5 Adi Akavia [+]   Explicit Constructions Fooling Bohr Sets and Arithmetic Progressions

In this talk I present novel results showing there are explicit constructions (i.e., constructions that are deterministic and of time polynomial in the output size) of the following pseudo-random sets:

  1. Sets A of size \poly( (ln(N))^d, 1/epsilon ) with epsilon-discrepancy in all rank d Bohr sets in Z_N.

    This extends the Razborov-Szemeredi-Wigderson result on epsilon-discrepancy in arithmetic progressions to Bohr sets, which are their higher rank analogue arising in the context of additive combinatorics.

  2. Sets A_P of size \poly( ln(N), 1/epsilon ) that epsilon-approximate the uniform distribution over a given arithmetic progression P in Z_N, in the sense that $|E_{x \in A_P} \chi(x) - E_{x \in P}\chi(x)| < \epsilon$ for all linear tests $\chi$ in Z_N.

    This extends results on small biased sets, which are sets approximating the uniform distribution over the entire domain, to sets approximating uniform distributions over (arbitrary size) arithmetic progressions.
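To make the approximation condition in item 2 concrete: the linear tests $\chi$ in Z_N are the characters $\chi_a(x) = e^{2\pi i a x / N}$, and the worst-case bias can simply be brute-forced numerically. Below is a hedged toy check in Python; the progression P and the candidate set A_P are arbitrary illustrative choices, not the construction from the talk.

import numpy as np

def max_bias(A, P, N):
    """max over a in Z_N of | E_{x in A} chi_a(x) - E_{x in P} chi_a(x) |,
    where chi_a(x) = exp(2*pi*i*a*x/N) ranges over all linear tests."""
    a = np.arange(N).reshape(-1, 1)                  # one row per test chi_a
    mean_A = np.exp(2j * np.pi * a * np.asarray(A) / N).mean(axis=1)
    mean_P = np.exp(2j * np.pi * a * np.asarray(P) / N).mean(axis=1)
    return np.abs(mean_A - mean_P).max()

if __name__ == "__main__":
    N = 1009
    P = list(range(5, N, 17))                        # a toy arithmetic progression in Z_N
    rng = np.random.default_rng(0)
    A_P = rng.choice(P, size=40, replace=False)      # a toy candidate approximating set
    print("worst-case bias over all linear tests:", max_bias(A_P, P, N))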

2011/05/03Irit Dinur [+]  On the structure of NP-hard 3-SAT instances, and an analogous question for locally testable codes

The PCP theorem says that it is NP-hard to approximate the fraction of satisfiable clauses in a 3SAT instance. What is the structure of such 3SAT instances? Here by the "structure" of a 3SAT instance we refer to the constraint (hyper-)graph whose vertices are variables and whose hyper-edges are the clauses.
There are many interesting questions about the structure of hard instances of constraint satisfaction problems like 3SAT. For example, is it true that the constraint graph of a hard 3SAT instance must always be an expander?

We study this question in an analogous setting of so-called locally testable codes (LTCs). LTCs are error-correcting codes that come with a constraint (hyper-)graph. We prove a structural theorem on LTCs: every LTC can be decomposed into a constant number of "basic" codes whose constraint graphs are expanders.

Based on joint work with Tali Kaufman.

2011/05/04Kevin Matulef [+]  Property Testing Lower Bounds via Communication Complexity

We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. In this talk we will illustrate our technique by proving lower bounds for testing juntas, k-linearity, monotonicity, and more.

Based on joint work with Eric Blais and Joshua Brody.

2011/05/17Will Perkins [+]  The Bohman-Frieze Process

The phase transition of the Erdos-Renyi random graph process, during which a giant component emerges, is understood in great detail. But how robust is it? We describe the phase transition of the Bohman-Frieze process, a simple modification of the Erdos-Renyi process that adds dependence between the edges and is biased in favor of joining isolated vertices. We show that the phase transitions of the two processes are qualitatively similar, sharing several critical exponents related to the size and structure of the connected components in the barely sub- and super-critical phases. The results extend to a wider class of processes, called 'bounded-size Achlioptas processes'.

2011/05/24Ishay Haviv [+]  Linear Index Coding via Semidefinite Programming

In the index coding problem, introduced by Birk and Kol (INFOCOM, 1998), the goal is to broadcast an n bit word to n receivers, where the receivers have side information represented by a graph G and each of them is interested in one bit of the word. The objective is to minimize the length of the transmitted message. For linear index coding, the minimum possible length is known to be equal to a graph parameter called minrank (Bar-Yossef et al., FOCS, 2006).

We show a polynomial time algorithm that, given an n vertex graph G with minrank k, finds a linear index code for G of length O(n^{f(k)}), where f(k) depends only on k. For example, for k=3 we obtain f(3) ~ 0.2574. Our algorithm employs a semidefinite program (SDP) introduced by Karger, Motwani and Sudan (J. ACM, 1998) for graph coloring. A crucial component of our analysis is bounding the objective value of the SDP in terms of the minrank.
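For intuition about the minrank parameter: over a field (GF(2) is used below purely for illustration), it is the minimum rank of a matrix with ones on the diagonal and zeros in positions corresponding to non-edges. The following hedged brute-force Python sketch (exponential time, toy graphs only; not the paper's algorithm) computes it for the 5-cycle, where the answer should be 3.

from itertools import product

def rank_gf2(rows):
    """Rank over GF(2) of row vectors encoded as integer bitmasks."""
    basis = {}                       # pivot bit -> basis vector with that leading bit
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb not in basis:
                basis[hb] = r
                break
            r ^= basis[hb]
    return len(basis)

def minrank_gf2(n, edges):
    """Minimum rank of a GF(2) matrix with ones on the diagonal and zeros on non-edges,
    by brute force over the free (edge) positions."""
    edge_set = {frozenset(e) for e in edges}
    free = [(i, j) for i in range(n) for j in range(n)
            if i != j and frozenset((i, j)) in edge_set]
    best = n
    for bits in product([0, 1], repeat=len(free)):
        rows = [1 << i for i in range(n)]            # start from the identity pattern
        for (i, j), b in zip(free, bits):
            if b:
                rows[i] |= 1 << j
        best = min(best, rank_gf2(rows))
    return best

if __name__ == "__main__":
    C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]    # the 5-cycle
    print("minrank of C5 over GF(2):", minrank_gf2(5, C5))   # expected: 3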

As a side effect of our analysis, we show an exact expression for the maximum possible value of the Lov\'{a}sz theta-function of a graph with minrank k. This compares two classical upper bounds on the Shannon capacity of a graph.

Joint work with Eden Chlamtac, Tel Aviv University

2011/05/31Or Meir [+]  IP = PSPACE vs. error correcting codes

The IP theorem, which asserts that IP = PSPACE (Lund et al., and Shamir, in J. ACM 39(4)), is one of the major achievements of complexity theory. The known proofs of the theorem are based on the arithmetization technique, which transforms a quantified Boolean formula into a related polynomial. The intuition that underlies the use of polynomials is commonly explained by the fact that polynomials constitute good error correcting codes. However, the known proofs seem tailored to the use of polynomials, and do not generalize to arbitrary error correcting codes.

In this work, we show that the IP theorem can be proved by using general error correcting codes. We believe that this establishes a rigorous basis for the aforementioned intuition, and sheds further light on the IP theorem.


Schedule for 2009/2010

Date Speaker Title
2009/10/20Ronen Eldan [+]A Polynomial Number of Random Points does not Determine the Volume of a Convex Body
We prove that there does not exist an algorithm that, with non-negligible probability, approximates up to a constant the volume of a convex body, using a polynomially long sequence of random points uniformly distributed in the body.
2009/10/27Or Sattath [+]A Quantum Lovasz Local Lemma
The Lovasz Local Lemma is a powerful tool in probability theory to show the existence of combinatorial objects meeting a prescribed collection of ``weakly dependent'' criteria. We present a quantum generalization of this result, which replaces probabilities with a weaker notion of relative dimension and allows one to lower-bound the intersection size of vector spaces under certain independence conditions. Our result immediately applies to the Bounded Occurrence K-QSAT problem: we show, for instance, that if each qubit appears in at most 2^k/(e \cdot k) projectors, there is a joint satisfiable state.
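(For orientation, a hedged sketch of the classical calculation behind this threshold: the symmetric Lovasz Local Lemma says that if every bad event has probability at most $p$ and depends on at most $d$ others, and $e\,p\,(d+1) \le 1$, then all bad events can be avoided simultaneously. For $k$-SAT with each variable in at most $m = 2^k/(e k)$ clauses, each clause intersects at most $d \le k(m-1)$ others, so $d+1 \le km = 2^k/e$; with $p = 2^{-k}$ this gives $e\,p\,(d+1) \le e \cdot 2^{-k} \cdot 2^k/e = 1$. The quantum statement above replaces probabilities by relative dimension.)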
We then apply our results to the recently studied model of RANDOM-K-QSAT with the density (ratio of projectors to qubits) as a control parameter. Recent works have shown that the satisfiable region extends up to a density of $1$ in the large k limit. Using a hybrid approach building on work by Laumann et al., we greatly extend the known satisfiable region for RANDOM-K-QSAT to a density of Omega(2^k/k^2). Since our tool allows us to show the existence of joint satisfying states without the need to construct them, we are able to penetrate into regions where the satisfying states are conjectured to be entangled; the need for explicit constructions had limited previous approaches to product states.
The necessary details from quantum computation and information will be given.
This is joint work with Andris Ambainis (University of Latvia) and Julia Kempe (Tel-Aviv University)
2009/11/03 Ricky Rosen [+]Strong Parallel Repetition Theorem for Free Projection Games
The parallel repetition theorem states that for any two-prover one-round game with value at most $1-\eps$ (for $\eps<1/2$), the value of the game repeated $n$ times in parallel is at most $(1-\eps^3)^{\Omega(n/\log s)}$, where $s$ is the size of the answer set \cite{RazParallelRepetitionThm}, \cite{Holenstein}.
For {\em Projection Games} the bound on the value of the game repeated $n$ times in parallel was improved to $(1-\eps^2)^{\Omega(n)}$ \cite{Rao08} and was shown to be tight \cite{RazCounterexample/focs/Raz08}.
In this paper we show that if the questions are taken according to a product distribution then the value of the repeated game is at most $(1-\eps^2)^{\Omega(n/\log s)}$ and if in addition the game is a {\em Projection Game} we obtain a {\em strong parallel repetition} theorem, i.e., a bound of $(1-\eps)^{\Omega(n)}$.
2009/11/10Vadim Lyubashevsky [+]Public-Key Cryptographic Primitives Provably as Secure as Subset Sum
We propose a semantically-secure public-key encryption scheme that is directly based on the hardness of the subset sum problem. The subset sum assumption required for the security of our scheme is weaker than that of existing subset-sum based encryption schemes, namely the lattice-based schemes of Ajtai and Dwork (STOC '97), Regev (STOC '03, STOC '05), and Peikert (STOC '09). Additionally, our proof of security is simpler and rather direct. We also present a natural variant of our scheme that is secure against key-leakage attacks, as well as an oblivious transfer protocol that is secure against semi-honest adversaries. This is joint work with Adriana Palacio and Gil Segev.
2009/11/17Elad Haramati [+]On the Structure of Cubic and Quartic Polynomials
In our work we study the structure of polynomials of degree three and four that have high bias or high Gowers norm. In particular we obtain the following results. 1. We give a canonical representation for degree three or four polynomials that have a significant bias (i.e. they are not equidistributed). This result generalizes the corresponding results from the theory of quadratic forms. It also significantly improves the results of Green and Tao and Kaufman and Lovett for such polynomials.
2. For the case of degree four polynomials with high Gowers norm we show that (a subspace of co-dimension O(1) of) F^n can be partitioned to subspaces of dimension Omega(n) such that on each of the subspaces the polynomial is equal to some degree three polynomial. It was shown by Green and Tao and by Lovett, Meshulam and Samorodnitsky that a quartic polynomial with a high Gowers norm is not necessarily correlated with any cubic polynomial. Our result shows that a slightly weaker statement does hold. The proof is based on finding a structure in the space of partial derivatives of the underlying polynomial.
2009/11/24Prahladh Harsha [+]Composition of low-error 2-query PCPs using decodable PCPs
Proof composition is an essential ingredient in all constructions of probabilistically checkable proofs (PCPs). In this talk, I'll present a new, generic composition method for low-error two-query PCPs.
Earlier composition methods were either inapplicable in the low-error regime or non-modular (i.e., very much tailored to the specific PCPs that were being composed), if not both. Furthermore, until recently, composition in the low-error regime suffered from incurring an extra consistency query, resulting in PCPs that are not two-query and hence much less useful for hardness-of-approximation reductions.
I'll then illustrate how the new composition theorem can be used to give a considerably simpler and modular proof of the recent breakthrough result of Moshkovitz and Raz [FOCS 2008]: construction of 2-query low-error PCPs of nearly linear length.
One of the critical components in the new composition is a new variant of PCP called "decodable PCP". A decodable PCP, besides being locally checkable as a standard PCP, is also locally decodable, in the sense, that the original NP witness can be locally decoded from it.
[Joint work with Irit Dinur]
2009/12/01Avinatan Hassidim [+]Quantum Money
2009/12/08Or Meir [+]Combinatorial PCPs with efficient verifier
2009/12/15Guy Kindler [+]A Quantitative version of the Gibbard-Satterthwaite Theorem
Consider an election between q alternatives, where each of the voters ranks the different alternatives, and the winner is determined according to some predefined function of this voting profile. Such a social choice function is called manipulable if a situation might occur where a voter who knows the rankings given by other voters can change her ranking in a way that does not reflect her true preference, but which leads to an outcome that is more desirable to her. Gibbard and Satterthwaite proved that any social choice function where more than two alternatives can be selected is manipulable, unless it is a dictatorship (where the outcome of the election only depends on the choices of one voter). But how often must one encounter preference profiles for which some voters have an incentive not to vote truthfully? In the case where the social choice function is neutral, namely when it is invariant under changing the names of the alternatives, we prove a lower bound on the fraction of manipulable preference profiles which is inverse polynomial in the number of voters and alternatives. This means that by trying out different rankings randomly, voters have a non-negligible chance of finding a beneficial manipulative ranking.
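As a hedged, concrete illustration of the notion of a manipulable profile (using a toy, non-neutral rule -- plurality with lexicographic tie-breaking -- and unrelated to the paper's techniques), one can simply enumerate all profiles for 3 voters and 3 alternatives in Python:

from itertools import permutations, product

ALTS = "abc"
RANKINGS = list(permutations(ALTS))        # a ranking lists the alternatives best-first

def plurality(profile):
    """Toy rule: most first-place votes, ties broken toward the alphabetically first."""
    counts = {x: 0 for x in ALTS}
    for ranking in profile:
        counts[ranking[0]] += 1
    best = max(counts.values())
    return min(x for x in ALTS if counts[x] == best)

def prefers(ranking, x, y):
    """True if this ranking places x strictly above y."""
    return ranking.index(x) < ranking.index(y)

def manipulable(profile):
    truthful_winner = plurality(profile)
    for i, true_ranking in enumerate(profile):
        for lie in RANKINGS:
            outcome = plurality(profile[:i] + (lie,) + profile[i + 1:])
            if prefers(true_ranking, outcome, truthful_winner):
                return True
    return False

if __name__ == "__main__":
    profiles = list(product(RANKINGS, repeat=3))       # all profiles for 3 voters
    frac = sum(manipulable(p) for p in profiles) / len(profiles)
    print("manipulable fraction of the", len(profiles), "profiles:", round(frac, 3))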
Joint work with Marcus Isaksson and Elchanan Mossel.
2009/12/22Amir Yehudayof [+]Sum-of-squares problem in complexity theory
The talk will have three parts. In the first part, we will give a brief introduction to algebraic complexity theory, and to the permanent polynomial. In the second part, we will address the computational complexity of permanent, which is a major open problem in the theory of computing. We will, in fact, consider a restricted version of this question, namely, the computational complexity of permanent in a non-commutative world (in which the variables can be thought of as representing matrices). We will relate the non-commutative complexity of permanent to the so-called sum-of-squares problem; a well-studied mathematical problem that is interesting in its own right. In the third part, we will continue to study a special case of the sum-of-squares problem - the integer sum-of-squares problem - for which we will sketch the first asymptotic super-linear lower bound. Joint work with Pavel Hrubes and Avi Wigderson.
2009/12/29Ariel Gabizon [+]Simple Affine Extractors using Dimension Expansion
Let F be the finite field of q elements. An (n,k)-affine extractor is a coloring of F^n with 2 colors, such that every affine subspace of dimension at least k in F^n is colored in a balanced way. Roughly speaking, the problem of explicitly constructing affine extractors gets harder as the field size q gets smaller and easier as k gets larger.
In this work, we construct affine extractors whenever q = \Omega( (n/k)^2), provided that F has characteristic \Omega(n/k).
For a wide range of parameters this is a new result, but even in ranges of parameters where we do not improve or match known results, we believe our construction and proof have the advantage of being very simple.
Our proof uses a result of Heur, Leung, and Xiang about the dimension of products of subspaces. I will present this result of [HLX] in the hope that it could be useful elsewhere in combinatorics and theoretical computer science.
Joint work with Matt DeVos.
2010/01/05Shachar Lovett [+]Polynomial Threshold Functions: Structure, Approximation and Pseudorandomness
We study the computational power of polynomial threshold functions (PTFs), that is, threshold functions of real polynomials over the boolean cube. We provide new results bounding the computational power of this model.
Our first result shows that low-degree PTFs cannot approximate any function with many influential variables. We provide a couple of examples where this technique yields tight approximation bounds.
Our second result relates to constructing pseudorandom generators fooling low-degree PTFs, where we extend previous known cases where k-wise independence fools PTFs.
Both results are based on a new tool: a regularity theorem for PTFs. This tool has also been developed in several other parallel works, where it has been useful in other domains as well.
Based on joint work with Ido Ben-Eliezer and Ariel Yadin.
2010/01/12Lee-Ad Gottlieb [+]A nonlinear approach to dimension reduction
The celebrated Johnson-Lindenstrauss lemma says that any n points in Euclidean space can be represented in O(log n) dimensions with only a minor distortion of pairwise distances. It has been conjectured that a much-improved dimension reduction representation is achievable for many interesting data sets, by bounding the target dimension in terms of the intrinsic dimension of the data (for example, by replacing the log n term with the doubling dimension). This question appears to be quite challenging, requiring new (nonlinear) embedding techniques.
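For concreteness, here is a hedged numpy sketch of the classical randomized Johnson-Lindenstrauss guarantee mentioned above: a Gaussian projection to roughly O(log n / eps^2) dimensions (the constant 8 below is a loose illustrative choice). This is not the nonlinear, intrinsic-dimension embedding technique of the talk.

import numpy as np

def jl_project(points, eps, rng):
    """Random Gaussian projection of n points in R^d down to O(log n / eps^2) dimensions."""
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps ** 2))        # target dimension, loose constant
    G = rng.normal(size=(d, k)) / np.sqrt(k)
    return points @ G

def pairwise_dists(A):
    sq = (A ** 2).sum(axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * A @ A.T, 0)
    iu = np.triu_indices(len(A), k=1)
    return np.sqrt(D2[iu])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 1000))                   # 200 points in R^1000
    Y = jl_project(X, eps=0.5, rng=rng)
    ratios = pairwise_dists(Y) / pairwise_dists(X)
    print(Y.shape[1], "target dimensions; distance ratios in",
          "[%.3f, %.3f]" % (ratios.min(), ratios.max()))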
We make progress towards resolving this question by presenting two dimension reduction theorems with similar flavor to the conjecture. For some intended applications, these results can serve as an alternative to the conjectured embedding.
[Joint work with Robert Krauthgamer.]
2010/01/19Iftach Haitner [+]A Parallel Repetition Theorem for Any Cryptographic Protocol
Whether or not parallel repetition improves security is a fundamental question in the study of protocols. While parallel repetition improves the security of 3-message protocols and of public-coin protocols, Bellare, Impagliazzo and Naor (FOCS '97) gave an example of a protocol for which parallel repetition does not improve the security at all.
We show that by slightly modifying any protocol, in a way that preserves its other properties, we get a protocol for which parallel repetition does improve the security (to any degree).
In the second part of the talk (if time permits), I will present our recent results on basing cryptography on minimal hardness assumptions, where we give simpler and more efficient (in some cases tight) constructions of pseudorandom generators, statistically hiding commitments and universal one-way hash functions based on one-way functions.
2010/02/23Zohar Karnin [+]Explicit Dimension Reduction and its Applications
We construct a small set of explicit linear transformations mapping R^n to R^{O(\log n)}, such that the L_2 norm of any vector in R^n is distorted by at most 1 ± o(1) in at least a fraction 1 - o(1) of the transformations in the set.
Although the tradeoff between the distortion and the success probability is sub-optimal compared with probabilistic arguments, we are nevertheless able to apply our construction to a number of problems. In particular, we use it to construct an \epsilon-sample (or pseudo-random generator) for spherical digons in S^{n-1} (i.e., the n-dimensional unit sphere), for \epsilon = o(1). This construction leads to an oblivious derandomization of the Goemans-Williamson MAX CUT algorithm and similar approximation algorithms (i.e., we construct a small set of hyperplanes, such that for any instance we can choose one of them to generate a good solution). We also construct an \epsilon-sample for linear threshold functions on S^{n-1}, for \epsilon = o(1).
2010/03/02Ran Raz [+]Two "simple" problems that imply strong circuit lower bounds
I will present two families of mathematical problems that are very simple to describe, that seem very natural to study from geometric, algebraic or combinatorial points of view, that are seemingly unrelated to theoretical computer science, and whose solution would give exceptionally strong results in theoretical computer science; namely, super-polynomial lower bounds for the size of general arithmetic circuits and formulas.
More specifically, I will discuss 'elusive functions and lower bounds for arithmetic circuits' - an approach to prove exponential lower bounds for circuit size; and 'tensor-rank and lower bounds for arithmetic formulas' - an approach to prove super-polynomial lower bounds for formula size.
2010/03/09Oliver Friedmann [+]An Exponential Lower Bound for the Latest Deterministic Strategy Iteration Algorithms
We present a new exponential lower bound for the two most popular deterministic variants of the strategy improvement algorithm for solving parity, mean payoff, discounted payoff and simple stochastic games. The first variant improves every node in each step, maximizing the current valuation locally, whereas the second variant computes the optimal improvement in each step. We outline families of games on which both variants require exponentially many strategy iterations.
2010/03/16Dan Gutfreund [+]Derandomizing Arthur-Merlin Games and Approximate Counting Implies Exponential-Size Lower Bounds
We show that if Arthur-Merlin protocols can be derandomized, then there is a Boolean function computable in deterministic exponential-time with access to an NP oracle, that cannot be computed by Boolean circuits of exponential size. This in turn implies the existence of very strong pseudorandom generators. We also show that the same conclusion holds if the problem of approximate counting the number of accepting paths of a nondeterministic Turing machine up to multiplicative factors can be done in nondeterministic polynomial-time. In other words, showing nondeterministic fully polynomial-time approximation schemes for #P-complete problems requires proving exponential-size circuit lower bounds.
A key ingredient in our proof is a connection between computational learning theory and exponential-size lower bounds. We show that the existence of deterministic learning algorithms with certain properties implies exponential-size lower bounds, where the complexity of the hard function is related to the complexity of the learning algorithm.
Joint work with Akinori Kawachi.
2010/03/23Eden Chlamtac [+]New Approximation Algorithms for Densest k-Subgraph
We consider the Densest k-Subgraph (DkS) problem: Given a graph G and a parameter k, find a subgraph of G on k vertices with the maximum number of edges. This is a well-known optimization problem, which is a natural optimization version of the decision problem k-CLIQUE.
The previous best known approximation ratio by Feige, Kortsarz and Peleg in 2001 was O(n^{1/3-epsilon}) for epsilon estimated at around 1/60. We improve on this result, giving an O(n^{1/4+delta}) approximation for any delta>0.
In essence, our algorithm relies on the following principle: In a random graph with average degree n^alpha, a planted k-subgraph can be detected if it has average degree (strictly) greater than k^alpha. We show that this principle largely carries over to provide results in the same spirit for worst-case (non-random) instances.
Joint work with Aditya Bhaskara, Moses Charikar, Uriel Feige and Aravindan Vijayaraghavan.
2010/04/13Gillat Kol [+]Locally Testable Codes Analogues to the Unique Games Conjecture Do Not Exist
The Unique Games Conjecture (UGC) is an important open problem in the research of PCPs and hardness of approximation. The conjecture is a strengthening of the PCP Theorem, predicting the existence of a special type of PCP verifiers: 2-query verifiers that only make unique tests. Moreover, the UGC predicts that such PCP verifiers can have almost-perfect completeness and low-soundness. The computational complexity notion of a PCP is closely related to the combinatorial notion of a Locally Testable Code (LTC). LTCs are error-correcting codes with codeword testers that only make a constant number of queries to the tested word. All known PCP constructions use LTCs as building blocks. Furthermore, to obtain PCPs with certain properties, one uses LTCs with corresponding properties.
In light of the strong connection between PCPs and LTCs, one may conjecture the existence of LTCs with properties similar to the ones required by the UGC. In this work we show that such LTCs do not exist. That is, we consider 2-query LTCs with codeword testers that only make unique tests. Roughly speaking, we show that any such LTC with relative distance close to 1, almost-perfect completeness and low-soundness, is of constant size.
The result intends to point out the limitations of some of the current techniques for constructing PCPs. In particular, it suggests that in order to prove the UGC using corresponding codes with local testability features, one must consider a different definition (than the one commonly used) of a family of LTCs. Our result also shows limitations of unique tests (compared, for example, to projection tests).
2010/04/27Oded Regev [+]Tight Bound for the Gap Hamming Distance Problem
We consider the Gap Hamming Distance problem in communication complexity. Here, Alice receives an $n$-bit string $x$, and Bob receives an $n$-bit string $y$.
They are promised that the Hamming distance between x and y is either at least $n/2+\sqrt{n}$ or at most $n/2-\sqrt{n}$, and their goal is to decide which is the case.
We show a tight lower bound of \Omega(n) on the randomized communication complexity of the problem. The bound is based on a new geometric statement regarding correlations in Gaussian space, related to a result of C. Borell from 1985.
Partly based on a joint paper with Amit Chakrabarti.
NOTE: Thanks to special permission by the organizers (no bribes involved!), this talk will be about 75 minutes long, but the audience should feel free to leave at any time during the talk.
2010/05/04Amir Shpilka [+]On the Complexity of Boolean Functions in Different Characteristics
Every Boolean function on n variables can be expressed as a unique multivariate polynomial modulo p for every prime p. In this work, we study how the degree of a function in one characteristic affects its complexity in other characteristics. We establish the following general principle: functions with low degree modulo p must have high complexity in every other characteristic q.
More precisely, we show the following results about Boolean functions f:{0,1}^n to {0,1} which depend on all n variables, and distinct primes p,q: 1. If f has degree o(\log n) modulo p, then it must have degree n^{1-o(1)} modulo q. Thus a Boolean function has degree o(\log n) in only one characteristic. This result is essentially tight as there exist functions that have degree \log n in every characteristic.
2. If f has degree d = o(\log n) modulo p, it cannot be computed correctly on more than 1- p^{-O(d)} fraction of the hypercube by polynomials of degree n^{1/2 - \eps} modulo q.
As a corollary of the above results it follows that if f has degree o(\log n) modulo p, then it requires super-polynomial size \AC_0[q] circuits. This gives a lower bound for a broad and natural class of functions.
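A hedged toy illustration of the phenomenon (not the paper's argument): the unique multilinear representation modulo a prime can be read off by Mobius inversion over subsets, and already parity exhibits the dichotomy, having degree 1 modulo 2 but full degree modulo 3.

from itertools import combinations

def degree_mod_p(f, n, p):
    """Degree of the unique multilinear representation of f:{0,1}^n -> {0,1} modulo p.
    The coefficient of prod_{i in S} x_i is sum over T subseteq S of (-1)^{|S|-|T|} f(1_T)."""
    deg = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            coeff = 0
            for tsize in range(size + 1):
                for T in combinations(S, tsize):
                    point = [1 if i in T else 0 for i in range(n)]
                    coeff += (-1) ** (size - tsize) * f(point)
            if coeff % p != 0:
                deg = size
    return deg

if __name__ == "__main__":
    n = 4
    parity = lambda x: sum(x) % 2
    print("parity on", n, "bits: degree", degree_mod_p(parity, n, 2), "mod 2,",
          "degree", degree_mod_p(parity, n, 3), "mod 3")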
This is a joint work with Shachar Lovett (Weizmann) and Parikshit Gopalan (MSR).
2010/05/11Jelani Nelson [+]Applications of FT-mollification
In this talk I will describe a general method, "FT-mollification", for approximating an arbitrary bounded multivariate function by a smooth approximation while maintaining quantitatively good control on high-order partial derivatives. I will then describe a few of its known applications, including: (1) An optimal space upper bound for estimating moments in the data stream model.
(2) A simple proof of a classical result in approximation theory, Jackson's inequality, and one of its extensions to higher dimensions by Newman and Shapiro (1963).
(3) An optimal derandomization of the Berry-Esseen theorem by bounded independence.
(4) Proofs that bounded independence fools degree-2 threshold functions, and intersections of halfspaces.
(5) An improved bound for agnostically learning halfspaces.
Based on joint works with Ilias Diakonikolas, Daniel Kane, and David Woodruff
2010/05/25Ilya Volkovich [+]Polynomial-Time Deterministic Identity Testing of Depth-4 Multilinear Circuits with Bounded Top Fan-in
Polynomial Identity Testing (PIT) is one of the central problems in algebraic complexity theory and algorithm design: Given an arithmetic circuit $C$ over a field $\F$ with input variables $x_1,x_2,\cdots,x_n$, we need to determine whether $C$ computes the identically zero polynomial.
In this paper we study the PIT problem for multilinear depth-$4$ circuits of fan-in $k$ at the top $+$ gate.
This model is known as multilinear $\Spsp(k)$ circuits.
We give the first polynomial-time deterministic identity testing algorithm for such circuits.
The running time of our algorithm is $\poly(n) \cdot s^{O(k^2)}$, where $n$ is the number of variables, $s$ is the size of the circuit and $k$ is the fan-in of the top gate.
Note that for polynomially large $s$ and constant $k$ the running time of the algorithm is polynomial.
The importance of this model arises from \cite{AgrawalVinay08}, where it was shown that derandomizing polynomial identity testing for general $\Spsp$ circuits implies a derandomization of PIT in general arithmetic circuits.
We obtain our results by suggesting a natural generalization of the approach that was used to design most of the PIT algorithms for depth-$3$ $\Sps(k)$ circuits, to deal with depth-$4$ $\Spsp(k)$ circuits.
In fact, we prove a structural theorem on multilinear $\Spsp(k)$ circuits that compute the zero polynomial.
More specifically: let $C$ be a multilinear $\Spsp(k)$ circuit that computes the zero polynomial and satisfies some additional properties. We show that any polynomial computed by a *subcircuit* of $C$ can contain only a ``small'' number of (non-zero) monomials.
2010/06/01Gil Segev [+]Public-key Cryptosystems Resilient to Key Leakage
Most of the work in the analysis of cryptographic schemes has traditionally focused on abstract adversarial models that do not capture "side-channel attacks". Such attacks exploit various forms of unintended leakage of sensitive information, which is inherent to almost all physical implementations. Inspired by extremely devastating real-life attacks, in this talk I will describe a recent approach for modeling and combating side-channel attacks. I will focus on a framework for modeling a wide range of attacks that rely on partial leakage of secret keys. In particular, I will present a new, simple, and efficient construction of a public-key encryption scheme that is resilient to leakage of almost its whole secret key, as well as a generic method for constructing leakage-resilient encryption schemes that can be based on a variety of number-theoretic assumptions. Based on joint work with Moni Naor.
2010/06/08Shachar Lovett [+]A lower bound for dynamic approximate membership data structures

Schedule for 2008/2009

Date Speaker Title
2008/11/04Vadim Lyubashevsky[+]Towards Practical Lattice-Based Cryptography
Recently, there has been a lot of theoretical work directed towards constructing cryptographic protocols based on the hardness of lattice problems. Because the security of lattice-based schemes can be based on the hardness of worst-case problems and because it is conjectured that these protocols are immune against quantum attacks, lattice-based cryptography holds great appeal. The main disadvantage is that these schemes generally involve operations on, and storage of, large n x n matrices. This results in the protocols being rather inefficient and unsuitable for practical use.
In this talk, I will describe how the inherent inefficiency of lattice constructions can be overcome by considering algebraic lattices with additional structure - the lattices will correspond to ideals in polynomial rings (we call them ideal lattices). I will show how to construct cryptographic primitives such as collision-resistant hash functions and signature schemes whose security is based on the hardness of worst-case problems for ideal lattices. The hash functions that we build have almost linear running time, and turn out to be rather efficient in practice. The signature scheme also has running time that is almost linear (up to poly-logarithmic factors), and so it is more efficient (at least asymptotically) than signatures with security based on factoring and discrete log.
The talk will not be very technical, and should be pretty self-contained. Since this is a relatively new field, I will also be presenting a lot of open problems.
2008/11/11Oded Regev[+]Unique Games with Entangled Provers are Easy
We consider one-round games between a classical verifier and two provers who share entanglement. We show that when the constraints enforced by the verifier are `unique' constraints (i.e., permutations), the value of the game can be well approximated by a semidefinite program. Essentially the only algorithm known previously was for the special case of binary answers, as follows from the work of Tsirelson in 1980. Among other things, our result implies that the variant of the unique games conjecture where we allow the provers to share entanglement is false. Our proof is based on a novel `quantum rounding technique', showing how to take a solution to an SDP and transform it to a strategy for entangled provers.
NOTE: This talk requires absolutely no prior knowledge of quantum computation.
Joint work with Julia Kempe and Ben Toner.
2008/11/25Guy Kindler[+]Can cubic tiles be sphere-like?
The unit cube tiles R^d by Z^d, in the sense that its translations by vectors from Z^d cover R^d. It is natural to ask what is the minimal surface area of a body that tiles R^d by Z^d. The volume of any such body should clearly be at least 1, and therefore its surface area must be at least that of a unit volume ball, which is of order sqrt(d). The surface area of the cube, however, is of order d, and no better tiling was known. In this work we use a random construction to show that the optimal surface area is indeed of order sqrt(d), namely there exist bodies that tile R^d as a cube would, but have sphere-like surface areas.
Tiling problems were considered for well over a century, but this particular tiling problem was also recently considered in computer science because of its relations with the Unique Games conjecture and with the Parallel Repetition problem. Indeed, our current result follows from the same idea that was used recently by Raz in his counter example for the strong Parallel Repetition conjecture.
Joint work with Ryan O'Donnell, Anup Rao, and Avi Wigderson.
2008/12/02Ronen Shaltiel[+]Unconditional weak derandomization of weak algorithms: explicit versions of Yao's Lemma
A simple averaging argument shows the following: given a randomized algorithm $A$ and a function $f$ such that for every input $x$, $\Pr[A(x)=f(x)] \ge 1-\rho$ (where the probability is over the coin tosses of $A$), there exists a \emph{nonuniform} deterministic algorithm $B$ ``of roughly the same complexity'' such that $\Pr[B(x)=f(x)] \ge 1-\rho$ (here the probability is over a uniformly chosen input $x$). This implication is often referred to as ``the easy direction of Yao's lemma'' and can be thought of as ``weak derandomization'' in the sense that $B$ is deterministic but only succeeds on most inputs. The implication follows as there is a fixed value $r'$ for the random coins of $A$ such that ``hardwiring $r'$ into $A$'' produces a deterministic algorithm $B$.
However, this argument does not give a way to \emph{explicitly construct} $B$.
We consider the task of proving \emph{uniform versions} of the implication above. That is, how to \emph{explicitly construct} a deterministic algorithm $B$ when given a randomized algorithm $A$. We prove such derandomization results for several classes of randomized algorithms.
These include: randomized communication protocols, randomized decision trees (here we improve a previous result by Zimand), randomized streaming algorithms and randomized algorithms computed by polynomial size constant depth circuits.
Our proof uses an approach suggested by Goldreich and Wigderson and ``extracts randomness from the input''. We show that specialized (seedless) extractors can produce randomness that is in some sense not correlated with the input. Our analysis can be applied to \emph{any} class of randomized algorithms as long as one can explicitly construct the appropriate extractor. Some of our derandomization results follow by constructing a new notion of seedless extractors that we call ``extractors for recognizable distributions'' which may be of independent interest.
2008/12/09Robert Krauthgamer[+]On embedding edit distance into L_1
The edit distance (aka Levenshtein distance) between two strings is the number of character insertions, deletions and substitutions required to transform one string to the other. A very powerful paradigm for solving computational problems on the metric space induced by the edit distance is to embed this metric into L_1, using a low-distortion map (if possible).
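For concreteness, the edit distance just defined is computed by the standard textbook dynamic program; here is a hedged Python sketch (unrelated to the embedding results of the talk).

def edit_distance(s, t):
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete s[i-1]
                           dp[i][j - 1] + 1,        # insert t[j-1]
                           dp[i - 1][j - 1] + sub)  # substitute (or match)
    return dp[m][n]

if __name__ == "__main__":
    print(edit_distance("kitten", "sitting"))   # classic example: 3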
I will first present a low-distortion embedding of edit distance on permutations (aka the Ulam metric) [based on joint work (i) with Moses Charikar, and (ii) with Parikshit Gopalan and T.S. Jayram]. I will then discuss lower bounds on the distortion required to embed edit distance on 0-1 strings [joint work with Yuval Rabani] and, more briefly, on permutations [joint work with Alex Andoni].
2008/12/16Avinatan Hassidim[+]Derandomizing Algorithms on Product Distributions
Getting the deterministic complexity closer to the best known randomized complexity is an important goal in algorithms and communication protocols. In this work, we investigate the case where instead of one input, the algorithm/protocol is given multiple inputs sampled independently from an arbitrary unknown distribution. We show that in this case a strong and generic derandomization result can be obtained by a simple argument.
Our method relies on extracting randomness from ``same-source'' product distributions, which are distributions generated from multiple independent samples from the same source. The extraction process succeeds even for arbitrarily low min-entropy, and is based on the order of the values and not on the values themselves (this may be seen as a generalization of the classical method of von Neumann, extended by Elias, for extracting randomness from a biased coin).
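A hedged sketch of the von Neumann trick referred to in the parenthetical above: pairs of independent biased coin flips equal to 01 or 10 are equally likely, so each such pair yields one unbiased output bit, while 00 and 11 pairs are discarded.

import random

def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)           # P(01) = P(10), so this bit is unbiased
    return out

if __name__ == "__main__":
    biased = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
    out = von_neumann_extract(biased)
    print(len(out), "output bits, empirical mean", sum(out) / len(out))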
The tools developed in the paper are generic, and can be used elsewhere.
We present applications to streaming algorithms, and to implicit probe search \cite{FiatNaor93}. We also refine our method to handle product distributions, where the i'th sample comes from one of several arbitrary unknown distributions. This requires creating a new set of tools, which may also be of independent interest.
Joint work with Ariel Gabizon
2008/12/23Guy Rothblum [+]When and How Can Data be Efficiently Released with Privacy?
We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a ``sanitization'' of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. We focus on the case where the process is non-interactive; once the sanitization has been released the original data and the curator play no further role.
Blum et al. [STOC '08] showed a remarkable result: for any collection of counting predicate queries, the exponential mechanism of McSherry and Talwar [FOCS '07] can be used to (inefficiently) generate a synthetic data set that maintains approximately correct fractional counts for all of the queries, while ensuring a strong privacy guarantee, known as differential privacy.
In this work we investigate the computational complexity of such non-interactive privacy mechanisms, mapping the boundary between feasibility and infeasibility. We show: 1. For any polynomial-size query set C and polynomial-size data universe there is an efficient algorithm for generating a privacy-preserving synthetic data-set that maintains approximate fractional counts, even when the size of the original dataset is sub-polynomial in |C|.
2. When either the query set or the data universe are super-polynomial, assuming one-way functions exist, there is no efficient general method for releasing a privacy-preserving synthetic data-set that maintains approximate fractional counts. In particular, this is a case where the exponential mechanism cannot, in general, be implemented efficiently.
3. Turning to the potentially easier problem of privately generating an arbitrary data structure (not necessarily synthetic data) from which it is possible to approximate counts for a given concept, there is a tight connection between hardness of sanitization and the existence of traitor tracing schemes, methods of content distribution in which keys are assigned to subscribers in a way that given any useful ``pirate'' decoding box, constructed by a coalition of malicious subscribers, it is possible to identify at least one of them.
Joint work with Cynthia Dwork, Moni Naor, Omer Reingold and Salil Vadhan.
2008/12/30Amnon Ta-Shma[+]Short seed extractors against quantum storage
2009/01/06Adi Akavia[+]Very Local Self Correcting of Homomorphism and MPC Codes
Blum-Luby-Rubinfeld gave a local self correcting algorithm for homomorphism codes $Hom(G,H)$ for any finite abelian groups $G$ and $H$.
The BLR algorithm, given an entry location $s$ and oracle access to a corrupted codeword $w$ close to a codeword $C$, outputs the value of $C$ on entry $s$, while reading only a constant number of entries in $w$.
Although the number of *entries* read by the BLR algorithm is constant, the number of read *bits* is $O(\log|H|)$, as each alphabet symbol is represented by $\log|H|$ bits. This number of read bits may be quite large with respect to the information rate; in particular, for $Hom(Z_N,Z_N)$ the number of read bits is larger than the information rate.
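To make the BLR self-correcting step concrete, here is a hedged Python sketch for the special case Hom(Z_N, Z_N), whose codewords are the maps x -> a*x mod N: to recover C(s) one queries the corrupted word at a random r and at s-r, adds the answers mod N, and takes a majority vote over independent trials. Each query reads a whole Z_N symbol, i.e., about log N bits, which is exactly the cost this work reduces.

import random
from collections import Counter

def self_correct(w, N, s, trials=25):
    """BLR-style self-correction: w(r) + w(s-r) = C(s) whenever both entries are clean."""
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(N)
        votes[(w[r] + w[(s - r) % N]) % N] += 1
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    N, a = 10007, 4242
    w = [(a * x) % N for x in range(N)]            # the codeword x -> a*x mod N
    for pos in random.sample(range(N), N // 20):   # corrupt 5% of the entries
        w[pos] = random.randrange(N)
    s = 777
    print(self_correct(w, N, s), "vs true value", (a * s) % N)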
In this work we present a local self correcting algorithm for homomorphism codes $Hom(Z_N^k,Z_N)$ that reads $O(k)$ *bits* to output a single bit of the value of the closest codeword on the given entry (in a non standard binary representation); the algorithm is restricted to codewords $C$ and entries $s$ corresponding to invertible group elements, and extends to other codes $Hom(G,H)$. Our algorithm improves over prior works as long as $k = O(\log N)$. In particular, for $Hom(Z_N,Z_N)$ our algorithm reduces the number of read bits from $O(\log N)$ to $O(1)$.
In addition, we use our algorithm to obtain the first local self correcting algorithm for the MPC codes of Akavia-Goldwasser-Safra FOCS'03, and for their generalizations defined here.
In terms of techniques, we develop a new approach for local self correcting in which we first reduce the self correcting problem into a property testing problem concerning the location of significant Fourier transform coefficients (aka, SFT-testing), and then give an efficient algorithm solving SFT-testing with $O(k)$ read bits. The SFT-testing algorithm may be of independent interest.
2009/01/13Dana Moshkovitz[+]Two Query PCP with Sub-Constant Error
We show that the NP-Complete language 3Sat has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query.
Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size.
There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2.
As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following: 1) 3Sat cannot be efficiently approximated to within a factor of 7/8+o(1), unless P=NP.
This holds even under almost-linear reductions.
Previously, the best known NP-hardness factor was 7/8+epsilon for any constant epsilon>0, under polynomial reductions (Hastad).
2) 3Lin cannot be efficiently approximated to within a factor of 1/2+o(1), unless P=NP.
This holds even under almost-linear reductions.
Previously, the best known NP-hardness factor was 1/2+epsilon for any constant epsilon>0, under polynomial reductions (Hastad).
3) A PCP Theorem with amortized query complexity 1+o(1) and amortized free bit complexity o(1). Previously, the best known amortized query complexity and free bit complexity were 1+epsilon and epsilon, respectively, for any constant epsilon>0 (Samorodnitsky and Trevisan).
One of the new ideas that we use is a new technique for doing the composition step in the (classical) proof of the PCP Theorem, without increasing the number of queries to the proof. We formalize this as a composition of new objects that we call Locally Decode/Reject Codes (LDRC). The notion of LDRC was implicit in several previous works, and we make it explicit in this work. We believe that the formulation of LDRCs and their construction are of independent interest.
This is joint work with Ran Raz
2009/01/20Klim Efremenko[+]3-Query Locally Decodable Codes of Subexponential Length
Locally Decodable Codes (LDC) allow one to decode any particular symbol of the input message by making a constant number of queries to a codeword, even if a constant fraction of the codeword is damaged. In a recent work~\cite{Yekhanin08}, Yekhanin constructs a $3$-query LDC with sub-exponential length of size $\exp(\exp(O(\frac{\log n}{\log\log n})))$. However, this construction requires a conjecture that there are infinitely many Mersenne primes. In this paper we give an unconditional $3$-query LDC construction with a shorter codeword length of $\exp(\exp(O(\sqrt{\log n \log \log n })))$. We also give a $2^r$-query LDC with length of $\exp(\exp(O(\sqrt[r]{\log n \log \log^{r-1} n })))$. The main ingredient in our construction is the existence of super-polynomial size set-systems with restricted intersections by \cite{Grolmusz00}, which hold only over composite numbers.
2009/01/27Benny Applebaum[+]Public Key Cryptography from Different Assumptions
We construct a new public-key encryption scheme based on two assumptions:
- It is hard to distinguish between a random set of m sparse linear equations over n variables in GF(2) (e.g., O(n) equations, each involving 10 variables), and a random set of such equations that has a solution satisfying a 1-\mu fraction of them, for some small \mu.
- It is hard to distinguish between a random unbalanced bipartite graph of small degree (e.g., 10), and a similar graph with a randomly planted non-expanding set of vertices.
Most, if not all, previous constructions of public key encryption used hardness assumptions with significant algebraic structure. Our new assumptions, positing indistinguishability from uniform of certain natural distributions on instances of NP-complete problems, seem relatively unstructured and qualitatively different from previous ones.
We give some evidence for these assumptions by studying their resistance to certain natural algorithms, and relating them to variants of more widely studied assumptions such as the hardness of the ``search version'' of learning parity with noise and the planted clique problems.
Joint work with Boaz Barak and Avi Wigderson.
2009/03/03Elad Eban[+]Interactive Proofs For Quantum Computations
The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked [Vaz07]: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions?
To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [GMR85] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: "Any language in BQP has a QPIP, and moreover, a fault tolerant one" (providing a partial answer to a challenge posted in [Aar07]).
We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP, however, is not fault tolerant. Our second protocol uses polynomial codes QAS due to Ben-Or, Crepeau, Gottesman, Hassidim, and Smith [BOCG+ 06], combined with quantum fault tolerance and secure multiparty quantum computation techniques. A slight modification of our constructions makes the protocol "blind": the quantum computation and input remain unknown to the prover.
2009/03/17Andrew Wan[+]Quasi-Cryptography
We propose the study of quasi-cryptographic primitives and protocols.
These are relaxed versions of standard cryptographic primitives and protocols where the adversary may be given more resources than some of the honest parties.
The purpose of this study is to obtain a better understanding of some of the obstacles in basing cryptography on $\np$-hardness, as well as the relations between various cryptographic primitives.
We make a first step in this direction by investigating the simplest quasi-cryptographic primitive: quasi-one-way functions.
Extending an approach proposed by Gutfreund, Shaltiel, and Ta-Shma (CCC 2005), we construct an ``infinitely often'' version of this primitive, partially answering an open question asked by these authors.
On the negative side, we show that constructing a length-doubling quasi-pseudorandom generator from a quasi-one-way function (even a quasi-one-way permutation) is impossible via fully-black-box reductions.
This contrasts the standard cryptographic setting where such constructions are possible.
(Joint with Andrej Bogdanov and Kunal Talwar)
2009/03/24Or Sattath[+]The Pursuit For Uniqueness: Extending Valiant-Vazirani Theorem to the Probabilistic and Quantum Settings
In 1985, Valiant and Vazirani showed [VV85] that solving NP problems under the promise that "yes" instances have only one witness is powerful enough to solve the entire NP class (under randomized reductions).
We are interested in extending this result to the quantum setting. We prove extensions to the classes Merlin-Arthur (MA) and Quantum-Classical-Merlin-Arthur (QCMA) [AN02]. Our results have implications for the complexity of approximating the ground state energy of a quantum local Hamiltonian with a unique ground state and an inverse polynomial spectral gap. We show that the estimation, within polynomial accuracy, of the ground state energy of poly-gapped 1-D local Hamiltonians is QCMA-hard, under randomized reductions. This is in strong contrast to the case of constant-gapped 1-D Hamiltonians, which is in NP [Has07]. Moreover, it shows that unless QCMA can be reduced to NP by randomized reductions, there is no classical description of the ground state of every poly-gapped local Hamiltonian which allows the efficient calculation of expectation values.
Finally, we conjecture that an analogous result for the class Quantum Merlin-Arthur (QMA) is impossible. This is formulated by separating the two relevant classes, QMA and its "unique" version UQMA, using the definition of quantum oracles by Aaronson and Kuperberg [AK06].
2009/03/31Eden Chlamtac[+]Using SDP Hierarchies for Approximation Algorithms
Semidefinite programming (SDP) is a convex optimization tool which has become one of the main tools in approximation algorithms in recent years.
For a large class of combinatorial optimization problems, the following approach gives the best possible approximation (under certain complexity-theoretic assumptions): phrase the combinatorial optimization problem as an Integer Program (IP), take a natural SDP relaxation for the IP, find the relaxed (SDP) optimum, and round it back to an integer solution.
One possibility for improving this approach, is by taking tighter SDP relaxations (i.e. for which the SDP optimum is closer to the IP optimum).
A systematic way of producing tighter relaxations was proposed by Sherali and Adams, Lovasz and Schrijver, and more recently Lasserre, who give various Linear Programming (LP) and SDP hierarchies of relaxations, where within a given hierarchy, the relaxations become progressively tighter (converging to the original IP). Unfortunately, starting with the work of Arora et al. (2002), there has been a series of negative results, showing that for various problems, such relaxations do not become significantly tighter up to a very high level in the hierarchy, at which point we do not believe the relaxed optimum can be found in polynomial time.
In contrast, we give a positive result. Namely, we examine the Maximum Independent Set problem for $3$-uniform hypergraphs which contain linear-size independent sets. We show that if the size of the maximum independent set in an $n$ vertex $3$-uniform hypergraph is $\gamma n$, then one can find a non-trivial size independent set using $O(1/\gamma^2)$ levels of an SDP hierarchy. On the other hand, we give a simple lower bound of $1/\gamma$ for the number of levels required, thus demonstrating an infinite sequence of improved approximations as one uses progressively higher-level relaxations.
This is joint work with Gyanit Singh.
2009/04/21Enav Weinreb[+]On the Complexity of Communication Complexity
We consider the following question: given a two-argument Boolean function $f$, represented as an $N\times N$ binary matrix, how hard is it to determine the communication complexity of $f$? We address two aspects of this question. On the computational side, we prove that, under appropriate cryptographic assumptions (such as the intractability of factoring), the deterministic communication complexity of $f$ is hard to approximate to within some constant. Under stronger (yet arguably reasonable) assumptions, we obtain even stronger hardness results that match the best known approximation.
On the analytic side, we present a family of (two-argument) functions for which determining the deterministic communication complexity (or even obtaining non-trivial lower bounds on it) implies proving circuit lower bounds for some related functions. Such connections between circuit complexity and communication complexity were known before (Karchmer & Wigderson, 1988) only in the more involved context of relations (search problems) but not in the context of functions (decision problems). This result, in particular, may explain the difficulty of analyzing the communication complexity of certain functions such as the ``clique vs.
independent-set'' family of functions, introduced by Yannakakis (1988).
Joint work with Eyal Kushilevitz.
2009/05/05Iddo Tzameret[+]The proof complexity of polynomial identities
2009/05/12Eli Ben-Sasson[+]Affine Dispersers from Subspace Polynomials
An affine disperser over a field F for sources of dimension d is a function f: F^n -> F that is nonconstant (i.e., takes more than one value) on inputs coming from a d-dimensional affine subspace of F^n.
Affine dispersers have been considered in the context of deterministic extraction of randomness from structured sources of imperfect randomness, and in particular, generalize the notion of dispersers for bit-fixing sources. Previously, explicit constructions of affine dispersers were known for every d=\Omega(n), due to [Barak, Kindler, Shaltiel, Sudakov and Wigderson] and [Bourgain]. (The latter in fact gives stronger objects called affine extractors).
In this work we give the first explicit affine dispersers for sublinear dimension. Specifically, our dispersers work even when d> n^{4/5}.
The main novelty in our construction lies in the method of proof, which relies on elementary properties of subspace polynomials. In contrast, the previous works mentioned above relied on sum-product theorems for finite fields.
Joint work with Swastik Kopparty.
2009/05/19Ronald de Wolf[+]Error-correcting data structures
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of static data structures and error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable error-correcting codes (LDCs) for s-bit strings.
Unfortunately, all known LDC constructions with a constant number of probes have superpolynomial length, and this is conjectured to be unavoidable. This means that the model does not allow very efficient error-correcting data structures. In the last part of the talk (which is joint work with Victor Chen and Elena Grigorescu) we show how to relax the requirements in a way that enables data structures that are simultaneously close to optimal in time as well as space, and that still work (most of the time) with a constant fraction of errors.
2009/05/26Gil Segev[+]Chosen-Ciphertext Security via Correlated Products
2009/06/02Joshua Brody[+]A Multi-Round Communication Lower Bound for the Gap Hamming problem
The Gap-Hamming-Distance problem arose in the context of proving space lower bounds for a number of key problems in the data stream model. In this problem, Alice and Bob have to decide whether the Hamming distance between their input strings is large (i.e. at least n/2 + \sqrt{n}) or small (at most n/2 - \sqrt{n}). The gap between "large" and "small" instances is crucial for capturing the approximation allowed in a data stream problem.
Thus far, for randomized communication, an \Omega(n) lower bound was known only for one-round protocols. We prove an \Omega(n) lower bound for randomized protocols that use any constant number of rounds. As a consequence, we prove that, for example, \epsilon-approximating the number of distinct elements in a stream requires \Omega(\epsilon^{-2}) space, even with a constant number of passes over the stream. We obtain similar results for approximating frequency moments and for approximating the empirical entropy of a stream.
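For scale, the trivial protocol in which Alice simply sends her entire string costs n bits and one round; the lower bound says that even a constant number of rounds cannot improve on this by more than a constant factor. A minimal sketch of that trivial protocol (an illustration only, not from the paper):

import math

# Trivial one-round protocol: Alice sends all n of her bits, Bob computes the
# Hamming distance and answers under the gap promise.
def trivial_gap_hamming(x, y):
    n = len(x)
    message = x                                  # n bits of communication
    dist = sum(a != b for a, b in zip(message, y))
    if dist >= n / 2 + math.sqrt(n):
        return "large"
    if dist <= n / 2 - math.sqrt(n):
        return "small"
    return "outside the promise"

x = [0, 1, 1, 0, 1, 0, 1, 1, 0]
y = [1, 0, 0, 1, 0, 1, 0, 0, 1]
print(trivial_gap_hamming(x, y))                 # all 9 coordinates differ -> "large"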
Joint work with Amit Chakrabarti.
2009/06/09Ishay Haviv[+]Rounding Parallel Repetitions of Unique Games
We show a connection between the semi-definite relaxation of unique games and their behavior under parallel repetition. Specifically, denoting by val(G) the value of a two-prover unique game G, and by sdpval(G) the value of a natural semi-definite program to approximate val(G), we prove that for every large enough $\ell$, if sdpval(G) is at least $1-\delta$, then the value of the $\ell$-times parallel repeated version of G is at least $sdpval(G)^{s \ell}$ where $s=O(\log(k/\delta))$ for general unique games and $s=O(1)$ for XOR games (i.e., k=2). By a result of Feige and Lovasz (STOC '92), our bounds are tight up to a factor of $O(\log(1/\delta))$ in the exponent.
For games where there is a significant gap between the quantities val(G) and sdpval(G), our result implies that $val(G^{\ell})$ may be much larger than $val(G)^{\ell}$, giving a counterexample to the strong parallel repetition conjecture. In a recent breakthrough, Raz showed such an example using the max-cut game on odd cycles. Our results are based on a generalization of his techniques.
Joint work with Boaz Barak, Moritz Hardt, Anup Rao, Oded Regev and David Steurer.
2009/06/16Itai Arad[+]The Detectability Lemma and Quantum Gap Amplification
Gap amplification of probabilities by random walks on expander graphs is a central notion in the study of CSPs. In particular, it is at the core of Dinur's proof of the PCP theorem, of various derandomizations, and more. We prove a quantum version of gap amplification; to this end we apply the classical result in conjunction with purely quantum insights into the behavior of sums of non-commuting matrices. The main problem in the area, whether a quantum PCP theorem holds, is left open, and we will discuss its difficulties and importance.
Joint work with Dorit Aharonov, Umesh Vazirani and Zeph Landau.

Schedule for 2007/2008

Date Speaker Title
2008/06/03Amir Shpilka[+]Reconstruction of depth-3 arithmetic circuits
Depth-3 arithmetic circuits compute polynomials that can be represented as sums of products of linear functions. In spite of its simple structure, we are far from understanding this model. In this talk we will focus on the reconstruction problem for depth-3 circuits that have a constant number of multiplication gates. That is, we have access to some unknown polynomial that can be represented as a sum of a constant number of products of linear functions, and by asking for its values on (a small number of) inputs of our choice we would like to find a depth-3 circuit computing it. We will show how to reconstruct such depth-3 circuits in time exp(polylog(s)), where s is the size of a depth-3 circuit computing the unknown polynomial. Our techniques rely on a theorem on the structure of depth-3 circuits computing the zero polynomial and on the zero testing algorithm that it implies.
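A concrete picture of the model, as a minimal sketch of my own (the representation below is an assumption for illustration, not the paper's notation): a depth-3 circuit is a sum of multiplication gates, each multiplying a few linear forms, and reconstruction asks to recover such a representation from query access alone.

# A depth-3 (sum of products of linear functions) circuit:
#   C(x) = sum_i  prod_j ( <a_ij, x> + b_ij ).
# Each multiplication gate is a list of linear forms (coefficients, constant).
def eval_depth3(circuit, x):
    total = 0
    for gate in circuit:                               # one multiplication gate
        prod = 1
        for coeffs, const in gate:                     # one linear form
            prod *= sum(c * xi for c, xi in zip(coeffs, x)) + const
        total += prod
    return total

# Example with two multiplication gates over three variables:
#   (x0 + x1)(x1 - x2) + (2*x2 + 1)(x0 - 1)
circuit = [
    [((1, 1, 0), 0), ((0, 1, -1), 0)],
    [((0, 0, 2), 1), ((1, 0, 0), -1)],
]
print(eval_depth3(circuit, (1, 2, 3)))                 # 3*(-1) + 7*0 = -3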
2008/05/27Alex Samorodnitsky[+]Low degree tests at large distances
Consider the following problem: given a boolean function we need to decide whether it is "highly structured", which in our case means representable by a low-degree polynomial, or "pseudorandom", which we take to mean being far from all low-degree polynomials. Our decision has to be based on a very restricted randomized local view of the function.
This is a 'property testing' question with some existing and some potential connections to probabilistically checkable proofs and derandomization. We will discuss some recent developments regarding this question. In particular, it turns out that an attempt to measure pseudorandomness of a function analytically, via its 'Gowers norm', does not work so well.
Based in part on joint work with Shachar Lovett, Roy Meshulam, and Luca Trevisan
2008/05/20Wim van Dam[+]Classical and Quantum Algorithms for Exponential Congruences
We discuss classical and quantum algorithms for solvability testing and finding integer solutions x,y of equations of the form af^x + bg^y = c over finite fields GF(q). A quantum algorithm with time complexity q^(3/8) (log q)^O(1) is presented. While still superpolynomial in log q, this quantum algorithm is significantly faster than the best known classical algorithm, which has time complexity q^(9/8) (log q)^O(1). Thus it gives an example of a natural problem where quantum algorithms provide about a cubic speed-up over classical ones. This is joint work with Igor E. Shparlinski.
Cf. arXiv:0804.1109
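To make the problem statement concrete, here is a naive brute-force sketch for a small prime q, written only as an illustration and unrelated to the algorithms of the talk:

# Exhaustive search for x, y with  a*f^x + b*g^y = c  in GF(q), q a small prime.
# Nonzero elements of GF(q) have multiplicative order dividing q-1, so it
# suffices to try exponents in [0, q-2]; this takes about q^2 steps.
def solve_exponential_congruence(a, f, b, g, c, q):
    for x in range(q - 1):
        lhs = (a * pow(f, x, q)) % q
        for y in range(q - 1):
            if (lhs + b * pow(g, y, q)) % q == c % q:
                return x, y
    return None

q = 101
a, f, b, g = 3, 2, 5, 7
c = (a * pow(f, 17, q) + b * pow(g, 23, q)) % q        # built to be solvable
print(solve_exponential_congruence(a, f, b, g, c, q))  # some valid pair (x, y)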
2008/05/13Daniel Gottesman[+]The threshold for fault-tolerance
Quantum computers are likely to be much more vulnerable to noise than classical computers, so it is likely they will need to be encoded using quantum error-correcting codes and computations will be performed using fault-tolerant protocols. One critical result states that there is a threshold error rate, such that arbitrarily long reliable computations are possible if the error rate per gate and time step in the computer is below the threshold. I will give an overview of recent work on the threshold, with a slight emphasis on work by Aliferis, Preskill, and myself, showing a higher threshold under more general conditions than previous proofs.
2008/05/06Shay Solomon[+]Shallow, Low, and Light Trees, and Tight Lower Bounds for Euclidean Spanners
We show that for every $n$-point metric space $M$ there exists a spanning tree $T$ with unweighted diameter $O(\log n)$ and weight $\omega(T) = O(\log n) \cdot \omega(MST(M))$.
Moreover, there is a designated point $rt$ such that for every point $v$, $dist_T(rt,v) \le (1+\epsilon) \cdot dist_M(rt,v)$, for an arbitrarily small constant $\epsilon > 0$.
We extend this result, and provide a tradeoff between unweighted diameter and weight, and prove that this tradeoff is \emph{tight up to constant factors} in the entire range of parameters.
These results enable us to settle a long-standing open question in Computational Geometry.
In STOC'95 Arya et al. devised a construction of Euclidean Spanners with unweighted diameter $O(\log n)$ and weight $O(\log n) \cdot \omega(MST(M))$. Ten years later in SODA'05 Agarwal et al. showed that this result is tight up to a factor of $O(\log \log n)$. We close this gap and show that the result of Arya et al. is tight up to constant factors.
Joint work with Yefim Dinitz and Michael Elkin.
2008/04/08Troy Lee[+]Product theorems via semidefinite programming
The tendency of semidefinite programs to compose perfectly under product has been exploited many times in complexity theory: for example, by Lovasz to determine the Shannon capacity of the pentagon; to show a direct sum theorem for non-deterministic communication complexity and a direct product theorem for discrepancy; and in interactive proof systems to show parallel repetition theorems for restricted classes of games.
Despite all these examples of product theorems---some going back nearly thirty years---it was only recently that Mittal and Szegedy began to develop a general theory to explain when and why semidefinite programs behave perfectly under product. This theory captured many examples in the literature, but there were also some notable exceptions which it could not explain---namely, an early parallel repetition result of Feige and Lovasz, and a direct product theorem for the discrepancy method of communication complexity by Lee, Shraibman, and Spalek.
We extend the theory of Mittal and Szegedy to explain these cases as well. Indeed, to the best of our knowledge, our theory captures all examples of semidefinite product theorems in the literature.
Joint work with Rajat Mittal
2008/04/01Dana Ron[+]Approximating the Distance to Monotonicity in High Dimensions
In this work we study the problem of approximating the distance of a function over [n]^d to monotonicity. Namely, we are interested in randomized sublinear algorithms that approximate the Hamming distance between a given function and the closest monotone function. We allow both an additive error, parameterized by \delta, and a multiplicative error.
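For the one-dimensional case (with a totally ordered range) the quantity being approximated has a simple exact characterization: the distance of a sequence of values to monotonicity equals n minus the length of its longest non-decreasing subsequence. A small exact (not sublinear) reference implementation, included only as an illustration:

import bisect

# Exact Hamming distance of f's values on [n] to the nearest monotone
# (non-decreasing) sequence: n minus the longest non-decreasing subsequence,
# computed in O(n log n) by patience sorting.
def distance_to_monotonicity(values):
    tails = []                    # tails[k] = least possible last element of a
    for v in values:              # non-decreasing subsequence of length k+1
        i = bisect.bisect_right(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(values) - len(tails)

print(distance_to_monotonicity([0, 1, 0, 0, 1, 1]))   # 1: change the first 1 to 0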
Previous work on distance approximation to monotonicity focused on the one-dimensional case and the only explicit extension to higher dimensions was with a multiplicative approximation factor exponential in the dimension d. Building on Goldreich et al. (Combinatorica, 2000) and Dodis et al. (Proceedings of RANDOM, 1999), in which there are better implicit results for the case n=2, we describe a reduction from the case of functions over the d-dimensional hyper-cube to lower dimensional hyper-cubes.
Using this reduction and a known distance approximation algorithm for the one dimensional case, we obtain a distance approximation algorithm for functions over the d-dimensional hyper-cube, with any range R, which has a multiplicative approximation factor of O(d log|R|).
For the case of a binary range, we present constant approximation algorithms for low dimensions. Combining these algorithms with the reduction described above, we obtain a variety of distance approximation algorithms for Boolean functions over the d-dimensional hyper-cube, which suggest a trade-off between quality of estimation and efficiency of computation. In particular, the multiplicative error ranges between O(d) and O(1).
This is joint work with Shahar Fattal.
2008/03/25Ronen Shaltiel[+]Hardness amplification proofs require majority
A function $f$ is $\delta$-hard for some class of circuits if every circuit in the class errs on a $\delta$-fraction of the inputs. For various classes of constant depth small size circuits there are explicit functions that are $\delta$-hard for ``small'' values of $\delta$.
For stronger circuit classes there are many ``hardness amplification theorems'' that show how to transform any $\delta$-hard function $f$ into a ``harder'' function $g$ that is $(1/2-\epsilon)$-hard (for very small $\epsilon$). These theorems are proved using black-box reductions that transform a circuit that computes $g$ correctly on a $1/2+\epsilon$ fraction of the inputs into a circuit that computes $f$ correctly on a $1-\delta$-fraction of the inputs.
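A canonical example of such a theorem, included here only for orientation (it is a standard result, not one specific to this talk), is Yao's XOR lemma: if $f$ is $\delta$-hard for circuits of size $s$, then $g(x_1,\ldots,x_k)=f(x_1)\oplus\cdots\oplus f(x_k)$ is $(1/2-\epsilon)$-hard for circuits of somewhat smaller size, with $\epsilon$ roughly $(1-2\delta)^k$ plus an arbitrarily small additive term; its proof is exactly a black-box reduction of the kind described above.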
We show that such reductions cannot have low complexity: 1. Any such reduction ``can be used'' to compute the majority function on $1/\epsilon$ bits.
2. Any such reduction must make $\Omega(\log(1/\delta)/\epsilon^2)$ oracle queries.
By the Natural Proofs paradigm of Razborov and Rudich (coupled with a cryptographic construction of Naor and Reingold) ``current techniques'' cannot prove lower bounds against circuit classes that can compute the majority function. Loosely speaking this means that: ``current techniques can only amplify hardness that we don't have''.
The main difficulty in the proofs is that reductions that come up in proofs of hardness amplification theorems are allowed to be ``nonuniform'' and we develop techniques to bound such reductions.
The talk will be self contained and I will give a brief survey of the area.
Joint work with Emanuele Viola.
2008/03/18Irit Dinur[+]Decodability of Group Homomorphisms beyond the Johnson Bound
Given a pair of finite groups G and H, the set of homomorphisms from G to H forms an error-correcting code where codewords differ in at least 1/2 the coordinates. We show that for every pair of *abelian* groups G and H, the resulting code is (locally) list-decodable from a fraction of errors arbitrarily close to its distance. At the heart of this result is the following combinatorial result: for every pair of abelian groups G and H, if the maximum fraction of agreement between two distinct homomorphisms from G to H is Lambda, then for every epsilon > 0 and every function f:G-->H, the number of homomorphisms that have agreement Lambda + epsilon with f is at most poly(1/epsilon). We thus give a broad class of codes whose list-decoding radius exceeds the ``Johnson bound''.
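In the simplest abelian case the code and the agreement parameter Lambda can be computed directly; the sketch below (my own illustration, not part of the result) enumerates the homomorphisms from Z_n to Z_m, which are exactly the maps x -> a*x with n*a = 0 mod m, and measures their maximum pairwise agreement.

from fractions import Fraction

# Homomorphisms Z_n -> Z_m are x -> (a*x) % m for each a with (n*a) % m == 0.
def homomorphisms(n, m):
    return [[(a * x) % m for x in range(n)] for a in range(m) if (n * a) % m == 0]

def max_agreement(n, m):
    homs = homomorphisms(n, m)
    best = Fraction(0)
    for i in range(len(homs)):
        for j in range(i + 1, len(homs)):
            agree = sum(u == v for u, v in zip(homs[i], homs[j]))
            best = max(best, Fraction(agree, n))
    return best

# Z_6 -> Z_3 has three homomorphisms (a = 0, 1, 2); any two of them agree
# exactly on the multiples of 3, so Lambda = 1/3 and the distance is 2/3.
print(len(homomorphisms(6, 3)), max_agreement(6, 3))    # 3 1/3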
Joint work with Elena Grigorescu, Swastik Kopparty, and Madhu Sudan.
2008/03/11Avraham Ben-Aroya [+]A combinatorial construction of almost-Ramanujan graphs using the zig-zag product
Reingold, Vadhan and Wigderson [FOCS'00] introduced the graph zig-zag product. This product combines a large graph and a small graph into one graph, such that the resulting graph inherits its size from the large graph, its degree from the small graph and its spectral gap from both. Using this product they gave the first fully-explicit combinatorial construction of expander graphs. They showed how to construct $D$-regular graphs having spectral gap $1-O(D^{-\frac{1}{3}})$. In the same paper, they posed the open problem of whether a similar graph product could be used to achieve the almost-optimal spectral gap $1-O(D^{-\frac{1}{2}})$.
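(For context on why $1-O(D^{-\frac{1}{2}})$ is the right target: by the Alon-Boppana bound, any infinite family of $D$-regular graphs has normalized second eigenvalue at least $2\sqrt{D-1}/D - o(1)$, so no such family can have spectral gap better than $1 - 2\sqrt{D-1}/D + o(1) = 1-\Theta(D^{-\frac{1}{2}})$; Ramanujan graphs attain this bound.)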
In this paper we propose a generalization of the zig-zag product that combines a large graph and several small graphs. The new product gives a better relation between the degree and the spectral gap of the resulting graph. We use the new product to give a fully-explicit combinatorial construction of $D$-regular graphs having spectral gap $1-D^{-\frac{1}{2} + o(1)}$.
Joint work with Amnon Ta-Shma
2008/03/04Zeev Dvir[+]Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits
We show that lower bounds for bounded depth arithmetic circuits imply derandomization of polynomial identity testing for bounded depth arithmetic circuits. More formally, if there exists an explicit polynomial f(x_1,...,x_m) that cannot be computed by a depth d arithmetic circuit of small size then there exists an efficient deterministic algorithm to test whether a given depth d-8 circuit is identically zero or not (assuming the individual degrees of the tested circuit are not too high). In particular, if we are guaranteed that the tested circuit computes a multilinear polynomial then we can perform the identity test efficiently. The above results are obtained using the arithmetic Nisan-Wigderson generator of [KabanetsImpagliazzo04] together with a new theorem on bounded depth circuits, which is the main technical contribution of our work.
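For reference, the randomized procedure being derandomized here is the classical Schwartz-Zippel identity test: evaluate the circuit at a random point from a large enough domain, and a polynomial that is nonzero over that domain survives (evaluates to nonzero) with good probability. A minimal black-box sketch of the randomized test (an illustration only, not the deterministic algorithm of the talk):

import random

# Schwartz-Zippel randomized identity test.  `poly` is black-box access to a
# polynomial in m variables over the integers, evaluated modulo a large prime p.
# If the polynomial is identically zero we always answer True; if it is nonzero
# modulo p, each trial answers False with probability at least 1 - deg/p.
def is_identically_zero(poly, m, p=2**61 - 1, trials=20):
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(m)]
        if poly(point) % p != 0:
            return False
    return True

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; (x+y)^2 - x^2 - y^2 is not.
zero = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
nonzero = lambda v: (v[0] + v[1])**2 - v[0]**2 - v[1]**2
print(is_identically_zero(zero, 2), is_identically_zero(nonzero, 2))   # True False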
Joint work with Amir Shpilka and Amir Yehudayoff.
2008/02/26Amir Yehudayoff[+]Multilinear Formulas, Maximal-Partition Discrepancy and Mixed-Sources Extractors
In this paper we study multilinear formulas, monotone arithmetic circuits, maximal partition discrepancy, best partition communication complexity and mixed two-source extractors. We will first define maximal partition discrepancy, and construct a function f that has exponentially small maximal partition discrepancy. We will then review several applications of this construction: 1. The best partition communication complexity of f is \Theta(n).
2. It can be used to extract a linear number of almost perfect random bits from a mixed two source of randomness.
3. We use f to prove lower bounds for several models of arithmetic circuits; for example, monotone arithmetic circuits and orthogonal multilinear formulas.
We will focus mainly on the monotone model.
Joint work with Ran Raz.
2008/02/19Adi Shraibman[+]Disjointness is hard in the multi-party number-on-the-forehead model
We show that disjointness requires randomized communication Omega(n^{1/2k}/((k-1)2^{k-1}2^{2^{k-1}})) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound was Omega((log n)/(k-1)). By results of Beame, Pitassi, and Segerlind, this implies 2^{n^{Omega(1)}} lower bounds on the size of tree-like Lovasz-Schrijver proof systems needed to refute certain unsatisfiable CNFs, and super-polynomial lower bounds on the size of any tree-like proof system whose terms are degree-d polynomial inequalities for d = loglog n - O(logloglog n).
Joint work with Troy Lee from Rutgers University
2008/02/12Carlos Mochon[+]Quantum coin flipping (by telephone)
Coin flipping by telephone (Blum '81) is one of the most basic tasks of two-party secure computation. In a quantum setting, it is possible to realize (weak) coin flipping with information theoretic security.
Quantum coin flipping has been a longstanding open problem, and its solution uses an innovative formalism developed by Alexei Kitaev for mapping quantum games into convex optimization problems. The optimizations are carried out over duals to the cone of operator monotone functions, though the mapped problem can also be described in a very simple language that requires no knowledge of quantum information.
Time permitting, I will discuss both Kitaev's formalism, and the solution that leads to quantum weak coin flipping with arbitrarily small bias.
2008/01/15Shachar Lovett[+]Unconditional pseudorandom generators for low degree polynomials
We give an explicit construction of pseudorandom generators against low degree polynomials over finite fields. We show that the sum of $2^d$ small-biased generators with error $\epsilon^{2^{O(d)}}$ is a pseudorandom generator against degree $d$ polynomials with error $\epsilon$. This gives a generator with seed length $2^{O(d)} \log{(n/\epsilon)}$. Our construction follows the recent breakthrough result of Bogdanov and Viola \cite{BV}. Their work shows that the sum of $d$ small-biased generators is a pseudorandom generator against degree $d$ polynomials, assuming the Inverse Gowers Conjecture. However, this conjecture is only proven for $d=2,3$. The main advantage of our work is that it does not rely on any unproven conjectures.
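For orientation (standard definitions, not specific to this paper): a generator $G$ is pseudorandom against degree-$d$ polynomials with error $\epsilon$ if for every polynomial $p:\F_2^n\to\F_2$ of degree at most $d$ we have $|\Pr_s[p(G(s))=1]-\Pr_{x\sim U_n}[p(x)=1]|\le\epsilon$; a small-biased generator is exactly the degree-$1$ (linear) case, and the construction above takes the bitwise XOR of the outputs of $2^d$ independent copies of such a generator.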
2008/01/08Guy Rothblum[+]Delegating Computation: Interactive Proofs for Mortals
The main challenge we consider in this work is: "For which computations can a prover supply you with the output of the computation and prove to you that the output is correct, where you use significantly less resources to verify the correctness of the output than it would take to run the computation, and without the prover working too hard?" We address this problem in the Interactive Proof setting. We construct, for many efficiently computable languages, public-coin, interactive proofs where the verifier's running time is quasi-linear, the prover's running time is polynomial, and the communication is poly-logarithmic. In particular, we give such a proof system for any language in (uniform) NC.
Our results allow us to make progress on other related questions, such as the length of zero-knowledge proofs for NP languages, the expressive power of public-coin interactive proofs with log-space verifiers, and constructing short efficiently verifiable non-interactive certificates of correctness for computations without resorting to the random oracle model.
Previously, the question of verifying the correctness of computations was addressed in the PCP model by BFLS, in the argument model under computational assumptions by Kilian, and in the random oracle model by Micali.
In contrast, our work is in the (single prover) interactive proof model, and makes no assumptions on the computational power of the dishonest prover.
Joint work with Shafi Goldwasser and Yael Kalai.
2008/01/01Gil Segev[+]Securing Vote Storage Mechanisms: Deterministic History-Independent Strategies for Storing Information on Write-Once Memories
Motivated by the challenging task of designing "secure" vote storage mechanisms, we deal with information storage mechanisms that operate in extremely hostile environments. In such environments, the majority of existing techniques for information storage and for security are susceptible to powerful adversarial attacks. In this setting, we propose a mechanism for storing a set of at most $K$ elements from a large universe of size $N$ on write-once memories in a manner that does not reveal the insertion order of the elements. We consider a standard model for write-once memories, in which the memory is initialized to the all $0$'s state, and the only operation allowed is flipping bits from $0$ to $1$. Whereas previously known constructions were either inefficient (required $\Theta(K^2)$ memory), randomized, or employed cryptographic techniques which are unlikely to be available in hostile environments, we eliminate each of these undesirable properties. The total amount of memory used by the mechanism is linear in the number of stored elements and poly-logarithmic in the size of the universe of elements. In addition, we consider one of the classical distributed computing problems: conflict resolution in multiple-access channels. By establishing a tight connection with the basic building block of our mechanism, we construct the first deterministic and non-adaptive conflict resolution algorithm whose running time is optimal up to poly-logarithmic factors.
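A minimal sketch of the write-once memory model assumed above (an illustration of the model only; the history-independent storage mechanism itself is the content of the paper):

# Write-once memory: initialized to all zeros; the only allowed operation is
# flipping individual bits from 0 to 1, and bits can never be cleared again.
class WriteOnceMemory:
    def __init__(self, size):
        self.bits = [0] * size

    def set_bit(self, i):
        self.bits[i] = 1                     # 0 -> 1 (idempotent) is allowed

    def clear_bit(self, i):
        raise RuntimeError("write-once memory: a 1 cannot be flipped back to 0")

mem = WriteOnceMemory(8)
mem.set_bit(3)
mem.set_bit(5)
print(mem.bits)                              # [0, 0, 0, 1, 0, 1, 0, 0]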
Joint work with Tal Moran and Moni Naor.
2007/12/25Uri Stav[+]Linear and non-linear Index Coding
The following source coding problem was introduced by Birk and Kol: a sender holds a word x in {0,1}^n, and wishes to broadcast a codeword to n receivers, R_1,...,R_n. The receiver R_i is interested in x_i, and has prior side information comprising some subset of the n bits. This corresponds to a directed graph G on n vertices, where ij is an edge iff R_i knows the bit x_j. An index code for G is an encoding scheme which enables each R_i to always reconstruct x_i, given his side information. An optimal index code is an index code with minimal word length. In the talk I will survey the motivation, basic properties and several recent results on index coding. I will focus on a joint work with Eyal Lubetzky in which we show that linear index coding is not necessarily optimal (thus disproving an earlier conjecture of Bar-Yossef, Birk, Jayram and Kol). This is achieved by an explicit construction of a side information graph, which extends Alon's variant of the celebrated Ramsey construction of Frankl and Wilson.
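A toy instance (my own illustration, not from the talk): when the side-information graph is the complete graph, so that receiver R_i already knows every bit except x_i, broadcasting the single bit x_1 XOR ... XOR x_n is an optimal index code of length 1.

from functools import reduce
from operator import xor

# Complete side-information graph: receiver i knows all bits except x_i.
# Broadcasting the XOR of all bits lets every receiver recover its own bit.
def broadcast(x):
    return reduce(xor, x)

def decode(codeword, known_bits):            # known_bits = x_j for all j != i
    return codeword ^ reduce(xor, known_bits, 0)

x = [1, 0, 1, 1, 0]
c = broadcast(x)
decoded = [decode(c, [b for j, b in enumerate(x) if j != i]) for i in range(len(x))]
print(decoded == x)                          # True: one broadcast bit instead of n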
2007/12/18Paul Valiant[+]Testing Symmetric Properties of Distributions
We introduce the notion of a \emph{Canonical Tester} for a class of properties, that is, a tester strong and general enough that ``a property is testable if and only if the Canonical Tester tests it''.
We construct a Canonical Tester for the class of symmetric properties of one or two distributions satisfying a certain weak continuity condition.
Analyzing the performance of the Canonical Tester on specific properties resolves several open problems, establishing lower-bounds that match known upper-bounds: we show that distinguishing between entropy $<\alpha$ or $>\beta$ on distributions over $[n]$ requires $n^{\alpha/\beta- o(1)}$ samples, and distinguishing whether a pair of distributions has statistical distance $<\alpha$ or $>\beta$ requires $n^{1-o(1)}$ samples. Our techniques also resolve a conjecture about a property that our Canonical Tester does not apply to: distinguishing identical distributions from those with statistical distance $>\beta$ requires $\Omega(n^{2/3})$ samples.
2007/12/11Iftach Haitner[+]Statistically-Hiding Commitment from Any One-Way Function
2007/12/04Or Meir[+]Combinatorial Construction of Locally Testable Codes
2007/11/27Ofer Neiman[+]Local Embedding of Metric Spaces
In many application areas, complex data sets are often represented by some metric space and metric embedding is used to provide a more structured representation of the data. In many of these applications much greater emphasis is put on preserving the {\em local} structure of the original space than on maintaining its complete structure. In this paper we initiate the study of {\em local embeddings} of metric spaces and provide embeddings with distortion depending solely on the local structure of the space.
Joint work with Ittai Abraham and Yair Bartal
2007/11/20Avinatan Hassidim[+]Secure MultiParty Computation with (only) an Honest Majority
Secret sharing and multiparty computation (also called ``secure function evaluation'') are fundamental primitives in modern cryptography, allowing a group of mutually distrustful players to perform correct, distributed computations under the sole assumption that some number of them will follow the protocol honestly. This paper investigates how much trust is necessary -- that is, how many players must remain honest -- in order for distributed quantum computations to be possible.
We present a verifiable quantum secret sharing (VQSS) protocol, and a general secure multiparty quantum computation (MPQC) protocol, which can tolerate any $\lfloor \frac{n-1}{2}\rfloor$ cheaters among $n$ players. Previous protocols for these tasks tolerated $\lfloor \frac{n-1}{4}\rfloor$ and $\lfloor \frac{n-1}{6}\rfloor$ cheaters, respectively. The threshold we achieve is tight -- even in the classical case, ``fair'' multiparty computation is not possible if any set of $n/2$ players can cheat.
Our protocols rely on approximate quantum error-correcting codes, which can tolerate a larger fraction of errors than traditional, exact codes. We introduce new families of authentication schemes and approximate codes tailored to the needs of our protocols, as well as new state purification techniques along the lines of those used in fault-tolerant quantum circuits.
Joint work with Michael Ben-Or, Claude Crépeau, Daniel Gottesman and Adam Smith
2007/11/13Ronen Shaltiel[+]Low-end uniform hardness versus randomness tradeoffs for AM
In 1998, Impagliazzo and Wigderson proved a hardness vs. randomness tradeoff for BPP in the {\em uniform setting}, which was subsequently extended to give optimal tradeoffs for the full range of possible hardness assumptions by Trevisan and Vadhan (in a slightly weaker setting). In 2003, Gutfreund, Shaltiel and Ta-Shma proved a uniform hardness vs. randomness tradeoff for AM, but that result only worked on the ``high-end'' of possible hardness assumptions. In this work, we give uniform hardness vs. randomness tradeoffs for AM that are near-optimal for the full range of possible hardness assumptions. Following Gutfreund et al. we do this by constructing a hitting-set-generator (HSG) for AM with ``resilient reconstruction''. Our construction is a recursive variant of the Miltersen-Vinodchandran HSG, the only known HSG construction with this required property. The main new idea is to have the reconstruction procedure operate implicitly and locally on superpolynomially large objects, using tools from PCPs (low-degree testing, self-correction) together with a novel use of extractors that are built from Reed-Muller codes for a sort of locally-computable error-reduction. As a consequence we obtain gap theorems for AM (and AM $\cap$ coAM) that state, roughly, that either AM (or AM $\cap$ coAM) protocols running in time $t(n)$ can simulate all of EXP (``Arthur-Merlin games are powerful''), or else all of AM (or AM $\cap$ coAM) can be simulated in nondeterministic time $s(n)$ (``Arthur-Merlin games can be derandomized''), for a near-optimal relationship between $t(n)$ and $s(n)$. As in the paper of Gutfreund et al., the case of AM $\cap$ coAM yields a particularly clean theorem that is of special interest due to the wide array of cryptographic and other problems that lie in this class.
The talk will be self contained and I will survey previous work on hardness versus randomness tradeoffs.
Joint work with Chris Umans
2007/11/06Stephanie Wehner[+]Clifford algebra and uncertainty relations
2007/10/30Shmuel Marcovitch[+]Is Communication Complexity Physical?
2007/10/23Oded Schwartz[+]Which comes first: Max-Cut or Unique-Game?
This is a public lecture on the PhD thesis. Thesis title: "Expansion and Approximability". In this thesis we study expanding graphs, the approximability of optimization problems and issues that combine both notions. This talk presents recent results with Muli Safra that demonstrate the implications of expansion for the complexity of approximating optimization problems.
Finding tight thresholds of the approximability of NP-hard optimization problems has been an ongoing successful study in complexity theory. When NP-hardness cannot be obtained, one can reduce from problems whose complexity is not yet settled (even assuming P does not equal NP). A recently popular such problem is the (in)approximability of almost completely satisfiable Unique-Game.
Khot et al. showed a reduction from this problem to the problem of approximating Max-Cut, better than the approximation of Goemans & Williamson. We conjecture an efficient reduction in the other direction. We show that the Parallel Repetition technique can yield this reduction, if a "perfect" analysis is provided. We obtain such an analysis only for instances with good expansion properties, and up to a constant soundness. Our analysis is constructive (contrary to that of Raz, and the simplified one by Holenstein) and we discuss possible algorithmic consequences to the constructiveness.
Joint work with Muli Safra

Schedule for 2006/2007

Date Speaker Title
2007/06/05Avner Magen[+]Tight integrality gaps for Vertex Cover SDPs in the Lovasz-Schrijver hierarchy
2007/05/15Ben Toner[+]De Finetti theorems for conditional probability distributions
2007/05/08Guy Kindler[+]Understanding parallel repetition requires understanding foams
2007/05/01Ronald de Wolf[+]On computation and communication with small bias
2007/04/17Oded Schwartz[+]An Elementary Construction of Constant-Degree Expanders
2007/04/10Vinod Vaikuntanathan[+]Chosen ciphertext security, (almost) for free
2007/03/27Prahladh Harsha[+]The Communication Complexity of Correlation
2007/03/20Benny Applebaum[+]On Pseudorandom Generators with Linear Stretch in NC0
2007/03/13Dan Gutfreund[+]Verifying and decoding in constant depth
2007/03/06Danny Harnik[+]On the Compressibility of NP Instances and Cryptographic Applications
2007/02/27Avraham Ben-Aroya[+]Quantum expanders and the quantum entropy difference problem
2007/01/16Stephen D. Miller[+]A spectral analysis of the Pollard Rho algorithm for discrete logarithms
2007/01/09Zeev Dvir[+]Rank Extractors
2007/01/02Andrej Bogdanov[+]Hardness Amplification for Errorless Heuristics
2006/12/26Ben Riva[+]Bare-handed Electronic Voting
2006/12/12Tal Moran[+]Receipt-Free Universally-Verifiable Voting With Everlasting Privacy
2006/12/05Noam Livne[+]All Natural NPC Problems Have Average-Case Complete Versions
2006/11/28Gil Segev[+]Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared Key Models
2006/11/21Enav Weinreb[+]Private Approximation of Search Problems
2006/11/14Guy Kindler[+]Bounded functions with small tails are Juntas
2006/11/07Ishay Haviv[+]Tensor-based Hardness of the Shortest Vector Problem
2006/10/31Manor Mendel[+]Ramsey partitions and proximity data-structures


Schedule for 2005/2006

Date Speaker Title
2006/06/13Anup Rao[+]Extractors for a Constant Number of Polynomially Small Min-Entropy Independent Sources
2006/06/06Ronen Shaltiel[+]2-source dispersers for entropy $n^{o(1)}$ and Ramsey graphs beating the Frankl-Wilson constructions
2006/05/09Dorit Aharonov[+]A polynomial quantum algorithm for approximating the Jones polynomial
2006/04/25Oded Regev [+]Bounded-error quantum state identification with applications
2006/04/04Iddo Tzameret[+]The Strength of Multilinear Proofs
2006/03/21Iftach Haitner[+]On the power of the randomized iterate
2006/03/07Yuval Emek[+]A Tight Upper Bound on the Probabilistic Embedding of Series-Parallel Graphs
2006/01/31Adam Smith[+]Calibrating Noise to Sensitivity in Private Data Analysis
2006/01/24Sofya Raskhodnikova[+]Sublinear Algorithms for Approximating String Compressibility and the Distribution Support Size
2006/01/17Ishay Haviv[+]Hardness of the Covering Radius Problem on Lattices
2006/01/10Eyal Kaplan[+]Derandomized Constructions of $k$-Wise (Almost) Independent Permutations
2006/01/03Ricky Rosen[+]Lattice Problems and Norm Embeddings
2005/12/27Adi Akavia[+]On the (Im)Possibility of Basing One-Way Functions on NP-Hardness
2005/12/20Ofer Neiman[+]Metric Embedding with Relaxed Guarantees
2005/12/13Guy Kindler[+]The noisy broadcast problem
2005/12/06Alex Samorodnitsky[+]Gowers Uniformity, influence of variables, and PCPs.
2005/11/29Ronen Gradwohl[+]Random Selection with an Adversarial Majority
2005/11/22Ariel Gabizon[+]Deterministic Extractors for Affine Sources over Large Fields
2005/11/15Eldar Fischer[+]Testing and estimation of dense graph properties
2005/11/08Tamir Tuller[+]Hardness of reconstructing evolutionary trees
2005/11/01Dana Moshkovitz[+]Sub-Constant Error Low Degree Test of Almost-Linear Size

Website written partially by Niko Farhi