We study the problem of obtaining efficient, deterministic, black-box polynomial identity testing algorithms for depth-3 set-multilinear circuits (over arbitrary fields). This class of circuits has an efficient, deterministic, white-box polynomial identity testing algorithm (due to Raz and Shpilka), but no such black-box algorithm is known. We recast this problem as the question of finding a low-dimensional subspace V, spanned by rank-1 tensors, such that any non-zero tensor in the dual space ker(V) has high rank. We obtain explicit constructions of essentially optimal-size hitting sets for tensors of degree 2 (matrices), and obtain quasi-polynomial-size hitting sets for arbitrary tensors (but this second hitting set is less explicit).
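In symbols (the notation below is ours; the pairing <., .> denotes the entrywise inner product on tensors, which the abstract does not fix), the recast condition asks for rank-1 tensors whose span V satisfies
\[
V = \mathrm{span}\bigl\{\, u^{(i)}_1 \otimes \cdots \otimes u^{(i)}_d \,\bigr\}_{i=1}^{m},
\qquad
\ker(V) = \{\, T : \langle T, v\rangle = 0 \ \text{for all } v \in V \,\},
\]
\[
T \in \ker(V),\ T \neq 0 \ \Longrightarrow\ \mathrm{rank}(T) > r,
\]
so that every non-zero tensor of rank at most r has a non-zero inner product with some spanning rank-1 tensor; equivalently, the points defining those rank-1 tensors form a hitting set.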
We also show connections to the task of performing low-rank recovery of matrices, which is studied in the field of compressed sensing. Low-rank recovery asks (say, over R) to recover a matrix M from few linear measurements, under the promise that M has rank r. In this work, we restrict our attention to recovering matrices of rank exactly r using deterministic, non-adaptive, noise-free linear measurements. Over R, we provide a set (of size 4nr) of such measurements, from which M can be recovered in O(rn^2 + r^3 n) field operations, and the number of measurements is essentially optimal. Further, the measurements can be taken to be all rank-1 matrices, or all sparse matrices. To the best of our knowledge, no explicit constructions with these properties were known prior to this work.
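As a gloss (the symbols A_i, u_i, v_i below are our notation, not fixed by the abstract), each non-adaptive linear measurement can be viewed as an inner product with a fixed matrix, and the claim is that these matrices may be chosen to be rank 1 (or sparse):
\[
y_i = \langle A_i, M \rangle = \mathrm{tr}\bigl(A_i^{\top} M\bigr), \qquad i = 1, \dots, 4nr,
\]
where each A_i can be taken of the form u_i v_i^{\top}, and M (of rank exactly r) is recovered from y_1, \dots, y_{4nr} in O(rn^2 + r^3 n) field operations.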
We also give a more formal connection between low-rank recovery and the task of sparse (vector) recovery: any sparse-recovery algorithm that exactly recovers vectors of length n and sparsity 2r, using m non-adaptive measurements, yields a low-rank recovery scheme for exactly recovering n x n matrices of rank r, making 2nm non-adaptive measurements. Furthermore, if the sparse-recovery algorithm runs in time T, then the low-rank recovery algorithm runs in time O(rn^2 + nT). We obtain this reduction using linear-algebraic techniques, rather than the convex optimization more commonly seen in compressed sensing algorithms.
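In summary (with T denoting the running time of the sparse-recovery algorithm, as above), the reduction translates parameters as
\[
\underbrace{\bigl(\text{length } n,\ 2r\text{-sparse},\ m \text{ measurements},\ \text{time } T\bigr)}_{\text{sparse recovery}}
\ \Longrightarrow\
\underbrace{\bigl(n \times n,\ \text{rank } r,\ 2nm \text{ measurements},\ \text{time } O(rn^2 + nT)\bigr)}_{\text{low-rank recovery}}.
\]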
Finally, we also make a connection to rank-metric codes, as studied in coding theory. These are codes whose codewords are matrices (or tensors), where the distance between matrices A and B is rank(A-B), as opposed to the usual Hamming metric. We obtain essentially optimal-rate codes over matrices and provide an efficient decoding algorithm. We also obtain codes over tensors, with poorer rate, but still with efficient decoding.