We consider relative error low rank approximation of tensors with respect to the Frobenius norm. Namely, given an order-$q$ tensor $A \in \mathbb{R}^{\prod_{i=1}^{q} n_i}$, output a rank-$k$ tensor $B$ for which $\|A - B\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$, where $\mathrm{OPT} = \inf_{\textrm{rank-}k\, A'} \|A - A'\|_F^2$. Despite much success in obtaining relative error low rank approximations for matrices, no such results were known for tensors for arbitrary $(1+\epsilon)$-approximations. One structural issue is that there may be no rank-$k$ tensor $A_k$ achieving the above infimum (see the order-$3$ form of the objective displayed after results (1) and (2) below). Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the rank of a tensor, which is NP-hard. We bypass these two issues via (1) bicriteria and (2) parameterized complexity solutions:
(1) We give an algorithm which outputs a rank-$k' = O((k/\epsilon)^{q-1})$ tensor $B$ for which $\|A - B\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$ in $\mathrm{nnz}(A) + n \cdot \poly(k/\epsilon)$ time in the real RAM model, whenever either $A_k$ exists or $\mathrm{OPT} > 0$. Here $\mathrm{nnz}(A)$ denotes the number of non-zero entries in $A$.
(2) We give an algorithm for any $\delta > 0$ which outputs a rank-$k$ tensor $B$ for which $\|A - B\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$ and runs in $(\mathrm{nnz}(A) + n \cdot \poly(k/\epsilon) + \exp(k^2/\epsilon)) \cdot n^{\delta}$ time in the unit cost RAM model.
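For concreteness, and using only the standard CP (canonical) notion of tensor rank rather than anything specific to this work, for $q = 3$ a rank-$k$ tensor $B \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ has the form
\[ B = \sum_{i=1}^{k} u_i \otimes v_i \otimes w_i, \qquad u_i \in \mathbb{R}^{n_1},\ v_i \in \mathbb{R}^{n_2},\ w_i \in \mathbb{R}^{n_3}, \]
so that
\[ \mathrm{OPT} = \inf_{u_i, v_i, w_i} \sum_{j_1, j_2, j_3} \Big( A_{j_1 j_2 j_3} - \sum_{i=1}^{k} (u_i)_{j_1} (v_i)_{j_2} (w_i)_{j_3} \Big)^2, \]
an infimum which, as noted above, need not be attained by any rank-$k$ tensor.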
For outputting a rank-$k$ tensor, or even a bicriteria solution with rank-$Ck$ for a certain constant $C > 1$, we show a $2^{\Omega(k^{1-o(1)})}$ time lower bound under the Exponential Time Hypothesis.
Our results are based on an ``iterative existential argument'', and also give the first relative error low rank approximations for tensors for a large number of error measures for which nothing was known. In particular, we give the first relative error approximation algorithms on tensors for: column, row, and tube subset selection; entrywise $\ell_p$-low rank approximation for $1 \le p \le 2$; low rank approximation with respect to the sum of Euclidean norms of faces or tubes; weighted low rank approximation; and low rank approximation in distributed and streaming models. We also obtain several new results for matrices, such as $\mathrm{nnz}(A)$-time CUR decompositions, improving the previous $\mathrm{nnz}(A) \log n$-time CUR decompositions, which may be of independent interest.
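For reference, and again as a standard definition rather than a statement of this paper's specific guarantees, a CUR decomposition of a matrix $A$ approximates it as
\[ A \approx C\, U\, R, \]
where $C$ is formed from a small subset of actual columns of $A$, $R$ from a small subset of actual rows of $A$, and $U$ is a small linking matrix; relative error CUR decompositions ask that $\|A - CUR\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$ as above.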