Abstract
Finding the rank of a tensor is a problem with many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also systematically study various measures of orthogonality in tensor product spaces, and we give a new generalization of the singular value decomposition to higher-order tensors.
References
C. D. Aliprantis and K. C. Border, Infinite Dimensional Analysis – A Hitchhiker’s Guide, third edition, Springer, 2006.
D. Bini, M. Capovani, G. Lotti and F. Romani, \(O(n^{2.7799})\) complexity for matrix multiplication, Inf. Proc. Letters 8 (1979), 234–235.
M. Bläser, Beyond the Alder-Strassen bound, Theor. Comp. Science 331 (2005), 3–21.
P. Bürgisser, M. Clausen and M. A. Shokrollahi, Algebraic Complexity Theory, Grundlehren der Mathematischen Wissenschaften 315, Springer-Verlag, Berlin, 1997.
E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics 9 (2009), 717–772.
E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. on Information Theory 56 (2010), no. 5, 2053–2080.
E. Carlen, E. H. Lieb and M. Loss, An inequality of Hadamard type for permanents, Methods Appl. Anal. 13 (2006), no. 1, 1–17.
J. D. Carroll and J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart-Young” decomposition, Psychometrika 35 (1970), 283–319.
D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, J. Symbolic Computation 9 (1990), no. 3, 251–280.
V. De Silva and L.-H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl. 30 (2008), 1254–1279.
S. Gandy, B. Recht and I. Yamada, Tensor completion and low n-rank tensor recovery via convex optimization, Inverse Problems 27 (2011), no. 2.
D. Glynn, The permanent of a square matrix, European J. Combin. 31 (2010), no. 7, 1887–1891.
A. Grothendieck, Produits tensoriels topologiques et espaces nucléaires, Mem. Amer. Math. Soc. 1955 (1955), no. 16.
R. A. Harshman, Foundations of the PARAFAC procedure: models and conditions for an “explanatory” multimodal factor analysis, UCLA working papers in Phonetics 16 (1970), 1–84.
J. Håstad, Tensor rank is NP-complete, J. Algorithms 11 (1990), no. 4, 644–654.
J. Håstad, Tensor rank is NP-complete, Automata, languages and programming (Stresa, 1989), Lecture Notes in Comput. Sci. 372, Springer, Berlin, 1989, 451–460.
F. L. Hitchcock, Multiple invariants and generalized rank of a p-way matrix or tensor, J. Math. and Phys. 7 (1927), no. 1, 40–79.
C. J. Hillar and L.-H. Lim, Most tensor problems are NP-hard, Journal of the ACM 60 (2013), no. 6, Art. 45.
R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Trans. on Information Theory 56 (2010), 2980–2998.
J. B. Kruskal, Three-way arrays: rank and uniqueness for trilinear decompositions, with application to arithmetic complexity and statistics, Linear Algebra Appl. 18 (2) (1977), 95–138.
J. M. Landsberg, Tensors: Geometry and Applications, Graduate Studies in Mathematics 128, American Mathematical Society, Providence, RI, 2012.
J. M. Landsberg, New lower bounds for the rank of matrix multiplication, SIAM J. Comput. 43 (2014), no. 1, 144–149.
J. M. Landsberg and Z. Teitler, On the ranks and border ranks of symmetric tensors, Foundations of Computational Mathematics 10.3 (2010), 339–366.
L. de Lathauwer, B. de Moor and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21 (2000), no. 4, 1253–1278.
F. Le Gall, Powers of tensors and fast matrix multiplication, Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), 2014, 296–303.
L.-H. Lim and P. Comon, Multiarray signal processing: tensor decomposition meets compressed sensing, Comptes Rendus Mecanique 338 (2010), 311–320.
L. H. Lim and P. Comon, Blind multilinear identification, IEEE Trans. Inf. Theory 60 (2014), 1260–1280.
A. Massarenti and E. Raviolo, The rank of \(n\times n\) matrix multiplication is at least \(3n^2-2\sqrt{2}n^{3/2}-3n\), Linear Algebra and its Applications 438 (2013), 4500–4509.
B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review 52 (2010), no. 3, 471–501.
R. Schatten, A Theory of Cross-Spaces, Princeton University Press, Princeton, NJ, 1950.
M. Shafiei, Apolarity for determinants and permanents of generic matrices, preprint, arXiv:1212.0515.
V. Strassen, Gaussian elimination is not optimal, Numer. Math. 13 (1969), 354–356.
N. D. Sidiropoulos and R. Bro, On the uniqueness of multilinear decomposition of N-way arrays, J. Chemometrics 14 (2000), no. 3, 229–239.
A. J. Stothers, On the Complexity of Matrix Multiplication, Ph.D. Thesis, Univ. Edinburgh, 2010.
V. V. Williams, Multiplying matrices faster than Coppersmith-Winograd, STOC 2012, Proceedings of the forty-fourth annual ACM symposium on Theory of computing, ACM, 2012, 887–898.
Acknowledgments
The author thanks Zach Teitler for useful comments and a correction.
Communicated by Peter Bürgisser.
The author was partially supported by NSF Grants DMS 0901298 and DMS 1302032.
Appendix: The Tensor Rank of the Determinant and the Permanent
Recall that \(\det _n=\sum _{\sigma \in S_n}{\text {sgn}}(\sigma )\,e_{\sigma (1)}\otimes e_{\sigma (2)}\otimes \cdots \otimes e_{\sigma (n)}\in ({\mathbb {C}}^n)^{\otimes n}\), where \(e_1,\ldots ,e_n\) is the standard basis of \({\mathbb {C}}^n\). For a subset \(I=\{i_1,i_2,\ldots ,i_r\}\subseteq \{1,2,\ldots ,n\}\) with \(i_1<\cdots <i_r\), define
\[
\det _r(I)=\sum _{\tau \in S_r}{\text {sgn}}(\tau )\,e_{i_{\tau (1)}}\otimes e_{i_{\tau (2)}}\otimes \cdots \otimes e_{i_{\tau (r)}}\in ({\mathbb {C}}^n)^{\otimes r}
\]
and let \({\text {sgn}}(I)\) be the sign of the permutation that sends \((1,2,\ldots ,n)\) to \((i_1,\ldots ,i_r,j_1,\ldots ,j_{n-r})\), where \(I^c=\{j_1<\cdots <j_{n-r}\}\) is the complement of \(I\). We have the following generalized Laplace expansion:
\[
\det _n=\sum _I {\text {sgn}}(I)\,\det _r(I)\otimes \det _{n-r}(I^c),
\]
where \(I\) runs over all \(\binom{n}{r}\) subsets of \(\{1,2,\ldots ,n\}\) with cardinality \(r\).
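As a numerical sanity check (the code and helper names below are ours, not from the paper), the generalized Laplace expansion can be verified for small \(n\) with NumPy:

```python
import itertools
import numpy as np

def perm_sign(p):
    # sign of a permutation given as a sequence of distinct integers
    sign = 1
    p = list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def det_tensor(indices, n):
    # det_r(I): the antisymmetrized tensor on the index set I, inside (C^n)^{tensor r}
    r = len(indices)
    T = np.zeros((n,) * r)
    for p in itertools.permutations(range(r)):
        idx = tuple(indices[p[k]] for k in range(r))
        T[idx] += perm_sign(p)
    return T

n, r = 4, 2
full = det_tensor(tuple(range(n)), n)   # det_n itself

# sum of sgn(I) * det_r(I) (x) det_{n-r}(I^c) over all r-element subsets I
expansion = np.zeros((n,) * n)
for I in itertools.combinations(range(n), r):
    Ic = tuple(sorted(set(range(n)) - set(I)))
    expansion += perm_sign(I + Ic) * np.multiply.outer(det_tensor(I, n), det_tensor(Ic, n))

print(np.array_equal(full, expansion))  # True
```

The same check passes for any choice of \(n\) and \(0\le r\le n\), at a cost that grows factorially in \(n\).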
By flattening, we may view the tensor \(\det _n\) as a tensor in \({\mathbb {C}}^{n^r}\otimes {\mathbb {C}}^{n^{n-r}}\). The tensors \(\det _r(I)\), where \(I\) is a subset of \(\{1,2,\ldots ,n\}\) with \(r\) elements, are linearly independent. The tensors \(\det _{n-r}(I^c)\) are linearly independent as well. This shows that the flattened tensor has rank at least \(\binom{n}{r}\). Since the rank of a flattening is a lower bound for the tensor rank, we have
\[
{\text {rank}}(\det _n)\ge \binom{n}{r}.
\]
We get the best lower bound for \(r=\lfloor n/2\rfloor \):
\[
{\text {rank}}(\det _n)\ge \binom{n}{\lfloor n/2\rfloor }.
\]
We have a similar Laplace expansion for the permanent, so we also get
\[
{\text {rank}}({\text {per}}_n)\ge \binom{n}{\lfloor n/2\rfloor }.
\]
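The flattening bound can also be observed numerically. The sketch below (our code, not the paper's) builds \(\det _4\), flattens it to a \(16\times 16\) matrix, and finds that its matrix rank is exactly \(\binom{4}{2}=6\):

```python
import itertools
import numpy as np

def det_tensor(n):
    # det_n as an n-way array: the entry at index (s(1),...,s(n)) is sgn(s)
    T = np.zeros((n,) * n)
    for p in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        T[p] = sign
    return T

n, r = 4, 2
# flattening: group the first r tensor factors against the remaining n - r
M = det_tensor(n).reshape(n**r, n**(n - r))
print(np.linalg.matrix_rank(M))  # 6, i.e. binom(4, 2)
```

The matrix rank of a flattening never exceeds the tensor rank, which is what makes this a usable lower bound.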
So the ranks of the determinant and permanent grow at least exponentially. For the permanent, we also have an exponential upper bound: Glynn's formula [12] expresses \({\text {per}}_n\) as a sum of \(2^{n-1}\) simple tensors, so \({\text {rank}}({\text {per}}_n)\le 2^{n-1}\). An exponential upper bound for the rank of the determinant seems not to be known. However, the obvious bound \({\text {rank}}(\det _n)\le n!\) is not sharp for \(n\ge 3\).
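Glynn's formula is easy to test numerically. The sketch below (our code, not from the paper) compares a direct permanent computation against the \(2^{n-1}\)-term formula; each term of the formula is a product of linear forms, i.e. a simple tensor:

```python
import itertools

def per_direct(A):
    # permanent as a sum over all permutations
    n = len(A)
    total = 0
    for p in itertools.permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += prod
    return total

def per_glynn(A):
    # Glynn's formula: 2^(1-n) * sum over sign vectors d with d_1 = 1 of
    # (prod_k d_k) * prod_i (sum_j d_j * a_ij) -- a sum of 2^(n-1) simple tensors
    n = len(A)
    total = 0
    for rest in itertools.product((1, -1), repeat=n - 1):
        d = (1,) + rest
        coeff = 1
        for dk in d:
            coeff *= dk
        prod = 1
        for i in range(n):
            prod *= sum(d[j] * A[i][j] for j in range(n))
        total += coeff * prod
    return total / 2 ** (n - 1)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(per_direct(A), per_glynn(A))  # 463 463.0
```

Both computations agree on any square matrix; the point of Glynn's formula is that it uses \(2^{n-1}\) products instead of \(n!\) terms.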
For \(n=3\), there is an explicit decomposition of \(\det _3\) as a sum of five simple tensors, so \({\text {rank}}(\det _3)\le 5\). Zach Teitler pointed out that this implies that the Waring rank of \(\det _3\) is at most 20. He also pointed out that one can show that \({\text {rank}}(\det _3)\ge 4\). If \(n>3\), then we can again use the generalized Laplace expansion
\[
\det _n=\sum _I {\text {sgn}}(I)\,\det _3(I)\otimes \det _{n-3}(I^c),
\]
where \(I\) runs over all subsets of \(\{1,2,\ldots ,n\}\) with three elements. This proves that
\[
{\text {rank}}(\det _n)\le 5\binom{n}{3}\,{\text {rank}}(\det _{n-3}).
\]
We can rewrite this as
\[
\frac{{\text {rank}}(\det _n)}{n!}\le \frac{5}{6}\cdot \frac{{\text {rank}}(\det _{n-3})}{(n-3)!}.
\]
By induction, we get
\[
{\text {rank}}(\det _n)\le \Big (\frac{5}{6}\Big )^{\lfloor n/3\rfloor }\,n!.
\]
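As a quick arithmetic check (our code, not the paper's), the recursion and the closed-form bound agree exactly for small \(n\), using \({\text {rank}}(\det _m)\le m!\) for \(m\le 2\) as the base case:

```python
from math import comb, factorial

def det_rank_bound(n):
    # upper bound from the recursion rank(det_n) <= 5 * C(n,3) * rank(det_{n-3}),
    # with the trivial bound rank(det_m) <= m! for m <= 2
    if n <= 2:
        return factorial(n)
    return 5 * comb(n, 3) * det_rank_bound(n - 3)

for n in range(3, 10):
    closed = (5 / 6) ** (n // 3) * factorial(n)
    print(n, det_rank_bound(n), closed)
```

For example, `det_rank_bound(3)` gives 5 and `det_rank_bound(6)` gives \(5\cdot \binom{6}{3}\cdot 5=500\), matching \((5/6)^2\cdot 6!\), and both stay well below \(n!\) for \(n\ge 3\).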
Remark 8.1
Homogeneous polynomials can be thought of as symmetric tensors. For symmetric tensors, there is also a notion of rank, the so-called symmetric rank. The symmetric rank is different from, but closely related to, the tensor rank. The determinant and permanent can be thought of as homogeneous polynomials. Lower bounds for the symmetric tensor rank of the determinant and permanent can be found in [23] and [31].
Derksen, H. On the Nuclear Norm and the Singular Value Decomposition of Tensors. Found Comput Math 16, 779–811 (2016). https://doi.org/10.1007/s10208-015-9264-x