On the Nuclear Norm and the Singular Value Decomposition of Tensors

Foundations of Computational Mathematics

Abstract

Finding the rank of a tensor is a problem that has many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also conduct a systematic study of various measures of orthogonality in tensor product spaces, and we give a new generalization of the singular value decomposition to higher-order tensors.


References

  1. C. D. Aliprantis and K. C. Border, Infinite Dimensional Analysis – A Hitchhiker’s Guide, third edition, Springer, 2006.

  2. D. Bini, M. Capovani, G. Lotti and F. Romani, \(O(n^{2.7799})\) complexity for matrix multiplication, Inf. Proc. Letters 8 (1979), 234–235.

  3. M. Bläser, Beyond the Alder-Strassen bound, Theor. Comp. Science 331 (2005), 3–21.

  4. P. Bürgisser, M. Clausen and M. A. Shokrollahi, Algebraic Complexity Theory, Grundlehren der Mathematischen Wissenschaften 315, Springer-Verlag, Berlin, 1997.

  5. E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics 9 (2009), 717–772.

  6. E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. on Information Theory 56 (2010), no. 5, 2053–2080.

  7. E. Carlen, E. H. Lieb and M. Loss, An inequality of Hadamard type for permanents, Methods Appl. Anal. 13 (2006), no. 1, 1–17.

  8. J. D. Carroll and J. Chang, Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart-Young” decomposition, Psychometrika 35 (1970), 283–319.

  9. D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, J. Symbolic Comput. 9 (1990), no. 3, 251–280.

  10. V. De Silva and L.-H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl. 30 (2008), 1254–1279.

  11. S. Gandy, B. Recht and I. Yamada, Tensor completion and low n-rank tensor recovery via convex optimization, Inverse Problems 27 (2011), no. 2.

  12. D. Glynn, The permanent of a square matrix, European J. Combin. 31 (2010), no. 7, 1887–1891.

  13. A. Grothendieck, Produits tensoriels topologiques et espaces nucléaires, Mem. Amer. Math. Soc. 1955 (1955), no. 16.

  14. R. A. Harshman, Foundations of the PARAFAC procedure: models and conditions for an “explanatory” multimodal factor analysis, UCLA working papers in Phonetics 16 (1970), 1–84.

  15. J. Håstad, Tensor rank is NP-complete, J. Algorithms 11 (1990), no. 4, 644–654.

  16. J. Håstad, Tensor rank is NP-complete, Automata, languages and programming (Stresa, 1989), Lecture Notes in Comput. Sci. 372, Springer, Berlin, 1989, 451–460.

  17. F. L. Hitchcock, Multiple invariants and generalized rank of a p-way matrix or tensor, J. Math. and Phys. 7 (1927), no. 1, 40–79.

  18. C. J. Hillar and L.-H. Lim, Most tensor problems are NP-hard, Journal of the ACM 60 (2013), no. 6, Art. 45.

  19. R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Trans. on Information Theory 56 (2010), 2980–2998.

  20. J. B. Kruskal, Three-way arrays: rank and uniqueness for trilinear decompositions, with application to arithmetic complexity and statistics, Linear Algebra Appl. 18 (2) (1977), 95–138.

  21. J. M. Landsberg, Tensors: Geometry and Applications, Graduate Studies in Mathematics 128, American Mathematical Society, Providence, RI, 2012.

  22. J. M. Landsberg, New lower bounds for the rank of matrix multiplication, SIAM J. Comput. 43 (2014), no. 1, 144–149.

  23. J. M. Landsberg and Z. Teitler, On the ranks and border ranks of symmetric tensors, Foundations of Computational Mathematics 10 (2010), no. 3, 339–366.

  24. L. de Lathauwer, B. de Moor and J. Vandewalle, A multilinear singular value decomposition, SIAM J. Matrix Anal. Appl. 21 (2000), no. 4, 1253–1278.

  25. F. Le Gall, Powers of tensors and fast matrix multiplication, Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), 2014, 296–303.

  26. L.-H. Lim and P. Comon, Multiarray signal processing: tensor decomposition meets compressed sensing, Comptes Rendus Mecanique 338 (2010), 311–320.

  27. L. H. Lim and P. Comon, Blind multilinear identification, IEEE Trans. Inf. Theory 60 (2014), 1260–1280.

  28. A. Massarenti and E. Raviolo, The rank of \(n\times n\) matrix multiplication is at least \(3n^2-2\sqrt{2}n^{3/2}-3n\), Linear Algebra and its Applications 438 (2013), 4500–4509.

  29. B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review 52 (2010), no. 3, 471–501.

  30. R. Schatten, A Theory of Cross-Spaces, Princeton University Press, Princeton, NJ, 1950.

  31. M. Shafiei, Apolarity for determinants and permanents of generic matrices, preprint, arXiv:1212.0515.

  32. V. Strassen, Gaussian elimination is not optimal, Numer. Math. 13 (1969), 354–356.

  33. N. D. Sidiropoulos and R. Bro, On the uniqueness of multilinear decomposition of N-way arrays, J. Chemometrics 14 (2000), no. 3, 229–239.

  35. A. J. Stothers, On the Complexity of Matrix Multiplication, Ph.D. Thesis, Univ. Edinburgh, 2010.

  36. V. V. Williams, Multiplying matrices faster than Coppersmith–Winograd, Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC 2012), ACM, 2012, 887–898.

Acknowledgments

The author thanks Zach Teitler for useful comments and a correction.

Correspondence to Harm Derksen.

Additional information

Communicated by Peter Bürgisser.

The author was partially supported by NSF Grants DMS 0901298 and DMS 1302032.

Appendix: The Tensor Rank of the Determinant and the Permanent

For a subset \(I=\{i_1,i_2,\ldots ,i_r\}\subseteq \{1,2,\ldots ,n\}\) with \(i_1<\cdots <i_r\), define

$$\begin{aligned} {\textstyle \det _r}(I)= & {} \sum _{\sigma \in S_r}{\text {sgn}}(\sigma ) e_{i_{\sigma (1)}}\otimes \cdots \otimes e_{i_{\sigma (r)}},\\ {\text {sgn}}(I)= & {} (-1)^{\textstyle i_1+\cdots +i_r-{r+1\atopwithdelims ()2}}, \end{aligned}$$

and

$$\begin{aligned} I^c=\{1,2,\ldots ,n\}\setminus I. \end{aligned}$$

We have the following generalized Laplace expansion

$$\begin{aligned} {\textstyle \det _n}=\sum _{I}{\text {sgn}}(I)\textstyle \det _r(I)\otimes \det _{n-r}(I^c), \end{aligned}$$

where \(I\) runs over all \({n\atopwithdelims ()r}\) subsets of \(\{1,2,\ldots ,n\}\) with cardinality \(r\).
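This expansion is easy to check numerically for small cases. The sketch below is a verification of ours (helper names such as `sgn` and `det_tensor` are not from the paper); it confirms the identity for \(n=4\), \(r=2\):

```python
import itertools
import numpy as np

def sgn(p):
    """Sign of a permutation, via its inversion count."""
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def det_tensor(idx, n):
    """det_r(I) for the 1-based index tuple idx, as an order-r tensor in (C^n)^{tensor r}."""
    r = len(idx)
    T = np.zeros((n,) * r)
    for s in itertools.permutations(range(r)):
        T[tuple(idx[s[t]] - 1 for t in range(r))] += sgn(s)
    return T

n, r = 4, 2
full = det_tensor(tuple(range(1, n + 1)), n)  # det_n itself
laplace = np.zeros((n,) * n)
for I in itertools.combinations(range(1, n + 1), r):
    Ic = tuple(i for i in range(1, n + 1) if i not in I)
    sign_I = (-1) ** (sum(I) - r * (r + 1) // 2)  # sgn(I) as defined above
    laplace += sign_I * np.multiply.outer(det_tensor(I, n), det_tensor(Ic, n))
assert np.allclose(full, laplace)  # the generalized Laplace expansion holds
```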

By flattening, we may view the tensor \(\det _n\) as a tensor in \({\mathbb {C}}^{n^r}\otimes {\mathbb {C}}^{n^{n-r}}\). The tensors \(\det _r(I)\), where \(I\) is a subset of \(\{1,2,\ldots ,n\}\) with \(r\) elements, are linearly independent. The tensors \(\det _{n-r}(I^c)\) are linearly independent as well. This shows that the flattened tensor has rank at least \({n\atopwithdelims ()r}\). So, we have

$$\begin{aligned} {\text {rank}}({\textstyle \det _n})\ge {n\atopwithdelims ()r}. \end{aligned}$$

We get the best lower bound if \(r=\lfloor n/2\rfloor \):

$$\begin{aligned} {\text {rank}}({\textstyle \det _n})\ge {n\atopwithdelims ()\lfloor n/2\rfloor }. \end{aligned}$$
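The flattening argument can also be confirmed computationally for small \(n\): build \(\det _n\) as an \(n\times \cdots \times n\) array, reshape it to a matrix, and take the matrix rank. A sketch of ours (function names are not from the paper):

```python
import itertools
from math import comb
import numpy as np

def sgn(p):
    """Sign of a permutation, via its inversion count."""
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def flattening_rank(n, r):
    """Rank of det_n flattened to an n^r x n^(n-r) matrix."""
    T = np.zeros((n,) * n)
    for p in itertools.permutations(range(n)):
        T[p] = sgn(p)
    return np.linalg.matrix_rank(T.reshape(n ** r, n ** (n - r)))

for n in (3, 4):
    r = n // 2
    assert flattening_rank(n, r) >= comb(n, r)  # matches the lower bound above
```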

We have a similar Laplace expansion for the permanent, so we also get

$$\begin{aligned} {\text {rank}}({\textstyle {\text {per}}_n})\ge {n\atopwithdelims ()\lfloor n/2\rfloor }. \end{aligned}$$

So the ranks of the determinant and permanent grow at least exponentially. For the permanent, there is also an exponential upper bound: Glynn's formula [12] expresses \({\text {per}}_n\) as a sum of \(2^{n-1}\) products of linear forms, which yields \({\text {rank}}({\textstyle {\text {per}}_n})\le 2^{n-1}\). An exponential upper bound for the rank of the determinant seems not to be known. However, the obvious bound \({\text {rank}}(\det _n)\le n!\) is not sharp for \(n\ge 3\).

For \(n=3\), we have

$$\begin{aligned} {\textstyle \det _3}= & {} {\textstyle \frac{1}{2}}\Big ((e_3+e_2)\otimes (e_1-e_2)\otimes (e_1+e_2)+(e_1+e_2)\otimes (e_2-e_3)\otimes (e_2+e_3)\\&+ 2e_2\otimes (e_3-e_1)\otimes (e_3+e_1)\\&+(e_3-e_2)\otimes (e_2+e_1)\otimes (e_2-e_1)+(e_1-e_2)\otimes (e_3+e_2)\otimes (e_3-e_2)\Big ). \end{aligned}$$
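The displayed decomposition can be verified directly; the following check of ours (with `e1, e2, e3` the standard basis vectors, 0-indexed in code) compares the five-term sum with the tensor \(\det _3\):

```python
import itertools
import numpy as np

e1, e2, e3 = np.eye(3)  # standard basis vectors of C^3 (real coordinates suffice here)

def outer3(u, v, w):
    """Rank-one tensor u tensor v tensor w."""
    return np.einsum('i,j,k->ijk', u, v, w)

# det_3 as an order-3 tensor: entry (i,j,k) is the sign of the permutation (i,j,k)
det3 = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    det3[p] = (-1) ** sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))

five_terms = 0.5 * (
    outer3(e3 + e2, e1 - e2, e1 + e2)
    + outer3(e1 + e2, e2 - e3, e2 + e3)
    + 2 * outer3(e2, e3 - e1, e3 + e1)
    + outer3(e3 - e2, e2 + e1, e2 - e1)
    + outer3(e1 - e2, e3 + e2, e3 - e2)
)
assert np.allclose(det3, five_terms)  # five rank-one terms reproduce det_3
```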

So \({\text {rank}}(\det _3)\le 5\). Zach Teitler pointed out that this implies that the Waring rank of the \(3\times 3\) determinant is at most 20. He also pointed out that one can show that \({\text {rank}}(\det _3)\ge 4\). If \(n>3\), then we can again use the generalized Laplace expansion

$$\begin{aligned} {\textstyle \det _n}=\sum _{I}{\text {sgn}}(I)\textstyle \det _3(I)\otimes \det _{n-3}(I^c), \end{aligned}$$

where \(I\) runs over all subsets of \(\{1,2,\ldots ,n\}\) with three elements. This proves that

$$\begin{aligned} {\text {rank}}({\textstyle \det _n})\le {n\atopwithdelims ()3}{\text {rank}}({\textstyle \det _3}){\text {rank}}({\textstyle \det _{n-3}})\le \frac{5\cdot n!}{6\cdot (n-3)!}\,{\text {rank}}({\textstyle \det _{n-3}}). \end{aligned}$$

We can rewrite this as

$$\begin{aligned} \frac{{\text {rank}}(\det _n)}{n!}\le \frac{5}{6}\cdot \frac{{\text {rank}}(\det _{n-3})}{(n-3)!}. \end{aligned}$$

By induction, we get

$$\begin{aligned} {\text {rank}}({\textstyle \det _n})\le \Big (\frac{5}{6}\Big )^{\textstyle \lfloor \frac{n}{3}\rfloor } \cdot n!. \end{aligned}$$
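Evaluating the bound exactly for small \(n\) (a calculation of ours; the function name is not from the paper) gives, for instance, \({\text {rank}}(\det _6)\le 500\), compared with the naive \(6!=720\):

```python
from fractions import Fraction
from math import factorial

def det_rank_upper_bound(n):
    """The bound (5/6)^{floor(n/3)} * n!, computed exactly with rational arithmetic."""
    return Fraction(5, 6) ** (n // 3) * factorial(n)

for n in (3, 6, 9):
    print(n, det_rank_upper_bound(n), factorial(n))
```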

Remark 8.1

Homogeneous polynomials can be thought of as symmetric tensors. For symmetric tensors, there is also a notion of rank, the so-called symmetric rank. The symmetric rank is different from, but closely related to, the tensor rank. The determinant and permanent can be thought of as homogeneous polynomials. Lower bounds for the symmetric tensor rank of the determinant and permanent can be found in [23] and [31].

About this article

Cite this article

Derksen, H. On the Nuclear Norm and the Singular Value Decomposition of Tensors. Found Comput Math 16, 779–811 (2016). https://doi.org/10.1007/s10208-015-9264-x

