Low-Rank Tensor Methods for Model Order Reduction

  • Reference work entry
Handbook of Uncertainty Quantification

Abstract

Parameter-dependent models arise in many contexts, such as uncertainty quantification, sensitivity analysis, inverse problems, and optimization. Parametric or uncertainty analyses usually require evaluating an output of a model for many instances of the input parameters, which may be intractable for complex numerical models. A possible remedy is to replace the model by an approximate model with reduced complexity (a so-called reduced-order model) that allows fast evaluation of the output variables of interest. This chapter provides an overview of low-rank methods for the approximation of functions that are identified either with order-two tensors (for vector-valued functions) or with higher-order tensors (for multivariate functions). Different approaches are presented for computing low-rank approximations, based either on samples of the function or on the equations satisfied by the function; the latter approaches include projection-based model order reduction methods. For multivariate functions, different notions of rank and the corresponding low-rank approximation formats are introduced.
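As a minimal illustration of the sample-based, order-two setting described above, the NumPy sketch below assembles a snapshot matrix from evaluations of a toy parameter-dependent function and computes its best rank-r approximation by truncated singular value decomposition; the left singular vectors then play the role of a reduced basis. The function u, the grids, and the rank r are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Toy parameter-dependent function u(x, mu), standing in for the output of an
# expensive numerical model (hypothetical example, not taken from the chapter).
def u(x, mu):
    return np.exp(-mu * x) * np.sin(np.pi * x)

# Snapshot matrix: column k holds u(., mu_k) on a spatial grid. This matrix is
# the order-two tensor identified with the vector-valued map mu -> u(., mu).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.5, 5.0, 50)
U = np.column_stack([u(x, m) for m in mus])

# Best rank-r approximation in the Frobenius norm via truncated SVD
# (Eckart-Young); the r leading left singular vectors form a reduced basis.
r = 5
W, s, Vt = np.linalg.svd(U, full_matrices=False)
U_r = W[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
print(f"rank-{r} relative error: {np.linalg.norm(U - U_r) / np.linalg.norm(U):.2e}")
```

For multivariate functions identified with higher-order tensors, analogous truncations exist for the various notions of rank mentioned above. The sketch below uses successive SVDs of matrix unfoldings to obtain a tensor-train representation of a full array; it assumes the tensor fits in memory and is only meant to convey the structure of such low-rank formats, not to be an optimized implementation.

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a full NumPy array A into tensor-train cores by sequential
    SVDs of unfoldings (a sketch of the TT-SVD idea)."""
    dims = A.shape
    cores, r_prev = [], 1
    C = A.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        W, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))       # truncated TT rank
        cores.append(W[:, :r].reshape(r_prev, dims[k], r))
        C = np.diag(s[:r]) @ Vt[:r, :]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

# A rank-one 4-way tensor: every TT rank found by the sketch should equal 1.
g = np.linspace(0.0, 1.0, 10)
cores = tt_svd(np.einsum('i,j,k,l->ijkl', np.sin(g), np.cos(g), g, g**2))
print([c.shape for c in cores])   # each core has shape (r_{k-1}, n_k, r_k)
```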

Author information

Corresponding author

Correspondence to Anthony Nouy.

Copyright information

© 2017 Springer International Publishing Switzerland

About this entry

Cite this entry

Nouy, A. (2017). Low-Rank Tensor Methods for Model Order Reduction. In: Ghanem, R., Higdon, D., Owhadi, H. (eds) Handbook of Uncertainty Quantification. Springer, Cham. https://doi.org/10.1007/978-3-319-12385-1_21
