Abstract
The paper studies approaches to the numerical solution of huge-scale quasiseparable optimization problems. The main idea is to use gradient methods with a simple iteration structure instead of the more sophisticated techniques that are widely used for solving traditional, small-sized problems. Results of numerical experiments on a number of test quasiseparable optimization problems with dimensions up to 10^10 variables are presented.
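As a hedged illustration of the idea (not the authors' implementation), the sketch below runs a plain gradient-descent iteration on a toy quasiseparable objective, f(x) = Σ (x_i − 1)² + ½ Σ (x_{i+1} − x_i)². Because each term couples only neighboring variables, one iteration costs O(n) time and memory, which is the structural property that makes such simple methods viable at huge scale. The test function and step size are illustrative assumptions.

```python
# Hedged sketch: gradient descent on a quasiseparable test objective
#   f(x) = sum_i (x_i - 1)^2 + 0.5 * sum_i (x_{i+1} - x_i)^2.
# Each gradient component depends only on neighboring variables, so the
# per-iteration cost is O(n). Function, step size, and iteration count
# are illustrative, not taken from the paper.

def grad(x):
    n = len(x)
    # separable part: d/dx_i of (x_i - 1)^2
    g = [2.0 * (x[i] - 1.0) for i in range(n)]
    # coupling part: d/dx of 0.5 * (x[i+1] - x[i])^2
    for i in range(n - 1):
        d = x[i + 1] - x[i]
        g[i] -= d
        g[i + 1] += d
    return g

def gradient_descent(x, step=0.1, iters=500):
    # fixed-step gradient method with the simplest possible iteration
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

x = gradient_descent([0.0] * 10)
# the test function is minimized at x_i = 1 for all i
```

The fixed step 0.1 converges here because the Hessian eigenvalues of this test function lie in [2, 6); in practice one would use an adaptive rule such as the Barzilai–Borwein two-point step size.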
Submitted by L. N. Shchur
Cite this article
Andrianov, A.N., Anikin, A.S., Bychkov, I.V. et al. Numerical solution of huge-scale quasiseparable optimization problems. Lobachevskii J Math 38, 870–873 (2017). https://doi.org/10.1134/S1995080217050031