Abstract
In this paper, we introduce two primal-dual active-set methods for solving large-scale constrained optimization problems. The first method minimizes a sequence of primal-dual augmented Lagrangian functions subject to bounds on the primal variables and artificial bounds on the dual variables. Its basic structure is similar to that of the well-known optimization package Lancelot (Conn et al. in SIAM J Numer Anal 28:545–572, 1991), which uses the traditional primal augmented Lagrangian function. Like Lancelot, our algorithm may use gradient-projection methods enhanced by subspace acceleration techniques to solve each subproblem and may therefore be implemented matrix-free. The artificial bounds on the dual variables are a unique feature of our method and serve as a form of dual regularization. Our second algorithm is a two-phase method. The first phase computes iterates using our primal-dual augmented Lagrangian algorithm, which benefits from cheap gradient projections and matrix-free linear CG calculations. The final iterate produced during this phase is then used as input for phase two, a stabilized sequential quadratic programming (SQP) method (Gill and Robinson in SIAM J Optim 23:1983–2010, 2013). An important benefit of the transition to a stabilized SQP algorithm is superlinear local convergence under weak assumptions. Interestingly, under certain assumptions the bound-constrained subproblem used in phase one is equivalent to the stabilized subproblem used in phase two. This fact makes our pairing of algorithms a natural one.
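The phase-one subproblems described in the abstract are bound-constrained minimizations attacked by gradient projection. As an illustrative sketch only (not the paper's algorithm, which adds subspace acceleration and matrix-free CG refinement), a basic projected-gradient loop on a bound-constrained quadratic might look like this; the function and parameter names are hypothetical:

```python
import numpy as np

def projected_gradient(hess_vec, g, l, u, x0, step=0.1, iters=500):
    """Minimize 0.5*x'Hx + g'x subject to l <= x <= u by projected gradient.

    hess_vec is a callable v -> H v, so the Hessian is accessed matrix-free,
    in the spirit of the subproblem solvers discussed in the abstract.
    """
    x = np.clip(x0, l, u)
    for _ in range(iters):
        grad = hess_vec(x) + g
        x_new = np.clip(x - step * grad, l, u)  # project the step onto the bounds
        if np.linalg.norm(x_new - x) < 1e-10:
            break
        x = x_new
    return x

# Tiny example: H = 2I, g = (-2, -6), bounds [0, 2] x [0, 2].
# The unconstrained minimizer is (1, 3); projection pins the second
# coordinate to its upper bound, so the solution is (1, 2).
H = np.diag([2.0, 2.0])
x = projected_gradient(lambda v: H @ v, np.array([-2.0, -6.0]),
                       np.zeros(2), np.full(2, 2.0), np.zeros(2))
```

The matrix-free interface (`hess_vec` as a callable) is what allows such solvers to scale to large problems, since only Hessian-vector products are ever required.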
References
Shenoy, A., Heinkenschloss, M., Cliff, E.: Airfoil design by an all-at-once method. Int. J. Comput. Fluid Dyn. 11(1–2), 3–25 (1998)
Bouchouev, I., Isakov, V.: Uniqueness, stability and numerical methods for the inverse problem that arises in financial markets. Inverse Probl. 15, R95 (1999)
Feng, L., Linetsky, V., Morales, J.L., Nocedal, J.: On the solution of complementarity problems arising in American options pricing. Optim. Methods Softw. 26(4–5), 813–825 (2011). doi:10.1080/10556788.2010.514341
Robinson, D.P., Feng, L., Nocedal, J.M., Pang, J.-S.: Subspace accelerated matrix splitting algorithms for asymmetric and symmetric linear complementarity problems. SIAM J. Optim. 23(3), 1371–1397 (2013)
Klibanov, M., Lucas, T.: Numerical solution of a parabolic inverse problem in optical tomography using experimental data. SIAM J. Appl. Math. 59, 1763–1789 (1999)
Arridge, S.: Optical tomography in medical imaging. Inverse Probl. 15, R41 (1999)
Akcelik, V., Biros, G., Ghattas, O.: Parallel multiscale Gauss–Newton–Krylov methods for inverse wave propagation. In: Supercomputing, ACM/IEEE 2002 Conference (IEEE, 2002), pp. 41–41
Laird, C., Biegler, L., van Bloemen Waanders, B., Bartlett, R.: Contamination source determination for water networks. J. Water Resour. Plan. Manag. 131(2), 125–134 (2005)
Rao, C.V., Wright, S.J., Rawlings, J.B.: Application of interior-point methods to model predictive control. J. Optim. Theory Appl. 99(3), 723–757 (1998)
Shahzad, A., Kerrigan, E.C., Constantinides, G.A.: A warm-start interior-point method for predictive control. In: IET Conference Proceedings pp. 949–954(5) (2010). http://digital-library.theiet.org/content/conferences/10.1049/ic.2010.0409
Engau, A., Anjos, M.F., Vannelli, A.: Operations Research and Cyber-Infrastructure. Springer, New York (2009)
Engau, A., Anjos, M.F., Vannelli, A.: On interior-point warmstarts for linear and combinatorial optimization. SIAM J. Optim. 20(4), 1828–1861 (2010)
Birgin, E.G., Castillo, R., Martínez, J.M.: Numerical comparison of Augmented Lagrangian algorithms for nonconvex problems. Comput. Optim. Appl. 31(1), 31–55 (2005)
Kočvara, M., Stingl, M.: PENNON: a generalized augmented Lagrangian method for semidefinite programming. In: High Performance Algorithms and Software for Nonlinear Optimization (Erice, 2001), Applied Optimization, vol. 82, pp. 303–321. Kluwer, Norwell, MA (2003). doi:10.1007/978-1-4613-0241-4_14
Moré, J.J., Toraldo, G.: Algorithms for bound constrained quadratic programming problems. Technical memorandum no. 117, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL (1988)
Conn, A.R., Gould, N.I.M., Toint, P.L.: LANCELOT: a Fortran package for large-scale nonlinear optimization (Release A). In: Lecture Notes in Computational Mathematics 17. Springer-Verlag, Berlin, Heidelberg, New York, London, Paris and Tokyo (1992)
Boggs, P.T., Tolle, J.W.: Sequential quadratic programming. Acta Numer. 4, 1–51 (1995)
Gill, P.E., Murray, W., Saunders, M.A.: SNOPT: an SQP algorithm for large-scale constrained optimization. SIAM J. Optim. 12(4), 979–1006 (2002)
Gould, N.I.M., Robinson, D.P.: A second derivative SQP method: global convergence. SIAM J. Optim. 20(4), 2023–2048 (2010)
Gould, N.I.M., Robinson, D.P.: A second derivative SQP method: local convergence and practical issues. SIAM J. Optim. 20(4), 2049–2079 (2010)
Gould, N.I.M., Robinson, D.P.: A second derivative SQP method with a “trust-region-free” predictor step. IMA J. Numer. Anal. 32(2), 580–601 (2012)
Morales, J.L., Nocedal, J., Wu, Y.: A sequential quadratic programming algorithm with an additional equality constrained phase. IMA J. Numer. Anal. 32, 553–579 (2012)
Curtis, F.E., Johnson, T.C., Robinson, D.P., Wächter, A.: An inexact sequential quadratic optimization algorithm for nonlinear optimization. SIAM J. Optim. 24(3), 1041–1074 (2014)
Murray, W., Prieto, F.J.: A sequential quadratic programming algorithm using an incomplete solution of the subproblem. SIAM J. Optim. 5, 590–640 (1995)
Griva, I., Polyak, R.A.: Primal-dual nonlinear rescaling method with dynamic scaling parameter update. Math. Program. 106(2), 237–259 (2006)
Gill, P.E., Robinson, D.P.: A primal-dual augmented Lagrangian. Comput. Optim. Appl. 51, 1–25 (2012). doi:10.1007/s10589-010-9339-1
Gill, P.E., Robinson, D.P.: A globally convergent stabilized SQP method. SIAM J. Optim. 23(4), 1983–2010 (2013)
Gill, P.E., Kungurtsev, V., Robinson, D.P.: A stabilized SQP method: global convergence. In: Center for Computational Mathematics Report CCoM 13-04. University of California, San Diego (2013)
Gill, P.E., Kungurtsev, V., Robinson, D.P.: A stabilized SQP method: superlinear convergence. Center for Computational Mathematics Report CCoM 14-01. University of California, San Diego (2014)
Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)
Kočvara, M., Zowe, J.: An iterative two-step algorithm for linear complementarity problems. Numer. Math. 68(1), 95–106 (1994). doi:10.1007/s002110050050
Wright, S.J., Nowak, R.D., Figueiredo, M.A.: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479–2493 (2009)
Robinson, D.P.: Primal-dual methods for nonlinear optimization. Ph.D. thesis, Department of Mathematics, University of California San Diego, La Jolla, CA (2007)
Fiacco, A.V., McCormick, G.P.: Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, New York (1968)
Forsgren, A., Gill, P.E.: Primal-dual interior methods for nonconvex nonlinear programming. SIAM J. Optim. 8, 1132–1152 (1998)
Conn, A.R., Gould, N.I.M., Toint, P.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28, 545–572 (1991)
Bertsekas, D.P.: Constrained optimization and Lagrange multiplier methods. Computer Science and Applied Mathematics. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York (1982)
Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2000)
Saunders, M.A.: Cholesky-based methods for sparse least squares: the benefits of regularization. In: Adams, L., Nazareth, J.L. (eds) Linear and Nonlinear Conjugate Gradient-Related Methods, (SIAM Publications, Philadelphia, 1996), pp. 92–100. Proceedings of AMS-IMS-SIAM Joint Summer Research Conference, University of Washington, Seattle, WA (July 9–13, 1995)
Wright, S.J.: Superlinear convergence of a stabilized SQP method to a degenerate solution. Comput. Optim. Appl. 11(3), 253–275 (1998)
Hager, W.W.: Stabilized sequential quadratic programming. Comput. Optim. Appl. 12(1–3), 253–273 (1999). Computational optimization—a tribute to Olvi Mangasarian, Part I
Wright, S.J.: Modifying SQP for degenerate problems. SIAM J. Optim. 13(2), 470–497 (2002)
Oberlin, C., Wright, S.J.: Active set identification in nonlinear programming. SIAM J. Optim. 17(2), 577–605 (2006)
Fernández, D., Solodov, M.: Stabilized sequential quadratic programming for optimization and a stabilized Newton-type method for variational problems. Math. Program. Ser. A 125, 47–73 (2010). doi:10.1007/s10107-008-0255-4
Izmailov, A.F., Solodov, M.V.: On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-Newton SQP methods to critical multipliers. Math. Program. 126(2, Ser. A), 231–257 (2011). doi:10.1007/s10107-009-0279-4
MathWorks Inc.: Matlab User's Guide. Natick, Massachusetts (1992)
Curtis, F.E., Jiang, H., Robinson, D.P.: An adaptive augmented Lagrangian method for large-scale constrained optimization. Math. Program. pp. 1–45 (2013). doi:10.1007/s10107-014-0784-y
Baumrucker, B.T., Renfro, J.G., Biegler, L.T.: MPEC problem formulations and solution strategies with chemical engineering applications. Comput. Chem. Eng. 32(12), 2903–2913 (2008)
Dirkse, S.: MPECLib: a collection of mathematical programs with equilibrium constraints. http://www.gamsworld.eu/mpec/mpeclib.htm (2004)
Gould, N.I.M., Orban, D., Toint, P.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. (2014). doi:10.1007/s10589-014-9687-3
Wolfe, P.: A technique for resolving degeneracy in linear programming. SIAM J. Appl. Math. 11, 205–211 (1963)
Acknowledgments
The author is grateful to the two referees whose comments and suggestions helped to significantly improve the paper. He is also grateful to Sven Leyffer of Argonne National Laboratory for supplying the files used to solve the MPECs considered in Sect. 4.
Communicated by Stefan Ulbrich.
This author was supported by US National Science Foundation Grant DMS-1217153.
Appendix
A detailed summary of the runs for algorithm ALTR, algorithm PDSQP, and our two-phase Algorithm 2 on the MPECs considered in Sect. 4 may be found in Table 1. Specifically, we indicate the name (Name) of the problem along with the number of variables (\(n\)) and the number of equality constraints (\(m\)) for the problem once it has been transformed to standard form. For algorithm ALTR, we also give the termination flag (Flag) and the number of major iterations (Iter.), which is equivalent to the number of bound-constrained QPs solved. For algorithm PDSQP, we give the termination flag (Flag) and the number of QPs solved (QPs), which is equivalent to the number of iterations. Finally, for our two-phase algorithm, we give the termination flag (Flag), the number of AL iterations (Iter.) performed in phase 1, and the number of QPs solved (QPs) by algorithm PDSQP in phase 2. The flags indicate whether an optimal solution was found (Opt.) or the maximum number of allowed iterations was reached (Max). The integers in parentheses after the number of QPs solved indicate the total number of QP iterations (minor iterations) performed by the active-set solver.
Robinson, D.P. Primal-Dual Active-Set Methods for Large-Scale Optimization. J Optim Theory Appl 166, 137–171 (2015). https://doi.org/10.1007/s10957-015-0708-x