Recursive quadratic programming methods based on the augmented Lagrangian

  • Chapter
Computation Mathematical Programming

Part of the book series: Mathematical Programming Studies (volume 31)

Abstract

This paper describes a method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. The method can be viewed as a development of the algorithm REQP, which is related to the classical exterior point penalty function, and it is argued that the new technique will have certain computational advantages arising from the fact that it need not involve a sequence of penalty parameters tending to zero. An algorithm with global convergence for equality constrained problems is presented. Computational results are also given for this algorithm, and some alternative strategies for extending it to deal with inequality constraints are briefly considered.
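
The chapter text itself is not reproduced on this page, so the subproblem behind the method is not shown above. As a minimal sketch, assuming the standard equality constrained setting used in the REQP literature (minimize f(x) subject to c(x) = 0), the augmented Lagrangian and a quadratic programming subproblem of the kind the abstract describes might be written as follows; the sign conventions, the constraint right-hand side and the handling of the multiplier estimate λ are illustrative assumptions, not taken from this chapter.

```latex
% Hypothetical sketch only; the chapter's own formulation is not reproduced here.
% Equality constrained problem:  minimize f(x)  subject to  c(x) = 0,  with  c : R^n -> R^m.
% Augmented Lagrangian with multiplier estimate \lambda and penalty parameter r > 0:
\Phi(x,\lambda,r) \;=\; f(x) \;-\; \lambda^{\mathsf{T}} c(x) \;+\; \frac{1}{2r}\, c(x)^{\mathsf{T}} c(x)

% A quadratic programming subproblem for the search direction p, where B approximates the
% Hessian of the Lagrangian and A = \nabla c(x)^{\mathsf{T}} is the constraint Jacobian:
\min_{p}\; \tfrac{1}{2}\, p^{\mathsf{T}} B\, p \;+\; \nabla f(x)^{\mathsf{T}} p
\qquad \text{subject to} \qquad A\,p \;=\; -\,c(x) \;+\; r\,\lambda
```

Under these assumed conventions, the classical REQP/exterior penalty viewpoint corresponds to driving r to zero, whereas the advantage claimed in the abstract is that the augmented Lagrangian structure allows r to stay bounded away from zero.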

References

  • M.C. Bartholomew-Biggs, “Line search procedures for nonlinear programming algorithms using quadratic programming subproblems,” Technical Report 116, The Numerical Optimisation Centre, Hatfield Polytechnic (Hatfield, 1981).

  • M.C. Bartholomew-Biggs, “Recursive quadratic programming methods for nonlinear constraints,” in: M.J.D. Powell, ed., Nonlinear Optimization 1981 (Academic Press, London, 1982) pp. 213–221.

  • M.C. Bartholomew-Biggs, “ALRQP for inequality constraints,” Technical Report 150, The Numerical Optimisation Centre, Hatfield Polytechnic (Hatfield, 1984).

  • M.C. Bartholomew-Biggs, “The development of recursive quadratic programming methods based on the augmented Lagrangian,” Technical Report 160, The Numerical Optimisation Centre, Hatfield Polytechnic (Hatfield, 1985).

  • M.C. Bartholomew-Biggs, “Matrix updating in nonlinear programming calculations,” Technical Report 156, The Numerical Optimisation Centre, Hatfield Polytechnic (Hatfield, 1985).

  • M.C. Bartholomew-Biggs, “A globally convergent version of REQP for constrained minimization,” Technical Report 168, Hatfield Polytechnic (Hatfield, 1986).

  • M.C. Biggs, “Constrained minimization using recursive equality quadratic programming,” in: F.A. Lootsma, ed., Numerical Methods for Nonlinear Optimization (Academic Press, London, 1972) pp. 411–428.

  • M.C. Biggs, “Constrained minimization using recursive quadratic programming: some alternative subproblem formulations,” in: L.C.W. Dixon and G.P. Szegö, eds., Towards Global Optimisation (North-Holland, Amsterdam, 1974) pp. 341–349.

  • M.C. Biggs, “On the convergence of some constrained minimization algorithms based on quadratic programming,” Journal of the Institute of Mathematics and Its Applications 21 (1978) 67–81.

  • C.G. Broyden, “The convergence of a class of double rank minimization algorithms,” Journal of the Institute of Mathematics and Its Applications 6 (1970) 76–90.

  • I.D. Coope and R. Fletcher, “Some numerical experience with a globally convergent algorithm for nonlinearly constrained optimization,” Report NA/30, The University of Dundee Mathematics Department (Dundee, 1979).

  • R.M. Chamberlain, M.J.D. Powell, C. Lemarechal and H.C. Pedersen, “The watchdog technique for forcing convergence in algorithms for constrained optimization,” Mathematical Programming Study 16 (1982) 1–17.

  • R. Fletcher, “An ideal penalty function for constrained optimization,” in: O.L. Mangasarian, R.R. Meyer and S.M. Robinson, eds., Nonlinear Programming 2 (Academic Press, New York, 1975) pp. 121–163.

  • S.P. Han, “A globally convergent method for nonlinear programming,” Journal of Optimization Theory and Applications 22 (1977) 297–309.

  • W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes (Springer-Verlag, Berlin, 1981).

  • N. Maratos, “Exact penalty function algorithms for finite dimensional and control optimization problems,” Ph.D. Thesis, London University (London, 1978).

  • W. Murray, “An algorithm for constrained minimization,” in: R. Fletcher, ed., Optimization (Academic Press, London, 1969) pp. 247–258.

  • W. Murray and M. Wright, “Computation of the search direction in constrained optimization algorithms,” Mathematical Programming Study 16 (1982) 62–83.

  • J.F.A. de O. Pantoja, “Algorithms for constrained optimization problems,” Ph.D. Thesis, London University (London, 1984).

  • M.J.D. Powell, “A fast algorithm for nonlinearly constrained optimization calculations,” in: G.A. Watson, ed., Numerical Analysis, Dundee 1977, Lecture Notes in Mathematics 630 (Springer-Verlag, Berlin, 1978) pp. 144–157.

  • M.J.D. Powell, ed., Nonlinear Optimization 1981 (Academic Press, London, 1982).

  • M.J.D. Powell, “The performance of two subroutines for constrained optimization on some difficult test problems,” in: P.T. Boggs, R.H. Byrd and R.B. Schnabel, eds., Numerical Optimization 1984 (SIAM, Philadelphia, 1985) pp. 160–177.

  • M.J.D. Powell and Y. Yuan, “A recursive quadratic programming algorithm that uses differentiable exact penalty functions,” Mathematical Programming 35 (1986) 265–278.

  • K. Schittkowski, “A numerical comparison of 13 nonlinear programming codes with randomly generated test problems,” in: L.C.W. Dixon and G.P. Szegö, eds., Numerical Optimization of Dynamic Systems (North-Holland, Amsterdam, 1980) pp. 213–234.

  • K. Schittkowski, “The nonlinear programming method of Wilson, Han and Powell with an augmented Lagrangian type line search function,” Numerische Mathematik 38 (1981) 83–114.

  • R. Tapia, “On secant updates for use in general constrained optimization,” Technical Report 84-3, Rice University Department of Mathematical Sciences (Houston, 1984).

  • P. Wolfe, “Convergence conditions for ascent methods,” SIAM Review 11 (1969) 226–235.

  • P. Wolfe, “Convergence conditions for ascent methods II,” SIAM Review 13 (1971) 185–188.

Editor information

K. L. Hoffman, R. H. F. Jackson, J. Telgen

Copyright information

© 1987 The Mathematical Programming Society, Inc.

About this chapter

Cite this chapter

Bartholomew-Biggs, M.C. (1987). Recursive quadratic programming methods based on the augmented Lagrangian. In: Hoffman, K.L., Jackson, R.H.F., Telgen, J. (eds) Computation Mathematical Programming. Mathematical Programming Studies, vol 31. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0121177

  • DOI: https://doi.org/10.1007/BFb0121177

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-00932-7

  • Online ISBN: 978-3-642-00933-4
