Abstract
This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a function or expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with the constraint on decision variables. We show that this algorithm exhibits the optimal \({{{\mathcal {O}}}}(1/\epsilon ^2)\) rate of convergence, in terms of both optimality gap and constraint violation, when the objective and constraint functions are generally convex, where \(\epsilon\) denotes the optimality gap and infeasibility. Moreover, we show that this rate of convergence can be improved to \({{{\mathcal {O}}}}(1/\epsilon )\) if the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation when the constraint is defined over problem parameters, and show that it exhibits a similar optimal rate of convergence to CSA. It is worth noting that CSA and CSPA are primal methods which require neither iterations in the dual space nor estimation of the size of the dual variables. To the best of our knowledge, this is the first time that such optimal SA methods for solving function or expectation constrained stochastic optimization have been presented in the literature.
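To make the cooperative idea concrete, the following is a minimal one-dimensional sketch of a switching stochastic subgradient scheme in the spirit of CSA: at each iteration the method takes a step along a subgradient of the constraint when the constraint appears violated, and along a stochastic subgradient of the objective otherwise, then projects back onto the feasible box; the output averages the "objective" iterates. The function name `csa_sketch`, the problem instance, and all stepsize/tolerance choices here are illustrative assumptions, not the paper's exact method, which uses prox-mappings and carefully tuned stepsize and tolerance schedules to attain the stated rates.

```python
import random

def csa_sketch(f_grad, g, g_grad, x0, lo, hi, steps, eta, gamma):
    """Illustrative switching/cooperative SA on an interval [lo, hi]:
    step along a subgradient of the constraint g when g(x) > eta,
    otherwise along a stochastic subgradient of the objective f,
    and return the average of the 'objective' iterates."""
    x = x0
    total, count = 0.0, 0
    for t in range(1, steps + 1):
        step = gamma / t ** 0.5          # O(1/sqrt(t)) diminishing stepsize
        if g(x) > eta:                   # constraint violated: decrease g
            x -= step * g_grad(x)
        else:                            # nearly feasible: decrease f
            x -= step * f_grad(x)
            total += x
            count += 1
        x = min(max(x, lo), hi)          # project back onto [lo, hi]
    return total / max(count, 1)

random.seed(0)
# Toy instance: minimize f(x) = E[(x + xi)^2], xi ~ N(0, 0.1^2),
# subject to g(x) = 1 - x <= 0 over [-5, 5]; the constrained
# minimizer is x* = 1 (the unconstrained one, x = 0, is infeasible).
f_grad = lambda x: 2.0 * (x + random.gauss(0.0, 0.1))  # stochastic gradient
g = lambda x: 1.0 - x
g_grad = lambda x: -1.0
x_bar = csa_sketch(f_grad, g, g_grad, x0=-3.0, lo=-5.0, hi=5.0,
                   steps=20000, eta=0.01, gamma=0.5)
print(x_bar)
```

On this instance the averaged iterate settles near the constrained minimizer x* = 1, with the tolerance eta controlling how closely the output hugs the constraint boundary.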
Part of the results were first presented at the Annual INFORMS meeting in Oct, 2015, https://informs.emeetingsonline.com/emeetings/formbuilder/clustersessiondtl.asp?csnno=24236&mmnno=272&ppnno=91687 and summarized in a previous version entitled “Algorithms for stochastic optimization with expectation constraints” in 2016.
Guanghui Lan has been supported by NSF CMMI 1637474.
Lan, G., Zhou, Z. Algorithms for stochastic optimization with function or expectation constraints. Comput Optim Appl 76, 461–498 (2020). https://doi.org/10.1007/s10589-020-00179-x