-
Article
Optimal methods for convex nested stochastic composite optimization
Recently, convex nested stochastic composite optimization (NSCO) has received considerable interest for its applications in reinforcement learning and risk-averse optimization. However, existing NSCO algorithm...
-
Article
Open Access
Level constrained first order methods for function constrained optimization
We present a new feasible proximal gradient method for constrained optimization where both the objective and constraint functions are given by summation of a smooth, possibly nonconvex function and a convex si...
-
Article
Homotopic policy mirror descent: policy convergence, algorithmic regularization, and improved sample complexity
We study a new variant of policy gradient method, named homotopic policy mirror descent (HPMD), for solving discounted, infinite horizon MDPs with finite state and action spaces. HPMD performs a mirror descent...
-
Article
A unified single-loop alternating gradient projection algorithm for nonconvex–concave and convex–nonconcave minimax problems
Much recent research effort has been directed to the development of efficient algorithms for solving minimax problems with theoretical convergence guarantees due to the relevance of these problems to a few eme...
-
Article
Second-Order Semi-Lagrangian Exponential Time Differencing Method with Enhanced Error Estimate for the Convective Allen–Cahn Equation
The convective Allen–Cahn (CAC) equation has been widely used for simulating multiphase flows of incompressible fluids, which contains an extra convective term but still maintains the same maximum bound princi...
-
Article
Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes
We present new policy mirror descent (PMD) methods for solving reinforcement learning (RL) problems with either strongly convex or general convex regularizers. By exploring the structural properties of these o...
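With a KL (entropy) prox term, a mirror-descent policy update of this kind admits a closed form: the new policy reweights the old one by the exponentiated action values. A minimal sketch of that single update, where the toy Q-table, stepsize `eta`, and problem sizes are invented for illustration and not taken from the paper:

```python
import numpy as np

def pmd_step_kl(pi, Q, eta):
    """One mirror-descent policy update with a KL prox term:
    pi_new(a|s) is proportional to pi(a|s) * exp(eta * Q(s, a)),
    normalized per state."""
    logits = np.log(pi) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum(axis=1, keepdims=True)

# Toy example: 2 states, 3 actions, uniform initial policy,
# a fixed (made-up) Q-table in place of a real policy evaluation.
pi = np.full((2, 3), 1.0 / 3.0)
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])
for _ in range(50):
    pi = pmd_step_kl(pi, Q, eta=0.5)
# repeated updates concentrate mass on the greedy action per state
```

In a full method the Q-table would be re-estimated between updates; here it is frozen only to keep the sketch self-contained.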
-
Living Reference Work Entry
Stochastic Gradient Descent
-
Article
Stochastic first-order methods for convex and nonconvex functional constrained optimization
Functional constrained optimization is becoming more and more important in machine learning and operations research. Such problems have potential applications in risk-averse machine learning, semisupervised le...
-
Article
Correction to: Complexity of stochastic dual dynamic programming
In this paper, we point out some corrections needed in “Complexity of Stochastic Dual Dynamic Programming”, a paper accepted by Mathematical Programming (2020, online-first issue).
-
Article
Accelerated gradient sliding for structured convex optimization
Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present a...
-
Article
Complexity of stochastic dual dynamic programming
Stochastic dual dynamic programming is a cutting plane type algorithm for multi-stage stochastic optimization originated about 30 years ago. In spite of its popularity in practice, there does not exist any ana...
-
Article
Existence of a Positive Solution for a Class of Choquard Equation with Upper Critical Exponent
In this paper, we investigate the existence of a nontrivial solution for the following class of Choquard equation, where
-
Article
Dynamic stochastic approximation for multi-stage stochastic optimization
In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic...
-
Article
A Novel Arbitrary Lagrangian–Eulerian Finite Element Method for a Mixed Parabolic Problem in a Moving Domain
In this paper, a novel arbitrary Lagrangian–Eulerian (ALE) mapping, and thus a novel ALE-mixed finite element method (FEM), is developed and analyzed for a type of mixed parabolic equations in a moving domain. By ...
-
Article
Algorithms for stochastic optimization with function or expectation constraints
This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a function or expectation constraint on either decision variables or problem parameters. We first p...
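A classical deterministic template for this problem class is the switching-subgradient scheme: step on the constraint when it is violated beyond a tolerance, otherwise step on the objective. The sketch below assumes exact gradients are available (rather than stochastic estimates), and the toy problem, stepsize, and tolerance are invented for illustration; it is not the specific algorithm of the paper:

```python
import numpy as np

def switching_subgradient(f_grad, g, g_grad, x0, step, tol, iters):
    """Minimize f(x) subject to g(x) <= 0: take a descent step on g
    when the constraint is violated beyond tol, otherwise on f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        if g(x) > tol:
            x = x - step * g_grad(x)   # restore feasibility
        else:
            x = x - step * f_grad(x)   # make objective progress
    return x

# Toy: minimize (x - 3)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0;
# the constrained optimum is x = 1.
f_grad = lambda x: 2 * (x - 3)
g = lambda x: float(x[0] - 1.0)
g_grad = lambda x: np.ones_like(x)
x = switching_subgradient(f_grad, g, g_grad, np.array([5.0]), 0.01, 1e-3, 2000)
```

With a constant stepsize the iterate oscillates in a small band around the constrained optimum; diminishing stepsizes would drive it to the exact solution.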
-
Article
Communication-efficient algorithms for decentralized and stochastic optimization
We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentra...
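The generic template such methods build on alternates a consensus (communication) step with a local gradient step. A minimal sketch of that template, where the mixing matrix `W`, stepsize, and local quadratic losses are invented for illustration; this is a plain decentralized gradient iteration, not the communication-efficient algorithms of the paper:

```python
import numpy as np

# Each of 3 agents holds a local quadratic f_i(x) = 0.5 * (x - b_i)^2;
# the network-wide optimum of sum_i f_i is the mean of b.
b = np.array([1.0, 2.0, 6.0])
W = np.array([[0.50, 0.25, 0.25],   # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = np.zeros(3)                     # one scalar iterate per agent
eta = 0.05
for _ in range(500):
    # consensus step (mix with neighbors) + local gradient step
    x = W @ x - eta * (x - b)
# all agents end up near the network optimum mean(b) = 3
```

With a constant stepsize the agents converge only to a neighborhood of consensus; gradient tracking or diminishing stepsizes would remove the residual spread.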
-
Article
A Monolithic Arbitrary Lagrangian–Eulerian Finite Element Analysis for a Stokes/Parabolic Moving Interface Problem
In this paper, an arbitrary Lagrangian–Eulerian (ALE)—finite element method (FEM) is developed within the monolithic approach for a moving-interface model problem of a transient Stokes/parabolic coupling with ...
-
Chapter
Convex Optimization Theory
Many machine learning tasks can be formulated as an optimization problem given in the form of $\min_{x \in X} f(x)$, where f, x, and X denote the objective fu...
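When the feasible set X is simple enough to project onto, a basic way to attack $\min_{x \in X} f(x)$ is projected gradient descent. A self-contained sketch; the toy objective, unit-ball constraint, and stepsize are invented for illustration:

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step, iters):
    """Minimize f over X via x_{k+1} = Proj_X(x_k - step * grad(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Toy: f(x) = ||x - c||^2 over the unit ball X = {x : ||x|| <= 1};
# since c lies outside the ball, the optimum is c / ||c||.
c = np.array([3.0, 4.0])
grad = lambda x: 2 * (x - c)
def project(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

x_star = projected_gradient_descent(grad, project, np.zeros(2), 0.1, 200)
```

The same template covers boxes, simplices, and other sets with cheap projections; only `project` changes.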
-
Chapter
Stochastic Convex Optimization
In this chapter, we focus on stochastic convex optimization problems which have found wide applications in machine learning. We will first study two classic methods, i.e., stochastic mirror descent and acceler...
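With the entropy mirror map on the probability simplex, the prox step of stochastic mirror descent has the closed-form exponentiated-gradient update. A minimal sketch under that choice of mirror map; the toy linear objective, noise model, and constant stepsize are invented for illustration:

```python
import numpy as np

def stochastic_mirror_descent(stoch_grad, x0, steps, iters, rng):
    """Entropy mirror map on the simplex: the prox step reduces to
    x_{k+1} proportional to x_k * exp(-steps(k) * g_k), normalized,
    where g_k is a stochastic gradient. Returns the averaged iterate."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for k in range(iters):
        g = stoch_grad(x, rng)
        x = x * np.exp(-steps(k) * g)
        x /= x.sum()
        avg += x
    return avg / iters

# Toy: minimize E[<xi, x>] over the simplex, where xi has mean m and
# Gaussian noise; the minimum puts all mass on the smallest entry of m.
m = np.array([0.5, 0.2, 0.9])
stoch_grad = lambda x, rng: m + 0.1 * rng.standard_normal(3)
rng = np.random.default_rng(0)
x_bar = stochastic_mirror_descent(
    stoch_grad, np.full(3, 1.0 / 3.0), lambda k: 0.5, 3000, rng)
# x_bar concentrates on coordinate 1, the smallest mean cost
```

Averaging the iterates, as done here, is the standard way to state guarantees for stochastic mirror descent on convex problems.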