Search Results
-
Adaptive three-term PRP algorithms without gradient Lipschitz continuity condition for nonconvex functions
At present, many conjugate gradient methods with global convergence have been proposed in unconstrained optimization, such as the MPRP algorithm proposed...
-
Theoretical analysis of Adam using hyperparameters close to one without Lipschitz smoothness
Convergence and convergence rate analyses of adaptive methods, such as Adaptive Moment Estimation (Adam) and its variants, have been widely studied...
-
An inertial spectral conjugate gradient projection method for constrained nonlinear pseudo-monotone equations
Consider the nonlinear pseudo-monotone equations over a nonempty closed convex set. A spectral conjugate gradient projection method with the inertial...
-
Certified Robust Models with Slack Control and Large Lipschitz Constants
Despite recent success, state-of-the-art learning-based models remain highly vulnerable to input changes such as adversarial examples. In order to...
-
A family of three-term conjugate gradient projection methods with a restart procedure and their relaxed-inertial extensions for the constrained nonlinear pseudo-monotone equations with applications
Al-Baali et al. (Comput. Optim. Appl. 60:89–110, 2015) have proposed a three-term conjugate gradient method which satisfies a sufficient descent...
-
Lipschitz constrained GANs via boundedness and continuity
One of the challenges in the study of generative adversarial networks (GANs) is the difficulty of controlling their performance. The Lipschitz constraint is...
-
Modified projection method and strong convergence theorem for solving variational inequality problems with non-Lipschitz operators
In this paper, we introduce a modified projection method and give a strong convergence theorem for solving variational inequality problems in real...
-
A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
Distributed algorithms are receiving renewed attention across multiple disciplines due to the dramatically increasing demand for big data processing...
-
Iterative methods for solving monotone variational inclusions without prior knowledge of the Lipschitz constant of the single-valued operator
In this work, we investigate a contraction-type method for solving monotone variational inclusion problems in real Hilbert spaces. We obtain strong...
-
Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks
It is a highly desirable property for deep networks to be robust against small input changes. One popular way to achieve this property is by...
-
Stepsize Learning for Policy Gradient Methods in Contextual Markov Decision Processes
Policy-based algorithms are among the most widely adopted techniques in model-free RL, thanks to their strong theoretical groundings and good...
-
Diagonal Barzilai-Borwein Rules in Stochastic Gradient-Like Methods
Minimization problems involving a finite sum as the objective function often arise in machine learning applications. The number of components of the...
-
Novel projection methods for solving variational inequality problems and applications
We introduce and analyze two modified subgradient extragradient methods with adaptive step sizes for solving variational inequality problems governed...
-
Model gradient: unified model and policy learning in model-based reinforcement learning
Model-based reinforcement learning is a promising direction for improving the sample efficiency of reinforcement learning by learning a model of the...
-
Local Optimisation of Nyström Samples Through Stochastic Gradient Descent
We study a relaxed version of the column-sampling problem for the Nyström approximation of kernel matrices, where approximations are defined from...
-
New proximal bundle algorithm based on the gradient sampling method for nonsmooth nonconvex optimization with exact and inexact information
In this paper, we focus on a descent algorithm for solving nonsmooth nonconvex optimization problems. The proposed method is based on the proximal...
-
Distributed Gradient Optimization Algorithms
In this chapter, we will elaborate on state-of-the-art gradient optimization algorithms designed for distributed training of machine learning models...
-
Variance-based stochastic projection gradient method for two-stage co-coercive stochastic variational inequalities
The existing stochastic approximation (SA)-type algorithms for two-stage stochastic variational inequalities (SVIs) are based on the uniqueness of...
-
Bregman Proximal Gradient Algorithms for Deep Matrix Factorization
A typical assumption for the convergence of first order optimization methods is the Lipschitz continuity of the gradient of the objective function...
-
A convergence analysis of hybrid gradient projection algorithm for constrained nonlinear equations with applications in compressed sensing
In this paper, we propose a projection-based hybrid spectral gradient algorithm for nonlinear equations with convex constraints, which is based on a...