-
Article
A Fritz John optimality condition using the approximate subdifferential
A Fritz John type first-order optimality condition is derived for infinite-dimensional programming problems involving the approximate subdifferential. A discussion of the important properties of the approximat...
-
Article
Generalized convex relations with applications to optimization and models of economic dynamics
We examine a notion of generalized convex set-valued mapping, extending the notions of a convex relation and a convex process. Under general conditions, we establish duality results for composite set-valued ma...
-
Article
Characterizing global optimality for DC optimization problems under convex inequality constraints
Characterizations of global optimality are given for general difference convex (DC) optimization problems involving convex inequality constraints. These results are obtained in terms of ε-subdifferentials of t...
-
Article
New Version of the Newton Method for Nonsmooth Equations
In this paper, an inexact Newton scheme is presented which produces a sequence of iterates in which the problem functions are differentiable. It is shown that the use of the inexact Newton scheme does not redu...
-
Chapter
Quasiconvexity via Two Step Functions
Quasiconvex functions are here defined and studied using a representation of these functions as generalized convex functions with respect to special classes of quasiaffine two step functions. Some of the main ...
-
Chapter
Approximations to the Clarke Generalized Jacobians and Nonsmooth Least-Squares Minimization
Here we use a uniform approximation to the Clarke generalized Jacobian to design an algorithm for solving a class of nonsmooth least-squares minimization problems:
-
Article
Increasing Convex-Along-Rays Functions with Applications to Global Optimization
Increasing convex-along-rays functions are defined within an abstract convexity framework. The basic properties of these functions including support sets and subdifferentials are outlined. Applications are pro...
-
Chapter
Nonlinear Unconstrained Optimization Methods: A Review
In this review paper, we report recent results in the study of nonlinear unconstrained optimization methods, such as the nonlinear penalty function method and the nonlinear Lagrangian method. One important feature of ...
-
Article
Extended Lagrange and Penalty Functions in Optimization
We consider nonlinear Lagrange and penalty functions for optimization problems with a single constraint. The convolution of the objective function and the constraint is accomplished by an increasing positively...