In this paper, some theoretical notions of well-posedness and of well-posedness in the generalized sense for scalar optimization problems are presented and some important results are analysed. Similar notions of well-posedness, respectively for a vector optimization problem and for a variational inequality of differential type, are discussed subsequently and, among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness. Moreover, after a review of well-posedness properties, the study is further extended to a scalarizing procedure that preserves the well-posedness of the notions listed, namely to a result, obtained with a special scalarizing function, which links the notion of pointwise well-posedness to the well-posedness of a suitable scalar variational inequality of differential type.

The notion of well-posedness is significant for several mathematical problems and is closely related to the stability of an optimization problem: it plays, in fact, a crucial role in both the theoretical and the numerical aspects of optimization theory [

Two different concepts of well-posedness are known in scalar optimization. The first, due to J. Hadamard, requires existence and uniqueness of the optimal solution and studies its continuous dependence on the data of the optimization problem under consideration. The second approach, introduced by A. N. Tykhonov in 1966, requires instead, besides the existence and uniqueness of the optimal solution, the convergence of every minimizing sequence of approximate solutions to the unique minimum point. The links between Hadamard and Tykhonov well-posedness have been studied in [

The notion of well-posedness for a vector optimization problem is, instead, much less developed: there is no commonly accepted definition of a well-posed problem in vector optimization. Some attempts in this direction have already been made [

Well-posedness has also been generalized to other contexts: variational inequalities, Nash equilibria and saddle point problems, all special cases of an equilibrium problem. For instance, [

The aim of this survey is twofold. The first aim is to recall some basic aspects of the mathematical theory of well-posedness in scalar optimization: to collect the two notions of well-posedness, Tykhonov well-posedness and Hadamard well-posedness, to give some strengthened versions of well-posedness and to show, in particular, some generalizations of the two types of well-posedness. The underlying idea is that the uniqueness of the solution can sometimes be dropped. Indeed, in several situations, as in linear and quadratic programming, uniqueness of the solution is not always required; sometimes, namely, the uniqueness of the solution of a particular minimization problem is not as important as its stability. The second aim is to present the notion of well-posedness in vector optimization and, in particular, to verify whether the well-posedness of the vector problem is equivalent to the well-posedness of the scalarized problem or, better, to investigate the links between the well-posedness of a vector optimization problem and that of a vector variational inequality. Among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness. After a review of well-posedness properties, the authors extend the study to a scalarizing procedure that preserves the well-posedness of the notions listed, namely to a result, obtained with a special scalarizing function, which links the notion of pointwise well-posedness to the well-posedness of a suitable scalar variational inequality of differential type.

The authors hope that the paper will be useful for stimulating research and for providing fresh insights leading to new applications.

The paper is organized as follows. In Section 2, after the introduction, the research aims are analysed, while in Section 3 some results on Tykhonov well-posedness, on Hadamard well-posedness and on their relations are analysed. In Section 4, some generalizations of the notion of well-posedness are investigated (for the case in which uniqueness of the solution fails), together with some strengthened versions of well-posedness (for instance, well-posedness in the sense of Levitin and Polyak), while in Section 5 some results on the well-posedness of vector optimization problems are studied and, among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness, introduced in [

In scalar optimization, the different notions of well-posedness are based either on the behaviour of "appropriate" minimizing sequences (converging to a solution of the problem) or on the dependence of the optimal solutions on the data of the optimization problem. This section is devoted precisely to the study of these two notions of well-posedness. In particular, the authors first give a characterization of Tykhonov well-posedness and a characterization of Hadamard well-posedness for the problem of minimizing a function f on a closed and convex set K; subsequently, they show the links between the two definitions, together with some extensions, and summarize some known results.

1) Tykhonov well-posedness

The first notion of well-posedness of an optimization problem was introduced in 1966 by A.N. Tykhonov and later took his name.

Let f : R n → R be a real-valued function and let K be a nonempty subset of R n . Throughout this paper, the scalar optimization problem:

min_{x ∈ K} f(x)

is denoted by P ( f , K ) and consists in finding x * ∈ K such that

f(x*) = inf { f(x) : x ∈ K } = inf_K f(x)

The set, possibly empty, of solutions of the optimization problem P(f, K) is denoted by argmin P(f, K).

The optimization problem P(f, K) is said to be Tykhonov well-posed if it satisfies all of the following properties:

a) existence of the solution (i.e. P ( f , K ) has a solution),

b) uniqueness of the solution (i.e. the solution set for P ( f , K ) is a singleton),

c) every x* ∈ K such that f(x*) is close to inf_K f(x) is a good approximation of the solution of P(f, K).

More precisely:

The problem P(f, K) is said to be Tykhonov well-posed if there exists exactly one x* ∈ K such that f(x*) ≤ f(x) for all x ∈ K, and if x_n → x* for any sequence {x_n} ⊂ K such that f(x_n) → f(x*) (i.e. f(x_n) → inf_K f(x)).

Recalling that a sequence {x_n} ⊆ K is said to be a minimizing sequence for problem P(f, K) when f(x_n) → inf_K f(x) = f(x*) as n → +∞, the previous definition can be rephrased in an equivalent way, so [

Definition 3.1: The problem P(f, K) is said to be Tykhonov well-posed if it has, on K, a unique global minimum point x* and, moreover, every minimizing sequence for P(f, K) converges to x*.

Definition 3.1 is motivated by the fact that, usually, every numerical method for solving P(f, K) iteratively provides minimizing sequences {x_n} for P(f, K); such sequences are also called sequences of approximate solutions for the problem P(f, K), and it is therefore important to be sure that the approximate solutions {x_n} are not far from the (unique) minimum x*.

In other words, the Tykhonov well-posedness of the optimization problem P(f, K) requires existence and uniqueness of the minimum point x*, towards which every sequence of approximate solutions of the problem P(f, K) converges. More precisely, to consider well-posedness of Tykhonov type, the notion of an "approximating sequence" for the solutions of optimization problems is introduced and the convergence of such sequences to a solution of the problem is required. For more details see [

Remark 3.1:

When K is compact, the uniqueness of the solution of a minimization problem P(f, K) is enough to guarantee its well-posedness. There are, however, simple examples in which the uniqueness of the solution of P(f, K) is not enough to guarantee its Tykhonov well-posedness, even for continuous functions.

A simple example of a problem with a unique solution but which is not Tykhonov well-posed is the following:

f(x) = x^2 / (x^4 + 1)

Take K = R. P(f, K) has a unique solution at zero, namely argmin P(f, K) = {0}, while x_n = n, n = 1, 2, ⋯, provides a minimizing sequence which does not converge to this unique solution. Hence P(f, K) is not Tykhonov well-posed. For continuous functions, therefore, the Tykhonov well-posedness of an optimization problem P(f, K) simply means that every minimizing sequence of P(f, K) is convergent.

Another example:

Let K = R. If f(x) = x^2 e^{−x}, then P(f, K) has a unique minimum x_0 = 0 but it is not Tykhonov well-posed, since the sequence {x_n} = {n} is minimizing but does not converge to x_0 = 0.

If f ( x ) = x 2 , then P ( f , K ) is Tykhonov well-posed.
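These examples can be checked numerically. The following sketch (illustrative Python code, not part of the original paper; the range of the sequence is an arbitrary choice) verifies that x_n = n is a minimizing sequence for f(x) = x^2/(x^4 + 1) even though it stays far from the unique minimizer x* = 0, while for g(x) = x^2 small values force closeness to 0.

```python
def f(x):
    # unique global minimizer x* = 0 (f(0) = 0, f(x) > 0 otherwise),
    # but f(x) -> 0 as |x| -> infinity as well, so the infimum is also
    # approached "at infinity": P(f, R) is not Tykhonov well-posed
    return x**2 / (x**4 + 1)

def g(x):
    # coercive convex function: every minimizing sequence converges to 0,
    # so P(g, R) is Tykhonov well-posed
    return x**2

# x_n = n is a minimizing sequence for f that does not converge to x* = 0
sequence = list(range(1, 1001))
f_values = [f(n) for n in sequence]
```

Running this shows f_values shrinking towards the infimum 0 while the sequence itself diverges, in contrast with g, for which g(x) ≤ ε forces |x| ≤ √ε.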

For convex functions in finite dimensions, the uniqueness of the solution is enough to guarantee Tykhonov well-posedness, while this is no longer valid in infinite dimensions [

Proposition 3.1: ( [

Different characterizations of Tykhonov well-posedness for minimization problems determined by convex functions in Banach spaces can be found in [

The next fundamental theorem [

Theorem 3.1: If the minimization problem P ( f , K ) is Tykhonov well-posed, then

diam[ε-argmin(f, K)] → 0 as ε → 0

where

ε-argmin(f, K) = { x ∈ K : f(x) ≤ ε + inf_K f(x) }

is the set of ε-minimizers (approximate solutions) of f over K and diam denotes the diameter of a given set.

Conversely, if f is lower semicontinuous and bounded from below on K,

diam[ε-argmin(f, K)] → 0 as ε → 0

implies Tykhonov well-posedness of P ( f , K ) .
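The criterion above can be illustrated numerically. In the following sketch (an assumption-laden illustration, not from the paper) K = [−100, 100] is replaced by a finite grid, and the diameter of the discrete ε-argmin is computed for a well-posed and a non-well-posed example.

```python
def eps_argmin_diam(f, grid, eps):
    # diameter of the discrete eps-argmin {x in grid : f(x) <= min f + eps},
    # a crude surrogate for diam[eps-argmin(f, K)]
    vals = [f(x) for x in grid]
    threshold = min(vals) + eps
    sub = [x for x, v in zip(grid, vals) if v <= threshold]
    return max(sub) - min(sub)

# K = [-100, 100] discretized with step 1e-3 (an arbitrary grid choice)
grid = [-100.0 + 0.001 * k for k in range(200_001)]

# Tykhonov well-posed: the eps-argmin shrinks around x* = 0
diam_wp = eps_argmin_diam(lambda x: x**2, grid, 1e-4)

# not well-posed: the eps-argmin also contains points with |x| large
diam_bad = eps_argmin_diam(lambda x: x**2 / (x**4 + 1), grid, 1e-4)
```

For x^2 the ε-minimizers with ε = 10^{-4} are confined to |x| ≤ 10^{-2}, while for x^2/(x^4 + 1) the whole tail of the grid qualifies, so the diameter stays of the order of the interval length.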

When K is closed and f is lower semicontinuous and bounded from below, it is possible to use the sets:

L_{K,f}(ε) = { x ∈ R^n : f(x) ≤ inf(f, K) + ε and d(x, K) ≤ ε }, ε > 0

to introduce the notion of well-posedness of P ( f , K ) :

Definition 3.2: Let K be closed and let f : K → R be lower semicontinuous. The minimization problem P(f, K) is said to be well-posed if:

inf { diam L_{K,f}(ε) : ε > 0 } = 0

Of course, if the uniqueness of the solution is added to any of the notions of generalized well-posedness, the corresponding non-generalized notion is obtained. Different characterizations of Tykhonov well-posedness for minimization problems determined by convex functions in Banach spaces can be found in [

2) Hadamard well-posedness

The second notion of well-posedness is inspired by the classical idea of J. Hadamard, dating back to the beginning of the last century: it requires existence and uniqueness of the solution of the optimization problem, together with continuous dependence of the optimal solution and of the optimal value on the data of the problem.

Definition 3.3: The minimization problem P(f, K) is said to be Hadamard well-posed if it has a unique solution x* ∈ K and x* depends continuously on the data of the problem.

This is the well-known condition of well-posedness considered in the study of differential equations, translated for minimum problems. The essence of this notion is that a “small” change of the data of the problem yields a “small” change of the solution.

In fact, very often the mathematical model of a phenomenon is so complicated that it is necessary to simplify it and replace it by another model which is "near" the original one; at the same time, it is important to be sure that the new problem will have a solution which is "near" the original one. The well-known variational principle of Ekeland [

3) Relations between Hadamard and Tykhonov well-posedness

Almost all the literature deals with different notions of well-posedness, especially with Tykhonov well-posedness. Some researchers have investigated the relations between these notions of well-posedness, but there is no systematic study of such relations. At first sight, the two notions seem to be independent but, at least in the convex case, there are some papers showing a connection between the two properties: for instance [

We recall the concept of Hausdorff convergence of sequences of sets.

Let D, E be subsets of R n and define

δ ( E , D ) = max { e ( E , D ) , e ( D , E ) }

where

e(E, D) = sup_{a ∈ E} d(a, D)

Definition 3.4: Let {A_k} be a sequence of subsets of R^n. We say that A_k converges to A ⊆ R^n in the sense of Hausdorff, and we write A_k → A, when δ(A_k, A) → 0.
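For finite sets, the excess e(E, D) and the Hausdorff distance δ(E, D) can be computed directly; the sets E and D below are illustrative choices, not taken from the paper.

```python
def dist(point, target):
    # Euclidean distance from a point to a finite set of points
    return min(sum((p - t) ** 2 for p, t in zip(point, q)) ** 0.5 for q in target)

def excess(E, D):
    # e(E, D) = sup_{a in E} d(a, D)
    return max(dist(a, D) for a in E)

def hausdorff(E, D):
    # delta(E, D) = max{e(E, D), e(D, E)}
    return max(excess(E, D), excess(D, E))

E = [(0.0, 0.0), (1.0, 0.0)]
D = [(0.0, 0.0), (3.0, 0.0)]
```

Note that the excess is not symmetric: here e(E, D) = 1 while e(D, E) = 2, and δ takes the larger of the two.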

The following theorems [

Theorem 3.2: Let K be a closed convex subset of R^n and let f : K → R be a convex continuous function with one and only one minimum point on every closed and convex subset of K. If P(f, K) is Hadamard well-posed, with respect to the well-known Hausdorff convergence, then P(f, K) is Tykhonov well-posed on every closed and convex subset of K.

Theorem 3.3: Let f : R n → R be a convex function uniformly continuous on every bounded set. If P ( f , K ) is Tykhonov well-posed on every closed and convex set, then P ( f , K ) is Hadamard well-posed, with respect to the Hausdorff convergence.

The Tykhonov well-posedness does not, in general, imply the Hadamard well-posedness if the objective function is only continuous.

In the above definitions, the existence and the uniqueness of the solution towards which every minimizing sequence converges are required. The different notions of well-posedness, however, admit generalizations which do not require uniqueness of the solution. In other words, the uniqueness requirement can be relaxed and well-posed optimization problems with several solutions can be considered. Therefore, while the requirement of existence in the previous definitions is crucial, the uniqueness condition is more debatable. In fact, many problems in linear and quadratic programming, and many multicriteria optimization problems, are usually considered well-posed problems although uniqueness is usually not satisfied [

More precisely, in scalar optimization problems it is often difficult to guarantee the uniqueness of the optimal solutions, a uniqueness that is nevertheless critical for solution stability and computation.

In other words, the different notions of well-posedness admit generalizations which do not require uniqueness of the solution. In particular, the concept of Tykhonov well-posedness can be extended to minimum problems without uniqueness of the optimal solutions. It becomes necessary, namely, to generalize the notion of well-posedness for a minimization problem introduced by Tykhonov, based on the fact that every minimizing sequence converges towards the unique minimum solution, and to discuss well-posedness for problems having more than one solution.

This new definition requires existence, but not uniqueness, of the solution of P(f, K) and, for every minimizing sequence, the convergence of some subsequence of the minimizing sequence towards some optimal solution.

Definition 4.1: The problem P ( f , K ) is called Tykhonov well-posed in the generalized sense if every minimizing sequence for P ( f , K ) has some subsequence converging to an optimal solution of P ( f , K ) , i.e. to an element of arg min ( f , K ) .

More precisely, the problem P(f, K) is called Tykhonov well-posed in the generalized sense if argmin(f, K) ≠ ∅ and every sequence x_n ∈ K such that f(x_n) → inf f(K) has some subsequence y_n → y with y ∈ argmin(f, K).

From the definition it follows, obviously, that, if the problem P ( f , K ) is Tykhonov well-posed in the generalized sense, then it has a non-empty compact set of solutions, i.e. arg min ( f , K ) is nonempty and compact. Moreover, when P ( f , K ) is well-posed in the generalized sense and arg min ( f , K ) is a singleton (i.e. its solution is unique), then P ( f , K ) is Tykhonov well-posed.

When argmin(f, K) is a singleton, the previous definition reduces to the classical notion of Tykhonov well-posedness; in other words, the problem P(f, K) is Tykhonov well-posed if it is Tykhonov well-posed in the generalized sense and argmin(f, K) is a singleton. Thus generalized well-posedness is really a generalization of Tykhonov well-posedness.
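A standard illustration (the function and the oscillating sequence are assumed for this sketch, not taken from the paper) is f(x) = (x^2 − 1)^2 on K = R: the solution set {−1, 1} is not a singleton, so classical Tykhonov well-posedness fails, yet every minimizing sequence clusters at the two minimizers.

```python
def f(x):
    # argmin(f, R) = {-1, 1}: two solutions, so P(f, R) is not
    # Tykhonov well-posed in the classical sense
    return (x**2 - 1) ** 2

# a minimizing sequence that oscillates between the two minimizers:
# it does not converge, but its subsequences do
seq = [(-1) ** n * (1 + 1 / n) for n in range(1, 2001)]
f_vals = [f(x) for x in seq]      # approaches inf f = 0
even_terms = seq[1::2]            # n even: terms approach +1
odd_terms = seq[0::2]             # n odd:  terms approach -1
```

The full sequence has no limit, but the even-indexed and odd-indexed subsequences converge to +1 and −1 respectively, both elements of argmin(f, R), matching Definition 4.1.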

In order to weaken the requirement of uniqueness of the solution, other more general notions of well-posedness have been introduced, depending on the hypotheses made on f (and K). Here, the authors recall the concept of well-setness introduced in [

Definition 4.2: Problem P ( f , K ) is said to be well-set when, for every minimizing sequence

{ x n } ⊆ K , d ( x n , arg min ( f , K ) ) → 0 , as n → + ∞ ,

where argmin(f, K) denotes the set of solutions of problem P(f, K), while d(x, K) = inf { ‖x − y‖ : y ∈ K } is the distance of the point x from the set K.

The idea of the behaviour of minimizing sequences was also used by different authors to extend this concept to strengthened notions. These notions, however, are not suitable for numerical methods, where the function f is approximated by a family or a sequence of functions. For this reason, new notions of well-posedness have been introduced and studied.

First, however, we consider two generalizations of the notion of minimizing sequence.

The first was introduced and studied by [

Konsulova and Revalski [

The well-posedness of the minimization problem P ( f , K ) in the sense of Tykhonov concerns the behaviour of the function f in the set K but it does not take into account the behaviour of f outside K [

Definition 4.3: Let K be a nonempty subset of R^n. A sequence {x_n}_{n=1}^∞ ⊂ R^n is a Levitin-Polyak minimizing sequence for the minimization problem P(f, K) if

f ( x n ) → inf K f ( x ) and d ( x n , K ) → 0

where d(x_n, K) = inf { ‖x_n − y‖ : y ∈ K } is the distance from the point x_n to the set K, while ‖·‖ is the Euclidean norm.

In other words, a sequence {x_n}_{n=1}^∞ is a Levitin-Polyak minimizing sequence for P(f, K) if not only {f(x_n)}_{n=1}^∞ approaches the greatest lower bound of f over K but also the sequence {x_n}_{n=1}^∞ tends to K.
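A minimal sketch of such a sequence (K, f and the sequence are assumptions chosen for illustration): with K = [0, +∞) and f(x) = x, the points x_n = −1/n lie outside K yet form a Levitin-Polyak minimizing sequence converging to the solution x* = 0.

```python
def f(x):
    # inf_K f = 0 on K = [0, +infinity), attained at x* = 0
    return x

def dist_to_K(x):
    # d(x, K) for K = [0, +infinity)
    return max(-x, 0.0)

seq = [-1.0 / n for n in range(1, 1001)]   # every term lies outside K
f_vals = [f(x) for x in seq]               # f(x_n) -> 0 = inf_K f
dists = [dist_to_K(x) for x in seq]        # d(x_n, K) -> 0
```

Both defining conditions of Definition 4.3 hold, even though no term of the sequence belongs to K; an ordinary minimizing sequence in the sense of Section 3 could not capture this behaviour.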

Then, the well-posedness concept can be strengthened as follows:

Definition 4.4: The minimization problem P(f, K) is called Levitin-Polyak well-posed if it has a unique solution x* ∈ K and, moreover, every Levitin-Polyak minimizing sequence for P(f, K) converges to x*.

Of course, this definition is stronger than Tykhonov's, since it requires that each sequence in a larger class of minimizing sequences converges to the unique solution; namely, Levitin-Polyak well-posedness implies Tykhonov well-posedness.

The converse is true provided that f is uniformly continuous but not necessarily true if f is only continuous. It is enough to consider

K = R × {0} ⊂ R^2, f(x, y) = x^2 − y^2 (x + x^4)

and the generalized minimizing sequence {(n, 1/n)}.

Just as Tykhonov well-posedness can be characterized by the behaviour of diam[ε-argmin(f, K)], Levitin-Polyak well-posedness can be characterized by the behaviour of the set:

L_{K,f}(ε) = { x ∈ R^n : d(x, K) ≤ ε and f(x) ≤ inf(f, K) + ε }

defined for ε > 0 and for f bounded from below on K.

In analogy with Theorem 3.1, the following result holds [

Theorem 4.2: If K is closed and f is lower semicontinuous and bounded from below on K, then diam L_{K,f}(ε) → 0 as ε → 0 implies Levitin-Polyak well-posedness of P(f, K).

A second generalization of the usual notion of minimizing sequences is the following:

Definition 4.5: A sequence {x_n}_{n=1}^∞ ⊂ R^n is said to be a generalized minimizing sequence for the minimization problem P(f, K) if both of the following hold:

d ( x n , K ) → 0 and lim sup f ( x n ) ≤ inf K f (x)

Consequently another strengthened version of the well-posedness is the following:

Definition 4.6: The minimization problem P(f, K) is said to be strongly well-posed if it has a unique solution x* ∈ K and, moreover, every generalized minimizing sequence for P(f, K) converges to x*.

Obviously, in general, strong well-posedness of the problem P(f, K) implies Levitin-Polyak well-posedness, which in turn implies Tykhonov well-posedness. It is important to underline that each of the previous definitions has been widely studied in many papers [

The corresponding generalization of Levitin-Polyak well-posedness to the case of non-uniqueness of the solution, that is, when the uniqueness of the solution is dropped, is:

Definition 4.7: The minimization problem P(f, K) is called generalized Levitin-Polyak well-posed if every Levitin-Polyak minimizing sequence {x_n} for P(f, K) has a subsequence converging to a solution of P(f, K).

Of course, any of the notions of generalized well-posedness, when the uniqueness of the solution is added, is obviously equivalent to the corresponding non-generalized notion.

In scalar optimization, the different notions of well-posedness are based either on the behaviour of "appropriate" minimizing sequences or on the dependence of the optimal solution on the data of the optimization problem. In vector optimization, instead, there is no commonly accepted definition of well-posedness, but there are different notions of well-posedness for vector optimization problems. For a detailed survey of these problems, it is possible to refer to [

In this section, we propose some of these definitions of well-posedness for a vector optimization problem; in particular, among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness, introduced in [

We consider the vector optimization problem:

VP(f, K): min_C f(x), x ∈ K

where K is a nonempty, closed, convex subset of R^n, f : K ⊆ R^n → R^l is a continuous function and C ⊆ R^l is a closed, convex, pointed cone with nonempty interior. The interior of C is denoted by int C.

A point x * ∈ K is said to be an efficient solution or minimal solution of problem V P ( f , K ) when:

f ( x ) − f ( x * ) ∉ − C \ { 0 } ∀ x ∈ K

If, in the above definition, the cone C̃ = {0} ∪ int C is used instead of the cone C, then x* is said to be a weak minimal solution. Thus, a point x* ∈ K is said to be a weakly efficient solution or weak minimal solution of problem VP(f, K) when:

f ( x ) − f ( x * ) ∉ − int C ∀ x ∈ K

The set of all efficient solutions (minimal solutions) of problem VP(f, K) is denoted by Eff(f, K), while WEff(f, K) denotes the set of weakly efficient solutions (weak minimal solutions) of VP(f, K). Moreover, every minimal solution is also a weak minimal solution, but the converse is not true in general.
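For a finite set of objective values in R^2 ordered by C = R^2_+, both solution sets can be enumerated directly. The values below are an assumed example (not from the paper) exhibiting a weak minimal solution that is not minimal.

```python
# objective values f(x) at four feasible points, with C = R^2_+
values = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.0, 3.0)]

def is_efficient(v, vals):
    # no other value w satisfies w - v in -C \ {0},
    # i.e. w <= v componentwise with w != v
    return not any(w != v and all(wi <= vi for wi, vi in zip(w, v))
                   for w in vals)

def is_weakly_efficient(v, vals):
    # no value w satisfies w - v in -int C,
    # i.e. w < v strictly in every component
    return not any(all(wi < vi for wi, vi in zip(w, v)) for w in vals)

eff = [v for v in values if is_efficient(v, values)]
weff = [v for v in values if is_weakly_efficient(v, values)]
```

Here (2, 3) is dominated by (2, 2) componentwise, so it is not efficient, but no point is strictly smaller in both components, so it remains weakly efficient: Eff ⊆ WEff with strict inclusion.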

In this section, the authors recall a notion of well-posedness that considers a single point (a fixed efficient solution) rather than the whole solution set: a particular type of pointwise well-posedness and strong pointwise well-posedness for vector optimization problems. This definition can be introduced by considering, as in the scalar case, the diameter of the level sets of the function f.

Generalizing Tykhonov’s definition of well-posedness for a scalar optimization problem, in [

Definition 5.1: The vector optimization problem V P ( f , K ) is said to be pointwise well-posed at the efficient solution x * ∈ K or Tykhonov well-posed at x * ∈ E f f ( f , K ) , if:

inf_{α > 0} diam L(x*, k, α) = 0 ∀ k ∈ C

where:

L(x*, k, α) = { x ∈ K : f(x) ≤_C f(x*) + αk } = { x ∈ K : f(x) ∈ f(x*) + αk − C }

Definition 5.2: The vector optimization problem V P ( f , K ) is said to be strongly pointwise well-posed at the efficient solution x * , or Tykhonov strongly well-posed at x * ∈ E f f ( f , K ) , if:

inf_{α > 0} diam L_s(x*, k, α) = 0 ∀ k ∈ C

where:

L_s(x*, k, α) = { x ∈ R^n : f(x) ≤_C f(x*) + αk and d(x, K) ≤ α }

For the sake of completeness, we recall that it is also possible to introduce another type of well-posedness of the vector optimization problem V P ( f , K ) at a point x * ∈ E f f ( f , K ) [

Definition 5.3: The vector optimization problem V P ( f , K ) is said to be H-well-posed at a point x * ∈ E f f ( f , K ) if x n → x * for any sequence { x n } ⊆ K , such that f ( x n ) → f ( x * ) .

Definition 5.4: The vector optimization problem V P ( f , K ) is said to be strongly H-well-posed at a point x * ∈ E f f ( f , K ) if x n → x * for any sequence { x n } such that f ( x n ) → f ( x * ) with d ( x n , K ) → 0 .

Remark 5.1:

If int C ≠ ∅, then well-posedness at a point x* ∈ Eff(f, K) of the vector optimization problem VP(f, K) according to Definition 5.1 [resp. Definition 5.2] implies well-posedness according to Definition 5.3 [resp. Definition 5.4]. It is easy to realize that the pointwise well-posedness of type 5.1 is thus stronger than the pointwise well-posedness of type 5.3 [

A useful tool in the study of vector optimization problems is provided by vector variational inequalities. Introduced first by Giannessi in 1980, they have been studied intensively, both because they are efficient tools for investigating vector optimization problems and because they provide a mathematical model for equilibrium problems; they provide, namely, a unified and efficient framework for a wide spectrum of applied problems.

First, however, it is important to underline that the theory of variational inequalities provides a convenient mathematical apparatus for obtaining results relating to a large number of problems with a wide range of applications in economics, finance, and the social, pure and applied sciences. In fact, it is well known that many equilibrium problems arising in finance, economics, transportation science and contact problems in elasticity can be formulated in terms of variational inequalities [

There is a very close connection between optimization problems and variational inequalities. In fact, the well-posedness of a scalar minimization problem is linked to that of a scalar variational inequality and, in particular, to a variational inequality of differential type (i.e. one in which the operator involved is the gradient of a given function). The links between variational inequalities of differential type and optimization problems have been deeply studied in [

In this section, vector variational inequalities of differential type are treated.

Let f : R n → R l be a function differentiable on an open set containing the closed convex set K ⊆ R n . The vector variational inequality problem of differential type consists in finding a point x * ∈ K such that:

SVVI(f′, K): ⟨f′(x*), y − x*⟩_l ∉ −int C ∀ y ∈ K

where f ′ denotes the Jacobian of f and 〈 f ′ ( x * ) , y − x * 〉 l is the vector whose components are the l inner products 〈 f ′ i ( x * ) , y − x * 〉 .

It is well known that SVVI(f′, K) provides a necessary condition for x* to be an efficient solution of VP(f, K). It is, instead, a sufficient condition for x* to be an efficient solution of VP(f, K) if f is int C-convex, while, if f is C-convex, SVVI(f′, K) is a sufficient condition for x* to be a weakly efficient solution of VP(f, K). These remarks underline the links between optimization problems and variational inequalities in the vector case as well. This is a further reason for seeking a suitable definition of well-posedness for a vector variational inequality which can be compared and related to the given definition for vector optimization. A notion of well-posedness is then introduced for the vector variational inequality problem SVVI(f′, K), obtained by generalizing the definition of the scalar case, and the following set is defined:

T_{c_0}(ε) := { x ∈ K : ⟨f′(x), y − x⟩_l ∉ −ε ‖y − x‖ c_0 − int C, ∀ y ∈ K }

where ε > 0 and c 0 ∈ int C . T c 0 ( ε ) is a directional generalization of the set T ( ε ) of the scalar case.
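The defining condition of SVVI(f′, K) can be checked numerically on a toy instance. The example below is an assumption made for this sketch (not from the paper): f(x) = (x^2, (x − 1)^2) on K = [0, 1] with C = R^2_+, for which the two components of ⟨f′(x*), y − x*⟩_2 can never be simultaneously negative, so every point of K solves SVVI(f′, K).

```python
def components(x_star, y):
    # the two components of <f'(x*), y - x*>_2 for f(x) = (x^2, (x-1)^2),
    # whose Jacobian rows are f1'(x) = 2x and f2'(x) = 2(x - 1)
    return 2 * x_star * (y - x_star), 2 * (x_star - 1) * (y - x_star)

def solves_svvi(x_star, K_grid):
    # x* solves SVVI(f', K) iff no y in K makes both components negative,
    # i.e. the vector never lies in -int R^2_+
    return not any(c1 < 0 and c2 < 0
                   for c1, c2 in (components(x_star, y) for y in K_grid))

K_grid = [k / 100 for k in range(101)]          # grid on K = [0, 1]
all_solve = all(solves_svvi(x, K_grid) for x in K_grid)
```

For x* ∈ (0, 1) the two factors 2x* and 2(x* − 1) have opposite signs, so the components cannot both be negative for the same y; this matches the fact that here every point of K is efficient.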

Definition 5.5: The variational inequality SVVI(f′, K) is well-posed if, for every c_0 ∈ int C, e(T_{c_0}(ε), WEff(f, K)) → 0 as ε → 0, where e(E, D) = sup_{a ∈ E} d(a, D).

The following result states the relationship between well-posed optimization problem and a well-posed variational inequality, in the vector case [

Theorem 5.1: If the variational inequality S V V I ( f ′ , K ) is well-posed, then problem V P ( f , K ) is well-posed at x * .

For C-convex functions, in particular, the well-posedness of VP(f, K) and that of SVVI(f′, K) substantially coincide. To show this, it is necessary to assume that f is differentiable on an open set containing K and to observe the following:

Definition 5.6: The function f : K ⊆ R n → R l is said to be C-convex when:

f ( λ x + ( 1 − λ ) y ) − [ λ f ( x ) + ( 1 − λ ) f ( y ) ] ∈ − C ∀ x , y ∈ K , ∀ λ ∈ [ 0 , 1 ]

Lemma 1: If f : R n → R l is C-convex, then:

T_{c_0}(ε) = { x ∈ K : f(y) − f(x) ∉ −ε ‖y − x‖ c_0 − int C, ∀ y ∈ K }

Theorem 5.2: Let f be a C-convex function. Assume that c 0 ∈ int C , and that T c 0 ( ε ) is bounded for some ε > 0 . Then S V V I ( f ′ , K ) is well-posed.

Therefore, if f is a C-convex function, the well-posedness of SVVI(f′, K) is ensured and, by Theorem 5.1, substantially coincides with the well-posedness of VP(f, K).

In this section, the authors, after a review of well-posedness, focus their attention on a scalarization procedure that preserves the well-posedness of the notions listed above; among the various scalarization procedures known in the literature, they consider the one based on the so-called "oriented distance" function from a point to a set. This special scalarizing function, introduced by Hiriart-Urruty in [

This function allows one to establish a parallelism between the well-posedness of the original vector problem and the well-posedness of the associated scalar problem. Indeed, the authors show that one of the weakest notions of well-posedness in vector optimization is linked to the well-setness of the scalarized problem, while some stronger notions of well-posedness in the vector case are related to the Tykhonov well-posedness of the associated scalarization.

These results constitute a simple tool to show that, under some additional compactness assumptions, quasiconvex vector optimization problems are well-posed. Thus, a known result about scalar problems is extended to vector optimization, improving a previous result concerning convex vector problems.

Throughout this section we assume that f : R n → R l is differentiable on an open set containing the closed convex set K ⊆ R n .

Definition 6.1: For a set A ⊆ R l , let Δ A : R l → R ∪ { ± ∞ } be defined as:

Δ A ( y ) = d ( y , A ) − d ( y , A c )

where d(y, A) = inf_{a ∈ A} ‖y − a‖ is the distance from the point y to the set A.

The function Δ_A is called the oriented distance function from the point y to the set A; it was introduced in the framework of nonsmooth scalar optimization.

Δ_A(y) < 0 for y ∈ int A (the interior of A), Δ_A(y) = 0 for y ∈ bd A (the boundary of A), and Δ_A(y) > 0 elsewhere.
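These sign properties are easy to verify for a concrete cone. The following sketch assumes A = −R^2_+ (a choice made for illustration), for which both distances have simple closed forms: d(y, A) = ‖(max(y_1, 0), max(y_2, 0))‖ and, for y in the interior of A, d(y, A^c) = min(−y_1, −y_2).

```python
import math

def delta(y):
    # oriented distance to A = -R^2_+ = {(a, b) : a <= 0, b <= 0}
    y1, y2 = y
    d_to_A = math.hypot(max(y1, 0.0), max(y2, 0.0))
    # distance to the complement is positive only in the interior of A
    d_to_complement = min(-y1, -y2) if (y1 < 0 and y2 < 0) else 0.0
    return d_to_A - d_to_complement

inside = delta((-1.0, -2.0))    # y in int A: negative value
boundary = delta((0.0, -3.0))   # y in bd A: zero
outside = delta((1.0, 2.0))     # y in int(A^c): positive value
```

Inside A the value is −min_i |y_i| = −1, on the boundary it vanishes, and outside it equals the distance ‖(1, 2)‖ = √5, matching the three sign cases above.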

The main properties of function Δ A are gathered in the following theorem [

Theorem 6.1:

1) if A ≠ ∅ and A ≠ R^l, then Δ_A is real-valued;

2) Δ A is 1-Lipschitzian;

3) Δ A ( y ) < 0 , ∀ y ∈ int A , Δ A ( y ) = 0 , ∀ y ∈ b d A and Δ A ( y ) > 0 , ∀ y ∈ int A c

where bd A denotes the boundary of the set A and A^c the complement of the set A;

4) if A is closed, then it holds A = { y : Δ A ( y ) ≤ 0 } ;

5) if A is convex, then Δ A is convex;

6) if A is a cone, then Δ A is positively homogeneous;

7) if A is a closed convex cone, then Δ_A is non-increasing with respect to the ordering relation induced by A on R^l, i.e. the following holds:

if y_1, y_2 ∈ R^l, then y_1 − y_2 ∈ A ⇒ Δ_A(y_1) ≤ Δ_A(y_2);

if A has nonempty interior, then y_1 − y_2 ∈ int A ⇒ Δ_A(y_1) < Δ_A(y_2).

The oriented distance function Δ_A is also used to obtain a scalarization of a vector optimization problem [

It has been proved in [

Δ − A ( y ) = max ξ ∈ A ′ ∩ S 〈 ξ , y 〉

where A ′ : = { x ∈ R l | 〈 x , a 〉 ≥ 0 , ∀ a ∈ A } is the positive polar cone of A and S the unit sphere in R l .
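This representation can be verified numerically in a simple case (the choice A = R 2 + below is an illustrative assumption; for the orthant, A ′ = A since it is self-polar): the closed form of Δ − A agrees with the maximum of 〈 ξ , y 〉 over ξ ∈ A ′ ∩ S , here approximated by sampling the unit arc in the first quadrant.

```python
import math

def delta_minus_A(y):
    # Closed form of Delta_{-A}(y) for A = R^2_+, so that -A is the
    # nonpositive orthant and A' = A (the orthant is self-polar):
    # ||positive part of y|| outside -A, largest component of y inside -A.
    pos = [max(yi, 0.0) for yi in y]
    d_out = math.sqrt(sum(v * v for v in pos))
    return d_out if d_out > 0.0 else max(y)

def sampled_max(y, n=20000):
    # max over xi in A' ∩ S of <xi, y>, sampling the unit arc in R^2_+
    best = -math.inf
    for k in range(n + 1):
        t = (math.pi / 2) * k / n          # angle sweeping the arc
        xi = (math.cos(t), math.sin(t))
        best = max(best, xi[0] * y[0] + xi[1] * y[1])
    return best

# The two expressions agree up to the sampling resolution:
for y in [(3.0, 4.0), (-1.0, 2.0), (-2.0, -5.0)]:
    assert abs(delta_minus_A(list(y)) - sampled_max(y)) < 1e-3
```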

The function Δ − A is used in order to give scalar characterizations of some notions of efficiency for problem V P ( f , K ) . Furthermore, some results characterize pointwise well-posedness of problem V P ( f , K ) through function Δ − A [

φ x * ( x ) = max ξ ∈ C ′ ∩ S 〈 ξ , f ( x ) − f ( x * ) 〉

where C ′ denotes the positive polar of C and S the unit sphere in R l . Clearly

φ x * ( x ) = Δ − C ( f ( x ) − f ( x * ) )

The function φ x * is directionally differentiable [

φ ′ x * ( x ; d ) = lim t → 0 + φ x * ( x + t d ) − φ x * ( x ) t

and the associated scalar problem: find x * ∈ K , such that:

S V I ( φ ′ x * , K ) φ ′ x * ( x * ; y − x * ) ≥ 0 ∀ y ∈ K
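As an illustrative numerical sketch (the map f, the point x * , the grid standing in for K and the cone C = R 2 + below are toy assumptions, not data from the paper), one can approximate φ ′ x * by the one-sided difference quotient defining it and test the inequality of S V I ( φ ′ x * , K ) at a candidate point.

```python
import math

def phi(x, xstar, f):
    # phi_{x*}(x) = Delta_{-C}(f(x) - f(x*)) for C = R^2_+:
    # norm of the positive part outside -C, largest component inside -C
    y = [a - b for a, b in zip(f(x), f(xstar))]
    pos = [max(v, 0.0) for v in y]
    d = math.sqrt(sum(v * v for v in pos))
    return d if d > 0.0 else max(y)

def dir_deriv(x, d, xstar, f, t=1e-7):
    # one-sided difference quotient approximating phi'_{x*}(x; d)
    return (phi(x + t * d, xstar, f) - phi(x, xstar, f)) / t

f = lambda x: (x * x, (x - 1.0) ** 2)   # a toy C-convex map R -> R^2
K = [i / 10.0 for i in range(-20, 21)]  # grid on [-2, 2] standing in for K

xstar = 0.0   # a weakly efficient point of f on this grid
# SVI(phi'_{x*}, K): phi'_{x*}(x*; y - x*) >= 0 for all y in K
assert all(dir_deriv(xstar, y - xstar, xstar, f) >= -1e-6 for y in K)
```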

The solutions of problem S V I ( φ ′ x * , K ) coincide with the solutions of S V V I ( f ′ , K ) .

Proposition 6.1: Let K be a convex set. If x * ∈ K solves problem S V I ( φ ′ x * , K ) , then x * is a solution of S V V I ( f ′ , K ) . Conversely, if x * ∈ K solves S V V I ( f ′ , K ) , then x * solves problem S V I ( φ ′ x * , K ) .

The scalar problem associated with the vector problem V P ( f , K ) is:

P ( φ x * , K ) min x ∈ K φ x * ( x )

The relations between the solutions of problem P ( φ x * , K ) and those of problem V P ( f , K ) are investigated in [

Proposition 6.2: The point x * ∈ K is a weak efficient solution of V P ( f , K ) if and only if x * is a solution of P ( φ x * , K ) .

The proof is omitted; for it, the reader is referred to [
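Proposition 6.2 can be checked numerically on a small example (the objective map, the grid standing in for K and the cone C = R 2 + below are illustrative assumptions): on a finite grid, x * is weakly efficient exactly when φ x * attains its minimum value 0 at x * .

```python
import math

def phi(x, xs, f):
    # phi_{x*}(x) = Delta_{-C}(f(x) - f(x*)) for C = R^2_+ (closed form)
    y = [a - b for a, b in zip(f(x), f(xs))]
    pos = [max(v, 0.0) for v in y]
    d = math.sqrt(sum(v * v for v in pos))
    return d if d > 0.0 else max(y)

def weakly_efficient(xs, K, f):
    # no y in K with f(y) strictly componentwise below f(x*)
    return not any(all(a < b for a, b in zip(f(y), f(xs))) for y in K)

def solves_P(xs, K, f):
    # x* minimizes phi_{x*} over K; note phi_{x*}(x*) = 0
    return all(phi(x, xs, f) >= phi(xs, xs, f) for x in K)

K = [i / 50.0 for i in range(-100, 101)]   # grid on [-2, 2]
f = lambda x: (x * x, (x - 1.0) ** 2)       # two convex objectives

# The two characterizations agree at every grid point:
for xs in K:
    assert weakly_efficient(xs, K, f) == solves_P(xs, K, f)
```

The equivalence reflects the sign property of the oriented distance: φ x * ( x ) < 0 exactly when f ( x ) − f ( x * ) lies in int ( − C ) , i.e. when x strictly dominates x * .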

Also well-posedness of V P ( f , K ) can be linked to that of P ( φ x * , K ) [

Proposition 6.3: Let f be a continuous function and let x * ∈ K be an efficient solution of V P ( f , K ) . Problem V P ( f , K ) is pointwise well-posed at x * if and only if problem P ( φ x * , K ) is Tykhonov well-posed.

The next proposition links the well-posedness of S V I ( φ ′ x * , K ) to the pointwise well-posedness of V P ( f , K ) . We need to recall Ekeland’s variational principle [

Proposition 6.4: If S V I ( φ ′ x * , K ) is pointwise well-posed at x * ∈ K , then problem V P ( f , K ) is pointwise well-posed at x * .

Proof: By proposition 6.3, it is enough to prove that if S V I ( φ ′ x * , K ) is pointwise well-posed at x * , then problem P ( φ x * , K ) is Tykhonov well-posed.

In fact, for every ε > 0 and every x ∈ ε - arg min ( φ x * , K ) , Ekeland’s variational principle yields a point x ¯ such that:

‖ x ¯ − x ‖ ≤ ε and φ x * ( x ¯ ) ≤ φ x * ( y ) + ε ‖ x ¯ − y ‖ ∀ y ∈ K .

Introducing the set

Z ( ε ) = { x ∈ K : φ x * ( x ) ≤ φ x * ( y ) + ε ‖ x − y ‖ , ∀ y ∈ K }

then, it follows that

ε - arg min ( φ x * , K ) ⊆ Z ( ε ) + ε B .

It follows, then, that for every u ∈ ε - arg min ( φ x * , K ) there exists x such that ‖ u − x ‖ ≤ ε and

φ x * ( x + t ( y − x ) ) ≥ φ x * ( x ) − ε t ‖ y − x ‖ , 0 < t < 1 , y ∈ K

Dividing by t and letting t → 0 + in the previous inequality gives φ ′ x * ( x , y − x ) ≥ − ε ‖ y − x ‖ , so that x ∈ T x * ( ε ) and:

ε - arg min ( φ x * , K ) ⊆ T x * ( ε ) + ε B

Since d i a m T x * ( ε ) → 0 as ε → 0 , it follows that P ( φ x * , K ) is Tykhonov well-posed.

Next, the authors prove that the converse of the previous proposition holds under convexity assumptions, namely when f is C-convex. First, they need the following lemma:

Lemma 6.1: If f : R n → R l is a C-convex function, then the function φ x * is convex, for every x * ∈ K .

Then:

Proposition 6.5: Let f be C-convex and assume V P ( f , K ) is pointwise well-posed at x * ∈ K ( x * is an efficient solution). Then S V I ( φ ′ x * , K ) is pointwise well-posed at x * .

Proof: Assume, ab absurdo, that S V I ( φ ′ x * , K ) is not pointwise well-posed at x * . Then there exist a > 0 and ε n → 0 with d i a m T x * ( ε n ) > 2 a , and one can find some x n ∈ T x * ( ε n ) with ‖ x n − x * ‖ ≥ a .

Without loss of generality, it is possible to put x * = 0 . Since φ x * is convex, it follows that:

φ x * ( 0 ) − φ x * ( y n ) ≥ φ ′ x * ( y n , − y n )

where y n = a x n / ‖ x n ‖ . Since the sequence y n is bounded, we can assume y n → y ¯ ∈ K (here the closedness of K is needed). Further, since x n ∈ T x * ( ε n ) ,

φ ′ x * ( x n , − x n ) ≥ − ε n ‖ x n ‖

Since

φ ′ x * ( x n , − x n ) = lim t → 0 + [ φ x * ( x n − t x n ) − φ x * ( x n ) ] / t = lim t → 0 + − [ φ x * ( x n + ( − t ) x n ) − φ x * ( x n ) ] / ( − t ) = − φ ′ x * ( x n , x n )

and

φ ′ x * ( y n , − y n ) = − φ ′ x * ( y n , y n )

from the continuity of φ x * , it is possible to obtain

φ ′ x * ( y n , y n ) = φ ′ x * ( a x n / ‖ x n ‖ , a x n / ‖ x n ‖ ) ≤ ( a / ‖ x n ‖ ) φ ′ x * ( a x n / ‖ x n ‖ , x n ) ≤ ( a / ‖ x n ‖ ) φ ′ x * ( x n , x n )

The last inequality follows from the convexity of φ ′ x * [

Hence

− φ ′ x * ( y n , − y n ) ≤ ( a / ‖ x n ‖ ) φ ′ x * ( x n , x n )

it follows

( a / ‖ x n ‖ ) φ ′ x * ( x n , − x n ) ≤ φ ′ x * ( y n , − y n )

and so

φ x * ( 0 ) − φ x * ( y n ) ≥ ( a / ‖ x n ‖ ) φ ′ x * ( x n , − x n ) ≥ − a ε n

Letting n → + ∞ , we obtain φ x * ( 0 ) − φ x * ( y ¯ ) ≥ 0 , with ‖ y ¯ ‖ = a > 0 , which contradicts the Tykhonov well-posedness of P ( φ x * , K ) guaranteed by Proposition 6.3. This proves the thesis.

In this paper, the authors have reviewed and studied some properties of well-posedness, a field that has attracted the attention of many researchers for various types of problems and that demands considerable intellectual effort. In fact, almost all the literature deals directly with specific notions of well-posedness, but there is no general investigation of the relations between them for different problems; research in this area is therefore much needed, in order to develop and foster new and innovative applications in various branches of pure and applied sciences. The authors have given only a brief review of this fast-growing field and hope that the general theories and results surveyed in this paper can be used to formulate and outline some connections with other mathematical fields.

The authors declare no conflicts of interest regarding the publication of this paper.

Ferrentino, R. and Boniello, C. (2019) On the Well-Posedness for Optimization Problems: A Theoretical Investigation. Applied Mathematics, 10, 19-38. https://doi.org/10.4236/am.2019.101003