**Applied Mathematics**

Vol.10 No.01(2019), Article ID:90309,20 pages

10.4236/am.2019.101003

On the Well-Posedness for Optimization Problems: A Theoretical Investigation

Rosa Ferrentino^{1}, Carmine Boniello^{2}

^{1}Department of Economic and Statistics Sciences, University of Salerno, Fisciano, Salerno, Italy

^{2}University of Salerno, Fisciano, Salerno, Italy

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 20, 2018; Accepted: January 28, 2019; Published: January 31, 2019

ABSTRACT

In this paper, some theoretical notions of well-posedness and of well-posedness in the generalized sense for scalar optimization problems are presented and some important results are analysed. Similar notions of well-posedness, respectively for a vector optimization problem and for a variational inequality of differential type, are discussed subsequently and, among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness. Moreover, after a review of well-posedness properties, the study is extended to a scalarizing procedure that preserves the well-posedness notions listed, namely to a result, obtained with a special scalarizing function, which links the notion of pointwise well-posedness to the well-posedness of a suitable scalar variational inequality of differential type.

**Keywords:**

Well-Posedness, Hadamard and Tykhonov Well-Posedness, Vector Optimization Problems, Scalarization Function

1. Introduction

The notion of well-posedness is significant for several mathematical problems and is closely related to the stability of an optimization problem: it plays, in fact, a crucial role in the theoretical and numerical aspects of optimization theory [1] [2] . The study of well-posedness, used also in different areas such as mathematical programming, the calculus of variations and optimal control, becomes important mainly for problems in which, due to certain hypotheses, the optimization models are imprecise, or when the existing algorithms in the literature can guarantee only approximate solutions of such problems, while the exact solution may not exist or may be too difficult to compute. Under these hypotheses, the well-posedness of an optimization problem is fundamental, in the sense that it ensures the convergence of the sequence of approximate solutions, obtained through iterative techniques, to the exact solution of the problem.

Two different concepts of well-posedness are known in scalar optimization. The first, due to J. Hadamard, requires existence and uniqueness of the optimal solution and studies its continuous dependence on the data of the considered optimization problem. The second approach, introduced by A. N. Tykhonov in 1966, requires, instead, besides the existence and uniqueness of the optimal solution, the convergence of every minimizing sequence of approximate solutions to the unique minimum point. The links between Hadamard and Tykhonov well-posedness have been studied in [3] [4] [5] . There, besides uniqueness, additional structures are involved: in [6] [7] , for example, the basic ingredient is convexity.

The notion of well-posedness for a vector optimization problem is, instead, less developed: there is no commonly accepted definition of a well-posed problem in vector optimization. Some attempts in this direction have already been made [8] [9] [10] [11] , together with some comparisons with their scalar counterparts. For instance, [12] gave a survey on various aspects of well-posedness of optimization problems.

Well-posedness has also been generalized to other contexts: variational inequalities, Nash equilibria and saddle point problems, all special cases of an equilibrium problem. For instance, [13] investigated well-posedness for optimization problems with constraints defined by variational inequalities, while Margiocco et al. [5] [14] [15] discussed Tykhonov well-posedness for Nash equilibria. Finally, [16] gave a definition of well-posedness for saddle point problems and related results. [3] introduced the notion of well-posedness for variational inequality problems, based on the fact that an optimization problem can be formulated as a variational inequality problem involving the derivative of the objective function. In all these cases, the idea is an extension of the concept of minimizing sequences seen as approximate solutions.

2. Research Aims

The aim of this survey is twofold. The first aim is to recall some basic aspects of the mathematical theory of well-posedness in scalar optimization, to collect the two notions of well-posedness, Tykhonov well-posedness and Hadamard well-posedness, to give some strengthened versions of well-posedness and to show, in particular, some generalizations of the two types of well-posedness. The underlying idea is that sometimes the uniqueness of the solution can be dropped. Indeed, in situations such as linear and quadratic programming, the uniqueness of the solution is not always required; sometimes, namely, the uniqueness of the solution of a particular minimization problem is not of such importance as its stability. The second aim is to present the notion of well-posedness in vector optimization and, in particular, to verify whether the well-posedness of the vector problem is equivalent to the well-posedness of the scalarized problem or, better, to investigate the links between the well-posedness of a vector optimization problem and that of a vector variational inequality. Among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness. After a review of well-posedness properties, the authors extend the study to a scalarizing procedure that preserves the well-posedness notions listed, namely to a result, obtained with a special scalarizing function, which links the notion of pointwise well-posedness to the well-posedness of a suitable scalar variational inequality of differential type.

The authors hope that the paper will be useful for stimulating research and for providing fresh insights leading to new applications.

The paper is organized as follows. In Section 2, after the introduction, the research aims are presented, while in Section 3 some results on Tykhonov well-posedness, on Hadamard well-posedness and on their relations are analysed. In Section 4, some generalizations of the notion of well-posedness are investigated (for the case in which the solution is not unique), together with some strengthened versions of well-posedness (for instance, well-posedness in the sense of Levitin and Polyak). In Section 5, some results on the well-posedness of vector optimization problems are studied and, among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness introduced in [9] (in particular, a type of pointwise well-posedness and strong pointwise well-posedness for vector optimization problems). In the same section, basic well-posedness results for a vector variational inequality are established. Section 6 is devoted to the main results of the paper, obtained by means of a special scalarization function: the notion of pointwise well-posedness is linked to the well-posedness of a suitable scalar variational inequality of differential type, whose construction represents an interesting application of the so-called “oriented distance function”, a special scalarizing function which allows one to establish a parallelism between the well-posedness of the original vector problem and the well-posedness of the associated scalar problem. Section 7, finally, contains a general discussion of directions for future research and some concluding remarks, while the last part of the article presents the reviewed references. The emphasis is laid on papers published in the last three decades.

3. Tykhonov and Hadamard Well-Posedness

In scalar optimization, the different notions of well-posedness are based either on the behaviour of “appropriate” minimizing sequences (converging to a solution of the problem) or on the dependence of the optimal solutions on the data of the optimization problem. This section is devoted precisely to the study of these two notions of well-posedness. In particular, the authors first give a characterization of Tykhonov well-posedness and a characterization of Hadamard well-posedness for the problem of minimizing a function f on a closed and convex set K; subsequently, they show the links between the two definitions, present some extensions and summarize some known results.

1) Tykhonov well-posedness

The first notion of well-posedness of an optimization problem was introduced in 1966 by A.N. Tykhonov and later took his name.

Let $f:{R}^{n}\to R$ be a real-valued function and let K be a nonempty subset of ${R}^{n}$ . Throughout this paper, the scalar optimization problem:

$\underset{x\in K}{\mathrm{min}}f(x)$

is denoted by $P\left(f,K\right)$ and consists in finding ${x}^{*}\in K$ such that

$f\left({x}^{*}\right)=\mathrm{inf}\{f\left(x\right):x\in K\}={\mathrm{inf}}_{K}f(x)$

The set, possibly empty, of solutions of the optimization problem $P\left(f,K\right)$ is denoted by argmin $P\left(f,K\right)$ .

The optimization problem $P\left(f,K\right)$ is said to be Tykhonov well-posed if it satisfies all of the following properties:

a) existence of the solution (i.e. $P\left(f,K\right)$ has a solution),

b) uniqueness of the solution (i.e. the solution set for $P\left(f,K\right)$ is a singleton),

c) every point ${x}^{*}\in K$ such that $f\left({x}^{*}\right)$ is close to ${\mathrm{inf}}_{K}f\left(x\right)$ is a good approximation of the solution of $P\left(f,K\right)$ .

More precisely:

The problem $P\left(f,K\right)$ is said to be Tykhonov well-posed if there exists a unique ${x}^{*}\in K$ such that $f\left({x}^{*}\right)\le f\left(x\right)$ for all $x\in K$ , and if ${x}_{n}\to {x}^{*}$ for any sequence $\{{x}_{n}\}\subset K$ such that $f\left({x}_{n}\right)\to f\left({x}^{*}\right)$ (i.e. $f\left({x}_{n}\right)\to {\mathrm{inf}}_{K}f\left(x\right)$ ).

Recalling that a sequence $\{{x}_{n}\}\subseteq K$ is said to be a minimizing sequence for problem $P\left(f,K\right)$ when $f\left({x}_{n}\right)\to {\mathrm{inf}}_{K}f\left(x\right)=f\left({x}^{*}\right)$ as $n\to +\infty $ , the previous definition can be rephrased in an equivalent way [2] :

Definition 3.1: The problem $P\left(f,K\right)$ is said Tykhonov well-posed if it has, on K, a unique global minimum point, ${x}^{*}$ , and, moreover, every minimizing sequence for $P\left(f,K\right)$ converges to ${x}^{*}$ .

Definition 3.1 is motivated by the fact that, usually, every numerical method for solving $P\left(f,K\right)$ iteratively provides minimizing sequences $\{{x}_{n}\}$ for $P\left(f,K\right)$ ; such sequences are also called sequences of approximate solutions for the problem $P\left(f,K\right)$ and therefore it is important to be sure that the approximate solutions $\{{x}_{n}\}$ are not far from the (unique) minimum ${x}^{*}$ .

In other words, the Tykhonov well-posedness of the optimization problem $P\left(f,K\right)$ requires the existence and uniqueness of the minimum point ${x}^{*}$ towards which every sequence of approximate solutions of the problem converges. More precisely, to consider well-posedness of Tykhonov type, the notion of “approximating sequence” for the solutions of optimization problems is introduced and convergence of such sequences to a solution of the problem is required. For more details see [3] [6] .

Remark 3.1:

When K is compact, the uniqueness of the solution of a minimization problem $P\left(f,K\right)$ is enough to guarantee its well-posedness; there are, however, simple examples in which the uniqueness of the solution of $P\left(f,K\right)$ is not enough to guarantee its Tykhonov well-posedness, even for continuous functions.

A simple example of a problem with a unique solution but which is not Tykhonov well-posed is the following:

$f\left(x\right)=\frac{{x}^{2}}{{x}^{4}+1}$

Take $K=R$ . Then $P\left(f,K\right)$ has a unique solution at zero, namely argmin $P\left(f,K\right)=\{0\}$ , while ${x}_{n}=n$ , $n=1,2,\cdots $ provides a minimizing sequence which does not converge to this unique solution. Hence $P\left(f,K\right)$ is not Tykhonov well-posed. Therefore, for continuous functions, the Tykhonov well-posedness of an optimization problem $P\left(f,K\right)$ simply means that every minimizing sequence of $P\left(f,K\right)$ is convergent.
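As a quick numerical sanity check (a Python sketch, not part of the original analysis), one can evaluate f along the minimizing sequence ${x}_{n}=n$ and observe that the values approach the infimum 0 while the sequence itself diverges:

```python
# Sketch of the example f(x) = x^2 / (x^4 + 1) on K = R.
def f(x):
    return x**2 / (x**4 + 1)

# inf_K f = 0, attained only at x = 0, yet x_n = n is a minimizing
# sequence: f(n) -> 0 while x_n -> +infinity.
values = [f(n) for n in (1, 10, 100, 1000)]
print(values)  # decreasing towards 0
```

The printed values decrease towards 0, confirming that $\{n\}$ is minimizing even though it has no convergent subsequence.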

Another example:

Let $K=R$ . If $f\left(x\right)={x}^{2}{\text{e}}^{-x}$ , $P\left(f,K\right)$ has a unique minimum ${x}_{0}=0$ but it is not Tykhonov well-posed, since the sequence $\{{x}_{n}\}=\{n\}$ is minimizing but does not converge to ${x}_{0}=0$ .

If $f\left(x\right)={x}^{2}$ , then $P\left(f,K\right)$ is Tykhonov well-posed.

For convex functions in finite dimensions, the uniqueness of the solution is enough to guarantee Tykhonov well-posedness, while this is no longer valid in infinite dimensions [4] . In fact, the following result is known:

Proposition 3.1: ( [17] ) Let $f:K\subseteq {R}^{n}\to R$ be a convex function and let K be convex. If $P\left(f,K\right)$ has a unique solution, then $P\left(f,K\right)$ is Tykhonov well-posed.

Different characterizations of Tykhonov well-posedness for minimization problems determined by convex functions in Banach spaces can be found in [3] .

The next fundamental theorem [2] gives an alternative characterization of Tykhonov well-posed problems: it uses the set of ε-optimal solutions and states that the Tykhonov well-posedness of $P\left(f,K\right)$ can be characterized by the behaviour of $diam\left[\epsilon \text{-}\mathrm{arg}\mathrm{min}\left(f,K\right)\right]$ as $\epsilon \to 0$ .

Theorem 3.1: If the minimization problem $P\left(f,K\right)$ is Tykhonov well-posed, then

$diam\left[\epsilon \text{-}\mathrm{arg}\mathrm{min}\left(f,K\right)\right]\to 0$ as $\epsilon \to 0$

where

$\epsilon \text{-}\mathrm{arg}\mathrm{min}\left(f,K\right)=\{x\in K:f\left(x\right)\le \epsilon +{\mathrm{inf}}_{K}f\left(x\right)\}$

is the set of ε-minimizers (approximate solutions) of f over K and diam denotes the diameter of a given set.

Conversely, if f is lower semicontinuous and bounded from below on K,

$diam\left[\epsilon \text{-}\mathrm{arg}\mathrm{min}\left(f,K\right)\right]\to 0$ as $\epsilon \to 0$

implies Tykhonov well-posedness of $P\left(f,K\right)$ .
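The diameter criterion of Theorem 3.1 can be illustrated numerically. The sketch below is an approximation under stated assumptions: K is truncated to $[-50,50]$ and discretized on a uniform grid, so the computed diameters are grid versions of the exact ones. It compares the well-posed problem $f\left(x\right)={x}^{2}$ with the earlier ill-posed example $f\left(x\right)={x}^{2}/\left({x}^{4}+1\right)$ :

```python
import numpy as np

# Grid sketch of Theorem 3.1: behaviour of diam[eps-argmin(f, K)].
# Assumption: K is truncated to [-50, 50] and sampled on a uniform grid.
xs = np.linspace(-50.0, 50.0, 200001)

def eps_argmin_diam(f, eps):
    vals = f(xs)
    pts = xs[vals <= vals.min() + eps]  # grid version of eps-argmin(f, K)
    return pts.max() - pts.min()        # its diameter

# Well-posed problem: the diameter shrinks with eps (about 0.2 here).
print(eps_argmin_diam(lambda x: x**2, 1e-2))
# Ill-posed example: far-away points remain eps-optimal, so the
# diameter stays large no matter how small eps is.
print(eps_argmin_diam(lambda x: x**2 / (x**4 + 1), 1e-2))
```

For $f\left(x\right)={x}^{2}$ the ε-argmin is the small interval $[-\sqrt{\epsilon },\sqrt{\epsilon }]$ , while for the second function all points with $|x|$ large enough are ε-optimal, which is exactly why its diameter does not vanish.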

When K is closed and f is lower semicontinuous and bounded from below, it is possible to use the sets:

${L}_{K,f}\left(\epsilon \right)=\{x\in {R}^{n}:f\left(x\right)\le \mathrm{inf}\left(f,K\right)+\epsilon \text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}d\left(x,K\right)\le \epsilon \}$ , $\epsilon >0$

to introduce the notion of well-posedness of $P\left(f,K\right)$ :

Definition 3.2: Let K be closed and let $f:K\to R$ be lower semicontinuous. The minimization problem $P\left(f,K\right)$ is said to be well-posed if:

$\mathrm{inf}\{diam\text{\hspace{0.05em}}{L}_{K,f}\left(\epsilon \right),\epsilon >0\}=0$

Of course, if the uniqueness of the solution is added to any of the notions of generalized well-posedness, the corresponding non-generalized notion is obtained.

2) Hadamard well-posedness

The second notion of well-posedness is inspired by the classical idea of J. Hadamard at the beginning of the previous century: it requires existence and uniqueness of the solution of the optimization problem together with continuous dependence of the optimal solution and optimal value on the data of the problem.

Definition 3.3: The minimization problem $P\left(f,K\right)$ is said to be Hadamard well-posed if it has a unique solution ${x}^{*}\in K$ and ${x}^{*}$ depends continuously on the data of the problem.

This is the well-known condition of well-posedness considered in the study of differential equations, translated for minimum problems. The essence of this notion is that a “small” change of the data of the problem yields a “small” change of the solution.

In fact, very often the mathematical model of a phenomenon is so complicated that it is necessary to simplify it and replace it by another model which is “near” the original; at the same time, it is important to be sure that the new problem will have a solution which is “near” the original one. The well-known variational principle of Ekeland [18] , an important tool for nonlinear analysis and optimization, asserts exactly that a particular optimization problem can be replaced by another which is near the original and has a unique solution.

3) Relations between Hadamard and Tykhonov well-posedness

Almost all the literature deals with different notions of well-posedness, though especially with Tykhonov well-posedness. Some researchers have investigated the relations between these notions of well-posedness, but there is no general theory of such relations. At first sight, the two notions seem to be independent but, at least in the convex case, there are some papers showing a connection between the two properties: for instance [6] [7] [17] . For convex objective functions the two notions turn out to be essentially equivalent, as the theorems below show. The links between Hadamard and Tykhonov well-posedness have been studied in [4] [5] [7] ; there, besides uniqueness, additional structures are involved: in [6] [7] , for example, the basic ingredient is convexity. The object of this section is to describe in general terms the relations between Hadamard and Tykhonov well-posedness; a central role is played by the well-known Hausdorff convergence.

We recall the concept of Hausdorff convergence of sequences of sets.

Let D, E be subsets of ${R}^{n}$ and define

$\delta \left(E,D\right)=\mathrm{max}\{e\left(E,D\right),e\left(D,E\right)\}$

where

$e\left(E,D\right)=\underset{a\in E}{\mathrm{sup}}d\left(a,D\right)$

Definition 3.4: Let ${A}_{k}$ be a sequence of subsets of ${R}^{n}$ . We say that ${A}_{k}$ converges to $A\subseteq {R}^{n}$ in the sense of Hausdorff, and we write ${A}_{k}\to A$ , when $\delta \left({A}_{k},A\right)\to 0$ .
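For finite sets, the quantities $e\left(E,D\right)$ and $\delta \left(E,D\right)$ defined above can be computed directly. The following minimal Python sketch (with illustrative point sets chosen for this example, not taken from the text) mirrors the two formulas:

```python
import numpy as np

# Direct computation of e(E, D) and delta(E, D) for finite point sets.
def excess(E, D):
    # e(E, D) = sup over a in E of d(a, D)
    return max(min(np.linalg.norm(a - d) for d in D) for a in E)

def hausdorff(E, D):
    # delta(E, D) = max{ e(E, D), e(D, E) }
    return max(excess(E, D), excess(D, E))

# Illustrative sets in R^2 (chosen for this sketch).
E = np.array([[0.0, 0.0], [1.0, 0.0]])
D = np.array([[0.0, 0.5], [1.0, 0.0]])
print(hausdorff(E, D))  # 0.5
```

Note that the excess $e\left(E,D\right)$ alone is not symmetric; taking the maximum of the two excesses is what makes $\delta $ a distance between sets.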

The following theorems [6] show the relations between the Tykhonov and the Hadamard well-posedness:

Theorem 3.2: Let K be a closed convex subset of ${R}^{n}$ and let $f:K\to R$ be a convex continuous function with one and only one minimum point on every closed and convex subset of K. If $P\left(f,K\right)$ is Hadamard well-posed, with respect to the well-known Hausdorff convergence, then $P\left(f,K\right)$ is Tykhonov well-posed on every closed and convex subset of K.

Theorem 3.3: Let $f:{R}^{n}\to R$ be a convex function uniformly continuous on every bounded set. If $P\left(f,K\right)$ is Tykhonov well-posed on every closed and convex set, then $P\left(f,K\right)$ is Hadamard well-posed, with respect to the Hausdorff convergence.

The Tykhonov well-posedness does not, in general, imply the Hadamard well-posedness if the objective function is only continuous.

4. Some Generalizations

In the above definitions, the existence and the uniqueness of the solution towards which every minimizing sequence converges are required. The different notions of well-posedness, however, admit generalizations which do not require uniqueness of the solution. In other words, the uniqueness requirement can be relaxed and well-posed optimization problems with several solutions can be considered. Therefore, while the requirement of existence in the previous definitions is crucial, the uniqueness condition is more debatable. In fact, many problems in linear and quadratic programming and many multicriteria optimization problems are usually considered as well-posed problems, although uniqueness is usually not satisfied [1] .

More precisely, in scalar optimization problems it is difficult to guarantee the uniqueness of the optimal solutions, a uniqueness that is critical to solution stability and computation.

In particular, the concept of Tykhonov well-posedness can be extended to minimum problems without uniqueness of the optimal solutions. It becomes necessary, namely, to generalize the notion of well-posedness for a minimization problem introduced by Tykhonov, based on the fact that every minimizing sequence converges towards the unique minimum solution, and to discuss well-posedness for problems having more than one solution.

This new definition requires existence, but not uniqueness, of the solution of $P\left(f,K\right)$ and, for every minimizing sequence, the convergence of some subsequence towards some optimal solution.

Definition 4.1: The problem $P\left(f,K\right)$ is called Tykhonov well-posed in the generalized sense if every minimizing sequence for $P\left(f,K\right)$ has some subsequence converging to an optimal solution of $P\left(f,K\right)$ , i.e. to an element of $\mathrm{arg}\mathrm{min}\left(f,K\right)$ .

More precisely, the problem $P\left(f,K\right)$ is called Tykhonov well-posed in the generalized sense if $\mathrm{arg}\mathrm{min}\left(f,K\right)\ne \varnothing $ and every sequence ${x}_{n}\in K$ such that $f\left({x}_{n}\right)\to \mathrm{inf}f\left(K\right)$ has some subsequence ${y}_{n}\to y$ with $y\in \mathrm{arg}\mathrm{min}\left(f,K\right)$ .

From the definition it follows, obviously, that, if the problem $P\left(f,K\right)$ is Tykhonov well-posed in the generalized sense, then it has a non-empty compact set of solutions, i.e. $\mathrm{arg}\mathrm{min}\left(f,K\right)$ is nonempty and compact. Moreover, when $P\left(f,K\right)$ is well-posed in the generalized sense and $\mathrm{arg}\mathrm{min}\left(f,K\right)$ is a singleton (i.e. its solution is unique), then $P\left(f,K\right)$ is Tykhonov well-posed.

When $\mathrm{arg}\mathrm{min}\left(f,K\right)$ is a singleton, the previous definition reduces to the classical notion of Tykhonov well-posedness: the problem $P\left(f,K\right)$ is Tykhonov well-posed if it is Tykhonov well-posed in the generalized sense and $\mathrm{arg}\mathrm{min}\left(f,K\right)$ is a singleton; thus generalized well-posedness is really a generalization of Tykhonov well-posedness.

In order to weaken the requirement of uniqueness of the solution, other more general notions of well-posedness have been introduced, depending on the hypotheses made on f (and K). Here, the authors recall the concept of well-setness introduced in [1] .

Definition 4.2: Problem $P\left(f,K\right)$ is said to be well-set when, for every minimizing sequence

$\{{x}_{n}\}\subseteq K$ , $d({x}_{n},\mathrm{arg}\mathrm{min}\left(f,K\right))\to 0$ , as $n\to +\infty $ ,

where $\mathrm{arg}\mathrm{min}\left(f,K\right)$ denotes the set of solutions of problem $P\left(f,K\right)$ while $d\left({x}_{n},K\right)=\mathrm{inf}\{\Vert {x}_{n}-y\Vert :y\in K\}$ is the distance of the point ${x}_{n}$ from the set K.
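As an illustration of Definition 4.2, consider the hypothetical example $f\left(x\right)=\mathrm{min}\{{\left(x-1\right)}^{2},{\left(x+1\right)}^{2}\}$ on $K=R$ (our own example, not from the original text): here $\mathrm{arg}\mathrm{min}\left(f,K\right)=\{-1,1\}$ , and a minimizing sequence that alternates between the two minimizers does not converge, yet its distance to the solution set vanishes, so the problem is well-set:

```python
import numpy as np

# Hypothetical example: f(x) = min((x-1)^2, (x+1)^2), argmin(f, K) = {-1, 1}.
def f(x):
    return min((x - 1.0)**2, (x + 1.0)**2)

argmin_set = np.array([-1.0, 1.0])

def dist_to_argmin(x):
    # d(x, argmin(f, K)) for the two-point solution set
    return np.abs(argmin_set - x).min()

# An alternating minimizing sequence: it does not converge, but its
# distance to the solution set goes to 0, so P(f, K) is well-set.
seq = [(-1)**n * (1.0 + 1.0/n) for n in range(1, 200)]
print(max(dist_to_argmin(x) for x in seq[-10:]))  # small
```

The same sequence shows that the problem is also Tykhonov well-posed in the generalized sense (each tail has subsequences converging to $-1$ or to $+1$ ), while classical Tykhonov well-posedness fails for lack of uniqueness.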

The idea of the behaviour of the minimizing sequences was used by different authors also to extend this concept to strengthened notions. These notions are not suitable for numerical methods, where the function f is approximated by a family or a sequence of functions. For this reason new notions of well-posedness have been introduced and studied.

First, however, we consider two generalizations of the notion of minimizing sequence.

The first was introduced and studied in [19] , where a new notion of well-posedness was proposed that strengthened Tykhonov’s concept, as it required the convergence to the optimal solution of each sequence belonging to a larger set of minimizing sequences. Levitin-Polyak well-posedness has been investigated intensively in the literature; see, for instance, [20] [21] [22] [23] .

Konsulova and Revalski [24] studied Levitin-Polyak well-posedness for convex scalar optimization problems with functional constraints, while, more recently, [20] generalized the results of Konsulova and Revalski [24] to nonconvex optimization problems with abstract and functional constraints.

The well-posedness of the minimization problem $P\left(f,K\right)$ in the sense of Tykhonov concerns the behaviour of the function f on the set K, but it does not take into account the behaviour of f outside K [12] . Of course, one can often come across minimizing sequences that do not necessarily lie in K, and one wants to control the behaviour of these minimizing sequences as well. Levitin and Polyak in [25] considered such kinds of sequences.

Definition 4.3: Let K be a nonempty subset of ${R}^{n}$ . The sequence ${\{{x}_{n}\}}_{n=1}^{\infty}\subset {R}^{n}$ is a Levitin-Polyak minimizing sequence for the minimization problem $P\left(f,K\right)$ if

$f\left({x}_{n}\right)\to {\mathrm{inf}}_{K}f\left(x\right)$ and $d\left({x}_{n},K\right)\to 0$

where $d\left({x}_{n},K\right)=\mathrm{inf}\{\Vert {x}_{n}-y\Vert :y\in K\}$ is the distance from the point ${x}_{n}$ to the set K while $\Vert \cdot \Vert $ is the Euclidean norm.

In other words, a sequence ${\{{x}_{n}\}}_{n=1}^{\infty}$ is a Levitin-Polyak minimizing sequence for $P\left(f,K\right)$ if not only ${\{f\left({x}_{n}\right)\}}_{n=1}^{\infty}$ approaches the greatest lower bound of f over K but also the sequence ${\{{x}_{n}\}}_{n=1}^{\infty}$ tends to K.

Then, the well-posedness concept can be strengthened as follows:

Definition 4.4: The minimization problem $P\left(f,K\right)$ is called Levitin-Polyak well-posed if it has a unique solution ${x}^{*}\in K$ and, moreover, every Levitin-Polyak minimizing sequence for $P\left(f,K\right)$ converges to ${x}^{*}$ .

Of course, this definition is stronger than that of Tykhonov, since it requires that each sequence belonging to a larger set of minimizing sequences converges to the unique solution; namely, Levitin-Polyak well-posedness implies Tykhonov well-posedness.

The converse is true provided that f is uniformly continuous, but it is not necessarily true if f is only continuous. It is enough to consider

$K=R\times \{0\}\subset {R}^{2}$ , $f\left(x,y\right)={x}^{2}-{y}^{2}\left(x+{x}^{4}\right)$

and the Levitin-Polyak minimizing sequence $\{\left(n,1/n\right)\}$ , which does not converge.
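A numerical check of this counterexample (under the assumption, since the sequence is garbled in the source, that the intended sequence is ${x}_{n}=\left(n,1/n\right)$ , which leaves $K=R\times \{0\}$ while approaching it):

```python
# Check of the counterexample on K = R x {0} (assumption: the intended
# non-converging sequence is x_n = (n, 1/n)).
def f(x, y):
    return x**2 - y**2 * (x + x**4)

# On K, f(x, 0) = x^2, so inf_K f = 0 and P(f, K) is Tykhonov well-posed.
# Along (n, 1/n): f(n, 1/n) = -1/n -> 0 and d((n, 1/n), K) = 1/n -> 0,
# yet the first coordinate diverges, so Levitin-Polyak well-posedness fails.
for n in (10, 100, 1000):
    print(n, f(n, 1.0/n))
```

Algebraically, $f\left(n,1/n\right)={n}^{2}-\frac{1}{{n}^{2}}\left(n+{n}^{4}\right)=-\frac{1}{n}\to 0={\mathrm{inf}}_{K}f$ , while $d\left(\left(n,1/n\right),K\right)=1/n\to 0$ , so the sequence is Levitin-Polyak minimizing but unbounded.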

Just as Tykhonov well-posedness can be characterized by the behaviour of $diam\left[\epsilon \text{-}\mathrm{arg}\mathrm{min}\left(f,K\right)\right]$ , so Levitin-Polyak well-posedness can be characterized by the behaviour of the set:

${L}_{K,f}\left(\epsilon \right)=\{x\in {R}^{n}:d\left(x,K\right)\le \epsilon \text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}f\left(x\right)\le \mathrm{inf}\left(f,K\right)+\epsilon \}$

defined for $\epsilon >0$ and for f bounded from below on K.

In analogy with Theorem 3.1, the following result holds [3] :

Theorem 4.2: If K is closed and f is lower semicontinuous and bounded from below on K, then $diam\text{\hspace{0.05em}}{L}_{K,f}\left(\epsilon \right)\to 0$ as $\epsilon \to 0$ implies Levitin-Polyak well-posedness of $P\left(f,K\right).$

A second generalization of the usual notion of minimizing sequences is the following:

Definition 4.5: A sequence ${\{{x}_{n}\}}_{n=1}^{\infty}\subset {R}^{n}$ is said to be a generalized minimizing sequence for the minimization problem $P\left(f,K\right)$ if both of the following are fulfilled:

$d\left({x}_{n},K\right)\to 0$ and $\mathrm{lim}\mathrm{sup}f\left({x}_{n}\right)\le {\mathrm{inf}}_{K}f(x)$

Consequently, another strengthened version of well-posedness is the following:

Definition 4.6: The minimization problem $P\left(f,K\right)$ is said to be strongly well-posed if it has a unique solution ${x}^{*}\in K$ and, moreover, every generalized minimizing sequence for $P\left(f,K\right)$ converges to ${x}^{*}$ .

Obviously, in general, strong well-posedness of the problem $P\left(f,K\right)$ implies Levitin-Polyak well-posedness, which in its turn implies Tykhonov well-posedness. It is important to underline that each of the previous definitions, widely studied in many papers [3] [4] [15] , is based on the behaviour of a certain set of minimizing sequences.

The corresponding generalization of Levitin-Polyak well-posedness to the case of non-uniqueness of the solution, that is, when the uniqueness of the solution is dropped, is:

Definition 4.7: The minimization problem $P\left(f,K\right)$ is called generalized Levitin-Polyak well-posed if every Levitin-Polyak minimizing sequence $\{{x}_{n}\}$ $\left(\{{x}_{n}\}\subset K\right)$ for $P\left(f,K\right)$ has a subsequence converging to a solution of $P\left(f,K\right)$ .

Of course, any of the notions of generalized well-posedness, once the uniqueness of the solution is added, is equivalent to the corresponding non-generalized notion.

5. Well-Posedness of Vector Optimization Problems

In scalar optimization, the different notions of well-posedness are based either on the behaviour of “appropriate” minimizing sequences or on the dependence of the optimal solutions on the data of the optimization problem. In vector optimization, instead, there is no commonly accepted definition of well-posedness, but there are different notions of well-posedness of vector optimization problems. For a detailed survey on these problems it is possible to refer to [1] [8] [9] [11] [25] .

In this section, we propose some of these definitions of well-posedness for a vector optimization problem; in particular, among the various vector well-posedness notions known in the literature, the attention is focused on the concept of pointwise well-posedness, introduced in [9] .

We consider the vector optimization problem:

$VP\left(f,K\right)$ ${\mathrm{min}}_{C}f\left(x\right)$ $x\in K$

where K is a nonempty, closed, convex subset of ${R}^{n}$ , $f:K\subseteq {R}^{n}\to {R}^{l}$ is a continuous function and $C\subseteq {R}^{l}$ is a closed, convex, pointed cone with nonempty interior. We denote by $\mathrm{int}C$ the interior of C.

A point ${x}^{*}\in K$ is said to be an efficient solution or minimal solution of problem $VP\left(f,K\right)$ when:

$f\left(x\right)-f\left({x}^{*}\right)\notin -C\backslash \{0\}$ $\forall x\in K$

If, in the above definition, the cone $\tilde{C}=\{0\}\cup \mathrm{int}C$ is used instead of the cone C, ${x}^{*}$ is said to be a weak minimal solution. Then, a point ${x}^{*}\in K$ is said to be a weakly efficient solution or weak minimal solution of problem $VP\left(f,K\right)$ when:

$f\left(x\right)-f\left({x}^{*}\right)\notin -\text{\hspace{0.17em}}\mathrm{int}\text{\hspace{0.17em}}C$ $\forall x\in K$

The set of all efficient solutions (minimal solutions) of problem $VP\left(f,K\right)$ is denoted by $Eff\left(f,K\right)$ while $WEff\left(f,K\right)$ denotes the set of weakly efficient solutions (weak minimal solutions) of $VP\left(f,K\right)$ . Moreover, every minimal solution is also a weak minimal solution, but the converse is not generally true.

In this section the authors recall a notion of well-posedness that considers a single point (a fixed efficient solution) and not the whole solution set: a particular type of pointwise well-posedness and strong pointwise well-posedness for vector optimization problems. This definition can be introduced by considering, as in the scalar case, the diameter of the level sets of the function f.

Generalizing Tykhonov’s definition of well-posedness for a scalar optimization problem, the notions of well-posedness and of strong well-posedness of the vector optimization problem $VP\left(f,K\right)$ at a point ${x}^{*}\in Eff\left(f,K\right)$ are introduced in [26] , together with some conditions that guarantee well-posedness according to these definitions.

Definition 5.1: The vector optimization problem $VP\left(f,K\right)$ is said to be pointwise well-posed at the efficient solution ${x}^{*}\in K$ or Tykhonov well-posed at ${x}^{*}\in Eff\left(f,K\right)$ , if:

$\underset{\alpha >0}{\mathrm{inf}}diam\text{\hspace{0.05em}}L\left({x}^{*},k,\alpha \right)=0$ $\forall \text{\hspace{0.17em}}k\in C$

where:

$L\left({x}^{*},k,\alpha \right)=\{x\in K:f\left(x\right){\le}_{C}f\left({x}^{*}\right)+\alpha k\}=\{x\in K:f\left(x\right)\in f\left({x}^{*}\right)+\alpha k-C\}$
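Definition 5.1 can be tested numerically. In the following sketch (our own toy problem, not from the paper) we take $K=\left[-1,1\right]$ , $f\left(x\right)=\left({x}^{2},2{x}^{2}\right)$ and $C={R}^{2}_{+}$ , for which ${x}^{*}=0$ is efficient; we estimate $diam\text{\hspace{0.05em}}L\left(0,k,\alpha \right)$ on a grid and observe that it shrinks as $\alpha \to 0$ , which is the pointwise well-posedness condition:

```python
# Numerical sketch (our construction): for f(x) = (x^2, 2x^2) on K = [-1, 1]
# with C = R^2_+, the condition f(x) <=_C f(0) + alpha*k amounts to two
# scalar inequalities, so the level set is easy to tabulate on a grid.

def level_set(k, alpha, n=20001):
    grid = [-1 + 2 * i / (n - 1) for i in range(n)]
    # f(x) - f(0) <=_C alpha*k  <=>  x^2 <= alpha*k1  and  2*x^2 <= alpha*k2
    return [x for x in grid
            if x * x <= alpha * k[0] and 2 * x * x <= alpha * k[1]]

def diam(points):
    return max(points) - min(points) if points else 0.0

k = (1.0, 1.0)
diams = [diam(level_set(k, alpha)) for alpha in (1e-1, 1e-2, 1e-4)]
print(diams)  # strictly decreasing towards 0
```

Analytically the diameter equals $\sqrt{2\alpha}$ here, so the infimum over $\alpha >0$ is indeed 0.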

Definition 5.2: The vector optimization problem $VP\left(f,K\right)$ is said to be strongly pointwise well-posed at the efficient solution ${x}^{*}$ , or Tykhonov strongly well-posed at ${x}^{*}\in Eff\left(f,K\right)$ , if:

$\underset{\alpha >0}{\mathrm{inf}}diam\text{\hspace{0.05em}}{L}_{s}\left({x}^{*},k,\alpha \right)=0$ $\forall \text{\hspace{0.17em}}k\in C$

where:

${L}_{s}\left({x}^{*},k,\alpha \right)=\{x\in K:f\left(x\right){\le}_{C}f\left({x}^{*}\right)+\alpha k\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}d\left(x,K\right)\le \alpha \}$

For the sake of completeness, we recall that it is also possible to introduce another type of well-posedness of the vector optimization problem $VP\left(f,K\right)$ at a point ${x}^{*}\in Eff\left(f,K\right)$ [27] .

Definition 5.3: The vector optimization problem $VP\left(f,K\right)$ is said to be H-well-posed at a point ${x}^{*}\in Eff\left(f,K\right)$ if ${x}_{n}\to {x}^{*}$ for any sequence $\{{x}_{n}\}\subseteq K$ , such that $f\left({x}_{n}\right)\to f\left({x}^{*}\right)$ .

Definition 5.4: The vector optimization problem $VP\left(f,K\right)$ is said to be strongly H-well-posed at a point ${x}^{*}\in Eff\left(f,K\right)$ if ${x}_{n}\to {x}^{*}$ for any sequence $\{{x}_{n}\}$ such that $f\left({x}_{n}\right)\to f\left({x}^{*}\right)$ with $d\left({x}_{n},K\right)\to 0$ .

Remark 5.1:

If $\mathrm{int}C\ne \varnothing $ , then well-posedness at a point ${x}^{*}\in Eff\left(f,K\right)$ of the vector optimization problem $VP\left(f,K\right)$ according to definition 5.1 [resp. to def. 5.2] implies well-posedness according to definition 5.3 [resp. to def. 5.4]. It is easy to realize that the pointwise well-posedness of type 5.1 is stronger than the pointwise well-posedness of type 5.3 [27] .

A useful tool in the study of vector optimization problems is provided by vector variational inequalities, which, first introduced by Giannessi in 1980, have been studied intensively both because they are efficient tools for investigating vector optimization problems and because they provide a mathematical model for equilibrium problems; they provide, namely, a unified and efficient framework for a wide spectrum of applied problems.

Before proceeding, however, it is important to underline that the theory of variational inequalities provides a convenient mathematical apparatus for obtaining results relating to a large number of problems with a wide range of applications in economics, finance, and the social, pure and applied sciences. In fact, it is well known that many equilibrium problems, arising in finance, economics, transportation science and contact problems in elasticity, can be formulated in terms of variational inequalities [27] . In other words, the ideas and techniques of variational inequalities are being applied in a variety of diverse areas of science and prove to be productive and innovative.

There is a very close connection between optimization problems and variational inequalities. In fact, the well-posedness of a scalar minimization problem is linked to that of a scalar variational inequality and, in particular, to a variational inequality of differential type (i.e. one in which the operator involved is the gradient of a given function). The links between variational inequalities of differential type and optimization problems have been deeply studied in [3] [12] [15] [28] . Furthermore, by means of Ekeland’s variational principle [18] , which, as is well known, is an important tool for proving well-posedness results in optimization, a notion of well-posed scalar variational inequality has been introduced and its links with the concept of well-posed optimization problem have been investigated [3] .

In this section, vector variational inequalities of differential type are treated.

Let $f:{R}^{n}\to {R}^{l}$ be a function differentiable on an open set containing the closed convex set $K\subseteq {R}^{n}$ . The vector variational inequality problem of differential type consists in finding a point ${x}^{*}\in K$ such that:

$SVVI\left({f}^{\prime},K\right)$ ${\langle {f}^{\prime}\left({x}^{*}\right),y-{x}^{*}\rangle}_{l}\notin -\mathrm{int}C$ $\forall \text{\hspace{0.17em}}y\in K$

where ${f}^{\prime}$ denotes the Jacobian of f and ${\langle {f}^{\prime}\left({x}^{*}\right),y-{x}^{*}\rangle}_{l}$ is the vector whose components are the l inner products $\langle {{f}^{\prime}}_{i}\left({x}^{*}\right),y-{x}^{*}\rangle $ .

It is well known that $SVVI\left({f}^{\prime},K\right)$ provides a necessary condition for ${x}^{*}$ to be an efficient solution of $VP\left(f,K\right)$ . It is, instead, a sufficient condition for ${x}^{*}$ to be an efficient solution of $VP\left(f,K\right)$ if f is $\mathrm{int}C$ -convex, while, if f is C-convex, $SVVI\left({f}^{\prime},K\right)$ is a sufficient condition for ${x}^{*}$ to be a weakly efficient solution of $VP\left(f,K\right)$ . These remarks underline the links between optimization problems and variational inequalities in the vector case as well. This is a further reason to seek a suitable definition of well-posedness for a vector variational inequality which can be compared and related to the definition given for vector optimization. A notion of well-posedness is therefore introduced for the vector variational inequality problem $SVVI\left({f}^{\prime},K\right)$ , obtained by generalizing the definition of the scalar case, and the following set is defined:

${T}_{{c}^{0}}\left(\epsilon \right):=\{x\in K:{\langle {f}^{\prime}\left(x\right),y-x\rangle}_{l}\notin -\sqrt{\epsilon}\Vert y-x\Vert {c}^{0}-\mathrm{int}C,\forall \text{\hspace{0.17em}}y\in K\}$

where $\epsilon >0$ and ${c}^{0}\in \mathrm{int}C$ . ${T}_{{c}^{0}}\left(\epsilon \right)$ is a directional generalization of the set $T\left(\epsilon \right)$ of the scalar case.

Definition 5.5: The variational inequality $SVVI\left({f}^{\prime},K\right)$ is well-posed if, for every ${c}^{0}\in \mathrm{int}C$ , $e\left({T}_{{c}^{0}}\left(\epsilon \right),WEff\left(f,K\right)\right)\to 0$ as $\epsilon \to {0}^{+}$ , where $e\left(A,B\right):=\underset{a\in A}{\mathrm{sup}}d\left(a,B\right)$ denotes the excess of the set A over the set B.

The following result states the relationship between well-posed optimization problem and a well-posed variational inequality, in the vector case [12] .

Theorem 5.1: If the variational inequality $SVVI\left({f}^{\prime},K\right)$ is well-posed, then problem $VP\left(f,K\right)$ is well-posed at ${x}^{*}$ .

For C-convex functions, in particular, the well-posedness of $VP\left(f,K\right)$ and that of $SVVI\left({f}^{\prime},K\right)$ substantially coincide. To show this, it is necessary to assume that f is differentiable on an open set containing K and to observe the following:

Definition 5.6: The function $f:K\subseteq {R}^{n}\to {R}^{l}$ is said to be C-convex when:

$f\left(\lambda x+\left(1-\lambda \right)y\right)-\left[\lambda f\left(x\right)+\left(1-\lambda \right)f\left(y\right)\right]\in -C$ $\forall x,y\in K,\text{\hspace{0.17em}}\forall \lambda \in \left[0,1\right]$
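Definition 5.6 can be checked by sampling. For $C={R}^{2}_{+}$ , C-convexity amounts to the convexity of every component of f; the sketch below (our own made-up functions, not from the paper) accepts $f\left(x\right)=\left({x}^{2},{\text{e}}^{x}\right)$ and rejects $g\left(x\right)=\left({x}^{2},-{x}^{2}\right)$ :

```python
# Sampling check of Definition 5.6 for C = R^2_+ (a sketch with made-up
# functions): membership of the convexity gap in -C means every component
# is nonpositive (up to a small floating-point tolerance).

import math, random

def in_minus_C(v, tol=1e-12):
    # membership in -C for C = R^2_+: every component nonpositive
    return all(c <= tol for c in v)

def is_C_convex(f, points, lambdas):
    for x in points:
        for y in points:
            for lam in lambdas:
                z = lam * x + (1 - lam) * y
                gap = tuple(fz - (lam * fx + (1 - lam) * fy)
                            for fz, fx, fy in zip(f(z), f(x), f(y)))
                if not in_minus_C(gap):
                    return False
    return True

random.seed(0)
pts = [random.uniform(-2, 2) for _ in range(30)]
lams = [i / 10 for i in range(11)]
print(is_C_convex(lambda x: (x * x, math.exp(x)), pts, lams))   # True
print(is_C_convex(lambda x: (x * x, -x * x), pts, lams))        # False
```

The second function fails because its second component is concave, so the gap lies in C rather than in −C for intermediate values of λ.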

Lemma 1: If $f:{R}^{n}\to {R}^{l}$ is C-convex, then:

${T}_{{c}^{0}}\left(\epsilon \right)=\{x\in K:f\left(y\right)-f\left(x\right)\notin -\sqrt{\epsilon}\Vert y-x\Vert \text{\hspace{0.17em}}{c}^{0}-\mathrm{int}C,\forall \text{\hspace{0.17em}}y\in K\}$

Theorem 5.2: Let f be a C-convex function. Assume that ${c}^{0}\in \mathrm{int}C$ , and that ${T}_{{c}^{0}}\left(\epsilon \right)$ is bounded for some $\epsilon >0$ . Then $SVVI\left({f}^{\prime},K\right)$ is well-posed.

Therefore, if f is a C-convex function, the well-posedness of $SVVI\left({f}^{\prime},K\right)$ is ensured and, by theorem 3.2, substantially coincides with the well-posedness of $VP\left(f,K\right)$ .

6. Main Results

In this section, the authors, after a review of well-posedness, focus their attention on a scalarization procedure that preserves the well-posedness notions listed above; among the various scalarization procedures known in the literature, they consider the one based on the so-called “oriented distance” function from a point to a set. This special scalarizing function, introduced by Hiriart-Urruty in [29] , has been applied to the scalarization of vector optimization problems [30] . The scalarization method is, namely, a powerful tool for studying vector optimization problems.

This function allows one to establish a parallelism between the well-posedness of the original vector problem and the well-posedness of the associated scalar problem. Indeed, the authors show that one of the weakest notions of well-posedness in vector optimization is linked to the well-setness of the scalarized problem, while some stronger notions of well-posedness in the vector case are related to the Tykhonov well-posedness of the associated scalarization.

These results constitute a simple tool to show that, under some additional compactness assumptions, quasiconvex vector optimization problems are well-posed. Thus, a known result about scalar problems is extended to vector optimization, improving a previous result concerning convex vector problems.

Throughout this section we assume that $f:{R}^{n}\to {R}^{l}$ is differentiable on an open set containing the closed convex set $K\subseteq {R}^{n}$ .

Definition 6.1: For a set $A\subseteq {R}^{l}$ , let ${\Delta}_{A}:{R}^{l}\to R\cup \{\pm \infty \}$ be defined as:

${\Delta}_{A}\left(y\right)=d\left(y,A\right)-d\left(y,{A}^{c}\right)$

where $d\left(y,A\right)={\mathrm{inf}}_{a\in A}\Vert y-a\Vert $ is the distance from the point y to the set A.

Function ${\Delta}_{A}\left(y\right)$ is called the oriented distance function from the point y to the set A and it has been introduced in the framework of nonsmooth scalar optimization.

${\Delta}_{A}\left(y\right)<0$ for $y\in \mathrm{int}A$ (the interior of A), ${\Delta}_{A}\left(y\right)=0$ for $y\in bd\text{\hspace{0.05em}}A$ (the boundary of A), and ${\Delta}_{A}\left(y\right)>0$ elsewhere.
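For the concrete case $A=-{R}^{2}_{+}$ both distances in Definition 6.1 admit closed forms (the projection onto A clips positive components to zero, and for y inside A the nearest point of the complement lies on a coordinate hyperplane), so the sign pattern just stated can be verified directly. The following sketch is our own illustration, not taken from the paper:

```python
# Oriented distance Delta_A(y) = d(y, A) - d(y, A^c) for A = -R^2_+:
# the projection of y onto A is min(y, 0) componentwise, so
# d(y, A) = ||max(y, 0)||; for y in A, d(y, A^c) = -max(y).

import math

def oriented_distance(y):
    """Delta_A(y) for A = -R^2_+."""
    d_to_A = math.hypot(max(y[0], 0.0), max(y[1], 0.0))
    if d_to_A > 0.0:          # y lies in A^c, so d(y, A^c) = 0
        return d_to_A
    return max(y)             # y in A: Delta_A(y) = -d(y, A^c)

print(oriented_distance((-1.0, -2.0)))  # -1.0 : interior of A
print(oriented_distance((0.0, -3.0)))   #  0.0 : boundary of A
print(oriented_distance((1.0, 1.0)))    #  positive : outside A
```

The three outputs reproduce, in order, the negative-inside, zero-on-boundary and positive-outside behaviour of ${\Delta}_{A}$ .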

The main properties of function ${\Delta}_{A}$ are gathered in the following theorem [30] :

Theorem 6.1:

1) if $A\ne \varnothing $ and $A\ne {R}^{l}$ then ${\Delta}_{A}$ is real valued;

2) ${\Delta}_{A}$ is 1-Lipschitzian;

3) ${\Delta}_{A}\left(y\right)<0$ , $\forall \text{\hspace{0.17em}}y\in \mathrm{int}A$ , ${\Delta}_{A}\left(y\right)=0$ , $\forall \text{\hspace{0.05em}}y\in bd\text{\hspace{0.05em}}A$ and ${\Delta}_{A}\left(y\right)>0$ , $\forall \text{\hspace{0.05em}}y\in \mathrm{int}{A}^{c}$

where the notation $bd\text{\hspace{0.05em}}A$ denotes the boundary of the set A and ${A}^{c}$ the complement of the set A.

4) if A is closed, then it holds $A=\{y:{\Delta}_{A}\left(y\right)\le 0\}$ ;

5) if A is convex, then ${\Delta}_{A}$ is convex;

6) if A is a cone, then ${\Delta}_{A}$ is positively homogeneous;

7) if A is a closed convex cone, then ${\Delta}_{A}$ is nonincreasing with respect to the ordering relation induced by A on ${R}^{l}$ , i.e. the following holds:

if ${y}_{1},{y}_{2}\in {R}^{l}$ then ${y}_{1}-{y}_{2}\in A\Rightarrow {\Delta}_{A}\left({y}_{1}\right)\le {\Delta}_{A}\left({y}_{2}\right)$

if A has nonempty interior, then ${y}_{1}-{y}_{2}\in \mathrm{int}A\Rightarrow {\Delta}_{A}\left({y}_{1}\right)<{\Delta}_{A}\left({y}_{2}\right)$

The oriented distance function ${\Delta}_{A}$ , also used to obtain a scalarization of a vector optimization problem [12] [26] , allows one to establish a relationship between the well-posedness of the original vector problem and the well-posedness of the associated scalar problem. More precisely, in [12] it is shown that one of the notions of well-posedness in vector optimization can be rephrased as a suitable well-posedness of a corresponding scalar optimization problem, i.e. it is linked to the well-posedness of a suitable scalar variational inequality of differential type. The construction of this scalar variational inequality represents an interesting application of the “oriented distance” function.

It has been proved in [31] that when A is a closed, convex, pointed cone, then we have:

${\Delta}_{-A}\left(y\right)=\underset{\xi \in {A}^{\prime}\cap S}{\mathrm{max}}\langle \xi ,y\rangle $

where ${A}^{\prime}:=\{x\in {R}^{l}|\langle x,a\rangle \ge 0,\forall \text{\hspace{0.17em}}a\in A\}$ is the positive polar cone of A and S is the unit sphere in ${R}^{l}$ .
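This representation can be cross-checked numerically. In the sketch below (our own construction, with $A={R}^{2}_{+}$ , for which ${A}^{\prime}={R}^{2}_{+}$ as well) we sample $\xi $ on the quarter of the unit sphere contained in ${A}^{\prime}$ and compare the maximum of $\langle \xi ,y\rangle $ with ${\Delta}_{-A}\left(y\right)$ computed directly from Definition 6.1 via the projection onto $-{R}^{2}_{+}$ :

```python
# Numerical cross-check of Delta_{-A}(y) = max_{xi in A' ∩ S} <xi, y>
# for A = R^2_+ (so A' = R^2_+ and A' ∩ S is a quarter circle).

import math

def delta_minus_A(y):
    # direct computation of Delta_{-A} for A = R^2_+ (i.e. -A = -R^2_+)
    d_out = math.hypot(max(y[0], 0.0), max(y[1], 0.0))
    return d_out if d_out > 0.0 else max(y)

def delta_via_polar(y, n=100000):
    # max of <xi, y> over xi sampled along the quarter circle in A'
    return max(math.cos(t) * y[0] + math.sin(t) * y[1]
               for t in (0.5 * math.pi * i / (n - 1) for i in range(n)))

for y in [(-1.0, -2.0), (1.0, -5.0), (1.0, 1.0), (0.0, -3.0)]:
    assert abs(delta_minus_A(y) - delta_via_polar(y)) < 1e-3
print("max formula matches the oriented distance")
```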

The function ${\Delta}_{-A}$ is used to give scalar characterizations of some notions of efficiency for problem $VP\left(f,K\right)$ . Furthermore, some results characterize the pointwise well-posedness of problem $VP\left(f,K\right)$ through the function ${\Delta}_{-A}$ [5] . Given a point ${x}^{*}\in K$ , the following function is considered:

${\phi}_{{x}^{*}}\left(x\right)=\underset{\xi \in {C}^{\prime}\cap S}{\mathrm{max}}\langle \xi ,f\left(x\right)-f\left({x}^{*}\right)\rangle $

where ${C}^{\prime}$ denotes the positive polar of C and S the unit sphere in ${R}^{l}$ . Clearly

${\phi}_{{x}^{*}}\left(x\right)={\Delta}_{-\text{\hspace{0.17em}}C}\left(f\left(x\right)-f\left({x}^{*}\right)\right)$

The function ${\phi}_{{x}^{*}}$ is directionally differentiable [9] and hence one can consider the directional derivative

${{\phi}^{\prime}}_{{x}^{*}}\left(x;d\right)=\underset{t\to {0}^{+}}{\mathrm{lim}}\frac{{\phi}_{{x}^{*}}\left(x+td\right)-{\phi}_{{x}^{*}}\left(x\right)}{t}$

and the associated scalar problem: find ${x}^{*}\in K$ , such that:

$SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ ${{\phi}^{\prime}}_{{x}^{*}}\left({x}^{*};y-{x}^{*}\right)\ge 0$ $\forall \text{\hspace{0.17em}}y\in K$
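The directional derivative above can be approximated by forward difference quotients. The sketch below uses a toy instance of our own, not from the paper ( $f\left(x\right)=\left({x}^{2},{x}^{2}+x\right)$ on R, $C={R}^{2}_{+}$ , ${x}^{*}=0$ ), computing ${\phi}_{{x}^{*}}$ through the max representation:

```python
# Finite-difference sketch of phi'_{x*}(x; d) for a toy instance:
# f(x) = (x^2, x^2 + x), C = R^2_+, x* = 0, so phi_{x*}(x) is the max of
# <xi, f(x) - f(0)> over xi on the quarter circle C' ∩ S.

import math

def f(x):
    return (x * x, x * x + x)

def phi(x, n=20001):
    v = (f(x)[0] - f(0.0)[0], f(x)[1] - f(0.0)[1])
    return max(math.cos(t) * v[0] + math.sin(t) * v[1]
               for t in (0.5 * math.pi * i / (n - 1) for i in range(n)))

def dir_derivative(x, d, ts=(1e-2, 1e-4, 1e-6)):
    # forward difference quotients with shrinking step t
    return [(phi(x + t * d) - phi(x)) / t for t in ts]

quotients = dir_derivative(0.0, 1.0)
print(quotients)  # the quotients decrease towards 1 as t -> 0+
```

For this instance one can check by hand that the quotient equals $\sqrt{{t}^{2}+{\left(1+t\right)}^{2}}$ , so the limit, i.e. ${{\phi}^{\prime}}_{{x}^{*}}\left(0;1\right)$ , is 1.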

The solutions of problem $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ coincide with the solutions of $SVVI\left({f}^{\prime},K\right)$ .

Proposition 6.1: Let K be a convex set. If ${x}^{*}\in K$ solves problem $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ , then ${x}^{*}$ is a solution of $SVVI\left({f}^{\prime},K\right)$ . Conversely, if ${x}^{*}\in K$ solves $SVVI\left({f}^{\prime},K\right)$ , then ${x}^{*}$ solves problem $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ .

The scalar problem associated with the vector problem $VP\left(f,K\right)$ is:

$P\left({\phi}_{{x}^{*}},K\right)$ $\mathrm{min}{\phi}_{{x}^{*}}\left(x\right)$ $x\in K$

The relations between the solutions of problem $P\left({\phi}_{{x}^{*}},K\right)$ and those of problem $VP\left(f,K\right)$ are investigated in [30] . Here we refer only to the characterization of weakly efficient solutions.

Proposition 6.2: The point ${x}^{*}\in K$ is a weakly efficient solution of $VP\left(f,K\right)$ if and only if ${x}^{*}$ is a solution of $P\left({\phi}_{{x}^{*}},K\right)$ .

The proof is omitted; we refer to [2] [32] for details.
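Proposition 6.2 can be verified by brute force on a finite instance. In the sketch below (our own toy data, with $C={R}^{2}_{+}$ ; all names are ours), a point is weakly efficient exactly when it minimizes its own scalarization ${\phi}_{{x}^{*}}$ :

```python
# Finite-set verification of Proposition 6.2 for C = R^2_+:
# x* is weakly efficient  iff  min_x Delta_{-C}(f(x) - f(x*)) >= 0,
# i.e. iff x* attains the minimum of phi_{x*} (note phi_{x*}(x*) = 0).

import math

def delta_minus_C(v):
    # oriented distance to -R^2_+ via the projection (clip positives)
    d_out = math.hypot(max(v[0], 0.0), max(v[1], 0.0))
    return d_out if d_out > 0.0 else max(v)

images = {"a": (1, 3), "b": (3, 1), "c": (1, 4), "d": (2, 4)}

def weakly_efficient_direct(star):
    fx = images[star]
    return not any(all(a < b for a, b in zip(fy, fx))
                   for fy in images.values())

def minimizes_scalarization(star):
    fx = images[star]
    phi = lambda fy: delta_minus_C((fy[0] - fx[0], fy[1] - fx[1]))
    return min(phi(fy) for fy in images.values()) >= phi(fx)  # phi(fx) == 0

for star in images:
    assert weakly_efficient_direct(star) == minimizes_scalarization(star)
print("Proposition 6.2 confirmed on the toy instance")
```

The point "d" is strictly dominated, and correspondingly its scalarization takes a negative value at the dominating point.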

The well-posedness of $VP\left(f,K\right)$ can also be linked to that of $P\left({\phi}_{{x}^{*}},K\right)$ [5] [8] .

Proposition 6.3: Let f be a continuous function and let ${x}^{*}\in K$ be an efficient solution of $VP\left(f,K\right)$ . Problem $VP\left(f,K\right)$ is pointwise well-posed at ${x}^{*}$ if and only if problem $P\left({\phi}_{{x}^{*}},K\right)$ is Tykhonov well-posed.

The next proposition links the well-posedness of $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ to the pointwise well-posedness of $VP\left(f,K\right)$ . We need to recall Ekeland’s variational principle [18] : it says that there is a “nearby point” which actually minimizes a slightly perturbed given functional. More precisely, it asserts that a particular optimization problem can be replaced by another which is near the original and has a unique solution [4] . In fact, the mathematical model of a phenomenon is often so complicated that it is necessary to replace it by another model which has a solution “near” the original one.

Proposition 6.4: If $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ is pointwise well-posed at ${x}^{*}\in K$ , then problem $VP\left(f,K\right)$ is pointwise well-posed at ${x}^{*}$ .

Proof: By proposition 6.3, it is enough to prove that if $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ is pointwise well-posed at ${x}^{*}$ , then problem $P\left({\phi}_{{x}^{*}},K\right)$ is Tykhonov well-posed.

In fact, for every $\epsilon >0$ and $x\in \epsilon \text{-}\mathrm{arg}\mathrm{min}\left({\phi}_{{x}^{*}},K\right)$ , by Ekeland’s variational principle, there exists $\overline{x}$ such that:

$\Vert \overline{x}-x\Vert \le \sqrt{\epsilon}$ and ${\phi}_{{x}^{*}}\left(\overline{x}\right)\le {\phi}_{{x}^{*}}\left(y\right)+\sqrt{\epsilon}\Vert \overline{x}-y\Vert $ $\forall \text{\hspace{0.17em}}y\in K$ .

If we introduce the set

$Z\left(\epsilon \right)=\{x\in K:{\phi}_{{x}^{*}}\left(x\right)\le {\phi}_{{x}^{*}}\left(y\right)+\epsilon \Vert x-y\Vert ,\forall y\in K\}$

then, it follows that

$\epsilon \text{-}\mathrm{arg}\mathrm{min}\left({\phi}_{{x}^{*}},K\right)\subseteq Z\left(\sqrt{\epsilon}\right)+\sqrt{\epsilon}B$ .

We get, then, that $\forall \text{\hspace{0.17em}}u\in \epsilon \text{-}\mathrm{arg}\mathrm{min}\left({\phi}_{{x}^{*}},K\right)$ , there exists x such that $\Vert u-x\Vert \le \sqrt{\epsilon}$ and

${\phi}_{{x}^{*}}\left(x+t\left(y-x\right)\right)\ge {\phi}_{{x}^{*}}\left(x\right)-\sqrt{\epsilon}t\Vert y-x\Vert ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}0<t<1,\text{\hspace{0.17em}}y\in K$

Dividing by t and letting $t\to {0}^{+}$ , we obtain ${{\phi}^{\prime}}_{{x}^{*}}\left(x;y-x\right)\ge -\sqrt{\epsilon}\Vert y-x\Vert $ , so that $x\in {T}_{{x}^{*}}\left(\sqrt{\epsilon}\right)$ and:

$\epsilon \text{-}\mathrm{arg}\mathrm{min}\left({\phi}_{{x}^{*}},K\right)\subseteq {T}_{{x}^{*}}\left(\sqrt{\epsilon}\right)+\sqrt{\epsilon}B$

Since $diam\text{\hspace{0.05em}}{T}_{{x}^{*}}\left(\sqrt{\epsilon}\right)\to 0$ as $\epsilon \to 0$ , it follows that $P\left({\phi}_{{x}^{*}},K\right)$ is Tykhonov well-posed.

Now, the authors prove that the converse of the previous proposition holds under convexity assumptions, namely that it is true if f is C-convex. First, they need the following lemma:

Lemma 6.1: If $f:{R}^{n}\to {R}^{l}$ is a C-convex function, then the function ${\phi}_{{x}^{*}}$ is convex on K.

Then:

Proposition 6.5: Let f be C-convex and assume $VP\left(f,K\right)$ is pointwise well-posed at ${x}^{*}\in K$ ( ${x}^{*}$ is an efficient solution). Then $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ is pointwise well-posed at ${x}^{*}$ .

Proof: Assuming, ab absurdo, that $SVI\left({{\phi}^{\prime}}_{{x}^{*}},K\right)$ is not pointwise well-posed at ${x}^{*}$ , there exist $a>0$ and ${\epsilon}_{n}\to 0$ with $diam\text{\hspace{0.05em}}{T}_{{x}^{*}}\left({\epsilon}_{n}\right)>2a$ , and one can find some ${x}_{n}\in {T}_{{x}^{*}}\left({\epsilon}_{n}\right)$ with $\Vert {x}_{n}-{x}^{*}\Vert \ge a$ .

Without loss of generality, it is possible to put ${x}^{*}=0$ . Since ${\phi}_{{x}^{*}}$ is convex, it follows that:

${\phi}_{{x}^{*}}\left(0\right)-{\phi}_{{x}^{*}}\left({y}_{n}\right)\ge {{\phi}^{\prime}}_{{x}^{*}}\left({y}_{n},-{y}_{n}\right)$

where ${y}_{n}=a\frac{{x}_{n}}{\Vert {x}_{n}\Vert}$ . The boundedness of ${y}_{n}$ implies that one can assume ${y}_{n}\to \overline{y}\in K$ (here the closedness of K is needed). Further, since ${x}_{n}\in {T}_{{x}^{*}}\left({\epsilon}_{n}\right)$ ,

${{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},-{x}_{n}\right)\ge -{\epsilon}_{n}\Vert {x}_{n}\Vert $

Since

$\begin{array}{c}{{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},-{x}_{n}\right)=\underset{t\to {0}^{+}}{\mathrm{lim}}\frac{{\phi}_{{x}^{*}}\left({x}_{n}-t{x}_{n}\right)-{\phi}_{{x}^{*}}\left({x}_{n}\right)}{t}\\ =\underset{t\to {0}^{+}}{\mathrm{lim}}-\frac{{\phi}_{{x}^{*}}\left({x}_{n}+\left(-t\right)\left({x}_{n}\right)\right)-{\phi}_{{x}^{*}}\left({x}_{n}\right)}{-t}\\ =-{{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},{x}_{n}\right)\end{array}$

and

${{\phi}^{\prime}}_{{x}^{*}}\left({y}_{n},-{y}_{n}\right)=-{{\phi}^{\prime}}_{{x}^{*}}\left({y}_{n},{y}_{n}\right)$

from the continuity of ${\phi}_{{x}^{*}}$ , one obtains

${{\phi}^{\prime}}_{{x}^{*}}\left({y}_{n},{y}_{n}\right)={{\phi}^{\prime}}_{{x}^{*}}\left(\frac{a{x}_{n}}{\Vert {x}_{n}\Vert},\frac{a{x}_{n}}{\Vert {x}_{n}\Vert}\right)\le \frac{a}{\Vert {x}_{n}\Vert}{{\phi}^{\prime}}_{{x}^{*}}\left(\frac{a{x}_{n}}{\Vert {x}_{n}\Vert},{x}_{n}\right)\le \frac{a}{\Vert {x}_{n}\Vert}{{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},{x}_{n}\right)$

The last inequality follows from the convexity of ${{\phi}^{\prime}}_{{x}^{*}}$ [33] .

Hence

$-{{\phi}^{\prime}}_{{x}^{*}}\left({y}_{n},-{y}_{n}\right)\le \frac{a}{\Vert {x}_{n}\Vert}{{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},{x}_{n}\right)$

it follows

$\frac{a}{\Vert {x}_{n}\Vert}{{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},-{x}_{n}\right)\le {{\phi}^{\prime}}_{{x}^{*}}\left({y}_{n},-{y}_{n}\right)$

and so

${\phi}_{{x}^{*}}\left(0\right)-{\phi}_{{x}^{*}}\left({y}_{n}\right)\ge \frac{a}{\Vert {x}_{n}\Vert}{{\phi}^{\prime}}_{{x}^{*}}\left({x}_{n},-{x}_{n}\right)\ge -a{\epsilon}_{n}$

Sending n to $+\infty $ , we obtain ${\phi}_{{x}^{*}}\left(0\right)-{\phi}_{{x}^{*}}\left(\overline{y}\right)\ge 0$ , which contradicts Tykhonov well-posedness by Proposition 6.3. This proves the thesis.

7. Concluding Remarks and Future Perspectives for Research

In this paper, the authors have reviewed and studied some properties of well-posedness, a field that has attracted the attention of many researchers for various types of problems and that demands considerable intellectual effort. In reality, almost all the literature deals directly with specific notions of well-posedness, but there is no general study of the relations between them for different problems; research is therefore much needed, mostly in this area, to develop and foster new and innovative applications in various branches of the pure and applied sciences. The authors have given only a brief review of this fast growing field and hope that the general theories and results surveyed in this paper can be used to formulate and outline some connections with other mathematical fields.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Ferrentino, R. and Boniello, C. (2019) On the Well-Posedness for Optimization Problems: A Theoretical Investigation. Applied Mathematics, 10, 19-38. https://doi.org/10.4236/am.2019.101003

References

- 1. Bednarczuk, E. and Penot, J.P. (1992) On the Positions of the Notions of Well-Posed Minimization Problems. Bollettino dell’Unione Matematica Italiana, 7, 665-683.
- 2. Dontchev, A. and Zolezzi, T. (1993) Well-Posed Optimization Problems. Lecture Notes in Mathematics, Vol. 1543, Springer-Verlag, Berlin. https://doi.org/10.1007/BFb0084195
- 3. Lucchetti, R. and Patrone, F. (1981) A Characterization of Tykhonov Well-Posedness for Minimum Problems, with Applications to Variational Inequalities. Numerical Functional Analysis and Optimization, 3, 461-476. https://doi.org/10.1080/01630568108816100
- 4. Revalski, J.P. (1987) Generic Well-Posedness in Some Classes of Optimization Problems. Acta Universitatis Carolinae: Mathematica et Physica, 28, 117-125.
- 5. Revalski, J.P. (1995) Various Aspects of Well-Posedness of Optimization Problems. In: Lucchetti, R. and Revalski, J., Eds., Recent Developments in Well-Posed Variational Problems, Kluwer Academic Publishers, Dordrecht. https://doi.org/10.1007/978-94-015-8472-2_10
- 6. Lucchetti, R. and Patrone, F. (1982) Hadamard and Tykhonov Well-Posedness of a Certain Class of Convex Functions. Journal of Mathematical Analysis and Applications, 88, 204-215. https://doi.org/10.1016/0022-247X(82)90187-1
- 7. Lucchetti, R. and Patrone, F. (1982) Some Aspects of the Connection between Hadamard and Tykhonov Well-Posedness of Convex Problems. Bollettino dell’Unione Matematica Italiana, Sez. C, 6, 31-43.
- 8. Bednarczuk, E. (1987) Well Posedness of Vector Optimization Problems. In: Jahn, J. and Krabs, W., Eds., Recent Advances and Historical Development of Vector Optimization Problems, Lecture Notes in Economics and Mathematical Systems, Vol. 294, Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-46618-2_2
- 9. Dentcheva, D. and Helbig, S. (1996) On Variational Principles, Level Sets, Well-Posedness and ε-Solutions in Vector Optimization. Journal of Optimization Theory and Applications, 89, 325-349. https://doi.org/10.1007/BF02192533
- 10. Huang, X.X. (2000) Extended Well-Posedness Properties of Vector Optimization Problems. Journal of Optimization Theory and Applications, 106, 165-182. https://doi.org/10.1023/A:1004615325743
- 11. Lucchetti, R. (1987) Well-Posedness towards Vector Optimization. Lecture Notes in Economics and Mathematical Systems, Vol. 294, Springer-Verlag, Berlin.
- 12. Revalski, J.P. (1988) Well-Posedness of Optimization Problems—A Survey. In: Papini, P.L., Ed., Functional Analysis and Approximation, Bagni di Lucca.
- 13. Lignola, M.B. and Morgan, J. (2000) Well-Posedness for Optimization Problems with Constraints Defined by Variational Inequalities Having a Unique Solution. Journal of Global Optimization, 16, 57-67. https://doi.org/10.1023/A:1008370910807
- 14. Margiocco, M., Patrone, F. and Pusillo, C.L. (1997) A New Approach to Tykhonov Well-Posedness for Nash Equilibria. Optimization, 40, 385-400. https://doi.org/10.1080/02331939708844321
- 15. Lucchetti, R. and Patrone, F. (1982) Some Properties of Well-Posed Variational Inequalities Governed by Linear Operators. Numerical Functional Analysis and Optimization, 83, 5.
- 16. Cavazzuti, E. and Morgan, J. (1983) Well-Posed Saddle Point Problems. In: Hiriart-Urruty, J.B., Oettli, W. and Stoer, J., Eds., Optimization, Theory and Algorithms, Marcel Dekker, New York, 61-76.
- 17. Zolezzi, T. (1981) A Characterization of Well-Posed Optimal Control Systems. SIAM Journal on Control and Optimization, 19, 604-616.
- 18. Ekeland, I. (1974) On the Variational Principle. Journal of Mathematical Analysis and Applications, 47, 324-353. https://doi.org/10.1016/0022-247X(74)90025-0
- 19. Levitin, E.S. and Polyak, B.T. (1966) Convergence of Minimizing Sequences in Conditional Extremum Problems. Soviet Mathematics Doklady, 7, 764-767.
- 20. Huang, X.X. and Yang, X.Q. (2007) Levitin-Polyak Well-Posedness of Constrained Vector Optimization Problems. Journal of Global Optimization, 37, 287-304. https://doi.org/10.1007/s10898-006-9050-z
- 21. Huang, X.X., Yang, X.Q. and Zhu, D.L. (2009) Levitin-Polyak Well-Posedness of Variational Inequalities with Functional Constraints. Journal of Global Optimization, 44, 159-174. https://doi.org/10.1007/s10898-008-9310-1
- 22. Jiang, B., Zhang, J. and Huang, X.X. (2009) Levitin-Polyak Well-Posedness of Generalized Quasivariational Inequalities with Functional Constraints. Nonlinear Analysis: Theory, Methods and Applications, 70, 1492-1530. https://doi.org/10.1016/j.na.2008.02.029
- 23. Li, S.J. and Li, M.H. (2009) Levitin-Polyak Well-Posedness of Vector Equilibrium Problems. Mathematical Methods of Operations Research, 69, 125-140. https://doi.org/10.1007/s00186-008-0214-0
- 24. Konsulova, A.S. and Revalski, J.P. (1994) Constrained Convex Optimization Problems—Well-Posedness and Stability. Numerical Functional Analysis and Optimization, 15, 889-907. https://doi.org/10.1080/01630569408816598
- 25. Loridan, P. (1995) Well-Posedness in Vector Optimization. In: Lucchetti, R. and Revalski, J., Eds., Recent Developments in Well-Posed Variational Problems, Kluwer Academic Publishers, Dordrecht, 331. https://doi.org/10.1007/978-94-015-8472-2_7
- 26. Gorohovik, V.V. (1990) Convex and Nonsmooth Optimization. Navuka i Tèkhnika, Minsk, 240. (In Russian)
- 27. Huang, X.X. (2001) Pointwise Well-Posedness of Perturbed Vector Optimization Problems in a Vector-Valued Variational Principle. Journal of Optimization Theory and Applications, 108, 671-684. https://doi.org/10.1023/A:1017595610700
- 28. Kinderlehrer, D. and Stampacchia, G. (1980) An Introduction to Variational Inequalities and Their Applications. Pure and Applied Mathematics, Vol. 88, Academic Press, New York.
- 29. Hiriart-Urruty, J.-B. (1979) Tangent Cones, Generalized Gradients and Mathematical Programming in Banach Spaces. Mathematics of Operations Research, 4, 79-97. https://doi.org/10.1287/moor.4.1.79
- 30. Zaffaroni, A. (2003) Degrees of Efficiency and Degrees of Minimality. SIAM Journal on Control and Optimization, 42, 1071-1086. https://doi.org/10.1137/S0363012902411532
- 31. Ginchev, I. and Hoffmann, A. (2002) Approximation of Set-Valued Functions by Single-Valued One. Differential Inclusions, Control and Optimization, 22, 33-66.
- 32. Luc, D.T. (1989) Theory of Vector Optimization. Springer, Berlin. https://doi.org/10.1007/978-3-642-50280-4
- 33. Rockafellar, R.T. (1997) Convex Analysis. Princeton University Press, Princeton.