The Kuhn-Tucker theorem in nondifferential form is a well-known classical optimality criterion for convex programming problems; it holds for a convex problem when a Kuhn-Tucker vector exists. Two features of the classical theorem are worth singling out. The first is its possible "inapplicability": the Kuhn-Tucker vector may fail to exist. The second is the possible "instability" of the classical theorem with respect to errors in the initial data. This article deals with the so-called regularized Kuhn-Tucker theorem in nondifferential sequential form, which contains its classical analogue. The proof of the regularized theorem is based on the dual regularization method. The theorem is an assertion, free of regularity assumptions and stated in terms of minimizing sequences, about the possibility of approximating the solution of a convex programming problem by minimizers of its regular Lagrangian, which are constructively generated by the dual regularization method. The major distinctive property of the regularized Kuhn-Tucker theorem is that it is free of the two shortcomings of its classical analogue noted above. This circumstance opens up possibilities for applying it to the solution of various ill-posed problems of optimization, optimal control, and inverse problems.

We consider the convex programming problem

(P)

where f is a convex continuous functional, A is a linear continuous operator, h is a fixed element, the constraint functionals are convex, D is a convex closed set, and Z and H are Hilbert spaces. It is well known that the Kuhn-Tucker theorem in nondifferential form (see, e.g., [1-3]) is the classical optimality criterion for Problem (P). This theorem is true if Problem (P) has a Kuhn-Tucker vector. It is stated in terms of the solution to the convex programming problem, the corresponding Lagrange multiplier, and the regular Lagrangian of the optimization problem (here, "regular" means that the Lagrange multiplier of the objective functional is unity).

Note two fundamental features of the classical Kuhn-Tucker theorem in nondifferential form (see, e.g., [4-7]). The first feature is that this theorem is far from always being "valid": if regularity of the problem is not assumed, then, in general, the classical theorem does not hold even for the simplest finite-dimensional convex programming problems. In particular, a corresponding one-dimensional example can be found in [

The second important feature of the classical Kuhn-Tucker theorem is its instability with respect to perturbations of the initial data. This instability occurs even for the simplest finite-dimensional convex programming problems; the following problem provides a particular example.

Example 1.1. Consider the minimization of a strongly convex quadratic function of two variables on a set specified by an affine equality constraint:

The exact normal solution is. The dual problem for (1) has the form

where

and .

Its solutions are the vectors. It is easy to verify that every vector of this form is a Kuhn-Tucker vector of problem (1). For a small perturbation parameter, consider the following perturbation of problem (1):

The corresponding dual problem

has the solution.

According to the classical Kuhn-Tucker theorem, the vector

is a solution to perturbed problem (2). At the same time, this vector is merely an "approximate" solution to the original problem (1), and no convergence to the unique exact solution occurs as the perturbation tends to zero.
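The numerical data of Example 1.1 are lost in this copy. The following self-contained sketch, built on a hypothetical rank-deficient instance of our own choosing (not necessarily the one used in the original example), reproduces the same phenomenon: the Kuhn-Tucker vectors of the perturbed problem blow up, and the perturbed solutions do not converge to the exact normal solution.

```python
import numpy as np

# Hypothetical instance: minimize 0.5*||x||^2 subject to A x = b,
# where A is rank-deficient, so Kuhn-Tucker vectors form an affine line.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 2.0])
# Normal (minimum-norm) solution of the unperturbed problem: x* = (1, 1).
x_star = np.array([1.0, 1.0])

for delta in [1e-2, 1e-4, 1e-6]:
    # Perturb a single entry of A; the perturbed matrix has full rank.
    A_d = A + np.array([[0.0, 0.0], [0.0, delta]])
    # The perturbed problem has a unique feasible point, found directly.
    x_d = np.linalg.solve(A_d, b)
    # Kuhn-Tucker vector of the perturbed problem: x_d = -A_d^T lam_d.
    lam_d = np.linalg.lstsq(-A_d.T, x_d, rcond=None)[0]
    print(delta, x_d, np.linalg.norm(lam_d))
# x_d stays at (2, 0) for every delta > 0: no convergence to x* = (1, 1),
# while the Kuhn-Tucker vectors grow like 1/delta.
```

The design point is that the unperturbed constraint matrix is singular, so the dual solutions are non-unique and unbounded under perturbation, exactly the instability the example describes.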

It is natural to regard the above-mentioned features of the classical Kuhn-Tucker theorem in nondifferential form as a consequence of the classical approach long adopted in optimization theory. According to this approach, optimality conditions are traditionally written in terms of optimal elements. At the same time, it is well known that optimization problems and their duals are typically unsolvable. The above-mentioned examples from [

The so-called regularized Kuhn-Tucker theorem in nondifferential sequential form was proved for Problem (P) with a strongly convex objective functional and with parameters in the constraints in [

In contrast to [

This article consists of an introduction and four main sections. In Section 2 the convex programming problem in a Hilbert space is formulated. Section 3 contains the formulation of the convergence theorem for the dual regularization method in the case of a strongly convex objective functional, including its iterated form, and the proof of the analogous theorem when the objective functional is merely convex. In turn, in Section 4 we give the formulation of the stable sequential Kuhn-Tucker theorem for the case of a strongly convex objective functional; here we also prove the theorem for the same case in iterated form, as well as in the case of a merely convex objective functional. Finally, in Section 5 we discuss possible applications of the stable sequential Kuhn-Tucker theorem in optimal control and in ill-posed inverse problems.

Consider the convex programming Problem (P) and suppose that it is solvable. We denote its solutions by. We also assume that

where is a constant and

.

Below we use the notations:

Define the concave dual functional called the value functional

and the dual problem

In what follows the concept of a minimizing approximate solution to Problem (P) plays an important role. Recall that a sequence, , is called a minimizing approximate solution to Problem (P) if, for, and. Here is the generalized infimum:

If f is a strongly convex functional, and also if D is a bounded set, the generalized infimum can be written as

Recall that in this case the Kuhn-Tucker vector of Problem (P) is a pair such that

where is a solution to (P). It is well known that every such Kuhn-Tucker vector coincides with a solution to the dual problem (3) and, combined with, constitutes a saddle point of the Lagrangian

.
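The displayed formulas of this section are corrupted in the present copy. Under the standard notation f, A, h for the data of Problem (P) (confirmed by the description of the data collections in Section 3) and g_i for the constraint functionals (our assumption), the regular Lagrangian and the saddle-point form of the classical Kuhn-Tucker theorem read:

```latex
% Regular Lagrangian of Problem (P): the multiplier of f is unity.
L(z,\lambda,\mu) \equiv f(z) + \langle \lambda, Az - h\rangle
                 + \sum_{i=1}^{m} \mu_i\, g_i(z),
\quad z \in D,\ \lambda \in H,\ \mu \in \mathbb{R}^{m}_{+}.

% Kuhn-Tucker vector (\lambda^{*},\mu^{*}): for a solution z^{0} of (P),
f(z^{0}) = \min_{z \in D} L(z,\lambda^{*},\mu^{*}),
\qquad \mu_i^{*}\, g_i(z^{0}) = 0,\ i = 1,\dots,m;

% equivalently, (z^{0},\lambda^{*},\mu^{*}) is a saddle point:
L(z^{0},\lambda,\mu) \le L(z^{0},\lambda^{*},\mu^{*})
  \le L(z,\lambda^{*},\mu^{*})
\quad \forall\, z \in D,\ \lambda \in H,\ \mu \in \mathbb{R}^{m}_{+}.
```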

In this section we consider a dual regularization algorithm for solving Problem (P) that is stable with respect to perturbations of its input data.

Let F be the set formed of all collections of initial data for Problem (P). Each collection consists of a functional f, which is continuous and convex on D, a linear continuous operator A, an element h, and constraint functionals that are continuous and convex on D. Moreover, it holds that

where the constant L_{M} is independent of the collection. If the objective functional of Problem (P) is strongly convex, then the functional f in each collection is continuous and strongly convex on D, with a strong convexity constant that is independent of the collection.

Furthermore, we define collections and of unperturbed and perturbed data, respectively:

and where characterizes the error in the initial data and is a fixed scalar. Assume that

(4)

where is independent of and

.

Denote by (P^{0}) Problem (P) with the collection of unperturbed initial data. Assume that (P^{0}) is a solvable problem. Since

is a convex and closed set, we denote the unique solution of Problem (P^{0}) in the case of a strongly convex objective functional by. We keep the same notation for solutions of Problem (P^{0}) in the case of a merely convex objective functional.

The construction of the dual algorithm for Problem (P^{0}) relies heavily on the concept of a minimizing approximate solution in the sense of J. Warga [

where and nonnegative scalar sequences, , converge to zero. Here is the generalized infimum for Problem (P^{0}) defined in Section 2, and

.

It is obvious that, in general, where is the classical value of the problem. However, for Problem (P^{0}) defined above, we have. Also, we can assert that every minimizing approximate solution of Problem (P^{0}) obeys the limit relation

both in the case of a merely convex and in the case of a strongly convex objective functional.

Since the initial data are given approximately, instead of (P^{0}) we have the family of problems

depending on the “error”.

Define the Lagrange functional

and the concave dual functional (value functional)

If the functional f is strongly convex, then due to strong convexity of the continuous Lagrange functional, for all, where

the value is attained at a unique element.

If D is a bounded set, then the dual functional is obviously defined and finite for all elements. When the functional f is merely convex, the value is attained in this case at elements of the non-empty set

.

Denote by the unique point that furnishes the maximum to the functional

on the set.

Assume that the consistency condition

is fulfilled.

In this subsection we formulate the convergence theorem for the dual regularization method in the case of a strongly convex objective functional of Problem (P^{0}). The proof of this theorem can be found in [

Theorem 3.1. Let the objective functional of Problem (P^{0}) be strongly convex and let the consistency condition (5) be fulfilled. Then, regardless of whether or not the Kuhn-Tucker vector of Problem (P^{0}) exists, it holds that

Along with the above relations, it holds that

and, as a consequence,

If the dual problem is solvable, then it also holds that

where is the solution to the dual problem with minimal norm.

If the strongly convex functional is subdifferentiable in the sense of convex analysis on the set D, then it also holds that

In other words, regardless of whether or not the dual problem is solvable, the regularized dual algorithm is a regularizing algorithm in the sense of [
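The mechanism behind Theorem 3.1 can be observed directly on the toy problem class min ½‖x‖² subject to Ax = b (a sketch under our own assumptions, not the authors' code): Tikhonov regularization of the dual problem keeps the dual variables bounded and restores convergence of the primal minimizers even when the constraint matrix is degenerate and perturbed.

```python
import numpy as np

# Toy strongly convex problem: min 0.5*||x||^2 s.t. A x = b, with a
# rank-deficient A; the exact normal solution is x* = (1, 1).
A = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 2.0])

def regularized_dual_solution(A_d, b_d, alpha):
    """Maximize V(lam) - alpha*||lam||^2, where
    V(lam) = min_x [0.5*||x||^2 + <lam, A_d x - b_d>] is the dual
    functional; the maximizer solves (A_d A_d^T + 2*alpha*I) lam = -b_d."""
    lam = np.linalg.solve(A_d @ A_d.T + 2.0 * alpha * np.eye(len(b_d)), -b_d)
    x = -A_d.T @ lam          # unique minimizer of the Lagrangian at lam
    return x, lam

for delta in [1e-2, 1e-4, 1e-6]:
    A_d = A + np.array([[0.0, 0.0], [0.0, delta]])
    # Consistency condition: alpha(delta) -> 0 while delta/alpha(delta) -> 0.
    alpha = delta ** 0.5
    x, lam = regularized_dual_solution(A_d, b, alpha)
    print(delta, x, np.linalg.norm(lam))
# x approaches the normal solution (1, 1) and ||lam|| stays bounded, in
# contrast with the unregularized Kuhn-Tucker vectors of Example 1.1.
```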

In this subsection we formulate the convergence theorem for the iterative dual regularization method in the case of a strongly convex objective functional of Problem (P^{0}); this form is convenient for the practical solution of such problems. The proof of this theorem can be found in [

We suppose here that the set D is bounded and use the notation

where

is the sequence generated by the dual regularization algorithm of Theorem 3.1 in the case,. Here is an arbitrary sequence of positive numbers converging to zero. Suppose that the sequence is constructed according to the iterated rule

where,

and the sequences, , obey the consistency conditions

The existence of sequences and, , satisfying relations (8) is easy to verify; for example, we can take and.

Then, as shown in [

hold and, as a consequence, we have

Besides, if the strongly convex functional is subdifferentiable at the points of the set D, then we also have

These circumstances allow us to transform Theorem 3.1 into the following statement.

Theorem 3.2. Let the objective functional of Problem (P^{0}) be strongly convex, let the set D be bounded, and let the consistency conditions (8) be fulfilled. Then, regardless of whether or not the Kuhn-Tucker vector of Problem (P^{0}) exists, it holds that

Along with the above relations, it holds that

and, as a consequence,

If the dual problem is solvable, then it also holds that

where is the solution to the dual problem with minimal norm.

If the strongly convex functional is subdifferentiable (in the sense of convex analysis) on the set D, then it also holds that
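The precise form of the iterated rule (7) is not preserved in this copy. A common realization of iterative dual regularization (our assumption, not necessarily the authors' rule) performs one gradient-ascent step in the dual variable per iteration, with a slowly vanishing Tikhonov term. On the degenerate toy problem min ½‖x‖² subject to Ax = b it behaves as follows:

```python
import numpy as np

# Degenerate toy problem: min 0.5*||x||^2 s.t. A x = b; normal solution (1, 1).
A = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 2.0])

lam = np.zeros(2)
beta = 0.2                       # step size (< 2 / ||A A^T||)
for k in range(1, 20001):
    alpha_k = 1.0 / np.sqrt(k)   # slowly vanishing regularization parameter
    x = -A.T @ lam               # minimizer of the Lagrangian at lam
    # One gradient-ascent step for the Tikhonov-regularized dual functional
    # V(lam) - alpha_k*||lam||^2: its gradient is (A x - b) - 2*alpha_k*lam.
    lam = lam + beta * (A @ x - b - 2.0 * alpha_k * lam)

print(x)   # close to the normal solution (1, 1)
```

The step size is chosen below the inverse Lipschitz constant of the dual gradient, and the regularization parameter decays slowly enough for the iterates to track the regularized dual maximizers, mirroring the role of the consistency conditions (8).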

In this subsection we prove the convergence theorem for the dual regularization method in the case of a bounded set D and a merely convex objective functional of Problem (P^{0}).

Below, an important role is played by the following lemma, which provides an expression for the superdifferential of the concave value functional in the case of a convex objective functional and a bounded set D. Here, the superdifferential of a concave functional (in the sense of convex analysis) is understood as the subdifferential of the corresponding convex functional taken with the opposite sign. The proof is omitted, since it can be found, for a more general case, in [

Lemma 3.1. The superdifferential (in the sense of convex analysis) of the concave functional at a point is expressed by the formula

where is the generalized Clarke gradient of at, and the limit is understood in the sense of weak convergence in.

Further, first of all we can write the inequality

for an element

.

Then, taking into account Lemma 3.1, we obtain

where the sequence is such that

Suppose, without loss of generality, that the sequence converges weakly, as, to an element that obviously belongs to the set

Due to weak lower semicontinuity of the convex continuous functionals and boundedness of D we obtain from (11) the following inequality

where is some subsequence of the sequence.

To justify this inequality we have to note that in the case for some k the limit relation

holds despite the fact that the sequence converges only weakly to. This circumstance is explained by the specific structure of the Lagrangian (it is a weighted sum of functionals) and by the fact that the sequence is a minimizing one for it.

As a consequence of the last inequality, we obtain

In turn, the limit relations (12)-(14) and the boundedness of D lead to the equality

Further, we can write for any

the following inequalities

From here, due to the estimates (4), we obtain

or

or

or

or, because of the equality (15)

or

From the last estimate it follows that

where

.

Thus, we derive the following limit relations

The limit relations (12)-(14) and (16) make it possible to write

Further, let us denote by

any weak limit point of the sequence

,.

Then, because of the limit relations (17) and the obvious inequality

we obtain

and, as a consequence, due to the boundedness of D,

Simultaneously, since

and the inequality

holds, we can write for any due to the limit relation (15)

From the last limit relation, the consistency condition (5), the estimate (4) and boundedness of D we obtain

or

Thus, due to the boundedness of D and the weak lower semicontinuity of, we have constructed a family of elements, depending on, such that

and simultaneously

where any weak limit point of any sequence , is obviously a solution of Problem (P^{0}).

Along with the construction of a minimizing sequence for the original Problem (P^{0}), the dual algorithm under discussion produces a maximizing sequence for the corresponding dual problem. We show that the family

is a maximizing one for the dual problem, i.e., the limit relation

holds.

First of all, note that due to boundedness of D the evident estimate

is true with a constant which depends on

but does not depend on.

Since

we can write, thanks to (19), the estimates

whence we obtain

From here, due to the consistency condition (5) and the limit relations (16), we deduce that for any fixed and for any fixed there exists such that the estimate

holds.

Let us now suppose that the limit relation (18) is not true. Then there exists a sequence converging to zero such that the inequality

is fulfilled for some.

Since

for, we deduce from the last estimate that for all sufficiently large positive M the inequality

holds. This contradicts the estimate (20) obtained above, and the contradiction proves the limit relation (18).

At last, we can assert that the duality relation

for Problem (P^{0}) holds. Indeed, a similar duality relation is valid, due to Theorem 3.1 (see relation (6)), for the problem

, with a strongly convex objective functional. Writing this duality relation and passing to the limit as, we obtain, because of the boundedness of D, the duality relation (21).

In turn, from the duality relation (21), the estimate (19) and the limit relation (18) we deduce the limit relation

Thus, as the result of this subsection, the following theorem holds. To formulate it, we first introduce the notations

Theorem 3.3. Let the objective functional of Problem (P^{0}) be convex and let the consistency condition (5) be fulfilled. Then, regardless of whether or not the Kuhn-Tucker vector of Problem (P^{0}) exists, it holds for some

that

Along with the above relations, it holds that

If the dual problem is solvable, then it also holds that

where

is the solution to the dual problem with minimal norm.

In this section we first give the formulation of the stable sequential Kuhn-Tucker theorem for the case of a strongly convex objective functional. Next we prove the corresponding theorem in the form of an iterated process for the same case and, finally, prove the theorem in the case of a merely convex objective functional.

Below, the formulation of the stable sequential Kuhn-Tucker theorem for the case of a strongly convex objective functional is given. The proof of this theorem can be found in [

Theorem 4.1. Assume that is a continuous strongly convex subdifferentiable functional. For a bounded minimizing approximate solution to Problem (P^{0}) to exist (and, hence, to converge strongly to), it is necessary that there exists a sequence of dual variables

, such that

, the limit relations

are fulfilled, and the sequence

is bounded. Moreover, the latter sequence is the desired minimizing approximate solution to Problem (P^{0}); that is,

.

At the same time, the limit relations

are also valid; as a consequence,

is fulfilled. The points, may be chosen as the points

,

from Theorem 3.1. for, where, .

Conversely, for a minimizing approximate solution to Problem (P^{0}) to exist, it is sufficient that there exists a sequence of dual variables

,

such that

,

the limit relations (25) are fulfilled, and the sequence

is bounded. Moreover, the latter sequence is the desired minimizing approximate solution to Problem (P^{0}); that is,

.

If in addition the limit relations (25) are fulfilled, then (26) is also valid. Simultaneously, every weak limit point of the sequence

is a maximizer of the dual problem

.

In this subsection we prove the stable sequential Kuhn-Tucker theorem in the form of an iterated process for the case of a strongly convex objective functional. Note that the regularizing stopping rule for this iterated process, in the case when the input data of the optimization problem are specified with a fixed (finite) error, can be found in [

Theorem 4.2. Assume that the set D is bounded and f^{0}: D → R^{1} is a continuous strongly convex subdifferentiable functional. For a minimizing approximate solution to Problem (P^{0}) to exist (and, hence, to converge strongly to), it is necessary that for the sequence of dual variables

, ,

generated by iterated process (7) with the consistency conditions (8) the limit relations

are fulfilled. In this case the sequence

is the desired minimizing approximate solution to Problem (P^{0}); that is,

.

Simultaneously, the limit relation

is fulfilled.

Conversely, for a minimizing approximate solution to Problem (P^{0}) to exist, it is sufficient that for the sequence of dual variables

, generated by iterated process (7) with the consistency conditions (8), the limit relations (27) are fulfilled. Moreover, the sequence

is the desired minimizing approximate solution to Problem (P^{0}); that is,

.

Simultaneously, the limit relation (28) is valid.

Proof. To prove the necessity, we first observe that Problem (P^{0}) is solvable because of the conditions on its input data and the existence of a minimizing approximate solution. The limit relations (27), (28) of the present theorem now follow from Theorem 3.2. Further, to prove the sufficiency, we first observe that Problem (P^{0}) is solvable in view of the inclusion

the boundedness of the sequence

and the conditions imposed on the initial data of Problem (P^{0}). Hence, by what was asserted in Subsection 3.3, there exists the sequence

generated by the dual regularization algorithm of Subsection 2.2 and, as a consequence, the sequence

generated by the iterated process (7) with the consistency conditions (8); both obey the limit relations (9), (10), and (28). Thus, the sequence

is the desired minimizing approximate solution to Problem (P^{0}).

In this subsection we prove the stable sequential Kuhn-Tucker theorem in the case of a merely convex objective functional.

Theorem 4.3. Assume that the set D is bounded and is a continuous convex functional. For a minimizing approximate solution to Problem (P^{0}) to exist (and, hence, for each of its weak limit points to belong to Z^{*}), it is necessary and sufficient that there exists a sequence of dual variables

,

such that

,

and the relations

hold for some elements

.

Moreover, the sequence

is the desired minimizing approximate solution, and every weak limit point of this sequence is a solution to Problem (P^{0}). As a consequence of the limit relations (29), (30) the limit relation

holds. Simultaneously, every weak limit point of the sequence

is a maximizer of the dual problem

.

Proof. To prove the necessity, we first observe that Problem (P^{0}) is solvable because of the conditions on its input data and the existence of minimizing approximate solutions. Now, the first two limit relations (29), (30) of the present theorem follow from Theorem 3.3 if and are chosen as the points

,

and respectively. Further, due to the estimate (19) and the limit relation

,

we have

Then, taking into account the equality (see (23))

the limit relation (30), and the previously obtained limit relation

(see (22)) we can write

and, as a consequence, the limit relation (31) is valid. So we have shown that the limit relation (31) is a consequence of the limit relations (29), (30). Now, let the sequence

be bounded. Then, since the concave continuous functional V^{0} is weakly upper semicontinuous, every weak limit point of this sequence is a maximizer of the dual problem.

Now we prove the sufficiency. We first observe that the set

is nonempty in view of the inclusion

the boundedness of the sequence

and the conditions imposed on the initial data of Problem (P^{0}). Hence, problem (P^{0}) is solvable. Furthermore, since the point is a minimizer of the Lagrange functional, we can write

By the hypotheses of the theorem, this implies that

We set in this inequality and use the consistency condition

,

to obtain

,.

In addition we have

.

Using the classical properties of the weak compactness of a convex closed bounded set and the weak lower semicontinuity of a convex continuous functional, we easily deduce from the above facts that

i.e. the sequence

is a minimizing approximate solution of Problem (P^{0}).

In the remainder of this section we consider two illustrative examples connected with possible applications of the results obtained in the previous sections. Their main purpose is to show the principal possibilities of using various variants of the stable sequential Kuhn-Tucker theorem for solving optimal control and inverse problems.

First of all we consider the optimal control problem with fixed time and with functional equality and inequality constraints

Here and below, is a number characterizing the error in the initial data, and is a fixed number,

is a convex functional,

are convex functionals,

,

are Lebesgue measurable and uniformly bounded with respect to matrices,

are Lebesgue measurable and uniformly bounded with respect to vectors,

,

is a convex compact set, is a solution to the Cauchy problem

Obviously, for each control this Cauchy problem has a unique solution and all these solutions are uniformly bounded with respect to and

Assume that

whence we obtain due to the estimate (32) for some constant

Define the Lagrange functional

and the concave dual functional

Define also the notations

Here and below, we keep the notation adopted in Section 1.

Let

denote the unique point in that maximizes on this set the functional

Applying Theorem 4.3 in this situation, for example in the case of convexity of, we obtain the following result.

Theorem 5.1. For a minimizing approximate solution to Problem to exist (and, hence, for each of its weak limit points to belong to), it is necessary and sufficient that there exists a sequence of dual variables

,

such that

,

and the relations

hold for some elements

.

Moreover, the sequence

is the desired minimizing approximate solution, and every weak limit point of this sequence is a solution to Problem. As a consequence of the limit relations (33), (34) the limit relation

holds. Simultaneously, every limit point of the sequence

is a maximizer of the dual problem

.

The points, may be chosen as the points, where,.

To conclude this subsection, we note that the Pontryagin maximum principle can be used for finding optimal elements in the problem

Denote

where is the matrix with rows , and is the matrix with rows .

Then, due to the convexity of problem (35), we can assert that any of its minimizers satisfies the following maximum principle.

Theorem 5.2. The maximum relation

holds for the Lagrange multipliers, where, is the solution of the adjoint problem
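The displayed maximum relation and adjoint problem of Theorem 5.2 are corrupted in this copy. For a linear controlled system ẋ(t) = 𝒜(t)x(t) + ℬ(t)u(t), u(t) ∈ U, a standard reconstruction (our notation; the terminal condition of the adjoint system is determined by the Lagrange multipliers and the terminal part of the Lagrangian) reads:

```latex
% Adjoint system for \psi (terminal condition involves the multipliers):
\dot{\psi}(t) = -\,\mathcal{A}(t)^{\mathrm{T}}\,\psi(t),
\qquad t \in [0,T];

% maximum relation: for almost every t \in [0,T],
\bigl\langle \psi(t),\, \mathcal{B}(t)\,u^{*}(t) \bigr\rangle
  \;=\; \max_{u \in U}\,
  \bigl\langle \psi(t),\, \mathcal{B}(t)\,u \bigr\rangle .
```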

Now we consider an illustrative example of the ill-posed inverse problem of final observation for a linear parabolic equation in divergence form: recovering a distributed right-hand side of the equation, an initial function, and a boundary function on the lateral surface of the cylindrical domain for the third boundary value problem. Here we study a simplified inverse problem for the sake of compact presentation. A similar but more general inverse problem may be found in [

Let, and be convex compacts,

,

,

be a bounded domain in, ,

,

,

.

Let us consider the inverse problem of finding a triple of unknown distributed, initial, and boundary coefficients in the third boundary value problem for the following linear parabolic equation in divergence form

determined by a final observation

whose value is known approximately, at a certain. Here, similar to [

and is the angle between the external normal to and the axis, is a number characterizing the error in the initial data, and is a fixed number. The solution to the boundary value problem (36) corresponding to the desired actions is a weak solution in the sense of the class [

which we denote by.

It is easy to see that the above-formulated inverse problem of finding the normal solution from a given observation is equivalent to the following fixed-time optimal control problem of finding a minimum-norm control triple, with a strongly convex objective functional and a semi-state equality constraint:

where

The input data for the inverse problem (and, hence, for Problem ) are assumed to meet the following conditions:

1) functions

,

are Lebesgue measurable;

2) the estimates

hold, where K > 0 is a constant not depending on;

3) the boundary S is piece-wise smooth.

Denote by the approximate final observation (with error parameter) and assume that

From conditions 1)-3) and the theorem on the existence of a weak (generalized) solution of the third boundary value problem for a linear parabolic equation in divergence form [

is true, where the constant C > 0 does not depend on. These facts, together with the estimates (37), lead to the corresponding estimate for the deviation of the perturbed linear bounded operator,

from its unperturbed analog (details may be found in [

where the constant C > 0 does not depend on.

Define the Lagrange functional

with the minimizer and the concave dual functional

Let denote the unique point in that maximizes on this set the functional

Applying Theorem 4.1 in this situation of strong convexity of, we obtain the following result.

Theorem 5.3. For a bounded minimizing approximate solution to Problem to exist (and, hence, to converge strongly to as), it is necessary that there exists a sequence of dual variables, , such that, , the limit relations

are fulfilled. Moreover, the latter sequence is the desired minimizing approximate solution to Problem; that is,

.

At the same time, the limit relation

is fulfilled.

Conversely, for a minimizing approximate solution to Problem to exist, it is sufficient that there exists a sequence of dual variables, such that

, the limit relations (38) are fulfilled. Moreover, the latter sequence is the desired minimizing approximate solution to Problem; that is,

.

Besides, the limit relation (39) is also valid. Simultaneously, every weak limit point of the sequence, is a maximizer of the dual problem

.

The points, may be chosen as the points

, where,.
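To make the scheme of Theorem 5.3 concrete, the following sketch (our own one-dimensional discretization, not the authors' setting) recovers a minimum-norm initial function for a 1-D heat equation from a noisy final observation. The forward map is a severely smoothing linear operator G, and the regularized dual step is the one used in Sections 3 and 4, with the regularization parameter matched to the observation error.

```python
import numpy as np

# 1-D heat equation on (0,1) with zero boundary data: recover the initial
# function u from a noisy final observation y = G u + noise, where
# G = exp(-T * Lap_h) is the (severely smoothing) solution operator.
n, T = 50, 0.1
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -np.ones(n - 1) / h**2
Lap = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
mu, V = np.linalg.eigh(Lap)
G = V @ np.diag(np.exp(-T * mu)) @ V.T        # discrete forward operator

xs = np.linspace(h, 1.0 - h, n)
u_true = np.sin(np.pi * xs)                   # smooth "exact" initial function
rng = np.random.default_rng(0)
delta = 1e-4
y = G @ u_true + delta * rng.standard_normal(n)

# Regularized dual step: maximize V(lam) - alpha*||lam||^2 for the problem
# min 0.5*||u||^2 s.t. G u = y, i.e. solve (G G^T + 2*alpha*I) lam = -y,
# then take the Lagrangian minimizer u = -G^T lam (alpha matched to delta).
alpha = 1e-3
lam = np.linalg.solve(G @ G.T + 2.0 * alpha * np.eye(n), -y)
u_rec = -G.T @ lam

rel_err = np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true)
print(rel_err)   # small: stable recovery of the smooth component
```

Without the Tikhonov term, a direct solve of Gu = y amplifies the noise catastrophically; the matched choice of the regularization parameter is exactly the consistency condition of the dual regularization method.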

Since the set D is bounded in Problem, we can also apply, for solving our inverse problem, the regularized Kuhn-Tucker theorem in the form of an iterated process. Thus, Theorem 4.2 leads us to the following theorem.

Theorem 5.4. For a minimizing approximate solution to Problem to exist (and, hence, to converge strongly to), it is necessary that for the sequence of dual variables, , generated by the iterated process

with the consistency conditions (8) the limit relations

are fulfilled. In this case the sequence

is the desired minimizing approximate solution to Problem; that is,

.

Simultaneously, the limit relation

is fulfilled.

Conversely, for a minimizing approximate solution to Problem to exist, it is sufficient that for the sequence of dual variables, , generated by the iterated process (40) with the consistency conditions (8), the limit relations (41) are fulfilled. Moreover, the sequence

is the desired minimizing approximate solution to Problem; that is,

.

Simultaneously, the limit relation (42) is valid.