In this paper, a new augmented Lagrangian penalty function for constrained optimization problems is studied. The dual properties of the augmented Lagrangian objective penalty function for constrained optimization problems are proved. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker (KKT) condition. In particular, when the KKT condition holds for convex programming, a saddle point exists. Based on the augmented Lagrangian objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions.

Augmented Lagrangian penalty functions are an effective approach to inequality constrained optimization. Their main idea is to transform a constrained optimization problem into a sequence of unconstrained optimization problems that are easier to solve. Theories and algorithms for Lagrangian penalty functions were introduced in the works of Du et al. [
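The transformation idea can be illustrated with a minimal sketch: a constrained problem min f(x) s.t. g(x) ≤ 0 is replaced by a sequence of unconstrained problems with a growing penalty parameter. The toy problem, the quadratic penalty term, and the one-dimensional solver below are illustrative choices for exposition, not taken from this paper.

```python
# Sketch of the penalty idea: replace min f(x) s.t. g(x) <= 0 by a
# sequence of unconstrained problems min f(x) + rho * max(0, g(x))**2
# with the penalty parameter rho driven upward.

def golden_section_min(phi, a, b, tol=1e-8):
    """Minimize a unimodal function phi on [a, b] by golden-section search."""
    gr = (5 ** 0.5 - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

f = lambda x: x * x        # objective
g = lambda x: 1.0 - x      # constraint g(x) <= 0, i.e. x >= 1

x = 0.0
for rho in (1.0, 10.0, 100.0, 1000.0):
    # unconstrained subproblem for the current penalty parameter
    penalized = lambda x, rho=rho: f(x) + rho * max(0.0, g(x)) ** 2
    x = golden_section_min(penalized, -2.0, 2.0)

print(round(x, 3))  # -> 0.999, approaching the constrained minimizer x* = 1
```

Each subproblem minimizer equals rho/(1 + rho) here, so the iterates approach the constrained solution x* = 1 from the infeasible side as rho grows, which is the typical behavior of a (non-exact) quadratic penalty.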

All augmented Lagrangian functions consist of two parts, a Lagrangian function with a Lagrangian parameter and a penalty function with a penalty parameter (see [
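For reference, one classical instance of this two-part structure (the Hestenes-Powell-Rockafellar augmented Lagrangian for inequality constraints, not the new function introduced in this paper) is

```latex
L_c(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2c}\sum_{i=1}^{m}
\Bigl( \max\{0,\ \lambda_i + c\, g_i(x)\}^{2} - \lambda_i^{2} \Bigr),
```

where the multiplier vector λ plays the role of the Lagrangian parameter and c > 0 is the penalty parameter.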

In recent years, the penalty function method with an objective penalty parameter has been discussed in [

The main conclusions of this paper are that the duality gap between the optimal objective value of the dual problem and that of the original problem is zero, and that a saddle point is equivalent to the KKT condition of the original problem under convexity conditions. A global algorithm and its convergence are presented. The remainder of this paper is organized as follows. In Section 2, an augmented Lagrangian objective penalty function is defined, its dual properties are proved, and an algorithm to find a global solution to the original problem (P), with convergence, is presented. In Section 3, conclusions are given.

In this paper, the following inequality constrained optimization problem is considered:

where

Let functions

respectively. For example,

The augmented Lagrangian objective penalty function is defined as:

where

When

Define the augmented Lagrangian dual problem:

When

By (3), we have

According to (1), we have

Theorem 1. Let x be a feasible solution to (P), and (u,v) be a feasible solution to (DP). Then

Proof. According to the assumption, we have

and

Corollary 2.1. Let

By (5), if

and we know that

A saddle point

By (10), the saddle point reveals the connection between the dual problem and the original problem: the optimal solution to the original problem can be obtained from the optimal solution to the dual problem, and the zero duality gap is established in Theorem 2. The following Theorems 3 and 4 show that, under convexity conditions, saddle points are equivalent to the optimality conditions of the original problem. By (10), we have
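In the standard sense used throughout this literature, a point (x*, u*, v*) is a saddle point of a Lagrangian-type function L when x* minimizes L(·, u*, v*) while (u*, v*) maximizes L(x*, ·, ·); the paper's own saddle point notion specializes this to its augmented Lagrangian objective penalty function. The generic pair of inequalities reads

```latex
L(x^{*},u,v) \;\le\; L(x^{*},u^{*},v^{*}) \;\le\; L(x,u^{*},v^{*}),
\qquad \forall\, x,\ \forall\,(u,v)\ \text{admissible}.
```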

Hence, we have the following theorems.

Theorem 2. Let

Theorem 3. Let

Proof. According to the assumption,

and

where

And there are

By (12)-(16), let

For

Clearly, if

Theorem 4. Let

differentiable. Let

Proof. Let any

On the other hand, when

Example 2.1 Consider the problem:

When

The optimal solution to

holds. Then

Example 2.1 shows that the augmented Lagrangian objective penalty function can be as exact as the traditional exact penalty function.

For any given

In Example 2.1,

Now, a generic algorithm is developed to compute a globally optimal solution to (P); it is similar to the algorithm in [

ALOPFA Algorithm:

Step 1: Choose

Step 2: Solve

Step 3: If

Otherwise, stop and
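A generic "solve subproblem / check / update parameters" loop in the spirit of Steps 1-3 can be sketched as follows. The paper's exact parameter updates and stopping test are given in its omitted formulas, so the sketch below applies the classical multiplier-method update to a toy problem; it is an illustrative stand-in, not the ALOPFA Algorithm itself.

```python
# Toy problem: min x^2  s.t.  1 - x <= 0 (KKT point: x* = 1, u* = 2).
# Loop: minimize a classical augmented Lagrangian in x, stop when
# (nearly) feasible, otherwise update the multiplier.

def golden_section_min(phi, a, b, tol=1e-10):
    """Minimize a unimodal function phi on [a, b] by golden-section search."""
    gr = (5 ** 0.5 - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

f = lambda x: x * x       # objective
g = lambda x: 1.0 - x     # constraint g(x) <= 0

lam, c = 0.0, 10.0        # Step 1: initial multiplier and penalty parameter
for _ in range(50):
    # Step 2: minimize the augmented Lagrangian in x for fixed (lam, c)
    L = lambda x: f(x) + (max(0.0, lam + c * g(x)) ** 2 - lam ** 2) / (2 * c)
    x = golden_section_min(L, -2.0, 3.0)
    # Step 3: stop if (nearly) feasible; otherwise update the multiplier
    if max(0.0, g(x)) < 1e-8:
        break
    lam = max(0.0, lam + c * g(x))

print(round(x, 4), round(lam, 4))  # converges to x* = 1 with multiplier 2
```

Here the multiplier error contracts by a constant factor per outer iteration, so the loop terminates after a handful of subproblem solves; exact-penalty-type functions such as the one studied in this paper aim to avoid driving the penalty parameter to infinity.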

The convergence of the ALOPFA algorithm is proved in the following theorem. Let

which is called a Q-level set. We say that

Theorem 5. Let

Proof. The sequence

because

Since the Q-level set

Without loss of generality, we assume

Letting

which implies

Theorem 5 shows that the ALOPFA Algorithm is globally convergent in theory. When v is taken large enough, the ALOPFA Algorithm yields an approximate solution to (P).

This paper discusses the dual properties of, and an algorithm for, an augmented Lagrangian penalty function for constrained optimization problems. The zero duality gap of the dual problem based on the augmented Lagrangian objective penalty function is proved. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function is equivalent to the first-order Karush-Kuhn-Tucker (KKT) condition. Based on the augmented Lagrangian objective penalty function, an algorithm is presented for finding a global solution to (P), and its global convergence is proved under some conditions. Some questions about the augmented Lagrangian objective penalty function still need further study, for example, a local algorithm and exactness.

We thank the editor and the referees for their comments. This research is supported by the National Natural Science Foundation of China under Grant No. 11271329 and the Natural Science Foundation of Ningbo City under Grant No. 2016A610043 and the Natural Science Foundation of Zhejiang Province under Grant No. LY15G010007.

Zheng, Y. and Meng, Z.Q. (2017) A New Augmented Lagrangian Objective Penalty Function for Constrained Optimization Problems. Open Journal of Optimization, 6, 39-46. https://doi.org/10.4236/ojop.2017.62004