Applied Mathematics, 2010, 1, 8-17
doi:10.4236/am.2010.11002 Published Online May 2010 (http://www.SciRP.org/journal/am)
Copyright © 2010 SciRes. AM
A Modified Limited SQP Method For Constrained
Optimization*
Gonglin Yuan1, Sha Lu2, Zengxin Wei1
1Department of Mathematics and Information Science, Guangxi University, Nanning, China
2School of Mathematics Science, Guangxi Teacher’s Education University, Nanning, China
E-mail: glyuan@gxu.edu.cn
Received December 23, 2009; revised February 24, 2010; accepted March 10, 2010
Abstract
In this paper, a modified variant of the limited memory SQP method is presented for constrained optimization. The method exploits not only gradient information but also function value information. Moreover, compared with the standard SQP method, the proposed method requires no more function or derivative evaluations and hardly more storage or arithmetic operations. Under suitable conditions, global convergence is established.
Keywords: Constrained Optimization, Limited Memory Method, SQP Method, Global Convergence
1. Introduction
Consider the constrained optimization problem

    min  f(x)
    s.t. h_i(x) = 0,  i ∈ E,
         g_j(x) ≤ 0,  j ∈ I,                                          (1)

where f, h_i, g_j: R^n → R are twice continuously differentiable, E = {1, 2, ..., m}, I = {m+1, ..., m+l}, and l ≥ 0 is an integer. Let the Lagrangian function be defined by

    L(x, λ, μ) = f(x) + λ^T g(x) + μ^T h(x),                          (2)

where λ and μ are multipliers. Obviously, the Lagrangian function L is a twice continuously differentiable function. Let S be the feasible point set of problem (1). We define I* to be the set of subscripts of those inequality constraints which are active at x*, i.e., I* = {i | i ∈ I and g_i(x*) = 0}.
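To fix ideas with a small illustration (not taken from the paper), consider min x_1² + x_2² subject to the single equality constraint h_1(x) = x_1 + x_2 − 1 = 0, so that E = {1} and I = ∅. The Lagrangian (2) reduces to L(x, μ) = x_1² + x_2² + μ(x_1 + x_2 − 1); setting ∇_x L = 0 and h_1(x) = 0 gives 2x_1 + μ = 0, 2x_2 + μ = 0, x_1 + x_2 = 1, i.e., the Kuhn-Tucker pair x* = (1/2, 1/2), μ* = −1.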
It is well known that SQP methods for solving twice continuously differentiable nonlinear programming problems are essentially Newton-type methods for finding Kuhn-Tucker points of nonlinear programming problems. In recent years, SQP methods have been in vogue [1-8]: Powell [5] gave the BFGS-Newton-SQP method for nonlinearly constrained optimization. He gave sufficient conditions under which the SQP method would yield a 2-step Q-superlinear convergence rate (assuming convergence), but did not show that his modified BFGS method satisfied these conditions. Coleman and Conn [2] gave a new locally convergent quasi-Newton-SQP method for equality constrained nonlinear programming problems; local 2-step Q-superlinear convergence was established. Sun [6] proposed a quasi-Newton-SQP method for general LC^1 constrained problems. He presented sufficient conditions for local convergence and for superlinear convergence, but did not prove whether the modified BFGS-quasi-Newton-SQP method satisfies these conditions. We know that the BFGS update exploits only gradient information, while the available function value information of the Lagrangian function (2) is neglected.
If x ∈ R^n is unrestricted, then problem (1) reduces to an unconstrained optimization problem (UNP). There are many methods [9-13] for the UNP, among which the BFGS method is one of the most effective quasi-Newton methods. The normal BFGS update also exploits only gradient information, while the available function value information is neglected. In recent years, many modified BFGS methods (see [14-19]) have been proposed for the UNP. In particular, many efficient attempts have been made to modify the usual quasi-Newton methods using both gradient and function value information (e.g. [19,20]). Lately, in order to get a higher order accuracy in approximating the second-order curvature of the objective function, Wei, Yu, Yuan, and Lian [18] proposed a new BFGS-type method for the UNP, and the reported numerical results show that its average performance is better than that of the standard BFGS method. The superlinear convergence of this modified method has been established for uniformly convex functions, and its global convergence was established by Wei, Li, and Qi [20]. Motivated by their ideas, Yuan and Wei [21] presented a modified BFGS method
*This work is supported by Chinese NSF grant 10761001, the Scientific Research Foundation of Guangxi University (Grant No. X081082), and Guangxi SF grant 0991028.
which ensures that the update matrices are positive definite for general convex functions; moreover, its global convergence is proved for general convex functions.
The limited memory BFGS (L-BFGS) method (see [22]) is an adaptation of the BFGS method for large-scale problems. Its implementation is almost identical to that of the standard BFGS method; the only difference is that the inverse Hessian approximation is not formed explicitly, but is defined by a small number of BFGS updates. It often provides a fast rate of linear convergence and requires minimal storage.
Inspired by the modified method of [21], we combine this technique with the limited memory technique and give a limited memory SQP method for constrained optimization. The global convergence of the proposed method will be established for general convex functions. The major contribution of this paper is an extension of the method in [21], designed for the UNP, to constrained optimization problems. Unlike the standard SQP method, a distinguishing feature of the proposed method is that a triple {s_i, y_i*, Ã_i} is stored, where

    s_i = x_{i+1} − x_i,  y_i* = ∇_x L(z_{i+1}) − ∇_x L(z_i) + max{Ã_i, 0} s_i,

z_{i+1} = (x_{i+1}, λ_{i+1}, μ_{i+1}), z_i = (x_i, λ_i, μ_i), λ_i and μ_i are the multipliers associated with the Lagrangian function at x_i, λ_{i+1} and μ_{i+1} are the multipliers associated with the Lagrangian function at x_{i+1}, and Ã_i is a scalar related to the Lagrangian function values. Moreover, a limited memory SQP method is proposed. Compared with the standard SQP method, the presented method requires no more function or derivative evaluations, and hardly more storage or arithmetic operations.
This paper is organized as follows. In the next section, we briefly review some modified BFGS methods and the L-BFGS method for the UNP. In Section 3, we describe the modified limited memory SQP algorithm. The global convergence is established in Section 4. In the last section, we give a conclusion. Throughout this paper, ‖·‖ denotes the Euclidean norm of a vector or matrix.
2. Modified BFGS Update and the L-BFGS
Update for UNP
We will state the modified BFGS update and the
L-BFGS update for UNP in the following subsections,
respectively.
2.1. Modified BFGS Update
Quasi-Newton methods for solving the UNP often need to update an iterate matrix B_k. Traditionally, {B_k} satisfies the following quasi-Newton equation:

    B_{k+1} S_k = y_k,                                                (3)

where S_k = x_{k+1} − x_k and y_k = ∇f(x_{k+1}) − ∇f(x_k). The most famous update of B_k is the BFGS formula

    B_{k+1} = B_k − (B_k S_k S_k^T B_k)/(S_k^T B_k S_k) + (y_k y_k^T)/(y_k^T S_k).      (4)
Let H_k be the inverse of B_k; then the inverse update formula corresponding to (4) is

    H_{k+1} = (I − (S_k y_k^T)/(y_k^T S_k)) H_k (I − (y_k S_k^T)/(y_k^T S_k)) + (S_k S_k^T)/(y_k^T S_k),      (5)

which is the dual form of the DFP update formula in the sense of the interchanges H_k ↔ B_k, H_{k+1} ↔ B_{k+1}, and S_k ↔ y_k.
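For concreteness, the updates (4) and (5) can be sketched in a few lines of Python (a minimal illustration, not the authors' code; numpy is assumed, and the curvature condition y_k^T S_k > 0 is taken for granted):

    import numpy as np

    def bfgs_update(B, s, y):
        # Direct update (4): B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s).
        Bs = B @ s
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

    def bfgs_inverse_update(H, s, y):
        # Inverse update (5): H+ = V^T H V + rho s s^T, where rho = 1/(y^T s)
        # and V = I - rho y s^T; this is the form reused later in (9)-(10).
        rho = 1.0 / (y @ s)
        V = np.eye(len(s)) - rho * np.outer(y, s)
        return V.T @ H @ V + rho * np.outer(s, s)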
It has been shown that the BFGS method is among the most effective quasi-Newton methods from a computational point of view. Its convergence and its characterizations for convex minimization have been studied in [23-27]. Great efforts have been made to find a quasi-Newton method which not only possesses global convergence but also is superior to the BFGS method from a computational point of view [15-17,20,28-31]. For general functions, it is now known that the BFGS method may fail for non-convex functions with inexact line search [32], and Mascarenhas [33] showed the non-convergence of the standard BFGS method even with exact line search. In order to obtain global convergence of the BFGS method without a convexity assumption on the objective function, Li and Fukushima [15,16] made a slight modification to the standard BFGS method. We now briefly state their work [15]. Li and Fukushima (see [15]) advised a new quasi-Newton equation of the following form:

    B_{k+1} S_k = y_k^1,  where y_k^1 = y_k + t_k ‖∇f(x_k)‖ S_k

and t_k > 0 is determined by

    t_k = 1 + max{ −(y_k^T S_k)/‖S_k‖², 0 }.

Under appropriate conditions, these two methods [15,16] are globally and superlinearly convergent for nonconvex minimization problems.
In order to get a better approximation of the Hessian of the objective function, Wei, Yu, Yuan, and Lian (see [18]) also proposed a new quasi-Newton equation:

    B_{k+1} S_k = y_k^2,  where y_k^2 = y_k + A_k S_k

and

    A_k = (2[f(x_k) − f(x_{k+1})] + [∇f(x_{k+1}) + ∇f(x_k)]^T S_k)/‖S_k‖².

Then the new BFGS update formula is

    B_{k+1} = B_k − (B_k S_k S_k^T B_k)/(S_k^T B_k S_k) + (y_k^2 (y_k^2)^T)/((y_k^2)^T S_k).      (6)

Note that this quasi-Newton formula (6) contains both gradient and function value information at the current and the previous step. This modified BFGS update formula differs from the standard BFGS update, and a higher order approximation of ∇²f(x) can be obtained (see [18,20]).
It is well known that the matrices B_k are very important for convergence when they are positive definite [24,25]. It is not difficult to see that the condition (y_k^2)^T S_k > 0 ensures that the update matrix B_{k+1} from (6) inherits the positive definiteness of B_k. However, this condition can be guaranteed only when the objective function is uniformly convex. If f is a general convex function, then (y_k^2)^T S_k and y_k^T S_k may equal 0; in this case, the positive definiteness of the update matrix cannot be ensured. We conclude that, for general convex functions, the positive definiteness of the update matrices generated by (4) and (6) cannot be guaranteed.
In order to obtain positive definiteness of B_{k+1} for general convex functions, based on the definitions of y_k^2 and A_k, Yuan and Wei [21] gave a modified BFGS update, i.e., the modified update formula is defined by

    B_{k+1} = B_k − (B_k S_k S_k^T B_k)/(S_k^T B_k S_k) + (y_k^3 (y_k^3)^T)/((y_k^3)^T S_k),      (7)

where y_k^3 = y_k + Ā_k S_k and Ā_k = max{A_k, 0}. The corresponding quasi-Newton equation is

    B_{k+1} S_k = y_k^3,                                              (8)

and the condition (y_k^3)^T S_k > 0 holds for a general convex function f (see [21] for details). Therefore, the update matrix B_{k+1} from (7) inherits the positive definiteness of B_k for general convex functions.
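As a sketch of how (7)-(8) might be realized (illustrative only; f and grad_f are hypothetical user-supplied callables, and bfgs_update is the routine from the sketch above):

    def modified_bfgs_pair(f, grad_f, x_old, x_new):
        # Builds (S_k, y_k^3) with y_k^3 = y_k + max{A_k, 0} S_k as in (7)-(8).
        S = x_new - x_old
        y = grad_f(x_new) - grad_f(x_old)
        A = (2.0 * (f(x_old) - f(x_new))
             + (grad_f(x_new) + grad_f(x_old)) @ S) / (S @ S)
        return S, y + max(A, 0.0) * S

The pair (S_k, y_k^3) is then fed to the BFGS formula, e.g. B = bfgs_update(B, S, y3), so that positive definiteness is preserved for general convex f.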
2.2. Limited Memory BFGS-Type Method
The limited memory BFGS (L-BFGS) method (see [22]) is an adaptation of the BFGS method for large-scale problems. In the L-BFGS method, the matrix H_{k+1} is obtained by updating a basic matrix H_0 m̃ (m̃ > 0) times using the BFGS formula with the data of the previous m̃ iterations. The standard BFGS correction (5) has the following form:

    H_{k+1} = V_k^T H_k V_k + ρ_k S_k S_k^T,                          (9)

where ρ_k = 1/(y_k^T S_k), V_k = I − ρ_k y_k S_k^T, and I is the unit matrix. Thus, H_{k+1} in the L-BFGS method has the following form:
    H_{k+1} = V_k^T H_k V_k + ρ_k S_k S_k^T
            = [V_k^T ··· V_{k−m̃+1}^T] H_0 [V_{k−m̃+1} ··· V_k]
              + ρ_{k−m̃+1} [V_k^T ··· V_{k−m̃+2}^T] S_{k−m̃+1} S_{k−m̃+1}^T [V_{k−m̃+2} ··· V_k]
              + ···
              + ρ_k S_k S_k^T.                                        (10)
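In practice, (10) is never formed as a matrix; the product H_{k+1} v is computed directly from the stored pairs. The following sketch uses the classical two-loop recursion from the L-BFGS literature (see, e.g., [22]); the diagonal choice of H_0 is an assumption of the illustration:

    import numpy as np

    def lbfgs_apply(v, pairs, h0=1.0):
        # Computes H_{k+1} v as in (10), where pairs = [(S_i, y_i), ...] holds
        # the last m-tilde correction pairs in chronological order.
        q = v.astype(float)
        rhos = [1.0 / (y @ s) for (s, y) in pairs]
        alphas = []
        for (s, y), rho in reversed(list(zip(pairs, rhos))):
            a = rho * (s @ q)
            alphas.append(a)
            q = q - a * y
        r = h0 * q                      # H_0 = h0 * I, a common cheap choice
        for ((s, y), rho), a in zip(zip(pairs, rhos), reversed(alphas)):
            b = rho * (y @ r)
            r = r + (a - b) * s
        return r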
3. Modified SQP Method
In this section, we will state the normal SQP method and
the modified limited memory SQP method, respectively.
3.1. Normal SQP Method
The first-order Kuhn-Tucker condition of (1) is

    ∇f(x) + ∇g(x)λ + ∇h(x)μ = 0,
    g_j(x) ≤ 0, λ_j ≥ 0, λ_j g_j(x) = 0, for j ∈ I,
    h(x) = 0.                                                         (11)
The system (11) can be represented by the following system:

    H(z) = 0,                                                         (12)

where z = (x, λ, μ) and H: R^{n+m+l} → R^{n+m+l} is defined by

    H(z) = ( ∇f(x) + ∇g(x)λ + ∇h(x)μ,  min{λ, −g(x)},  h(x) ),        (13)

where the min is taken componentwise.
Since f, g, and h are continuously differentiable functions, H(z) is clearly continuous; because of the componentwise min it need not be continuously differentiable, but for every d ∈ R^{n+m+l} the directional derivative H′(z; d) of H(z) exists.
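For illustration, H(z) of (13) can be evaluated as follows (a sketch; the callables grad_f, g, jac_g, h, jac_h are hypothetical user-supplied functions, with Jacobians stored row-wise):

    import numpy as np

    def kkt_residual(x, lam, mu, grad_f, g, jac_g, h, jac_h):
        # H(z) of (13) for z = (x, lam, mu); a root of this map is a
        # Kuhn-Tucker point of problem (1).
        stationarity = grad_f(x) + jac_g(x).T @ lam + jac_h(x).T @ mu
        complementarity = np.minimum(lam, -g(x))   # componentwise min{lambda, -g(x)}
        return np.concatenate([stationarity, complementarity, h(x)])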
Denote the index sets by

    α(z) = {i | λ_i > −g_i(x)}                                        (14)

and

    β(z) = {i | λ_i ≤ −g_i(x)}.                                       (15)
Under the complementarity condition, it is clear that α(z) is an index set of strongly active inequality constraints, and β(z) is an index set of weakly active and inactive inequality constraints. In terms of these sets, the directional derivative along the direction d = (d_x, d_λ, d_μ) is given as follows:

    H′(z; d) = ( G d,
                 (−∇g_i(x)^T d_x)_{i ∈ α(z)},
                 (min{d_{λ_i}, −∇g_i(x)^T d_x})_{i ∈ β(z)},
                 ∇h(x)^T d_x ),                                       (16)
where G is the matrix whose elements are the partial derivatives of ∇_x L(z) with respect to x, λ, and μ, respectively. If min{d_{λ_i}, −∇g_i(x)^T d_x} = d_{λ_i} holds for every i ∈ β(z), then H′(z; d) = W(z) d with

    W(z) = [ V           ∇g_α(x)   ∇g_β(x)   ∇h(x) ]
           [ −∇g_α(x)^T  0         0         0     ]
           [ 0           0         I         0     ],                 (17)
           [ ∇h(x)^T     0         0         0     ]

where V = ∇²_xx L(z) and ∇g_α, ∇g_β collect the columns ∇g_i with i ∈ α(z) and i ∈ β(z), respectively.
By (33) in [6], we know that the system

    W_k d_k = −H(z_k),                                                (18)

where d_k = (d_x^k, d_λ^k, d_μ^k) and W_k = W(z_k), defines the Kuhn-Tucker condition of problem (1); it also defines the Kuhn-Tucker condition of the following quadratic programming problem QP(z_k, V_k):

    min  ∇f(x_k)^T s + (1/2) s^T V_k s
    s.t. g_i(x_k) + ∇g_i(x_k)^T s = 0,  i ∈ α(z_k),
         g_i(x_k) + ∇g_i(x_k)^T s ≤ 0,  i ∈ β(z_k),
         h(x_k) + ∇h(x_k)^T s = 0,                                    (19)

where s = x − x_k and V_k = ∇²_xx L(z_k).
Generally, suppose that B_k^1 is an estimate of V_k; B_k^1 can be updated by the BFGS quasi-Newton formula

    B_{k+1}^1 = B_k^1 − (B_k^1 s_k s_k^T B_k^1)/(s_k^T B_k^1 s_k) + (y_k y_k^T)/(y_k^T s_k),      (20)

where s_k = x_{k+1} − x_k, y_k = ∇_x L(z_{k+1}) − ∇_x L(z_k), z_{k+1} = (x_{k+1}, λ_{k+1}, μ_{k+1}), z_k = (x_k, λ_k, μ_k), λ_k and μ_k are the multipliers associated with the Lagrangian function at x_k, and λ_{k+1} and μ_{k+1} are the multipliers associated with the Lagrangian function at x_{k+1}. In particular, when we use the update formula (20) in (19), the above quadratic programming problem can be written as QP(z_k, B_k^1):
    min  ∇f(x_k)^T s + (1/2) s^T B_k^1 s
    s.t. g_i(x_k) + ∇g_i(x_k)^T s = 0,  i ∈ α(z_k),
         g_i(x_k) + ∇g_i(x_k)^T s ≤ 0,  i ∈ β(z_k),
         h(x_k) + ∇h(x_k)^T s = 0.                                    (21)
Suppose that (s, λ, μ) is a Kuhn-Tucker triple of the subproblem QP(z_k, B_k^1); then, obviously, s = 0 if (x_k, λ, μ) is a Kuhn-Tucker triple of (1).
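A sketch of solving QP(z_k, B_k^1) in (21) is given below; for simplicity all inequality constraints are kept as inequalities, scipy is assumed, and a general-purpose solver is used where a dedicated QP code would normally be preferred (an illustration only, not the authors' implementation):

    import numpy as np
    from scipy.optimize import minimize

    def solve_qp_subproblem(grad_fk, B, g_k, Jg_k, h_k, Jh_k):
        # min grad_fk^T s + 0.5 s^T B s
        # s.t. g_k + Jg_k s <= 0  and  h_k + Jh_k s = 0   (cf. (21))
        cons = [{'type': 'ineq', 'fun': lambda s: -(g_k + Jg_k @ s)},
                {'type': 'eq',   'fun': lambda s: h_k + Jh_k @ s}]
        res = minimize(lambda s: grad_fk @ s + 0.5 * s @ B @ s,
                       np.zeros(len(grad_fk)),
                       jac=lambda s: grad_fk + B @ s,
                       constraints=cons, method='SLSQP')
        return res.x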
3.2. Modified Limited Memory SQP Method
The normal limited memory BFGS formula of the quasi-Newton-SQP method with H_k for constrained optimization is defined by

    H_{k+1} = V_k^T H_k V_k + ρ_k s_k s_k^T
            = [V_k^T ··· V_{k−m̃+1}^T] H_0 [V_{k−m̃+1} ··· V_k]
              + ρ_{k−m̃+1} [V_k^T ··· V_{k−m̃+2}^T] s_{k−m̃+1} s_{k−m̃+1}^T [V_{k−m̃+2} ··· V_k]
              + ···
              + ρ_k s_k s_k^T,                                        (22)

where ρ_k = 1/(y_k^T s_k), V_k = I − ρ_k y_k s_k^T, and I is the unit matrix. To maintain the positive definiteness of the limited memory BFGS matrix, some researchers suggested discarding the correction pair {s_k, y_k} whenever y_k^T s_k > 0 fails to hold (e.g. [34]). Another technique was proposed by Powell [35], in which y_k is replaced by

    ȳ_k = y_k,                          if y_k^T s_k ≥ 0.2 s_k^T B_k s_k,
    ȳ_k = θ_k y_k + (1 − θ_k) B_k s_k,  otherwise,

where

    θ_k = 0.8 s_k^T B_k s_k / (s_k^T B_k s_k − s_k^T y_k)

and B_k = H_k^{−1} in (22). However, if the Lagrangian function L(x, λ, μ) is only a general convex function, then y_k^T s_k may equal 0; in this case, the positive definiteness of the update matrix H_{k+1} in (22) cannot be guaranteed.
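Powell's damping rule can be sketched as follows (illustrative; B is the current Hessian approximation):

    def powell_damped_y(B, s, y):
        # Returns y_bar from Powell's rule [35]; the choice of theta guarantees
        # s^T y_bar >= 0.2 s^T B s, so the update stays positive definite.
        sBs = s @ (B @ s)
        sy = s @ y
        if sy >= 0.2 * sBs:
            return y
        theta = 0.8 * sBs / (sBs - sy)
        return theta * y + (1.0 - theta) * (B @ s)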
Does there exist a limited memory SQP method which ensures that the update matrices are positive definite for a general convex Lagrangian function L(x, λ, μ)? This paper gives a positive answer. Let

    Ã_k = (2[L(z_k) − L(z_{k+1})] + [∇_x L(z_{k+1}) + ∇_x L(z_k)]^T s_k)/‖s_k‖².

Considering the discussion of the above section, we examine Ã_k for a general convex Lagrangian function L(x, λ, μ) in the following cases to state our motivation.
Case i: If Ã_k ≥ 0, then for y_k* = y_k + Ã_k s_k we have

    s_k^T y_k* = s_k^T y_k + Ã_k ‖s_k‖² ≥ s_k^T y_k ≥ 0,              (23)

where s_k^T y_k ≥ 0 follows from the convexity of L.

Case ii: If Ã_k < 0, we get

    0 > Ã_k = (2[L(z_k) − L(z_{k+1})] + [∇_x L(z_{k+1}) + ∇_x L(z_k)]^T s_k)/‖s_k‖²
            ≥ (−2∇_x L(z_{k+1})^T s_k + [∇_x L(z_{k+1}) + ∇_x L(z_k)]^T s_k)/‖s_k‖²
            = −(s_k^T y_k)/‖s_k‖²,                                    (24)

which means that s_k^T y_k > 0 holds. Then we present our modified limited memory SQP formula:
    H_{k+1} = V_k*^T H_k V_k* + ρ_k* s_k s_k^T
            = [V_k*^T ··· V_{k−m̃+1}*^T] H_0 [V_{k−m̃+1}* ··· V_k*]
              + ρ_{k−m̃+1}* [V_k*^T ··· V_{k−m̃+2}*^T] s_{k−m̃+1} s_{k−m̃+1}^T [V_{k−m̃+2}* ··· V_k*]
              + ···
              + ρ_k* s_k s_k^T,                                       (25)

where ρ_k* = 1/((y_k*)^T s_k), V_k* = I − ρ_k* y_k* s_k^T, and y_k* = y_k + max{Ã_k, 0} s_k. It is not difficult to see that the modified limited memory SQP formula (25) contains both the gradient and function value information of the Lagrangian function at the current and the previous step whenever Ã_k > 0 holds.
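The quantities entering (25) can be assembled as in the following sketch (L and grad_x_L are hypothetical callables evaluating the Lagrangian and its x-gradient at z = (x, λ, μ)):

    def modified_sqp_triple(L, grad_x_L, z_old, z_new, s):
        # Builds the stored triple (s_k, y_k*, A_tilde_k) used by the method.
        y = grad_x_L(z_new) - grad_x_L(z_old)
        A_tilde = (2.0 * (L(z_old) - L(z_new))
                   + (grad_x_L(z_new) + grad_x_L(z_old)) @ s) / (s @ s)
        y_star = y + max(A_tilde, 0.0) * s   # as in (25); s^T y* > 0 for convex L
        return s, y_star, A_tilde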
Let B_k be the inverse of H_k; more generally, suppose that B_k is an estimate of V_k. Then the above quadratic programming problem (19) can be written as QP(z_k, B_k):

    min  ∇f(x_k)^T s + (1/2) s^T B_k s
    s.t. g_i(x_k) + ∇g_i(x_k)^T s = 0,  i ∈ α(z_k),
         g_i(x_k) + ∇g_i(x_k)^T s ≤ 0,  i ∈ β(z_k),
         h(x_k) + ∇h(x_k)^T s = 0.                                    (26)

Suppose that (s, λ, μ) is a Kuhn-Tucker triple of the subproblem QP(z_k, B_k); then, obviously, s = 0 if (x_k, λ, μ) is a Kuhn-Tucker triple of (1).
Now we state our algorithm as follows.

Modified limited memory SQP algorithm 1 (M-L-SQP-A1)

Step 0: Start with an initial point z_0 = (x_0, λ_0, μ_0) and an estimate H_0 of V_0^{−1}, where V_0 = ∇²_xx L(z_0) and H_0 is a symmetric and positive definite matrix; choose positive constants δ, σ with 0 < δ < σ < 1, and a positive integer m_0. Set k := 0.

Step 1: For the given z_k and H_k, solve the subproblem

    min  ∇f(x_k)^T s + (1/2) s^T H_k^{−1} s
    s.t. g_i(x_k) + ∇g_i(x_k)^T s = 0,  i ∈ α(z_k),
         g_i(x_k) + ∇g_i(x_k)^T s ≤ 0,  i ∈ β(z_k),
         h(x_k) + ∇h(x_k)^T s = 0,                                    (27)

and obtain the unique optimal solution d_k.

Step 2: Choose α_k by the modified weak Wolfe-Powell (MWWP) step-size rule

    L(z_k + α_k d_k) ≤ L(z_k) + δ α_k ∇_x L(z_k)^T d_k                (28)

and

    ∇_x L(z_k + α_k d_k)^T d_k ≥ σ ∇_x L(z_k)^T d_k;                  (29)

then let x_{k+1} = x_k + α_k d_k.

Step 3: If z_{k+1} satisfies a prescribed termination criterion for (18), stop. Otherwise, go to Step 4.

Step 4: Let m̃ = min{k + 1, m_0}. Update H_0 m̃ times by formula (25) to get H_{k+1}.

Step 5: Set k := k + 1 and go to Step 1.

Clearly, the above algorithm is as simple as the limited memory SQP method from a storage and cost point of view at each iteration.
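Step 2 can be realized with a standard bracketing search; the sketch below enforces (28)-(29) along the direction d_k (a rough illustration under the stated constants, not the authors' implementation):

    import numpy as np

    def mwwp_step(phi, dphi, delta=0.1, sigma=0.9, max_iter=50):
        # phi(a) = L(z_k + a d_k), dphi(a) = grad_x L(z_k + a d_k)^T d_k.
        phi0, dphi0 = phi(0.0), dphi(0.0)
        lo, hi, alpha = 0.0, np.inf, 1.0
        for _ in range(max_iter):
            if phi(alpha) > phi0 + delta * alpha * dphi0:      # (28) fails: shrink
                hi = alpha
            elif dphi(alpha) < sigma * dphi0:                  # (29) fails: grow
                lo = alpha
            else:
                return alpha
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
        return alpha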
In the following, we assume that the algorithm updates B_k, the inverse of H_k. The M-L-SQP-A1 with Hessian approximation B_k can be stated as follows.

Modified limited memory SQP algorithm 2 (M-L-SQP-A2)

Step 0: Start with an initial point z_0 = (x_0, λ_0, μ_0) and an estimate B_0 of V_0 = ∇²_xx L(z_0), where B_0 is a symmetric and positive definite matrix; choose positive constants δ, σ with 0 < δ < σ < 1, and a positive integer m_0. Set k := 0.

Step 1: For the given z_k and B_k, solve the subproblem QP(z_k, B_k) and obtain the unique optimal solution d_k.

Step 2: Let m̃ = min{k + 1, m_0}. Update B_k with the triples {s_i, y_i*, Ã_i}_{i=k−m̃+1}^k, i.e., for l = k − m̃ + 1, …, k, compute
    B_{l+1}^k = B_l^k − (B_l^k s_l s_l^T B_l^k)/(s_l^T B_l^k s_l) + (y_l* (y_l*)^T)/(s_l^T y_l*),      (30)

where s_l = x_{l+1} − x_l, y_l* = y_l + max{Ã_l, 0} s_l, and B_{k−m̃+1}^k = B_0 for all k; the new matrix is B_{k+1} = B_{k+1}^k.
Note that M-L-SQP-A1 and M-L-SQP-A2 are mathe-
matically equivalent. In the next section, we will estab-
lish the global convergence of M-L-SQP-A2.
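Step 2 of M-L-SQP-A2 then amounts to replaying the stored triples through (30), as in the following sketch (bfgs_update is the routine from the Section 2.1 sketch; B0 plays the role of B_{k−m̃+1}^k):

    def rebuild_B(B0, triples):
        # Applies (30) once per stored triple (s_l, y_l*, A_tilde_l),
        # l = k - m_tilde + 1, ..., k, and returns B_{k+1}.
        B = B0
        for s, y_star, _ in triples:
            B = bfgs_update(B, s, y_star)
        return B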
4. Convergence Analysis of M-L-SQP-A2

Let x* be a local optimal solution of problem (1) and z* = (x*, λ*, μ*) be the corresponding Kuhn-Tucker triple. In order to get the global convergence of M-L-SQP-A2, the following assumptions are needed.
Assumption A. (i) f, h_i, and g_i are twice continuously differentiable functions for all x ∈ S, and S is bounded.
(ii) The gradients {∇h_i(x*), i ∈ E} ∪ {∇g_i(x*), i ∈ I*} are positively linearly independent.
(iii) (Strict complementarity) λ_j* > 0 for all j ∈ I*.
(iv) s^T V* s > 0 for all s ≠ 0 with ∇h_i(x*)^T s = 0, i ∈ E, and ∇g_j(x*)^T s = 0, j ∈ I*, where V* = ∇²_xx L(z*).
(v) {z_k} converges to z*, where ∇_x L(z*) = 0.
(vi) The Lagrangian function L(z) is convex for all z ∈ S.
Assumptions A (i) and (vi) imply that there exists a constant H̄ > 0 such that

    ‖∇²_xx L(z)‖ ≤ H̄,  for all z ∈ S.                                (31)

Due to the strict complementarity Assumption A (iii), in a neighborhood of z* the subproblem (26) is equivalent to the following equality constrained quadratic programming problem:
    min  ∇f(x_k)^T s + (1/2) s^T B_k s
    s.t. g_i(x_k) + ∇g_i(x_k)^T s = 0,  i ∈ I*,
         h(x_k) + ∇h(x_k)^T s = 0.                                    (32)
Without loss of generality for the local convergence analysis, we may assume that there are only active constraints in (1). Then (18) becomes the following system with B_k in place of V_k:

    [ B_k         ∇g(x_k)   ∇h(x_k) ] [ d_x ]      [ ∇_x L(z_k) ]
    [ ∇g(x_k)^T   0         0       ] [ d_λ ]  = − [ g(x_k)     ]  = −H(z_k).      (33)
    [ ∇h(x_k)^T   0         0       ] [ d_μ ]      [ h(x_k)     ]
In the case of only active constraints, we can suppose that

    W_k = [ V_k         ∇g(x_k)   ∇h(x_k) ]
          [ ∇g(x_k)^T   0         0       ]                           (34)
          [ ∇h(x_k)^T   0         0       ]

and

    D_{k,H} = [ B_k         ∇g(x_k)   ∇h(x_k) ]
              [ ∇g(x_k)^T   0         0       ];                      (35)
              [ ∇h(x_k)^T   0         0       ]

when B_k is close to V_k, D_{k,H} is close to W_k.
Lemma 4.1 Let Assumption A hold. Then there exists a positive number M_1 such that

    ‖y_k*‖²/(s_k^T y_k*) ≤ M_1,  k = 0, 1, 2, ….
Proof. By Assumption A, there exists a positive number M_0 such that (see [36])

    ‖y_k‖²/(s_k^T y_k) ≤ M_0,  k ≥ 0.                                 (36)

Since the function L(z) is convex, we have

    L(z_{k+1}) ≥ L(z_k) + ∇_x L(z_k)^T s_k  and  L(z_k) ≥ L(z_{k+1}) − ∇_x L(z_{k+1})^T s_k;

these two inequalities together with the definition of Ã_k imply that

    |Ã_k| ≤ (s_k^T y_k)/‖s_k‖².                                       (37)

Using the definition of y_k*, we get

    s_k^T y_k* = s_k^T y_k + max{Ã_k, 0} s_k^T s_k ≥ s_k^T y_k        (38)

and

    ‖y_k*‖ ≤ ‖y_k‖ + max{Ã_k, 0} ‖s_k‖ ≤ ‖y_k‖ + ‖y_k‖ = 2‖y_k‖,      (39)

where the second inequality of (39) follows from (37) and the Cauchy-Schwarz inequality s_k^T y_k ≤ ‖s_k‖‖y_k‖. Combining (38), (39), and (36), we obtain

    ‖y_k*‖²/(s_k^T y_k*) ≤ 4‖y_k‖²/(s_k^T y_k) ≤ 4M_0.

Let M_1 = 4M_0; then the conclusion of this lemma follows. The proof is complete.
Lemma 4.2 Let B_{k+1} be generated by (30). Then we have

    det(B_{k+1}) = det(B_{k−m̃+1}^k) ∏_{l=k−m̃+1}^{k} (s_l^T y_l*)/(s_l^T B_l^k s_l),      (40)

where det(B) denotes the determinant of B.
Proof. To begin with, we take the determinant on both sides of (20):

    det(B_{k+1}^1) = det(B_k^1) det( I − (s_k s_k^T B_k^1)/(s_k^T B_k^1 s_k) + ((B_k^1)^{−1} y_k y_k^T)/(y_k^T s_k) )
                   = det(B_k^1) [ (1 − (s_k^T B_k^1 s_k)/(s_k^T B_k^1 s_k)) (1 + (y_k^T (B_k^1)^{−1} y_k)/(y_k^T s_k))
                                  + ((s_k^T y_k)/(s_k^T B_k^1 s_k)) ((s_k^T y_k)/(y_k^T s_k)) ]
                   = det(B_k^1) (y_k^T s_k)/(s_k^T B_k^1 s_k),

where the second equality follows from the formula (see, e.g., [37], Lemma 7.6)

    det(I + u_1 u_2^T + u_3 u_4^T) = (1 + u_1^T u_2)(1 + u_3^T u_4) − (u_1^T u_4)(u_2^T u_3).

Therefore, there is also a simple expression for the determinant of (30):

    det(B_{k+1}) = det(B_{k−m̃+1}^k) ∏_{l=k−m̃+1}^{k} (s_l^T y_l*)/(s_l^T B_l^k s_l).

Then we complete the proof.
Lemma 4.3 Let Assumption A hold. Then there exists a positive constant β_1 such that

    ‖s_k‖ ≥ β_1 λ_k,  where λ_k = −(∇_x L(z_k)^T d_k)/‖d_k‖.

Proof. By Assumption A and (31), we have

    [∇_x L(z_{k+1}) − ∇_x L(z_k)]^T d_k = α_k d_k^T ( ∫_0^1 V(z_k + t α_k d_k) dt ) d_k ≤ H̄ α_k ‖d_k‖².

On the other hand, using (29), we get

    [∇_x L(z_{k+1}) − ∇_x L(z_k)]^T d_k ≥ (σ − 1) ∇_x L(z_k)^T d_k = (1 − σ) λ_k ‖d_k‖.

Therefore ‖s_k‖ = α_k ‖d_k‖ ≥ ((1 − σ)/H̄) λ_k; let β_1 = (1 − σ)/H̄. The proof is complete.
Using Assumption A, it is not difficult to get the following lemma.

Lemma 4.4 Let Assumption A hold. Then the sequence {L(z_k)} monotonically decreases and z_k ∈ S for all k ≥ 0. Moreover,

    Σ_{k=0}^∞ α_k (−∇_x L(z_k)^T d_k) < +∞.
Similar to Lemma 2.6 in [38], it is not difficult to get the following lemma; we also give the proof process here.

Lemma 4.5 If a sequence of nonnegative numbers m_k (k = 0, 1, 2, …) satisfies

    ∏_{j=0}^k m_j ≥ c_1^{k+1},  k = 0, 1, 2, …,                       (41)

for some constant c_1 > 0, then lim sup_{k→∞} m_k > 0.

Proof. We prove the result by contradiction. Assume that lim sup_{k→∞} m_k = 0. Then, for any ε_1 with 0 < ε_1 < c_1, there exists k_1 > 0 such that m_k ≤ ε_1 for all k ≥ k_1. Hence, for all k ≥ k_1,

    c_1^{k+1} ≤ ∏_{j=0}^k m_j ≤ ( ∏_{j=0}^{k_1−1} m_j ) ε_1^{k−k_1+1},

so that (c_1/ε_1)^{k+1} ≤ ( ∏_{j=0}^{k_1−1} m_j ) ε_1^{−k_1} for all k ≥ k_1. Since c_1/ε_1 > 1, the left-hand side tends to +∞ as k → ∞, while the right-hand side is a fixed constant; this is a contradiction. Thus lim sup_{k→∞} m_k > 0.
Lemma 4.6 Let {x_k} be generated by M-L-SQP-A2 and let Assumption A hold. If lim inf_{k→∞} ‖∇_x L(z_k)‖ > 0, then there exists a constant ε_0 > 0 such that

    ∏_{j=0}^k λ_j ≥ ε_0^{k+1},  for all k ≥ 0.
Proof. Assume that lim inf_{k→∞} ‖∇_x L(z_k)‖ > 0, i.e., there exists a constant c_2 > 0 such that

    ‖∇_x L(z_k)‖ ≥ c_2,  k = 0, 1, 2, ….                              (42)

We first prove that the update matrix B_{k+1} is always generated by the update formula (30), i.e., B_{k+1} inherits the positive definiteness of B_k; equivalently, s_k^T y_k* > 0 always holds. For k = 0 this conclusion holds at hand. For all k ≥ 1, assume that B_k is positive definite. We deduce that s_k^T y_k* > 0 from the following three cases.

Case 1. Ã_k > 0. By the definition of y_k*, the convexity of L, and Assumption A, we have

    s_k^T y_k* = s_k^T y_k + Ã_k s_k^T s_k ≥ Ã_k ‖s_k‖² > 0.

Case 2. Ã_k < 0. By the definition of y_k*, (24), and Assumption A, we get s_k^T y_k* = s_k^T y_k > 0.

Case 3. Ã_k = 0. By the definition of y_k*, (29), Assumption A, d_k = −B_k^{−1} ∇_x L(z_k), and the positive definiteness of B_k, we obtain

    s_k^T y_k* = s_k^T y_k ≥ −(1 − σ) α_k ∇_x L(z_k)^T d_k = (1 − σ) α_k d_k^T B_k d_k > 0.

So s_k^T y_k* > 0 in all cases, and the update matrix B_{k+1} is always generated by the update formula (30).
Taking the trace operation on both sides of (30), we get

    Tr(B_{k+1}) = Tr(B_{k−m̃+1}^k) − Σ_{l=k−m̃+1}^k ‖B_l^k s_l‖²/(s_l^T B_l^k s_l) + Σ_{l=k−m̃+1}^k ‖y_l*‖²/(s_l^T y_l*),      (43)

where Tr(B) denotes the trace of B. Repeating this trace operation, we have
this trace operation, we have
.
||||||||
)(
||||||||
)()(
0
2
0
2
0
2
1
~
2
1
~
1
~
1







k
ll
T
l
l
k
lll
T
l
ll
l
T
l
l
k
mkl
ll
T
l
ll
k
mkl
mk
kk
ys
y
sBs
sB
BTr
ys
y
sBs
sB
BTrBTr
(44)
Combining (42), (44), d_j = −B_j^{−1} ∇_x L(z_j), and Lemma 4.1, we obtain

    Tr(B_{k+1}) ≤ Tr(B_0) − Σ_{j=0}^k c_2²/(∇_x L(z_j)^T H_j ∇_x L(z_j)) + (k+1) M_1.      (45)

Since B_{k+1} is positive definite, we have Tr(B_{k+1}) > 0. By (45), we obtain

    Σ_{j=0}^k c_2²/(∇_x L(z_j)^T H_j ∇_x L(z_j)) ≤ Tr(B_0) + (k+1) M_1      (46)

and

    Tr(B_{k+1}) ≤ Tr(B_0) + (k+1) M_1.                                (47)
By the geometric-arithmetic mean value inequality and (46), we get

    ∏_{j=0}^k ∇_x L(z_j)^T H_j ∇_x L(z_j) ≥ [ (k+1) c_2² / (Tr(B_0) + (k+1) M_1) ]^{k+1}.      (48)
Using Lemma 4.2, (30), and (38), we have

    det(B_{k+1}) = det(B_{k−m̃+1}^k) ∏_{l=k−m̃+1}^k (s_l^T y_l*)/(s_l^T B_l^k s_l) ≥ det(B_0) ∏_{l=k−m̃+1}^k (1 − σ)/α_l.

This implies

    det(B_{k+1}) ≥ det(B_0) ∏_{j=0}^k (1 − σ)/α_j.                    (49)
By using the geometric-arithmetic mean value inequality again, we get

    det(B_{k+1}) ≤ [ Tr(B_{k+1})/n ]^n.                               (50)
Using (47), (49), and (50), we obtain

    ∏_{j=0}^k α_j/(1 − σ) ≥ det(B_0)/det(B_{k+1}) ≥ n^n det(B_0)/[Tr(B_0) + (k+1) M_1]^n.

Since (k+1)^n ≤ exp(nk) for all k ≥ 0, this yields

    ∏_{j=0}^k α_j ≥ c_3^{k+1},                                        (51)

where c_3 = ((1 − σ)/exp(n)) min{ n^n det(B_0)/[Tr(B_0) + M_1]^n, 1 }.
Let

    cos θ_j = −(∇_x L(z_j)^T d_j)/(‖∇_x L(z_j)‖ ‖d_j‖).

Since d_j = −H_j ∇_x L(z_j) and ‖s_j‖ = α_j ‖d_j‖, multiplying (48) by (51) gives, for all k ≥ 0,

    ∏_{j=0}^k ‖s_j‖ ‖∇_x L(z_j)‖ cos θ_j ≥ c_3^{k+1} [ (k+1) c_2² / (Tr(B_0) + (k+1) M_1) ]^{k+1}
                                         ≥ [ c_3 c_2² / (Tr(B_0) + M_1) ]^{k+1}.      (52)
According to Lemma 4.4 and Assumption A, there exists a constant M_2 > 0 such that

    ‖s_k‖ = ‖x_{k+1} − x_k‖ ≤ ‖x_{k+1}‖ + ‖x_k‖ ≤ 2M_2.               (53)

Combining (52) and (53), and noting that λ_j = ‖∇_x L(z_j)‖ cos θ_j, we get, for all k ≥ 0,

    ∏_{j=0}^k λ_j ≥ [ c_3 c_2² / ((Tr(B_0) + M_1) 2M_2) ]^{k+1} =: ε_0^{k+1}.

The proof is complete.
Now we establish the global convergence theorem for M-L-SQP-A2.

Theorem 4.1 Let Assumption A hold and let the sequence {z_k} be generated by M-L-SQP-A2. Then we have

    lim inf_{k→∞} ‖∇_x L(z_k)‖ = 0.                                   (54)

Proof. By Lemma 4.3 and (28), we get

    L(z_{k+1}) ≤ L(z_k) − δ λ_k ‖s_k‖ ≤ L(z_k) − δ β_1 λ_k².          (55)

Since {L(z_k)} is bounded below on the bounded set S, (55) implies Σ_{k=0}^∞ λ_k² < +∞, and hence

    lim_{k→∞} λ_k = 0.                                                (56)

If (54) did not hold, Lemma 4.6 would give ∏_{j=0}^k λ_j ≥ ε_0^{k+1} for all k ≥ 0, and then Lemma 4.5 would give lim sup_{k→∞} λ_k > 0, contradicting (56). Therefore relation (54) holds.
5. Conclusion
For further research, the properties of the modified limited memory SQP method should be studied under weaker conditions. Moreover, numerical experiments on practically constrained problems should be carried out in the future.
6. References
[1] P. T. Boggs, J. W. Tolle and P. Wang, “On The Local
Convergence of Quasi-Newton Methods for Constrained
Optimization,” SIAM Journal on Control and Optimiza-
tion, Vol. 20, No. 2, 1982, pp. 161-171.
[2] F. H. Clarke, “Optimization and Nonsmooth Analysis,”
Wiley, New York, 1983.
[3] T. F. Coleman and A. R. Conn, "Nonlinear Programming via an Exact Penalty Function: Asymptotic Analysis," Mathematical Programming, Vol. 24, No. 1, 1982, pp. 123-136.
[4] M. Fukushima, “A Successive Quadratic Programming
Algorithm with Global and Superlinear Convergence
Properties,” Mathematical Programming, Vol. 35, No. 3,
1986, pp. 253-264.
[5] M. J. D. Powell, “The Convergence of Variable Metric
methods for Nonlinearly Constrained Optimization Cal-
culations,” In O. L. Mangasarian, R. R. Meyer and S. M.
Robinson Eds., Nonlinear Programming 3, Academic
Press, New York, 1978, pp. 27-63.
[6] W. Sun, “Newton's Method and Quasi-Newton-SQP Me-
thod for General LC1 Constrained Optimization,” Applied
Mathematics and Computation, Vol. 92, No. 1, 1998, pp.
69-84.
[7] T. F. Coleman and A. R. Conn, "On the Local Convergence of a Quasi-Newton Method for the Nonlinear Programming Problem," SIAM Journal on Numerical Analysis, Vol. 21, No. 4, 1984, pp. 755-769.
[8] Y. Yuan and W. Sun, “Theory and Methods of Optimiza-
tion,” Science Press of China, Beijing, 1999.
[9] G. Yuan, “Modified Nonlinear Conjugate Gradient Me-
thods with Sufficient Descent Property for Large-Scale
Optimization Problems,” Optimization Letters, Vol. 3, No.
1, 2009, pp. 11-21.
[10] G. L. Yuan and X. W. Lu, “A New Line Search Method
with Trust Region for Unconstrained Optimization, ”
Communications on Applied Nonlinear Analysis, Vol. 15,
No. 1, 2008, pp. 35-49.
[11] G. L. Yuan and X. W. Lu, “A Modified PRP Conjugate
Gradient Method,” Annals of Operations Research, Vol.
166, No. 1, 2009, pp. 73-90.
[12] G. Yuan, X. Lu and Z. Wei, “A Conjugate Gradient Me-
thod with Descent Direction for Unconstrained Optimiza-
tion,” Journal of Computational and Applied Mathemat-
ics, Vol. 233, No. 2, 2009, pp. 519-530.
[13] G. L. Yuan and Z. X. Wei, “New Line Search Methods
for Unconstrained Optimization,” Journal of the Korean
Statistical Society, Vol. 38, No. 1, 2009, pp. 29-39.
[14] W. C. Davidon, “Variable Metric Methods for Minimiza-
tion,” SIAM Journal on Optimization, Vol. 1, No. 1, 1991,
pp. 1-17.
[15] D. H. Li and M. Fukushima, “A Modified BFGS Method
and Its Global Convergence in Non-convex Minimiza-
tion,” Journal of Computational and Applied Mathemat-
ics, Vol. 129, No. 1-2, 2001, pp. 15-35.
[16] D. H. Li and M. Fukushima, “On The Global Conver-
gence of The BFGS Methods for Non-convex Uncon-
strained Optimization Problems,” SIAM Journal on Op-
timization, Vol. 11, No. 4, 2000, pp.1054-1064.
[17] M. J. D. Powell, "A New Algorithm for Unconstrained Optimization," In J. B. Rosen, O. L. Mangasarian and K. Ritter Eds., Nonlinear Programming, Academic Press, New York, 1970.
[18] Z. Wei, G. Yu, G. Yuan and Z. Lian, "The Superlinear Convergence of a Modified BFGS-type Method for Unconstrained Optimization," Computational Optimization and Applications, Vol. 29, No. 3, 2004, pp. 315-332.
[19] J. Z. Zhang, N. Y. Deng and L. H. Chen, "New Quasi-Newton Equation and Related Methods for Unconstrained Optimization," Journal of Optimization Theory and Applications, Vol. 102, No. 1, 1999, pp. 147-167.
[20] Z. Wei, G. Li and L. Qi, “New Quasi-Newton Methods
for Unconstrained Optimization Problems,” Applied Ma-
thematics and Computation, Vol. 175, No. 2, 2006, pp.
1156-1188.
[21] G. L. Yuan and Z. X. Wei, "Convergence Analysis of a Modified BFGS Method on Convex Minimizations," Computational Optimization and Applications, doi:10.1007/s10589-008-9219-0.
[22] R. H. Byrd, J. Nocedal and R. B. Schnabel, “Representa-
tions of Quasi-Newton Matrices and Their Use in Limited
Memory Methods,” Mathematical Programming, Vol. 63,
No. 1-3, 1994, pp. 129-156.
[23] C. G. Broyden, J. E. Dennis and J. J. Moré, "On the Local and Superlinear Convergence of Quasi-Newton Methods," IMA Journal of Applied Mathematics, Vol. 12, No. 3, 1973, pp. 223-246.
[24] R. H. Byrd and J. Nocedal, “A Tool for The Analysis of
Quasi-Newton Methods with Application to Uncon-
strained Minimization,” SIAM Journal on Numerical
Analysis, Vol. 26, No. 3, 1989, pp. 727-739.
[25] R. H. Byrd, J. Nocedal and Y. Yuan, “Global Conver-
gence of A Class of Quasi-Newton Methods on Convex
Problems,” SIAM Journal on Numerical Analysis, Vol. 24,
No. 5, 1987, pp.1171-1189.
[26] J. E. Dennis and J. J. Moré, "A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods," Mathematics of Computation, Vol. 28, No. 126, 1974, pp. 549-560.
[27] A. Griewank and Ph. L. Toint, "Local Convergence Analysis for Partitioned Quasi-Newton Updates," Numerische Mathematik, Vol. 39, No. 3, 1982, pp. 429-448.
[28] A. Perry, “A Class of Conjugate Algorithms with a Two
Step Variable Metric Memory,” Discussion paper, No.
269, Center for Mathematical Studies in Economics and
Management Science, Northwestern University, 1977.
[29] D. F. Shanno, "On the Convergence of a New Conjugate Gradient Algorithm," SIAM Journal on Numerical Analysis, Vol. 15, No. 6, 1978, pp. 1247-1257.
[30] Z. Wei, L. Qi and X. Chen, “An SQP-type Method and
Its Application in Stochastic Programming,” Journal of
Optimization Theory and Applications, Vol. 116, No. 1,
2003, pp. 205-228.
[31] G. L. Yuan and Z. X. Wei, “The Superlinear Conver-
gence Analysis of A Nonmonotone BFGS Algorithm on
Convex Objective Functions,” Acta Mathematica Sinica,
Vol. 24, No. 1, 2008, pp. 35-42.
[32] Y. Dai, “Convergence Properties of the BFGS Algo-
rithm,” SIAM Journal on Optimization, Vol. 13, No. 3,
2003, pp. 693-701.
[33] W. F. Mascarenhas, "The BFGS Method with Exact Line Searches Fails for Non-convex Objective Functions," Mathematical Programming, Vol. 99, No. 1, 2004, pp. 49-61.
[34] R. H. Byrd, P. Lu, J. Nocedal and C. Zhu, “A Limited
Memory Algorithm for Bound Constrained Optimiza-
tion,” SIAM Journal on Scientific Computing, Vol. 16,
No. 5, 1995, pp. 1190-1208.
[35] M. J. D. Powell, "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Lecture Notes in Mathematics, Vol. 630, Springer, Berlin, 1978, pp. 144-157.
[36] M. J. D. Powell, “Some Properties of The Variable Me-
tric Algorithm,” In F. A. Lootsma Ed., Numerical me-
thods for Nonlinear Optimization, Academia Press, Lon-
don, 1972.
[37] J. E. Dennis and J. J. Moré, “Quasi-Newton Methods,
Motivation and Theory,” SIAM Review, Vol. 19, No. 1,
1977, pp. 46-89.
[38] J. Y. Han and G. H. Liu, “Global Convergence Analysis
of a New Nonmonotone BFGS Algorithm on Convex
Objective Functions,” Computational Optimization and
Applications, Vol. 7, No. 3, 1997, pp. 277-289.