Applied Mathematics, 2010, 1, 8-17
doi:10.4236/am.2010.11002 Published Online May 2010 (http://www.SciRP.org/journal/am)
Copyright © 2010 SciRes. AM

A Modified Limited SQP Method for Constrained Optimization*

Gonglin Yuan^1, Sha Lu^2, Zengxin Wei^1
^1 Department of Mathematics and Information Science, Guangxi University, Nanning, China
^2 School of Mathematics Science, Guangxi Teacher's Education University, Nanning, China
E-mail: glyuan@gxu.edu.cn
Received December 23, 2009; revised February 24, 2010; accepted March 10, 2010

Abstract

In this paper, a modified variation of the limited memory SQP method is presented for constrained optimization. The method exploits not only gradient information but also function value information. Moreover, the proposed method requires no more function or derivative evaluations than the standard SQP method, and hardly any more storage or arithmetic operations. Under suitable conditions, global convergence is established.

Keywords: Constrained Optimization, Limited Memory Method, SQP Method, Global Convergence

1. Introduction

Consider the constrained optimization problem

$$\min f(x) \quad \text{s.t.}\ h_i(x)=0,\ i\in E,\qquad g_j(x)\ge 0,\ j\in I, \qquad (1)$$

where $f, h_i, g_j : R^n \to R$ are twice continuously differentiable, $E=\{1,2,\dots,m\}$, $I=\{m+1,\dots,m+l\}$, and $l>0$ is an integer. Let the Lagrangian function be defined by

$$L(x,\lambda,\mu)=f(x)-\lambda^T g(x)-\mu^T h(x), \qquad (2)$$

where $\lambda$ and $\mu$ are multipliers. Obviously, the Lagrangian function $L$ is twice continuously differentiable. Let $S$ be the feasible set of problem (1). We define $I^*$ to be the set of all subscripts of the inequality constraints that are active at $x^*$, i.e.,

$$I^* = \{\, i \in I \mid g_i(x^*)=0 \,\}.$$

It is well known that SQP methods for solving twice continuously differentiable nonlinear programming problems are essentially Newton-type methods for finding Kuhn-Tucker points of nonlinear programming problems. In recent years, SQP methods have been widely studied [1-8]: Powell [5] gave the BFGS-Newton-SQP method for nonlinearly constrained optimization.
He gave sufficient conditions under which the SQP method yields a 2-step Q-superlinear convergence rate (assuming convergence), but did not show that his modified BFGS method satisfies these conditions. Coleman and Conn [7] gave a new locally convergent quasi-Newton-SQP method for equality constrained nonlinear programming problems; local 2-step Q-superlinear convergence was established. Sun [6] proposed a quasi-Newton-SQP method for general $LC^1$ constrained problems. He presented sufficient conditions for local convergence and for superlinear convergence, but did not prove whether the modified BFGS-quasi-Newton-SQP method satisfies those conditions. We note that the BFGS update exploits only gradient information, while the available function values of the Lagrangian function (2) are neglected.

If the feasible set is the whole space $R^n$, then problem (1) reduces to an unconstrained optimization problem (UNP). There are many methods [9-13] for the UNP, among which the BFGS method is one of the most effective quasi-Newton methods. The normal BFGS update likewise exploits only gradient information, while the available function values are neglected for the UNP too. In recent years, many modified BFGS methods (see [14-19]) have been proposed for the UNP. In particular, several efficient attempts have been made to modify the usual quasi-Newton methods using both gradient and function value information (e.g. [19,20]). Lately, in order to obtain a higher order of accuracy in approximating the second-order curvature of the objective function, Wei, Yu, Yuan, and Lian [18] proposed a new BFGS-type method for the UNP, and the reported numerical results show that its average performance is better than that of the standard BFGS method. The superlinear convergence of this modified method has been established for uniformly convex functions; its global convergence was established by Wei, Li, and Qi [20].
Motivated by their ideas, Yuan and Wei [21] presented a modified BFGS method which ensures that the update matrices are positive definite for general convex functions; moreover, its global convergence is proved for general convex functions.

The limited memory BFGS (L-BFGS) method (see [22]) is an adaptation of the BFGS method for large-scale problems. Its implementation is almost identical to that of the standard BFGS method; the only difference is that the inverse Hessian approximation is not formed explicitly, but is defined by a small number of BFGS updates. It often provides a fast rate of linear convergence and requires minimal storage.

Inspired by the modified method of [21], we combine this technique with the limited memory technique and give a limited memory SQP method for constrained optimization. The global convergence of the proposed method will be established for generally convex functions. The major contribution of this paper is an extension, based on the method in [21], of the method for the UNP to constrained optimization problems. Unlike the standard SQP method, a distinguishing feature of the proposed method is that a triple $\{s_i, y_i, A_i\}$ is stored, where $s_i = x_{i+1} - x_i$, $y_i = \nabla_x L(z_{i+1}) - \nabla_x L(z_i) + A_i s_i$, $z_{i+1}=(x_{i+1},\lambda_{i+1},\mu_{i+1})$, $z_i=(x_i,\lambda_i,\mu_i)$; here $\lambda_i$ and $\mu_i$ are the multipliers associated with the Lagrangian function at $x_i$, while $\lambda_{i+1}$ and $\mu_{i+1}$ are those at $x_{i+1}$, and $A_i$ is a scalar related to the Lagrangian function value. Moreover, a limited memory SQP method is proposed. Compared with the standard SQP method, the presented method requires no more function or derivative evaluations, and hardly any more storage or arithmetic operations.

*This work is supported by Chinese NSF grant 10761001, the Scientific Research Foundation of Guangxi University (Grant No. X081082), and Guangxi SF grant 0991028.
This paper is organized as follows. In the next section, we briefly review some modified BFGS methods and the L-BFGS method for the UNP. In Section 3, we describe the modified limited memory SQP algorithm for (1). The global convergence is established in Section 4. In the last section, we give a conclusion. Throughout this paper, $\|\cdot\|$ denotes the Euclidean norm of a vector or matrix.

2. Modified BFGS Update and the L-BFGS Update for UNP

We state the modified BFGS update and the L-BFGS update for the UNP in the following subsections.

2.1. Modified BFGS Update

Quasi-Newton methods for solving the UNP update an iterate matrix $B_k$. Traditionally, $\{B_k\}$ satisfies the quasi-Newton equation

$$B_{k+1} S_k = \delta_k, \qquad (3)$$

where $S_k = x_{k+1} - x_k$ and $\delta_k = \nabla f(x_{k+1}) - \nabla f(x_k)$. The most famous update of $B_k$ is the BFGS formula

$$B_{k+1} = B_k - \frac{B_k S_k S_k^T B_k}{S_k^T B_k S_k} + \frac{\delta_k \delta_k^T}{\delta_k^T S_k}. \qquad (4)$$

Let $H_k$ be the inverse of $B_k$; then the inverse form of update (4) is

$$H_{k+1} = \Big(I - \frac{S_k \delta_k^T}{\delta_k^T S_k}\Big) H_k \Big(I - \frac{\delta_k S_k^T}{\delta_k^T S_k}\Big) + \frac{S_k S_k^T}{\delta_k^T S_k}, \qquad (5)$$

which is the dual of the DFP update formula in the sense that $H_k \leftrightarrow B_k$, $H_{k+1} \leftrightarrow B_{k+1}$, and $S_k \leftrightarrow \delta_k$. It has been shown that, from the computational point of view, the BFGS method is the most effective of the quasi-Newton methods. Its convergence and characterizations for convex minimization have been studied in [23-27]. Great efforts have been made to find a quasi-Newton method that not only possesses global convergence but is also computationally superior to the BFGS method [15-17,20,28-31]. For general functions, it is now known that the BFGS method may fail for non-convex functions with an inexact line search [32], and Mascarenhas [33] showed the non-convergence of the standard BFGS method even with an exact line search.
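Returning to the update formulas (3)-(4): a useful sanity check is that the BFGS matrix produced by (4) satisfies the quasi-Newton equation (3) exactly. The sketch below (pure Python, 2-dimensional, with illustrative data of our choosing, not from the paper) performs one update and verifies $B_{k+1} S_k = \delta_k$.

```python
# Hedged sketch: one BFGS update (4) on a 2x2 matrix, followed by a check of
# the quasi-Newton equation (3): B_{k+1} S_k = delta_k.  Data are illustrative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bfgs_update(B, S, delta):
    """B_{k+1} = B - (B S S^T B)/(S^T B S) + (delta delta^T)/(delta^T S)."""
    BS = [dot(row, S) for row in B]      # B S
    SBS = dot(S, BS)                     # S^T B S  (> 0 for positive definite B)
    dS = dot(delta, S)                   # delta^T S (must be > 0 for the update)
    n = len(S)
    return [[B[i][j] - BS[i] * BS[j] / SBS + delta[i] * delta[j] / dS
             for j in range(n)] for i in range(n)]

B0 = [[2.0, 0.0], [0.0, 1.0]]
S = [1.0, -1.0]
delta = [1.0, -2.0]                      # delta^T S = 3 > 0
B1 = bfgs_update(B0, S, delta)
print([dot(row, S) for row in B1])       # ~ delta = [1.0, -2.0] (secant condition)
```

The secant property $B_{k+1}S_k=\delta_k$ is an algebraic identity of (4), so the printed vector matches $\delta_k$ up to floating-point rounding.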
In order to obtain global convergence of the BFGS method without a convexity assumption on the objective function, Li and Fukushima [15,16] made a slight modification to the standard BFGS method. We state their work [15] briefly. They proposed a new quasi-Newton equation of the form $B_{k+1} S_k = \delta_k^{(1)}$, where $\delta_k^{(1)} = \delta_k + t_k \|\nabla f(x_k)\| S_k$ and $t_k > 0$ is determined by

$$t_k = 1 + \max\Big\{-\frac{\delta_k^T S_k}{\|S_k\|^2},\, 0\Big\}.$$

Under appropriate conditions, the two methods of [15,16] are globally and superlinearly convergent for nonconvex minimization problems.

In order to get a better approximation of the Hessian of the objective function, Wei, Yu, Yuan, and Lian (see [18]) also proposed a new quasi-Newton equation:

$$B_{k+1}^{(2)} S_k = \delta_k^{(2)} \equiv \delta_k + A_k S_k,$$

where

$$A_k = \frac{2[f(x_k)-f(x_{k+1})] + [\nabla f(x_{k+1})+\nabla f(x_k)]^T S_k}{\|S_k\|^2}.$$

Then the new BFGS update formula is

$$B_{k+1}^{(2)} = B_k^{(2)} - \frac{B_k^{(2)} S_k S_k^T B_k^{(2)}}{S_k^T B_k^{(2)} S_k} + \frac{\delta_k^{(2)} \delta_k^{(2)T}}{\delta_k^{(2)T} S_k}. \qquad (6)$$

Note that the quasi-Newton formula (6) contains both gradient and function value information at the current and the previous step. This modified BFGS update formula differs from the standard BFGS update, and a higher order approximation of $\nabla^2 f(x)$ can be obtained (see [18,20]).

It is well known that positive definiteness of the matrices $B_k$ is very important for convergence [24,25]. It is not difficult to see that the condition $S_k^T \delta_k^{(2)} > 0$ ensures that the update matrix $B_{k+1}^{(2)}$ from (6) inherits the positive definiteness of $B_k^{(2)}$. However, this condition is guaranteed only when the objective function is uniformly convex. If $f$ is a general convex function, then $S_k^T \delta_k^{(2)}$ and $S_k^T \delta_k$ may equal 0, in which case the positive definiteness of the update matrix cannot be ensured. We conclude that, for general convex functions, the positive definiteness of the update matrices generated by (4) and (6) cannot be guaranteed.
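The scalar $A_k$ above is cheap to compute from quantities the method already has. The following sketch (one-dimensional, illustrative functions of our choosing) evaluates $A_k$ and illustrates why it carries curvature information beyond the gradient: for a quadratic $f$ the numerator vanishes identically, so $A_k = 0$, while for higher-degree functions $A_k \ne 0$.

```python
# Hedged sketch: the scalar A_k of the Wei-Yu-Yuan-Lian quasi-Newton equation,
# written for one dimension (S_k is a scalar).  For any quadratic f the
# identity 2[f(x_k) - f(x_{k+1})] + [f'(x_{k+1}) + f'(x_k)] S_k = 0 holds
# exactly, so A_k = 0; for a quartic it picks up third-order information.

def A_k(f, grad, x_prev, x_next):
    """A_k = (2[f(x_k)-f(x_{k+1})] + [g(x_{k+1})+g(x_k)] S_k) / ||S_k||^2."""
    s = x_next - x_prev
    num = 2.0 * (f(x_prev) - f(x_next)) + (grad(x_next) + grad(x_prev)) * s
    return num / (s * s)

quadratic = lambda x: 0.5 * x * x
quad_grad = lambda x: x
quartic = lambda x: x ** 4
quart_grad = lambda x: 4.0 * x ** 3

print(A_k(quadratic, quad_grad, 1.0, 2.0))  # 0.0 for a quadratic
print(A_k(quartic, quart_grad, 1.0, 2.0))   # nonzero for a quartic
```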
In order to retain the positive definiteness of the update matrices, based on the definitions of $\delta_k^{(2)}$ and $A_k$, for general convex functions Yuan and Wei [21] gave a modified BFGS update, i.e., the modified update formula is defined by

$$B_{k+1}^{(3)} = B_k^{(3)} - \frac{B_k^{(3)} S_k S_k^T B_k^{(3)}}{S_k^T B_k^{(3)} S_k} + \frac{\delta_k^{(3)} \delta_k^{(3)T}}{\delta_k^{(3)T} S_k}, \qquad (7)$$

where $\delta_k^{(3)} = \delta_k + A_k^* S_k$ and $A_k^* = \max\{A_k, 0\}$. Then the corresponding quasi-Newton equation is

$$B_{k+1}^{(3)} S_k = \delta_k^{(3)}, \qquad (8)$$

which ensures that the condition $S_k^T \delta_k^{(3)} > 0$ holds for a general convex function $f$ (see [21] for details). Therefore, the update matrix $B_{k+1}^{(3)}$ from (7) inherits the positive definiteness of $B_k^{(3)}$ for general convex functions.

2.2. Limited Memory BFGS-Type Method

The limited memory BFGS (L-BFGS) method (see [22]) is an adaptation of the BFGS method for large-scale problems. In the L-BFGS method, the matrix $H_{k+1}$ is obtained by updating a basic matrix $H_0$ $\tilde m$ ($\tilde m > 0$) times using the BFGS formula with the previous $\tilde m$ iterations. The standard BFGS correction (5) has the form

$$H_{k+1} = V_k^T H_k V_k + \rho_k S_k S_k^T, \qquad (9)$$

where $\rho_k = 1/(\delta_k^T S_k)$, $V_k = I - \rho_k \delta_k S_k^T$, and $I$ is the unit matrix. Thus $H_{k+1}$ in the L-BFGS method has the following form:

$$\begin{aligned}
H_{k+1} ={}& V_k^T H_k V_k + \rho_k S_k S_k^T \\
={}& V_k^T [V_{k-1}^T H_{k-1} V_{k-1} + \rho_{k-1} S_{k-1} S_{k-1}^T] V_k + \rho_k S_k S_k^T \\
={}& \cdots \\
={}& [V_k^T \cdots V_{k-\tilde m+1}^T]\, H_{k-\tilde m+1}\, [V_{k-\tilde m+1} \cdots V_k] \\
& + \rho_{k-\tilde m+1} [V_k^T \cdots V_{k-\tilde m+2}^T]\, S_{k-\tilde m+1} S_{k-\tilde m+1}^T\, [V_{k-\tilde m+2} \cdots V_k] \\
& + \cdots + \rho_k S_k S_k^T.
\end{aligned} \qquad (10)$$

3. Modified SQP Method

In this section, we state the normal SQP method and the modified limited memory SQP method, respectively.

3.1. Normal SQP Method

The first-order Kuhn-Tucker conditions of problem (1) are

$$\nabla f(x) - \nabla g(x)\lambda - \nabla h(x)\mu = 0,\quad \lambda \ge 0,\quad \lambda^T g(x) = 0,\quad g_j(x) \ge 0 \ \text{for } j \in I,\quad h(x) = 0. \qquad (11)$$

The system (11) can be represented by the following system:

$$H(z) = 0, \qquad (12)$$

where $z = (x, \lambda, \mu)$ and $H : R^{n+l+m} \to R^{n+l+m}$ is defined by

$$H(z) = \begin{pmatrix} \nabla f(x) - \nabla g(x)\lambda - \nabla h(x)\mu \\ \min\{\lambda,\, g(x)\} \\ h(x) \end{pmatrix}, \qquad (13)$$

the min being taken componentwise. Since $f$, $g$, and $h$ are continuously differentiable, $H(z)$ is locally Lipschitz continuous (it fails to be differentiable only where some $\lambda_i = g_i(x)$). Then, for all $d \in R^{n+l+m}$, the directional derivative $H'(z; d)$ of the function $H(z)$ exists.
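The nonsmooth ingredient of (13) is the componentwise $\min\{\lambda_i, g_i(x)\}$, whose directional derivative exists everywhere despite the kink at $\lambda_i = g_i(x)$. The sketch below (our illustrative example, not from the paper) states the closed-form directional derivative of $m(a,b)=\min\{a,b\}$ and checks it against a one-sided difference quotient; the three branches are exactly the case split behind the index sets introduced next.

```python
# Hedged sketch: directional derivative of m(a, b) = min{a, b}
# (a = lambda_i, b = g_i(x)).  The closed form is
#   m'((a,b); (da,db)) = da          if a < b,
#                        db          if b < a,
#                        min{da, db} if a = b  (the kink).

def min_dir_deriv(a, b, da, db):
    if a < b:
        return da
    if b < a:
        return db
    return min(da, db)

def numeric_dir_deriv(a, b, da, db, t=1e-8):
    """One-sided difference quotient approximating the directional derivative."""
    return (min(a + t * da, b + t * db) - min(a, b)) / t

print(min_dir_deriv(1.0, 1.0, 3.0, -2.0))      # -2.0: on the kink a = b
print(numeric_dir_deriv(1.0, 1.0, 3.0, -2.0))  # agrees with the formula
```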
Denote the index sets

$$\beta(z) = \{\, i \in I \mid \lambda_i > g_i(x) \,\} \qquad (14)$$

and

$$\gamma(z) = \{\, i \in I \mid \lambda_i \le g_i(x) \,\}. \qquad (15)$$

Under the complementarity condition, it is clear that $\beta(z)$ is an index set of strongly active inequality constraints, and $\gamma(z)$ is an index set of weakly active and inactive inequality constraints. In terms of these sets, the directional derivative along a direction $d = (d_x, d_\lambda, d_\mu)$ is given by

$$H'(z; d) = \begin{pmatrix} G d \\ \big(\nabla g_i(x)^T d_x\big)_{i \in \beta(z)} \\ \big(\min\{d_{\lambda_i},\, \nabla g_i(x)^T d_x\}\big)_{i \in \gamma(z)} \\ \nabla h(x)^T d_x \end{pmatrix}, \qquad (16)$$

where $G$ is the matrix whose blocks are the partial derivatives of $\nabla_x L(z)$ with respect to $x$, $\lambda$, and $\mu$, respectively. If $\min\{d_{\lambda_i},\, \nabla g_i(x)^T d_x\} = d_{\lambda_i}$ holds for $i \in \gamma(z)$, then $H'(z; d) = W(z) d$ with

$$W(z) = \begin{pmatrix} V & -\nabla g(x) & -\nabla h(x) \\ \nabla g_{\beta(z)}(x)^T & 0 & 0 \\ 0 & I_{\gamma(z)} & 0 \\ \nabla h(x)^T & 0 & 0 \end{pmatrix}, \qquad (17)$$

where $V = \nabla^2_{xx} L(z)$. By (33) in [30], we know that the system

$$W_k d_k = -H(z_k), \qquad (18)$$

where $d_k = (d_x^k, d_\lambda^k, d_\mu^k)$ and $W_k = W(z_k)$, defines the Kuhn-Tucker conditions of problem (1), which also defines the Kuhn-Tucker conditions of the following quadratic programming problem $QP(z_k, V_k)$:

$$\begin{aligned}
\min\ & \nabla f(x_k)^T s + \tfrac12 s^T V_k s \\
\text{s.t.}\ & g_i(x_k) + \nabla g_i(x_k)^T s = 0, \quad i \in \beta(z_k), \\
& g_i(x_k) + \nabla g_i(x_k)^T s \ge 0, \quad i \in \gamma(z_k), \\
& h(x_k) + \nabla h(x_k)^T s = 0,
\end{aligned} \qquad (19)$$

where $s = x - x_k$ and $V_k = \nabla^2_{xx} L(z_k)$.

Generally, suppose that $B_k$ is an estimate of $V_k$; $B_k$ can be updated by the BFGS quasi-Newton formula

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \qquad (20)$$

where $s_k = x_{k+1} - x_k$, $y_k = \nabla_x L(z_{k+1}) - \nabla_x L(z_k)$, $z_{k+1} = (x_{k+1}, \lambda_{k+1}, \mu_{k+1})$, $z_k = (x_k, \lambda_k, \mu_k)$; $\lambda_k$ and $\mu_k$ are the multipliers associated with the Lagrangian function at $x_k$, while $\lambda_{k+1}$ and $\mu_{k+1}$ are those at $x_{k+1}$. In particular, when the matrix of (20) is used in (19), the quadratic programming problem becomes $QP(z_k, B_k)$:

$$\begin{aligned}
\min\ & \nabla f(x_k)^T s + \tfrac12 s^T B_k s \\
\text{s.t.}\ & g_i(x_k) + \nabla g_i(x_k)^T s = 0, \quad i \in \beta(z_k), \\
& g_i(x_k) + \nabla g_i(x_k)^T s \ge 0, \quad i \in \gamma(z_k), \\
& h(x_k) + \nabla h(x_k)^T s = 0.
\end{aligned} \qquad (21)$$

Suppose that $(s, \lambda, \mu)$ is a Kuhn-Tucker triple of the subproblem $QP(z_k, B_k)$; then obviously $s = 0$ if $(x_k, \lambda_k, \mu_k)$ is a Kuhn-Tucker triple of (1).

3.2.
Modified Limited Memory SQP Method

The normal limited memory BFGS formula of the quasi-Newton-SQP method with $H_k$ for the constrained problem (1) is defined by

$$\begin{aligned}
H_{k+1} ={}& V_k^T H_k V_k + \rho_k s_k s_k^T \\
={}& \cdots \\
={}& [V_k^T \cdots V_{k-\tilde m+1}^T]\, H_{k-\tilde m+1}\, [V_{k-\tilde m+1} \cdots V_k] \\
& + \rho_{k-\tilde m+1} [V_k^T \cdots V_{k-\tilde m+2}^T]\, s_{k-\tilde m+1} s_{k-\tilde m+1}^T\, [V_{k-\tilde m+2} \cdots V_k] \\
& + \cdots + \rho_k s_k s_k^T,
\end{aligned} \qquad (22)$$

where $\rho_k = 1/(y_k^T s_k)$, $V_k = I - \rho_k y_k s_k^T$, and $I$ is the unit matrix. To maintain the positive definiteness of the limited memory BFGS matrix, some researchers suggested discarding the correction pair $\{s_k, y_k\}$ if $y_k^T s_k > 0$ does not hold (e.g. [34]). Another technique was proposed by Powell [35], in which $y_k$ is replaced by

$$\bar y_k = \begin{cases} \theta_k y_k + (1-\theta_k) B_k s_k, & \text{if } y_k^T s_k < 0.2\, s_k^T B_k s_k, \\ y_k, & \text{otherwise}, \end{cases}$$

where

$$\theta_k = \frac{0.8\, s_k^T B_k s_k}{s_k^T B_k s_k - y_k^T s_k}$$

and $B_k = H_k^{-1}$ in (22). However, if the Lagrangian function $L(x,\lambda,\mu)$ is a general convex function, then $y_k^T s_k$ may equal 0, and the positive definiteness of the update matrix $H_{k+1}$ of (22) cannot be ensured.

Does there exist a limited memory SQP method whose update matrices remain positive definite for a general convex Lagrangian function $L(x,\lambda,\mu)$? This paper gives a positive answer. Let

$$\tilde A_k = \frac{2[L(z_k) - L(z_{k+1})] + [\nabla_x L(z_{k+1}) + \nabla_x L(z_k)]^T s_k}{\|s_k\|^2}.$$

Considering the discussion of the previous section, we examine $\tilde A_k$ for a general convex Lagrangian function $L(x,\lambda,\mu)$ in the following cases to state our motivation.

Case i: If $\tilde A_k \ge 0$, we have

$$\tilde y_k^T s_k = (y_k + \tilde A_k s_k)^T s_k = y_k^T s_k + \tilde A_k \|s_k\|^2 \ge y_k^T s_k. \qquad (23)$$

Case ii: If $\tilde A_k < 0$, we get

$$0 > \tilde A_k = \frac{2[L(z_k) - L(z_{k+1})] + [\nabla_x L(z_{k+1}) + \nabla_x L(z_k)]^T s_k}{\|s_k\|^2} \ge \frac{-2\nabla_x L(z_{k+1})^T s_k + [\nabla_x L(z_{k+1}) + \nabla_x L(z_k)]^T s_k}{\|s_k\|^2} = -\frac{y_k^T s_k}{\|s_k\|^2}, \qquad (24)$$

where the inequality uses the convexity relation $L(z_k) - L(z_{k+1}) \ge -\nabla_x L(z_{k+1})^T s_k$; this means that $y_k^T s_k > 0$ holds. Then we present our modified limited memory SQP formula

$$\begin{aligned}
H_{k+1} ={}& V_k^T H_k V_k + \rho_k s_k s_k^T \\
={}& \cdots \\
={}& [V_k^T \cdots V_{k-\tilde m+1}^T]\, H_{k-\tilde m+1}\, [V_{k-\tilde m+1} \cdots V_k] \\
& + \rho_{k-\tilde m+1} [V_k^T \cdots V_{k-\tilde m+2}^T]\, s_{k-\tilde m+1} s_{k-\tilde m+1}^T\, [V_{k-\tilde m+2} \cdots V_k] \\
& + \cdots + \rho_k s_k s_k^T,
\end{aligned} \qquad (25)$$

where now $\rho_k = 1/(\tilde y_k^T s_k)$, $V_k = I - \rho_k \tilde y_k s_k^T$, and $\tilde y_k = y_k + \max\{\tilde A_k, 0\} s_k$.
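The building block of both (22) and (25) is the single correction $H \mapsto V^T H V + \rho\, s s^T$, applied $\tilde m$ times. The sketch below (pure Python, 2×2, illustrative data of our choosing) forms the guarded difference $\tilde y = y + \max\{\tilde A, 0\}\, s$ for a given scalar $\tilde A$, applies one correction, and checks the secant property $H_{k+1}\tilde y_k = s_k$ that the correction satisfies by construction.

```python
# Hedged sketch (illustrative data): the single correction underlying (22)/(25),
#   H_{k+1} = V^T H V + rho s s^T,  rho = 1/(ytilde^T s),  V = I - rho ytilde s^T,
# with the guarded difference ytilde = y + max{A, 0} s of the proposed method.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lbfgs_correction(H, s, ytilde):
    n = len(s)
    rho = 1.0 / dot(ytilde, s)           # requires ytilde^T s > 0
    V = [[(1.0 if i == j else 0.0) - rho * ytilde[i] * s[j] for j in range(n)]
         for i in range(n)]
    # Assemble V^T H V + rho s s^T entrywise.
    HV = [[sum(H[i][k] * V[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    VtHV = [[sum(V[k][i] * HV[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
    return [[VtHV[i][j] + rho * s[i] * s[j] for j in range(n)] for i in range(n)]

H0 = [[1.0, 0.0], [0.0, 1.0]]
s = [1.0, 0.5]
y = [0.5, 1.0]                            # raw Lagrangian-gradient difference
A = -0.3                                  # a negative A is clipped: max{A, 0} = 0
ytilde = [yi + max(A, 0.0) * si for yi, si in zip(y, s)]
H1 = lbfgs_correction(H0, s, ytilde)
print([dot(row, ytilde) for row in H1])   # ~ s = [1.0, 0.5] (secant property)
```

Since $V\tilde y = \tilde y - \rho\,\tilde y\,(s^T\tilde y) = 0$, the identity $H_{k+1}\tilde y = s$ holds algebraically, so the printed vector equals $s$ up to rounding.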
It is not difficult to see that the modified limited memory SQP formula (25) contains both the gradient and function value information of the Lagrangian function at the current and the previous step whenever $\tilde A_k > 0$ holds.

Let $B_k$ be the inverse of $H_k$. More generally, suppose that $B_k$ is an estimate of $V_k$. Then the quadratic programming problem (19) can be written as $QP(z_k, B_k)$:

$$\begin{aligned}
\min\ & \nabla f(x_k)^T s + \tfrac12 s^T B_k s \\
\text{s.t.}\ & g_i(x_k) + \nabla g_i(x_k)^T s = 0, \quad i \in \beta(z_k), \\
& g_i(x_k) + \nabla g_i(x_k)^T s \ge 0, \quad i \in \gamma(z_k), \\
& h(x_k) + \nabla h(x_k)^T s = 0.
\end{aligned} \qquad (26)$$

Suppose that $(s, \lambda, \mu)$ is a Kuhn-Tucker triple of the subproblem $QP(z_k, B_k)$; then obviously $s = 0$ if $(x_k, \lambda_k, \mu_k)$ is a Kuhn-Tucker triple of (1). Now we state our algorithm as follows.

Modified limited memory SQP algorithm 1 for (1) (M-L-SQP-A1)

Step 0: Start with an initial point $z_0 = (x_0, \lambda_0, \mu_0)$ and an estimate $H_0$ of $V_0^{-1} = [\nabla^2_{xx} L(z_0)]^{-1}$, where $H_0$ is a symmetric and positive definite matrix; choose constants $0 < \delta < \sigma < 1$ and a positive integer $m_0$. Set $k = 0$.

Step 1: For given $z_k$ and $H_k$, solve the subproblem

$$\begin{aligned}
\min\ & \nabla f(x_k)^T s + \tfrac12 s^T H_k^{-1} s \\
\text{s.t.}\ & g_i(x_k) + \nabla g_i(x_k)^T s = 0, \quad i \in \beta(z_k), \\
& g_i(x_k) + \nabla g_i(x_k)^T s \ge 0, \quad i \in \gamma(z_k), \\
& h(x_k) + \nabla h(x_k)^T s = 0,
\end{aligned} \qquad (27)$$

and obtain the unique optimal solution $d_k$.

Step 2: Choose $\alpha_k$ by the modified weak Wolfe-Powell (MWWP) step-size rule

$$L(z_k + \alpha_k d_k) \le L(z_k) + \delta \alpha_k \nabla_x L(z_k)^T d_k \qquad (28)$$

and

$$\nabla_x L(z_k + \alpha_k d_k)^T d_k \ge \sigma \nabla_x L(z_k)^T d_k, \qquad (29)$$

then let $x_{k+1} = x_k + \alpha_k d_k$.

Step 3: If $z_{k+1}$ satisfies a prescribed termination criterion based on (18), stop. Otherwise, go to Step 4.

Step 4: Let $\tilde m = \min\{k+1, m_0\}$. Update $H_0$ $\tilde m$ times by formula (25) to get $H_{k+1}$.

Step 5: Set $k := k+1$ and go to Step 1.

Clearly, the above algorithm is as simple as the limited memory SQP method from the storage and cost point of view at each iteration. In the following, we assume that the algorithm updates $B_k$, the inverse of $H_k$. The M-L-SQP-A1 with Hessian approximation $B_k$ can be stated as follows.
Modified limited memory SQP algorithm 2 for (1) (M-L-SQP-A2)

Step 0: Start with an initial point $z_0 = (x_0, \lambda_0, \mu_0)$ and an estimate $B_0$ of $V_0 = \nabla^2_{xx} L(z_0)$, where $B_0$ is a symmetric and positive definite matrix; choose constants $0 < \delta < \sigma < 1$ and a positive integer $m_0$. Set $k = 0$.

Step 1: For given $z_k$ and $B_k$, solve the subproblem $QP(z_k, B_k)$ and obtain the unique optimal solution $d_k$.

Step 2: Let $\tilde m = \min\{k+1, m_0\}$. Update $B_k$ with the triples $\{s_i, y_i, A_i\}_{i=k-\tilde m+1}^{k}$, i.e., for $l = k-\tilde m+1, \dots, k$, compute

$$B_{l+1} = B_l - \frac{B_l s_l s_l^T B_l}{s_l^T B_l s_l} + \frac{\tilde y_l \tilde y_l^T}{\tilde y_l^T s_l}, \qquad (30)$$

where $s_l = x_{l+1} - x_l$, $\tilde y_l = y_l + \max\{\tilde A_l, 0\} s_l$, and the starting matrix of each cycle is $B_{k-\tilde m+1} = B_0$ for all $k$.

Note that M-L-SQP-A1 and M-L-SQP-A2 are mathematically equivalent. In the next section, we establish the global convergence of M-L-SQP-A2.

4. Convergence Analysis of M-L-SQP-A2

Let $x^*$ be a local optimal solution and $z^* = (x^*, \lambda^*, \mu^*)$ be the corresponding Kuhn-Tucker triple of problem (1). In order to get the global convergence of M-L-SQP-A2, the following assumptions are needed.

Assumption A.
1) $f$, $h_i$, and $g_i$ are twice continuously differentiable functions for all $x \in S$, and $S$ is bounded.
2) The gradients $\{\nabla h_i(x^*),\ i \in E\} \cup \{\nabla g_i(x^*),\ i \in I^*\}$ are positively linearly independent.
3) (Strict complementarity) $\lambda_j^* > 0$ for $j \in I^*$.
4) $s^T V s > 0$ for all $s \ne 0$ with $\nabla h_i(x^*)^T s = 0$, $i \in E$, and $\nabla g_i(x^*)^T s = 0$, $i \in I^*$, where $V = \nabla^2_{xx} L(z^*)$.
5) $\{z_k\}$ converges to $z^*$, where $\nabla_x L(z^*) = 0$.
6) The Lagrangian function $L(z)$ is convex for all $z \in S$.

Assumption A implies that there exists a constant $H^* > 0$ such that

$$\|V\| \le H^*, \quad z \in S. \qquad (31)$$

Due to the strict complementarity assumption A 3), in a neighborhood of $z^*$ the subproblem (26) is equivalent to the following equality constrained quadratic programming problem:

$$\begin{aligned}
\min\ & \nabla f(x_k)^T s + \tfrac12 s^T B_k s \\
\text{s.t.}\ & g_i(x_k) + \nabla g_i(x_k)^T s = 0, \quad i \in I^*, \\
& h(x_k) + \nabla h(x_k)^T s = 0.
\end{aligned} \qquad (32)$$

Without loss of generality for the local convergence analysis, we may assume that there are only active constraints in (1).
Then (18) becomes the following system with $B_k$ in place of $V_k$:

$$\begin{pmatrix} B_k & -\nabla g(x_k) & -\nabla h(x_k) \\ \nabla g(x_k)^T & 0 & 0 \\ \nabla h(x_k)^T & 0 & 0 \end{pmatrix} \begin{pmatrix} d_x^k \\ d_\lambda^k \\ d_\mu^k \end{pmatrix} = -\begin{pmatrix} \nabla_x L(z_k) \\ g(x_k) \\ h(x_k) \end{pmatrix} = -H(z_k). \qquad (33)$$

In the case of only active constraints, we can set

$$W_k = \begin{pmatrix} V_k & -\nabla g(x_k) & -\nabla h(x_k) \\ \nabla g(x_k)^T & 0 & 0 \\ \nabla h(x_k)^T & 0 & 0 \end{pmatrix} \qquad (34)$$

and

$$D_k = \begin{pmatrix} B_k & -\nabla g(x_k) & -\nabla h(x_k) \\ \nabla g(x_k)^T & 0 & 0 \\ \nabla h(x_k)^T & 0 & 0 \end{pmatrix}; \qquad (35)$$

when $B_k$ is close to $V_k$, $D_k$ is close to $W_k$.

Lemma 4.1 Let Assumption A hold. Then there exists a positive number $M_1$ such that

$$\frac{\|\tilde y_k\|^2}{\tilde y_k^T s_k} \le M_1, \quad k = 0, 1, 2, \dots$$

Proof. By Assumption A, there exists a positive number $M_0$ such that (see [21])

$$\frac{\|y_k\|^2}{y_k^T s_k} \le M_0, \quad k \ge 0. \qquad (36)$$

Since the function $L(z)$ is convex, we have $L(z_{k+1}) \ge L(z_k) + \nabla_x L(z_k)^T s_k$ and $L(z_k) \ge L(z_{k+1}) - \nabla_x L(z_{k+1})^T s_k$; these two inequalities together with the definition of $\tilde A_k$ imply that

$$|\tilde A_k| \le \frac{y_k^T s_k}{\|s_k\|^2}. \qquad (37)$$

Using the definition of $\tilde y_k$, we get

$$\tilde y_k^T s_k = y_k^T s_k + \max\{\tilde A_k, 0\}\, \|s_k\|^2 \ge y_k^T s_k \qquad (38)$$

and

$$\|\tilde y_k\| \le \|y_k\| + \max\{\tilde A_k, 0\}\, \|s_k\| \le 2\|y_k\|, \qquad (39)$$

where the second inequality of (39) follows from (37). Combining (38), (39), and (36), we obtain

$$\frac{\|\tilde y_k\|^2}{\tilde y_k^T s_k} \le \frac{4\|y_k\|^2}{y_k^T s_k} \le 4M_0.$$

Letting $M_1 = 4M_0$, we get the conclusion of this lemma. The proof is complete.

Lemma 4.2 Let $B_{k+1}$ be generated by (30). Then we have

$$\det(B_{k+1}) = \det(B_{k-\tilde m+1}) \prod_{l=k-\tilde m+1}^{k} \frac{\tilde y_l^T s_l}{s_l^T B_l s_l}, \qquad (40)$$

where $\det(B_k)$ denotes the determinant of $B_k$.

Proof. To begin with, we take the determinant on both sides of a single update of the form (20):

$$\det(B_{k+1}) = \det(B_k)\, \det\Big(I - \frac{s_k (B_k s_k)^T}{s_k^T B_k s_k} + \frac{B_k^{-1} y_k y_k^T}{y_k^T s_k}\Big) = \det(B_k)\, \frac{y_k^T s_k}{s_k^T B_k s_k},$$

where the last equality follows from the formula (see, e.g., [8], Lemma 7.6)

$$\det(I + u_1 u_2^T + u_3 u_4^T) = (1 + u_2^T u_1)(1 + u_4^T u_3) - (u_2^T u_3)(u_4^T u_1),$$

applied with $u_1 = -s_k$, $u_2 = B_k s_k/(s_k^T B_k s_k)$, $u_3 = B_k^{-1} y_k$, and $u_4 = y_k/(y_k^T s_k)$. Therefore, applying the same expression to each of the $\tilde m$ updates in (30) (with $\tilde y_l$ in place of $y_k$) gives

$$\det(B_{k+1}) = \det(B_{k-\tilde m+1}) \prod_{l=k-\tilde m+1}^{k} \frac{\tilde y_l^T s_l}{s_l^T B_l s_l}.$$

Then we complete the proof.
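The one-step determinant identity behind Lemma 4.2 is easy to verify numerically. The sketch below (pure Python, 2×2, illustrative data of our choosing) performs one direct BFGS update of the form (20) and compares $\det(B_{k+1})$ with $\det(B_k)\,(y_k^T s_k)/(s_k^T B_k s_k)$.

```python
# Hedged numerical check (illustrative data) of the one-step identity used in
# the proof of Lemma 4.2:
#   det(B_{k+1}) = det(B_k) * (y_k^T s_k) / (s_k^T B_k s_k)  for the update (20).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def bfgs_update(B, s, y):
    Bs = [dot(row, s) for row in B]
    sBs, ys = dot(s, Bs), dot(y, s)
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(2)] for i in range(2)]

B = [[2.0, 0.5], [0.5, 1.0]]
s, y = [1.0, -1.0], [2.0, -1.0]          # y^T s = 3 > 0
B1 = bfgs_update(B, s, y)
lhs = det2(B1)
rhs = det2(B) * dot(y, s) / dot(s, [dot(row, s) for row in B])
print(lhs, rhs)                           # the two agree up to rounding
```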
Lemma 4.3 Let Assumption A hold. Then there exists a positive constant $\lambda_1$ such that

$$\|s_k\| \ge \lambda_1 \eta_k, \quad \text{where } \eta_k = \frac{-\nabla_x L(z_k)^T d_k}{\|d_k\|}.$$

Proof. By Assumption A and (31), we have

$$\big(\nabla_x L(z_{k+1}) - \nabla_x L(z_k)\big)^T d_k = \alpha_k d_k^T \Big(\int_0^1 \nabla^2_{xx} L(z_k + t \alpha_k d_k)\, dt\Big) d_k \le \alpha_k \|d_k\|^2 H^*.$$

On the other hand, using (29), we get

$$\big(\nabla_x L(z_{k+1}) - \nabla_x L(z_k)\big)^T d_k \ge (\sigma - 1)\, \nabla_x L(z_k)^T d_k = (1-\sigma)\big(-\nabla_x L(z_k)^T d_k\big).$$

Therefore $\|s_k\| = \alpha_k \|d_k\| \ge \dfrac{1-\sigma}{H^*}\, \eta_k$; let $\lambda_1 = \dfrac{1-\sigma}{H^*}$. The proof is complete.

Using Assumption A, it is not difficult to get the following lemma.

Lemma 4.4 Let Assumption A hold. Then the sequence $\{L(z_k)\}$ monotonically decreases, and $z_k \in S$ for all $k \ge 0$. Moreover,

$$\sum_{k \ge 0} \alpha_k \big(-\nabla_x L(z_k)^T d_k\big) < +\infty.$$

Similar to Lemma 2.6 in [21], it is not difficult to get the following lemma. Here we also give the proof.

Lemma 4.5 If a sequence of nonnegative numbers $m_k$ ($k = 0, 1, \dots$) satisfies

$$\prod_{j=0}^{k} m_j \ge c_1^{k+1}, \quad k = 0, 1, 2, \dots, \qquad (41)$$

for some constant $c_1 > 0$, then $\limsup_{k \to \infty} m_k > 0$.

Proof. We get this result by contradiction. Assume that $\limsup_{k\to\infty} m_k = 0$. Then, for any $\varepsilon_1$ with $0 < \varepsilon_1 < c_1$, there exists $k_1 > 0$ such that $m_k \le \varepsilon_1$ for all $k \ge k_1$. Hence, for all $k \ge k_1$,

$$c_1^{k+1} \le \prod_{j=0}^{k} m_j = \prod_{j=0}^{k_1-1} m_j \prod_{j=k_1}^{k} m_j \le \Big(\prod_{j=0}^{k_1-1} m_j\Big)\, \varepsilon_1^{\,k-k_1+1},$$

so that $(c_1/\varepsilon_1)^{k+1} \le \varepsilon_1^{-k_1} \prod_{j=0}^{k_1-1} m_j$. Since $c_1/\varepsilon_1 > 1$, the left-hand side tends to infinity as $k \to \infty$ while the right-hand side is a constant, which is a contradiction. Thus $\limsup_{k\to\infty} m_k > 0$.

Lemma 4.6 Let $\{x_k\}$ be generated by M-L-SQP-A2 and let Assumption A hold. If $\liminf_{k\to\infty} \|\nabla_x L(z_k)\| > 0$, then there exists a constant $\varepsilon_0 > 0$ such that

$$\prod_{j=0}^{k} \eta_j \ge \varepsilon_0^{k+1} \quad \text{for all } k \ge 0.$$

Proof. Assume that $\liminf_{k\to\infty} \|\nabla_x L(z_k)\| > 0$, i.e., there exists a constant $c_2 > 0$ such that

$$\|\nabla_x L(z_k)\| \ge c_2, \quad k = 0, 1, 2, \dots \qquad (42)$$

We first prove that the update matrix $B_{k+1}$ is always generated by the update formula (30), i.e., $B_{k+1}$ inherits the positive definiteness of $B_k$, or equivalently $\tilde y_k^T s_k > 0$ always holds. For $k = 0$, this conclusion holds at hand. For all $k \ge 1$, assume that $B_k$ is positive definite. We deduce that $\tilde y_k^T s_k > 0$ always holds from the following three cases.

Case 1: If $\tilde A_k > 0$. By the definition of $\tilde y_k$, the convexity of $L$ (which gives $y_k^T s_k \ge 0$), and Assumption A, we have $\tilde y_k^T s_k = y_k^T s_k + \tilde A_k \|s_k\|^2 > 0$.

Case 2: If $\tilde A_k < 0$. By the definition of $\tilde y_k$, (24), and Assumption A, we get $\tilde y_k^T s_k = y_k^T s_k > 0$.

Case 3: If $\tilde A_k = 0$.
By the definition of ky, (29), As-sumption A, )(1kxkk zLBd , and the positive defi-niteness of kB, we obtain 0)1()()1(  kkTkkkxTkkkTkkTkdBdzLdysys,So, we have 0kTkys , and 1kB will be generated by the update formula (30). Thus, the update matrix 1kB will always be generated by the update formula (30). Taking the trace operation in both sides of (30), we get ,||||||||)()(21~21~1~1lTllkmklllTlllkmklmkkkysysBssBBTrBTr (43) G. L. Yuan ET AL. Copyright © 2010 SciRes. AM 15where )( kBTr denotes the trace of kB. Repeating this trace operation, we have .||||||||)(||||||||)()(0202021~21~1~1kllTllklllTllllTllkmklllTlllkmklmkkkysysBssBBTrysysBssBBTrBTr (44) Combining (42), (44), )(1kxkk zLBd   , and Lemma 4.1, we obtain .)1()()()()( 102201 MkzLHzLcBTrBTrkljxjTjxk (45) Using 1kBis positive definite, we have 0)( 1kBTr . By (45), we obtain 2210022)1()()()( cMkBTrzLHzLckljxjTjx (46) and .)1()()( 101 MkBTrBTr k (47) By the geometric-arithmetic mean value formula we get .)1()()1()()(110220kkjjxjTjx MkBTrckzLHzL (48) Using Lemma 4.2, (30), and (38), we have .1)det(1)det()det()det()det(001~1~1~1~1~1~1 kjjkmkl lmkkkmkl llTllTlmkkkmkl llTllTlmkkkBBsBsysBsBsysBB This implies .1)det()det(010kjjkBB (49) By using the geometric-arithmetic mean value formula again, we get .)()det( 11nkknBTrB (50) Using (47), (49) and (50), we obtain 13100100110010001,])([)det(min}1,])([)det(min{)exp(1])([)det(11])1()([)det(1knnnnknnnnkjjCMBTrnBMBTrnBnMBTrnBkMkBTrnB (51) where }1,])([)det(min{)exp(11003nnMBTrnBnc. 
Let

$$\cos\theta_j = \frac{-\nabla_x L(z_j)^T d_j}{\|\nabla_x L(z_j)\|\, \|d_j\|}.$$

Multiplying (48) by (51), for all $k \ge 0$ we get

$$\prod_{j=0}^{k} \|s_j\|\, \|\nabla_x L(z_j)\| \cos\theta_j \ge c_3^{k+1}\Big[\frac{(k+1)\, c_2^2}{Tr(B_0)+(k+1)M_1}\Big]^{k+1} \ge \Big[\frac{c_3\, c_2^2}{Tr(B_0)+M_1}\Big]^{k+1}. \qquad (52)$$

According to Lemma 4.4 and Assumption A, $S$ is bounded, so there exists a constant $M_2 > 0$ such that

$$\|s_k\| = \|x_{k+1} - x_k\| \le \|x_{k+1}\| + \|x_k\| \le 2M_2. \qquad (53)$$

Combining (52) and (53), and noting that $\eta_j = \|\nabla_x L(z_j)\| \cos\theta_j$, we get for all $k \ge 0$

$$\prod_{j=0}^{k} \eta_j \ge \Big[\frac{c_3\, c_2^2}{2M_2\,(Tr(B_0)+M_1)}\Big]^{k+1},$$

which gives the conclusion with $\varepsilon_0 = c_3 c_2^2/[2M_2(Tr(B_0)+M_1)]$. The proof is complete.

Now we establish the global convergence theorem for M-L-SQP-A2.

Theorem 4.1 Let Assumption A hold and let the sequence $\{z_k\}$ be generated by M-L-SQP-A2. Then we have

$$\liminf_{k\to\infty} \|\nabla_x L(z_k)\| = 0. \qquad (54)$$

Proof. By Lemma 4.3 and (28), we get

$$L(z_{k+1}) \le L(z_k) - \delta \|s_k\| \eta_k \le L(z_k) - \delta \lambda_1 \eta_k^2. \qquad (55)$$

By (55) and Lemma 4.4, we have $\sum_k \eta_k^2 < +\infty$, which implies that

$$\lim_{k\to\infty} \eta_k = 0. \qquad (56)$$

Therefore, relation (54) can be obtained from (56), Lemma 4.5, and Lemma 4.6 directly: if (54) failed, Lemmas 4.6 and 4.5 would give $\limsup_{k\to\infty} \eta_k > 0$, contradicting (56).

5. Conclusion

For further research, the properties of the modified limited memory SQP method should be studied under weaker conditions. Moreover, numerical experiments on practical constrained problems should be carried out in the future.

6. References

[1] P. T. Boggs, J. W. Tolle and P. Wang, "On the Local Convergence of Quasi-Newton Methods for Constrained Optimization," SIAM Journal on Control and Optimization, Vol. 20, No. 2, 1982, pp. 161-171.
[2] F. H. Clarke, "Optimization and Nonsmooth Analysis," Wiley, New York, 1983.
[3] T. F. Coleman and A. R. Conn, "Nonlinear Programming via an Exact Penalty Function: Asymptotic Analysis," Mathematical Programming, Vol. 24, No. 1, 1982, pp. 123-136.
[4] M. Fukushima, "A Successive Quadratic Programming Algorithm with Global and Superlinear Convergence Properties," Mathematical Programming, Vol. 35, No. 3, 1986, pp. 253-264.
[5] M. J. D. Powell, "The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations," in O. L. Mangasarian, R. R. Meyer and S. M.
Robinson, Eds., Nonlinear Programming 3, Academic Press, New York, 1978, pp. 27-63.
[6] W. Sun, "Newton's Method and Quasi-Newton-SQP Method for General LC1 Constrained Optimization," Applied Mathematics and Computation, Vol. 92, No. 1, 1998, pp. 69-84.
[7] T. F. Coleman and A. R. Conn, "On the Local Convergence of a Quasi-Newton Method for the Nonlinear Programming Problem," SIAM Journal on Numerical Analysis, Vol. 21, No. 4, 1984, pp. 755-769.
[8] Y. Yuan and W. Sun, "Theory and Methods of Optimization," Science Press of China, Beijing, 1999.
[9] G. Yuan, "Modified Nonlinear Conjugate Gradient Methods with Sufficient Descent Property for Large-Scale Optimization Problems," Optimization Letters, Vol. 3, No. 1, 2009, pp. 11-21.
[10] G. L. Yuan and X. W. Lu, "A New Line Search Method with Trust Region for Unconstrained Optimization," Communications on Applied Nonlinear Analysis, Vol. 15, No. 1, 2008, pp. 35-49.
[11] G. L. Yuan and X. W. Lu, "A Modified PRP Conjugate Gradient Method," Annals of Operations Research, Vol. 166, No. 1, 2009, pp. 73-90.
[12] G. Yuan, X. Lu and Z. Wei, "A Conjugate Gradient Method with Descent Direction for Unconstrained Optimization," Journal of Computational and Applied Mathematics, Vol. 233, No. 2, 2009, pp. 519-530.
[13] G. L. Yuan and Z. X. Wei, "New Line Search Methods for Unconstrained Optimization," Journal of the Korean Statistical Society, Vol. 38, No. 1, 2009, pp. 29-39.
[14] W. C. Davidon, "Variable Metric Methods for Minimization," SIAM Journal on Optimization, Vol. 1, No. 1, 1991, pp. 1-17.
[15] D. H. Li and M. Fukushima, "A Modified BFGS Method and Its Global Convergence in Nonconvex Minimization," Journal of Computational and Applied Mathematics, Vol. 129, No. 1-2, 2001, pp. 15-35.
[16] D. H. Li and M. Fukushima, "On the Global Convergence of the BFGS Method for Nonconvex Unconstrained Optimization Problems," SIAM Journal on Optimization, Vol. 11, No. 4, 2000, pp. 1054-1064.
[17] M. J. D. Powell, "A New Algorithm for Unconstrained Optimization," in J.
B. Rosen, O. L. Mangasarian and K. Ritter, Eds., Nonlinear Programming, Academic Press, New York, 1970.
[18] Z. Wei, G. Yu, G. Yuan and Z. Lian, "The Superlinear Convergence of a Modified BFGS-Type Method for Unconstrained Optimization," Computational Optimization and Applications, Vol. 29, 2004, pp. 315-332.
[19] J. Z. Zhang, N. Y. Deng and L. H. Chen, "New Quasi-Newton Equation and Related Methods for Unconstrained Optimization," Journal of Optimization Theory and Applications, Vol. 102, No. 1, 1999, pp. 147-167.
[20] Z. Wei, G. Li and L. Qi, "New Quasi-Newton Methods for Unconstrained Optimization Problems," Applied Mathematics and Computation, Vol. 175, No. 2, 2006, pp. 1156-1188.
[21] G. L. Yuan and Z. X. Wei, "Convergence Analysis of a Modified BFGS Method on Convex Minimizations," Computational Optimization and Applications, doi:10.1007/s10589-008-9219-0.
[22] R. H. Byrd, J. Nocedal and R. B. Schnabel, "Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods," Mathematical Programming, Vol. 63, No. 1-3, 1994, pp. 129-156.
[23] C. G. Broyden, J. E. Dennis and J. J. Moré, "On the Local and Superlinear Convergence of Quasi-Newton Methods," IMA Journal of Applied Mathematics, Vol. 12, No. 3, 1973, pp. 223-246.
[24] R. H. Byrd and J. Nocedal, "A Tool for the Analysis of Quasi-Newton Methods with Application to Unconstrained Minimization," SIAM Journal on Numerical Analysis, Vol. 26, No. 3, 1989, pp. 727-739.
[25] R. H. Byrd, J. Nocedal and Y. Yuan, "Global Convergence of a Class of Quasi-Newton Methods on Convex Problems," SIAM Journal on Numerical Analysis, Vol. 24, No. 5, 1987, pp. 1171-1189.
[26] J. E. Dennis and J. J. Moré, "A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods," Mathematics of Computation, Vol. 28, No. 126, 1974, pp. 549-560.
[27] A. Griewank and Ph. L. Toint, "Local Convergence Analysis for Partitioned Quasi-Newton Updates," Numerische Mathematik, Vol. 39, No. 3, 1982, pp.
429-448.
[28] A. Perry, "A Class of Conjugate Gradient Algorithms with a Two-Step Variable Metric Memory," Discussion Paper No. 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, 1977.
[29] D. F. Shanno, "On the Convergence of a New Conjugate Gradient Algorithm," SIAM Journal on Numerical Analysis, Vol. 15, No. 6, 1978, pp. 1247-1257.
[30] Z. Wei, L. Qi and X. Chen, "An SQP-Type Method and Its Application in Stochastic Programming," Journal of Optimization Theory and Applications, Vol. 116, No. 1, 2003, pp. 205-228.
[31] G. L. Yuan and Z. X. Wei, "The Superlinear Convergence Analysis of a Nonmonotone BFGS Algorithm on Convex Objective Functions," Acta Mathematica Sinica, Vol. 24, No. 1, 2008, pp. 35-42.
[32] Y. Dai, "Convergence Properties of the BFGS Algorithm," SIAM Journal on Optimization, Vol. 13, No. 3, 2003, pp. 693-701.
[33] W. F. Mascarenhas, "The BFGS Method with Exact Line Searches Fails for Non-Convex Objective Functions," Mathematical Programming, Vol. 99, No. 1, 2004, pp. 49-61.
[34] R. H. Byrd, P. Lu, J. Nocedal and C. Zhu, "A Limited Memory Algorithm for Bound Constrained Optimization," SIAM Journal on Scientific Computing, Vol. 16, No. 5, 1995, pp. 1190-1208.
[35] M. J. D. Powell, "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Lecture Notes in Mathematics, Vol. 630, Springer, Berlin, 1978, pp. 144-157.
[36] M. J. D. Powell, "Some Properties of the Variable Metric Algorithm," in F. A. Lootsma, Ed., Numerical Methods for Nonlinear Optimization, Academic Press, London, 1972.
[37] J. E. Dennis and J. J. Moré, "Quasi-Newton Methods, Motivation and Theory," SIAM Review, Vol. 19, No. 1, 1977, pp. 46-89.
[38] J. Y. Han and G. H. Liu, "Global Convergence Analysis of a New Nonmonotone BFGS Algorithm on Convex Objective Functions," Computational Optimization and Applications, Vol. 7, No. 3, 1997, pp. 277-289.