Applied Mathematics, 2011, 2, 1119-1123
doi:10.4236/am.2011.29154 Published Online September 2011 (http://www.SciRP.org/journal/am)
Copyright © 2011 SciRes. AM

A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization

Hao Fan, Zhibin Zhu, Anwa Zhou
School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin, China
E-mail: 540103003@qq.com
Received June 15, 2011; revised July 10, 2011; accepted July 17, 2011

Abstract

In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. Using a steplength technique that ensures the Zoutendijk condition, the method is proved to be globally convergent. Finally, we generalize the method and give a further analysis.

Keywords: Large-Scale Unconstrained Optimization, Conjugate Gradient Method, Sufficient Descent Property, Global Convergence

1. Introduction

The nonlinear conjugate gradient method is designed to solve the following unconstrained optimization problem:

    min f(x),  x ∈ R^n,

where f : R^n → R is a smooth nonlinear function whose gradient at x is denoted by g(x). The iterative formula of the conjugate gradient methods is given by

    x_{k+1} = x_k + t_k d_k,  k = 1, 2, ...,  (1.1)

where t_k is a steplength computed by carrying out some line search, and d_k is the search direction defined by

    d_k = -g_k,                  if k = 1,
    d_k = -g_k + β_k d_{k-1},    if k ≥ 2,   (1.2)

where β_k is a scalar and g_k denotes g(x_k). There are some well-known formulas for β_k, such as

    β_k^{PRP} = g_k^T y_{k-1} / ||g_{k-1}||^2   (Polak-Ribière-Polyak),  (1.3)

    β_k^{FR} = ||g_k||^2 / ||g_{k-1}||^2   (Fletcher-Reeves),  (1.4)

where y_{k-1} = g_k - g_{k-1} and ||·|| stands for the Euclidean norm of vectors. In addition, the sufficient descent condition

    g_k^T d_k ≤ -c ||g_k||^2,  (1.5)

where c > 0, has often been used in the literature to analyze the global convergence of conjugate gradient methods with inexact line searches.
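For concreteness, one update of the scheme (1.1)-(1.2) with the classical choices (1.3) and (1.4) can be sketched in NumPy as follows. This is our own minimal illustration (the function names are ours), not code from the paper:

```python
import numpy as np

def beta_prp(g_new, g_old):
    # Polak-Ribiere-Polyak (1.3): g_k^T y_{k-1} / ||g_{k-1}||^2, with y_{k-1} = g_k - g_{k-1}
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def beta_fr(g_new, g_old):
    # Fletcher-Reeves (1.4): ||g_k||^2 / ||g_{k-1}||^2
    return (g_new @ g_new) / (g_old @ g_old)

def next_direction(g_new, d_old, beta):
    # (1.2): d_k = -g_k + beta_k d_{k-1}  (with d_1 = -g_1 at the first step)
    return -g_new + beta * d_old
```

In a full method, the steplength t_k in x_{k+1} = x_k + t_k d_k would then come from an (exact or Wolfe-Powell) line search along d_k.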
0cGenerally, the PRP method was much better than the FR method judging from the numerical calculation. When the objective function was convex, Polak and Ri-bie`re proved that the PRP method with the exact line search was globally convergent. But Powell showed that there existed nonconvex functions on which the PRP method did not converge globally. He suggested that k should not be less than zero. Under the sufficient descent condition, Gilbert and Nocedal proved that the modified PRP method max 0,PRPkk was globally conver-gent with the Wolfe-Powell line search. Recently, G. Yu  proposed a modified FR (MFR) formula such as 21k22131() ,kMFRTkk kggd g (1.6) where 1(0, ),211,,30, and 1 was an any given positive constant. They proved that for any line search, (1.6) satisfied the condition (1.5), in which 121c . In fact, the term 2Tkk1gd in the denominator of (1.6) played an important role in en-hancing descent. It essentially controled the relative H. FAN ET AL. 1120 weight placed on conjugacy versus descent. Along this way, G. Yu  proposed a new nonlinear conjugate gra-dient formula such as 21212k11() ,0otherwisTkkk TkkkNTkk kggg ifgg ggd g e, (1.7) in which 1. It possessed the sufficient descent property for any line search, and had an advancement that the directions would approach to the steepest descent directions while the steplength was small. They also proved the algorithm which possed the global conver-gence property with the weak Wolfe-Powell. Z. Wei  proposed a new nonlinear conjugate gradi-ent formula such as 11*21kTkk kkkkggg ggg, (1.8) In , a new conjugate gradient formula *k was given to compute the search directions for unconstrained optimization problems. It was discussed some general convergence results for the proposed formula with some line searches such as the exact line search, the Wolfe- Powell line search and the Grippo-Lucidi line search. 
The formula satisfies β_k^* ≥ 0 and has a form similar to β_k^{PRP}. Combining the algorithms above, in this paper we propose a new modified scalar formula based on β_k^N, denoted β_k^{MN}; the resulting method is called the MN algorithm:

    β_k^{MN} = (||g_k||^2 - (||g_k||/||g_{k-1}||) |g_k^T g_{k-1}|) / (μ |g_k^T d_{k-1}| + ||g_{k-1}||^2),  (1.9)

in which μ > 1. Adding some parameters to β_k^{MN} yields a generalization, the VMN algorithm:

    β_k^{VMN} = (||g_k||^2 - (||g_k||/||g_{k-1}||) |g_k^T g_{k-1}|) / (μ_1 ||g_k||^2 + μ_2 |g_k^T d_{k-1}| + μ_3 ||g_{k-1}||^2),  (1.10)

in which μ_1 ∈ (0, +∞), μ_2 > 1 and μ_3 ∈ (0, +∞) are given constants.

In the next section, we present the global convergence of the MN algorithm and establish some of its properties. The global convergence results of the VMN algorithm are given in Section 3. Numerical experiments are reported in Section 4, and Section 5 concludes the paper.

2. The Global Convergence of the MN Algorithm

Firstly, β_k^{MN} in (1.9) is non-negative: by the Cauchy-Schwarz inequality,

    β_k^{MN} = (||g_k||^2 - (||g_k||/||g_{k-1}||)|g_k^T g_{k-1}|) / (μ|g_k^T d_{k-1}| + ||g_{k-1}||^2)
             ≥ (||g_k||^2 - (||g_k||/||g_{k-1}||)||g_k|| ||g_{k-1}||) / (μ|g_k^T d_{k-1}| + ||g_{k-1}||^2) = 0.

The following theorem shows that the MN algorithm possesses the sufficient descent property for any line search.

Theorem 2.1. Consider any method of the form (1.1) and (1.2) with β_k = β_k^{MN}. Then for all k ≥ 1,

    g_k^T d_k ≤ -(1 - 1/μ)||g_k||^2.  (2.1)

Proof. If g_k^T d_{k-1} = 0, then, since μ > 1,

    g_k^T d_k = -||g_k||^2 ≤ -(1 - 1/μ)||g_k||^2.

Otherwise, from the definition of β_k^{MN} we obtain

    β_k^{MN} ≤ ||g_k||^2 / (μ|g_k^T d_{k-1}|),

and then

    g_k^T d_k = -||g_k||^2 + β_k^{MN} g_k^T d_{k-1}
              ≤ -||g_k||^2 + (||g_k||^2 / (μ|g_k^T d_{k-1}|)) |g_k^T d_{k-1}|
              = -(1 - 1/μ)||g_k||^2.

Since g_1^T d_1 = -||g_1||^2, (2.1) holds for all k ≥ 1. ∎

In order to establish the global convergence result for the MN algorithm, we impose the following assumptions.

Assumption A.
1) The level set Ω = {x ∈ R^n : f(x) ≤ f(x_0)} is bounded.
2) In some neighborhood N of Ω, f is differentiable and its gradient g is Lipschitz continuous; that is, there exists a constant L > 0 such that

    ||g(x) - g(y)|| ≤ L||x - y||  for all x, y ∈ N.  (2.2)

Under Assumption A, there exist constants B > 0 and M > 0 such that

    ||x|| ≤ B,  ||g(x)|| ≤ M,  for all x ∈ N.  (2.3)
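Before turning to the convergence analysis, the descent bound (2.1) of Theorem 2.1 can also be observed numerically. The sketch below implements (1.9) as reconstructed above and checks the bound on random data; it is our own illustration (the helper names are ours), assuming μ > 1:

```python
import numpy as np

def beta_mn(g_new, g_old, d_old, mu=2.0):
    # (1.9): numerator uses |g_k^T g_{k-1}|; denominator adds mu * |g_k^T d_{k-1}|
    nk, nk1 = np.linalg.norm(g_new), np.linalg.norm(g_old)
    num = nk**2 - (nk / nk1) * abs(g_new @ g_old)
    den = mu * abs(g_new @ d_old) + nk1**2
    return num / den

# Check Theorem 2.1: g_k^T d_k <= -(1 - 1/mu) ||g_k||^2 for any previous direction.
mu = 2.0
rng = np.random.default_rng(1)
for _ in range(1000):
    g_new, g_old, d_old = rng.standard_normal((3, 4))
    b = beta_mn(g_new, g_old, d_old, mu)
    assert b >= -1e-12                      # beta^MN is non-negative
    d_new = -g_new + b * d_old              # (1.2)
    assert g_new @ d_new <= -(1 - 1/mu) * (g_new @ g_new) + 1e-10
```

Note that the check holds no matter how d_old is chosen, which is exactly the point of Theorem 2.1: the sufficient descent property does not depend on the line search.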
The following important result is due to Zoutendijk [5].

Lemma 2.2. Suppose that Assumption A holds. Consider any iterative method of the form (1.1) and (1.2) in which d_k is a descent direction and t_k is obtained by the Wolfe-Powell line search, i.e.,

    f(x_k + t_k d_k) ≤ f(x_k) + δ t_k g_k^T d_k,
    g(x_k + t_k d_k)^T d_k ≥ σ g_k^T d_k,

with 0 < δ < σ < 1. Then

    Σ_{k≥1} (g_k^T d_k)^2 / ||d_k||^2 < +∞.  (2.4)

We now analyse the global convergence of the MN algorithm. Gilbert and Nocedal [6] introduced the following Property A, which pertains to the PRP method under the sufficient descent condition. We show that this property also pertains to the new method.

Property A. Consider a method of the form (1.1) and (1.2), and suppose that there exists a constant γ > 0 such that

    ||g_k|| ≥ γ  for all k.  (2.5)

We say that the method has Property A if there exist constants b > 1 and λ > 0 such that |β_k| ≤ b for all k, and

    |β_k| ≤ 1/(2b)  whenever  ||s_{k-1}|| ≤ λ,

where s_{k-1} = x_k - x_{k-1} = t_{k-1} d_{k-1}.

The following lemma shows that the new method has Property A.

Lemma 2.3. Consider the method of the form (1.1) and (1.2) with β_k = β_k^{MN}. Suppose that Assumption A and (2.5) hold. Then the method has Property A.

Proof. Set b = 2M^2/γ^2 and λ = γ^2/(4MLb). By (1.9), (2.3) and (2.5),

    β_k^{MN} ≤ ||g_k||^2 / (μ|g_k^T d_{k-1}| + ||g_{k-1}||^2) ≤ ||g_k||^2 / ||g_{k-1}||^2 ≤ M^2/γ^2 < b.

Now suppose ||s_{k-1}|| ≤ λ. From g_k^T g_{k-1} ≥ ||g_{k-1}||^2 - ||g_{k-1}|| ||g_k - g_{k-1}|| and ||g_k|| ≤ ||g_{k-1}|| + ||g_k - g_{k-1}||, we get

    ||g_k|| ||g_{k-1}|| - |g_k^T g_{k-1}| ≤ ||g_k|| ||g_{k-1}|| - g_k^T g_{k-1} ≤ 2||g_{k-1}|| ||g_k - g_{k-1}||.

Writing the numerator of (1.9) as (||g_k||/||g_{k-1}||)(||g_k|| ||g_{k-1}|| - |g_k^T g_{k-1}|) and using (2.2), (2.3) and (2.5), we obtain

    β_k^{MN} ≤ (||g_k||/||g_{k-1}||) · 2||g_{k-1}|| ||g_k - g_{k-1}|| / ||g_{k-1}||^2
             = 2||g_k|| ||g_k - g_{k-1}|| / ||g_{k-1}||^2
             ≤ 2ML||s_{k-1}||/γ^2 ≤ 2MLλ/γ^2 = 1/(2b).

The proof is finished. ∎

If (2.5) holds and the method has Property A, then steps of small length cannot occur too often. The following lemma makes this precise.

Lemma 2.4. Suppose that Assumption A and (1.5) hold. Let {x_k} and {d_k} be generated by (1.1) and (1.2), where t_k satisfies the Wolfe-Powell line search and β_k has Property A. If (2.5) holds, then for any Δ ∈ Z^+ there exist λ > 0 and k_0 ∈ N such that, for all k ≥ k_0,

    |K_{k,Δ}^λ| > Δ/2,

where K_{k,Δ}^λ := {i ∈ Z^+ : k ≤ i ≤ k + Δ - 1, ||s_{i-1}|| > λ} and |K_{k,Δ}^λ| denotes the number of elements of K_{k,Δ}^λ.

Lemma 2.5. Suppose that Assumption A and (1.5) hold. Let {x_k} be generated by (1.1) and (1.2), where t_k satisfies the Wolfe-Powell line search and β_k ≥ 0 has Property A.
Then

    lim inf_{k→∞} ||g_k|| = 0.

The proofs of Lemmas 2.4 and 2.5 are given in [6]. By the above three lemmas, it is easy to obtain the following convergence result.

Theorem 2.6. Suppose that Assumption A holds. Let {x_k} be generated by (1.1) and (1.2), where t_k satisfies the Wolfe-Powell line search and β_k is computed by (1.9). Then

    lim inf_{k→∞} ||g_k|| = 0.

3. The Global Convergence of the VMN Algorithm

In this section we add some parameters to β_k^{MN} so as to generalize it, which gives the VMN algorithm:

    β_k^{VMN} = (||g_k||^2 - (||g_k||/||g_{k-1}||)|g_k^T g_{k-1}|) / (μ_1||g_k||^2 + μ_2|g_k^T d_{k-1}| + μ_3||g_{k-1}||^2),  (3.1)

in which μ_1 ∈ (0, +∞), μ_2 > 1 and μ_3 ∈ (0, +∞) are given constants.

Theorem 3.1. Suppose that Assumption A holds. Let {x_k} be generated by (1.1) and (1.2), where t_k satisfies the Wolfe-Powell line search and β_k is computed by (3.1). Then

    lim inf_{k→∞} ||g_k|| = 0.

Similar to the discussion of the MN algorithm in Section 2, we can show that β_k^{VMN} ≥ 0 and that the VMN algorithm possesses the sufficient descent property with c = 1 - 1/μ_2; the proof is similar to that of Theorem 2.1.

4. Numerical Experiments

In this section we report some numerical experiments. The MN algorithm has been tested on some problems from [8]; the results are summarized in Table 1.

Table 1. Numerical results of the MN algorithm.

No.   | x_0       | x_k                                    | ||g_k||                | k
S201  | (8, 9)    | (5.00000000000007, 6.00000000000002)   | 5.338654227543967e-013 | 2
S202  | (15, -2)  | (11.41277974501077, -0.89680520867268) | 7.343412874359107e-007 | 30
S205  | (0, 0)    | (3.00000000072742, 0.49999999510645)   | 2.411787046907220e-007 | 12
S206  | (-1.2, 1) | (1.00000000400789, 1.00000020307759)   | 3.907063255896894e-007 | 5
S311  | (1, 1)    | (-3.77931025686871, -3.28318599743028) | 5.006611497285485e-007 | 6
S314  | (2, 2)    | (1.79540285273286, 1.37785978124895)   | 7.487443339742852e-008 | 6

For each test problem, No.
is the number of the test problem in [8], x_0 is the initial point, x_k is the final point, and k is the number of iterations.

Table 1 shows the performance of the MN algorithm in terms of iterations. It is easy to see that the algorithm is efficient on all the problems: the results are accurate and the iteration counts are small.

5. Conclusions

In this paper, we have proposed a new nonlinear conjugate gradient method, the MN algorithm. The sufficient descent property holds without any line search, and the algorithm satisfies Property A. We have also proved that, with a steplength technique that ensures the Zoutendijk condition, the method is globally convergent. Judging from the numerical experiments in Table 1, the MN algorithm attains high accuracy with few iterations. Finally, we have proposed the VMN algorithm; it also has the sufficient descent property and Property A, and it is globally convergent under the weak Wolfe-Powell line search.

6. References

[1] M. Al-Baali, "Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search," IMA Journal of Numerical Analysis, Vol. 5, No. 1, 1985, pp. 121-124. doi:10.1093/imanum/5.1.121
[2] Y. F. Hu and C. Storey, "Global Convergence Result for Conjugate Gradient Method," Journal of Optimization Theory and Applications, Vol. 71, No. 2, 1991, pp. 399-405. doi:10.1007/BF00939927
[3] G. Yu, Y. Zhao and Z. Wei, "A Descent Nonlinear Conjugate Gradient Method for Large-Scale Unconstrained Optimization," Applied Mathematics and Computation, Vol. 187, No. 2, 2007, pp. 636-643. doi:10.1016/j.amc.2006.08.087
[4] Z. Wei, S. Yao and L. Liu, "The Convergence Properties of Some New Conjugate Gradient Methods," Applied Mathematics and Computation, Vol. 183, No. 2, 2006, pp. 1341-1350. doi:10.1016/j.amc.2006.05.150
[5] G. Zoutendijk, "Nonlinear Programming, Computational Methods," In: J.
Abadie, Ed., Integer and Nonlinear Programming, North-Holland Publishing Co., Amsterdam, 1970, pp. 37-86.
[6] J. C. Gilbert and J. Nocedal, "Global Convergence Properties of Conjugate Gradient Methods for Optimization," SIAM Journal on Optimization, Vol. 2, No. 1, 1992, pp. 21-42. doi:10.1137/0802003
[7] Y. H. Dai and Y. Yuan, "Nonlinear Conjugate Gradient Methods," Shanghai Scientific and Technical Publishers, Shanghai, 1998, pp. 37-48.
[8] W. Hock and K. Schittkowski, "Test Examples for Nonlinear Programming Codes," Journal of Optimization Theory and Applications, Vol. 30, No. 1, 1981, pp. 127-129. doi:10.1007/BF00934594