J. Software Engineering & Applications, 2010, 3: 503-509. doi:10.4236/jsea.2010.35057. Published Online May 2010 (http://www.SciRP.org/journal/jsea). Copyright © 2010 SciRes. JSEA

A Line Search Algorithm for Unconstrained Optimization*

Gonglin Yuan1, Sha Lu2, Zengxin Wei1

1College of Mathematics and Information Science, Guangxi University, Nanning, China; 2School of Mathematical Science, Guangxi Teachers Education University, Nanning, China.
Email: glyuan@gxu.edu.cn

Received February 6th, 2010; revised March 30th, 2010; accepted March 31st, 2010.

ABSTRACT

It is well known that line search methods play a very important role in optimization. In this paper a new line search method is proposed for solving unconstrained optimization problems. Under weak conditions, this method possesses global convergence for nonconvex functions and R-linear convergence for convex functions. Moreover, the search direction has the sufficient descent property and lies in a trust region without carrying out any line search rule. Numerical results show that the new method is effective.

Keywords: Line Search, Unconstrained Optimization, Global Convergence, R-Linear Convergence

1. Introduction

Consider the unconstrained optimization problem

$\min_{x \in R^n} f(x)$,   (1)

where $f : R^n \to R$ is continuously differentiable. A line search algorithm for (1) often generates a sequence of iterates $\{x_k\}$ by letting

$x_{k+1} = x_k + \alpha_k d_k$, $k = 0, 1, 2, \ldots$,   (2)

where $x_k$ is the current iterate, $d_k$ is a search direction, and $\alpha_k > 0$ is a steplength. Different choices of $d_k$ and $\alpha_k$ determine different line search methods [1-3]. Each iteration of such a method is divided into two stages: 1) choose a descent search direction $d_k$; 2) choose a step size $\alpha_k$ along the search direction $d_k$. Throughout this paper, we denote $f(x_k)$ by $f_k$, $\nabla f(x_k)$ by $g_k$, and $\nabla f(x_{k+1})$ by $g_{k+1}$; $\|\cdot\|$ denotes the Euclidean norm of a vector.
One simple line search method is the steepest descent method, obtained by taking $d_k = -g_k$ as the search direction at every iteration; it has wide applications in solving large-scale minimization problems. However, the steepest descent method often exhibits a zigzag phenomenon on practical problems, which makes the algorithm converge to an optimal solution very slowly, or even fail to converge [5,6]. Hence the steepest descent method is not the fastest of the line search methods. If $d_k = -H_k g_k$ is the search direction at each iteration, where $H_k$ is an $n \times n$ matrix approximating $[\nabla^2 f(x_k)]^{-1}$, then the corresponding line search method is called a Newton-like method [4-6], such as the Newton method, quasi-Newton methods, variable metric methods, etc. Many papers [7-10] have developed such methods for optimization problems. However, one drawback of Newton-like methods is that they are required to store and compute the matrix $H_k$ at each iteration, which adds storage and computational cost. Accordingly, these methods are not suitable for solving large-scale optimization problems in many cases.

The conjugate gradient method is a powerful line search method for solving large-scale optimization problems because of its simplicity and its very low memory requirement. The search direction of the conjugate gradient method often takes the form

$d_k = -g_k$ if $k = 0$; $d_k = -g_k + \beta_{k-1} d_{k-1}$ if $k \ge 1$,   (3)

where $\beta_k \in R$ is a scalar which determines the different conjugate gradient methods [11-13]. The convergence behavior of the different conjugate gradient methods under various line search conditions has been widely studied by many authors for many years (see [4,15]).

*This work is supported by China NSF grant 10761001, the Scientific Research Foundation of Guangxi University (Grant No. X081082), and Guangxi SF grant 0991028.
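The three classical direction choices above (steepest descent, Newton-like, and the conjugate gradient form (3)) can be sketched as follows. This is a minimal illustration in Python/NumPy, not the paper's implementation:

```python
import numpy as np

def steepest_descent_direction(g):
    """Steepest descent: d_k = -g_k."""
    return -g

def newton_like_direction(H, g):
    """Newton-like: d_k = -H_k g_k, where H_k approximates the inverse Hessian."""
    return -H @ g

def cg_direction(g_new, beta, d_prev=None):
    """Conjugate gradient form (3): d_0 = -g_0; d_k = -g_k + beta_{k-1} d_{k-1}."""
    if d_prev is None:
        return -g_new
    return -g_new + beta * d_prev
```

Note that `cg_direction` needs only the current gradient, the scalar `beta`, and the previous direction, which is the source of the conjugate gradient method's low memory footprint mentioned above.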
At present, one of the most efficient formulas for $\beta_k$ from the computational point of view is the following PRP method:

$\beta_k^{PRP} = \frac{g_{k+1}^{T}(g_{k+1} - g_k)}{\|g_k\|^2}$.   (4)

If $x_{k+1} \to x_k$, it is easy to see that $\beta_k^{PRP} \to 0$, which implies that the direction $d_k$ of the PRP method turns toward the steepest descent direction, acting as an automatic restart, when the next iterate is close to the current one. This property is very important for the efficiency of the PRP conjugate gradient method (see [4,15] etc.). Concerning convergence, Polak and Ribière [16] proved that the PRP method with the exact line search is globally convergent when the objective function is convex, and Powell [17] gave a counterexample showing that there exist nonconvex functions on which the PRP method does not converge globally even when the exact line search is used. It is well known that the following sufficient descent condition

$g_k^{T}d_k \le -c\|g_k\|^2$, for all $k \ge 0$ and some constant $c > 0$,   (5)

is very important for ensuring the global convergence of nonlinear conjugate gradient algorithms, and it may be crucial for conjugate gradient methods [14]. It has been shown that the PRP method with the following strong Wolfe-Powell (SWP) line search rules, which find a step size $\alpha_k$ satisfying

$f(x_k + \alpha_k d_k) \le f_k + \sigma_1 \alpha_k g_k^{T}d_k$,   (6)

and

$|g(x_k + \alpha_k d_k)^{T}d_k| \le \sigma_2 |g_k^{T}d_k|$,   (7)

does not ensure condition (5) at each iteration, where $0 < \sigma_1 < 1/2$ and $\sigma_1 < \sigma_2 < 1$. Grippo and Lucidi [18] presented a new line search rule which can ensure the sufficient descent condition and established the convergence of the PRP method with their line search technique. Powell [17] suggested that $\beta_k$ should not be less than zero. Considering this idea, Gilbert and Nocedal [14] proved that the modified PRP method with $\beta_k = \max\{0, \beta_k^{PRP}\}$ is globally convergent under the sufficient descent assumption and the following weak Wolfe-Powell (WWP) line search technique: find a steplength $\alpha_k$ satisfying (6) and

$g(x_k + \alpha_k d_k)^{T}d_k \ge \sigma_2 g_k^{T}d_k$
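For concreteness, the PRP scalar (4) and the Wolfe-Powell step-size tests (6)-(8) can be sketched as below. This is a hedged illustration in Python, with parameter names `sigma1`, `sigma2` chosen to match the constants above (the values 0.1 and 0.9 are the ones used in the numerical section):

```python
import numpy as np

def beta_prp(g_new, g_old):
    """(4): beta^PRP_k = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2."""
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def wolfe_conditions(f, grad, x, d, alpha, sigma1=0.1, sigma2=0.9, strong=False):
    """Check the sufficient decrease test (6) together with the weak
    curvature condition (8), or the strong condition (7) if strong=True."""
    g0_d = grad(x) @ d                       # g_k^T d_k (should be negative)
    armijo = f(x + alpha * d) <= f(x) + sigma1 * alpha * g0_d
    g1_d = grad(x + alpha * d) @ d           # g(x_k + alpha d_k)^T d_k
    if strong:
        curvature = abs(g1_d) <= sigma2 * abs(g0_d)     # (7)
    else:
        curvature = g1_d >= sigma2 * g0_d               # (8)
    return bool(armijo and curvature)
```

For example, on $f(x) = \frac{1}{2}\|x\|^2$ with $d_k = -g_k$, a unit step satisfies both conditions, while a tiny step satisfies (6) but violates the curvature condition, which is exactly what (7)/(8) are designed to rule out.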
(8)

Over the past few years, much effort has been devoted to finding new formulas for conjugate gradient methods such that they possess not only the global convergence property for general functions but also good numerical performance (see [14,15]). In recent years, some good results on nonlinear conjugate gradient methods have been given [19-25]. These observations motivate us to propose a new method which possesses not only simplicity and low memory requirements but also desirable theoretical features. In this paper, we design a new line search method which possesses not only the sufficient descent property but also the property

$\|d_k\| \le c_1 \|g_k\|$, for all $k \ge 0$ and some constant $c_1 > 0$,   (9)

whatever line search rule is used; property (9) implies that the search direction $d_k$ lies in a trust region automatically.

This paper is organized as follows. In the next section, the algorithms and other line search rules are stated. The global convergence and the R-linear convergence of the new method are established in Section 3. Numerical results and conclusions are presented in Section 4 and Section 5, respectively.

2. The Algorithms

Besides the inexact line search techniques WWP and SWP, there exist other line search rules which are often used to analyze the convergence of line search methods:

1) The exact minimization rule. The step size $\alpha_k$ is chosen such that

$f(x_k + \alpha_k d_k) = \min_{\alpha \ge 0} f(x_k + \alpha d_k)$.   (10)

2) Goldstein rule. The step size $\alpha_k$ is chosen to satisfy (6) and

$f(x_k + \alpha_k d_k) \ge f_k + \sigma_2 \alpha_k g_k^{T}d_k$.   (11)

Now we give our algorithm as follows.

1) Algorithm 1 (New Algorithm)

Step 0: Choose an initial point $x_0 \in R^n$ and constants $0 < \epsilon < 1$, $0 < \sigma_1 < 1/2$, $\sigma_1 < \sigma_2 < 1$. Set $d_0 = -g_0 = -\nabla f(x_0)$, $k := 0$.
Step 1: If $\|g_k\| \le \epsilon$, then stop; otherwise go to Step 2.
Step 2: Compute the steplength $\alpha_k$ by one line search technique, and let $x_{k+1} = x_k + \alpha_k d_k$.
Step 3: If $\|g_{k+1}\| \le \epsilon$, then stop; otherwise go to Step 4.
Step 4: Calculate the search direction $d_{k+1}$ by (3), where $\beta_k$ is defined by (4).
Step 5: Let

$d_{k+1}^{new} = \lambda_{k+1}\left[d_{k+1} + \left(\frac{\min\{0,\, g_{k+1}^{T}d_{k+1}\} - g_{k+1}^{T}d_{k+1}}{\|g_{k+1}\|^2} - 1\right)g_{k+1}\right]$,

where

$\lambda_{k+1} = \frac{\|g_{k+1}\|}{(\|y_k^{*}\|/\|s_k\|)\|d_{k+1}\| + \|g_{k+1}\|}$,

$s_k = x_{k+1} - x_k$, $\|y_k^{*}\| = \max\{\|s_k\|, \|y_k\|\}$, and $y_k = g_{k+1} - g_k$.
Step 6: Let $d_{k+1} := d_{k+1}^{new}$, $k := k + 1$, and go to Step 2.

Remark. In Step 5 of Algorithm 1, we have $\|y_k^{*}\|/\|s_k\| = \max\{\|s_k\|, \|y_k\|\}/\|s_k\| \ge 1$, which can increase the convergence speed of the algorithm from the computational point of view.

Here we give the normal PRP conjugate gradient algorithm and one modified PRP conjugate gradient algorithm [14] as follows.

2) Algorithm 2 (PRP Algorithm)

Step 0: Choose an initial point $x_0 \in R^n$ and constants $0 < \epsilon < 1$, $0 < \sigma_1 < 1/2$, $\sigma_1 < \sigma_2 < 1$. Set $d_0 = -g_0 = -\nabla f(x_0)$, $k := 0$.
Step 1: If $\|g_k\| \le \epsilon$, then stop; otherwise go to Step 2.
Step 2: Compute the steplength $\alpha_k$ by one line search technique, and let $x_{k+1} = x_k + \alpha_k d_k$.
Step 3: If $\|g_{k+1}\| \le \epsilon$, then stop; otherwise go to Step 4.
Step 4: Calculate the search direction $d_{k+1}$ by (3), where $\beta_k$ is defined by (4).
Step 5: Let $k := k + 1$ and go to Step 2.

3) Algorithm 3 (PRP+ Algorithm, see [14])

Step 0: Choose an initial point $x_0 \in R^n$ and constants $0 < \epsilon < 1$, $0 < \sigma_1 < 1/2$, $\sigma_1 < \sigma_2 < 1$. Set $d_0 = -g_0 = -\nabla f(x_0)$, $k := 0$.
Step 1: If $\|g_k\| \le \epsilon$, then stop; otherwise go to Step 2.
Step 2: Compute the steplength $\alpha_k$ by one line search technique, and let $x_{k+1} = x_k + \alpha_k d_k$.
Step 3: If $\|g_{k+1}\| \le \epsilon$, then stop; otherwise go to Step 4.
Step 4: Calculate the search direction $d_{k+1}$ by (3), where $\beta_k = \max\{0, \beta_k^{PRP}\}$.
Step 5: Let $k := k + 1$ and go to Step 2.

We will concentrate on the convergence results of Algorithm 1 in the following section.

3. Convergence Analysis

The following assumptions are often needed to analyze the convergence of line search methods (see [15,26]).
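The modification in Step 5 of Algorithm 1 can be sketched as below. This is an illustrative reconstruction, assuming the scaling $\lambda_{k+1}$ and the $\min\{0,\cdot\}$ correction take the forms stated in Step 5; it is not the authors' code:

```python
import numpy as np

def modified_direction(d, g_new, s, y):
    """Sketch of Algorithm 1, Step 5: rescale the PRP direction d_{k+1} and add
    a gradient correction so that, by construction, the sufficient descent
    condition (5) and the trust-region bound (9) hold for any line search.
    The exact form of the scaling lam is an assumption based on Step 5."""
    gnorm = np.linalg.norm(g_new)
    y_star = max(np.linalg.norm(s), np.linalg.norm(y))   # ||y_k^*|| = max{||s_k||, ||y_k||}
    lam = gnorm / ((y_star / np.linalg.norm(s)) * np.linalg.norm(d) + gnorm)
    gd = g_new @ d                                       # g_{k+1}^T d_{k+1}
    coef = (min(0.0, gd) - gd) / gnorm**2 - 1.0          # min{0, .} correction
    return lam * (d + coef * g_new)
```

With this form one can check directly that $g_{k+1}^{T}d_{k+1}^{new} = \lambda_{k+1}(\min\{0, g_{k+1}^{T}d_{k+1}\} - \|g_{k+1}\|^2) \le -\lambda_{k+1}\|g_{k+1}\|^2 < 0$, so the modified direction is always a descent direction, regardless of the sign of $g_{k+1}^{T}d_{k+1}$.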
Assumption A. (i) $f$ is bounded below on the level set $\Omega = \{x \in R^n : f(x) \le f(x_0)\}$. (ii) In some neighborhood $\Omega_0$ of $\Omega$, $f$ is differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L\|x - y\|$ for all $x, y \in \Omega_0$.

In the following, we assume that $g_k \ne 0$ for all $k$, for otherwise a stationary point has been found.

Lemma 3.1. Consider Algorithm 1 and let Assumption A(ii) hold. Then (5) and (9) hold.

Proof. If $k = 0$, (5) and (9) hold obviously. For $k \ge 1$, by Assumption A(ii) and Step 5 of Algorithm 1, we have

$\|d_{k+1}^{new}\| \le \lambda_{k+1}\|d_{k+1}\| + \lambda_{k+1}\left|\frac{\min\{0, g_{k+1}^{T}d_{k+1}\} - g_{k+1}^{T}d_{k+1}}{\|g_{k+1}\|^2} - 1\right|\|g_{k+1}\| \le (2\max\{1, L\} + 1)\|g_{k+1}\|$.

Now we consider the product $g_{k+1}^{T}d_{k+1}^{new}$ in the following two cases.

Case 1: $g_{k+1}^{T}d_{k+1} \ge 0$. Then $\min\{0, g_{k+1}^{T}d_{k+1}\} = 0$ and

$g_{k+1}^{T}d_{k+1}^{new} = \lambda_{k+1}\left[g_{k+1}^{T}d_{k+1} - \frac{g_{k+1}^{T}d_{k+1}}{\|g_{k+1}\|^2}\|g_{k+1}\|^2 - \|g_{k+1}\|^2\right] = -\lambda_{k+1}\|g_{k+1}\|^2$.

Case 2: $g_{k+1}^{T}d_{k+1} < 0$. Then $\min\{0, g_{k+1}^{T}d_{k+1}\} = g_{k+1}^{T}d_{k+1}$ and

$g_{k+1}^{T}d_{k+1}^{new} = \lambda_{k+1}\left[g_{k+1}^{T}d_{k+1} - \|g_{k+1}\|^2\right] \le -\lambda_{k+1}\|g_{k+1}\|^2$.

Letting $c \in (0,1)$ with $c \le \frac{1}{2\max\{1, L\} + 1}$ and using Step 6 of Algorithm 1, (5) and (9) hold, respectively. The proof is completed.

The above lemma shows that the search direction $d_k$ satisfies the sufficient descent condition (5) and the condition (9) without any line search rule. Based on Lemma 3.1 and Assumptions A(i) and A(ii), we give the global convergence theorem of Algorithm 1.

Theorem 3.1. Let $\{\alpha_k, d_k, x_{k+1}, g_{k+1}\}$ be generated by Algorithm 1 with the exact minimization rule, the Goldstein line search rule, the SWP line search rule, or the WWP line search rule, and let Assumptions A(i) and A(ii) hold. Then

$\lim_{k \to \infty} \|g_k\| = 0$   (12)

holds.

Proof. We prove the result for the exact minimization rule, the Goldstein rule, the SWP rule, and the WWP rule, respectively.

1) For the exact minimization rule.
Let the step size $\alpha_k$ be the solution of (10). By the mean value theorem, $g_k^{T}d_k < 0$, and Assumption A(ii), for any $\alpha \in \left[\frac{-g_k^{T}d_k}{5L\|d_k\|^2}, \frac{-2g_k^{T}d_k}{5L\|d_k\|^2}\right]$ we have

$f(x_k + \alpha_k d_k) - f(x_k) \le f(x_k + \alpha d_k) - f(x_k) = \alpha g_k^{T}d_k + \int_0^1 [g(x_k + t\alpha d_k) - g_k]^{T}(\alpha d_k)\,dt \le \alpha g_k^{T}d_k + \frac{L}{2}\alpha^2\|d_k\|^2 \le -\frac{3}{25}\frac{(g_k^{T}d_k)^2}{L\|d_k\|^2}$,   (13)

which together with Assumption A(i) gives

$\sum_{k=0}^{\infty} \frac{(g_k^{T}d_k)^2}{\|d_k\|^2} < \infty$.   (14)

This implies that

$\lim_{k \to \infty} \frac{(g_k^{T}d_k)^2}{\|d_k\|^2} = 0$   (15)

holds. By Lemma 3.1, we get (12).

2) For the Goldstein rule. Let the step size $\alpha_k$ be the solution of (6) and (11). By (11) and the mean value theorem, we have

$\sigma_2 \alpha_k g_k^{T}d_k \le f(x_k + \alpha_k d_k) - f_k = \alpha_k g(x_k + t_k \alpha_k d_k)^{T}d_k$,

where $t_k \in (0,1)$; thus

$g(x_k + t_k \alpha_k d_k)^{T}d_k \ge \sigma_2 g_k^{T}d_k$.

Using Assumption A(ii) again, we get

$\alpha_k \ge \frac{(1 - \sigma_2)[-g_k^{T}d_k]}{L\|d_k\|^2}$,

which, combined with (6) and Assumption A(i), yields (14) and (15), respectively. By Lemma 3.1, (12) holds.

3) For the strong Wolfe-Powell rule. Let the step size $\alpha_k$ be the solution of (6) and (7). By (7), we have

$g(x_k + \alpha_k d_k)^{T}d_k \ge \sigma_2 g_k^{T}d_k$.

Similar to the proof of case 2), we obtain (12) immediately.

4) For the weak Wolfe-Powell rule. Let the step size $\alpha_k$ be the solution of (6) and (8). Similar to the proof of case 3), we also get (12). This concludes the proof of the theorem.

By Lemma 3.1, there exists a constant $\tau_0 > 0$ such that

$-g_k^{T}d_k \ge \tau_0 \|g_k\|\|d_k\|$, for all $k$.   (16)

By the proof process of Lemma 3.1, we can deduce that there exists a positive number $\tau_1$ satisfying

$f(x_k + \alpha_k d_k) \le f_k - \tau_1 \frac{(g_k^{T}d_k)^2}{\|d_k\|^2}$, for all $k$.   (17)

Similar to the proof of Theorem 4.1 in [27], it is not difficult to prove the linear convergence rate of Algorithm 1. We state the theorem as follows but omit the proof.

Theorem 3.2 (see [27]). Assume (16), (17), and that the function $f$ is twice continuously differentiable and uniformly convex on $R^n$.
Let $\{\alpha_k, d_k, x_{k+1}, g_{k+1}\}$ be generated by Algorithm 1 with the exact minimization rule, the Goldstein line search rule, the SWP line search rule, or the WWP line search rule. Then $\{x_k\}$ converges to $x^*$ at least R-linearly, where $x^*$ is the unique minimizer of $f(x)$.

4. Numerical Results

In this section, we report numerical experiments with Algorithm 1, Algorithm 2, and Algorithm 3. We test these algorithms in MATLAB on test problems [28] with the given initial points. The parameters common to these methods were set identically: $\sigma_1 = 0.1$, $\sigma_2 = 0.9$, $\epsilon = 10^{-6}$. In this experiment, the following Himmelblau stopping rule is used: if $|f(x_k)| > e_1$, let $stop = \frac{|f(x_k) - f(x_{k+1})|}{|f(x_k)|}$; otherwise, let $stop = |f(x_k) - f(x_{k+1})|$, where $e_1 = 10^{-5}$. The program stops if $\|g_k\| < \epsilon$ or $stop < e_2$ is satisfied, where $e_2 = 10^{-5}$. We also stop the program if the number of iterations exceeds one thousand. Since the line search cannot always ensure the descent condition $d_k^{T}g_k < 0$, an uphill search direction may occur in the numerical experiments, in which case the line search rule may fail. In order to avoid this case, the stepsize $\alpha_k$ is accepted if the searching number exceeds forty in the line search. The detailed numerical results are listed on the web site http://210.36.18.9:8018/publication.asp?id=34402.

Dolan and Moré [29] gave a new tool to analyze the efficiency of algorithms. They introduced the notion of a performance profile as a means to evaluate and compare the performance of a set of solvers $S$ on a test set $P$. Assuming that there are $n_s$ solvers and $n_p$ problems, for each problem $p$ and solver $s$ they defined $t_{p,s}$ = computing time (the number of function evaluations or another measure) required to solve problem $p$ by solver $s$.
Requiring a baseline for comparisons, they compared the performance of solver $s$ on problem $p$ with the best performance by any solver on this problem; that is, they used the performance ratio

$r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}$.

Suppose that a parameter $r_M \ge r_{p,s}$ for all $p, s$ is chosen; then $r_{p,s} = r_M$ if and only if solver $s$ does not solve problem $p$. The performance of solver $s$ on any given problem may be of interest, but we would like to obtain an overall assessment of the performance of the solver, so they defined

$\rho_s(t) = \frac{1}{n_p}\,\mathrm{size}\{p \in P : r_{p,s} \le t\}$.

Thus $\rho_s(t)$ is the probability for solver $s \in S$ that the performance ratio $r_{p,s}$ is within a factor $t \in R$ of the best possible ratio. The function $\rho_s$ is the (cumulative) distribution function for the performance ratio. The performance profile $\rho_s : R \to [0,1]$ for a solver is a nondecreasing, piecewise constant function, continuous from the right at each breakpoint. The value of $\rho_s(1)$ is the probability that the solver will win over the rest of the solvers. According to the above rules, the solver whose performance profile plot lies up and to the right wins over the rest of the solvers.

In Figures 1-3, NA denotes Algorithm 1, PRP denotes Algorithm 2, and PRP+ denotes Algorithm 3. Figures 1-3 show the performance of these methods measured by $NT = NF + m \cdot NG$, where NF and NG denote the number of function evaluations and gradient evaluations, respectively, and $m$ is an integer. According to results on automatic differentiation, the value of $m$ can be set to $m = 5$; that is to say, one gradient evaluation is equivalent to $m$ function evaluations if automatic differentiation is used.

Figure 1. Performance profiles (NT) of the methods with the Goldstein rule.
Figure 2. Performance profiles (NT) of the methods with the strong Wolfe-Powell rule.
Figure 3. Performance profiles (NT) of the methods with the weak Wolfe-Powell rule.
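The Dolan-Moré performance profile described above is straightforward to compute from a table of solver costs. A minimal sketch, assuming each problem is solved by at least one solver and using `np.inf` as the cost of a failed run (so its ratio exceeds any finite factor $t$, playing the role of $r_M$):

```python
import numpy as np

def performance_profile(T):
    """Dolan-More performance profile. T[p, s] = cost t_{p,s} of solver s on
    problem p (np.inf if solver s failed on problem p). Returns rho(s, t),
    the fraction of problems on which solver s is within factor t of the best."""
    best = T.min(axis=1, keepdims=True)   # best cost per problem: min_s t_{p,s}
    R = T / best                          # performance ratios r_{p,s}
    def rho(s, t):
        return float(np.mean(R[:, s] <= t))
    return rho
```

For instance, `rho(s, 1.0)` is the fraction of problems on which solver `s` is (tied for) the best, i.e. the value $\rho_s(1)$ discussed above, and plotting `rho(s, t)` over a range of $t$ reproduces the curves in Figures 1-3.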
From these three figures it is clear that the given method has the most wins (i.e., the highest probability of being the optimal solver). In summary, the presented numerical results reveal that the new method has potential advantages compared with the normal PRP method and the modified PRP method [14].

5. Conclusions

This paper gives a new line search method for unconstrained optimization. The global and R-linear convergence are established under weaker assumptions on the search direction $d_k$. In particular, the direction $d_k$ satisfies the sufficient descent condition (5) and the condition (9) without carrying out any line search technique, whereas these two conditions are often obtained only by assumption [14,27,30]. The comparison of the numerical results shows that the search direction of the new algorithm is a good search direction at every iteration.

REFERENCES

[1] G. Yuan and X. Lu, "A New Line Search Method with Trust Region for Unconstrained Optimization," Communications on Applied Nonlinear Analysis, Vol. 15, No. 1, 2008, pp. 35-49.
[2] G. Yuan, X. Lu, and Z. Wei, "New Two-Point Stepsize Gradient Methods for Solving Unconstrained Optimization Problems," Natural Science Journal of Xiangtan University, Vol. 29, No. 1, 2007, pp. 13-15.
[3] G. Yuan and Z. Wei, "New Line Search Methods for Unconstrained Optimization," Journal of the Korean Statistical Society, Vol. 38, No. 1, 2009, pp. 29-39.
[4] Y. Yuan and W. Sun, "Theory and Methods of Optimization," Science Press of China, Beijing, 1999.
[5] D. G. Luenberger, "Linear and Nonlinear Programming," 2nd Edition, Addison-Wesley, Reading, MA, 1989.
[6] J. Nocedal and S. J. Wright, "Numerical Optimization," Springer, Berlin, Heidelberg, New York, 1999.
[7] Z. Wei, G. Li, and L. Qi, "New Quasi-Newton Methods for Unconstrained Optimization Problems," Applied Mathematics and Computation, Vol. 175, No. 1, 2006, pp. 1156-1188.
[8] Z. Wei, G. Yu, G. Yuan, and Z.
Lian, "The Superlinear Convergence of a Modified BFGS-type Method for Unconstrained Optimization," Computational Optimization and Applications, Vol. 29, No. 3, 2004, pp. 315-332.
[9] G. Yuan and Z. Wei, "The Superlinear Convergence Analysis of a Nonmonotone BFGS Algorithm on Convex Objective Functions," Acta Mathematica Sinica, English Series, Vol. 24, No. 1, 2008, pp. 35-42.
[10] G. Yuan and Z. Wei, "Convergence Analysis of a Modified BFGS Method on Convex Minimizations," Computational Optimization and Applications, 2008.
[11] Y. Dai and Y. Yuan, "A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property," SIAM Journal on Optimization, Vol. 10, No. 1, 2000, pp. 177-182.
[12] Z. Wei, G. Li, and L. Qi, "New Nonlinear Conjugate Gradient Formulas for Large-Scale Unconstrained Optimization Problems," Applied Mathematics and Computation, Vol. 179, No. 2, 2006, pp. 407-430.
[13] G. Yuan and X. Lu, "A Modified PRP Conjugate Gradient Method," Annals of Operations Research, Vol. 166, No. 1, 2009, pp. 73-90.
[14] J. C. Gilbert and J. Nocedal, "Global Convergence Properties of Conjugate Gradient Methods for Optimization," SIAM Journal on Optimization, Vol. 2, No. 1, 1992, pp. 21-42.
[15] Y. Dai and Y. Yuan, "Nonlinear Conjugate Gradient Methods," Shanghai Science and Technology Press, 2000.
[16] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, Vol. 3, No. 16, 1969, pp. 35-43.
[17] M. J. D. Powell, "Nonconvex Minimization Calculations and the Conjugate Gradient Method," Lecture Notes in Mathematics, Vol. 1066, Springer-Verlag, Berlin, 1984, pp. 122-141.
[18] L. Grippo and S. Lucidi, "A Globally Convergent Version of the Polak-Ribière Gradient Method," Mathematical Programming, Vol. 78, No. 3, 1997, pp. 375-391.
[19] W. W. Hager and H. Zhang, "A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search," SIAM Journal on Optimization, Vol. 16, No. 1, 2005, pp. 170-192.
[20] Z.
Wei, S. Yao, and L. Liu, "The Convergence Properties of Some New Conjugate Gradient Methods," Applied Mathematics and Computation, Vol. 183, No. 2, 2006, pp. 1341-1350.
[21] G. H. Yu, "Nonlinear Self-Scaling Conjugate Gradient Methods for Large-Scale Optimization Problems," Doctoral Thesis, Sun Yat-Sen University, 2007.
[22] G. Yuan, "Modified Nonlinear Conjugate Gradient Methods with Sufficient Descent Property for Large-Scale Optimization Problems," Optimization Letters, Vol. 3, No. 1, 2009, pp. 11-21.
[23] G. Yuan, "A Conjugate Gradient Method for Unconstrained Optimization Problems," International Journal of Mathematics and Mathematical Sciences, Vol. 2009, 2009, pp. 1-14.
[24] G. Yuan, X. Lu, and Z. Wei, "A Conjugate Gradient Method with Descent Direction for Unconstrained Optimization," Journal of Computational and Applied Mathematics, Vol. 233, No. 2, 2009, pp. 519-530.
[25] L. Zhang, W. Zhou, and D. Li, "A Descent Modified Polak-Ribière-Polyak Conjugate Gradient Method and its Global Convergence," IMA Journal of Numerical Analysis, Vol. 26, No. 4, 2006, pp. 629-649.
[26] Y. Liu and C. Storey, "Efficient Generalized Conjugate Gradient Algorithms, Part 1: Theory," Journal of Optimization Theory and Applications, Vol. 69, No. 1, 1992, pp. 17-41.
[27] Z. J. Shi, "Convergence of Line Search Methods for Unconstrained Optimization," Applied Mathematics and Computation, Vol. 157, No. 2, 2004, pp. 393-405.
[28] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing Unconstrained Optimization Software," ACM Transactions on Mathematical Software, Vol. 7, No. 1, 1981, pp. 17-41.
[29] E. D. Dolan and J. J. Moré, "Benchmarking Optimization Software with Performance Profiles," Mathematical Programming, Vol. 91, No. 2, 2002, pp. 201-213.
[30] Y. Dai and Q. Ni, "Testing Different Conjugate Gradient Methods for Large-Scale Unconstrained Optimization," Journal of Computational Mathematics, Vol. 21, No. 3, 2003, pp.
311-320.