Intelligent Control and Automation, 2010, 1, 28-35
doi:10.4236/ica.2010.11004 Published Online August 2010 (http://www.SciRP.org/journal/ica)
Copyright © 2010 SciRes. ICA

A Nonmonotone Line Search Method for Symmetric Nonlinear Equations*

Gonglin Yuan^1, Laisheng Yu^2
^1 College of Mathematics and Information Science, Guangxi University, Nanning, China
^2 Student Affairs Office, Guangxi University, Nanning, China
E-mail: glyuan@gxu.edu.cn
Received February 7, 2010; revised March 21, 2010; accepted July 7, 2010

Abstract

In this paper, we propose a new method based on a nonmonotone line search technique for solving symmetric nonlinear equations. The method ensures that the search direction is a descent direction for the norm function. Under suitable conditions, the global convergence of the method is proved. Numerical results show that the presented method is practicable for the test problems.

Keywords: Nonmonotone Line Search, Symmetric Equations, Global Convergence

1. Introduction

Consider the following system of nonlinear equations:

  $g(x) = 0, \quad x \in R^n,$   (1)

where $g : R^n \to R^n$ is continuously differentiable and its Jacobian $\nabla g(x)$ is symmetric for all $x \in R^n$. Such problems arise from unconstrained optimization problems, saddle point problems, and equality constrained problems (for details, see [1]). Let $\theta(x)$ be the norm function defined by $\theta(x) = \frac{1}{2}\|g(x)\|^2$. Then the nonlinear equation problem (1) is equivalent to the following global optimization problem:

  $\min \theta(x), \quad x \in R^n.$   (2)

The following iterative formula is often used to solve (1) and (2):

  $x_{k+1} = x_k + \alpha_k d_k,$

where $\alpha_k$ is a steplength and $d_k$ is a search direction. To begin with, we briefly review some existing line search methods for (1) and (2). First, we recall some techniques for determining $\alpha_k$.
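Before doing so, we make the equivalence between (1) and (2) concrete with a small sketch. It evaluates the norm function $\theta(x) = \frac{1}{2}\|g(x)\|^2$ and its gradient $\nabla\theta(x) = \nabla g(x) g(x)$ (this form uses the symmetry of the Jacobian). The toy map $g(x) = Ax + \sin(x)$ with a symmetric matrix $A$ is our own illustrative choice, not a test problem from this paper.

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # symmetric, so the Jacobian of g below is symmetric

def g(x):
    # illustrative map with symmetric Jacobian: g(x) = A x + sin(x)
    return A @ x + np.sin(x)

def jac(x):
    # Jacobian of g: A + diag(cos(x)), symmetric because A is
    return A + np.diag(np.cos(x))

def theta(x):
    # norm function of (2): theta(x) = 0.5 * ||g(x)||^2
    gx = g(x)
    return 0.5 * gx @ gx

def grad_theta(x):
    # grad theta(x) = J(x)^T g(x), which equals J(x) g(x) by symmetry
    return jac(x) @ g(x)
```

A root of $g$ is exactly a global minimizer of $\theta$, which is the equivalence between (1) and (2) used throughout.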
Li and Fukushima [1] proposed an approximately monotone line search technique to obtain the stepsize $\lambda_k$ satisfying

  $\|g(x_k + \lambda_k d_k)\|^2 - \|g_k\|^2 \le -\sigma_1 \|\lambda_k d_k\|^2 - \sigma_2 \|\lambda_k g_k\|^2 + \varepsilon_k \|g_k\|^2,$   (3)

where $\sigma_1 > 0$ and $\sigma_2 > 0$ are positive constants, $\lambda_k = r^{i_k}$ with $r \in (0,1)$, $i_k$ is the smallest nonnegative integer $i$ such that (3) holds, and the positive sequence $\{\varepsilon_k\}$ satisfies $\sum_{k=0}^{\infty} \varepsilon_k < \infty$. Combining the line search (3) with a special BFGS update formula, they obtained good results (see [1]). Inspired by their idea, Wei [2] and Yuan [3] made further studies. Brown and Saad [4] proposed the following line search method to obtain the stepsize $\alpha_k$:

  $\theta(x_k + \alpha_k d_k) - \theta(x_k) \le \sigma \alpha_k \nabla\theta(x_k)^T d_k,$   (4)

where $\sigma \in (0,1)$. Based on this technique, Zhu [5] gave the nonmonotone line search technique

  $\theta(x_k + \alpha_k d_k) - \theta(x_{l(k)}) \le \sigma \alpha_k \nabla\theta(x_k)^T d_k,$   (5)

where $\theta(x_{l(k)}) = \max_{0 \le j \le n(k)} \theta(x_{k-j})$, $n(k) = \min\{M, k\}$, and $M \ge 0$ is an integer constant. From these two techniques (4) and (5), it is easy to see that the Jacobian matrix $\nabla g(x_k)$ must be computed at every iteration, since $\nabla\theta(x_k) = \nabla g(x_k) g(x_k)$; this increases the workload, especially for large-scale problems or when this matrix is expensive to evaluate. Considering these points, we [6] presented a new backtracking inexact technique to obtain the stepsize $\alpha_k$:

  $\|g(x_k + \alpha_k d_k)\|^2 \le \|g(x_k)\|^2 + \delta \alpha_k^2 g(x_k)^T d_k,$   (6)

where $\delta \in (0,1)$.

Second, we review some techniques for determining $d_k$. One of the most effective methods is Newton's method. It normally requires the fewest function evaluations and is very good at handling ill-conditioning. However, its efficiency largely depends on the possibility of efficiently solving the linear system that defines the search direction $d_k$ at each iteration:

  $\nabla g(x_k) d_k = -g(x_k).$   (7)

Moreover, computing the exact solution of the system (7) can be too burdensome, and it is not necessary when $x_k$ is far from a solution [7].

*This work is supported by China NSF grants 10761001, the Scientific Research Foundation of Guangxi University (Grant No. X081082), and Guangxi SF grant 0991028.
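The backtracking rule (6) can be sketched as follows. This is our own minimal illustration (the function name `backtrack`, the default constants, and the iteration cap are assumptions, not from the paper); it shrinks the trial step by the factor $r$ until the descent test of (6) holds.

```python
import numpy as np

def backtrack(g_fun, x, d, r=0.5, delta=0.001, max_backtracks=50):
    """Find alpha = r**i, smallest i >= 0, with
       ||g(x + alpha d)||^2 <= ||g(x)||^2 + delta * alpha**2 * g(x)^T d."""
    gx = g_fun(x)
    base = gx @ gx          # ||g(x)||^2
    gtd = gx @ d            # g(x)^T d, negative for a descent direction
    alpha = 1.0
    for _ in range(max_backtracks):
        g_new = g_fun(x + alpha * d)
        if g_new @ g_new <= base + delta * alpha**2 * gtd:
            return alpha
        alpha *= r
    return alpha  # safeguard: give up after max_backtracks reductions
```

For example, with the linear map $g(x) = x$ and the descent direction $d = -x$, the full step $\alpha = 1$ is accepted immediately. Note that, unlike (4) and (5), no Jacobian evaluation is needed inside the loop.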
Inexact Newton methods [5,7] represent the basic approach underlying most Newton-type large-scale algorithms. At each iteration, the current estimate of the solution is updated by approximately solving the linear system (7) with an iterative algorithm; the inner iteration is typically "truncated" before an exact solution of the linear system is obtained. Griewank [8] first established the global convergence of Broyden's rank-one method for nonlinear equations. At present, many algorithms have been proposed for solving the two problems (1) and (2) (see [9-15]). The famous BFGS method is one of the most effective quasi-Newton methods; there, $d_k$ is the solution of the system of linear equations

  $B_k d_k + g_k = 0,$   (8)

where $B_k$ is generated by the following BFGS update formula:

  $B_{k+1} = B_k - \dfrac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \dfrac{y_k y_k^T}{y_k^T s_k},$   (9)

with $s_k = x_{k+1} - x_k$ and $y_k = g(x_{k+1}) - g(x_k)$. Some recent results on nonlinear equations can be found in [6,17-20]. Zhang and Hager [21] presented a nonmonotone line search technique for the unconstrained optimization problem $\min_{x \in R^n} f(x)$, where the stepsize $\alpha_k$ is required to satisfy

  $f(x_k + \alpha_k d_k) \le C_k + \delta \alpha_k \nabla f(x_k)^T d_k,$
  $\nabla f(x_k + \alpha_k d_k)^T d_k \ge \sigma \nabla f(x_k)^T d_k,$

with $C_0 = f(x_0)$, $Q_0 = 1$,

  $Q_{k+1} = \eta_k Q_k + 1, \quad C_{k+1} = \dfrac{\eta_k Q_k C_k + f(x_{k+1})}{Q_{k+1}},$

$\eta_k \in [\eta_{\min}, \eta_{\max}]$, and $0 \le \eta_{\min} \le \eta_{\max} \le 1$. They proved global convergence for nonconvex smooth functions and R-linear convergence for strongly convex functions. Numerical results show that this method is more effective than other similar methods. Motivated by their technique, we propose a new nonmonotone line search technique which ensures a descent search direction for the norm function when solving the symmetric nonlinear Equations (1), and we prove the global convergence of our method. The numerical results are reported as well. Here and throughout this paper, $\|\cdot\|$ denotes the Euclidean norm of a vector or its induced matrix norm. This paper is organized as follows.
In the next section, we give our algorithm for (2). The global convergence and the numerical results are established in Section 3 and Section 4, respectively.

2. The Algorithm

Precisely, our algorithm is stated as follows.

Algorithm 1.

Step 0: Choose an initial point $x_0 \in R^n$, an initial symmetric positive definite matrix $B_0 \in R^{n \times n}$, and constants $r, \delta_1 \in (0,1)$; set $J_0 = \|g_0\|^2$, $E_0 = 1$, and $k := 0$.

Step 1: If $\|g_k\| = 0$, stop. Otherwise, solve the following linear Equations (10) to obtain $d_k$ and go to Step 2:

  $B_k d_k + g_k = 0.$   (10)

Step 2: Let $i_k$ be the smallest nonnegative integer $i$ such that

  $\|g(x_k + \alpha d_k)\|^2 \le J_k + \delta_1 \alpha^2 g(x_k)^T d_k$   (11)

holds for $\alpha = r^i$. Let $\alpha_k = r^{i_k}$.

Step 3: Let $x_{k+1} = x_k + \alpha_k d_k$, $s_k = x_{k+1} - x_k$, and $y_k = g(x_{k+1}) - g(x_k)$. If $y_k^T s_k > 0$, update $B_k$ by the BFGS formula (9) to generate $B_{k+1}$; otherwise, let $B_{k+1} = B_k$.

Step 4: Choose $\eta_k \in [0,1]$ and set

  $E_{k+1} = \eta_k E_k + 1, \quad J_{k+1} = \dfrac{\eta_k E_k J_k + \|g(x_{k+1})\|^2}{E_{k+1}}.$   (12)

Step 5: Let $k := k + 1$ and go to Step 1.

Remark 1. 1) By the technique of Step 3 in the algorithm (see [1]), we deduce that $B_{k+1}$ inherits the symmetry and positive definiteness of $B_k$. Then it is not difficult to get $d_k^T g_k < 0$.

2) It is easy to see that $J_{k+1}$ is a convex combination of $J_k$ and $\|g(x_{k+1})\|^2$. Since $J_0 = \|g_0\|^2$, it follows that $J_k$ is a convex combination of the function values $\|g_0\|^2, \|g_1\|^2, \ldots, \|g_k\|^2$. The choice of $\eta_k$ controls the degree of nonmonotonicity: if $\eta_k = 0$ for each $k$, the line search is the usual monotone line search; if $\eta_k = 1$ for each $k$, then $J_k = V_k$, where

  $V_k = \dfrac{1}{k+1} \sum_{i=0}^{k} \|g_i\|^2$

is the average function value.

3) By (9), we have $B_{k+1} s_k = y_k = g_{k+1} - g_k \approx \nabla g(x_{k+1}) s_k$, which means that $B_{k+1}$ approximates $\nabla g(x_{k+1})$ along $s_k$.

3. The Global Convergence Analysis of Algorithm 1

In this section, we establish the global convergence of Algorithm 1. The level set $\Omega$ is defined by

  $\Omega = \{x \in R^n \mid \|g(x)\| \le \|g(x_0)\|\}.$

Assumption A. The Jacobian of $g$ is symmetric, and $\nabla g$ is Lipschitz continuous on $\Omega$, i.e., there exists a constant $M > 0$ such that

  $\|\nabla g(x) - \nabla g(y)\| \le M \|x - y\|$   (13)

holds for all $x, y \in \Omega$.
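Before proceeding with the analysis, Steps 0-5 of Algorithm 1 above can be sketched in code. This is a hedged illustration rather than the authors' implementation: the function name, the default constants, the fixed choice $\eta_k \equiv$ `eta`, the finite stopping tolerance in Step 1, and the tiny-step safeguard in the backtracking loop are all our own assumptions.

```python
import numpy as np

def nonmonotone_bfgs(g_fun, x0, r=0.5, delta1=0.001, eta=0.8,
                     tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                   # Step 0: B_0 symmetric positive definite
    gx = g_fun(x)
    J, E = gx @ gx, 1.0                  # J_0 = ||g_0||^2, E_0 = 1
    for _ in range(max_iter):
        if np.linalg.norm(gx) <= tol:    # Step 1 (stop test, relaxed from ||g|| = 0)
            break
        d = np.linalg.solve(B, -gx)      # Step 1: B_k d_k + g_k = 0
        gtd = gx @ d
        alpha = 1.0                      # Step 2: backtrack on (11)
        while True:
            g_new = g_fun(x + alpha * d)
            if g_new @ g_new <= J + delta1 * alpha**2 * gtd or alpha < 1e-12:
                break
            alpha *= r
        s = alpha * d                    # Step 3: BFGS update (9) if y^T s > 0
        y = g_new - gx
        if y @ s > 0:
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        x, gx = x + s, g_new
        E_new = eta * E + 1.0            # Step 4: nonmonotone reference (12)
        J = (eta * E * J + gx @ gx) / E_new
        E = E_new
    return x
```

With `eta = 0` the reference value $J_k$ reduces to $\|g_k\|^2$ and the search is monotone; with `eta = 1`, $J_k$ becomes the running average $V_k$, matching Remark 1.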
Since $B_k$ approximates $\nabla g(x_k)$ along the direction $s_k$, we can make the following assumption.

Assumption B. $B_k$ is a good approximation to $\nabla g(x_k)$, i.e.,

  $\|(\nabla g(x_k) - B_k) d_k\| \le \epsilon \|g_k\|,$   (14)

where $\epsilon \in (0,1)$ is a small quantity.

Assumption C. There exist positive constants $b_1$ and $b_2$ such that

  $g_k^T d_k \le -b_1 \|g_k\|^2$   (15)

and

  $\|d_k\| \le b_2 \|g_k\|$   (16)

for all sufficiently large $k$. By Assumption C and the Cauchy-Schwarz inequality, we have

  $b_1 \|g_k\| \le \|d_k\| \le b_2 \|g_k\|.$   (17)

Lemma 3.1. Let Assumption B hold and let $\{\alpha_k, d_k, x_{k+1}, g_{k+1}\}$ be generated by Algorithm 1. Then $d_k$ is a descent direction for $\theta(x)$ at $x_k$, i.e.,

  $\nabla\theta(x_k)^T d_k < 0.$   (18)

Proof. By (10), we have

  $\nabla\theta(x_k)^T d_k = g_k^T \nabla g(x_k) d_k = g_k^T (\nabla g(x_k) - B_k) d_k + g_k^T B_k d_k = g_k^T (\nabla g(x_k) - B_k) d_k - \|g_k\|^2.$   (19)

Using (14) and taking norms in the right-hand side of (19), we get

  $\nabla\theta(x_k)^T d_k \le \|g_k\| \|(\nabla g(x_k) - B_k) d_k\| - \|g_k\|^2 \le -(1 - \epsilon) \|g_k\|^2.$   (20)

Therefore, since $\epsilon \in (0,1)$ and $g_k \ne 0$, we get the lemma.

By the above lemma, the norm function $\theta(x)$ descends along $d_k$, and hence $\|g_{k+1}\| \le \|g_k\|$ holds.

Lemma 3.2. Let Assumption B hold and let $\{\alpha_k, d_k, x_{k+1}, g_{k+1}\}$ be generated by Algorithm 1. Then $\{x_k\} \subset \Omega$. Moreover, $\{\|g_k\|\}$ converges.

Proof. By Lemma 3.1, we get $\|g_{k+1}\| \le \|g_k\|$, so the nonnegative sequence $\{\|g_k\|\}$ is nonincreasing; we conclude that $\{\|g_k\|\}$ converges. Moreover, for all $k$ we have

  $\|g_k\| \le \|g_{k-1}\| \le \cdots \le \|g_0\|,$

which means that $\{x_k\} \subset \Omega$.

The next lemma shows that, for any choice of $\eta_k \in [0,1]$, $J_k$ lies between $\|g_k\|^2$ and $V_k$.

Lemma 3.3. Let $\{\alpha_k, d_k, x_{k+1}, g_{k+1}\}$ be generated by Algorithm 1. Then, for each $k$,

  $\|g_k\|^2 \le J_k \le V_k \quad\text{and}\quad J_{k+1} \le J_k.$

Proof. We first prove the lower bound for $J_k$ by induction. For $k = 0$, the initialization $J_0 = \|g_0\|^2$ gives the result. Now assume that $\|g_i\|^2 \le J_i$ holds for all $0 \le i \le k$. By (12) and $\|g_{i+1}\|^2 \le \|g_i\|^2$, we have

  $J_{i+1} = \dfrac{\eta_i E_i J_i + \|g_{i+1}\|^2}{E_{i+1}} \ge \dfrac{\eta_i E_i \|g_{i+1}\|^2 + \|g_{i+1}\|^2}{E_{i+1}} = \|g_{i+1}\|^2,$   (21)

where $E_{i+1} = \eta_i E_i + 1$. Now we prove that $J_{k+1} \le J_k$. By (12) again, and using $\|g_{k+1}\|^2 \le \|g_k\|^2 \le J_k$, we obtain

  $J_{k+1} = \dfrac{\eta_k E_k J_k + \|g_{k+1}\|^2}{E_{k+1}} \le \dfrac{\eta_k E_k J_k + J_k}{E_{k+1}} = J_k,$

which means that $J_{k+1} \le J_k$ is satisfied for all $k$.
Then we have

  $\|g_{k+1}\|^2 \le J_{k+1} \le J_k.$   (22)

Let $L_k : R_+ \to R$ be defined by

  $L_k(t) = \dfrac{t J_k + \|g_{k+1}\|^2}{t + 1};$

then we can get

  $L_k'(t) = \dfrac{J_k - \|g_{k+1}\|^2}{(t + 1)^2}.$

By $J_k \ge \|g_{k+1}\|^2$, we obtain $L_k'(t) \ge 0$ for all $t \ge 0$. Then $L_k$ is nondecreasing, and $\|g_{k+1}\|^2 = L_k(0) \le L_k(t)$ for all $t \ge 0$. Now we prove the upper bound $J_k \le V_k$ by induction. For $k = 0$, the initialization $J_0 = \|g_0\|^2 = V_0$ gives the result. Now assume that $J_j \le V_j$ holds for all $0 \le j \le k$. Using $E_0 = 1$, (12), and $\eta_k \in [0,1]$, we obtain

  $E_{j+1} = 1 + \sum_{i=0}^{j} \prod_{p=i}^{j} \eta_p \le j + 2.$   (23)

Since $L_k$ is monotone nondecreasing and $\eta_k E_k = E_{k+1} - 1 \le k + 1$ by (23), this implies that

  $J_{k+1} = L_k(\eta_k E_k) = L_k(E_{k+1} - 1) \le L_k(k + 1).$   (24)

Using the induction hypothesis, we have

  $L_k(k + 1) = \dfrac{(k + 1) J_k + \|g_{k+1}\|^2}{k + 2} \le \dfrac{(k + 1) V_k + \|g_{k+1}\|^2}{k + 2} = V_{k+1}.$   (25)

Combining (24) and (25) yields the upper bound on $J_{k+1}$. Therefore, we get the result of this lemma.

The following lemma implies that the line search technique is well-defined.

Lemma 3.4. Let Assumptions A, B, and C hold. Then Algorithm 1 produces an iterate $x_{k+1} = x_k + \alpha_k d_k$ in a finite number of backtracking steps.

Proof. From Lemma 3.8 in [6] we have that, in a finite number of backtracking steps, $\alpha_k$ must satisfy

  $\|g(x_k + \alpha_k d_k)\|^2 \le \|g_k\|^2 + \delta \alpha_k^2 \nabla\theta(x_k)^T d_k,$   (26)

where $\delta \in (0,1)$. By (20), we get

  $\nabla\theta(x_k)^T d_k \le -(1 - \epsilon) \|g_k\|^2,$   (27)

and by (16) and the Cauchy-Schwarz inequality, $g_k^T d_k \ge -\|g_k\| \|d_k\| \ge -b_2 \|g_k\|^2$, so that

  $\nabla\theta(x_k)^T d_k \le -(1 - \epsilon) \|g_k\|^2 \le \dfrac{1 - \epsilon}{b_2} g_k^T d_k.$   (28)

So let $\delta_1 \in \big(0, \min\{1, \delta(1 - \epsilon)/b_2\}\big)$. By Lemma 3.3, we know $\|g_k\|^2 \le J_k$. Therefore, (26) implies the line search condition (11). The proof is complete.

Lemma 3.5. Let Assumptions A, B, and C hold. Then there exists a constant $b_0 > 0$ such that, for all sufficiently large $k$,

  $\alpha_k \ge b_0 > 0.$   (29)

Proof. Assume that the stepsize $\alpha_k$ obtained in Step 2 satisfies (11) with $\alpha_k < 1$ (the case $\alpha_k = 1$ gives (29) trivially).
Then 1kk does not satisfy (11), i.e., 221Tkkkkk kkgxdJ gxd By 2kkgJ, we get 22221kkk kTkkkkk kkgx dggxdJ gxd  Which implies that  2221Tkkkkkk kgxd gxdg  (30) By Taylor formula, (19), (20), and (17), we get 2222222222222111Tkkk kkkkkkk kkkkkkkkgxdgg gdOd gOddO db   (31) Using (15), (17), (30), and (31) we obtain 2212122222121222212222222221121112112111kkkkkkTkkkkkkkk kkkkkdbbddbbdgdbgx dgdO db       (32) G. L. YUAN ET AL. Copyright © 2010 SciRes. ICA 32 which implies when k sufficiently large, 121211.21kbbb Let 10212110, .21bbbb The proof is complete. In the following, we give the global convergence theo-rem. Theorem 3.1. Let 11{,,,}kkkkdx g be generated by Algorithm 1, Assumption A, B, and C hold, and 2kg be bounded from below. Then lim inf0kkg  (33) Moreover, if 21, then lim 0kkg (34) Therefore, every convergent subsequence approaches to a point *,x where *()0.gx Proof. By (11), (15), (16), and (19), we have 2211 112011TkkkkkkkkkkgJdgJbgJbbg   (35) Let 101.bb Combining (12) and the upper bound of (35), we get 221111211kkkkkkkkkkkkkkkEJgEJ JgJEEgJE (36) Since 2kg is bounded from below and 2kkgJ for all k, we can conclude that kJ is bounded from be-low. Then, using (36), we obtain 201kkkgE (37) By (23), we get 12kEk (38) If 2kgwere bounded away from 0, then (37) would violate (38). Hence, (33) holds. If 21, by (23), we have 1120002021111jkkjkkijjikjE  (39) Then, (37) implies (34). The proof is complete. 4. Numerical Results In this section, we report the results of some numerical experiments with the proposed method. Problem 1. 
The discretized two-point boundary value problem, the same as the problem in [22]:

  $g(x) = Ax + \dfrac{1}{(n+1)^2} F(x) = 0,$

where $A$ is the $n \times n$ tridiagonal matrix with 4 on the main diagonal and 1 on the sub- and superdiagonals,

  $A = \begin{pmatrix} 4 & 1 & & \\ 1 & 4 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 4 \end{pmatrix},$

and $F(x) = (F_1(x), F_2(x), \ldots, F_n(x))^T$ with $F_i(x) = \sin x_i - 1$, $i = 1, 2, \ldots, n$.

Problem 2. The unconstrained optimization problem $\min f(x)$, $x \in R^n$, with the Engval function [23] defined by

  $f(x) = \sum_{i=1}^{n-1} \big[ (x_i^2 + x_{i+1}^2)^2 - 4x_i + 3 \big].$

The related symmetric nonlinear equation is

  $g(x) = \dfrac{1}{4} \nabla f(x) = 0,$

where $g(x) = (g_1(x), g_2(x), \ldots, g_n(x))^T$ with

  $g_1(x) = x_1 (x_1^2 + x_2^2) - 1,$
  $g_i(x) = x_i (x_{i-1}^2 + 2x_i^2 + x_{i+1}^2) - 1, \quad i = 2, 3, \ldots, n-1,$
  $g_n(x) = x_n (x_{n-1}^2 + x_n^2).$

In the experiments, the parameters in Algorithm 1 were chosen as $r = 0.1$, $\delta_1 = 0.001$, $\eta_k \equiv 0.8$, and $B_0$ the unit matrix. The program was coded in MATLAB 6.1. We stopped the iteration when the condition $\|g(x)\|^2 \le 10^{-6}$ was satisfied. Tables 1 and 2 show the performance of the method on Problem 1, and Tables 3 and 4 show its performance on Problem 2. The columns of the tables have the following meaning:

Dim: the dimension of the problem.
NI: the total number of iterations.
NG: the number of function evaluations.
GG: the final value of $\|g(x)\|^2$ when the algorithm stops.

Test results for Problem 1

Table 1. Small-scale.
(Entries are NI/NG/GG.)

x0                 n = 10               n = 50               n = 100
(4, ..., 4)        16/22/3.714814e-7    44/47/1.388672e-7    68/71/5.905592e-7
(20, ..., 20)      16/20/2.650667e-7    44/47/6.929395e-7    70/73/8.759459e-7
(100, ..., 100)    16/20/4.014933e-7    46/49/3.713174e-8    72/75/3.125373e-7
(-4, ..., -4)      16/22/3.723721e-7    44/47/1.388793e-7    68/71/5.905724e-7
(-20, ..., -20)    16/20/2.643713e-7    44/47/6.929516e-7    70/73/8.759500e-7
(-100, ..., -100)  16/20/4.013770e-7    46/49/3.726373e-8    72/75/3.125382e-7
(4, 0, ...)        21/23/1.297275e-9    63/65/8.204623e-7    65/67/6.046233e-7
(20, 0, ...)       21/23/5.577021e-9    67/69/9.997988e-7    69/71/7.845951e-7
(100, 0, ...)      21/23/9.029402e-9    69/71/4.511023e-7    71/73/6.996085e-7
(-4, 0, ...)       21/23/1.208563e-9    63/65/8.204744e-7    65/67/6.046254e-7
(-20, 0, ...)      21/23/4.707707e-9    67/69/9.997996e-7    69/71/7.845962e-7
(-100, 0, ...)     21/23/1.061736e-8    69/71/4.511007e-7    71/73/6.996092e-7

Table 2. Large-scale. (Entries are NI/NG/GG.)

x0                 n = 300              n = 500              n = 800
(4, ..., 4)        70/73/7.844778e-7    70/73/8.547195e-7    68/70/6.505423e-7
(20, ..., 20)      76/79/7.741702e-7    76/79/8.435874e-7    74/76/6.414077e-7
(30, ..., 30)      78/81/6.759628e-7    78/81/7.366072e-7    74/76/9.621120e-7
(-4, ..., -4)      70/73/7.844800e-7    70/73/8.547204e-7    68/70/6.505425e-7
(-20, ..., -20)    76/79/7.741706e-7    76/79/8.435876e-7    74/76/6.414078e-7
(-30, ..., -30)    78/81/6.759631e-7    78/81/7.366073e-7    74/76/9.621120e-7
(4, 0, ...)        67/69/5.896038e-7    67/69/7.145027e-7    69/71/6.188110e-7
(20, 0, ...)       71/73/9.997625e-7    73/75/7.057076e-7    75/77/6.115054e-7
(30, 0, ...)       73/75/8.731533e-7    75/77/6.163480e-7    75/77/9.172559e-7
(-4, 0, ...)       67/69/5.896038e-7    67/69/7.145024e-7    69/71/6.188106e-7
(-20, 0, ...)      71/73/9.997625e-7    73/75/7.057075e-7    75/77/6.115053e-7
(-30, 0, ...)      73/75/8.731533e-7    75/77/6.163479e-7    75/77/9.172558e-7
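Before turning to the Problem 2 results in Tables 3 and 4 below, the Problem 2 residual $g(x) = \frac{1}{4}\nabla f(x)$ can be written out and checked against a finite-difference gradient of the Engval function. This is our own sketch of the componentwise formulas given with the problem definitions (the function names are assumptions).

```python
import numpy as np

def engval(x):
    # Engval function: f(x) = sum_{i=1}^{n-1} ((x_i^2 + x_{i+1}^2)^2 - 4 x_i + 3)
    return np.sum((x[:-1]**2 + x[1:]**2)**2 - 4.0 * x[:-1] + 3.0)

def g_engval(x):
    # Problem 2 residual: g(x) = grad f(x) / 4, written componentwise
    n = x.size
    g = np.empty(n)
    g[0] = x[0] * (x[0]**2 + x[1]**2) - 1.0
    for i in range(1, n - 1):
        g[i] = x[i] * (x[i-1]**2 + 2.0 * x[i]**2 + x[i+1]**2) - 1.0
    g[n-1] = x[n-1] * (x[n-2]**2 + x[n-1]**2)
    return g
```

Since $g$ is a scaled gradient, its Jacobian is a scaled Hessian and hence symmetric, which is why this problem fits the setting of (1).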
Test results for Problem 2

Table 3. Small-scale. (Entries are NI/NG/GG.)

x0             n = 10               n = 50                n = 100
(1, ..., 1)    20/22/3.007469e-7    36/38/6.966974e-7     36/38/7.207203e-7
(3, ..., 3)    38/47/6.088293e-7    76/88/6.845101e-7     6/109/4.173166e-7
(4, ..., 4)    44/48/4.898591e-7    99/114/8.556270e-7    0/87/7.911692e-7
(1, 0, ...)    0/23/3.452856e-7     6/39/7.812438e-7      6/39/8.220367e-7
(3, 0, ...)    5/41/5.833715e-7     9/77/4.466497e-7      9/92/8.640158e-7
(4, 0, ...)    29/34/4.338894e-7    67/75/4.681269e-7     69/76/8.515673e-7

Table 4. Large-scale. (Entries are NI/NG/GG.)

x0             n = 300              n = 500              n = 800
(1, ..., 1)    40/42/4.452904e-7    44/46/9.611950e-7    41/43/4.510999e-7
(3, ..., 3)    4/106/6.146797e-7    9/171/4.445314e-7    7/185/5.274922e-7
(4, ..., 4)    63/68/4.232021e-7    18/133/7.054347e-7   74/198/4.239839e-7
(1, 0, ...)    0/43/9.348673e-7     2/45/8.280486e-7     1/44/9.502624e-7
(3, 0, ...)    6/72/7.638011e-7     3/79/7.159029e-7     2/77/6.117626e-7
(4, 0, ...)    92/106/9.095927e-7   85/98/4.229872e-7    93/106/8.797380e-7

From the above tables, we can see that the numerical results for the test problems are quite good with the proposed method. The initial points and the dimension do not influence the performance of Algorithm 1 very much, although in our experiments the starting point did influence the results for Problem 2 slightly. In a word, the numerical results are attractive. The method can also be applied to systems of nonlinear equations whose Jacobian is not symmetric.

5. Conclusion

In this paper, we propose a new nonmonotone line search method for symmetric nonlinear equations. The global convergence is proved, and the numerical results show that this technique is promising: the new nonmonotone line search algorithm used fewer function and gradient evaluations, on average, than either the monotone or the traditional nonmonotone scheme. We hope the method will be a topic of further study for symmetric nonlinear equations.

6.
Acknowledgements

We would like to thank the referees for their many valuable suggestions and comments, which improved this paper greatly.

7. References

[1] D. Li and M. Fukushima, "A Globally and Superlinearly Convergent Gauss-Newton-Based BFGS Method for Symmetric Nonlinear Equations," SIAM Journal on Numerical Analysis, Vol. 37, No. 1, 1999, pp. 152-172.
[2] Z. Wei, G. Yuan and Z. Lian, "An Approximate Gauss-Newton-Based BFGS Method for Solving Symmetric Nonlinear Equations," Guangxi Sciences, Vol. 11, No. 2, 2004, pp. 91-99.
[3] G. Yuan and X. Li, "An Approximate Gauss-Newton-Based BFGS Method with Descent Directions for Solving Symmetric Nonlinear Equations," OR Transactions, Vol. 8, No. 4, 2004, pp. 10-26.
[4] P. N. Brown and Y. Saad, "Convergence Theory of Nonlinear Newton-Krylov Algorithms," SIAM Journal on Optimization, Vol. 4, 1994, pp. 297-330.
[5] D. Zhu, "Nonmonotone Backtracking Inexact Quasi-Newton Algorithms for Solving Smooth Nonlinear Equations," Applied Mathematics and Computation, Vol. 161, No. 3, 2005, pp. 875-895.
[6] G. Yuan and X. Lu, "A New Backtracking Inexact BFGS Method for Symmetric Nonlinear Equations," Computers and Mathematics with Applications, Vol. 55, No. 1, 2008, pp. 116-129.
[7] S. G. Nash, "A Survey of Truncated-Newton Methods," Journal of Computational and Applied Mathematics, Vol. 124, No. 1-2, 2000, pp. 45-59.
[8] A. Griewank, "The 'Global' Convergence of Broyden-Like Methods with a Suitable Line Search," Journal of the Australian Mathematical Society, Series B, Vol. 28, No. 1, 1986, pp. 75-92.
[9] A. R. Conn, N. I. M. Gould and P. L. Toint, "Trust Region Methods," Society for Industrial and Applied Mathematics, Philadelphia, 2000.
[10] J. E. Dennis and R. B. Schnabel, "Numerical Methods for Unconstrained Optimization and Nonlinear Equations," Prentice-Hall, Englewood Cliffs, 1983.
[11] K. Levenberg, "A Method for the Solution of Certain Nonlinear Problems in Least Squares," Quarterly of Applied Mathematics, Vol. 2, 1944, pp. 164-166.
[12] D. W.
Marquardt, "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," SIAM Journal on Applied Mathematics, Vol. 11, 1963, pp. 431-441.
[13] J. Nocedal and S. J. Wright, "Numerical Optimization," Springer, New York, 1999.
[14] N. Yamashita and M. Fukushima, "On the Rate of Convergence of the Levenberg-Marquardt Method," Computing, Suppl. 15, 2001, pp. 239-249.
[15] Y. Yuan and W. Sun, "Optimization Theory and Algorithm," Scientific Publisher House, Beijing, 1997.
[16] G. Yuan and X. Li, "A Rank-One Fitting Method for Solving Symmetric Nonlinear Equations," Journal of Applied Functional Analysis, Vol. 5, No. 4, 2010, pp. 389-407.
[17] G. Yuan, X. Lu and Z. Wei, "BFGS Trust-Region Method for Symmetric Nonlinear Equations," Journal of Computational and Applied Mathematics, Vol. 230, No. 1, 2009, pp. 44-58.
[18] G. Yuan, S. Meng and Z. Wei, "A Trust-Region-Based BFGS Method with Line Search Technique for Symmetric Nonlinear Equations," Advances in Operations Research, Vol. 2009, 2009, pp. 1-20.
[19] G. Yuan, Z. Wang and Z. Wei, "A Rank-One Fitting Method with Descent Direction for Solving Symmetric Nonlinear Equations," International Journal of Communications, Network and System Sciences, Vol. 2, No. 6, 2009, pp. 555-561.
[20] G. Yuan, Z. Wei and X. Lu, "A Nonmonotone Trust Region Method for Solving Symmetric Nonlinear Equations," Chinese Quarterly Journal of Mathematics, Vol. 24, No. 4, 2009, pp. 574-584.
[21] H. Zhang and W. W. Hager, "A Nonmonotone Line Search Technique and Its Application to Unconstrained Optimization," SIAM Journal on Optimization, Vol. 14, No. 4, 2004, pp. 1043-1056.
[22] J. M. Ortega and W. C. Rheinboldt, "Iterative Solution of Nonlinear Equations in Several Variables," Academic Press, New York, 1970.
[23] E. Yamakawa and M. Fukushima, "Testing Parallel Variable Transformation," Computational Optimization and Applications, Vol. 13, 1999, pp. 253-274.