H. FAN ET AL.

Copyright © 2011 SciRes. AM


Table 1. Detailed results of the numerical experiments for the MN algorithm.

No.  | x_0       | x_k                                    | ||g_k||                | k
S201 | (8, 9)    | (5.00000000000007, 6.00000000000002)   | 5.338654227543967e−013 | 2
S202 | (15, −2)  | (11.41277974501077, −0.89680520867268) | 7.343412874359107e−007 | 30
S205 | (0, 0)    | (3.00000000072742, 0.49999999510645)   | 2.411787046907220e−007 | 12
S206 | (−1.2, 1) | (1.00000000400789, 1.00000020307759)   | 3.907063255896894e−007 | 5
S311 | (1, 1)    | (−3.77931025686871, −3.28318599743028) | 5.006611497285485e−007 | 6
S314 | (2, 2)    | (1.79540285273286, 1.37785978124895)   | 7.487443339742852e−008 | 6

in which $\sigma_1 \in (0, 1)$, $\sigma_2, \sigma_3 \in (\sigma_1, 1)$, and $\mu$ is any given positive number.

Theorem 3.1. Suppose that Assumption A holds. Let $\{x_k\}$ be generated by (1.1) and (1.2), where $\alpha_k$ satisfies the Wolfe-Powell line search and $\beta_k \ge 0$ has Property A. Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$
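The weak Wolfe-Powell line search invoked in Theorem 3.1 can be sketched as a standard bisection/expansion routine. This is not the paper's own implementation; the parameter names `delta` and `sigma` and their default values are illustrative assumptions.

```python
import numpy as np

def wolfe_powell(f, grad, x, d, delta=1e-4, sigma=0.9, max_iter=50):
    """Search for a steplength alpha satisfying the weak Wolfe-Powell conditions:
        f(x + alpha d) <= f(x) + delta * alpha * g^T d   (sufficient decrease)
        grad(x + alpha d)^T d >= sigma * g^T d           (curvature)
    delta and sigma are illustrative defaults with 0 < delta < sigma < 1."""
    g0d = grad(x) @ d          # directional derivative at x; must be negative
    f0 = f(x)
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + delta * alpha * g0d:
            hi = alpha                       # sufficient decrease failed: step too long
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < sigma * g0d:
            lo = alpha                       # curvature failed: step too short
            alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha                     # both conditions hold
    return alpha                             # fallback after max_iter trials
```

For a descent direction on a smooth function the routine terminates with a steplength satisfying both conditions, which is what the Zoutendijk-type argument in the proof requires.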

Similar to the second part of the discussion for the above general algorithm, we can show that $\beta_k^{VMN}$ has Property A. The algorithm also possesses the sufficient descent property, in which $\beta_k^{MN} \ge 0$ and the descent constant satisfies $0 < c < 1$. The proof is similar to that of Theorem 2.1 in Section 2.
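For reference, the two standard ingredients used in this argument, written in the usual notation (these are the textbook forms; the paper's own constants are fixed in Section 2):

```latex
% Sufficient descent property: there exists a constant c > 0 such that
g_k^{T} d_k \le -c \,\| g_k \|^{2} \quad \text{for all } k \ge 0.
% Zoutendijk condition (holds under Assumption A and the Wolfe-Powell search):
\sum_{k \ge 0} \frac{\left( g_k^{T} d_k \right)^{2}}{\| d_k \|^{2}} < \infty .
```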

4. Numerical Experiments

In this section, we carry out some numerical experiments. The MN algorithm has been tested on several problems from [8]; the results are summarized in Table 1. For each test problem, No. is the number of the test problem in [8], $x_0$ is the initial point, $x_k$ is the final point, $\|g_k\|$ is the gradient norm at the final point, and $k$ is the number of iterations.

Table 1 shows the performance of the MN algorithm in terms of iterations. It is easy to see that the algorithm is efficient on all of the problems: the result for each problem is accurate and is obtained in a small number of iterations.
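The setup of these experiments can be reproduced in outline with a generic nonlinear conjugate gradient loop. Since the MN coefficient is defined earlier in the paper and not restated here, the sketch below substitutes the classical Fletcher-Reeves coefficient $\beta_k^{FR} = \|g_k\|^2 / \|g_{k-1}\|^2$ and a simple backtracking Armijo line search in place of the Wolfe-Powell search; the Rosenbrock function is used because its starting point $(-1.2, 1)$ and minimizer $(1, 1)$ match the values reported for problem S206 in Table 1.

```python
import numpy as np

def rosenbrock(x):
    # Classical Rosenbrock function; its standard start (-1.2, 1) and
    # minimizer (1, 1) match the S206 entries in Table 1.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def cg_fr(f, grad, x0, tol=1e-6, max_iter=5000):
    """Nonlinear CG sketch: Fletcher-Reeves coefficient as a stand-in for the
    MN coefficient, with backtracking Armijo line search (not Wolfe-Powell)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k                      # converged: final point, iterations
        alpha, gd, fx = 1.0, g @ d, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * gd:
            alpha *= 0.5                     # backtrack until Armijo holds
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if (k + 1) % n == 0 or g_new @ d >= 0.0:
            d = -g_new                       # periodic / non-descent restart
        x, g = x_new, g_new
    return x, max_iter
```

For example, `cg_fr(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0]))` runs the stand-in method from the S206 starting point; iteration counts will differ from Table 1 because both the coefficient and the line search differ from the MN algorithm.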

5. Conclusions

In this paper, we have proposed a new nonlinear conjugate gradient method, the MN algorithm. Its sufficient descent property holds without any line search, and the algorithm satisfies Property A. Employing a steplength technique that ensures the Zoutendijk condition holds, we have also proved that the method is globally convergent. Judging from the numerical experiments in Table 1, the MN algorithm attains higher precision with fewer iterations than most other algorithms. Finally, we have proposed the VMN algorithm, which also has the sufficient descent property and Property A, and which is globally convergent under the weak Wolfe-Powell line search.

6. References

[1] M. Al-Baali, "Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search," IMA Journal of Numerical Analysis, Vol. 5, No. 1, 1985, pp. 121-124. doi:10.1093/imanum/5.1.121

[2] Y. F. Hu and C. Storey, "Global Convergence Result for Conjugate Gradient Method," Journal of Optimization Theory and Applications, Vol. 71, No. 2, 1991, pp. 399-405. doi:10.1007/BF00939927

[3] G. Yu, Y. Zhao and Z. Wei, "A Descent Nonlinear Conjugate Gradient Method for Large-Scale Unconstrained Optimization," Applied Mathematics and Computation, Vol. 187, No. 2, 2007, pp. 636-643. doi:10.1016/j.amc.2006.08.087

[4] Z. Wei, S. Yao and L. Liu, "The Convergence Properties of Some New Conjugate Gradient Methods," Applied Mathematics and Computation, Vol. 183, No. 2, 2006, pp. 1341-1350. doi:10.1016/j.amc.2006.05.150

[5] G. Zoutendijk, "Nonlinear Programming, Computational Methods," In: J. Abadie, Ed., Integer and Nonlinear Programming, North-Holland Publishing Co., Amsterdam, 1970, pp. 37-86.

[6] J. C. Gilbert and J. Nocedal, "Global Convergence Properties of Conjugate Gradient Methods for Optimization," SIAM Journal on Optimization, Vol. 2, No. 1, 1992, pp. 21-42. doi:10.1137/0802003

[7] Y. H. Dai and Y. Yuan, "Nonlinear Conjugate Gradient