Applied Mathematics, 2011, 2, 1119-1123
doi:10.4236/am.2011.29154 Published Online September 2011 (http://www.SciRP.org/journal/am)
Copyright © 2011 SciRes. AM
A New Descent Nonlinear Conjugate Gradient Method for
Unconstrained Optimization
Hao Fan, Zhibin Zhu, Anwa Zhou
School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin, China
E-mail: 540103003@qq.com
Received June 15, 2011; revised July 10, 2011; accepted July 17, 2011
Abstract
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. Using a steplength technique which ensures that the Zoutendijk condition holds, the method is proved to be globally convergent. Finally, we generalize the method and give a further analysis.
Keywords: Large-Scale Unconstrained Optimization, Conjugate Gradient Method, Sufficient Descent Property, Global Convergence
1. Introduction
The nonlinear conjugate gradient method is designed to
solve the following unconstrained optimization problem:
$$\min_{x \in R^n} f(x),$$

where $f : R^n \to R$ is a smooth nonlinear function whose gradient at $x$ is denoted by $g(x)$. The iterative formula of the conjugate gradient methods is given by

$$x_{k+1} = x_k + t_k d_k, \quad k = 0, 1, \ldots, \tag{1.1}$$

where $t_k$ is a steplength which is computed by carrying out some line search, and $d_k$ is the search direction defined by

$$d_k = \begin{cases} -g_k, & \text{if } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{if } k \ge 2, \end{cases} \tag{1.2}$$

where $\beta_k$ is a scalar and $g_k$ denotes $g(x_k)$.
There are some well-known formulas for $\beta_k$, which are given as follows:

$$\beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2} \quad \text{(Polak-Ribiere-Polyak [1])}, \tag{1.3}$$

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2} \quad \text{(Fletcher-Reeves [2])}, \tag{1.4}$$

where $y_{k-1} = g_k - g_{k-1}$, and $\|\cdot\|$ stands for the Euclidean norm of vectors.
In addition, the sufficient descent condition is defined as follows:

$$g_k^T d_k \le -c \|g_k\|^2, \tag{1.5}$$

where $c > 0$. It has often been used in the literature to analyze the global convergence of conjugate gradient methods with inexact line searches.
Generally, the PRP method performs much better than the FR method in numerical computation. When the objective function is convex, Polak and Ribiere proved that the PRP method with the exact line search is globally convergent. But Powell showed that there exist nonconvex functions on which the PRP method does not converge globally, and he suggested that $\beta_k$ should not be allowed to be less than zero. Under the sufficient descent condition, Gilbert and Nocedal proved that the modified PRP method with $\beta_k = \max\{0, \beta_k^{PRP}\}$ is globally convergent with the Wolfe-Powell line search.
Recently, G. Yu [3] proposed a modified FR (MFR) formula such as

$$\beta_k^{MFR} = \frac{\mu_1 \|g_k\|^2}{\mu_2 |g_k^T d_{k-1}| + \mu_3 \|g_{k-1}\|^2}, \tag{1.6}$$

where $\mu_1 \in (0, +\infty)$, $\mu_2 \in (\mu_1, +\infty)$ and $\mu_3 \in (0, +\infty)$ are any given positive constants. They proved that, for any line search, (1.6) satisfies the condition (1.5) with $c = 1 - \mu_1/\mu_2$. In fact, the term $\mu_2 |g_k^T d_{k-1}|$ in the denominator of (1.6) plays an important role in enhancing descent. It essentially controls the relative
weight placed on conjugacy versus descent. Along this
way, G. Yu [3] proposed a new nonlinear conjugate gradient formula such as

$$\beta_k^{N} = \begin{cases} \dfrac{g_k^T (g_k - g_{k-1})}{\mu |g_k^T d_{k-1}| + \|g_{k-1}\|^2}, & \text{if } |g_k^T g_{k-1}| \le \|g_k\|^2, \\ 0, & \text{otherwise}, \end{cases} \tag{1.7}$$

in which $\mu > 1$. It possesses the sufficient descent property for any line search, and it has the advantage that the direction approaches the steepest descent direction when the steplength is small. They also proved that the algorithm possesses the global convergence property under the weak Wolfe-Powell line search.
Z. Wei [4] proposed a new nonlinear conjugate gradient formula such as

$$\beta_k^{*} = \frac{g_k^T \left( g_k - \dfrac{\|g_k\|}{\|g_{k-1}\|}\, g_{k-1} \right)}{\|g_{k-1}\|^2}. \tag{1.8}$$

In [4], the formula $\beta_k^{*}$ was used to compute the search directions for unconstrained optimization problems, and some general convergence results were given for the proposed formula with several line searches, such as the exact line search, the Wolfe-Powell line search and the Grippo-Lucidi line search. The formula satisfies $\beta_k^{*} \ge 0$ and has a form similar to $\beta_k^{PRP}$.
Combining the algorithms above, in this paper we propose a modification of $\beta_k^{N}$, denoted by $\beta_k^{MN}$; the resulting method is called the MN algorithm, where

$$\beta_k^{MN} = \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,|g_k^T g_{k-1}|}{\mu |g_k^T d_{k-1}| + \|g_{k-1}\|^2}, \tag{1.9}$$

in which $\mu > 1$. We then add some parameters to $\beta_k^{MN}$ so as to generalize it, which gives the VMN algorithm:

$$\beta_k^{VMN} = \frac{\mu_1 \left( \|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,|g_k^T g_{k-1}| \right)}{\mu_2 |g_k^T d_{k-1}| + \mu_3 \|g_{k-1}\|^2}, \tag{1.10}$$

in which $\mu_1 \in (0, +\infty)$, $\mu_2 \in (\mu_1, +\infty)$ and $\mu_3 \in (0, +\infty)$ are any given positive constants.
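To make the update concrete, the following Python sketch computes $\beta_k^{MN}$ of (1.9) and the search direction of (1.2). It is a minimal illustration under our notation; the function names and the parameter choice $\mu = 2$ are ours and are not prescribed by the paper.

import numpy as np

def beta_mn(g, g_prev, d_prev, mu=2.0):
    # beta_k^{MN} of Eq. (1.9); requires mu > 1.
    num = np.linalg.norm(g) ** 2 \
        - np.linalg.norm(g) / np.linalg.norm(g_prev) * abs(g @ g_prev)
    den = mu * abs(g @ d_prev) + np.linalg.norm(g_prev) ** 2
    return num / den

def direction(g, g_prev=None, d_prev=None, mu=2.0):
    # Search direction of Eq. (1.2) with beta_k = beta_k^{MN}.
    if d_prev is None:                    # k = 1: steepest descent step
        return -g
    return -g + beta_mn(g, g_prev, d_prev, mu) * d_prev

By the Cauchy-Schwarz inequality, the returned $\beta_k^{MN}$ is non-negative, as is proved at the beginning of Section 2.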
In the next section, we present the global convergence of the MN algorithm and establish some of its properties. The global convergence results of the VMN algorithm are given in Section 3. Numerical experiments are reported in Section 4, and Section 5 concludes the paper.
2. The Global Convergence of MN Algorithm
Firstly, we can prove that $\beta_k^{MN}$ in (1.9) is non-negative. By the Cauchy-Schwarz inequality, $|g_k^T g_{k-1}| \le \|g_k\| \|g_{k-1}\|$, so

$$\beta_k^{MN} = \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,|g_k^T g_{k-1}|}{\mu |g_k^T d_{k-1}| + \|g_{k-1}\|^2} \ge \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,\|g_k\|\,\|g_{k-1}\|}{\mu |g_k^T d_{k-1}| + \|g_{k-1}\|^2} = 0.$$

The following theorem shows that the MN algorithm possesses the sufficient descent property for any line search.
Theorem 2.1. Consider any method of the form (1.1) and (1.2), where $\beta_k = \beta_k^{MN}$. Then for all $k \ge 1$,

$$g_k^T d_k \le -\left(1 - \frac{1}{\mu}\right) \|g_k\|^2. \tag{2.1}$$

Proof. If $g_{k+1}^T d_k = 0$, then, since $\mu > 1$, we have

$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \beta_{k+1}^{MN} g_{k+1}^T d_k = -\|g_{k+1}\|^2 \le -\left(1 - \frac{1}{\mu}\right) \|g_{k+1}\|^2.$$

Otherwise, from the definition of $\beta_{k+1}^{MN}$, whose numerator is at most $\|g_{k+1}\|^2$ and whose denominator is at least $\mu |g_{k+1}^T d_k|$, we can obtain

$$\beta_{k+1}^{MN} \le \frac{\|g_{k+1}\|^2}{\mu |g_{k+1}^T d_k|},$$

and then we have

$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \beta_{k+1}^{MN} g_{k+1}^T d_k \le -\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2}{\mu |g_{k+1}^T d_k|}\,|g_{k+1}^T d_k| = -\left(1 - \frac{1}{\mu}\right) \|g_{k+1}\|^2.$$

From $g_1^T d_1 = -\|g_1\|^2 < 0$, we can deduce that (2.1) holds for all $k \ge 1$.
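The bound (2.1) can also be checked numerically. The following snippet, which reuses beta_mn from the sketch in Section 1, verifies the sufficient descent inequality on random data for the illustrative choice $\mu = 2$, for which $1 - 1/\mu = 1/2$:

import numpy as np

rng = np.random.default_rng(0)
mu = 2.0
for _ in range(1000):
    g_prev, g, d_prev = rng.standard_normal((3, 5))
    d = -g + beta_mn(g, g_prev, d_prev, mu) * d_prev
    # (2.1): g^T d <= -(1 - 1/mu) * ||g||^2, up to rounding error
    assert g @ d <= -(1.0 - 1.0 / mu) * (g @ g) + 1e-12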
In order to establish the global convergence result for the MN algorithm, we impose the following assumptions.

Assumption A.
1) The level set $\Omega = \{ x \in R^n : f(x) \le f(x_0) \}$ is bounded.
2) In some neighborhood $N$ of $\Omega$, $f$ is differentiable and its gradient $g$ is Lipschitz continuous; that is to say, there exists a constant $L > 0$ such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{2.2}$$

By using Assumption A, we can deduce that there exist constants $B > 0$ and $M > 0$ such that

$$\|x\| \le B, \quad \|g(x)\| \le M, \quad \forall x \in \Omega. \tag{2.3}$$
The following important result is obtained by Zoutendijk
[5].
Lemma 2.2. Suppose that Assumption A holds. Consider any iteration method of the form (1.1) and (1.2), where $t_k$ is obtained by the Wolfe-Powell line search. Then

$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{2.4}$$
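The Wolfe-Powell conditions can be enforced, for instance, by the standard bisection scheme sketched below. This is a textbook procedure rather than the authors' implementation, and the constants c1 = 1e-4 and c2 = 0.9 are conventional illustrative choices:

def wolfe_step(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    # Find t with f(x + t d) <= f(x) + c1 t g^T d  (sufficient decrease)
    # and grad(x + t d)^T d >= c2 g^T d            (curvature);
    # d must be a descent direction at x.
    t, lo, hi = 1.0, 0.0, float("inf")
    fx, gtd = f(x), grad(x) @ d
    for _ in range(max_iter):
        if f(x + t * d) > fx + c1 * t * gtd:
            hi = t                                # decrease fails: shrink
            t = 0.5 * (lo + hi)
        elif grad(x + t * d) @ d < c2 * gtd:
            lo = t                                # curvature fails: grow
            t = 2.0 * lo if hi == float("inf") else 0.5 * (lo + hi)
        else:
            return t                              # both conditions hold
    return t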
We now analyse the global convergence property of the MN algorithm.
Gilbert and Nocedal [6] introduced the following Prop-
erty A which pertains to the PRP method under the suffi-
cient descent condition. Now we will show that this Prop-
erty A pertains to the new method.
Property A. Consider a method of the form (1.1) and (1.2), and suppose that

$$0 < \gamma \le \|g_k\| \quad \text{for all } k. \tag{2.5}$$

We say that the method has Property A if there exist constants $b > 1$ and $\lambda > 0$ such that, for all $k$, $|\beta_k| \le b$, and

$$\|s_{k-1}\| \le \lambda \ \Longrightarrow\ |\beta_k| \le \frac{1}{2b},$$

where $s_{k-1} = x_k - x_{k-1} = t_{k-1} d_{k-1}$.
The following lemma shows that the new method has Property A.

Lemma 2.3. Consider the method of the form (1.1) and (1.2) in which $\beta_k = \beta_k^{MN}$. Suppose that Assumption A holds; then the method has Property A.

Proof. Set

$$b = \frac{2M^2}{\gamma^2}, \qquad \lambda = \frac{\gamma^2}{4LMb}.$$

By (1.9) and (2.5) we have

$$\beta_k^{MN} = \frac{\|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,|g_k^T g_{k-1}|}{\mu |g_k^T d_{k-1}| + \|g_{k-1}\|^2} \le \frac{\|g_k\|^2}{\|g_{k-1}\|^2} \le \frac{2M^2}{\gamma^2} = b.$$

On the other hand, by Assumption A 2) and (2.2), if $\|s_{k-1}\| \le \lambda$, then, writing the numerator of (1.9) as $\dfrac{\|g_k\|}{\|g_{k-1}\|} \left( \|g_k\| \|g_{k-1}\| - |g_k^T g_{k-1}| \right)$ and using

$$\|g_k\| \|g_{k-1}\| - |g_k^T g_{k-1}| \le \|g_k\| \|g_{k-1}\| - g_k^T g_{k-1} \le 2 \|g_{k-1}\|\,\|g_k - g_{k-1}\| \le 2L \|g_{k-1}\|\,\|s_{k-1}\|,$$

we obtain

$$\beta_k^{MN} \le \frac{\dfrac{\|g_k\|}{\|g_{k-1}\|}\, 2L \|g_{k-1}\|\,\|s_{k-1}\|}{\|g_{k-1}\|^2} = \frac{2L \|g_k\|\,\|s_{k-1}\|}{\|g_{k-1}\|^2} \le \frac{2LM\lambda}{\gamma^2} = \frac{1}{2b}.$$

The proof is finished.
If (2.5) holds and the method has Property A, then the steps with small steplength cannot occur too often. The following lemma shows this property.

Lemma 2.4. Suppose that Assumption A and (1.5) hold. Let $\{x_k\}$ and $\{d_k\}$ be generated by (1.1) and (1.2), in which $t_k$ satisfies the Wolfe-Powell line search and $\beta_k$ has Property A. If (2.5) holds, then there exists $\lambda > 0$ such that, for any $\Delta \in Z^{+}$ and any index $k_0$, there is an index $k \ge k_0$ such that

$$\left| K_{k,\Delta}^{\lambda} \right| > \frac{\Delta}{2},$$

where $K_{k,\Delta}^{\lambda} := \{ i \in Z^{+} : k \le i \le k + \Delta - 1,\ \|s_{i-1}\| > \lambda \}$, and $|K_{k,\Delta}^{\lambda}|$ denotes the number of elements of $K_{k,\Delta}^{\lambda}$.
Lemma 2.5. Suppose that Assumption A and (1.5) hold. Let $\{x_k\}$ be generated by (1.1) and (1.2), where $t_k$ satisfies the Wolfe-Powell line search and $\beta_k \ge 0$ has Property A. Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

The proofs of Lemmas 2.4 and 2.5 were given in [7]. By the above three lemmas, it is easy to obtain the following convergence result.
Theorem 2.6. Suppose that Assumption A holds. Let $\{x_k\}$ be generated by (1.1) and (1.2), where $t_k$ satisfies the Wolfe-Powell line search and $\beta_k$ is computed by (1.9). Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$
3. The Global Convergence of VMN Algorithm
In this section we add some parameters to $\beta_k^{MN}$ so as to generalize it, which gives the VMN algorithm:

$$\beta_k^{VMN} = \frac{\mu_1 \left( \|g_k\|^2 - \dfrac{\|g_k\|}{\|g_{k-1}\|}\,|g_k^T g_{k-1}| \right)}{\mu_2 |g_k^T d_{k-1}| + \mu_3 \|g_{k-1}\|^2}, \tag{3.1}$$

in which $\mu_1 \in (0, +\infty)$, $\mu_2 \in (\mu_1, +\infty)$ and $\mu_3 \in (0, +\infty)$ are any given positive constants.
Table 1. Detailed results of the numerical experiments for the MN algorithm.

No.     x_0          x_k                                      ||g_k||                   k
S201    (8, 9)       (5.00000000000007, 6.00000000000002)     5.338654227543967e-13     2
S202    (15, 2)      (11.41277974501077, 0.89680520867268)    7.343412874359107e-07     30
S205    (0, 0)       (3.00000000072742, 0.49999999510645)     2.411787046907220e-07     12
S206    (-1.2, 1)    (1.00000000400789, 1.00000020307759)     3.907063255896894e-07     5
S311    (1, 1)       (3.77931025686871, 3.28318599743028)     5.006611497285485e-07     6
S314    (2, 2)       (1.79540285273286, 1.37785978124895)     7.487443339742852e-08     6
Theorem 3.1. Suppose that Assumption A holds. Let $\{x_k\}$ be generated by (1.1) and (1.2), where $t_k$ satisfies the Wolfe-Powell line search and $\beta_k$ is computed by (3.1). Then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Similar to the discussion of the MN algorithm in Section 2, we can show that $\beta_k^{VMN} \ge 0$ and that the VMN algorithm possesses the sufficient descent property with $c = 1 - \mu_1/\mu_2$; the proof is similar to that of Theorem 2.1 in Section 2.
4. Numerical Experiments
In this section, we report some numerical experiments. The MN algorithm has been tested on some problems from [8]; the results are summarized in Table 1. For each test problem, No. is the number of the test problem in [8], $x_0$ is the initial point, $x_k$ is the final point, and $k$ is the number of iterations.

Table 1 shows the performance of the MN algorithm in terms of iterations. It is easy to see that, for all the problems, the algorithm is very efficient: the final points are accurate, and the number of iterations is small.
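As an illustration of how such an experiment can be run, the driver below applies the MN direction from Section 1 together with the wolfe_step sketch from Section 2 to the classical Rosenbrock function, which we use as a stand-in for problem S206; the authors' exact function scaling and line search parameters are not reported, so the iteration counts need not reproduce Table 1.

import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

x = np.array([-1.2, 1.0])                  # classical starting point
g, g_prev, d, k = rosenbrock_grad(x), None, None, 0
while np.linalg.norm(g) > 1e-6 and k < 500:
    d = direction(g, g_prev, d)            # MN direction, (1.2) with (1.9)
    t = wolfe_step(rosenbrock, rosenbrock_grad, x, d)
    x, g_prev, g = x + t * d, g, rosenbrock_grad(x + t * d)
    k += 1
print(k, x, np.linalg.norm(g))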
5. Conclusions
In this paper, we have proposed a new nonlinear conjugate gradient method, the MN algorithm. The sufficient descent property holds without any line search, and the algorithm satisfies Property A. We have also proved, employing a steplength technique which ensures that the Zoutendijk condition holds, that the method is globally convergent. The numerical experiments in Table 1 suggest that the MN algorithm attains high precision within a small number of iterations. Finally, we have proposed the VMN algorithm, which also possesses the sufficient descent property and Property A, and which is globally convergent under the weak Wolfe-Powell line search.
6. References
[1] M. Al-Baali, “Descent Property and Global Convergence
of the Fletcher-Reeves Method with Inexact Line
Search,” IMA Journal of Numerical Analysis, Vol. 5, No.
1, 1985, pp. 121-124. doi:10.1093/imanum/5.1.121
[2] Y. F. Hu and C. Storey, “Global Convergence Result for
Conjugate Gradient Method,” Journal of Optimization
Theory and Applications, Vol. 71, No. 2, 1991, pp.
399-405. doi:10.1007/BF00939927
[3] G. Yu, Y. Zhao and Z. Wei, “A Descent Nonlinear Con-
jugate Gradient Method for Large-Scale Unconstrained
Optimization,” Applied Mathematics and Computation,
Vol. 187, No. 2, 2007, pp. 636-643.
doi:10.1016/j.amc.2006.08.087
[4] Z. Wei, S. Yao and L. Liu, “The Convergence Properties
of Some New Conjugate Gradient Methods,” Applied
Mathematics and Computation, Vol. 183, No. 2, 2006, pp.
1341-1350. doi:10.1016/j.amc.2006.05.150
[5] G. Zoutendijk, “Nonlinear Programming, Computational
Methods,” In: J. Abadie, Ed., Integer and Nonlinear Pro-
gramming, North-Holland Publisher Co., Amsterdam,
1970, pp. 37-86.
[6] J. C. Gilbert and J. Nocedal, “Global Convergence Prop-
erties of Conjugate Gradient Methods for Optimization,”
SIAM Journal on Optimization, Vol. 2, No. 1, 1992, pp. 21-
42. doi:10.1137/0802003
[7] Y. H. Dai and Y. Yuan, “Nonlinear Conjugate Gradient
Methods,” Shanghai Scientific and Technical Publishers,
Shanghai, 1998, pp. 37-48.
[8] W. Hock and K. Schittkowski, “Test Examples for Non-
linear Programming Codes,” Journal of Optimization
Theory and Applications, Vol. 30, No. 1, 1981, pp. 127-
129. doi:10.1007/BF00934594