Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization

Journal of Applied Mathematics and Physics
Vol. 6, No. 1 (2018), Article ID: 81762, 8 pages
DOI: 10.4236/jamp.2018.61013


Cuiling Chen*, Liling Luo, Caihong Han, Yu Chen

College of Mathematics and Statistics, Guangxi Normal University, Guilin, China

Received: December 16, 2017; Accepted: January 13, 2018; Published: January 16, 2018

ABSTRACT

In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Furthermore, we generalize the search direction to a more general form and obtain the global convergence of the corresponding algorithm. Numerical results illustrate that the new algorithm is effective.

Keywords:

Unconstrained Optimization, Descent Method, Line Search, Global Convergence

1. Introduction

Consider an unconstrained optimization problem (UP)

$\underset{x\in {\Re }^{n}}{min}f\left(x\right),$ (1)

where $f:{\Re }^{n}\to \Re$ is a continuously differentiable function. In general, the iterative algorithms for solving (UP) usually take the form:

${x}_{k+1}={x}_{k}+{\alpha }_{k}{d}_{k},$ (2)

where ${x}_{k},{\alpha }_{k}$ and ${d}_{k}$ are the current iterate, a positive step length and a search direction, respectively. For simplicity, we denote $\nabla f\left({x}_{k}\right)$ by ${g}_{k}$ and $f\left({x}_{k}\right)$ by ${f}_{k}$ .

The main tasks in the iterative formula (2) are to choose the search direction ${d}_{k}$ and to determine the step length ${\alpha }_{k}$ along that direction. There are many classic methods for choosing the search direction ${d}_{k}$ , such as steepest descent methods, Newton-type methods, variable metric methods (see  ), and conjugate gradient methods

${d}_{k}=\left\{\begin{array}{ll}-{g}_{k}\hfill & \text{if}\text{\hspace{0.17em}}k=1,\hfill \\ -{g}_{k}+{\beta }_{k}{d}_{k-1}\hfill & \text{if}\text{\hspace{0.17em}}k\ge 2,\hfill \end{array}$ (3)

where ${\beta }_{k}$ is a parameter (see    ). The step length ${\alpha }_{k}$ is usually determined by a line search procedure, such as exact line search, Wolfe line search, Armijo line search, and so on. However, these line search procedures may require extensive evaluations of the objective function and its gradient, which often becomes a significant burden for large-scale problems. It is therefore desirable to avoid line search procedures in algorithm design, so as to reduce the number of function and gradient evaluations.
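As an illustration, the direction update (3) takes only a few lines of code. Here ${\beta }_{k}$ is instantiated with the Fletcher-Reeves choice ${\beta }_{k}={‖{g}_{k}‖}^{2}/{‖{g}_{k-1}‖}^{2}$ , one of the classic parameters mentioned in the literature; the function name and the sample vectors are our own choices for this sketch.

```python
import numpy as np

def cg_direction(g, g_prev=None, d_prev=None):
    """Conjugate gradient direction (3); beta_k here is the
    Fletcher-Reeves choice ||g_k||^2 / ||g_{k-1}||^2."""
    if g_prev is None or d_prev is None:   # k = 1: steepest descent step
        return -g
    beta = (g @ g) / (g_prev @ g_prev)
    return -g + beta * d_prev

# small worked example
g1 = np.array([3.0, -4.0])
d1 = cg_direction(g1)                      # equals -g1
g2 = np.array([1.0, 0.5])
d2 = cg_direction(g2, g_prev=g1, d_prev=d1)
```

With these numbers, ${\beta }_{2}=1.25/25=0.05$ , so ${d}_{2}=-{g}_{2}+0.05{d}_{1}=\left(-1.15,-0.3\right)$ ; note that ${g}_{2}^{\text{T}}{d}_{2}<0$ , i.e., ${d}_{2}$ is indeed a descent direction.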

Based on the above considerations, some authors have studied algorithms without line search. Recently, several conjugate gradient algorithms without line search were investigated. In  , Sun and Zhang studied some well-known conjugate gradient methods without line search, for instance, the Fletcher-Reeves, Hestenes-Stiefel, Dai-Yuan, Polak-Ribière and Conjugate Descent methods. In  , Chen and Sun studied a two-parameter family of conjugate gradient methods without line search. In   , Wang and Zhu put forward conjugate gradient path methods without line search. Shi and Shen, and Zhou, proposed descent methods without line search in  and  , respectively. Furthermore, Zhou presented the steepest descent algorithm without line search in  .

Motivated by the above works, in this paper we extend the descent algorithm without line search of  to a more general case and discuss its global convergence. The rest of this paper is organized as follows. In Section 2, we describe the extended descent algorithm without line search. In Section 3, we analyze its global convergence; further, we generalize the search direction to a more general form and obtain the global convergence of the corresponding algorithm. Finally, numerical results are reported in Section 4.

2. Extended Descent Algorithm

To proceed, we first make the following assumptions.

(H1) The function f is bounded below on the level set $\mathcal{L}=\left\{x\in {\Re }^{n}|f\left(x\right)\le f\left({x}_{1}\right)\right\}$ , where ${x}_{1}$ is the starting point.

(H2) The gradient g is Lipschitz continuous in an open convex set $\mathcal{B}$ that contains $\mathcal{L}$ , i.e., there exists $L>0$ such that

$‖g\left(x\right)-g\left(y\right)‖\le L‖x-y‖,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall x,y\in \mathcal{B}.$ (4)

Now we give the extended algorithm.

Algorithm 2.1. Given a starting point ${x}_{1}$ , a positive constant $ϵ$ , and three parameters ${\mu }_{1},{\mu }_{2}$ and $\rho$ such that $0<{\mu }_{1}<\frac{1}{2}<{\mu }_{2}<1$ , $\frac{1}{2}\le \rho <1$ . Let $k:=1$ .

Step 1. If $‖{g}_{k}‖<ϵ$ , then stop; otherwise go to Step 2.

Step 2. Compute

${s}_{k}=\left\{\begin{array}{l}\rho ,\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{\hspace{0.17em}}k=1,\hfill \\ \frac{\rho {‖{g}_{k}‖}^{2}}{\rho {‖{g}_{k}‖}^{2}+\left(1-\rho \right)|{g}_{k}^{\text{T}}{d}_{k-1}|},\text{\hspace{0.17em}}\text{\hspace{0.17em}}k\ge 2.\hfill \end{array}$ (5)

Step 3. Set search direction

${d}_{k}=\left\{\begin{array}{l}-{s}_{k}{g}_{k},\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,\hfill \\ -\left[\rho \left(1-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\right){g}_{k}+\left(1-\rho \right)\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}{d}_{k-1}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k\ge 2.\hfill \end{array}$ (6)

Step 4. Compute step length by the following rule. When $k=1$ , ${\alpha }_{k}$ is determined by Wolfe line search, i.e., it satisfies that

$f\left({x}_{k}+{\alpha }_{k}{d}_{k}\right)-{f}_{k}\le {\mu }_{1}{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k},$ (7)

$g{\left({x}_{k}+{\alpha }_{k}{d}_{k}\right)}^{\text{T}}{d}_{k}\ge {\mu }_{2}{g}_{k}^{\text{T}}{d}_{k}.$ (8)

When $k\ge 2$ ,

${\alpha }_{k}=-\frac{{g}_{k}^{\text{T}}{d}_{k}}{{L}_{k}{‖{d}_{k}‖}^{2}},$ (9)

where ${L}_{k}$ satisfies $\rho L\le {L}_{k}\le {m}_{k}L$ and $\left\{{m}_{k},k=1,2,\cdots \right\}$ is a positive sequence with a sufficiently large upper bound.

Step 5. Set next iterative point

${x}_{k+1}={x}_{k}+{\alpha }_{k}{d}_{k}.$ (10)

Step 6. Set $k:=k+1$ , and go to Step 1.

Remark 2.1. Note that the formulas for ${s}_{k}$ and ${d}_{k}$ in Algorithm 2.1 are generalized forms of those in  .
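To make the steps concrete, the following Python sketch implements Algorithm 2.1 under two stated simplifications: the Wolfe search at $k=1$ is replaced by plain Armijo backtracking (with hypothetical constants ${\mu }_{1}=0.1$ and step halving), and ${L}_{k}$ is a user-supplied constant, as in the experiments of Section 4. All function and variable names are our own; this is a sketch, not the authors' code.

```python
import numpy as np

def extended_descent(f, grad, x1, rho=0.6, Lk=100.0, eps=1e-5, max_iter=10000):
    """Sketch of Algorithm 2.1; for k >= 2 no line search is needed,
    since the step length follows formula (9)."""
    x = np.asarray(x1, dtype=float)
    g = grad(x)
    d_prev, alpha_prev = None, None
    for k in range(1, max_iter + 1):
        if np.linalg.norm(g) < eps:            # Step 1
            break
        if k == 1:
            s = rho                            # (5), k = 1
            d = -s * g                         # (6), k = 1
            alpha, mu1 = 1.0, 0.1              # backtracking stand-in for Wolfe
            while f(x + alpha * d) - f(x) > mu1 * alpha * (g @ d):
                alpha *= 0.5
        else:
            gn2 = g @ g
            s = rho * gn2 / (rho * gn2 + (1 - rho) * abs(g @ d_prev))  # (5)
            w = alpha_prev * s / (1 + alpha_prev)
            d = -(rho * (1 - w) * g + (1 - rho) * w * d_prev)          # (6)
            alpha = -(g @ d) / (Lk * (d @ d))                          # (9)
        x = x + alpha * d                      # (10)
        d_prev, alpha_prev = d, alpha
        g = grad(x)
    return x, k

# demo: minimize f(x) = 0.5 ||x||^2 starting from x1 = (1, 1)
x_star, iters = extended_descent(lambda x: 0.5 * (x @ x), lambda x: x,
                                 np.array([1.0, 1.0]))
```

On this toy quadratic the iterates contract toward the origin, so the stopping test $‖{g}_{k}‖<ϵ$ is reached well before the iteration cap; a large constant ${L}_{k}$ makes the fixed steps (9) short and the convergence correspondingly slow, which is the trade-off for avoiding line searches.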

3. Global Convergence

Lemma 3.1. If Algorithm 2.1 generates an infinite sequence $\left\{{x}_{k},k=1,2,\cdots \right\}$ , then all search directions ${d}_{k}$ are descent, and $\forall k\ge 2$ , it holds that

$-{g}_{k}^{\text{T}}{d}_{k}\ge \frac{\rho {‖{g}_{k}‖}^{2}}{1+{\alpha }_{k-1}}.$ (11)

Proof. If $k=1$ , it is obvious that $-{g}_{1}^{\text{T}}{d}_{1}=\rho {‖{g}_{1}‖}^{2}>0$ . If $k\ge 2$ , by (5) and (6), we have

$\begin{array}{c}-{g}_{k}^{\text{T}}{d}_{k}=\rho \left(1-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\right){‖{g}_{k}‖}^{2}+\left(1-\rho \right)\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}{g}_{k}^{\text{T}}{d}_{k-1}\\ =\rho {‖{g}_{k}‖}^{2}-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\left[\rho {‖{g}_{k}‖}^{2}-\left(1-\rho \right){g}_{k}^{\text{T}}{d}_{k-1}\right]\\ \ge \rho {‖{g}_{k}‖}^{2}-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\left[\rho {‖{g}_{k}‖}^{2}+\left(1-\rho \right)|{g}_{k}^{\text{T}}{d}_{k-1}|\right]\\ =\frac{\rho {‖{g}_{k}‖}^{2}}{1+{\alpha }_{k-1}}.\end{array}$ (12)

This completes the proof. □

Lemma 3.2 (Mean value theorem, see  ). If the objective function $f\left(x\right)$ is continuously differentiable on an open convex set $\mathcal{B}$ , then

$f\left({x}_{k}+\alpha {d}_{k}\right)-{f}_{k}=\alpha {\int }_{0}^{1}\text{ }\text{ }g{\left({x}_{k}+t\alpha {d}_{k}\right)}^{\text{T}}{d}_{k}\text{d}t,$ (13)

where ${x}_{k},{x}_{k}+\alpha {d}_{k}\in \mathcal{B}$ , ${d}_{k}\in {\Re }^{n}$ . If $f\left(x\right)$ is twice continuously differentiable on $\mathcal{B}$ , then

$g\left({x}_{k}+\alpha {d}_{k}\right)-{g}_{k}=\alpha {\int }_{0}^{1}\text{ }\text{ }{\nabla }^{2}f\left({x}_{k}+t\alpha {d}_{k}\right){d}_{k}\text{d}t,$ (14)

and

$f\left({x}_{k}+\alpha {d}_{k}\right)-{f}_{k}=\alpha {g}_{k}^{\text{T}}{d}_{k}+{\alpha }^{2}{\int }_{0}^{1}\left(1-t\right){d}_{k}^{\text{T}}{\nabla }^{2}f\left({x}_{k}+t\alpha {d}_{k}\right){d}_{k}\text{d}t.$ (15)

Lemma 3.3. $\forall k\ge 2$ ,

${‖{d}_{k}‖}^{2}\le 3{\rho }^{2}\cdot \underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}.$ (16)

Proof. When $k\ge 2$ , it holds that $\left(1-\rho \right){s}_{k}|{g}_{k}^{\text{T}}{d}_{k-1}|=\rho \left(1-{s}_{k}\right){‖{g}_{k}‖}^{2}$ by (5). Then $\forall k\ge 2$ , we have

$\begin{array}{c}{‖{d}_{k}‖}^{2}={‖\rho \left(1-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\right){g}_{k}+\left(1-\rho \right)\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}{d}_{k-1}‖}^{2}\\ ={\rho }^{2}{\left(1-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\right)}^{2}{‖{g}_{k}‖}^{2}+2\rho \left(1-\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdot \left(1-\rho \right)\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\cdot {g}_{k}^{\text{T}}{d}_{k-1}+{\left(1-\rho \right)}^{2}{\left(\frac{{\alpha }_{k-1}{s}_{k}}{1+{\alpha }_{k-1}}\right)}^{2}{‖{d}_{k-1}‖}^{2}\\ \le {\rho }^{2}{‖{g}_{k}‖}^{2}+2\rho \left(1-\rho \right){s}_{k}|{g}_{k}^{\text{T}}{d}_{k-1}|+{‖{d}_{k-1}‖}^{2}\\ ={\rho }^{2}{‖{g}_{k}‖}^{2}+2{\rho }^{2}\left(1-{s}_{k}\right){‖{g}_{k}‖}^{2}+{‖{d}_{k-1}‖}^{2}\le 3{\rho }^{2}{‖{g}_{k}‖}^{2}+{‖{d}_{k-1}‖}^{2}.\end{array}$

By induction, noting that ${‖{d}_{1}‖}^{2}={\rho }^{2}{‖{g}_{1}‖}^{2}$ , it follows that

${‖{d}_{k}‖}^{2}\le 3{\rho }^{2}{‖{g}_{k}‖}^{2}+3{\rho }^{2}{‖{g}_{k-1}‖}^{2}+3{\rho }^{2}{‖{g}_{k-2}‖}^{2}+\cdots +{\rho }^{2}{‖{g}_{1}‖}^{2}.$

Therefore (16) holds. The proof is completed. □
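As a sanity check, the descent bound (11) of Lemma 3.1 can also be verified numerically: for random ${g}_{k}$ , ${d}_{k-1}$ and ${\alpha }_{k-1}>0$ , the direction built from (5) and (6) always satisfies the inequality. The following sketch uses an arbitrarily chosen $\rho =0.7$ and random data purely for illustration.

```python
import numpy as np

def check_descent_bound(trials=1000, n=5, rho=0.7, seed=0):
    """Numerically verify inequality (11): for random g_k, d_{k-1}
    and alpha_{k-1} > 0, the direction d_k from (5)-(6) satisfies
    -g_k^T d_k >= rho * ||g_k||^2 / (1 + alpha_{k-1})."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        g = rng.normal(size=n)
        d_prev = rng.normal(size=n)
        a_prev = rng.uniform(0.01, 10.0)
        gn2 = g @ g
        s = rho * gn2 / (rho * gn2 + (1 - rho) * abs(g @ d_prev))  # (5)
        w = a_prev * s / (1 + a_prev)
        d = -(rho * (1 - w) * g + (1 - rho) * w * d_prev)          # (6)
        if -(g @ d) < rho * gn2 / (1 + a_prev) - 1e-12:
            return False
    return True
```

The check mirrors the chain of inequalities in (12): the key identity is that $\rho {‖{g}_{k}‖}^{2}+\left(1-\rho \right)|{g}_{k}^{\text{T}}{d}_{k-1}|=\rho {‖{g}_{k}‖}^{2}/{s}_{k}$ by (5).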

Theorem 3.1. If (H1), (H2) hold, and Algorithm 2.1 generates an infinite sequence $\left\{{x}_{k},k=1,2,\cdots \right\}$ , then

$\underset{k=2}{\overset{+\infty }{\sum }}\frac{{‖{g}_{k}‖}^{4}}{{\left(1+{\alpha }_{k-1}\right)}^{2}\underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}}<+\infty ;$ (17)

and

$\underset{k=2}{\overset{+\infty }{\sum }}\frac{{\alpha }_{k}}{1+{\alpha }_{k-1}}{‖{g}_{k}‖}^{2}<+\infty .$ (18)

Proof. When $k\ge 2$ , from (13), (4), Lemma 3.1, Lemma 3.3 and $\rho L\le {L}_{k}\le {m}_{k}L$ , it yields that

$\begin{array}{c}{f}_{k}-{f}_{k+1}=-{\alpha }_{k}{\int }_{0}^{1}\text{ }\text{ }g{\left({x}_{k}+t{\alpha }_{k}{d}_{k}\right)}^{\text{T}}{d}_{k}\text{d}t\\ =-{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}-{\alpha }_{k}{\int }_{0}^{1}{\left[g\left({x}_{k}+t{\alpha }_{k}{d}_{k}\right)-{g}_{k}\right]}^{\text{T}}{d}_{k}\text{d}t\\ \ge -{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}-{\alpha }_{k}{\int }_{0}^{1}‖g\left({x}_{k}+t{\alpha }_{k}{d}_{k}\right)-{g}_{k}‖\cdot ‖{d}_{k}‖\text{d}t\\ \ge -{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}-{\alpha }_{k}^{2}L{\int }_{0}^{1}\text{ }\text{ }t{‖{d}_{k}‖}^{2}\text{d}t=-{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}-\frac{1}{2}{\alpha }_{k}^{2}L{‖{d}_{k}‖}^{2}\end{array}$

$\begin{array}{c}=\left(\frac{1}{{L}_{k}}-\frac{1}{2{L}_{k}^{2}}\right)\frac{{\left({g}_{k}^{\text{T}}{d}_{k}\right)}^{2}}{{‖{d}_{k}‖}^{2}}\ge \frac{\left(2\rho -1\right){\left({g}_{k}^{\text{T}}{d}_{k}\right)}^{2}}{2L{m}_{k}^{2}{‖{d}_{k}‖}^{2}}\\ \ge \frac{\left(2\rho -1\right)\cdot {\rho }^{2}{‖{g}_{k}‖}^{4}}{2L{m}_{k}^{2}{\left(1+{\alpha }_{k-1}\right)}^{2}\cdot 3{\rho }^{2}\cdot \underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}}\\ =\frac{\left(2\rho -1\right){‖{g}_{k}‖}^{4}}{6L{m}_{k}^{2}{\left(1+{\alpha }_{k-1}\right)}^{2}\underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}},\end{array}$ (19)

which implies that $\left\{{f}_{k},k=1,2,\cdots \right\}$ is a decreasing sequence. Consequently, the sequence $\left\{{x}_{k},k=1,2,\cdots \right\}$ generated by Algorithm 2.1 is contained in $\mathcal{B}$ , and by (H1) there exists a constant ${f}^{*}$ such that ${\mathrm{lim}}_{k\to \infty }{f}_{k}={f}^{*}$ . Therefore

$\underset{k=2}{\overset{+\infty }{\sum }}\left({f}_{k}-{f}_{k+1}\right)=\underset{N\to +\infty }{\mathrm{lim}}\underset{k=2}{\overset{N}{\sum }}\left({f}_{k}-{f}_{k+1}\right)=\underset{N\to +\infty }{\mathrm{lim}}\left({f}_{2}-{f}_{N+1}\right)={f}_{2}-{f}^{*}.$

Thus

$\underset{k=2}{\overset{+\infty }{\sum }}\left({f}_{k}-{f}_{k+1}\right)<+\infty ,$

which combining with (19) yields

$\underset{k=2}{\overset{+\infty }{\sum }}\frac{{‖{g}_{k}‖}^{4}}{{m}_{k}^{2}{\left(1+{\alpha }_{k-1}\right)}^{2}\underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}}<+\infty .$ (20)

Since $\left\{{m}_{k},k=1,2,\cdots \right\}$ has an upper bound, (17) holds.

On the other hand, by (9) and Lemma 3.1, we have

$\begin{array}{c}{f}_{k}-{f}_{k+1}\ge -{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}-\frac{1}{2}{\alpha }_{k}^{2}L{‖{d}_{k}‖}^{2}\\ =-{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}+\frac{L{\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}}{2{L}_{k}}=-\frac{\left(2{L}_{k}-L\right)\left({\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}\right)}{2{L}_{k}}\\ \ge -\frac{\left(2\rho -1\right)\left({\alpha }_{k}{g}_{k}^{\text{T}}{d}_{k}\right)}{2\rho }\ge \frac{\left(2\rho -1\right){\alpha }_{k}{‖{g}_{k}‖}^{2}}{2\left(1+{\alpha }_{k-1}\right)}.\end{array}$ (21)

By the same argument as in the above proof, (18) holds. The proof is completed. □

Lemma 3.4 (see  ). If the conditions of Theorem 3.1 hold and ${\mathrm{sup}}_{k\ge 1}\left\{{\alpha }_{k}\right\}<+\infty$ , then both sequences $\left\{{g}_{k},k=1,2,\cdots \right\}$ and $\left\{{d}_{k},k=1,2,\cdots \right\}$ are bounded.

Theorem 3.2. If the conditions in Theorem 3.1 hold, then

$\mathrm{lim}{\mathrm{inf}}_{k\to +\infty }‖{g}_{k}‖=0.$ (22)

Proof. Suppose, to the contrary, that $\mathrm{lim}{\mathrm{inf}}_{k\to +\infty }‖{g}_{k}‖\ne 0$ . Then there exists a positive constant $\gamma$ such that

$‖{g}_{k}‖\ge \gamma ,\text{\hspace{0.17em}}\forall k\ge 1.$ (23)

In the following, we carry out the proof in two cases.

Case 1. Suppose that ${\mathrm{sup}}_{k\ge 1}\left\{{\alpha }_{k}\right\}<+\infty$ . By (17), we have

$\underset{k=2}{\overset{+\infty }{\sum }}\frac{{‖{g}_{k}‖}^{4}}{\underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}}<+\infty$ (24)

From Lemma 3.4, there exists $M>0$ such that $‖{g}_{k}‖\le M,\forall k\ge 1$ . Combining this with (23), we have

$\frac{{‖{g}_{k}‖}^{4}}{\underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}}\ge \frac{{\gamma }^{4}}{k\cdot {M}^{2}}.$

It is known that

$\underset{k=2}{\overset{+\infty }{\sum }}\frac{{\gamma }^{4}}{k\cdot {M}^{2}}=\frac{{\gamma }^{4}}{{M}^{2}}\underset{k=2}{\overset{+\infty }{\sum }}\frac{1}{k}=+\infty ,$

so

$\underset{k=2}{\overset{+\infty }{\sum }}\frac{{‖{g}_{k}‖}^{4}}{\underset{1\le i\le k}{\sum }{‖{g}_{i}‖}^{2}}=+\infty ,$ (25)

which contradicts (24).

Case 2. When ${\mathrm{sup}}_{k\ge 1}\left\{{\alpha }_{k}\right\}=+\infty$ , the proof is the same as that in  and is omitted here.

It follows from Case 1 and Case 2 that (22) holds. This completes the proof. □

Remark 3.1. The search direction of Algorithm 2.1 can be extended to the following more general form:

${d}_{k}=\left\{\begin{array}{l}-{s}_{k}{g}_{k},\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,\hfill \\ -\left[\rho \left(1-\phi \left({\alpha }_{k-1}\right){s}_{k}\right){g}_{k}+\left(1-\rho \right)\phi \left({\alpha }_{k-1}\right){s}_{k}{d}_{k-1}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}k\ge 2,\hfill \end{array}$ (26)

where the function $\phi \left(\alpha \right)$ satisfies the following conditions (see  ):

a) It is continuous and strictly monotone increasing when $\alpha \in \left[0,+\infty \right)$ ;

b) ${\mathrm{lim}}_{\alpha \to {0}^{+}}\phi \left(\alpha \right)=\phi \left(0\right)=0$ and ${\mathrm{lim}}_{\alpha \to +\infty }\phi \left(\alpha \right)=1$ ;

c) $\alpha \left(1-\phi \left(\alpha \right)\right)$ is continuous, strictly monotone increasing when $\alpha \in \left[0,+\infty \right)$ , and

$\underset{\alpha \to +\infty }{\mathrm{lim}}\alpha \left(1-\phi \left(\alpha \right)\right)=1.$

Evidently, many functions satisfy conditions (a)-(c), for example, $\frac{\alpha }{1+\alpha }$ , $\frac{{\alpha }^{2}}{1+\alpha +{\alpha }^{2}}$ , $\frac{{\alpha }^{3}}{1+{\alpha }^{2}+{\alpha }^{3}}$ , etc. (see  ). We denote by Algorithm 3.1 the variant of Algorithm 2.1 in which ${d}_{k}$ is determined by (26). Using the proof technique of Theorem 3.2, its convergence theorem is easy to obtain.

4. Numerical Results

In this section, we report some preliminary numerical experiments. The test problems and their initial values are drawn from  .

In the numerical experiments, we take the parameter ${L}_{k}=100$ , and stop the iteration when the inequality $‖{g}_{k}‖\le {10}^{-5}$ is satisfied. The detailed numerical results are reported in Table 1, in which NI, NF and NG denote the total numbers of iterations, function evaluations and gradient evaluations, respectively. From Table 1, we can see that the extended algorithm performs well numerically.

Table 1. Numerical results.

5. Conclusion

In this paper, we extended the descent algorithm without line search of  to a more general case and established its global convergence. Compared with  , the extended algorithm shows better numerical performance, so it is effective. In the future, we will further study descent algorithms without line search and try to obtain new ones that not only converge globally but also have good numerical performance.

Acknowledgements

We gratefully acknowledge support from the Scholarship Fund of the Education Department of Guangxi Zhuang Autonomous Region, the Guangxi Basic Ability Improvement Project for Young and Middle-aged Teachers of Colleges and Universities (2017KY0068, KY2016YB069), the Guangxi Higher Education Undergraduate Teaching Reform Project (2017JGB147), the NNSF of China (11761014), and the Guangxi Natural Science Foundation (2017GXNSFAA198243).

Cite this paper

Chen, C.L., Luo, L.L., Han, C.H. and Chen, Y. (2018) Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization. Journal of Applied Mathematics and Physics, 6, 130-137. https://doi.org/10.4236/jamp.2018.61013

References

1. Nocedal, J. and Wright, S.J. (1999) Numerical Optimization. Springer-Verlag, New York. https://doi.org/10.1007/b98874

2. Dai, Y.H. and Yuan, Y.X. (2000) Nonlinear Conjugate Gradient Methods. Shanghai Science and Technology Press of China, Shanghai. (In Chinese)

3. Gilbert, J.C. and Nocedal, J. (1992) Global Convergence Properties of Conjugate Gradient Methods for Optimization. SIAM Journal on Optimization, 2, 21-42. https://doi.org/10.1137/0802003

4. Grippo, L. and Lucidi, S. (1997) A Globally Convergent Version of the Polak-Ribière Conjugate Gradient Method. Mathematical Programming, 78, 375-391. https://doi.org/10.1007/BF02614362

5. Sun, J. and Zhang, J.P. (2001) Global Convergence of Conjugate Gradient Methods without Line Search. Annals of Operations Research, 103, 161-173. https://doi.org/10.1023/A:1012903105391

6. Chen, X.D. and Sun, J. (2002) Global Convergence of a Two-Parameter Family of Conjugate Gradient Methods without Line Search. Journal of Computational and Applied Mathematics, 146, 37-45. https://doi.org/10.1016/S0377-0427(02)00416-8

7. Wang, J.Y. and Zhu, D.T. (2016) Conjugate Gradient Path Method without Line Search Technique for Derivative-Free Unconstrained Optimization. Numerical Algorithms, 73, 957-983. https://doi.org/10.1007/s11075-016-0124-9

8. Wang, J.Y. and Zhu, D.T. (2017) Derivative-Free Restrictively Preconditioned Conjugate Gradient Path Method without Line Search Technique for Solving Linear Equality Constrained Optimization. Computers and Mathematics with Applications, 73, 277-293. https://doi.org/10.1016/j.camwa.2016.11.025

9. Shi, Z.J. and Shen, J. (2005) Convergence of Descent Method without Line Search. Applied Mathematics and Computation, 167, 94-107. https://doi.org/10.1016/j.amc.2004.06.097

10. Zhou, G.M. (2009) A Descent Algorithm without Line Search for Unconstrained Optimization. Applied Mathematics and Computation, 215, 2528-2533. https://doi.org/10.1016/j.amc.2009.08.058

11. Zhou, G.M. and Feng, C.S. (2013) The Steepest Descent Algorithm without Line Search for p-Laplacian. Applied Mathematics and Computation, 224, 36-45. https://doi.org/10.1016/j.amc.2013.07.096

12. Shi, Z.J. and Shen, J. (2005) A New Descent Algorithm with Curve Search Rule. Applied Mathematics and Computation, 161, 753-768. https://doi.org/10.1016/j.amc.2003.12.058

13. Moré, J.J., Garbow, B.S. and Hillstrom, K.E. (1981) Testing Unconstrained Optimization Software. ACM Transactions on Mathematical Software, 7, 17-41. https://doi.org/10.1145/355934.355936