﻿ A Study on the Convergence of Gradient Method with Momentum for Sigma-Pi-Sigma Neural Networks

Journal of Applied Mathematics and Physics
Vol.06 No.04(2018), Article ID:84026,8 pages
10.4236/jamp.2018.64075


Xun Zhang, Naimin Zhang*

School of Mathematics and Information Science, Wenzhou University, Wenzhou, China

Received: December 20, 2017; Accepted: April 23, 2018; Published: April 26, 2018

ABSTRACT

In this paper, a gradient method with momentum for sigma-pi-sigma neural networks (SPSNN) is considered in order to accelerate the convergence of the learning procedure for the network weights. The momentum coefficient is chosen in an adaptive manner, and the corresponding weak convergence and strong convergence results are proved.

Keywords:

Sigma-Pi-Sigma Neural Network, Momentum Term, Gradient Method, Convergence

1. Introduction

The pi-sigma network (PSN) is a class of high-order feedforward neural networks characterized by the fast convergence rate of a single-layer network and the nonlinear mapping capability unique to high-order networks [1] . To further improve the applicability of the network, Li introduced a more complex network structure based on the PSN, called the sigma-pi-sigma neural network (SPSNN) [2] . An SPSNN can learn to implement static mappings in a manner similar to multilayer neural networks and radial basis function networks.

The gradient method is often used for training neural networks; its main disadvantages are slow convergence and the local minimum problem. To speed up and stabilize the training iteration, a momentum term is often added to the weight-increment formula, so that the present weight update combines the present gradient of the error function with the previous weight update [3] . Many researchers have developed the theory of momentum and extended its applications. For the back-propagation algorithm, Phansalkar and Sastry give a stability analysis with an added momentum term [4] . Torii and Bhaya discuss the convergence of the gradient method with momentum under the restriction that the error function is quadratic [5] [6] . Shao et al. study adaptive momentum for both the batch gradient method and the online gradient method, and compare the efficiency of momentum with that of penalty terms [7] [8] [9] [10] [11] . The key to the convergence analysis of momentum algorithms is the monotonicity of the error function during the learning procedure, which is generally proved under the assumption that the activation function and its derivatives are uniformly bounded. In [8] [10] [12] [13] , convergence results for the gradient method with momentum are given for both two-layer and multi-layer feedforward neural networks. In this paper, we consider the gradient method with momentum for sigma-pi-sigma neural networks and discuss its convergence.

The rest of the paper is organized as follows. In Section 2 we introduce the neural network model of SPSNN and the gradient method with momentum. In Section 3 we give the convergence analysis of the gradient method with momentum for training SPSNN. Numerical experiments are given in Section 4. Finally, in Section 5, we end the paper with some conclusions.

2. The Neural Network Model of SPSNN and Gradient Method with Momentum

In this section we introduce the sigma-pi-sigma neural network, which is a multilayer neural network. The output of the SPSNN has the form ${\sum }_{n=1}^{K}{\prod }_{i=1}^{n}{\sum }_{j=1}^{{N}_{v}}{f}_{nij}\left({x}_{j}\right)$ , where ${x}_{j}$ is an input, ${N}_{v}$ is the number of inputs, ${f}_{nij}\left(\text{ }\right)$ is a function generated through the network training, and K is the number of pi-sigma networks (PSN), which are the basic building blocks of the SPSNN. The function ${f}_{nij}\left({x}_{j}\right)$ takes the form ${\sum }_{k=1}^{{N}_{q}+{N}_{e}-1}{w}_{nijk}{B}_{ijk}\left({x}_{j}\right)$ , where each function ${B}_{ijk}\left(\text{ }\right)$ takes the value 0 or 1, and the ${w}_{nijk}$ are weight values stored in memory. ${N}_{q}$ and ${N}_{e}$ are the numbers of pieces of information stored in ${x}_{j}$ . For a K-th order SPSNN, the total number of weights is

$\frac{1}{2}×K×\left(K+1\right)×{N}_{v}×\left({N}_{q}+{N}_{e}-1\right)$ .
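The weight count follows directly from the formula above. The parameter values below are hypothetical, chosen only as an illustration; one such combination yields a 24-dimensional weight vector, matching the dimension used in the experiment of Section 4.

```python
def spsnn_weight_count(K, N_v, N_q, N_e):
    """Total number of weights in a K-th order SPSNN:
    (1/2) * K * (K+1) * N_v * (N_q + N_e - 1)."""
    return K * (K + 1) * N_v * (N_q + N_e - 1) // 2

# Hypothetical parameters: K=2, N_v=4, N_q+N_e-1=2
print(spsnn_weight_count(2, 4, 2, 1))  # -> 24
```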

For a set of training examples $\left\{\left({S}_{t},{O}_{t}\right)\in {R}^{{N}_{v}}×R\right\}$ , where ${O}_{t}$ is the ideal output, $t=1,2,\cdots ,T$ , we have the following actual output:

${y}_{t}={\sum }_{n=1}^{K}{\prod }_{i=1}^{n}{\sum }_{j=1}^{{N}_{v}}{\sum }_{k=1}^{{N}_{q}+{N}_{e}-1}{w}_{nijk}{B}_{ijk}\left({x}_{j}^{\left({S}_{t}\right)}\right)$ ,

where ${x}_{j}^{\left({S}_{t}\right)}$ denotes the jth element of a given input vector ${S}_{t}$ .
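The actual output ${y}_{t}$ can be sketched in code. This is a minimal illustration, not the authors' implementation: the binary values ${B}_{ijk}\left({x}_{j}\right)$ are assumed to be precomputed for a given input and stored in an array `B`.

```python
import numpy as np

def spsnn_output(w, B):
    """y = sum_{n=1}^{K} prod_{i=1}^{n} sum_{j,k} w[n,i,j,k] * B[i,j,k].

    w : weights, shape (K, K, N_v, N_q + N_e - 1); only entries with i <= n are used.
    B : precomputed binary values B_ijk(x_j) for one input,
        shape (K, N_v, N_q + N_e - 1).
    """
    K = w.shape[0]
    y = 0.0
    for n in range(K):          # n-th pi-sigma network PSN_n
        prod = 1.0
        for i in range(n + 1):  # product over its n summation units
            prod *= np.sum(w[n, i] * B[i])
        y += prod
    return y
```

For example, with all weights and all `B` values equal to 1 and K = 2, N_v = 2, N_q + N_e - 1 = 2, each inner sum equals 4, so the output is 4 + 4·4 = 20.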

In order to train the SPSNN, we choose a quadratic error function $E\left(W\right)$ :

$E\left(W\right)=\frac{1}{2}{\sum }_{t=1}^{T}{\left({O}_{t}-{y}_{t}\right)}^{2}\equiv {\sum }_{t=1}^{T}{g}_{t}\left(W\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{g}_{t}\left(W\right)=\frac{1}{2}{\left({O}_{t}-{y}_{t}\right)}^{2}$

where $W={\left({w}_{1111},{w}_{1112},\cdots ,{w}_{K,K,{N}_{v},{N}_{q}+{N}_{e}-1}\right)}^{\text{T}}$ . For convenience we denote ${w}_{\alpha }={w}_{K,K,{N}_{v},{N}_{q}+{N}_{e}-1}$ .

The gradient method with momentum is used to train weights. The gradients of $E\left(W\right)$ and ${g}_{t}\left(W\right)$ are denoted by

$\nabla E\left(W\right)={\left(\frac{\partial E\left(W\right)}{\partial {w}_{1111}},\frac{\partial E\left(W\right)}{\partial {w}_{1112}},\cdots ,\frac{\partial E\left(W\right)}{\partial {w}_{nijk}},\cdots ,\frac{\partial E\left(W\right)}{\partial {w}_{\alpha }}\right)}^{\text{T}}$ ,

$\nabla {g}_{t}\left(W\right)={\left(\frac{\partial {g}_{t}\left(W\right)}{\partial {w}_{1111}},\frac{\partial {g}_{t}\left(W\right)}{\partial {w}_{1112}},\cdots ,\frac{\partial {g}_{t}\left(W\right)}{\partial {w}_{nijk}},\cdots ,\frac{\partial {g}_{t}\left(W\right)}{\partial {w}_{\alpha }}\right)}^{\text{T}}$ ,

and the Hessian matrices of ${g}_{t}\left({W}^{m}\right)$ and $E\left({W}^{m}\right)$ at ${W}^{m}$ are denoted by

${\nabla }^{2}{g}_{t}\left({W}^{m}\right)=\left(\begin{array}{ccc}\frac{{\partial }^{2}{g}_{t}\left({W}^{m}\right)}{\partial {w}_{1111}^{m}\partial {w}_{1111}^{m}}& \cdots & \frac{{\partial }^{2}{g}_{t}\left({W}^{m}\right)}{\partial {w}_{1111}^{m}\partial {w}_{\alpha }^{m}}\\ ⋮& \ddots & ⋮\\ \frac{{\partial }^{2}{g}_{t}\left({W}^{m}\right)}{\partial {w}_{\alpha }^{m}\partial {w}_{1111}^{m}}& \cdots & \frac{{\partial }^{2}{g}_{t}\left({W}^{m}\right)}{\partial {w}_{\alpha }^{m}\partial {w}_{\alpha }^{m}}\end{array}\right)$ ,

${\nabla }^{2}E\left({W}^{m}\right)=\left(\begin{array}{ccc}\frac{{\partial }^{2}E\left({W}^{m}\right)}{\partial {w}_{1111}^{m}\partial {w}_{1111}^{m}}& \cdots & \frac{{\partial }^{2}E\left({W}^{m}\right)}{\partial {w}_{1111}^{m}\partial {w}_{\alpha }^{m}}\\ ⋮& \ddots & ⋮\\ \frac{{\partial }^{2}E\left({W}^{m}\right)}{\partial {w}_{\alpha }^{m}\partial {w}_{1111}^{m}}& \cdots & \frac{{\partial }^{2}E\left({W}^{m}\right)}{\partial {w}_{\alpha }^{m}\partial {w}_{\alpha }^{m}}\end{array}\right)$ .

Given arbitrary initial weight vectors ${W}^{0}$ , ${W}^{1}$ , the gradient method with momentum updates the weight vector W by

${W}^{m+1}={W}^{m}-\eta \nabla E\left({W}^{m}\right)+{\tau }^{m}\left({W}^{m}-{W}^{m-1}\right),\text{\hspace{0.17em}}m=1,2,\cdots$ , (1)

where $\eta >0$ is the learning rate, ${W}^{m}-{W}^{m-1}$ is called the momentum term, and ${\tau }^{m}$ is the momentum coefficient.

Similar to [12] [14] , in this paper, we choose ${\tau }^{m}$ as follows:

${\tau }^{m}=\left\{\begin{array}{l}\frac{\mu ‖\nabla E\left({W}^{m}\right)‖}{‖\Delta {W}^{m}‖},\text{if}\text{\hspace{0.17em}}\Delta {W}^{m}\ne 0\\ 0,\text{else}\end{array}$

where $\mu$ is a positive number, $\Delta {W}^{m}={W}^{m}-{W}^{m-1}$ , and $‖\text{ }\cdot \text{ }‖$ denotes the 2-norm throughout this paper.

Note that the component form of (1) is

${w}_{nijk}^{m+1}={w}_{nijk}^{m}-\eta \frac{\partial E\left({W}^{m}\right)}{\partial {w}_{nijk}^{m}}+{\tau }^{m}\left({w}_{nijk}^{m}-{w}_{nijk}^{m-1}\right)$ .
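One iteration of (1) with the adaptive coefficient ${\tau }^{m}$ can be sketched as follows. This is a minimal illustration; `grad_E` is assumed to be the gradient of the error function evaluated at the current weight vector.

```python
import numpy as np

def momentum_step(W, W_prev, grad_E, eta, mu):
    """One iteration of W^{m+1} = W^m - eta*grad E(W^m) + tau^m * (W^m - W^{m-1})."""
    dW = W - W_prev                       # momentum term Delta W^m
    nd = np.linalg.norm(dW)
    # adaptive coefficient tau^m = mu * ||grad E(W^m)|| / ||Delta W^m||
    tau = mu * np.linalg.norm(grad_E) / nd if nd > 0 else 0.0
    return W - eta * grad_E + tau * dW

# e.g. W=1, W_prev=0, grad=2, eta=0.1, mu=0.05:
# tau = 0.05*2/1 = 0.1, so the new weight is 1 - 0.2 + 0.1 = 0.9
```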

In fact,

${y}_{t}=PS{N}_{1}+PS{N}_{2}+\cdots +PS{N}_{K}$ ,

where $PS{N}_{n}={\prod }_{i=1}^{n}{U}_{ni}$ and ${U}_{ni}={\sum }_{j=1}^{{N}_{v}}{f}_{nij}\left({x}_{j}\right)$ . Recalling ${f}_{nij}\left({x}_{j}\right)={\sum }_{k=1}^{{N}_{q}+{N}_{e}-1}{w}_{nijk}{B}_{ijk}\left({x}_{j}\right)$ , we obtain

$\begin{array}{c}\frac{\partial E\left(W\right)}{\partial {w}_{nijk}}={\sum }_{t=1}^{T}\left({y}_{t}-{O}_{t}\right)\frac{\partial {y}_{t}}{\partial {w}_{nijk}}\\ ={\sum }_{t=1}^{T}\left({y}_{t}-{O}_{t}\right)\frac{\partial PS{N}_{n}}{\partial {U}_{ni}}\frac{\partial {U}_{ni}}{\partial {f}_{nij}}\frac{\partial {f}_{nij}}{\partial {w}_{nijk}}\\ ={\sum }_{t=1}^{T}\left({y}_{t}-{O}_{t}\right)\left\{\prod _{p\ne i}{U}_{np}\right\}{B}_{ijk}\left({x}_{j}\right)\end{array}$
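The closed-form partial derivative above can be checked numerically against a central finite difference of ${g}_{t}$ . The sketch below is self-contained with arbitrary small sizes, a single training sample, and randomly drawn binary values for $B_{ijk}$ ; none of these values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nv, Nk = 2, 3, 2                      # Nk stands in for N_q + N_e - 1
w = rng.normal(size=(K, K, Nv, Nk))      # weights w_{nijk}
B = rng.integers(0, 2, size=(K, Nv, Nk)).astype(float)  # B_{ijk} in {0, 1}
O_t = 1.5                                # ideal output for the single sample

def y_of(w):
    # y = sum_n prod_{i<=n} U_{ni},  with U_{ni} = sum_{j,k} w_{nijk} B_{ijk}
    return sum(np.prod([np.sum(w[n, i] * B[i]) for i in range(n + 1)])
               for n in range(K))

# Closed-form partial derivative of g_t = (1/2)(O_t - y)^2 w.r.t. one weight:
n, i, j, k = 1, 0, 2, 1
U = [np.sum(w[n, p] * B[p]) for p in range(n + 1)]
analytic = (y_of(w) - O_t) * np.prod([U[p] for p in range(n + 1) if p != i]) * B[i, j, k]

# Central finite difference of g_t in the same coordinate:
eps = 1e-6
wp, wm = w.copy(), w.copy()
wp[n, i, j, k] += eps
wm[n, i, j, k] -= eps
numeric = (0.5 * (O_t - y_of(wp))**2 - 0.5 * (O_t - y_of(wm))**2) / (2 * eps)
assert abs(analytic - numeric) < 1e-5
```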

3. Convergence Results

Similar to [12] [14] , we need the following assumptions.

(A1): The elements of the Hessian matrix ${\nabla }^{2}E\left({W}^{m}\right)$ are uniformly bounded for any ${W}^{m}$ .

(A2): The set $\Omega =\left\{W|\nabla E\left(W\right)=0\right\}$ contains finitely many elements.

From (A1), it is easy to see that there exists a constant $M>0$ such that

$‖{\nabla }^{2}E\left({W}^{m}\right)‖\le M,\text{\hspace{0.17em}}m=0,1,2,\cdots$ .

Lemma 3.1 ( [15] ) Let $f:{R}^{n}\to R$ be continuously differentiable, the number of the elements of the set $\Omega =\left\{x|\nabla f\left(x\right)=0\right\}$ be finite, and the sequence $\left\{{x}^{k}\right\}$ satisfy

$\underset{k\to \infty }{\mathrm{lim}}‖{x}^{k}-{x}^{k-1}‖=0$ ,

$\underset{k\to \infty }{\mathrm{lim}}‖\nabla f\left({x}^{k}\right)‖=0$ .

Then there exists an ${x}^{\ast }\in {R}^{n}$ such that

$\underset{k\to \infty }{\mathrm{lim}}{x}^{k}={x}^{\ast },\text{\hspace{0.17em}}\nabla f\left({x}^{\ast }\right)=0$ .

Theorem 3.2 Suppose Assumption (A1) is satisfied. Then there exists ${E}^{\ast }\ge 0$ such that for $\eta \in \left(0,\frac{2}{M}\right)$ and $\mu \in \left(0,\frac{-1-M\eta +\sqrt{1+4M\eta }}{M}\right)$ , the following weak convergence results hold for the iteration (1):

$E\left({W}^{m+1}\right)\le E\left({W}^{m}\right)$ ,

$\underset{m\to \infty }{\mathrm{lim}}E\left({W}^{m}\right)={E}^{\ast }$ ,

$\underset{m\to \infty }{\mathrm{lim}}‖\nabla E\left({W}^{m}\right)‖=0$ .

Furthermore, if Assumption (A2) also holds, then we have the strong convergence result; that is, there exists ${W}^{\ast }$ such that

$\underset{m\to \infty }{\mathrm{lim}}{W}^{m}={W}^{\ast },\text{\hspace{0.17em}}\nabla E\left({W}^{\ast }\right)=0$ .

Proof.

Using Taylor’s formula, we expand ${g}_{t}\left({W}^{m+1}\right)$ at ${W}^{m}$ :

$\begin{array}{c}{g}_{t}\left({W}^{m+1}\right)={g}_{t}\left({W}^{m}\right)+{\left(\nabla {g}_{t}\left({W}^{m}\right)\right)}^{\text{T}}\left({W}^{m+1}-{W}^{m}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\left({W}^{m+1}-{W}^{m}\right)}^{\text{T}}{\nabla }^{2}{g}_{t}\left({\xi }^{m}\right)\left({W}^{m+1}-{W}^{m}\right)\end{array}$ (2)

where ${\xi }^{m}$ lies between ${W}^{m}$ and ${W}^{m+1}$ .

From (2) we have

$\begin{array}{c}{\sum }_{t=1}^{T}{g}_{t}\left({W}^{m+1}\right)={\sum }_{t=1}^{T}{g}_{t}\left({W}^{m}\right)+{\sum }_{t=1}^{T}{\left(\nabla {g}_{t}\left({W}^{m}\right)\right)}^{\text{T}}\left({W}^{m+1}-{W}^{m}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\sum }_{t=1}^{T}\frac{1}{2}{\left({W}^{m+1}-{W}^{m}\right)}^{\text{T}}{\nabla }^{2}{g}_{t}\left({\xi }^{m}\right)\left({W}^{m+1}-{W}^{m}\right)\end{array}$

The above equation is equivalent to

$E\left({W}^{m+1}\right)=E\left({W}^{m}\right)+{\delta }_{1}+{\delta }_{2}$ (3)

where

${\delta }_{1}={\sum }_{t=1}^{T}{\left(\nabla {g}_{t}\left({W}^{m}\right)\right)}^{\text{T}}\left({W}^{m+1}-{W}^{m}\right)$ ,

${\delta }_{2}={\sum }_{t=1}^{T}\frac{1}{2}{\left({W}^{m+1}-{W}^{m}\right)}^{\text{T}}{\nabla }^{2}{g}_{t}\left({\xi }^{m}\right)\left({W}^{m+1}-{W}^{m}\right)$ .

It is easy to see that

$\begin{array}{c}{\delta }_{1}={\left(\nabla E\left({W}^{m}\right)\right)}^{\text{T}}\Delta {W}^{m+1}\\ ={\left(\nabla E\left({W}^{m}\right)\right)}^{\text{T}}\left(-\eta \nabla E\left({W}^{m}\right)+{\tau }^{m}\Delta {W}^{m}\right)\\ =-\eta {\left(\nabla E\left({W}^{m}\right)\right)}^{\text{T}}\nabla E\left({W}^{m}\right)+{\tau }^{m}{\left(\nabla E\left({W}^{m}\right)\right)}^{\text{T}}\Delta {W}^{m}\\ \le -\eta {‖\nabla E\left({W}^{m}\right)‖}^{2}+\mu ‖{\left(\nabla E\left({W}^{m}\right)\right)}^{\text{T}}‖\frac{‖\nabla E\left({W}^{m}\right)‖}{‖\Delta {W}^{m}‖}‖\Delta {W}^{m}‖\\ =\left(-\eta +\mu \right){‖\nabla E\left({W}^{m}\right)‖}^{2}\end{array}$

$\begin{array}{c}{\delta }_{2}=\frac{1}{2}{\left(\Delta {W}^{m+1}\right)}^{\text{T}}{\sum }_{t=1}^{T}{\nabla }^{2}{g}_{t}\left({\xi }^{m}\right)\Delta {W}^{m+1}\\ =\frac{1}{2}{\left(\Delta {W}^{m+1}\right)}^{\text{T}}{\nabla }^{2}E\left({\xi }^{m}\right)\Delta {W}^{m+1}\\ \le \frac{1}{2}|{\left(\Delta {W}^{m+1}\right)}^{\text{T}}{\nabla }^{2}E\left({\xi }^{m}\right)\Delta {W}^{m+1}|\\ \le \frac{1}{2}‖{\left(\Delta {W}^{m+1}\right)}^{\text{T}}‖‖{\nabla }^{2}E\left({\xi }^{m}\right)‖‖\Delta {W}^{m+1}‖\\ \le \frac{1}{2}M{‖\Delta {W}^{m+1}‖}^{2}\end{array}$

$\begin{array}{l}=\frac{1}{2}M{‖-\eta \nabla E\left({W}^{m}\right)+{\tau }^{m}\Delta {W}^{m}‖}^{2}\\ \le \frac{1}{2}M{\left(‖-\eta \nabla E\left({W}^{m}\right)‖+‖{\tau }^{m}\Delta {W}^{m}‖\right)}^{2}\\ =\frac{1}{2}M\left({‖-\eta \nabla E\left({W}^{m}\right)‖}^{2}+{‖{\tau }^{m}\Delta {W}^{m}‖}^{2}+2‖-\eta \nabla E\left({W}^{m}\right)‖‖{\tau }^{m}\Delta {W}^{m}‖\right)\\ \le \frac{1}{2}M\left({\eta }^{2}{‖\nabla E\left({W}^{m}\right)‖}^{2}+{\mu }^{2}{‖\nabla E\left({W}^{m}\right)‖}^{2}+2\eta \mu {‖\nabla E\left({W}^{m}\right)‖}^{2}\right)\\ =\frac{1}{2}M{\left(\eta +\mu \right)}^{2}{‖\nabla E\left({W}^{m}\right)‖}^{2}\end{array}$

Together with (3), we have

$\begin{array}{c}E\left({W}^{m+1}\right)=E\left({W}^{m}\right)+{\delta }_{1}+{\delta }_{2}\\ \le E\left({W}^{m}\right)-\left(\eta -\mu -\frac{1}{2}M{\left(\eta +\mu \right)}^{2}\right){‖\nabla E\left({W}^{m}\right)‖}^{2}\end{array}$

Set $\beta =\eta -\mu -\frac{1}{2}M{\left(\eta +\mu \right)}^{2}$ . Then

$E\left({W}^{m+1}\right)\le E\left({W}^{m}\right)-\beta {‖\nabla E\left({W}^{m}\right)‖}^{2}$ . (4)

It is easy to see that $\beta >0$ when

$\left\{\begin{array}{l}\eta \in \left(0,\frac{2}{M}\right)\\ \mu \in \left(0,\frac{-1-M\eta +\sqrt{1+4M\eta }}{M}\right)\end{array}$ . (5)
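Condition (5) can be verified numerically: for any $\eta \in \left(0,\frac{2}{M}\right)$ , the stated upper bound on $\mu$ is exactly where $\beta$ changes sign. The sketch below uses illustrative values $M=10$ and $\eta =0.1$ , not values from the paper.

```python
import math

def beta(eta, mu, M):
    # beta = eta - mu - (1/2) * M * (eta + mu)^2
    return eta - mu - 0.5 * M * (eta + mu) ** 2

def mu_bound(eta, M):
    # upper bound (-1 - M*eta + sqrt(1 + 4*M*eta)) / M from condition (5)
    return (-1 - M * eta + math.sqrt(1 + 4 * M * eta)) / M

M, eta = 10.0, 0.1                    # illustrative values with eta < 2/M
b = mu_bound(eta, M)
assert b > 0                          # the interval for mu is nonempty
assert beta(eta, 0.5 * b, M) > 0      # beta > 0 strictly inside the interval
assert abs(beta(eta, b, M)) < 1e-12   # beta vanishes exactly at the bound
```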

If η and μ satisfy (5), then the sequence $\left\{E\left({W}^{m}\right)\right\}$ is monotonically decreasing. Since $E\left({W}^{m}\right)$ is nonnegative, it must converge to some ${E}^{\ast }\ge 0$ , that is

$\underset{m\to \infty }{\mathrm{lim}}E\left({W}^{m}\right)={E}^{\ast }$ .

By (4), it is easy to see that for any positive integer N,

$\beta {\sum }_{m=0}^{N-1}{‖\nabla E\left({W}^{m}\right)‖}^{2}\le E\left({W}^{0}\right)-E\left({W}^{N}\right)$ .

Letting $N\to \infty$ , we have ${\sum }_{m=0}^{\infty }{‖\nabla E\left({W}^{m}\right)‖}^{2}<\infty$ , so $\underset{m\to \infty }{\mathrm{lim}}‖\nabla E\left({W}^{m}\right)‖=0$ , which finishes the proof of the weak convergence.

By (1), we have

$‖{W}^{m+1}-{W}^{m}‖\le \eta ‖\nabla E\left({W}^{m}\right)‖+{\tau }^{m}‖\Delta {W}^{m}‖\le \left(\eta +\mu \right)‖\nabla E\left({W}^{m}\right)‖$ ,

which indicates

$\underset{m\to \infty }{\mathrm{lim}}‖{W}^{m+1}-{W}^{m}‖=0$ .

From Lemma 3.1, we obtain

$\underset{m\to \infty }{\mathrm{lim}}{W}^{m}={W}^{\ast },\nabla E\left({W}^{\ast }\right)=0$ ,

which finishes the proof of the strong convergence.

4. Numerical Results

In this section, we present an example to illustrate the convergence behavior of the iteration (1), comparing the number of iteration steps (IT), the elapsed CPU time in seconds (CPU), and the relative residual error (RES). The experiment is terminated when the current iteration satisfies $\text{RES}\le {10}^{-8}$ or the maximum number of iteration steps k = 1000 is exceeded. The computations are implemented in MATLAB on a PC with an Intel (R) Core (R) CPU 1000 M @ 1.80 GHz and 2.00 GB memory.

Example 4.1 ( [16] ) Four-dimensional parity problem (Table 1)

Table 1. The data samples.

Table 2. Optimal parameters, CPU times, iteration numbers, and residuals.

In this simulation experiment, the initial weight vector ${W}^{0}$ is the 24-dimensional zero vector and ${W}^{1}$ is the 24-dimensional vector whose elements are all 1. The learning rate is $\eta =0.00001$ and the momentum factor is $\mu =0.00005$ . The number of training samples is $T=16$ . In Table 2, we compare the convergence behavior of the gradient method with momentum and the gradient method without momentum. It can be seen that the network training improves significantly after adding the momentum term.
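The qualitative effect reported in Table 2 can be reproduced on a toy problem. The following sketch is not the SPSNN parity experiment (the $B_{ijk}$ functions are not specified here); it applies the adaptive-momentum iteration (1) to a simple stand-in quadratic error, with and without momentum, and counts iterations until the gradient norm falls below the tolerance.

```python
import numpy as np

def train(eta, mu, iters=20000, tol=1e-8):
    """Run iteration (1) on the stand-in error E(W) = 0.5 * ||A W - b||^2."""
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    W_prev, W = np.zeros(2), np.ones(2)   # arbitrary initial vectors W^0, W^1
    for m in range(iters):
        g = A.T @ (A @ W - b)             # gradient of E at W
        if np.linalg.norm(g) <= tol:
            return m, W
        dW = W - W_prev                   # Delta W^m
        nd = np.linalg.norm(dW)
        tau = mu * np.linalg.norm(g) / nd if nd > 0 else 0.0
        W_prev, W = W, W - eta * g + tau * dW
    return iters, W

# eta < 2/M and mu inside the bound of (5) for this problem (M ~= 13.1):
it_mom, W_mom = train(eta=0.05, mu=0.015)
it_plain, W_plain = train(eta=0.05, mu=0.0)
```

Both runs converge to the minimizer (0.2, 0.4) of the quadratic; comparing `it_mom` and `it_plain` mirrors the IT comparison of Table 2.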

5. Conclusion

In this paper, we study the gradient method with momentum for training sigma-pi-sigma neural networks. We choose the momentum coefficient in an adaptive manner, and prove the corresponding weak and strong convergence results. Assumptions (A1) and (A2) seem somewhat restrictive, so weakening one or both of them will be our future work.

Support Information

This work is supported by the National Natural Science Foundation of China under Grant No. 61572018 and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY15A010016.

Cite this paper

Zhang, X. and Zhang, N.M. (2018) A Study on the Convergence of Gradient Method with Momentum for Sigma-Pi-Sigma Neural Networks. Journal of Applied Mathematics and Physics, 6, 880-887. https://doi.org/10.4236/jamp.2018.64075

References

1. Shin, Y. and Ghosh, J. (1991) The Pi-Sigma Network: An Efficient Higher-Order Neural Network for Pattern Classification and Function Approximation. International Joint Conference on Neural Networks, 1, 13-18. https://doi.org/10.1109/IJCNN.1991.155142

2. Li, C.K. (2003) A Sigma-Pi-Sigma Neural Network (SPSNN). Neural Processing Letters, 17, 1-19. https://doi.org/10.1023/A:1022967523886

3. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning Representations by Back-Propagating Errors. Nature, 323, 533-536. https://doi.org/10.1038/323533a0

4. Phansalkar, V.V. and Sastry, P.S. (1994) Analysis of the Back-Propagation Algorithm with Momentum. IEEE Transactions on Neural Networks, 5, 505-506. https://doi.org/10.1109/72.286925

5. Torii, M. and Hagan, M.T. (2002) Stability of Steepest Descent with Momentum for Quadratic Functions. IEEE Transactions on Neural Networks, 13, 752-756. https://doi.org/10.1109/TNN.2002.1000143

6. Bhaya, A. and Kaszkurewicz, E. (2004) Steepest Descent with Momentum for Quadratic Functions Is a Version of the Conjugate Gradient Method. Neural Networks, 17, 65-71. https://doi.org/10.1016/S0893-6080(03)00170-9

7. Qian, N. (1999) On the Momentum Term in Gradient Descent Learning Algorithms. Neural Networks, 12, 145-151. https://doi.org/10.1016/S0893-6080(98)00116-6

8. Shao, H. and Zheng, G. (2011) Convergence Analysis of a Back-Propagation Algorithm with Adaptive Momentum. Neurocomputing, 74, 749-752. https://doi.org/10.1016/j.neucom.2010.10.008

9. Shao, H. and Zheng, G. (2011) Boundedness and Convergence of Online Gradient Method with Penalty and Momentum. Neurocomputing, 74, 765-770. https://doi.org/10.1016/j.neucom.2010.10.005

10. Shao, H., Xu, D., Zheng, G. and Liu, L. (2012) Convergence of an Online Gradient Method with Inner-Product and Adaptive Momentum. Neurocomputing, 747, 243-252. https://doi.org/10.1016/j.neucom.2011.09.003

11. Xu, D., Shao, H. and Zhang, H. (2012) A New Adaptive Momentum Algorithm for Split-Complex Recurrent Neural Networks. Neurocomputing, 93, 133-136. https://doi.org/10.1016/j.neucom.2012.03.013

12. Zhang, N., Wu, W. and Zheng, G. (2006) Convergence of Gradient Method with Momentum for Two-Layer Feedforward Neural Networks. IEEE Transactions on Neural Networks, 17, 522-525. https://doi.org/10.1109/TNN.2005.863460

13. Wu, W., Zhang, N., Li, Z., Li, L. and Liu, Y. (2008) Convergence of Gradient Method with Momentum for Back-Propagation Neural Networks. Journal of Computational Mathematics, 4, 613-623.

14. Gori, M. and Maggini, M. (1996) Optimal Convergence of On-Line Backpropagation. IEEE Transactions on Neural Networks, 7, 251-254. https://doi.org/10.1109/72.478415

15. Yuan, Y. and Sun, W. (1997) Optimization Theory and Methods. Science Press, Beijing.

16. Yan, X. and Chao, Z. (2008) Convergence of Asynchronous Batch Gradient Method with Momentum for Pi-Sigma Networks. Mathematica Applicata, 21, 207-212.