Journal of Applied Mathematics and Physics
Vol.06 No.04(2018), Article ID:84026,8 pages
10.4236/jamp.2018.64075

A Study on the Convergence of Gradient Method with Momentum for Sigma-Pi-Sigma Neural Networks

Xun Zhang, Naimin Zhang*

School of Mathematics and Information Science, Wenzhou University, Wenzhou, China

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 20, 2017; Accepted: April 23, 2018; Published: April 26, 2018

ABSTRACT

In this paper, a gradient method with momentum for sigma-pi-sigma neural networks (SPSNN) is considered in order to accelerate the convergence of the learning procedure for the network weights. The momentum coefficient is chosen in an adaptive manner, and the corresponding weak convergence and strong convergence results are proved.

Keywords:

Sigma-Pi-Sigma Neural Network, Momentum Term, Gradient Method, Convergence

1. Introduction

The pi-sigma network (PSN) is a kind of high-order feedforward neural network characterized by the fast convergence rate of a single-layer network and a unique high-order nonlinear mapping capability [1]. In order to further improve the application capacity of the network, Li introduced a more complex network structure based on the PSN, called the sigma-pi-sigma neural network (SPSNN) [2]. An SPSNN can learn to implement static mappings in a manner similar to that of multilayer neural networks and radial basis function networks.

The gradient method is often used for training neural networks, and its main disadvantages are slow convergence and the local minimum problem. To speed up and stabilize the training procedure, a momentum term is often added to the increment formula for the weights, so that the present weight updating increment is a combination of the present gradient of the error function and the previous weight updating increment [3]. Many researchers have developed the theory of momentum and extended its applications. For the back-propagation algorithm, Phansalkar and Sastry give a stability analysis with an added momentum term [4]. Torii and Hagan [5] and Bhaya and Kaszkurewicz [6] discuss the convergence of the gradient method with momentum under the restriction that the error function is quadratic. Qian analyzes the momentum term in gradient descent learning algorithms [7], and Shao et al. study adaptive momentum for both the batch gradient method and the online gradient method and compare the efficiency of momentum with that of penalty terms [8] [9] [10] [11]. The key to the convergence analysis of momentum algorithms is the monotonicity of the error function during the learning procedure, which is generally proved under a uniform boundedness assumption on the activation function and its derivatives. In [8] [10] [12] [13], convergence results for the gradient method with momentum are given for both two-layer and multi-layer feedforward neural networks. In this paper, we consider the gradient method with momentum for sigma-pi-sigma neural networks and discuss its convergence.

The rest of the paper is organized as follows. In Section 2 we introduce the neural network model of SPSNN and the gradient method with momentum. In Section 3 we give the convergence analysis of the gradient method with momentum for training SPSNN. Numerical experiments are given in Section 4. Finally, in Section 5, we end the paper with some conclusions.

2. The Neural Network Model of SPSNN and the Gradient Method with Momentum

In this section we introduce the sigma-pi-sigma neural network, which is composed of multiple layers. The output of the SPSNN has the form

$$\sum_{n=1}^{K}\prod_{i=1}^{n}\sum_{j=1}^{N_v} f_{nij}(x_j),$$

where $x_j$ is an input, $N_v$ is the number of inputs, $f_{nij}(\cdot)$ is a function to be generated through the network training, and $K$ is the number of pi-sigma networks (PSNs), which are the basic building blocks of the SPSNN. The function $f_{nij}(x_j)$ has the expression

$$f_{nij}(x_j) = \sum_{k=1}^{N_q+N_e-1} w_{nijk}\, B_{ijk}(x_j),$$

where each function $B_{ijk}(\cdot)$ takes the value 0 or 1, the $w_{nijk}$ are weight values stored in memory, and $N_q$ and $N_e$ are information numbers stored in $x_j$. For a $K$-th order SPSNN, the total number of weights is

$$\frac{1}{2} \times K \times (K+1) \times N_v \times (N_q + N_e - 1).$$
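As a quick sanity check of this count, the following minimal Python sketch (the function and argument names are ours, purely for illustration) sums the weights per pi-sigma block and compares the result with the closed-form expression above:

```python
def total_weights(K, Nv, Nq, Ne):
    """Count the weights w_nijk of a K-th order SPSNN.

    The n-th pi-sigma block contributes n sigma units, each with Nv inputs
    and (Nq + Ne - 1) weights per input, so summing over n = 1, ..., K gives
    (1/2) * K * (K + 1) * Nv * (Nq + Ne - 1).
    """
    L = Nq + Ne - 1
    count = sum(n * Nv * L for n in range(1, K + 1))
    assert count == K * (K + 1) * Nv * L // 2  # matches the closed form
    return count

# Example: K = 2, Nv = 4, Nq = 2, Ne = 1 gives 24 weights.
print(total_weights(2, 4, 2, 1))
```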

For a set of training examples $\{(S_t, O_t)\} \subset \mathbb{R}^{N_v} \times \mathbb{R}$, where $O_t$ is the ideal output and $t = 1, 2, \dots, T$, we have the following actual output:

$$y_t = \sum_{n=1}^{K}\prod_{i=1}^{n}\sum_{j=1}^{N_v}\sum_{k=1}^{N_q+N_e-1} w_{nijk}\, B_{ijk}\big(x_j(S_t)\big),$$

where $x_j(S_t)$ denotes the $j$-th element of the given input vector $S_t$.
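To make the model concrete, here is a minimal Python sketch of the SPSNN output computation, assuming the 0/1 basis functions $B_{ijk}$ are supplied as a callable and the weights are stored in a dense array (these representations, and all names below, are our own illustrative choices, not part of the original formulation):

```python
import numpy as np

def spsnn_output(x, w, B, K, Nv, L):
    """SPSNN output y = sum_n prod_i sum_j sum_k w[n,i,j,k] * B_ijk(x_j).

    x : input vector of length Nv
    w : weights of shape (K, K, Nv, L), with L = Nq + Ne - 1;
        only the entries with i <= n are used (PSN_n has n sigma units)
    B : callable B(i, j, k, xj) returning 0 or 1 (the stored basis functions)
    """
    y = 0.0
    for n in range(1, K + 1):          # sum over the K pi-sigma blocks
        psn = 1.0
        for i in range(1, n + 1):      # product over the n sigma units of PSN_n
            # U_ni = sum_j f_nij(x_j), with f_nij(x_j) = sum_k w_nijk * B_ijk(x_j)
            psn *= sum(w[n-1, i-1, j-1, k-1] * B(i, j, k, x[j-1])
                       for j in range(1, Nv + 1) for k in range(1, L + 1))
        y += psn
    return y
```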

In order to train the SPSNN, we choose the quadratic error function $E(W)$:

$$E(W) = \frac{1}{2}\sum_{t=1}^{T}\big(O_t - y_t\big)^2 \equiv \sum_{t=1}^{T} g_t(W), \qquad g_t(W) = \frac{1}{2}\big(O_t - y_t\big)^2,$$

where $W = \big(w_{1111}, w_{1112}, \dots, w_{K,K,N_v,N_q+N_e-1}\big)^T$. For convenience we denote $w_\alpha = w_{K,K,N_v,N_q+N_e-1}$.

The gradient method with momentum is used to train the weights. The gradients of $E(W)$ and $g_t(W)$ are denoted by

$$\nabla E(W) = \left(\frac{\partial E(W)}{\partial w_{1111}}, \frac{\partial E(W)}{\partial w_{1112}}, \dots, \frac{\partial E(W)}{\partial w_{nijk}}, \dots, \frac{\partial E(W)}{\partial w_{\alpha}}\right)^T,$$

$$\nabla g_t(W) = \left(\frac{\partial g_t(W)}{\partial w_{1111}}, \frac{\partial g_t(W)}{\partial w_{1112}}, \dots, \frac{\partial g_t(W)}{\partial w_{nijk}}, \dots, \frac{\partial g_t(W)}{\partial w_{\alpha}}\right)^T,$$

and the Hessian matrices of $g_t(W^m)$ and $E(W^m)$ at $W^m$ are denoted by

$$\nabla^2 g_t(W^m) = \begin{pmatrix} \dfrac{\partial^2 g_t(W^m)}{\partial w^m_{1111}\,\partial w^m_{1111}} & \cdots & \dfrac{\partial^2 g_t(W^m)}{\partial w^m_{1111}\,\partial w^m_{\alpha}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 g_t(W^m)}{\partial w^m_{\alpha}\,\partial w^m_{1111}} & \cdots & \dfrac{\partial^2 g_t(W^m)}{\partial w^m_{\alpha}\,\partial w^m_{\alpha}} \end{pmatrix},$$

$$\nabla^2 E(W^m) = \begin{pmatrix} \dfrac{\partial^2 E(W^m)}{\partial w^m_{1111}\,\partial w^m_{1111}} & \cdots & \dfrac{\partial^2 E(W^m)}{\partial w^m_{1111}\,\partial w^m_{\alpha}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 E(W^m)}{\partial w^m_{\alpha}\,\partial w^m_{1111}} & \cdots & \dfrac{\partial^2 E(W^m)}{\partial w^m_{\alpha}\,\partial w^m_{\alpha}} \end{pmatrix}.$$

Given arbitrary initial weight vectors $W^0$ and $W^1$, the gradient method with momentum updates the weight vector $W$ by

$$W^{m+1} = W^m - \eta \nabla E(W^m) + \tau_m \big(W^m - W^{m-1}\big), \quad m = 1, 2, \dots, \qquad (1)$$

where $\eta > 0$ is the learning rate, $W^m - W^{m-1}$ is called the momentum term, and $\tau_m$ is the momentum coefficient.

Similar to [12] [14], in this paper we choose $\tau_m$ as follows:

$$\tau_m = \begin{cases} \mu \dfrac{\|\nabla E(W^m)\|}{\|\Delta W^m\|}, & \text{if } \Delta W^m \neq 0, \\[2mm] 0, & \text{otherwise,} \end{cases}$$

where $\mu$ is a positive number, $\Delta W^m = W^m - W^{m-1}$, and $\|\cdot\|$ denotes the 2-norm throughout this paper.

Note that the component form of (1) is

$$w^{m+1}_{nijk} = w^m_{nijk} - \eta \frac{\partial E(W^m)}{\partial w^m_{nijk}} + \tau_m \big(w^m_{nijk} - w^{m-1}_{nijk}\big).$$
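A direct transcription of the update (1) with the adaptive coefficient $\tau_m$ reads as follows; this is a sketch only, and `grad_E` stands for $\nabla E(W^m)$, which is assumed to be computed elsewhere (for example, from the partial derivatives derived below):

```python
import numpy as np

def momentum_step(W_curr, W_prev, grad_E, eta, mu):
    """One step of iteration (1):
    W^{m+1} = W^m - eta * grad E(W^m) + tau_m * (W^m - W^{m-1}),
    with tau_m = mu * ||grad E(W^m)|| / ||Delta W^m|| (or 0 if Delta W^m = 0).
    """
    dW = W_curr - W_prev
    norm_dW = np.linalg.norm(dW)
    tau = mu * np.linalg.norm(grad_E) / norm_dW if norm_dW > 0 else 0.0
    return W_curr - eta * grad_E + tau * dW
```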

In fact,

$$y_t = PSN_1 + PSN_2 + \cdots + PSN_K,$$

where $PSN_n = \prod_{i=1}^{n} U_{ni}$ and $U_{ni} = \sum_{j=1}^{N_v} f_{nij}(x_j)$. Recalling $f_{nij}(x_j) = \sum_{k=1}^{N_q+N_e-1} w_{nijk} B_{ijk}(x_j)$, we then have

$$\frac{\partial E(W)}{\partial w_{nijk}} = \sum_{t=1}^{T}\big(y_t - O_t\big)\frac{\partial y_t}{\partial w_{nijk}} = \sum_{t=1}^{T}\big(y_t - O_t\big)\frac{\partial PSN_n}{\partial U_{ni}}\,\frac{\partial U_{ni}}{\partial f_{nij}}\,\frac{\partial f_{nij}}{\partial w_{nijk}} = \sum_{t=1}^{T}\big(y_t - O_t\big)\Big\{\prod_{p \neq i} U_{np}\Big\}\, B_{ijk}(x_j).$$
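For illustration, these partial derivatives can be assembled into the full gradient $\nabla E(W)$ as in the following Python sketch; as before, the data layout, function names, and the callable `B` are our own assumptions, not part of the original formulation:

```python
import numpy as np

def spsnn_gradient(samples, targets, w, B, K, Nv, L):
    """Gradient of E(W) = 1/2 * sum_t (O_t - y_t)^2 with respect to each w_nijk,
    using dE/dw_nijk = sum_t (y_t - O_t) * (prod_{p != i} U_np) * B_ijk(x_j)."""
    grad = np.zeros_like(w)
    for x, O in zip(samples, targets):
        # forward pass: U[n-1, i-1] = sum_j f_nij(x_j),  y = sum_n prod_{i<=n} U_ni
        U = np.ones((K, K))
        for n in range(1, K + 1):
            for i in range(1, n + 1):
                U[n-1, i-1] = sum(w[n-1, i-1, j-1, k-1] * B(i, j, k, x[j-1])
                                  for j in range(1, Nv + 1)
                                  for k in range(1, L + 1))
        y = sum(np.prod(U[n-1, :n]) for n in range(1, K + 1))
        # backward pass: accumulate (y - O) * prod_{p != i} U_np * B_ijk(x_j)
        for n in range(1, K + 1):
            for i in range(1, n + 1):
                others = np.prod([U[n-1, p-1] for p in range(1, n + 1) if p != i])
                for j in range(1, Nv + 1):
                    for k in range(1, L + 1):
                        grad[n-1, i-1, j-1, k-1] += (y - O) * others * B(i, j, k, x[j-1])
    return grad
```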

3. Convergence Results

Similar to [12] [14] , we need the following assumptions.

(A1): The elements of the Hessian matrix $\nabla^2 E(W^m)$ are uniformly bounded for any $W^m$.

(A2): The set $\Omega = \{W \mid \nabla E(W) = 0\}$ contains only finitely many elements.

From (A1), it is easy to see that there exists a constant $M > 0$ such that

$$\|\nabla^2 E(W^m)\| \leq M, \quad m = 0, 1, 2, \dots.$$

Lemma 3.1 ([15]) Let $f: \mathbb{R}^n \to \mathbb{R}$ be continuously differentiable, let the set $\Omega = \{x \mid \nabla f(x) = 0\}$ contain only finitely many elements, and let the sequence $\{x^k\}$ satisfy

$$\lim_{k \to \infty} \|x^k - x^{k-1}\| = 0, \qquad \lim_{k \to \infty} \nabla f(x^k) = 0.$$

Then there exists an $x^* \in \mathbb{R}^n$ such that

$$\lim_{k \to \infty} x^k = x^*, \qquad \nabla f(x^*) = 0.$$

Theorem 3.2 Suppose Assumption (A1) is satisfied. Then there exists $E^* \geq 0$ such that, for $\eta \in \left(0, \frac{2}{M}\right)$ and $\mu \in \left(0, \frac{-1 - M\eta + \sqrt{1 + 4M\eta}}{M}\right)$, the following weak convergence results hold for the iteration (1):

$$E(W^{m+1}) \leq E(W^m),$$

$$\lim_{m \to \infty} E(W^m) = E^*,$$

$$\lim_{m \to \infty} \|\nabla E(W^m)\| = 0.$$

Furthermore, if Assumption (A2) also holds, then we have the strong convergence result, that is, there exists $W^*$ such that

$$\lim_{m \to \infty} W^m = W^*, \qquad \nabla E(W^*) = 0.$$

Proof.

Using Taylor's formula, we expand $g_t(W^{m+1})$ at $W^m$:

$$g_t(W^{m+1}) = g_t(W^m) + \big(\nabla g_t(W^m)\big)^T\big(W^{m+1} - W^m\big) + \frac{1}{2}\big(W^{m+1} - W^m\big)^T \nabla^2 g_t(\xi_m)\big(W^{m+1} - W^m\big), \qquad (2)$$

where $\xi_m$ lies between $W^m$ and $W^{m+1}$.

From (2) we have

$$\sum_{t=1}^{T} g_t(W^{m+1}) = \sum_{t=1}^{T} g_t(W^m) + \sum_{t=1}^{T}\big(\nabla g_t(W^m)\big)^T\big(W^{m+1} - W^m\big) + \frac{1}{2}\sum_{t=1}^{T}\big(W^{m+1} - W^m\big)^T \nabla^2 g_t(\xi_m)\big(W^{m+1} - W^m\big).$$

The above equation is equivalent to

$$E(W^{m+1}) = E(W^m) + \delta_1 + \delta_2, \qquad (3)$$

where

$$\delta_1 = \sum_{t=1}^{T}\big(\nabla g_t(W^m)\big)^T\big(W^{m+1} - W^m\big),$$

$$\delta_2 = \frac{1}{2}\sum_{t=1}^{T}\big(W^{m+1} - W^m\big)^T \nabla^2 g_t(\xi_m)\big(W^{m+1} - W^m\big).$$

It is easy to see that

$$\begin{aligned} \delta_1 &= \big(\nabla E(W^m)\big)^T \Delta W^{m+1} = \big(\nabla E(W^m)\big)^T\big(-\eta \nabla E(W^m) + \tau_m \Delta W^m\big) \\ &= -\eta \|\nabla E(W^m)\|^2 + \tau_m \big(\nabla E(W^m)\big)^T \Delta W^m \\ &\leq -\eta \|\nabla E(W^m)\|^2 + \mu \frac{\|\nabla E(W^m)\|}{\|\Delta W^m\|}\,\|\nabla E(W^m)\|\,\|\Delta W^m\| = (-\eta + \mu)\|\nabla E(W^m)\|^2, \end{aligned}$$

$$\begin{aligned} \delta_2 &= \frac{1}{2}\big(\Delta W^{m+1}\big)^T \sum_{t=1}^{T}\nabla^2 g_t(\xi_m)\, \Delta W^{m+1} = \frac{1}{2}\big(\Delta W^{m+1}\big)^T \nabla^2 E(\xi_m)\, \Delta W^{m+1} \\ &\leq \frac{1}{2}\Big|\big(\Delta W^{m+1}\big)^T \nabla^2 E(\xi_m)\, \Delta W^{m+1}\Big| \leq \frac{1}{2}\|\nabla^2 E(\xi_m)\|\,\|\Delta W^{m+1}\|^2 \leq \frac{1}{2}M\|\Delta W^{m+1}\|^2 \\ &= \frac{1}{2}M\big\|-\eta \nabla E(W^m) + \tau_m \Delta W^m\big\|^2 \leq \frac{1}{2}M\big(\eta\|\nabla E(W^m)\| + \tau_m\|\Delta W^m\|\big)^2 \\ &= \frac{1}{2}M\big(\eta^2\|\nabla E(W^m)\|^2 + \tau_m^2\|\Delta W^m\|^2 + 2\eta\tau_m\|\nabla E(W^m)\|\,\|\Delta W^m\|\big) \\ &\leq \frac{1}{2}M\big(\eta^2\|\nabla E(W^m)\|^2 + \mu^2\|\nabla E(W^m)\|^2 + 2\eta\mu\|\nabla E(W^m)\|^2\big) = \frac{1}{2}M(\eta + \mu)^2\|\nabla E(W^m)\|^2. \end{aligned}$$

Together with (3), we have

$$E(W^{m+1}) = E(W^m) + \delta_1 + \delta_2 \leq E(W^m) - \left(\eta - \mu - \frac{1}{2}M(\eta + \mu)^2\right)\|\nabla E(W^m)\|^2.$$

Set $\beta = \eta - \mu - \frac{1}{2}M(\eta + \mu)^2$. Then

$$E(W^{m+1}) \leq E(W^m) - \beta\|\nabla E(W^m)\|^2. \qquad (4)$$

It is easy to see that $\beta > 0$ when

$$\begin{cases} \eta \in \left(0, \dfrac{2}{M}\right), \\[2mm] \mu \in \left(0, \dfrac{-1 - M\eta + \sqrt{1 + 4M\eta}}{M}\right). \end{cases} \qquad (5)$$
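For the reader's convenience, here is a short supplementary calculation (not part of the original argument) showing why $\beta > 0$ is equivalent to (5). Writing $s = \eta + \mu$, so that $\beta = 2\eta - s - \frac{M}{2}s^2$, and recalling $\eta, \mu > 0$,

$$\beta > 0 \;\Longleftrightarrow\; \frac{M}{2}s^2 + s - 2\eta < 0 \;\Longleftrightarrow\; 0 < s < \frac{-1 + \sqrt{1 + 4M\eta}}{M} \;\Longleftrightarrow\; 0 < \mu < \frac{-1 - M\eta + \sqrt{1 + 4M\eta}}{M},$$

and the last interval is nonempty exactly when $\sqrt{1 + 4M\eta} > 1 + M\eta$, i.e. when $\eta \in \left(0, \frac{2}{M}\right)$, which gives the first line of (5).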

If $\eta$ and $\mu$ satisfy (5), then the sequence $\{E(W^m)\}$ is monotonically decreasing. Since $E(W^m)$ is nonnegative, it must converge to some $E^* \geq 0$, that is,

$$\lim_{m \to \infty} E(W^m) = E^*.$$

By (4), it is easy to see that, for any positive integer $N$,

$$\beta \sum_{m=0}^{N-1}\|\nabla E(W^m)\|^2 \leq E(W^0) - E(W^N) \leq E(W^0).$$

Letting $N \to \infty$, we have $\sum_{m=0}^{\infty}\|\nabla E(W^m)\|^2 < \infty$, so $\lim_{m \to \infty}\|\nabla E(W^m)\| = 0$, which finishes the proof for the weak convergence.

By (1), we have

$$\|W^{m+1} - W^m\| \leq \eta\|\nabla E(W^m)\| + \tau_m\|\Delta W^m\| \leq (\eta + \mu)\|\nabla E(W^m)\|,$$

which indicates

$$\lim_{m \to \infty}\|W^{m+1} - W^m\| = 0.$$

From Lemma 3.1, it holds that

$$\lim_{m \to \infty} W^m = W^*, \qquad \nabla E(W^*) = 0,$$

which finishes the proof for the strong convergence.

4. Numerical Results

In this section, we present an example to illustrate the convergence behavior of the iteration (1) by comparing the number of iteration steps (IT), the elapsed CPU time in seconds (CPU), and the relative residual error (RES). The iteration is terminated when the current iterate satisfies RES $\leq 10^{-8}$ or when the maximum number of iteration steps $k = 1000$ is exceeded. The computations are implemented in MATLAB on a PC with an Intel(R) Core CPU 1000M @ 1.80 GHz and 2.00 GB of memory.

Example 4.1 ([16]) The four-dimensional parity problem (see Table 1).

Table 1. The data samples.

Table 2. Optimal parameters, CPU times, iteration numbers, and residuals.

In this simulation experiment, the initial weight vector $W^0$ is the 24-dimensional zero vector and $W^1$ is the 24-dimensional vector whose elements are all 1. The learning rate is $\eta = 0.00001$ and the momentum factor is $\mu = 0.00005$. The number of training samples is $T = 16$. In Table 2 above, we compare the convergence behavior of the gradient method with momentum and the gradient method without momentum. It can be seen that the network training improves significantly after the momentum term is added.
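For readers who wish to reproduce a setup of this kind, the following Python sketch generates the 16 four-dimensional parity samples and runs a generic training loop of the form (1). It is only an outline under our own assumptions: `grad_fn` and `E_fn` stand for the SPSNN gradient and error function (e.g., as sketched in Section 2), and the gradient norm is used as a stand-in stopping residual, since the exact definition of RES is not reproduced here.

```python
import itertools
import numpy as np

def train_with_momentum(grad_fn, E_fn, W0, W1, eta=1e-5, mu=5e-5,
                        tol=1e-8, max_iter=1000):
    """Run iteration (1) with the adaptive momentum coefficient tau_m."""
    W_prev, W = W0.astype(float).copy(), W1.astype(float).copy()
    for it in range(1, max_iter + 1):
        g = grad_fn(W)
        if np.linalg.norm(g) <= tol:    # stand-in stopping criterion
            break
        dW = W - W_prev
        nrm = np.linalg.norm(dW)
        tau = mu * np.linalg.norm(g) / nrm if nrm > 0 else 0.0
        W_prev, W = W, W - eta * g + tau * dW
    return W, it, E_fn(W)

# Four-dimensional parity data: all 16 binary inputs, target = XOR of the bits.
samples = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)
targets = samples.sum(axis=1) % 2
```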

5. Conclusion

In this paper, we study the gradient method with momentum for training sigma-pi-sigma neural networks. We choose the momentum coefficient in an adaptive manner, and the corresponding weak convergence and strong convergence results are proved. Assumptions (A1) and (A2) in this paper seem somewhat restrictive, so weakening one or both of them will be our future work.

Support Information

This work is supported by the National Natural Science Foundation of China under Grant No. 61572018 and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY15A010016.

Cite this paper

Zhang, X. and Zhang, N.M. (2018) A Study on the Convergence of Gradient Method with Momentum for Sigma-Pi-Sigma Neural Networks. Journal of Applied Mathematics and Physics, 6, 880-887. https://doi.org/10.4236/jamp.2018.64075

References

1. Shin, Y. and Ghosh, J. (1991) The Pi-Sigma Network: An Efficient Higher-Order Neural Network for Pattern Classification and Function Approximation. International Joint Conference on Neural Networks, 1, 13-18. https://doi.org/10.1109/IJCNN.1991.155142

2. Li, C.K. (2003) A Sigma-Pi-Sigma Neural Network (SPSNN). Neural Processing Letters, 17, 1-19. https://doi.org/10.1023/A:1022967523886

3. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) Learning Representations by Back-Propagating Errors. Nature, 323, 533-536. https://doi.org/10.1038/323533a0

4. Phansalkar, V.V. and Sastry, P.S. (1994) Analysis of the Back-Propagation Algorithm with Momentum. IEEE Transactions on Neural Networks, 5, 505-506. https://doi.org/10.1109/72.286925

5. Torii, M. and Hagan, M.T. (2002) Stability of Steepest Descent with Momentum for Quadratic Functions. IEEE Transactions on Neural Networks, 13, 752-756. https://doi.org/10.1109/TNN.2002.1000143

6. Bhaya, A. and Kaszkurewicz, E. (2004) Steepest Descent with Momentum for Quadratic Functions Is a Version of the Conjugate Gradient Method. Neural Networks, 17, 65-71. https://doi.org/10.1016/S0893-6080(03)00170-9

7. Qian, N. (1999) On the Momentum Term in Gradient Descent Learning Algorithms. Neural Networks, 12, 145-151. https://doi.org/10.1016/S0893-6080(98)00116-6

8. Shao, H. and Zheng, G. (2011) Convergence Analysis of a Back-Propagation Algorithm with Adaptive Momentum. Neurocomputing, 74, 749-752. https://doi.org/10.1016/j.neucom.2010.10.008

9. Shao, H. and Zheng, G. (2011) Boundedness and Convergence of Online Gradient Method with Penalty and Momentum. Neurocomputing, 74, 765-770. https://doi.org/10.1016/j.neucom.2010.10.005

10. Shao, H., Xu, D., Zheng, G. and Liu, L. (2012) Convergence of an Online Gradient Method with Inner-Product and Adaptive Momentum. Neurocomputing, 747, 243-252. https://doi.org/10.1016/j.neucom.2011.09.003

11. Xu, D., Shao, H. and Zhang, H. (2012) A New Adaptive Momentum Algorithm for Split-Complex Recurrent Neural Networks. Neurocomputing, 93, 133-136. https://doi.org/10.1016/j.neucom.2012.03.013

12. Zhang, N., Wu, W. and Zheng, G. (2006) Convergence of Gradient Method with Momentum for Two-Layer Feedforward Neural Networks. IEEE Transactions on Neural Networks, 17, 522-525. https://doi.org/10.1109/TNN.2005.863460

13. Wu, W., Zhang, N., Li, Z., Li, L. and Liu, Y. (2008) Convergence of Gradient Method with Momentum for Back-Propagation Neural Networks. Journal of Computational Mathematics, 4, 613-623.

14. Gori, M. and Maggini, M. (1996) Optimal Convergence of On-Line Backpropagation. IEEE Transactions on Neural Networks, 7, 251-254. https://doi.org/10.1109/72.478415

15. Yuan, Y. and Sun, W. (1997) Optimization Theory and Methods. Science Press, Beijing.

16. Yan, X. and Chao, Z. (2008) Convergence of Asynchronous Batch Gradient Method with Momentum for Pi-Sigma Networks. Mathematica Applicata, 21, 207-212.