**Journal of Applied Mathematics and Physics**

Vol.06 No.01(2018), Article ID:82069,17 pages

10.4236/jamp.2018.61024

Some Features of Neural Networks as Nonlinearly Parameterized Models of Unknown Systems Using an Online Learning Algorithm

Leonid S. Zhiteckii^{1}, Valerii N. Azarskov^{2}, Sergey A. Nikolaienko^{1}, Klaudia Yu. Solovchuk^{1}

^{1}Department of Intelligent Automatic Systems, International Centre of Information Technologies and Systems, Institute of Cybernetics, Kiev, Ukraine

^{2}Aircraft Control Systems Department, National Aviation University, Kiev, Ukraine

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: October 31, 2017; Accepted: January 26, 2018; Published: January 29, 2018

ABSTRACT

This paper deals with deriving the properties of an updated neural network model that is exploited to identify an unknown nonlinear system via the standard gradient learning algorithm. The convergence of this algorithm for online training of three-layer neural networks in a stochastic environment is studied. A special case where an unknown nonlinearity can be exactly approximated by some neural network with a nonlinear activation function for its output layer is considered. To analyze the asymptotic behavior of the learning processes, the so-called Lyapunov-like approach is utilized. As the Lyapunov function, the expected value of the squared approximation error depending on the network parameters is chosen. Within this approach, sufficient conditions guaranteeing the convergence of the learning algorithm with probability 1 are derived. Simulation results are presented to support the theoretical analysis.

**Keywords:**

Neural Network, Nonlinear Model, Online Learning Algorithm, Lyapunov Function, Probabilistic Convergence

1. Introduction

The design of mathematical models for technical, economic, social, and other systems with uncertainties is an important problem from both theoretical and practical points of view. This problem attracts the close attention of many researchers, and significant progress has been achieved in this scientific area in recent years. Within this area, new methods and modern intelligent algorithms dealing with uncertain systems have recently been proposed in [1] [2] [3] [4] . They include new optimization approaches advanced, in particular, in the papers [2] [4] .

Over the past decades, interest has been increasing toward the use of multilayer neural networks, applied among other things as models for the adaptive identification of nonlinearly parameterized dynamic systems [5] [6] [7] [8] . This has been motivated by the theoretical works of several researchers [9] [10] who proved that, even with one hidden layer, a neural network can uniformly approximate any continuous mapping over a compact domain, provided that the network has a sufficient number of neurons with corresponding weights. The theoretical background on neural network modeling may be found in the book [11] .

Different learning methods for updating the weights of neural networks have been reported in the literature. Most of these methods rely on the gradient concept [8] . One of them is based on utilizing the Lyapunov stability theory [6] [12] .

The convergence of the online gradient training procedure dealing with input signals of a deterministic (non-stochastic) nature was studied by many authors [13] - [23] . Several of these authors assumed that the training set is finite, whereas in online identification schemes this set is theoretically infinite. Moreover, we recently observed a non-stochastic learning process in which this procedure did not converge for a certain infinite sequence of training examples [24] .

A probabilistic asymptotic analysis of the convergence of online gradient training algorithms has been conducted in [25] - [33] . Several of these results make it possible to employ a constant learning rate [28] [30] . To the best of the authors' knowledge, there are no general results in the literature concerning the global convergence properties of training procedures with a fixed learning rate applicable to the case of an infinite learning set.

A popular approach to analyzing the asymptotic behavior of online gradient algorithms in the stochastic case is based on martingale convergence theory [34] . This approach was exploited by the authors in [24] to derive some local convergence results in a stochastic framework for standard online gradient algorithms with a constant learning rate.

The difficulty associated with the convergence of online gradient learning algorithms is how to guarantee the boundedness of the network weights and biases when the learning process is theoretically infinite. To overcome this difficulty, a penalty term was added to the error function in [33] . Recently, however, we established in [35] that the global convergence of these algorithms with probability 1 can be achieved without any additional term, at least in the case when the activation function of the network output layer is linear.

This work has been motivated by the fact that the standard gradient algorithm is widely exploited for online updating of neural network weights in accordance with the gradient-descent principle, whereas the following important question related to its ultimate properties remains partly open: when does the sequential procedure based on this algorithm converge if the learning rate is constant? As pointed out in [23] , answering this question on the convergence properties of the standard algorithm, which should shed light on the asymptotic features of multilayer neural networks using gradient-like training techniques, is the first step toward a full understanding of other, more generic training algorithms based on regularization, conjugate gradient, Newton optimization methods, etc.

The novelty of this paper, which extends the basic ideas of [35] to the case where the activation function of the output layer is nonlinear, consists in establishing sufficient conditions under which the gradient algorithm for learning neural networks globally converges in the almost-sure sense when the learning rate is constant. The proposed approach to deriving these convergence results is based on the Lyapunov methodology [36] . The results reveal some new features of multilayer neural networks with a nonlinear activation function in the output layer that use online gradient-type training algorithms with a constant learning rate.

2. Description of Learning Neural Network System: Problem Formulation

Consider the typical three-layer feedforward neural network with p inputs, one hidden layer of q neurons, and one output neuron. Denote by

$W={\left({w}_{ij}\right)}_{q\times p}={\left[{w}_{1},\cdots ,{w}_{q}\right]}^{\text{T}}$

with

${w}_{i}={\left[{w}_{i1},\cdots ,{w}_{ip}\right]}^{\text{T}}\in {R}^{p},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\cdots ,q$

the weight matrix connecting the input and hidden layers, and define the so-called bias vector ${w}_{0}$ as

${w}_{0}={\left[{w}_{01},\cdots ,{w}_{0q}\right]}^{\text{T}}\in {R}^{q},$

which is the threshold in the hidden-layer output. Further, let

$\omega ={\left[{\omega}_{1},\cdots ,{\omega}_{q}\right]}^{\text{T}}\in {R}^{q},$

be the weight vector between the hidden and output layers, and ${\omega}_{0}$ be the bias in the output layer. As in [33] , the activation functions used in the hidden neurons are all the same, denoted by $g:R\to R$ , and the activation function for the output layer is $f:R\to R$ .

Now, denoting by

$G\left(z\right)={\left[g\left({z}_{1}\right),\cdots ,g\left({z}_{q}\right)\right]}^{\text{T}}$

the vector-valued function which depends on the vector $z={\left[{z}_{1},\cdots ,{z}_{q}\right]}^{\text{T}}\in {R}^{q}$ , introduce the extended matrix $\stackrel{\u02dc}{W}=\left[W\vdots {w}_{0}\right]\in {R}^{q\times \left(p+1\right)}$ obtained by adding the column ${w}_{0}$ to W, the extended vector $\stackrel{\u02dc}{\omega}={\left[{\omega}^{\text{T}},{\omega}_{0}\right]}^{\text{T}}\in {R}^{q+1}$ , and the function $\stackrel{\u02dc}{G}\left(z\right)={\left[g\left({z}_{1}\right),\cdots ,g\left({z}_{q}\right),1\right]}^{\text{T}}$ of z. Then, for an input vector

$x={\left[{x}_{1},\cdots ,{x}_{p}\right]}^{\text{T}}\in {R}^{p},$

the output vector of hidden layer can be written as $\stackrel{\u02dc}{G}\left(\stackrel{\u02dc}{W}\text{\hspace{0.05em}}\stackrel{\u02dc}{x}\right)$ , where the notation $\stackrel{\u02dc}{x}={\left[{x}^{\text{T}},1\right]}^{\text{T}}$ of the extended vector $\stackrel{\u02dc}{x}\in {R}^{p+1}$ is used, and the final output ${y}_{\text{NN}}\in R$ of the neural network can be expressed as follows:

${y}_{\text{NN}}=f\left({\stackrel{\u02dc}{\omega}}^{\text{T}}\stackrel{\u02dc}{G}\left(\stackrel{\u02dc}{W}\stackrel{\u02dc}{x}\right)\right).$ (1)
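Expression (1) can be sketched as a short Python function; the NumPy layout, the tanh default activations, and all names below are our illustrative assumptions, not the paper's notation:

```python
import numpy as np

def nn_output(x, W, w0, omega, omega0, g=np.tanh, f=np.tanh):
    """Sketch of the network output (1): y_NN = f(omega~^T G~(W~ x~)).

    x      : input vector, shape (p,)
    W      : input-to-hidden weight matrix, shape (q, p)
    w0     : hidden-layer bias vector, shape (q,)
    omega  : hidden-to-output weight vector, shape (q,)
    omega0 : output-layer bias (scalar)
    g, f   : hidden-layer and output-layer activation functions
    """
    z = W @ x + w0                  # W~ x~ with the extended input x~ = [x^T, 1]^T
    hidden = g(z)                   # hidden-layer outputs G(z), componentwise
    return f(omega @ hidden + omega0)
```

With a linear f, this reduces to the single-hidden-layer model used in the examples of Section 4.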

Let

$y=\phi \left(x\right)$ (2)

with $\phi :{R}^{p}\to R$ be an unknown bounded nonlinearity defined on a bounded, either finite or infinite, set $X\subset {R}^{p}$ , as depicted in Figure 1 for the case $p=2$ . This function needs to be approximated by the neural network (1) via a suitable choice of $\stackrel{\u02dc}{\omega}$ and $\stackrel{\u02dc}{W}$ . By virtue of (2), the approximation error

$e\left(\stackrel{\u02dc}{\omega}\text{,}\stackrel{\u02dc}{W},x,y\right)=y-f\left({\stackrel{\u02dc}{\omega}}^{\text{T}}\stackrel{\u02dc}{G}\left(\stackrel{\u02dc}{W}\stackrel{\u02dc}{x}\right)\right)$ (3)

depends on x for any fixed $\left(\stackrel{\u02dc}{\omega}\text{,}\stackrel{\u02dc}{W}\right)$ .

Now, suppose that some complex system to be identified is described at each nth time instant by the equation

${y}^{n}=\phi \left({x}^{n}\right)\text{\hspace{1em}}\left(n=0,1,2,\cdots \right)$ (4)

in which ${x}^{n}\in X$ and ${y}^{n}\in R$ are its input and output signals, respectively, both available for measurement.

Based on the infinite sequence of training examples ${\left\{{x}^{n},{y}^{n}\right\}}_{n=0}^{\infty}$ generated by (4), the online learning algorithm for updating the weights and biases in (1) is defined as the standard gradient-descent iteration procedure

${\stackrel{\u02dc}{\omega}}^{n+1}={\stackrel{\u02dc}{\omega}}^{n}-{\eta}_{n}{\nabla}_{\stackrel{\u02dc}{\omega}}{e}^{2}\left({\stackrel{\u02dc}{\omega}}^{n},{\stackrel{\u02dc}{W}}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right),$ (5)

${w}_{i}^{n+1}={w}_{i}^{n}-{\eta}_{n}{\nabla}_{{\stackrel{\u02dc}{w}}_{i}}{e}^{2}\left({\stackrel{\u02dc}{\omega}}^{n},{\stackrel{\u02dc}{W}}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right),$ (6)

$i=1,\cdots ,q,\text{\hspace{1em}}n=0,1,\cdots .$

Figure 1. Training sets: (a) X is an infinite set of xs; (b) X is a finite set of xs.

In these equations, ${\nabla}_{\stackrel{\u02dc}{\omega}}{e}^{2}\left(\cdot ,\text{\hspace{0.17em}}\cdot ;\text{\hspace{0.17em}}\cdot ,\text{\hspace{0.17em}}\cdot \right)$ and ${\nabla}_{{\stackrel{\u02dc}{w}}_{i}}{e}^{2}\left(\cdot ,\text{\hspace{0.17em}}\cdot ;\text{\hspace{0.17em}}\cdot ,\text{\hspace{0.17em}}\cdot \right)$ denote the current gradients of the error function ${e}^{2}\left(\stackrel{\u02dc}{\omega},\stackrel{\u02dc}{W};y,\stackrel{\u02dc}{x}\right)$ with respect to $\stackrel{\u02dc}{\omega}$ and ${w}_{i}$ , respectively, obtained after substituting $\stackrel{\u02dc}{\omega}={\stackrel{\u02dc}{\omega}}^{n}$ , $\stackrel{\u02dc}{W}={\stackrel{\u02dc}{W}}^{n}$ , $y={y}^{n}$ , and $\stackrel{\u02dc}{x}={\stackrel{\u02dc}{x}}^{n}$ into (3); ${\eta}_{n}>0$ is the step size (the learning rate). The expressions for ${\nabla}_{\stackrel{\u02dc}{\omega}}{e}^{2}\left({\stackrel{\u02dc}{\omega}}^{n},{\stackrel{\u02dc}{W}}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right)$ and ${\nabla}_{{\stackrel{\u02dc}{w}}_{i}}{e}^{2}\left({\stackrel{\u02dc}{\omega}}^{n},{\stackrel{\u02dc}{W}}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right)$ may be written out in detail as in [23] [33] ; they are omitted here due to space limitations.
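The omitted gradient expressions follow from the chain rule applied to (3). A sketch for sigmoid hidden and output activations (the function and variable names are ours; the paper writes these formulas in the style of [23] [33]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads_e2(W, w0, omega, omega0, x, y):
    """Chain-rule gradients of e^2 used in updates (5), (6),
    assuming sigmoid hidden activation g and sigmoid output activation f."""
    z = W @ x + w0                          # hidden pre-activations z_i
    h = sigmoid(z)                          # hidden outputs g(z_i)
    s = omega @ h + omega0                  # output-layer pre-activation
    e = y - sigmoid(s)                      # approximation error (3)
    fp = sigmoid(s) * (1.0 - sigmoid(s))    # f'(s)
    gp = h * (1.0 - h)                      # g'(z_i)
    d_omega = -2.0 * e * fp * h             # d(e^2)/d(omega)
    d_omega0 = -2.0 * e * fp                # d(e^2)/d(omega_0)
    back = -2.0 * e * fp * omega * gp       # error signal passed to the hidden layer
    d_W = back[:, None] * x[None, :]        # d(e^2)/d(w_ij)
    d_w0 = back                             # d(e^2)/d(w_0i)
    return e, d_omega, d_omega0, d_W, d_w0
```

A finite-difference check confirms that these expressions match the numerical gradient of $e^2$.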

Introducing the notation

$\theta ={\left[{\stackrel{\u02dc}{\omega}}^{\text{T}},{w}_{1}^{\text{T}},\cdots ,{w}_{q}^{\text{T}},{w}_{0}^{\text{T}}\right]}^{\text{T}}$

of the extended weight and bias vector $\theta \in {R}^{q\left(p+2\right)+1}$ , and considering Equations (5) and (6) jointly, rewrite the online gradient learning algorithm for updating ${\theta}^{n}$ in the general form (as in [33] )

${\theta}^{n+1}={\theta}^{n}-{\eta}_{n}{\nabla}_{\theta}{e}^{2}\left({\theta}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right),$ (7)

where ${\nabla}_{\theta}{e}^{2}\left(\cdot ;\text{\hspace{0.17em}}\cdot ,\text{\hspace{0.17em}}\cdot \right)$ represents the gradient of ${e}^{2}\left({\theta}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right)$ with respect to $\theta $ calculated at the nth time instant.

Thus, the Equation (7) together with the expression

$e\left({\theta}^{n};{y}^{n},{x}^{n}\right)={y}^{n}-{y}_{\text{NN}}^{n}$

in which ${y}^{n}$ is given by (4), and

${y}_{\text{NN}}^{n}=\psi \left({\theta}^{n},{x}^{n}\right)$

describe the learning neural network system necessary to identify the nonlinearity (2). To clarify the operation of this system, its structure is depicted in Figure 2, where the notation ${e}^{n}=e\left({\theta}^{n};{y}^{n},{x}^{n}\right)$ is used.

The problem formulated in this paper consists in analyzing the asymptotic properties of the learning neural network system presented above. More precisely, it is required to derive conditions under which the learning procedure converges, meaning the existence of a limit

$\underset{n\to \infty}{\mathrm{lim}}{\theta}^{n}={\theta}^{\infty}$ (8)

in some sense [24] .

3. Preliminaries

Suppose that there is a multilayer neural network described by

${y}_{\text{NN}}\equiv \psi \left(\theta ,x\right),$

where $\theta $ is some fixed parameter vector. According to [9] [10] , the requirement

$\underset{x\in X}{\mathrm{max}}\left|\phi \left(x\right)-\psi \left(\theta ,x\right)\right|\le \epsilon $

evaluating an accuracy of the approximation of $\phi \left(x\right)$ by $\psi \left(\theta ,x\right)$ can be satisfied for any $\epsilon >0$ via suitable choice of $\theta $ and the number of the neurons in its layers. On the other hand, the performance index of the neural network model with a fixed number of these neurons defining its approximation capability might naturally be expressed as follows:

Figure 2. Configuration of learning neural network system.

${J}^{0}\left(\theta \right)=\underset{x\in X}{\mathrm{max}}\left|\phi \left(x\right)-\psi \left(\theta ,x\right)\right|.$ (9)

In fact, the desired (optimal) vector $\theta ={\theta}_{0}^{*}$ will then be specified from (9) as the variable $\theta $ minimizing ${J}^{0}\left(\theta \right)$ :

${\theta}_{0}^{*}=\mathrm{arg}\underset{\theta}{\mathrm{min}}\underset{x\in X}{\mathrm{max}}\left|\phi \left(x\right)-\psi \left(\theta ,x\right)\right|.$ (10)

Nevertheless, all researchers who employ online learning procedures in a stochastic environment "silently" replace ${J}^{0}\left(\theta \right)$ by

$J\left(\theta \right)={E}_{x}\left\{{e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)\right\},$

where ${E}_{x}\left\{{e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)\right\}$ denotes the expected value of ${e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)$ .

Indeed, the learning algorithm (7) does not minimize (9); instead, it minimizes $J\left(\theta \right)$ [37] . This observation means that (7) will at best yield

${\theta}^{*}:=\mathrm{arg}\underset{\theta}{\mathrm{min}}J\left(\theta \right),$

but not ${\theta}_{0}^{*}$ given by (10) as $n\to \infty $ .

Now, consider the special case when the unknown function (2) can be exactly approximated by the neural network $\psi \left(\theta ,\stackrel{\u02dc}{x}\right)$ , implying

$\phi \left(x\right)\equiv \psi \left({\theta}^{*},\stackrel{\u02dc}{x}\right)\text{\hspace{1em}}\forall x\in X.$ (11)

In this case, called the ideal case in ( [8] , p. 304), we have $e\left({\theta}^{*},\stackrel{\u02dc}{x}\right)\equiv 0$ for any x from X and, consequently, $J\left({\theta}^{*}\right)=0$ .

If the condition given in identity (11) is satisfied, then the learning rate ${\eta}_{n}$ in (7) may be constant:

${\eta}_{n}\equiv \eta =\text{const}>0;$

see ( [37] , sect. 3.13).

Note that property (11) may hold, in particular, when $X=\left\{{x}^{\left(1\right)},\cdots ,{x}^{\left(K\right)}\right\}$ contains a certain number $K=\text{card}\text{\hspace{0.17em}}X$ of training examples, provided that this number does not exceed the dimension of $\theta $ . To see this, write, according to (11), the set of K equations

$\begin{array}{l}\psi \left(\theta ,{\stackrel{\u02dc}{x}}^{\left(1\right)}\right)={y}^{\left(1\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\vdots \\ \psi \left(\theta ,{\stackrel{\u02dc}{x}}^{\left(K\right)}\right)={y}^{\left(K\right)}\end{array}\}$

with respect to the unknown $\theta $ . They are compatible if $K\le q\left(p+2\right)+1$ . From (2) together with the definition of ${\theta}^{*}$ it can be concluded that their solution is precisely $\theta ={\theta}^{*}$ , yielding $J\left({\theta}^{*}\right)=0$ , because in this special case $\psi \left({\theta}^{*},{\stackrel{\u02dc}{x}}^{\left(k\right)}\right)={y}^{\left(k\right)}$ for all $k=1,\cdots ,K$ .

4. Main Results

4.1. Some Feature of Multilayer Neural Network

It turns out that if the activation function g of the hidden layer is nonlinear, then for an arbitrary fixed vector ${\theta}^{\prime}$ there is at least one other vector ${\theta}^{\u2033}$ such that the network outputs for these two different vectors are the same, even when the output activation function f is linear, i.e. when $f\left(\zeta \right)=\zeta $ :

$\psi \left({\theta}^{\prime},\stackrel{\u02dc}{x}\right)\equiv \psi \left({\theta}^{\u2033},\stackrel{\u02dc}{x}\right)\text{\hspace{1em}}\forall x\in X.$ (12)

Feature (12) implies that in the presence of a nonlinear g there exist at least two different ${\theta}^{*}\text{s}$ . For example, let $p=1,\text{\hspace{0.17em}}q=1$ and

$g\left({z}_{1}\right)=\frac{1}{1+\mathrm{exp}\left(-{z}_{1}\right)}$

in which ${z}_{1}={w}_{11}{x}_{1}+{w}_{01}$ , and $f\left(\zeta \right)=\zeta $ with $\zeta ={\omega}_{1}g\left({z}_{1}\right)+{\omega}_{0}$ . Fix a ${\theta}^{\prime}={\left[{{w}^{\prime}}_{11},{{w}^{\prime}}_{01},{{\omega}^{\prime}}_{1},{{\omega}^{\prime}}_{0}\right]}^{\text{T}}$ . Then ${\theta}^{\u2033}={\left[-{{w}^{\prime}}_{11},-{{w}^{\prime}}_{01},-{{\omega}^{\prime}}_{1},{{\omega}^{\prime}}_{1}+{{\omega}^{\prime}}_{0}\right]}^{\text{T}}$ will also satisfy (12); see [35] . Therefore, the set of ${\theta}^{*}\text{s}$ is not a singleton if g is nonlinear.
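This two-solution property is easy to check numerically. In the sketch below (the concrete numbers are arbitrary choices of ours), the sigmoid network with parameters θ′ and the reflected parameters θ″ produce identical outputs, since σ(−z) = 1 − σ(z):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def psi(theta, x):
    """p = q = 1 network of Subsection 4.1: sigmoid g, linear f."""
    w11, w01, om1, om0 = theta
    return om1 * sigmoid(w11 * x + w01) + om0

# an arbitrary theta' and the corresponding theta'' from Subsection 4.1
theta1 = np.array([0.7, -0.2, 1.3, 0.4])
theta2 = np.array([-0.7, 0.2, -1.3, 1.3 + 0.4])

xs = np.linspace(-5.0, 5.0, 101)
gap = np.max(np.abs(psi(theta1, xs) - psi(theta2, xs)))
# gap is zero up to rounding: the two parameter vectors define one mapping
```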

4.2. An Observation

To study the asymptotic properties of the sequence $\left\{{\theta}^{n}\right\}$ generated by the learning algorithm (7) in the non-stochastic case, simulation experiments with the scalar nonlinear system (2) having the nonlinearity

$\phi \left(x\right)=\frac{3.75+0.05\mathrm{exp}\left(-7.15x\right)}{1+0.19\mathrm{exp}\left(-7.15x\right)}$

were conducted. This nonlinearity can be exactly approximated by the three-layer neural network model described by $\psi \left({\theta}^{*},\stackrel{\u02dc}{x}\right)$ as in Subsection 4.1, with ${\theta}^{\ast \left(1\right)}={\left[7.15,1.65,3.45,0.3\right]}^{\text{T}}$ and ${\theta}^{\ast \left(2\right)}={\left[-7.15,-1.65,-3.45,3.75\right]}^{\text{T}}$ .
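This claim can be spot-checked numerically. Note that the printed coefficients 0.05 and 0.19 of the nonlinearity appear to be rounded (with θ*(1) the exact values would be 0.3e^{−1.65} ≈ 0.0576 and e^{−1.65} ≈ 0.1921), so the sketch below, which assumes sigmoid g and linear f as in Subsection 4.1, checks the two θ*s against each other exactly and against φ only to within that rounding:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def phi(x):
    """The nonlinearity of Subsection 4.2, coefficients as printed."""
    return (3.75 + 0.05 * np.exp(-7.15 * x)) / (1.0 + 0.19 * np.exp(-7.15 * x))

def psi(theta, x):
    """Single-hidden-neuron network: sigmoid g, linear f."""
    w11, w01, om1, om0 = theta
    return om1 * sigmoid(w11 * x + w01) + om0

theta_star_1 = np.array([7.15, 1.65, 3.45, 0.3])
theta_star_2 = np.array([-7.15, -1.65, -3.45, 3.75])

xs = np.linspace(-2.0, 2.0, 401)
# the two optimal vectors give identical network outputs ...
twin_gap = np.max(np.abs(psi(theta_star_1, xs) - psi(theta_star_2, xs)))
# ... and both reproduce phi up to the rounding of its printed coefficients
approx_gap = np.max(np.abs(psi(theta_star_1, xs) - phi(xs)))
```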

Figure 3 illustrates the results of one simulation experiment with $\eta =0.01$ , where $\left\{{x}^{n}\right\}$ was chosen as a non-stochastic sequence. It can be observed that in this example the variable ${\mathrm{min}}_{i=1,2}\Vert {\theta}^{*\left(i\right)}-{\theta}^{n}\Vert $ shown in Figure 3(b) has no limit, implying that the learning algorithm (7) may not converge: in this case, the limit (8) does not exist; see Figure 3(c).


Figure 3. Behaviour of learning algorithm (7) in the non-stochastic case: (a) inputs ${x}^{n}$ ; (b) the variable $\mathrm{min}\left\{\Vert {\theta}^{*\left(1\right)}-{\theta}^{n}\Vert ,\Vert {\theta}^{*\left(2\right)}-{\theta}^{n}\Vert \right\}$ ; (c) current model error ${e}^{n}$ .

4.3. Sufficient Conditions for the Probabilistic Convergence of Learning Procedure

The following basic assumption is made concerning ${\left\{{x}^{n}\right\}}_{n=0}^{\infty}$ , which is a bounded stochastic sequence (since X is bounded):

(A1) ${x}^{n}\text{s}$ arise randomly in accordance with a probability distribution $P\left(x\right)$ if X is finite, and with probability density $p\left(x\right)$ if X is infinite.

Within assumption (A1), the expected value (mean) of ${e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)={\left(y-\psi \left(\theta ,\stackrel{\u02dc}{x}\right)\right)}^{2}$ is given by

${E}_{x}\left\{{e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)\right\}=\{\begin{array}{ll}{\displaystyle \underset{x\in X}{\sum}{e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)P\left(x\right)}& \text{if }X\text{ is a finite set,}\\ {\displaystyle {\int}_{X}{e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)p\left(x\right)\text{d}x}& \text{if }X\text{ is an infinite set}\text{.}\end{array}$
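For an infinite X, the expectation above is an integral that is rarely available in closed form; in practice it can be estimated by Monte Carlo sampling from p(x). A minimal sketch, where the function names and the uniform-density toy example are ours:

```python
import numpy as np

def J_estimate(psi, phi, theta, sample_x, n=100_000, seed=0):
    """Monte Carlo estimate of J(theta) = E_x{ e^2(theta; y, x) },
    where y = phi(x) and x is drawn from the density p(x) via sample_x."""
    rng = np.random.default_rng(seed)
    xs = sample_x(rng, n)
    e = phi(xs) - psi(theta, xs)      # approximation error (3) at each sample
    return float(np.mean(e ** 2))

# toy check: a scalar linear "network" psi(theta, x) = theta * x
uniform = lambda rng, n: rng.uniform(-1.0, 1.0, n)   # p(x) uniform on X = [-1, 1]
lin = lambda theta, x: theta * x
# with phi(x) = 2x, theta* = 2 is an exact model, so J(theta*) = 0,
# while theta = 1 gives J = E{x^2} = 1/3
```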

To derive the main theoretical result we need Assumption (A1) and the following additional assumptions:

(A2) the identity (11) holds;

(A3) the activation functions used in the hidden neurons and output neuron are the same $\left(f(\cdot )=g(\cdot )\right)$ , twice continuously differentiable on $R$ and also uniformly bounded on $R$ .

Further, we introduce a scalar function $V\left(\theta \right)$ playing the role of a Lyapunov function [36] with the following properties:

(a) $V\left(\theta \right)$ is nonnegative, i.e.,

$V\left(\theta \right)\ge 0;$ (13)

(b) the gradient of $V\left(\theta \right)$ is Lipschitz continuous in the sense that

$\Vert \nabla V\left({\theta}^{\prime}\right)-\nabla V\left({\theta}^{\u2033}\right)\Vert \le L\Vert {\theta}^{\prime}-{\theta}^{\u2033}\Vert $ (14)

for any ${\theta}^{\prime},{\theta}^{\u2033}$ from ${R}^{q\left(p+2\right)+1}$ , where $\nabla V\left(\theta \right)$ denotes its gradient, and $L>0$ is the Lipschitz constant.

Now, the global stochastic convergence analysis of the gradient learning algorithm (7) is based on the fundamental convergence conditions established in the following Key Technical Lemma, which is a slightly reformulated version of Theorem 3 in [36] .

Key Technical Lemma. Let $V\left(\theta \right)$ be a function satisfying (13) and (14). Define the scalar variable

$H\left(\theta \right)={\nabla}_{\theta}V{\left(\theta \right)}^{\text{T}}{\nabla}_{\theta}E\left\{Q\left(x,\theta \right)\right\}$ (16)

with some $Q\left(x,\theta \right)\ge 0$ , and denote

${H}_{n}\left(\theta \right):={\nabla}_{\theta}V{\left({\theta}^{n}\right)}^{\text{T}}{\nabla}_{\theta}E\left\{Q\left(x,{\theta}^{n}\right)\right\}.$

Suppose:

1) ${H}_{n}\left(\theta \right)\ge {\Theta}_{n}V\left({\theta}^{n-1}\right),\text{\hspace{0.17em}}{\Theta}_{n}>0,$

2) $E\left\{{\Vert {\nabla}_{\theta}Q\left(x,{\theta}^{n}\right)\Vert}^{2}\right\}\le {\tau}_{n}V\left({\theta}^{n}\right),\text{\hspace{0.17em}}{\tau}_{n}\ge 0.$

Introduce the additional variable

${\nu}_{n}={\eta}_{n}\left({\Theta}_{n}-L{\eta}_{n}{\tau}_{n}/2\right).$ (17)

Then the algorithm (7) yields

$\underset{n\to \infty}{\mathrm{lim}}{V}_{n}=0$ a.s.,

where ${V}_{n}:=V\left({\theta}^{n}\right)$ provided that $E\left\{{\theta}^{0}\right\}<\infty $ and

$0\le {\nu}_{n}\le 1,$ (18)

$\underset{n=0}{\overset{\infty}{\sum}}{\nu}_{n}=\infty .$ (19)

Related results, which follow from Theorem 3’ of [36] , are:

Corollary. Under the conditions of the Key Technical Lemma, if ${\Theta}_{n}\equiv \Theta =\text{const}$ , ${\tau}_{n}\equiv \tau =\text{const}$ , and ${\eta}_{n}\equiv \eta =\text{const}$ , then ${V}_{n}\underset{n\to \infty}{\to}0$ with probability 1 provided that

$0<\eta \le 2\left(\Theta -\epsilon \right)/L\tau \text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(0<\epsilon <\Theta \right)$ (20)

is satisfied. ∎

Next, we are able to present the convergence result summarized in the theorem below.

Theorem. Suppose Assumptions (A1)-(A3) hold. Then the gradient algorithm (7) with a constant learning rate, ${\eta}_{n}\equiv \eta $ , will converge with probability 1 (in the sense that ${V}_{n}\underset{n\to \infty}{\to}0$ a.s.) and

$\underset{n\to \infty}{\mathrm{lim}}e\left({\theta}^{n};{y}^{n},{\stackrel{\u02dc}{x}}^{n}\right)=0$ a.s. (21)

for any initial ${\theta}^{0}$ chosen randomly so that $E\left\{Q\left(x,{\theta}^{0}\right)\right\}<\infty $ , if $\eta $ satisfies condition (20) with $\Theta $ and $\tau $ determined by

$\Theta :=\underset{\theta}{\mathrm{inf}}\frac{{\Vert {\nabla}_{\theta}E\left\{Q\left(x,\theta \right)\right\}\Vert}^{2}}{E\left\{Q\left(x,\theta \right)\right\}}>0,$ (22)

$\tau :=\underset{\theta}{\mathrm{sup}}\frac{E\left\{{\Vert {\nabla}_{\theta}Q\left(x,\theta \right)\Vert}^{2}\right\}}{E\left\{Q\left(x,\theta \right)\right\}}<\infty .$ (23)

Proof. Set $V\left(\theta \right)=E\left\{Q\left(x,\theta \right)\right\}$ . Then conditions (13) and (14) can be shown to be valid, so this function may be taken as the Lyapunov function. By virtue of (16), such a choice of $V\left(\theta \right)$ gives $H\left(\theta \right)={\Vert {\nabla}_{\theta}E\left\{Q\left(x,\theta \right)\right\}\Vert}^{2}$ . Putting ${\Theta}_{n}\equiv \Theta $ and ${\tau}_{n}\equiv \tau $ with $\Theta $ and $\tau $ determined by (22) and (23), respectively, we conclude that conditions 1) and 2) of the Key Technical Lemma are satisfied. Applying its Corollary proves that ${\mathrm{lim}}_{n\to \infty}{V}_{n}=0$ with probability 1.

Since $V\left(\theta \right)={E}_{x}\left\{{e}^{2}\left(\theta ;y,\stackrel{\u02dc}{x}\right)\right\}$ , result (21) follows together with Assumption (A2). ∎

4.4. Simulations and a Discussion

To demonstrate the theoretical result given in Subsection 4.3, several simulations were conducted. First, we dealt with the same neural network and the same training samples as in ( [33] , p. 1052). Namely, they were chosen as follows:

${x}^{\left(1\right)}={\left[0,0\right]}^{\text{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}^{\left(1\right)}=1;$

${x}^{\left(2\right)}={\left[0,1\right]}^{\text{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}^{\left(2\right)}=0;$

${x}^{\left(3\right)}={\left[1,0\right]}^{\text{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}^{\left(3\right)}=0;$

${x}^{\left(4\right)}={\left[1,1\right]}^{\text{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{y}^{\left(4\right)}=1.$

Two numerical examples with different initial ${\theta}^{0}$ were considered. In Example 1 we set ${w}_{11}^{0}=0.95$ , ${w}_{12}^{0}=-0.084$ , ${w}_{21}^{0}=0.079$ , ${w}_{22}^{0}=-0.079$ , ${w}_{01}^{0}=-0.089$ , ${w}_{02}^{0}=0.075$ , ${\omega}_{1}^{0}=0.357$ , ${\omega}_{2}^{0}=-0.357$ , ${\omega}_{0}^{0}=0.354$ . In Example 2 we set ${w}_{11}^{0}=-0.090$ , ${w}_{12}^{0}=0.225$ , ${w}_{21}^{0}=-0.138$ , ${w}_{22}^{0}=0.139$ , ${w}_{01}^{0}=0.222$ , ${w}_{02}^{0}=-0.084$ , ${\omega}_{1}^{0}=-0.356$ , ${\omega}_{2}^{0}=0.357$ , ${\omega}_{0}^{0}=0.353$ .

In contrast to [33] , the learning rate was chosen as $\eta =0.01$ , and the algorithms (5), (6) were implemented with no penalty term.
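The setup of Example 1 can be reproduced with a few lines of code. The sketch below is our own implementation of updates (5), (6) with sigmoid g and f; the stochastic sampling of the four training pairs is an assumption consistent with (A1), and the sketch only checks that the mean squared error decreases from its initial value:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# training samples of Subsection 4.4
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([1., 0., 0., 1.])

# Example 1 initial weights and biases
W = np.array([[0.95, -0.084], [0.079, -0.079]])
w0 = np.array([-0.089, 0.075])
om = np.array([0.357, -0.357])
om0 = 0.354
eta = 0.01                                   # constant learning rate

def J():
    """Empirical mean of e^2 over the four training pairs."""
    out = sigmoid(sigmoid(X @ W.T + w0) @ om + om0)
    return float(np.mean((Y - out) ** 2))

J0 = J()
rng = np.random.default_rng(1)
for _ in range(10_000):
    k = rng.integers(4)                      # x^n drawn at random, per (A1)
    x, y = X[k], Y[k]
    h = sigmoid(W @ x + w0)                  # hidden outputs g(z_i)
    s = om @ h + om0                         # output pre-activation
    e = y - sigmoid(s)                       # error (3)
    fp = sigmoid(s) * (1 - sigmoid(s))       # f'(s)
    gp = h * (1 - h)                         # g'(z_i)
    back = 2 * e * fp * om * gp              # signal for the hidden layer
    om = om + eta * 2 * e * fp * h           # update (5)
    om0 = om0 + eta * 2 * e * fp
    W = W + eta * back[:, None] * x[None, :] # update (6)
    w0 = w0 + eta * back
J_final = J()
```

Over 10,000 steps with η = 0.01 the error decreases with no penalty term; as noted below, different θ⁰ may lead the weights to different limits θ^∞.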

Results of two simulation experiments, each lasting 10,000 iteration steps, are presented in Figure 4 and Figure 5, in which the components of ${\theta}^{n}$ and $J\left({\theta}^{n}\right)$ are shown.

Further simulation experiments were also conducted. In contrast with the previous experiments, they dealt with an infinite training set X. Namely, two simulations with the same nonlinear function as in Subsection 4.2 were first conducted, with X taken as the bounded infinite set $X=\left[-2,2\right]$ . However, $\left\{{x}^{n}\right\}$ was now chosen as a stochastic sequence, generated as a pseudorandom i.i.d. sequence.

Figure 4. Behavior of gradient learning algorithm (7) in Example 1.

Figure 5. Behavior of gradient learning algorithm (7) in Example 2.

Two numerical examples were considered. In Example 3, the initial values of the neural network weights and biases were taken as ${w}_{1}^{0}=0.529$ , ${w}_{2}^{0}=-0.5012$ , ${\omega}_{1}^{0}=-0.9168$ , ${\omega}_{2}^{0}=1.0409$ . In Example 4 we set ${w}_{1}^{0}=-0.3756$ , ${w}_{2}^{0}=-0.572$ , ${\omega}_{1}^{0}=-0.9798$ , ${\omega}_{2}^{0}=1.1436$ . Figure 6 and Figure 7 demonstrate the results of the two simulation experiments conducted with these initial estimates ${\theta}^{0}$ . In both experiments, ${\eta}_{n}$ was again chosen as ${\eta}_{n}\equiv \eta =0.01$ .

Next, another nonlinearity

$\phi \left(x\right)=\frac{1}{1+\mathrm{exp}\left[-{a}_{1}\left(x\right)-{a}_{2}\left(x\right)-1\right]}$

with ${a}_{1}\left(x\right)={\left[1+\mathrm{exp}\left(-10x-5\right)\right]}^{-1}$ and ${a}_{2}\left(x\right)={\left[1+\mathrm{exp}\left(-10x+5\right)\right]}^{-1}$ , which can be exactly approximated by a suitable neural network, was chosen as in ( [11] , p. 12-4). The following initial estimates were taken: ${w}_{1}^{0}=2.8$ , ${w}_{2}^{0}=-5.6$ , ${w}_{3}^{0}=-2.8$ , ${w}_{4}^{0}=-5.6$ , ${\omega}_{1}^{0}=5.33$ , ${\omega}_{2}^{0}=1.71$ , ${\omega}_{3}^{0}=-3.52$ (Example 5), and ${w}_{1}^{0}=0.27$ , ${w}_{2}^{0}=0.19$ , ${w}_{3}^{0}=-3.09$ , ${w}_{4}^{0}=3.96$ , ${\omega}_{1}^{0}=1.64$ , ${\omega}_{2}^{0}=0.72$ , ${\omega}_{3}^{0}=-2.21$ (Example 6).

Results of the two simulation experiments conducted with the initial estimates ${\theta}^{0}$ given above are depicted in Figure 8 and Figure 9.

From Figures 4-9 we can see that the learning processes converge and the performance index $J\left({\theta}^{n}\right)$ tends to zero even though the penalty term is absent. It can also be observed that if the initial vectors ${\theta}^{0}\text{s}$ are different, then the sequences $\left\{{\theta}^{n}\right\}$ may converge to different final ${\theta}^{\infty}\text{s}$ .

The simulation experiments show that the penalty term is not necessary, in principle, to achieve the convergence of the online gradient learning procedure for three-layer neural networks if the conditions given by Assumptions (A1)-(A3) are satisfied. This fact supports our theoretical results.

Figure 6. Behavior of gradient learning algorithm (7) in Example 3.

Figure 7. Behavior of gradient learning algorithm (7) in Example 4.

Figure 8. Behavior of gradient learning algorithm (7) in Example 5.

Figure 9. Behavior of gradient learning algorithm (7) in Example 6.

5. Conclusion

In this paper, some important features of multilayer neural networks utilized as nonlinearly parameterized models of unknown nonlinear systems to be identified have been derived. A special case where the nonlinearity can be exactly approximated by a three-layer neural network has been studied. In contrast to the authors' previous papers, we dealt with a neural network having a nonlinear activation function in its output layer. It was shown that if the activation function of the hidden layer is nonlinear, then, for any input variables, there are at least two different network parameter vectors under which the network outputs are the same, even when the output activation function is linear. This feature implies that the standard online gradient training algorithm with a constant learning rate may not converge if the training sequence is non-stochastic. Nevertheless, provided that this sequence is stochastic, it has been theoretically established that, under certain conditions, such an algorithm will converge with probability one. However, the ultimate values of the network parameters may differ. These facts were confirmed by simulation experiments.

Acknowledgements

The authors are grateful to the anonymous reviewer for the valuable comments.

Cite this paper

Zhiteckii, L.S., Azarskov, V.N., Nikolaienko, S.A. and Solovchuk, K.Yu. (2018) Some Features of Neural Networks as Nonlinearly Parameterized Models of Unknown Systems Using an Online Learning Algorithm. Journal of Applied Mathematics and Physics, 6, 247-263. https://doi.org/10.4236/jamp.2018.61024

References

- 1. Chen, L., Peng, J., Zhang, B. and Rosyida, I. (2017) Diversified Models for Portfolio Selection Based on Uncertain Semivariance. International Journal of Systems Science, 3, 637-648. https://doi.org/10.1080/00207721.2016.1206985
- 2. Draa, A., Bouzoubia, S. and Boukhalfa, I. (2015) A Sinusoidal Differential Evolution Algorithm for Numerical Optimisation. Applied Soft Computing, 27, 99-126. https://doi.org/10.1016/j.asoc.2014.11.003
- 3. Zhang, B., Peng, J., Li, S. and Chen, L. (2016) Fixed Charge Solid Transportation Problem in Uncertain Environment and Its Algorithm. Computers & Industrial Engineering, 102, 186-197. https://doi.org/10.1016/j.cie.2016.10.030
- 4. Sun, G., Zhao, R. and Lan, Y. (2016) Joint Operations Algorithm for Large-Scale Global Optimization. Applied Soft Computing, 38, 1025-1039. https://doi.org/10.1016/j.asoc.2015.10.047
- 5. Suykens, J. and Moor, B.D. (1993) Nonlinear System Identification Using Multilayer Neural Networks: Some Ideas for Initial Weights, Number of Hidden Neurons and Error Criteria. Proceedings of the 12th IFAC World Congress, Sydney, Australia, 3, 49-52.
- 6. Kosmatopoulos, E.S., Polycarpou, M.M., Christodoulou, M.A. and Ioannou, P.A. (1995) High-Order Neural Network Structures for Identification of Dynamical Systems. IEEE Transactions on Neural Networks, 6, 422-431. https://doi.org/10.1109/72.363477
- 7. Levin, A.U. and Narendra, K.S. (1995) Recursive Identification Using Feedforward Neural Networks. International Journal of Control, 61, 533-547. https://doi.org/10.1080/00207179508921916
- 8. Tsypkin, Ya.Z., Mason, J.D., Avedyan, E.D., Warwick, K. and Levin, I.K. (1999) Neural Networks for Identification of Nonlinear Systems Under Random Piecewise Polynomial Disturbances. IEEE Transactions on Neural Networks, 10, 303-311. https://doi.org/10.1109/72.750559
- 9. Cybenko, G. (1989) Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals, and Systems, 2, 303-314. https://doi.org/10.1007/BF02551274
- 10. Funahashi, K. (1989) On the Approximate Realization of Continuous Mappings by Neural Networks. Neural Networks, 2, 182-192. https://doi.org/10.1016/0893-6080(89)90003-8
- 11. Hagan, M.T., Demuth, H.B. and Beale, M.H. (1996) Neural Network Design. PWS Publishing, Boston.
- 12. Behera, L., Kumar, S. and Patnaik, A. (2006) On Adaptive Learning Rate That Guarantees Convergence in Feedforward Networks. IEEE Transactions on Neural Networks, 17, 1116-1125. https://doi.org/10.1109/TNN.2006.878121
- 13. Mangasarian, O.L. and Solodov, M.V. (1994) Serial and Parallel Backpropagation Convergence via Nonmonotone Perturbed Minimization. Optimization Methods and Software, 4, 103-116. https://doi.org/10.1080/10556789408805581
- 14. Luo, Z. and Tseng, P. (1994) Analysis of an Approximate Gradient Projection Method with Application to the Backpropagation Algorithm. Optimization Methods and Software, 4, 85-101. https://doi.org/10.1080/10556789408805580
- 15. Ellacott, S.W. (1993) The Numerical Analysis Approach. In: Taylor, J.G., Ed., Mathematical Approaches to Neural Networks, Elsevier Science Publisher B.V., Amsterdam, 103-137. https://doi.org/10.1016/S0924-6509(08)70036-9
- 16. Wu, W. and Shao, Z. (2003) Convergence of an Online Gradient Method for Continuous Perceptrons with Linearly Separable Training Patterns. Applied Mathematics Letters, 16, 999-1002. https://doi.org/10.1016/S0893-9659(03)90086-3
- 17. Wu, W. and Xu, Y.S. (2002) Deterministic Convergence of an Online Gradient Method for Neural Networks. Journal of Computational and Applied Mathematics, 144, 335-347. https://doi.org/10.1016/S0377-0427(01)00571-4
- 18. Wu, W., Feng, G.R., Li, X. and Xu, Y.S. (2005) Deterministic Convergence of an Online Gradient Method for BP Neural Networks. IEEE Transactions on Neural Networks, 16, 1-9. https://doi.org/10.1109/TNN.2005.844903
- 19. Wu, W., Feng, G. and Li, X. (2002) Training Multilayer Perceptrons via Minimization of Ridge Functions. Advances in Computational Mathematics, 17, 331-347. https://doi.org/10.1023/A:1016249727555
- 20. Wu, W., Shao, H. and Qu, D. (2005) Strong Convergence for Gradient Methods for BP Networks Training. Proceedings of 2005 International Conference on Neural Networks and Brain, Beijing, 13-15 October 2005, 332-334. https://doi.org/10.1109/ICNNB.2005.1614626
- 21. Zhang, N., Wu, W. and Zheng, G. (2006) Convergence of Gradient Method with Momentum for Two-Layer Feedforward Neural Networks. IEEE Transactions on Neural Networks, 17, 522-525. https://doi.org/10.1109/TNN.2005.863460
- 22. Shao, H., Wu, W. and Liu, L. (2007) Convergence and Monotonicity of an Online Gradient Method with Penalty for Neural Networks. WSEAS Transactions on Mathematics, 6, 469-476.
- 23. Xu, Z.B., Zhang, R. and Jing, W.-F. (2009) When Does Online BP Training Converge? IEEE Transactions on Neural Networks, 20, 1529-1539. https://doi.org/10.1109/TNN.2009.2025946
- 24. Zhiteckii, L.S., Azarskov, V.N. and Nikolaienko, S.A. (2012) Convergence of Learning Algorithms in Neural Networks for Adaptive Identification of Nonlinearly Parameterized Systems. Proceedings of the 16th IFAC Symposium on System Identification, Brussels, 11-13 July 2012, 1593-1598. https://doi.org/10.3182/20120711-3-BE-2027.00150
- 25. Li, Z., Wu, W. and Tian, Y. (2004) Convergence of an Online Gradient Method for FNN with Stochastic Inputs. Journal of Computational and Applied Mathematics, 163, 165-176. https://doi.org/10.1016/j.cam.2003.08.062
- 26. White, H. (1989) Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Neural Network Models. Journal of the American Statistical Association, 84, 1003-1013. https://doi.org/10.1080/01621459.1989.10478865
- 27. Fine, T.L. and Mukherjee, S. (1999) Parameter Convergence and Learning Curves for Neural Networks. Neural Computation, 11, 749-769. https://doi.org/10.1162/089976699300016647
- 28. Finnoff, W. (1994) Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm and Resistance to Local Minima. Neural Computation, 6, 285-295. https://doi.org/10.1162/neco.1994.6.2.285
- 29. Oh, S.H. (1997) Improving the Error BP Algorithm with a Modified Error Function. IEEE Transactions on Neural Networks, 8, 799-803.
- 30. Kuan, C.M. and Hornik, K. (1991) Convergence of Learning Algorithms with Constant Learning Rates. IEEE Transactions on Neural Networks, 2, 484-489. https://doi.org/10.1109/72.134285
- 31. Gaivoronski, A.A. (1994) Convergence Properties of Backpropagation for Neural Nets via Theory of Stochastic Gradient Methods. Optimization Methods and Software, 4, 117-134. https://doi.org/10.1080/10556789408805582
- 32. Tadic, V. and Stankovic, S. (2000) Learning in Neural Networks by Normalized Stochastic Gradient Algorithm: Local Convergence. Proceedings of the 5th Seminar on Neural Network Applications in Electrical Engineering, Yugoslavia, 26-27 September 2000, 11-17. https://doi.org/10.1109/NEUREL.2000.902375
- 33. Zhang, H., Wu, W., Liu, F. and Yao, M. (2009) Boundedness and Convergence of Online Gradient Method with Penalty for Feedforward Neural Networks. IEEE Transactions on Neural Networks, 20, 1050-1054. https://doi.org/10.1109/TNN.2009.2020848
- 34. Loeve, M. (1963) Probability Theory. Springer-Verlag, New York.
- 35. Azarskov, V.N., Kucherov, D.P., Nikolaienko, S.A. and Zhiteckii, L.S. (2015) Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems. American Journal of Neural Networks and Applications, 1, 1-10. https://doi.org/10.11648/j.ajnna.20150101.11
- 36. Polyak, B.T. (1976) Convergence and Convergence Rate of Iterative Stochastic Algorithms, I: General Case. Automation and Remote Control, 12, 1858-1868.
- 37. Tsypkin, Ya.Z. (1971) Adaptation and Learning in Automatic Systems. Academic Press, New York.