Least Squares Method from the View Point of Deep Learning II: Generalization

Vol.08 No.09(2018), Article ID:87483,10 pages
10.4236/apm.2018.89048


Kazuyuki Fujii1,2

1International College of Arts and Sciences, Yokohama City University, Yokohama, Japan

2Department of Mathematical Sciences, Shibaura Institute of Technology, Saitama, Japan

Received: August 30, 2018; Accepted: September 22, 2018; Published: September 25, 2018

ABSTRACT

The least squares method is one of the most fundamental methods in Statistics to estimate correlations among various data. On the other hand, Deep Learning is the heart of Artificial Intelligence and is a learning method based on the least squares method, in which a parameter called the learning rate plays an important role. It is in general very hard to determine its value. In this paper we generalize the preceding paper [K. Fujii: Least squares method from the view point of Deep Learning: Advances in Pure Mathematics, 8, 485-493, 2018] and give an admissible value of the learning rate, which is easily obtained.

Keywords:

Least Squares Method, Statistics, Deep Learning, Learning Rate, Gerschgorin’s Theorem

1. Introduction

This paper is a sequel to the preceding paper [1].

The least squares method in Statistics plays an important role in almost all disciplines, from Natural Science to Social Science. When we want to find properties, tendencies or correlations hidden in huge and complicated data, we usually employ this method. See for example [2].

On the other hand, Deep Learning is the heart of Artificial Intelligence and will become one of the most important fields in Data Science in the near future. As to Deep Learning, see for example [3]-[10].

Deep Learning may be stated as a successive learning method based on the least squares method. Therefore, it is natural and instructive to reconsider the least squares method from the view point of Deep Learning. We carry out thoroughly the calculation of the successive approximation called the gradient descent sequence, in which a parameter called the learning rate plays an important role.

One of the main points is to determine the range of the learning rate, which is a very hard problem [8]. We showed in [1] that a difference in methods between Statistics and Deep Learning leads to different results when the learning rate changes.

We generalize the preceding results to the case of the least squares method by polynomial approximation. Our results may give a new insight into both Statistics and Data Science.

2. Least Squares Method

Let us explain the least squares method by polynomial approximation [9]. The model function $f\left(x\right)$ is a polynomial in x of degree M given by

$f\left(x\right)={w}_{0}+{w}_{1}x+\cdots +{w}_{M}{x}^{M}=\sum _{j=0}^{M}\text{ }{w}_{j}{x}^{j}.$ (1)

For N pieces of two dimensional real data

$\left\{\left({x}_{1},{y}_{1}\right),\left({x}_{2},{y}_{2}\right),\cdots ,\left({x}_{N},{y}_{N}\right)\right\}$

we assume that their scatter plot is given like Figure 1.

The coefficients of (1)

$w={\left({w}_{0},{w}_{1},\cdots ,{w}_{M}\right)}^{\text{T}}$ (2)

must be determined by the data set (T denotes the transposition of a vector or a matrix).

For this set of data the error function is given by

$E\left(w\right)=\frac{1}{2}\sum _{i=1}^{N}{\left\{{y}_{i}-f\left({x}_{i}\right)\right\}}^{2}=\frac{1}{2}\sum _{i=1}^{N}{\left({y}_{i}-\sum _{j=0}^{M}{w}_{j}{x}_{i}^{j}\right)}^{2}.$ (3)

Figure 1. Scatter plot.

The aim of the least squares method is to minimize the error function (3) with respect to $w$ in (2). Usually the minimum is obtained by solving the simultaneous equations

$\left\{\begin{array}{l}\frac{\partial E\left(w\right)}{\partial {w}_{0}}=0,\\ \frac{\partial E\left(w\right)}{\partial {w}_{1}}=0,\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}⋮\\ \frac{\partial E\left(w\right)}{\partial {w}_{M}}=0.\end{array}$

However, in this paper another approach based on quadratic form is given, which is instructive.

Let us calculate the error function (3). By using the definition of inner product

$〈a|b〉={a}_{1}{b}_{1}+{a}_{2}{b}_{2}+\cdots +{a}_{n}{b}_{n}={a}^{\text{T}}b$

it is not difficult to see

$2E\left(w\right)=〈y-\Phi w|y-\Phi w〉$ (4)

where

$y={\left({y}_{1},{y}_{2},\cdots ,{y}_{N}\right)}^{\text{T}},\text{ }w={\left({w}_{0},{w}_{1},\cdots ,{w}_{M}\right)}^{\text{T}}$

and

$\Phi =\left(\begin{array}{ccccc}1& {x}_{1}^{1}& {x}_{1}^{2}& \cdots & {x}_{1}^{M}\\ 1& {x}_{2}^{1}& {x}_{2}^{2}& \cdots & {x}_{2}^{M}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 1& {x}_{N}^{1}& {x}_{N}^{2}& \cdots & {x}_{N}^{M}\end{array}\right).$

Here we make an important

Assumption $N>M+1$ and $\text{rank}\left(\Phi \right)=M+1$ (full rank).

Let us deform (4). From

$\begin{array}{c}〈y-\Phi w|y-\Phi w〉={\left(y-\Phi w\right)}^{\text{T}}\left(y-\Phi w\right)\\ =\left({y}^{\text{T}}-{w}^{\text{T}}{\Phi }^{\text{T}}\right)\left(y-\Phi w\right)\\ =\left({w}^{\text{T}}{\Phi }^{\text{T}}-{y}^{\text{T}}\right)\left(\Phi w-y\right)\\ ={w}^{\text{T}}{\Phi }^{\text{T}}\Phi w-{w}^{\text{T}}{\Phi }^{\text{T}}y-{y}^{\text{T}}\Phi w+{y}^{\text{T}}y\end{array}$

we set for simplicity

$x=w,\text{ }A={\Phi }^{\text{T}}\Phi ,\text{ }b={\Phi }^{\text{T}}y,\text{ }c={y}^{\text{T}}y.$

Namely, we have a general quadratic form

$2E\left(w\right)=〈y-\Phi w|y-\Phi w〉={x}^{\text{T}}Ax-{x}^{\text{T}}b-{b}^{\text{T}}x+c.$ (5)

On the other hand, the deformation of (5) is well-known.

Formula For a symmetric and invertible matrix $A$ (i.e. ${A}^{-1}$ exists) we have

${x}^{\text{T}}Ax-{x}^{\text{T}}b-{b}^{\text{T}}x+c={\left(x-{A}^{-1}b\right)}^{\text{T}}A\left(x-{A}^{-1}b\right)-{b}^{\text{T}}{A}^{-1}b+c.$ (6)

The proof is easy. Since ${A}^{\text{T}}=A⇒{\left({A}^{-1}\right)}^{\text{T}}={A}^{-1}$ we obtain

$\begin{array}{c}{\left(x-{A}^{-1}b\right)}^{\text{T}}A\left(x-{A}^{-1}b\right)=\left({x}^{\text{T}}-{b}^{\text{T}}{A}^{-1}\right)A\left(x-{A}^{-1}b\right)\\ ={x}^{\text{T}}Ax-{x}^{\text{T}}b-{b}^{\text{T}}x+{b}^{\text{T}}{A}^{-1}b\end{array}$

and this gives (6).
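The identity (6) can also be checked numerically. The following sketch is not part of the original text; it verifies (6) for a randomly generated symmetric positive definite matrix:

```python
# Numerical check of the completing-the-square identity (6)
# for a random symmetric positive definite (hence invertible) A.
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = B.T @ B + n * np.eye(n)                # symmetric and invertible
b = rng.standard_normal(n)
c = 1.5
x = rng.standard_normal(n)

lhs = x @ A @ x - x @ b - b @ x + c        # left-hand side of (6)
u = x - np.linalg.solve(A, b)              # x - A^{-1} b
rhs = u @ A @ u - b @ np.linalg.solve(A, b) + c
assert np.isclose(lhs, rhs)
```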

Therefore, our case becomes

$\begin{array}{c}2E\left(w\right)={\left\{w-{\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y\right\}}^{\text{T}}{\Phi }^{\text{T}}\Phi \left\{w-{\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{y}^{\text{T}}\Phi {\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y+{y}^{\text{T}}y\end{array}$ (7)

because ${\Phi }^{\text{T}}\Phi$ is symmetric and invertible by the assumption.

If we choose

$w={\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y$ (8)

then the minimum is given by

$2E{\left(w\right)}_{\mathrm{min}}={y}^{\text{T}}y-{y}^{\text{T}}\Phi {\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y={y}^{\text{T}}\left\{{E}_{N}-\Phi {\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}\right\}y$ (9)

where ${E}_{N}$ is the N-dimensional identity matrix.

Our method is simple and clear (“smart” in our terminology).
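The closed-form solution (8) is also easy to compute numerically. The sketch below uses illustrative data (not from the paper); `np.vander` with `increasing=True` builds the matrix $\Phi$ , and the result is cross-checked against NumPy's least-squares routine:

```python
# Sketch of the closed-form solution (8): w = (Phi^T Phi)^{-1} Phi^T y.
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 20                                 # degree M, with N > M + 1
x = np.linspace(-1.0, 1.0, N)
y = 1.0 + 2.0 * x - x**3 + 0.05 * rng.standard_normal(N)

Phi = np.vander(x, M + 1, increasing=True)   # columns 1, x, x^2, ..., x^M
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)  # equation (8)

# Cross-check against NumPy's least-squares solver.
w_lstsq, *_ = np.linalg.lstsq(Phi, y, rcond=None)
assert np.allclose(w, w_lstsq)
```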

3. Least Squares Method from Deep Learning

In this section we reconsider the least squares method in Section 2 from the view point of Deep Learning.

First we arrange the data in Section 2 like

$\text{Input}\text{\hspace{0.17em}}\text{data}:\left\{\left(1,{x}_{j},{x}_{j}^{2},\cdots ,{x}_{j}^{M}\right)|1\le j\le N\right\}$

$\text{Teacher}\text{\hspace{0.17em}}\text{signal}:\left\{{y}_{1},{y}_{2},\cdots ,{y}_{N}\right\}$

and consider a simple neuron model in [11] (see Figure 2).

Here we use the polynomial (1) instead of the sigmoid function $z=\sigma \left(x\right)$ .

In this case the square error function becomes

$L\left(w\right)=\frac{1}{2}〈y-\Phi w|y-\Phi w〉=\frac{1}{2}\left({w}^{\text{T}}{\Phi }^{\text{T}}\Phi w-{w}^{\text{T}}{\Phi }^{\text{T}}y-{y}^{\text{T}}\Phi w+{y}^{\text{T}}y\right).$ (10)

Figure 2. Simple neuron model.

We in general use $L\left(w\right)$ instead of $E\left(w\right)$ in (3).

Our aim is also to determine the parameters $w$ in order to minimize $L\left(w\right)$ . However, the procedure is different from the least squares method in Section 2. This is an important and interesting point.

The parameters $w$ are determined successively by the gradient descent method (see for example [12]): For $t=0,1,2,\cdots$

$w\left(0\right)\to w\left(1\right)\to \cdots \to w\left(t\right)\to w\left(t+1\right)\to \cdots$

and

$w\left(t+1\right)=w\left(t\right)-ϵ\frac{\partial L}{\partial w\left(t\right)}$ (11)

where

$L=L\left(w\left(t\right)\right)=\frac{1}{2}\left\{w{\left(t\right)}^{\text{T}}{\Phi }^{\text{T}}\Phi w\left(t\right)-w{\left(t\right)}^{\text{T}}{\Phi }^{\text{T}}y-{y}^{\text{T}}\Phi w\left(t\right)+{y}^{\text{T}}y\right\}$ (12)

and $ϵ\left(0<ϵ<1\right)$ is a small parameter called the learning rate.

The initial value $w\left(0\right)$ is given appropriately. Note that t denotes discrete time and T denotes transposition.

Let us calculate (11) explicitly. Since

$\frac{\partial L}{\partial w\left(t\right)}={\Phi }^{\text{T}}\Phi w\left(t\right)-{\Phi }^{\text{T}}y$

from (12) we have

$w\left(t+1\right)=w\left(t\right)-ϵ\left\{{\Phi }^{\text{T}}\Phi w\left(t\right)-{\Phi }^{\text{T}}y\right\}=\left({E}_{M+1}-ϵ{\Phi }^{\text{T}}\Phi \right)w\left(t\right)+ϵ{\Phi }^{\text{T}}y.$ (13)

This recursion is easily solved to give

$w\left(t\right)={\left({E}_{M+1}-ϵ{\Phi }^{\text{T}}\Phi \right)}^{t}w\left(0\right)+\left\{{E}_{M+1}-{\left({E}_{M+1}-ϵ{\Phi }^{\text{T}}\Phi \right)}^{t}\right\}{\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y$ (14)

for $t=0,1,2,\cdots$ .

The proof is left to readers.
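The closed form (14) can be checked numerically as well. The following sketch (illustrative data, not from the paper) iterates (13) directly and compares the result with (14):

```python
# Verify that the closed form (14) reproduces the iteration (13).
import numpy as np

M, N = 2, 15
x = np.linspace(0.0, 1.0, N)
y = np.sin(2 * np.pi * x)
Phi = np.vander(x, M + 1, increasing=True)

A = Phi.T @ Phi
eps = 1.0 / np.linalg.eigvalsh(A).max()    # 0 < eps*lambda_j <= 1 for all j
E = np.eye(M + 1)
w0 = np.zeros(M + 1)                       # initial value w(0)

# Iterate (13) directly.
w = w0.copy()
T = 50
for _ in range(T):
    w = (E - eps * A) @ w + eps * Phi.T @ y

# Closed form (14).
B = np.linalg.matrix_power(E - eps * A, T)
w_closed = B @ w0 + (E - B) @ np.linalg.solve(A, Phi.T @ y)
assert np.allclose(w, w_closed)
```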

Since this is not a final form let us continue the calculation. From (14) we have

$\underset{t\to \infty }{\mathrm{lim}}w\left(t\right)={\left({\Phi }^{\text{T}}\Phi \right)}^{-1}{\Phi }^{\text{T}}y$ (15)

if

$\underset{t\to \infty }{\mathrm{lim}}{\left({E}_{M+1}-ϵ{\Phi }^{\text{T}}\Phi \right)}^{t}={O}_{M+1}$ (16)

where ${O}_{M+1}$ is the $\left(M+1\right)$-dimensional zero matrix. Equation (15) is just the equation (8) and it is independent of $ϵ$ .

Let us evaluate (14) further. The matrix ${\Phi }^{\text{T}}\Phi$ is positive definite, so all eigenvalues are positive. This can be shown as follows. Let us consider the eigenvalue equation

${\Phi }^{\text{T}}\Phi v=\lambda v\text{ }\left(v\ne 0\right).$

Then we have

$\lambda 〈v|v〉=〈\lambda v|v〉=〈{\Phi }^{\text{T}}\Phi v|v〉=〈\Phi v|\Phi v〉>0⇒\lambda >0.$

Therefore we can arrange all eigenvalues like

${\lambda }_{1}\ge {\lambda }_{2}\ge \cdots \ge {\lambda }_{M+1}>0.$

Since ${\Phi }^{\text{T}}\Phi$ is symmetric, it is diagonalized as

${\Phi }^{\text{T}}\Phi =QD{Q}^{\text{T}}$ (17)

where Q is an element in $O\left(M+1\right)$ ( ${Q}^{\text{T}}={Q}^{-1}$ ) and D is a diagonal matrix

$D=\left(\begin{array}{cccc}{\lambda }_{1}& & & \\ & {\lambda }_{2}& & \\ & & \ddots & \\ & & & {\lambda }_{M+1}\end{array}\right).$

See for example [13].

By substituting (17) into (14) and using the equation

${E}_{M+1}-ϵ{\Phi }^{\text{T}}\Phi =Q\left({E}_{M+1}-ϵD\right){Q}^{\text{T}}=Q\left(\begin{array}{cccc}1-ϵ{\lambda }_{1}& & & \\ & 1-ϵ{\lambda }_{2}& & \\ & & \ddots & \\ & & & 1-ϵ{\lambda }_{M+1}\end{array}\right){Q}^{\text{T}}$

we finally obtain

Theorem I The solution (14) is written explicitly as

$\begin{array}{c}w\left(t\right)=Q\left(\begin{array}{cccc}{\left(1-ϵ{\lambda }_{1}\right)}^{t}& & & \\ & {\left(1-ϵ{\lambda }_{2}\right)}^{t}& & \\ & & \ddots & \\ & & & {\left(1-ϵ{\lambda }_{M+1}\right)}^{t}\end{array}\right){Q}^{\text{T}}w\left(0\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+Q\left(\begin{array}{cccc}\frac{1-{\left(1-ϵ{\lambda }_{1}\right)}^{t}}{{\lambda }_{1}}& & & \\ & \frac{1-{\left(1-ϵ{\lambda }_{2}\right)}^{t}}{{\lambda }_{2}}& & \\ & & \ddots & \\ & & & \frac{1-{\left(1-ϵ{\lambda }_{M+1}\right)}^{t}}{{\lambda }_{M+1}}\end{array}\right){Q}^{\text{T}}{\Phi }^{\text{T}}y.\end{array}$ (18)

This is our main result.
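Formula (18) can be checked against the direct iteration of (13). The sketch below uses illustrative data and `numpy.linalg.eigh` for the diagonalization (17):

```python
# Check Theorem I: formula (18) versus direct iteration of (13).
import numpy as np

M, N = 2, 10
x = np.linspace(-1.0, 1.0, N)
y = np.exp(x)
Phi = np.vander(x, M + 1, increasing=True)
A = Phi.T @ Phi

lam, Q = np.linalg.eigh(A)                 # A = Q diag(lam) Q^T, cf. (17)
eps = 1.0 / lam.max()                      # inside the admissible range
t = 25
w0 = np.ones(M + 1)

# Right-hand side of (18); diagonal factors act componentwise.
d1 = (1.0 - eps * lam) ** t
d2 = (1.0 - (1.0 - eps * lam) ** t) / lam
w18 = Q @ (d1 * (Q.T @ w0)) + Q @ (d2 * (Q.T @ (Phi.T @ y)))

# Direct gradient descent iteration (13).
w = w0.copy()
for _ in range(t):
    w = (np.eye(M + 1) - eps * A) @ w + eps * Phi.T @ y
assert np.allclose(w, w18)
```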

Next, let us show how to choose the learning rate $ϵ\left(0<ϵ<1\right)$ , which is a very important problem in Deep Learning [7] [8].

Let us remember

${\lambda }_{1}\ge {\lambda }_{2}\ge \cdots \ge {\lambda }_{M+1}>0.$

From (16) and (18) the equations

$\underset{t\to \infty }{\mathrm{lim}}{\left({E}_{M+1}-ϵ{\Phi }^{\text{T}}\Phi \right)}^{t}={O}_{M+1}⇔\underset{t\to \infty }{\mathrm{lim}}{\left(1-ϵ{\lambda }_{j}\right)}^{t}=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{for}\text{ }\text{\hspace{0.17em}}1\le j\le M+1$ (19)

determine the range of $ϵ$ . Noting

$|1-x|<1\text{ }\left(⇔0<x<2\right)$

and

$ϵ{\lambda }_{1}\ge ϵ{\lambda }_{2}\ge \cdots \ge ϵ{\lambda }_{M+1}>0$

we obtain

Theorem II The learning rate $ϵ$ must satisfy an inequality

$0<ϵ{\lambda }_{1}<2⇔0<ϵ<\frac{2}{{\lambda }_{1}}.$ (20)

The greater the value of $ϵ$ , the faster the gradient descent (11) converges, so long as the convergence (19) is guaranteed. Let us note that the choice of the initial value $w\left(0\right)$ is irrelevant when the convergence condition (20) is satisfied.
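Theorem II can be illustrated numerically. In the following sketch (illustrative data, not from the paper), the iteration converges for $ϵ$ just below $2/{\lambda }_{1}$ and blows up just above it:

```python
# Illustrate Theorem II: (13) converges iff 0 < eps < 2/lambda_1.
import numpy as np

M, N = 2, 12
x = np.linspace(0.0, 1.0, N)
y = x**2 - x
Phi = np.vander(x, M + 1, increasing=True)
A = Phi.T @ Phi
lam1 = np.linalg.eigvalsh(A).max()         # lambda_1
w_star = np.linalg.solve(A, Phi.T @ y)     # the limit (15), i.e. (8)

def run(eps, steps):
    """Run the gradient descent (11) from w(0) = (1, ..., 1)^T."""
    w = np.ones(M + 1)
    for _ in range(steps):
        w = w - eps * (A @ w - Phi.T @ y)
    return w

# Inside the bound (20): converges to (15).
assert np.allclose(run(1.9 / lam1, 20000), w_star)
# Outside the bound: the iteration diverges.
assert np.linalg.norm(run(2.1 / lam1, 300) - w_star) > 1e6
```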

Comment For example, if we choose $ϵ$ like

$\frac{2}{{\lambda }_{1}}<ϵ<1$

then we cannot recover (15), which shows a difference in methods between Statistics and Deep Learning.

4. How to Estimate the Learning Rate

How do we calculate ${\lambda }_{1}$ ? Since $\left\{{\lambda }_{j}\right\}$ are the eigenvalues of the matrix ${\Phi }^{\text{T}}\Phi$ , they satisfy the equation

$F\left({\lambda }_{j}\right)=0,\text{\hspace{0.17em}}{\lambda }_{1}\ge {\lambda }_{2}\ge \cdots \ge {\lambda }_{M+1}>0$

where $F\left(\lambda \right)$ is the characteristic polynomial of ${\Phi }^{\text{T}}\Phi$ given by

$F\left(\lambda \right)=|\lambda {E}_{M+1}-{\Phi }^{\text{T}}\Phi |.$ (21)

This is abstract, so let us deform (21). For simplicity we write $\Phi$ as

$\Phi =\left(\begin{array}{ccccc}1& {x}_{1}^{1}& {x}_{1}^{2}& \cdots & {x}_{1}^{M}\\ 1& {x}_{2}^{1}& {x}_{2}^{2}& \cdots & {x}_{2}^{M}\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 1& {x}_{N}^{1}& {x}_{N}^{2}& \cdots & {x}_{N}^{M}\end{array}\right)\equiv \left({x}^{\left(0\right)},{x}^{\left(1\right)},{x}^{\left(2\right)},\cdots ,{x}^{\left(M\right)}\right).$ (22)

Then it is easy to see

${\Phi }^{\text{T}}\Phi ={\left(〈{x}^{\left(i\right)}|{x}^{\left(j\right)}〉\right)}_{0\le i,j\le M},\text{ }〈{x}^{\left(i\right)}|{x}^{\left(j\right)}〉=\sum _{k=1}^{N}\text{ }{x}_{k}^{i+j}$

where the notation $〈a|b〉$ is the (real) inner product of vectors.

For clarity let us write down (21) explicitly.

$F\left(\lambda \right)=|\begin{array}{cccc}\lambda -〈{x}^{\left(0\right)}|{x}^{\left(0\right)}〉& -〈{x}^{\left(0\right)}|{x}^{\left(1\right)}〉& \cdots & -〈{x}^{\left(0\right)}|{x}^{\left(M\right)}〉\\ -〈{x}^{\left(1\right)}|{x}^{\left(0\right)}〉& \lambda -〈{x}^{\left(1\right)}|{x}^{\left(1\right)}〉& \cdots & -〈{x}^{\left(1\right)}|{x}^{\left(M\right)}〉\\ ⋮& ⋮& \ddots & ⋮\\ -〈{x}^{\left(M\right)}|{x}^{\left(0\right)}〉& -〈{x}^{\left(M\right)}|{x}^{\left(1\right)}〉& \cdots & \lambda -〈{x}^{\left(M\right)}|{x}^{\left(M\right)}〉\end{array}|.$

As far as we know there is no viable method to determine the greatest root of $F\left(\lambda \right)=0$ if M is very large1. Therefore, let us be satisfied with an approximate value which is both greater than ${\lambda }_{1}$ and easy to calculate.

For this purpose Gerschgorin’s theorem is very useful2. Let $A=\left({a}_{ij}\right)$ be an $n×n$ complex (real in our case) matrix, and we set

${R}_{i}=\sum _{j=1,\text{\hspace{0.17em}}j\ne i}^{n}|{a}_{ij}|$ (23)

and

$D\left({a}_{ii};{R}_{i}\right)=\left\{z\in C||z-{a}_{ii}|\le {R}_{i}\right\}$ (24)

for each i. This is a closed disc centered at ${a}_{ii}$ with radius ${R}_{i}$ called the Gerschgorin’s disc.

Theorem (Gerschgorin [14]) For any eigenvalue $\lambda$ of A we have

$\lambda \in \underset{i=1}{\overset{n}{\cup }}\text{ }D\left({a}_{ii};{R}_{i}\right).$ (25)

The proof is simple. See for example [7].
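The theorem is easy to confirm numerically. The sketch below (not part of the paper; the matrix is random) checks that every eigenvalue lies in the union of discs (25):

```python
# Check Gerschgorin's theorem (25) on a random symmetric matrix.
import numpy as np

rng = np.random.default_rng(4)
n = 5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                          # symmetric, so eigenvalues are real

centers = np.diag(A)                       # a_ii
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # R_i of (23)

# Every eigenvalue lies in some disc D(a_ii; R_i).
for lam in np.linalg.eigvalsh(A):
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)
```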

Our case is real and $n=M+1$ and

$A={\Phi }^{\text{T}}\Phi ={\left(〈{x}^{\left(i\right)}|{x}^{\left(j\right)}〉\right)}_{0\le i,j\le M}.$

Therefore, all eigenvalues $\left\{\lambda \right\}$ satisfy

$\lambda \in \underset{i=1}{\overset{n}{\cup }}\text{ }D\left({a}_{ii};{R}_{i}\right)=\underset{i=1}{\overset{M+1}{\cup }}\left[{a}_{ii}-{R}_{i},{a}_{ii}+{R}_{i}\right]$ (26)

where $\left[A,B\right]$ is a closed interval and

${a}_{ii}=〈{x}^{\left(i-1\right)}|{x}^{\left(i-1\right)}〉\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}{R}_{i}=\sum _{k=1,\text{\hspace{0.17em}}k\ne i}^{M+1}|〈{x}^{\left(i-1\right)}|{x}^{\left(k-1\right)}〉|.$

If we define

${\mathcal{F}}_{M+1}=\underset{1\le i\le M+1}{\mathrm{max}}\left\{{a}_{ii}+{R}_{i}\right\}$ (27)

then from (26) it is easy to see

${\lambda }_{1}\le {\mathcal{F}}_{M+1}⇒\frac{2}{{\mathcal{F}}_{M+1}}\le \frac{2}{{\lambda }_{1}}.$

Thus we arrive at an admissible value of the learning rate $ϵ$ which is easily obtained.

Theorem III An admissible value of $ϵ$ is

$ϵ=\frac{2}{{\mathcal{F}}_{M+1}}.$ (28)
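Theorem III is straightforward to apply in practice. The sketch below (illustrative data, not from the paper) computes ${\mathcal{F}}_{M+1}$ of (27) as the maximum absolute row sum of ${\Phi }^{\text{T}}\Phi$ (valid because its diagonal entries ${a}_{ii}$ are non-negative) and confirms that the resulting $ϵ$ satisfies (20):

```python
# Sketch of Theorem III: eps = 2/F_{M+1} without computing lambda_1.
import numpy as np

M, N = 4, 30
x = np.linspace(0.0, 1.0, N)
Phi = np.vander(x, M + 1, increasing=True)
A = Phi.T @ Phi                            # a_ii >= 0, so a_ii + R_i is
F = np.abs(A).sum(axis=1).max()            # the max absolute row sum = F_{M+1}
eps = 2.0 / F                              # the admissible value (28)

lam1 = np.linalg.eigvalsh(A).max()
assert lam1 <= F                           # F_{M+1} >= lambda_1 by Gerschgorin
assert 0.0 < eps <= 2.0 / lam1             # eps satisfies (20)
```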

Let us show an example in the case of $M=1$ ([1]), which is very instructive for non-experts.

Example In this case it is easy to see that

${\Phi }^{\text{T}}\Phi =\left(\begin{array}{cc}N& {\sum }_{i=1}^{N}{x}_{i}\\ {\sum }_{i=1}^{N}{x}_{i}& {\sum }_{i=1}^{N}{x}_{i}^{2}\end{array}\right)\equiv \left(\begin{array}{cc}a& x\\ x& X\end{array}\right)$

for simplicity. Moreover, we may assume $x>0$ . Then from (21) we have

$F\left(\lambda \right)=|\lambda {E}_{2}-{\Phi }^{\text{T}}\Phi |={\lambda }^{2}-\left(a+X\right)\lambda +\left(aX-{x}^{2}\right)$

and

$\begin{array}{c}{\lambda }_{1}=\frac{a+X+\sqrt{{\left(a+X\right)}^{2}-4\left(aX-{x}^{2}\right)}}{2}\\ =\frac{a+X+\sqrt{{\left(a-X\right)}^{2}+4{x}^{2}}}{2}.\end{array}$

On the other hand, from (27) we have

${\mathcal{F}}_{2}=\mathrm{max}\left\{a+x,X+x\right\}$

because $x>0$ .

Then it is easy to show

$\mathrm{max}\left\{a+x,X+x\right\}\ge \frac{a+X+\sqrt{{\left(a-X\right)}^{2}+4{x}^{2}}}{2}.$

To check this inequality is left to readers. Therefore, from (28) the admissible value becomes

$ϵ=\frac{2}{\mathrm{max}\left\{a+x,X+x\right\}}.$
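The inequality above, whose check is left to readers in the text, can also be verified numerically. The sketch below (not part of the paper) tests it on random positive data; the variable `s` stands for $x={\sum }_{i=1}^{N}{x}_{i}$ to avoid clashing with the data array:

```python
# Numerical check of max{a+x, X+x} >= lambda_1 in the M = 1 case.
import numpy as np

rng = np.random.default_rng(5)
for _ in range(100):
    xs = rng.uniform(0.1, 2.0, size=10)    # positive data, so x > 0
    a = float(len(xs))                     # a = N
    s = xs.sum()                           # s plays the role of x above
    X = (xs**2).sum()
    lam1 = (a + X + np.sqrt((a - X)**2 + 4 * s**2)) / 2
    assert max(a + s, X + s) >= lam1 - 1e-12
```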

We emphasize once more that ${\mathcal{F}}_{M+1}$ is easy to evaluate, while to calculate ${\lambda }_{1}$ is very hard if M is large.

5. Concluding Remarks

In this paper we have discussed the least squares method by polynomial approximation from the view point of Deep Learning and carried out the calculation of the gradient descent thoroughly. A difference in methods between Statistics and Deep Learning leads to different results when the learning rate $ϵ$ is changed. As far as we know, Theorem III is the first result to provide an admissible value of $ϵ$ .

Deep Learning plays an essential role in Data Science and maybe in almost all fields of Science. Therefore it is desirable for undergraduates to master it in the early stages. To master it they must study Calculus, Linear Algebra and Statistics. My textbook [7] is recommended.

Acknowledgements

I wish to thank Ryu Sasaki for useful suggestions and comments.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Fujii, K. (2018) Least Squares Method from the View Point of Deep Learning II: Generalization. Advances in Pure Mathematics, 8, 782-791. https://doi.org/10.4236/apm.2018.89048

References

1. Fujii, K. (2018) Least Squares Method from the View Point of Deep Learning. Advances in Pure Mathematics, 8, 485-493.

2. Wikipedia: Least Squares. https://en.m.wikipedia.org/wiki/Least_Squares

3. Wikipedia: Deep Learning. https://en.m.wikipedia.org/wiki/Deep_Learning

4. Goodfellow, I., Bengio, Y. and Courville, A. (2016) Deep Learning. The MIT Press, Cambridge.

5. Patterson, J. and Gibson, A. (2017) Deep Learning: A Practitioner’s Approach. O’Reilly Media, Inc., Sebastopol.

6. Alpaydin, E. (2014) Introduction to Machine Learning. 3rd Edition, The MIT Press, Cambridge.

7. Fujii, K. (2018) Introduction to Mathematics for Understanding Deep Learning. Scientific Research Publishing Inc., Wuhan.

8. Okaya, T. (2015) Deep Learning (In Japanese). Kodansha Ltd., Tokyo.

9. Nakai, E. (2015) Introduction to Theory of Machine Learning (In Japanese). Gijutsu-Hyouron Co., Ltd., Tokyo.

10. Amari, S. (2016) Brain Heart Artificial Intelligence (In Japanese). Kodansha Ltd., Tokyo.

11. Fujii, K. (2018) Mathematical Reinforcement to the Minibatch of Deep Learning. Advances in Pure Mathematics, 8, 307-320. https://doi.org/10.4236/apm.2018.83016

13. Kasahara, K. (2000) Linear Algebra (In Japanese). Saiensu Ltd., Tokyo.

14. Gerschgorin, S. (1931) Über die Abgrenzung der Eigenwerte einer Matrix. Izv. Akad. Nauk. USSR Otd. Fiz.-Mat. Nauk, 6, 749-754.

NOTES

1 ${\Phi }^{\text{T}}\Phi$ is not a sparse matrix.

2In my opinion this theorem is not so popular. Why?