RETRACTED: The Riemann Hypothesis Holds True: A Rigorous Proof with Mean Formula and Extremum Principle

Applied Mathematics
Vol. 10, No. 8 (2019), Article ID 94527, 13 pages
DOI: 10.4236/am.2019.108049


Jinliang Wang

Research Institute for ESMD Method and Its Applications, College of Science, Qingdao University of Technology, Qingdao, China

Received: July 31, 2019; Accepted: August 19, 2019; Published: August 22, 2019

ABSTRACT

The Riemann hypothesis is a well-known mathematical problem that has remained unresolved for 160 years. Though its difficulty is daunting, the proof may be simple once a feasible approach is found. After reviewing the related explorations, together with many failed attempts, the road was finally cleared. The Riemann hypothesis is true, and the present article is a report on its rigorous proof. Here the contradiction method is adopted, and the Mean Formula and Extremum Principle for harmonic functions, together with the symmetric properties of the transformed function, play key parts in the proof.

Keywords:

Riemann Hypothesis, Riemann Zeta Function, Nontrivial Zeros, Critical Line, Number Theory, Riemann-Wang Hypothesis

1. Introduction

The well-known “Riemann hypothesis” was posed by the German mathematician Georg F. B. Riemann (1826-1866). He observed that the distribution of prime numbers among the natural numbers is very closely related to the behavior of an infinite series:

$\zeta \left(s\right)=\underset{n=1}{\overset{\infty }{\sum }}\frac{1}{{n}^{s}},$ (1)

which is usually called the Riemann Zeta function. Here s is a complex number whose real part is usually denoted by $\mathrm{Re}\left(s\right)$. If there is an ${s}_{0}$ that satisfies $\zeta \left({s}_{0}\right)=0$, then we call it a zero point of $\zeta \left(s\right)$. As stated by E. Bombieri, this function has real zero points at the negative even integers $-2,-4,\cdots$, which are referred to as the trivial zeros. The other, complex zero points are called the nontrivial zeros.
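Since the series (1) converges absolutely for $\mathrm{Re}\left(s\right)>1$, a plain partial sum already approximates $\zeta \left(s\right)$ there. A minimal sketch in Python (the truncation length and the helper name `zeta_partial` are illustrative choices, not from the paper):

```python
import math

# Partial sum of the Riemann zeta series; meaningful only for Re(s) > 1.
def zeta_partial(s, N=100_000):
    return sum(n ** (-s) for n in range(1, N + 1))

# Known special value: zeta(2) = pi^2 / 6 (the Basel problem).
print(abs(zeta_partial(2) - math.pi ** 2 / 6))  # truncation error, roughly 1/N
```

For $\mathrm{Re}\left(s\right)\le 1$ the series diverges, so the zeros in the critical strip must be studied through an analytic continuation, which is what the transformation (2) below serves.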

Riemann hypothesis: All the nontrivial zeros of $\zeta \left(s\right)$ have real part $\mathrm{Re}\left(s\right)=1/2$.

Is it true? By 1986 the first 1.5 billion nontrivial zeros of $\zeta \left(s\right)$ (arranged by increasing positive imaginary part) had been checked, and the results showed that they are simple and possess real part $\mathrm{Re}\left(s\right)=1/2$. So the Riemann hypothesis is very likely true. Its proof became a global hot topic when the British mathematician Michael F. Atiyah (1929-2019) reported his findings at the Heidelberg Laureate Forum on Sep. 24, 2018. Unfortunately, his approach does not work, and the Riemann hypothesis has remained an open problem. For the achievements on this topic, one can refer to the reviews in the related references. In the following, we focus our attention on the notions needed for our proof.
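This numerical evidence is easy to reproduce today. A sketch using the third-party mpmath library (not part of the paper); its `zetazero(n)` returns the n-th nontrivial zero, ordered by increasing positive imaginary part:

```python
from mpmath import mp, zeta, zetazero

mp.dps = 30  # working precision, in decimal digits

# The first few nontrivial zeros: each lies on the critical line Re(s) = 1/2
# and is indeed a zero of zeta.
for n in range(1, 4):
    rho = zetazero(n)
    assert abs(rho.real - mp.mpf(1) / 2) < mp.mpf(10) ** -25
    assert abs(zeta(rho)) < mp.mpf(10) ** -20
    print(n, rho)
```

The first zero sits at approximately $1/2+14.1347i$; mpmath locates it to the full working precision.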

In an epoch-making memoir published in 1859, Riemann introduced a transformation of the Zeta function:

$\xi \left(s\right)=\left(s-1\right){\text{π}}^{-\frac{s}{2}}\Gamma \left(\frac{s}{2}+1\right)\zeta \left(s\right),$ (2)

where $\Gamma$ is the gamma function defined by

$\Gamma \left(\frac{s}{2}+1\right)={\int }_{0}^{+\infty }\text{ }\text{ }{t}^{\left(\frac{s}{2}+1\right)-1}{\text{e}}^{-t}\text{d}t={\int }_{0}^{+\infty }\text{ }\text{ }{t}^{\frac{s}{2}}{\text{e}}^{-t}\text{d}t$ (3)

with the property $\Gamma \left(s/2+1\right)=\left(s/2\right)\Gamma \left(s/2\right)$. This transformation has the three advantages below:

1) The zero points of $\xi \left(s\right)$ coincide with the nontrivial zeros of $\zeta \left(s\right)$;

2) In the complex plane $ℂ$, $\xi \left(s\right)$ is analytic at any point $s\ne \infty$;

3) $\xi \left(s\right)$ possesses the symmetric property $\xi \left(s\right)=\xi \left(1-s\right)$.

The first property indicates that $\zeta \left(s\right)$ is equivalent to $\xi \left(s\right)$, so the proof of the Riemann hypothesis only requires showing $\mathrm{Re}\left(s\right)=1/2$ for the zero points of $\xi \left(s\right)$. The last two properties contain a lot of latent information which needs to be interpreted; fortunately, during this interpreting process we found the key to the door. For an analytic function, the real and imaginary parts are both harmonic functions satisfying the two-dimensional Laplace equation, so the Mean Formula and Extremum Principle for harmonic functions can be exploited. Meanwhile, the symmetric property of $\xi \left(s\right)$ can also be converted into symmetries of its real and imaginary parts. Combining these tools leads to a new approach: abstract complex analysis on $\zeta \left(s\right)$ is avoided, and the proof becomes an elementary one involving only two bivariate real functions.
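The symmetry 3) can be probed numerically before the formal argument. A sketch with the third-party mpmath library, implementing Equation (2) verbatim and checking $\xi \left(s\right)=\xi \left(1-s\right)$ at a few arbitrary sample points:

```python
from mpmath import mp, mpc, pi, gamma, zeta

mp.dps = 30

def xi(s):
    # Equation (2): xi(s) = (s - 1) * pi^(-s/2) * Gamma(s/2 + 1) * zeta(s)
    return (s - 1) * pi ** (-s / 2) * gamma(s / 2 + 1) * zeta(s)

# Property 3): xi(s) = xi(1 - s), including points inside the critical strip.
for s in [mpc(0.3, 7.2), mpc(2, -5), mpc(-1.5, 0.4)]:
    assert abs(xi(s) - xi(1 - s)) < mp.mpf(10) ** -18
```

Note that `zeta` here is mpmath's analytically continued zeta, so the check is meaningful even inside the strip where the series (1) itself diverges.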

Since for $\mathrm{Re}\left(s\right)>1$ the modulus of $\zeta \left(s\right)$ satisfies $|\zeta \left(s\right)|>0$ (the proof is provided in the references), there are no zero points of $\xi \left(s\right)$ in the region $\mathrm{Re}\left(s\right)>1$. Meanwhile, the symmetric property $\xi \left(s\right)=\xi \left(1-s\right)$ indicates that there are no zero points in the region $\mathrm{Re}\left(s\right)<0$, either. So there is a natural setting for the proof: all the zero points of $\xi \left(s\right)$, that is, all the nontrivial zeros of $\zeta \left(s\right)$, lie in the strip $0\le \mathrm{Re}\left(s\right)\le 1$. Though there have been improvements narrowing this strip, the original setting delimited by Riemann suffices for our proof.

The proof is rigorous. In the process, the contradiction method is adopted, and the symmetric properties of $\xi \left(s\right)$, together with the Mean Formula and Extremum Principle for harmonic functions, play key parts.

2. Convert to a Real-Valued Problem

By splitting the real and imaginary parts of $\xi \left(s\right)$, the complex-valued problem can be converted to a real-valued one.

It follows from Equations (1)-(3) that

$\begin{array}{c}\xi \left(s\right)=\left(s-1\right){\text{π}}^{-\frac{s}{2}}{\int }_{0}^{+\infty }\text{ }\text{ }{t}^{\frac{s}{2}}{\text{e}}^{-t}\text{d}t\text{ }\text{ }\underset{n=1}{\overset{\infty }{\sum }}{n}^{-s}\\ =\left(s-1\right){\text{e}}^{-\frac{s}{2}\mathrm{ln}\pi }{\int }_{0}^{+\infty }\text{ }\text{ }{\text{e}}^{\frac{s}{2}\mathrm{ln}t}{\text{e}}^{-t}\text{d}t\text{ }\text{ }\underset{n=1}{\overset{\infty }{\sum }}{\text{e}}^{-s\mathrm{ln}n}\\ =\left(s-1\right){\int }_{0}^{+\infty }\text{ }\text{ }{\text{e}}^{\frac{s}{2}\mathrm{ln}\left(\frac{t}{\text{π}}\right)-t}\text{d}t\text{ }\text{ }\underset{n=1}{\overset{\infty }{\sum }}{\text{e}}^{-s\mathrm{ln}n},\end{array}$ (4)

where only principal values are concerned. Setting $s=x+iy$ and denoting

$\begin{array}{l}\varphi \left(x,y\right)={\int }_{0}^{+\infty }\text{ }\text{ }{\text{e}}^{\frac{x}{2}\mathrm{ln}\left(\frac{t}{\text{π}}\right)-t}\mathrm{cos}\left[\frac{y}{2}\mathrm{ln}\left(\frac{t}{\text{π}}\right)\right]\text{d}t,\\ \psi \left(x,y\right)={\int }_{0}^{+\infty }\text{ }\text{ }{\text{e}}^{\frac{x}{2}\mathrm{ln}\left(\frac{t}{\text{π}}\right)-t}\mathrm{sin}\left[\frac{y}{2}\mathrm{ln}\left(\frac{t}{\text{π}}\right)\right]\text{d}t,\\ u\left(x,y\right)=\underset{n=1}{\overset{\infty }{\sum }}\text{ }\text{ }{\text{e}}^{-x\mathrm{ln}n}\mathrm{cos}\left(y\mathrm{ln}n\right),\\ v\left(x,y\right)=-\underset{n=1}{\overset{\infty }{\sum }}\text{ }\text{ }{\text{e}}^{-x\mathrm{ln}n}\mathrm{sin}\left(y\mathrm{ln}n\right),\end{array}$ (5)

Equation (4) is rewritten as $\xi \left(x+iy\right)=\left(x-1+iy\right)\left(\varphi +i\psi \right)\left(u+iv\right)$. Splitting off the real part $U\left(x,y\right)$ and imaginary part $V\left(x,y\right)$ of $\xi$ yields

$U=\left(x-1\right)\left(\varphi u-\psi v\right)-y\left(\varphi v+\psi u\right),$ (6)

$V=\left(x-1\right)\left(\varphi v+\psi u\right)+y\left(\varphi u-\psi v\right).$ (7)

According to 2) the function $\xi \left(s\right)$ is analytic and the Cauchy-Riemann conditions hold for its real and imaginary parts:

${U}_{x}={V}_{y},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{U}_{y}=-{V}_{x},$ (8)

where the subscripts denote partial derivatives, e.g. ${U}_{x}=\partial U/\partial x$. One can check these with Equation (6) and Equation (7). A direct consequence is:

Proposition 1. The gradients of U and V are orthogonal with each other, that is,

$\nabla U\cdot \nabla V={U}_{x}{V}_{x}+{U}_{y}{V}_{y}=0,$ (9)

which implies that the isolines of U and V are perpendicular to each other.
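Both Equation (8) and Proposition 1 hold for any analytic function, not only $\xi$. A minimal finite-difference sketch with the generic analytic function $f\left(z\right)={\text{e}}^{z}$, whose real and imaginary parts are $U={\text{e}}^{x}\mathrm{cos}y$ and $V={\text{e}}^{x}\mathrm{sin}y$ (the sample point and step size are arbitrary choices, not from the paper):

```python
import math

# Real and imaginary parts of the analytic function f(z) = exp(z):
def U(x, y): return math.exp(x) * math.cos(y)
def V(x, y): return math.exp(x) * math.sin(y)

h = 1e-6             # finite-difference step
x0, y0 = 0.7, -1.3   # arbitrary sample point

def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

# Cauchy-Riemann conditions, Equation (8): U_x = V_y, U_y = -V_x
assert abs(dx(U, x0, y0) - dy(V, x0, y0)) < 1e-6
assert abs(dy(U, x0, y0) + dx(V, x0, y0)) < 1e-6

# Proposition 1, Equation (9): grad U . grad V = 0
dot = dx(U, x0, y0) * dx(V, x0, y0) + dy(U, x0, y0) * dy(V, x0, y0)
assert abs(dot) < 1e-5
```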

Let $\Omega$ be any finite two-dimensional domain in ${ℝ}^{2}$. The analyticity of $\xi \left(s\right)$ implies good smoothness for U and V on $\Omega$, so their second-order derivatives exist and are continuous, that is, $U,V\in {C}^{2}\left(\Omega \right)$. In addition, $U,V\in {C}^{0}\left(\stackrel{¯}{\Omega }\right)$ means that they are continuous on $\Omega$ together with its boundary $\partial \Omega$; this requirement is naturally satisfied here. Simple deduction from Equation (8) results in:

${U}_{xx}+{U}_{yy}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{V}_{xx}+{V}_{yy}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(x,y\right)\in \Omega .$ (10)

These mean that both U and V satisfy the two-dimensional Laplace equation, so the following standard results (found in any textbook on the equations of mathematical physics) hold for them:

Proposition 2. (Mean Formula): If $w\in {C}^{2}\left(\Omega \right)\cap {C}^{0}\left(\stackrel{¯}{\Omega }\right)$ satisfies ${w}_{xx}+{w}_{yy}=0$ in $\Omega$, then for each disc $O\subset \Omega$ with center $\left({x}_{0},{y}_{0}\right)$, radius R and boundary $\partial O$,

$w\left({x}_{0},{y}_{0}\right)=\frac{1}{2\text{π}R}\underset{\partial O}{\int }\text{ }w\left(x,y\right)\text{d}l,$ (11)

$w\left({x}_{0},{y}_{0}\right)=\frac{1}{\text{π}{R}^{2}}\underset{O}{\iint }w\left(x,y\right)\text{d}x\text{d}y.$ (12)
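The circle-average form (11) is easy to verify for a concrete harmonic function such as $w\left(x,y\right)={x}^{2}-{y}^{2}$. A sketch (the center, radius, and number of boundary samples are arbitrary choices):

```python
import math

def w(x, y):
    return x * x - y * y  # harmonic: w_xx + w_yy = 2 - 2 = 0

x0, y0, R = 1.5, -0.4, 2.0
N = 10_000

# Circle average from Equation (11), by uniform sampling of the boundary:
avg = sum(
    w(x0 + R * math.cos(2 * math.pi * k / N),
      y0 + R * math.sin(2 * math.pi * k / N))
    for k in range(N)
) / N

print(avg, w(x0, y0))  # the two values agree up to rounding error
```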

Proposition 3. (Extremum Principle): If w satisfies ${w}_{xx}+{w}_{yy}=0$ in $\Omega$, then w has no extreme point in the interior of $\Omega$ unless it is a constant.
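The Extremum Principle can likewise be observed numerically: on a closed disc a non-constant harmonic function attains its extrema only at the boundary. A sketch with the same $w={x}^{2}-{y}^{2}$ on the closed unit disc (the grid resolution and the 0.9 cut are arbitrary choices):

```python
def w(x, y):
    return x * x - y * y  # a non-constant harmonic function

# Sample w on a grid over the closed unit disc; compare extrema over
# interior points (r <= 0.9) with those over near-boundary points.
n = 400
pts = [(-1 + 2 * i / n, -1 + 2 * j / n)
       for i in range(n + 1) for j in range(n + 1)]
inside = [w(x, y) for x, y in pts if x * x + y * y <= 0.81]
near_boundary = [w(x, y) for x, y in pts if 0.81 < x * x + y * y <= 1.0]

assert max(near_boundary) > max(inside)   # maximum sits near the boundary
assert min(near_boundary) < min(inside)   # so does the minimum
```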

The above three results are consequences of the analytic property, while the symmetric property of $\xi \left(s\right)$ yields the following theorem:

Theorem 1. The real and imaginary parts of $\xi \left(s\right)$ possess the symmetric properties:

$U\left(x,-y\right)=U\left(x,y\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}U\left(1-x,y\right)=U\left(x,y\right),$

$V\left(x,-y\right)=-V\left(x,y\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}V\left(1-x,y\right)=-V\left(x,y\right).$ (13)

Proof. It follows from Equation (5) and Equation (6) that

$\begin{array}{c}U\left(x,-y\right)=\left(x-1\right)\left[\varphi \left(x,-y\right)u\left(x,-y\right)-\psi \left(x,-y\right)v\left(x,-y\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+y\left[\varphi \left(x,-y\right)v\left(x,-y\right)+\psi \left(x,-y\right)u\left(x,-y\right)\right]\\ =\left(x-1\right)\left[\varphi \left(x,y\right)u\left(x,y\right)-\psi \left(x,y\right)v\left(x,y\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+y\left[-\varphi \left(x,y\right)v\left(x,y\right)-\psi \left(x,y\right)u\left(x,y\right)\right]\\ =U\left(x,y\right).\end{array}$ (14)

In addition, $\xi \left(s\right)=\xi \left(1-s\right)$ reads

$U\left(x,y\right)+iV\left(x,y\right)=U\left(1-x,-y\right)+iV\left(1-x,-y\right)$,

which indicates that $U\left(x,y\right)=U\left(1-x,-y\right)$ and $V\left(x,y\right)=V\left(1-x,-y\right)$. Substituting $1-x$ for $x$ in Equation (14), the relation still holds, that is, $U\left(1-x,-y\right)=U\left(1-x,y\right)$. Combining these two equations we get $U\left(x,y\right)=U\left(1-x,-y\right)=U\left(1-x,y\right)$. In the same way the relations for V are proved. The proof is finished.

For the particular cases $y\equiv 0$ and $x\equiv 1/2$, the relations for V read $V\left(x,0\right)\equiv -V\left(x,0\right)$ and $V\left(1/2,y\right)\equiv -V\left(1/2,y\right)$. So $V\left(x,0\right)\equiv V\left(1/2,y\right)\equiv 0$, which indicates that $\xi \left(x+iy\right)$ has zero imaginary part on the lines $y\equiv 0$ and $x\equiv 1/2$. The theorem can be understood as follows: U and V are symmetric and anti-symmetric, respectively, about the two lines $y\equiv 0$ and $x\equiv 1/2$. Due to its direct relationship with the Riemann hypothesis, the line $x\equiv 1/2$ has drawn much attention and bears the special name “critical line”. By comparison, the line $y\equiv 0$ (which coincides with the real axis) is usually ignored, yet the symmetric properties about it cannot be neglected: they are beneficial for the proof.

Restated as a real-valued problem, the Riemann hypothesis reads: except on the critical line $x\equiv 1/2$, $U\left(x,y\right)$ and $V\left(x,y\right)$ have no mutual zero point in ${ℝ}^{2}$. The proof proceeds by contradiction.

3. The Proof of Riemann Hypothesis

Suppose there is a mutual zero point $\left({x}^{*},{y}^{*}\right)$ of $U\left(x,y\right)$ and $V\left(x,y\right)$ (that is, $U\left({x}^{*},{y}^{*}\right)=V\left({x}^{*},{y}^{*}\right)=0$ ) away from the critical line $x\equiv 1/2$. In view of the symmetric properties in Theorem 1, together with the natural setting $0\le \mathrm{Re}\left(s\right)\le 1$ for $\xi \left(s\right)$, without loss of generality we may require $1/2<{x}^{*}\le 1$ and ${y}^{*}>0$.

First of all, we claim that $\left({x}^{*},{y}^{*}\right)$ cannot be an isolated zero point of $U\left(x,y\right)$ or $V\left(x,y\right)$. Suppose, on the contrary, that it is an isolated zero point of $V\left(x,y\right)$, and draw a small disc O centered at $\left({x}^{*},{y}^{*}\right)$. Then the sign of $V\left(x,y\right)$ remains unchanged on the boundary $\partial O$, so the inner point $\left({x}^{*},{y}^{*}\right)$ must be a minimum or maximum point of $V\left(x,y\right)$ on O, which violates the Extremum Principle. The same argument applies to $U\left(x,y\right)$.

Since $\left({x}^{*},{y}^{*}\right)$ is not an isolated zero point, there must be one or two continuous zero-valued lines through it; the case with two lines may occur if $\left({x}^{*},{y}^{*}\right)$ is a saddle point. In fact, since the graphs of $U\left(x,y\right)$ and $V\left(x,y\right)$ are two-dimensional surfaces, these zero-valued lines are their intersections with the $x$ - $y$ plane. It follows from Proposition 1 that the zero-valued lines of $U\left(x,y\right)$ and $V\left(x,y\right)$ differ from each other. Since the anti-symmetry is more favorable than the symmetry for the proof, we concentrate on the function V.

First we consider the variation of $V\left(x,y\right)$ with respect to the vertical anti-symmetric axis $x\equiv 1/2$. Draw a circle O with center $\left({x}^{*},{y}^{*}\right)$ and radius R; if $R>{x}^{*}-1/2$ it intersects the line $x\equiv 1/2$, and otherwise it does not. In particular, for the case $R>{x}^{*}-1/2$ there is a part ${D}_{L}$ of O to the left of this line (see Figure 1). Meanwhile, to the right of this line there is a symmetric area ${D}_{R}$, also included in O (this area can be seen as the part cut off by the critical line from another circle ${O}^{\prime }$ with center $\left(1-{x}^{*},{y}^{*}\right)$ and radius R). Removing ${D}_{L}$ and ${D}_{R}$, the remainder C of the disc O can be divided into two parts symmetric about the horizontal line $y\equiv {y}^{*}$: the upper part ${C}^{+}$ and the lower part ${C}^{-}$. The remainder C behaves like the moon: it waxes for $R\le {x}^{*}-1/2$ and wanes for $R>{x}^{*}-1/2$, so for visualization we call it “the moon”. In particular, the bigger the radius, the thinner the moon.

Figure 1. With respect to the anti-symmetric axis $x\equiv 1/2$ of $V\left(x,y\right)$: the positional relationships between the circles and the related areas.

3.1. The Integral on the Disc with Respect to the Anti-Symmetric Property

For the case $R>{x}^{*}-1/2$, integrating $V\left(x,y\right)$ over the disc O, the Mean Formula leads to

$\begin{array}{c}0=\text{π}{R}^{2}V\left({x}^{*},{y}^{*}\right)=\underset{O}{\iint }V\left(x,y\right)\text{d}x\text{d}y\\ =\underset{{D}_{L}}{\iint }V\left(x,y\right)\text{d}x\text{d}y+\underset{{D}_{R}}{\iint }V\left(x,y\right)\text{d}x\text{d}y\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{{C}^{+}}{\iint }V\left(x,y\right)\text{d}x\text{d}y+\underset{{C}^{-}}{\iint }V\left(x,y\right)\text{d}x\text{d}y.\end{array}$ (15)

It follows from the equation ${\left[x-\left(1-{x}^{*}\right)\right]}^{2}+{\left(y-{y}^{*}\right)}^{2}={R}^{2}$ of the circle ${O}^{\prime }$ that its two intersection points with the line $x\equiv 1/2$ are

${y}^{+}={y}^{*}+\sqrt{{R}^{2}-{\left({x}^{*}-1/2\right)}^{2}}$ and ${y}^{-}={y}^{*}-\sqrt{{R}^{2}-{\left({x}^{*}-1/2\right)}^{2}}$.

With the denotation $f\left(y\right)=1-{x}^{*}+\sqrt{{R}^{2}-{\left(y-{y}^{*}\right)}^{2}}$, the circular arcs bounding ${D}_{R}$ and ${D}_{L}$ are expressed by $x=f\left(y\right)$ and $x=1-f\left(y\right)$, respectively. It follows from Theorem 1 that $V\left(1-x,y\right)=-V\left(x,y\right)$, which results in

$\begin{array}{l}\underset{{D}_{L}}{\iint }V\left(x,y\right)\text{d}x\text{d}y+\underset{{D}_{R}}{\iint }V\left(x,y\right)\text{d}x\text{d}y\\ ={\int }_{{y}^{-}}^{{y}^{+}}{\int }_{1-f\left(y\right)}^{1/2}\text{ }\text{ }V\left(x,y\right)\text{d}x\text{d}y+{\int }_{{y}^{-}}^{{y}^{+}}{\int }_{1/2}^{f\left(y\right)}\text{ }\text{ }V\left(x,y\right)\text{d}x\text{d}y\\ =-{\int }_{{y}^{-}}^{{y}^{+}}{\int }_{f\left(y\right)}^{1/2}\text{ }\text{ }V\left(1-{x}^{\prime },y\right)\text{d}{x}^{\prime }\text{d}y+{\int }_{{y}^{-}}^{{y}^{+}}{\int }_{1/2}^{f\left(y\right)}\text{ }\text{ }V\left(x,y\right)\text{d}x\text{d}y\\ ={\int }_{{y}^{-}}^{{y}^{+}}{\int }_{1/2}^{f\left(y\right)}\left[V\left(1-x,y\right)+V\left(x,y\right)\right]\text{d}x\text{d}y=0.\end{array}$ (16)

The combination of Equation (15) and Equation (16) indicates that, in the waning case ( $R>{x}^{*}-1/2$ ), the integral over the moon C satisfies

$\begin{array}{c}0=\underset{{C}^{+}}{\iint }V\left(x,y\right)\text{d}x\text{d}y+\underset{{C}^{-}}{\iint }V\left(x,y\right)\text{d}x\text{d}y\\ ={\int }_{1/2}^{{x}_{1}}{\int }_{{g}_{1}\left(x\right)}^{{g}_{2}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x+{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{y}^{*}}^{{g}_{2}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{1/2}^{{x}_{1}}{\int }_{2{y}^{*}-{g}_{2}\left(x\right)}^{2{y}^{*}-{g}_{1}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x+{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{2{y}^{*}-{g}_{2}\left(x\right)}^{{y}^{*}}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x\\ ={\int }_{1/2}^{{x}_{1}}{\int }_{{g}_{1}\left(x\right)}^{{g}_{2}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x+{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{y}^{*}}^{{g}_{2}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\int }_{1/2}^{{x}_{1}}{\int }_{{g}_{2}\left(x\right)}^{{g}_{1}\left(x\right)}\text{ }\text{ }V\left(x,2{y}^{*}-{y}^{\prime }\right)\text{d}{y}^{\prime }\text{d}x-{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{g}_{2}\left(x\right)}^{{y}^{*}}\text{ }\text{ }V\left(x,2{y}^{*}-{y}^{\prime }\right)\text{d}{y}^{\prime }\text{d}x\end{array}$

$\begin{array}{l}={\int }_{1/2}^{{x}_{1}}{\int }_{{g}_{1}\left(x\right)}^{{g}_{2}\left(x\right)}\left[V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)\right]\text{d}y\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{y}^{*}}^{{g}_{2}\left(x\right)}\left[V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)\right]\text{d}y\text{d}x\\ =\underset{{C}^{+}}{\iint }\left[V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)\right]\text{d}x\text{d}y,\end{array}$ (17)

where ${x}_{1}=1-{x}^{*}+R$,${x}_{2}={x}^{*}+R$,${g}_{1}\left(x\right)={y}^{*}+\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}$ and ${g}_{2}\left(x\right)={y}^{*}+\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}$.

For the waxing case ( $R\le {x}^{*}-1/2$, where the moon is the whole disc O), the Mean Formula applies directly and the integral over the moon C satisfies

$\begin{array}{c}0=\underset{{C}^{+}}{\iint }V\left(x,y\right)\text{d}x\text{d}y+\underset{{C}^{-}}{\iint }V\left(x,y\right)\text{d}x\text{d}y\\ ={\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{y}^{*}}^{{g}_{2}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x+{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{2{y}^{*}-{g}_{2}\left(x\right)}^{{y}^{*}}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x\\ ={\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{y}^{*}}^{{g}_{2}\left(x\right)}\text{ }\text{ }V\left(x,y\right)\text{d}y\text{d}x-{\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{g}_{2}\left(x\right)}^{{y}^{*}}\text{ }\text{ }V\left(x,2{y}^{*}-{y}^{\prime }\right)\text{d}{y}^{\prime }\text{d}x\\ ={\int }_{{x}_{1}}^{{x}_{2}}{\int }_{{y}^{*}}^{{g}_{2}\left(x\right)}\left[V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)\right]\text{d}y\text{d}x\\ =\underset{{C}^{+}}{\iint }\left[V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)\right]\text{d}x\text{d}y,\end{array}$ (18)

where ${x}_{1}={x}^{*}-R$,${x}_{2}={x}^{*}+R$ and ${g}_{2}\left(x\right)={y}^{*}+\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}$.

Hence, no matter whether the moon C wanes (for $R>{x}^{*}-1/2$ ) or waxes (for $0<R\le {x}^{*}-1/2$ ), the integral of the bivariate function

$F\left(x,y\right)=V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)$

always remains 0 on one half of it. To be specific, with the two new denotations $\varphi \left(x,R\right)={y}^{*}+\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}$ and $\psi \left(x,R\right)={y}^{*}+\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}$ (not to be confused with the integrals in Equation (5)), the combination of Equation (17) and Equation (18) indicates that

$\Phi \left(R\right)=\left\{\begin{array}{ll}{\int }_{{x}^{*}-R}^{{x}^{*}+R}{\int }_{{y}^{*}}^{\psi \left(x,R\right)}\text{ }F\left(x,y\right)\text{d}y\text{d}x,& 0<R\le {R}_{0},\\ {\int }_{1/2}^{\alpha \left(R\right)}{\int }_{\varphi \left(x,R\right)}^{\psi \left(x,R\right)}\text{ }F\left(x,y\right)\text{d}y\text{d}x+{\int }_{\alpha \left(R\right)}^{{x}^{*}+R}{\int }_{{y}^{*}}^{\psi \left(x,R\right)}\text{ }F\left(x,y\right)\text{d}y\text{d}x,& R>{R}_{0},\end{array}$ (19)

always satisfies $\Phi \left(R\right)\equiv 0$ on the interval $\left(0,\infty \right)$, where $\alpha \left(R\right)=1-{x}^{*}+R$ and ${R}_{0}={x}^{*}-1/2$.

Does the arbitrariness of the radius R in Equation (19) imply $F\left(x,y\right)\equiv 0$ ? It seems so, but a rigorous proof is needed.

3.2. The Derivative of the Integral with Respect to the Radius

There is a known formula for differentiation under the integral sign: for a given integral of the form

$h\left(y\right)={\int }_{a\left(y\right)}^{b\left(y\right)}\text{ }f\left(x,y\right)\text{d}x,$ (20)

where $a\left(y\right)$,$b\left(y\right)$ and $f\left(x,y\right)$ are all differentiable functions, its derivative satisfies

${h}^{\prime }\left(y\right)={\int }_{a\left(y\right)}^{b\left(y\right)}\text{ }{{f}^{\prime }}_{y}\left(x,y\right)\text{d}x+f\left(b\left(y\right),y\right){b}^{\prime }\left(y\right)-f\left(a\left(y\right),y\right){a}^{\prime }\left(y\right),$ (21)

where the superscript “ $\text{'}$ ” denotes the ordinary derivative and ${{f}^{\prime }}_{y}\left(x,y\right)$ denotes the partial derivative $\partial f/\partial y$.
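Formula (21) is the classical Leibniz rule for differentiation under the integral sign with variable limits. A quick numerical sanity check on the arbitrary example $h\left(y\right)={\int }_{0}^{{y}^{2}}\mathrm{sin}\left(xy\right)\text{d}x=\left(1-\mathrm{cos}{y}^{3}\right)/y$ (the integrand and limits are illustrative choices, not from the paper):

```python
import math

def integral(f, a, b, n=20_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def h_closed(y):
    # h(y) = integral_0^{y^2} sin(x*y) dx = (1 - cos(y^3)) / y
    return (1 - math.cos(y ** 3)) / y

y0 = 1.1
# Right-hand side of Equation (21) with a(y) = 0, b(y) = y^2, f(x, y) = sin(x*y):
rhs = integral(lambda x: x * math.cos(x * y0), 0.0, y0 ** 2) \
      + math.sin(y0 ** 3) * (2 * y0)    # f(b(y), y) * b'(y); a'(y) = 0

# Compare with a central finite difference of the closed form:
eps = 1e-6
fd = (h_closed(y0 + eps) - h_closed(y0 - eps)) / (2 * eps)
assert abs(rhs - fd) < 1e-6
```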

Following this formula, we consider the derivative of $\Phi \left(R\right)$. For the case $0<R\le {R}_{0}$, it reads

$\begin{array}{c}{\Phi }^{\prime }\left(R\right)={\int }_{{x}^{*}-R}^{{x}^{*}+R}\text{ }F\left(x,\psi \left(x,R\right)\right){{\psi }^{\prime }}_{R}\left(x,R\right)\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{{y}^{*}}^{\psi \left({x}^{*}+R,R\right)}\text{ }F\left({x}^{*}+R,y\right)\text{d}y\text{\hspace{0.17em}}{\left({x}^{*}+R\right)}^{\prime }\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\int }_{{y}^{*}}^{\psi \left({x}^{*}-R,R\right)}\text{ }F\left({x}^{*}-R,y\right)\text{d}y\text{\hspace{0.17em}}{\left({x}^{*}-R\right)}^{\prime }\end{array}$

$\begin{array}{l}={\int }_{{x}^{*}-R}^{{x}^{*}+R}\text{ }F\left(x,\psi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{{y}^{*}}^{{y}^{*}}\text{ }F\left({x}^{*}+R,y\right)\text{d}y+{\int }_{{y}^{*}}^{{y}^{*}}F\left({x}^{*}-R,y\right)\text{d}y\\ ={\int }_{{x}^{*}-R}^{{x}^{*}+R}\text{ }F\left(x,\psi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}}\text{d}x.\end{array}$ (22)

Similarly, for the case $R>{R}_{0}$,

$\begin{array}{c}{\Phi }^{\prime }\left(R\right)={\int }_{\alpha \left(R\right)}^{{x}^{*}+R}\text{ }F\left(x,\psi \left(x,R\right)\right){{\psi }^{\prime }}_{R}\left(x,R\right)\text{d}x\\ +{\int }_{{y}^{*}}^{\psi \left({x}^{*}+R,R\right)}\text{ }F\left({x}^{*}+R,y\right)\text{d}y\text{\hspace{0.17em}}{\left({x}^{*}+R\right)}^{\prime }\\ -{\int }_{{y}^{*}}^{\psi \left(\alpha \left(R\right),R\right)}\text{ }F\left(\alpha \left(R\right),y\right)\text{d}y\text{\hspace{0.17em}}{\alpha }^{\prime }\left(R\right)\\ +{\int }_{1/2}^{\alpha \left(R\right)}\left[F\left(x,\psi \left(x,R\right)\right){{\psi }^{\prime }}_{R}\left(x,R\right)-F\left(x,\varphi \left(x,R\right)\right){{\varphi }^{\prime }}_{R}\left(x,R\right)\right]\text{d}x\\ +{\int }_{\varphi \left(\alpha \left(R\right),R\right)}^{\psi \left(\alpha \left(R\right),R\right)}\text{ }F\left(\alpha \left(R\right),y\right)\text{d}y\text{\hspace{0.17em}}{\alpha }^{\prime }\left(R\right)\end{array}$

$\begin{array}{l}={\int }_{1/2}^{{x}^{*}+R}\text{ }F\left(x,\psi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }-{\int }_{1/2}^{\alpha \left(R\right)}\text{ }F\left(x,\varphi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}}\text{d}x,\end{array}$ (23)

here $\varphi \left(\alpha \left(R\right),R\right)={y}^{*}$ and $\psi \left({x}^{*}+R,R\right)={y}^{*}$ are used. It is easy to check from Equation (22) and Equation (23) that the derivative of $\Phi \left(R\right)$ is continuous at the point $R={R}_{0}$.

Since the equality $\Phi \left(R\right)\equiv 0$ holds for all positive R, its derivative must satisfy ${\Phi }^{\prime }\left(R\right)\equiv 0$. We note that the zero values of Equation (22) and Equation (23) can also be obtained directly from the curvilinear-integral form of the Mean Formula in Equation (11). However, even if that form is employed from the beginning, the deduction cannot be shortened, since the curvilinear integral is awkward for revealing the characteristics of the function $F\left(x,y\right)=V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)$. So the previous arguments with the surface integral are necessary. Notice that only in the case $R>{R}_{0}$ is the anti-symmetric property of $V\left(x,y\right)$ about the critical line $x\equiv 1/2$ employed; in the following only Equation (23) is considered.

Denote the points $\left(1/2,\varphi \left(1/2,R\right)\right)$, $\left(\alpha \left(R\right),{y}^{*}\right)$ and $\left({x}^{*}+R,{y}^{*}\right)$ by P, M and N, respectively. For every given R the two circular arcs $\stackrel{⌢}{\text{PM}}$ and $\stackrel{⌢}{\text{PN}}$ differ from each other: except for the common point P they are composed of different points, so except at P the values of the function $F\left(x,y\right)$ on $\stackrel{⌢}{\text{PM}}$ have no relation to those on $\stackrel{⌢}{\text{PN}}$. To satisfy ${\Phi }^{\prime }\left(R\right)\equiv 0$ in Equation (23) it is therefore required that

${\int }_{1/2}^{{x}^{*}+R}\text{ }F\left(x,\psi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-{x}^{*}\right)}^{2}}}\text{d}x\equiv 0,$ (24)

${\int }_{1/2}^{\alpha \left(R\right)}\text{ }F\left(x,\varphi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}}\text{d}x\equiv 0.$ (25)

3.3. The Anti-Symmetric Property of V about the Line $y\equiv {y}^{\ast }$

For the function $F\left(x,y\right)=V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)$ it is easily checked that ${F}_{xx}+{F}_{yy}=0$, so the Mean Formula and Extremum Principle also hold for $F\left(x,y\right)$. In particular, $\left({x}^{*},{y}^{*}\right)$ is also a zero point of $F\left(x,y\right)$, since $F\left({x}^{*},{y}^{*}\right)=2V\left({x}^{*},{y}^{*}\right)=0$. The anti-symmetric property of $V\left(x,y\right)$ about the critical line $x\equiv 1/2$ passes on to $F\left(x,y\right)$, and this line is a zero-valued line for it as well. In addition, since

$F\left(x,2{y}^{*}-y\right)=V\left(x,2{y}^{*}-y\right)+V\left(x,y\right)=F\left(x,y\right)$,

the function $F\left(x,y\right)$ is symmetric about the horizontal line $y={y}^{*}$.

We claim that the equality $F\left(x,y\right)\equiv 0$ holds on any finite domain $\Omega$. To prove this we need to rule out the other possibilities. First of all, it follows from the Extremum Principle that the point $\left({x}^{*},{y}^{*}\right)$ is not an isolated zero point. The case with a zero-valued patch at any location is not permitted either, since in that situation there must be an extreme point in the interior of an arbitrarily large domain $\Omega$ which includes the patch, and this further leads to the contradictory result $F\left(x,y\right)\equiv 0$. Hence the remaining possible cases are as follows: there are one or two continuous zero-valued lines through the point $\left({x}^{*},{y}^{*}\right)$, and to one side of each branch (for the one-line case and the two-line case there are 2 and 4 branches separated by the point $\left({x}^{*},{y}^{*}\right)$, respectively) the values of $F\left(x,y\right)$ have the same sign. It only remains to show the impossibility of these cases.

First, these zero-valued branches can neither intersect the critical line $x\equiv 1/2$ nor form a closed loop by themselves. In fact, if there were a branch intersecting this line, there must be another one due to the symmetric property of $F\left(x,y\right)$. In that case these two branches, together with the line $x\equiv 1/2$, form a closed loop, and in the interior of the enclosed domain $\Omega$ the value of $F\left(x,y\right)$ keeps one sign. Since $F\left(x,y\right)=0$ on the boundary $\partial \Omega$, there must be an extreme point in the interior, which violates the Extremum Principle. Similarly, it is impossible for these branches to form a closed loop by themselves.

Recalling the symmetric property of $F\left(x,y\right)$, in addition to the particular case with the line $y\equiv {y}^{*}$, there should be one pair or two pairs of symmetric zero-valued branches as in Figure 2. For convenience of illustration, this particular case will be considered last. Without loss of generality, we assume that the values of $F\left(x,y\right)$ are positive between $x\equiv 1/2$ and the upper branch (when there are two pairs, the one closer to the line $x\equiv 1/2$ is chosen). In all these cases there is always a suitable radius R such that the circular arc $\stackrel{⌢}{\text{PM}}$ (except for the point P) lies in this positive domain. The interval from which R is chosen is given by $1/2<\alpha \left(R\right)<{x}^{*}$, that is, ${R}_{0}<R<2{R}_{0}$; indeed, the above requirement is fulfilled provided R is sufficiently close to ${R}_{0}$. Since on this arc the point P is the unique zero point of $F\left(x,y\right)$, for any given small number $\delta >0$ there must be another small number $\epsilon >0$ such that $F\left(x,\varphi \left(x,R\right)\right)\ge \epsilon$ on the interval $\left[1/2+\delta ,\alpha \left(R\right)\right]$. Hence, it follows from Equation (25) that

Figure 2. With respect to the horizontal line $y\equiv {y}^{*}$: the positional relationships between the arcs and the zero-valued lines of $V\left(x,y\right)$.

$\begin{array}{c}0\equiv {\int }_{1/2}^{\alpha \left(R\right)}\text{ }F\left(x,\varphi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}}\text{d}x\\ ={\int }_{1/2}^{1/2+\delta }\text{ }F\left(x,\varphi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}}\text{d}x\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\int }_{1/2+\delta }^{\alpha \left(R\right)}\text{ }F\left(x,\varphi \left(x,R\right)\right)\frac{R}{\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}}\text{d}x\\ \ge 0+\epsilon {\int }_{1/2+\delta }^{\alpha \left(R\right)}\text{ }\frac{R}{\sqrt{{R}^{2}-{\left(x-1+{x}^{*}\right)}^{2}}}\text{d}x>0.\end{array}$

For the case in which the line $y\equiv {y}^{*}$ is itself a zero-valued line, the above contradiction still holds; the only difference is that another zero point M is added.
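The vanishing integral invoked through Equation (25) rests on the Mean Formula for harmonic functions: the average of a harmonic function over any circle equals its value at the center. A minimal numerical illustration (the test function and sample point are ours, chosen only for demonstration; $h\left(x,y\right)={x}^{3}-3x{y}^{2}=\mathrm{Re}\left({\left(x+iy\right)}^{3}\right)$ is harmonic):

```python
import math

def h(x, y):
    # A simple harmonic function: the real part of (x + iy)^3.
    return x**3 - 3 * x * y**2

def circle_mean(cx, cy, r, n=4096):
    # Average h over n equally spaced points of the circle of radius r
    # centered at (cx, cy); uniform sampling is exact for trigonometric
    # polynomials such as h restricted to a circle, up to roundoff.
    total = 0.0
    for i in range(n):
        theta = 2 * math.pi * i / n
        total += h(cx + r * math.cos(theta), cy + r * math.sin(theta))
    return total / n

# Mean Formula: the circle average recovers the center value.
assert abs(circle_mean(0.7, -0.2, 1.5) - h(0.7, -0.2)) < 1e-9
```

The same averaging property, restricted to an arc on which the integrand is known to be nonnegative and somewhere positive, is what makes the identity above contradictory.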

In short, every possibility other than $F\left(x,y\right)=V\left(x,y\right)+V\left(x,2{y}^{*}-y\right)\equiv 0$ is excluded. Hence, on any finite domain $\Omega$ the function V satisfies the anti-symmetric property $V\left(x,2{y}^{*}-y\right)=-V\left(x,y\right)$. By the arbitrariness of $\Omega$, this property also holds on ${ℝ}^{2}$. In particular, on the anti-symmetric axis $y\equiv {y}^{*}$ it reads $V\left(x,{y}^{*}\right)=-V\left(x,{y}^{*}\right)$, which forces $V\left(x,{y}^{*}\right)\equiv 0$. So the horizontal line $y\equiv {y}^{*}$ is a zero-valued line of $V\left(x,y\right)$ through the concerned point $\left({x}^{*},{y}^{*}\right)$. Incidentally, this anti-symmetric property of $V\left(x,y\right)$ is a strong result: it implies that, in addition to $y\equiv {y}^{*}$ and $y\equiv 0$, there are infinitely many horizontal zero-valued lines of $V\left(x,y\right)$ with equal spacing ${y}^{*}$.

3.4. The Final Proof

Since the horizontal line $y\equiv 0$ is also an anti-symmetric axis of $V\left(x,y\right)$, we now take $G\left(x,y\right)=V\left(x,y\right)+V\left(2{x}^{*}-x,y\right)$ as the object of study. Note that the zero-valued lines of $G\left(x,y\right)$ differ from those of $F\left(x,y\right)$; in particular, the line $x\equiv 1/2$ is no longer a zero-valued line of $G\left(x,y\right)$. A deduction similar to the previous one, now with respect to Figure 3, yields $G\left(x,y\right)\equiv 0$. This means that the anti-symmetric property $V\left(2{x}^{*}-x,y\right)=-V\left(x,y\right)$ holds for every $\left(x,y\right)\in {ℝ}^{2}$, so the vertical line $x\equiv {x}^{*}$ is another zero-valued line of $V\left(x,y\right)$ through the point $\left({x}^{*},{y}^{*}\right)$.

In summary, if $\left({x}^{*},{y}^{*}\right)$ is a mutual zero point of $U\left(x,y\right)$ and $V\left(x,y\right)$, then $y\equiv {y}^{*}$ and $x\equiv {x}^{*}$ are both zero-valued lines of $V\left(x,y\right)$ through it, and hence it must be a saddle point of $V\left(x,y\right)$. Yet in this case the rectangular region PQRS is enclosed by four segments of zero-valued lines (see Figure 3). In the interior of this region $V\left(x,y\right)$ keeps a constant sign, so it must attain a minimum or a maximum there, which violates the Extremum Principle. This contradiction shows that $U\left(x,y\right)$ and $V\left(x,y\right)$ cannot share a zero point away from the critical line $x\equiv 1/2$. In other words, the function $\xi \left(s\right)=\xi \left(x+iy\right)=U\left(x,y\right)+iV\left(x,y\right)$ possesses zero points only when $\mathrm{Re}\left(s\right)=1/2$. Since the nontrivial zeros of $\zeta \left(s\right)$ coincide with the zero points of $\xi \left(s\right)$, their real parts must also satisfy $\mathrm{Re}\left(s\right)=1/2$. The proof of the Riemann hypothesis is finished.
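One consequence of the conclusion can be checked numerically, independently of the argument above: on the critical line $x\equiv 1/2$ the imaginary part V does vanish, i.e. $\xi \left(1/2+it\right)$ is real for real t. A sketch using the third-party mpmath library and the standard normalization $\xi \left(s\right)=\frac{1}{2}s\left(s-1\right){\text{π}}^{-s/2}\Gamma \left(s/2\right)\zeta \left(s\right)$ (the author's Equation (7) may differ by a constant factor, which does not affect where the imaginary part vanishes):

```python
import mpmath as mp

def xi(s):
    # Standard completed zeta function: xi(s) = s(s-1)/2 * pi^(-s/2) * Gamma(s/2) * zeta(s).
    s = mp.mpc(s)
    return s * (s - 1) / 2 * mp.power(mp.pi, -s / 2) * mp.gamma(s / 2) * mp.zeta(s)

# On the critical line Re(s) = 1/2, xi takes real values, so V = Im(xi) = 0.
# (14.134725 is close to the imaginary part of the first nontrivial zero.)
for t in [3.0, 14.134725, 25.0]:
    assert abs(mp.im(xi(mp.mpc(0.5, t)))) < 1e-10
```

The reality of $\xi$ on the critical line follows from the functional equation $\xi \left(s\right)=\xi \left(1-s\right)$ together with $\xi \left(\stackrel{¯}{s}\right)=\stackrel{¯}{\xi \left(s\right)}$, since $1-s=\stackrel{¯}{s}$ there.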

Figure 3. With respect to the anti-symmetric axis $y\equiv 0$ of $V\left(x,y\right)$, the positional relationships between the circles and the related areas.

4. Discussions

We note that this proof by contradiction does not involve the real part $U\left(x,y\right)$ of $\xi \left(s\right)$; it is carried out solely with the imaginary part $V\left(x,y\right)$. That is, whether or not $U\left(x,y\right)$ vanishes, $V\left(x,y\right)$ is nonzero except on the critical line $x\equiv 1/2$. This is a surprising result! Relative to the original function $\zeta \left(s\right)$, this good characteristic may be owed to the symmetric property of $\xi \left(s\right)$, which requires the imaginary part $V\left(x,y\right)$ to be anti-symmetric both about the vertical line $x\equiv 1/2$ and about the horizontal line $y\equiv 0$. In fact, this is not strange. It follows from Equation (7) that

$V=\left[\left(x-1\right)\psi +y\varphi \right]u+\left[\left(x-1\right)\varphi -y\psi \right]v.$

So $\xi =U+iV$ differs from $\zeta =u+iv$: the imaginary part of $\xi$ depends not only on $v$ but also on $u$. In particular, $u=v=0$ implies $V=0$.
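The two anti-symmetries of V mentioned above can themselves be verified numerically. A sketch, again assuming the third-party mpmath library and the standard normalization of $\xi$ (for which $\xi \left(s\right)=\xi \left(1-s\right)$ holds and $\xi$ is real on the real axis; the sample point is ours, for illustration only):

```python
import mpmath as mp

def xi(s):
    # Standard completed zeta function.
    s = mp.mpc(s)
    return s * (s - 1) / 2 * mp.power(mp.pi, -s / 2) * mp.gamma(s / 2) * mp.zeta(s)

def V(x, y):
    # Imaginary part of xi at the point x + iy.
    return mp.im(xi(mp.mpc(x, y)))

x, y = 0.3, 7.2
# Anti-symmetry about the vertical line x = 1/2: V(1-x, y) = -V(x, y).
assert abs(V(1 - x, y) + V(x, y)) < 1e-12
# Anti-symmetry about the horizontal line y = 0: V(x, -y) = -V(x, y).
assert abs(V(x, -y) + V(x, y)) < 1e-12
```

Both identities follow from the functional equation $\xi \left(1-s\right)=\xi \left(s\right)$ combined with the reflection $\xi \left(\stackrel{¯}{s}\right)=\stackrel{¯}{\xi \left(s\right)}$.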

Now that the Riemann hypothesis is proved, all the nontrivial zeros of $\zeta \left(s\right)$ take the form $s=1/2+it$. By the symmetric property of $\xi \left(s\right)$ about the real axis, only positive t need be considered. Though there are many numerical approaches for computing these nontrivial zeros, the distribution of their imaginary parts along the critical line remains unclear. Is there a uniform explicit expression for them? For $s=1/2+it$ the infinite series:

$\eta \left(s\right)=\underset{n=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{n+1}\frac{1}{{n}^{s}}=\underset{n=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{n+1}{n}^{-\frac{1}{2}}\mathrm{cos}\left(t\mathrm{ln}n\right)+i\underset{n=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{n}{n}^{-\frac{1}{2}}\mathrm{sin}\left(t\mathrm{ln}n\right),$ (26)

is convergent, and it is easily checked that $\eta \left(s\right)=\left(1-{2}^{1-s}\right)\zeta \left(s\right)$. Since $1-{2}^{1-s}\ne 0$ in this case, the nontrivial zeros of $\zeta \left(s\right)$ can be computed from $\eta \left(s\right)=0$. Let ${ℤ}^{+}$ be the set of positive integers. For a fixed t, the functions $\mathrm{sin}\left(t\mathrm{ln}z\right)$ and $\mathrm{cos}\left(t\mathrm{ln}z\right)$ have a kind of multiplicative periodicity in z on $\left(0,\infty \right)$, namely $f\left({\text{e}}^{2k\text{π}/t}z\right)=f\left(z\right)$ for every $k\in {ℤ}^{+}$; yet when the real number z is replaced by $n\in {ℤ}^{+}$, this periodicity is not necessarily sustained. In particular, if there exists an $m\in {ℤ}^{+}$ such that $2k\text{π}/t=\mathrm{ln}m$, then the periodicity takes the form $f\left(mn\right)=f\left(n\right)$. Conversely, if $2k\text{π}/t\ne \mathrm{ln}m$ for all $k,m\in {ℤ}^{+}$, then the terms of the series vary chaotically with $n$, and one may hope in vain that the two infinite sums in Equation (26) converge to 0 simultaneously. Enlightened by this, we give an extended version of the “Riemann hypothesis” below:
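The identity $\eta \left(s\right)=\left(1-{2}^{1-s}\right)\zeta \left(s\right)$ and the resulting equivalence of zeros admit a quick numerical sanity check (a sketch; it assumes the third-party mpmath library, whose altzeta routine implements the Dirichlet eta function and whose zetazero routine returns the nontrivial zeros of zeta):

```python
import mpmath as mp

# Check eta(s) = (1 - 2^(1-s)) * zeta(s) at a sample point on the critical line.
s = mp.mpc(0.5, 14.0)
lhs = mp.altzeta(s)                          # Dirichlet eta function
rhs = (1 - mp.power(2, 1 - s)) * mp.zeta(s)
assert abs(lhs - rhs) < 1e-12

# At a nontrivial zero of zeta the factor 1 - 2^(1-s) is nonzero, so eta
# vanishes there too. The first zero has the well-known imaginary part
# t = 14.134725...
rho = mp.zetazero(1)
assert abs(mp.altzeta(rho)) < 1e-10
```

This is why, on the critical line, solving $\eta \left(s\right)=0$ and solving $\zeta \left(s\right)=0$ are interchangeable.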

Riemann-Wang hypothesis: The nontrivial zeros of $\zeta \left(s\right)$ possess the same real part $\sigma =1/2$, and their distinct imaginary parts satisfy the uniform explicit expression:

$t=\frac{2k\text{π}}{\mathrm{ln}m},\text{\hspace{0.17em}}\text{\hspace{0.17em}}k,m\in {ℤ}^{+}.$

Here the statement about the real part is proved; the one about the imaginary part remains open.

In addition to this research, we have also explored another well-known problem, “P versus NP”, in [6], where a surprising result “P = NP” was proved. Comments on it are likewise welcome.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Wang, J.L. (2019) The Riemann Hypothesis Holds True: A Rigorous Proof with Mean Formula and Extremum Principle. Applied Mathematics, 10, 691-703. https://doi.org/10.4236/am.2019.108049

References

1. Bombieri, E. (2000) Official Problem Description: The Riemann Hypothesis. http://www.claymath.org/millennium-problems/riemann-hypothesis

2. van de Lune, J., te Riele, H.J.J. and Winter, D.T. (1986) On the Zeros of the Riemann Zeta Function in the Critical Strip. IV. Mathematics of Computation, 46, 667-681. https://doi.org/10.2307/2008005

3. Lu, C.H. (2016) Ramble on the Riemann Hypothesis. Tsinghua University Press, Beijing, 20-38. (In Chinese)

4. Devlin, K.J. (2003) The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time. Basic Books, New York, 19-60.

5. Gilbarg, D. and Trudinger, N.S. (2001) Elliptic Partial Differential Equations of Second Order. Springer-Verlag, New York, 13-30. https://doi.org/10.1007/978-3-642-61798-0_2

6. Wang, J.L. (2018) Fast Algorithm for the Travelling Salesman Problem and the Proof of P = NP. Applied Mathematics, 9, 1351-1359. https://doi.org/10.4236/am.2018.912088