**Journal of Applied Mathematics and Physics**

Vol. 06, No. 11 (2018), Article ID: 88719, 7 pages

DOI: 10.4236/jamp.2018.611198

Numerical Simulation of Stochastic Kuramoto-Sivashinsky Equation

Ping Gao^{*}, Chengjian Cai, Xiaoyi Liu

College of Mathematics and Information Science, Guangzhou University, Guangzhou, China

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: September 13, 2018; Accepted: November 23, 2018; Published: November 26, 2018

ABSTRACT

In this paper, the stochastic Kuramoto-Sivashinsky equation with additive noise is studied numerically, using the finite difference method to simulate the effect of noise of different amplitudes on the solitary wave. Numerical experiments show that the white noise does not stop the propagation of the solitary wave, but it can increase the wave's amplitude.

**Keywords:**

Random Kuramoto-Sivashinsky Equation, Difference Scheme, White Noise, Wiener Process

1. Introduction

In recent years, many scholars have studied the deterministic K-S equation and obtained important results, but there are relatively few studies of the stochastic Kuramoto-Sivashinsky equation, and the numerical solution of this equation is a new field. In general, the stochastic Kuramoto-Sivashinsky equation has no analytic solution, so numerical analysis becomes an important tool for exploring its properties; moreover, it offers high computational efficiency, low computational cost and good reliability. In this paper, the accuracy of the method can be seen by comparing the numerical solution with the exact solution, and some properties of the solution can also be discovered directly from the numerical results.

We consider the following form of nonlinear evolution equation

$u_t + uu_x + \alpha u_{xx} + \beta u_{xxxx} = 0$ (1.1)

The coefficients $\alpha$ and $\beta$ are real constants, and Equation (1.1) arises in many physical problems. The second- and fourth-order terms represent the instability and dissipation of the system, respectively, and the nonlinear term $uu_x$ represents the convective effect. Equation (1.1) is called the Kuramoto-Sivashinsky equation, hereinafter referred to as the K-S equation; it was obtained independently by Kuramoto [1] in the analysis of dissipative structures of reaction-diffusion systems and by Sivashinsky [2] in the study of flame combustion and hydrodynamic instability. In practical situations, however, we must consider the effect of small irregular random factors, for example by adding a random forcing term to the right-hand side of the equation.

Let us consider the K-S equation with a random forcing term:

$u_t + uu_x + \alpha u_{xx} + \beta u_{xxxx} = \lambda \dot{\xi}, \quad (x,t) \in I \times R$ (1.2)

Here $\lambda$ is the amplitude of the noise and $\dot{\xi}$ is an additive noise, a real-valued Gaussian process. Suppose that $u(x,t)$ is defined on the region $R: [-L,L] \times [0,t]$.

The initial condition

$u(x,0) = u_0(x), \quad -L < x < L;$ (1.3)

and the boundary condition

$u_x(\pm L, t) = 0, \quad t > 0.$ (1.4)

The following is a mathematical definition of $\dot{\xi}$.

Let $(W(t))_{t \ge 0}$ be a cylindrical Wiener process on $L^2(R^n, R)$. For an arbitrary orthonormal basis $(e_i)_{i \in N}$ of $L^2(R^n, R)$, set

$\beta_i(t) = (W(t), e_i), \quad i \in N, \ t \ge 0,$

so that $(\beta_i)_{i \in N}$ is a sequence of independent Brownian motions. These Brownian motions $\beta_i(t,\omega)$, $t \ge 0$, $\omega \in \Omega$, are stochastic processes defined on a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t \ge 0})$: for each $i$ and $t \ge s$, the increment $\beta_i(t) - \beta_i(s)$ is an $\mathcal{F}_t$-measurable Gaussian random variable independent of $\mathcal{F}_s$. Therefore, $W$ can be written as:

$W(t,x,\omega) = \sum_{i \in N} \beta_i(t,\omega) e_i(x), \quad t \ge 0, \ x \in R, \ \omega \in \Omega.$ (1.5)

The space-time white noise $\dot{\xi}$ is then the derivative of $W$ with respect to time, that is:

$\dot{\xi} = \frac{\mathrm{d}W}{\mathrm{d}t} = W_t$ (1.6)
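The expansion (1.5) is straightforward to simulate. Below is a minimal NumPy sketch, not from the original paper: the function name, the finite truncation of the sum, and the user-supplied basis matrix are illustrative assumptions. Each $\beta_i$ is built from independent $N(0,\tau)$ increments.

```python
import numpy as np

rng = np.random.default_rng(1)

def wiener_field(E, n_steps, tau, rng=rng):
    """Truncated expansion (1.5) of the cylindrical Wiener process.

    E is a (modes, points) matrix whose row i holds e_i evaluated on the
    spatial grid; each beta_i is an independent Brownian path built from
    N(0, tau) increments.  Returns W(t_n, x_j) with shape (n_steps, points).
    """
    n_modes = E.shape[0]
    dbeta = np.sqrt(tau) * rng.standard_normal((n_steps, n_modes))
    beta = np.cumsum(dbeta, axis=0)      # beta_i(t_n), Brownian paths
    return beta @ E                      # sum_i beta_i(t_n) e_i(x_j)
```

With the normalized-indicator basis used later in Section 2, only one mode is nonzero at each grid point, so $W(t, x_j)$ has variance $t/h$.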

In the same way, we can define spatially correlated noise: given a kernel $k$, define the linear operator $\Phi$:

$\Phi f(x) = \int_{R^n} k(x,y) f(y)\, \mathrm{d}y, \quad f \in L^2(R^n),$ (1.7)

and define the Wiener process $\tilde{W} = \Phi W$. Its time derivative $\dot{\tilde{\xi}}$ is white in time and has spatial correlation function $c$:

$c(x,y) = \int_{R^n} k(x,z) k(y,z)\, \mathrm{d}z.$

Formally, $E\big(\dot{\tilde{\xi}}(x,t)\, \dot{\tilde{\xi}}(y,s)\big) = c(x,y)\, \delta_{t-s}$. If $k(x,y) = k(x-y)$ is a convolution kernel, the noise is homogeneous in space, namely $c(x,y) = c(x-y)$. For space-time white noise, $k(x,y) = \delta_{x-y}$ and $\Phi = I_d$, and Equation (1.2) can be written in the following form:

$\mathrm{d}u + (uu_x + \alpha u_{xx} + \beta u_{xxxx})\, \mathrm{d}t = \lambda\, \mathrm{d}\tilde{W}$ (1.8)

Reference [3] proves that the problem (1.8), (1.3), (1.4) has a unique solution. In this paper, the finite difference method is used to simulate the solutions of (1.8) with (1.3) and (1.4), and the results of the numerical analysis are presented.

2. Derivation of the Difference Scheme

Assuming that $u(x,t)$ is defined on the region $R: [-L,L] \times [0,t]$, we make the following partition of $R$:

$-L = x_0 < x_1 < \cdots < x_{J-1} < x_J = L,$

$0 = t_0 < t_1 < \cdots < t_{N-1} < t_N = t,$

with space step $h$ and time step $\tau$. Writing $u_j^n = u(jh, n\tau)$, at the point $(jh, n\tau)$ we have

$[u_t]_j^n + \frac{1}{2}[(u^2)_x]_j^n + \alpha [u_{xx}]_j^n + \beta [u_{xxxx}]_j^n = \lambda f_j^{n+\frac{1}{2}}$ (2.1)

First, consider the above equation in its deterministic K-S form: replace $[u_t]_j^n$ with the first-order forward difference, and replace $[(u^2)_x]_j^n$, $[u_{xx}]_j^n$ and $[u_{xxxx}]_j^n$ with centered differences averaged over the time levels $n$ and $n+1$, so that

$u_j^{n+1} - u_j^n + \frac{\tau}{4}\left([(u^2)_x]_j^{n+1} + [(u^2)_x]_j^n\right) + \frac{\tau\alpha}{2}\left([u_{xx}]_j^{n+1} + [u_{xx}]_j^n\right) + \frac{\tau\beta}{2}\left([u_{xxxx}]_j^{n+1} + [u_{xxxx}]_j^n\right) = 0$ (2.2)

If the spatial partial derivatives in (2.2) are simply replaced by difference quotients, a system of nonlinear equations must be solved at every step. To overcome this difficulty, we apply a Taylor expansion to the nonlinear term:

$[(u^2)_x]_j^{n+1} = [(u^2)_x]_j^n + [(u^2)_{tx}]_j^n\, \tau + o(\tau^2) = [(u^2)_x]_j^n + [(2uu_t)_x]_j^n\, \tau + o(\tau^2) = [(u^2)_x]_j^n + 2\left[(u_j^n)\left(\frac{u_j^{n+1} - u_j^n}{\tau} + o(\tau)\right)\tau\right]_x + o(\tau^2) = [(u^2)_x]_j^n + 2\left[(u_j^n)(u_j^{n+1} - u_j^n)\right]_x + o(\tau^2),$

thus

$[(u^2)_x]_j^n + [(u^2)_x]_j^{n+1} = 2\left[(u_j^n)(u_j^{n+1} - u_j^n)\right]_x + 2[(u^2)_x]_j^n + o(\tau^2) = 2\left[u_j^n u_j^{n+1}\right]_x + o(\tau^2) = \frac{u_{j+1}^n u_{j+1}^{n+1} - u_{j-1}^n u_{j-1}^{n+1}}{h} + o(\tau^2 + h^2).$ (2.3)
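The linearization in (2.3) can be checked numerically. The sketch below, which is not from the paper (the smooth test function $u(x,t) = \sin(x+t)$ and the evaluation point are illustrative choices), compares the exact sum $[(u^2)_x]_j^n + [(u^2)_x]_j^{n+1}$ with the linearized difference quotient; halving both $h$ and $\tau$ reduces the discrepancy by roughly a factor of four, consistent with the $o(\tau^2) + O(h^2)$ error.

```python
import numpy as np

def linearization_gap(h, tau, x=0.3, t=0.5):
    """Discrepancy between [(u^2)_x]^n + [(u^2)_x]^{n+1} and the linearized
    quotient (u_{j+1}^n u_{j+1}^{n+1} - u_{j-1}^n u_{j-1}^{n+1}) / h of (2.3)
    for the smooth test function u(x, t) = sin(x + t)."""
    u = lambda xx, tt: np.sin(xx + tt)
    # exact (u^2)_x = sin(2(x+t)), evaluated at both time levels
    exact = np.sin(2 * (x + t)) + np.sin(2 * (x + t + tau))
    linearized = (u(x + h, t) * u(x + h, t + tau)
                  - u(x - h, t) * u(x - h, t + tau)) / h
    return abs(exact - linearized)
```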

This yields the following difference scheme:

$a u_{j-2}^{n+1} + b_j^n u_{j-1}^{n+1} + e u_j^{n+1} + c_j^n u_{j+1}^{n+1} + a u_{j+2}^{n+1} = d_j^n$ (2.4)

where

$a = \frac{\tau\beta}{2h^4}, \quad b_j^n = \frac{\tau\alpha}{2h^2} - \frac{2\tau\beta}{h^4} - \frac{\tau}{4h} u_{j-1}^n,$

$e = 1 + \frac{3\tau\beta}{h^4} - \frac{\tau\alpha}{h^2}, \quad c_j^n = \frac{\tau\alpha}{2h^2} - \frac{2\tau\beta}{h^4} + \frac{\tau}{4h} u_{j+1}^n,$

$d_j^n = -\frac{\tau\beta}{2h^4} u_{j-2}^n - \left(\frac{\tau\alpha}{2h^2} - \frac{2\tau\beta}{h^4}\right) u_{j-1}^n + \left(1 - \frac{3\tau\beta}{h^4} + \frac{\tau\alpha}{h^2}\right) u_j^n - \left(\frac{\tau\alpha}{2h^2} - \frac{2\tau\beta}{h^4}\right) u_{j+1}^n - \frac{\tau\beta}{2h^4} u_{j+2}^n, \quad n = 0,1,\cdots,N, \ j = 0,1,\cdots,J.$

The difference scheme (2.4) couples the values at all nodes, so at every time step a linear system whose matrix has order about $J$ must be solved; by the assumption on the boundary conditions, $u_{-1} = u_0 = u_1$ and $u_{J+2} = u_{J+1} = u_J$.
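A compact NumPy sketch of one time step of scheme (2.4) for the deterministic part is given below. It is not from the paper: the function name, the dense solver, and the folding of ghost indices onto the boundary values are implementation assumptions.

```python
import numpy as np

def ks_step(u, h, tau, alpha, beta):
    """One implicit time step of the deterministic part of scheme (2.4).

    Builds the pentadiagonal system a*u_{j-2} + b_j*u_{j-1} + e*u_j
    + c_j*u_{j+1} + a*u_{j+2} = d_j and solves it with a dense solver
    (adequate for the small grids used here).
    """
    J = u.size - 1
    a = tau * beta / (2 * h**4)
    e = 1 + 3 * tau * beta / h**4 - tau * alpha / h**2
    p = tau * alpha / (2 * h**2) - 2 * tau * beta / h**4   # linear part of b, c
    A = np.zeros((J + 1, J + 1))
    d = np.zeros(J + 1)

    def idx(j):
        # fold ghost indices onto the boundary values (one reading of the
        # paper's assumption u_{-1} = u_0 and u_{J+1} = u_J)
        return min(max(j, 0), J)

    for j in range(J + 1):
        b_j = p - tau / (4 * h) * u[idx(j - 1)]
        c_j = p + tau / (4 * h) * u[idx(j + 1)]
        for coef, off in ((a, -2), (b_j, -1), (e, 0), (c_j, 1), (a, 2)):
            A[j, idx(j + off)] += coef
        d[j] = (-a * u[idx(j - 2)] - p * (u[idx(j - 1)] + u[idx(j + 1)])
                + (1 - 3 * tau * beta / h**4 + tau * alpha / h**2) * u[j]
                - a * u[idx(j + 2)])
    return np.linalg.solve(A, d)
```

For the stochastic problem (1.8), the simulated noise would enter by adding $\lambda \tau f_j^{n+\frac{1}{2}}$ to $d_j$. A constant state satisfies the scheme exactly, which provides a quick sanity check.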

The term $f_j^{n+\frac{1}{2}}$ in (2.1) can be approximated by

$\frac{1}{h\tau} \int_{(j-\frac{1}{2})h}^{(j+\frac{1}{2})h} \int_{t_n}^{t_{n+1}} \dot{\xi}\, \mathrm{d}s\, \mathrm{d}x, \quad j = 0,\cdots,J.$

Substituting (1.5) and (1.6) into the above expression, we obtain

$f_j^{n+\frac{1}{2}} = \frac{1}{h\tau} \int_{(j-\frac{1}{2})h}^{(j+\frac{1}{2})h} \int_{t_n}^{t_{n+1}} \sum_{i \in N} e_i(x)\, \mathrm{d}\beta_i(s)\, \mathrm{d}x = \frac{1}{h\tau} \sum_{i \in N} \left(\int_{(j-\frac{1}{2})h}^{(j+\frac{1}{2})h} e_i(x)\, \mathrm{d}x\right) \left(\beta_i(t_{n+1}) - \beta_i(t_n)\right).$

If the orthonormal basis $(e_i)_{i \in N}$ of $L^2(-L,L)$ is taken in the following form,

$e_j = \frac{1}{\sqrt{h}} \mathbf{1}_{[(j-\frac{1}{2})h,\, (j+\frac{1}{2})h]}, \quad j = -J+1,\cdots,J-1,$

$e_{-J} = \frac{1}{\sqrt{h/2}} \mathbf{1}_{[-Jh,\, (-J+\frac{1}{2})h]}, \quad e_J = \frac{1}{\sqrt{h/2}} \mathbf{1}_{[(J-\frac{1}{2})h,\, Jh]},$

then by orthogonality $\int_{(j-\frac{1}{2})h}^{(j+\frac{1}{2})h} e_i(x)\, \mathrm{d}x = 0$ if $i \ne j$, $j = 0,\cdots,J$, $i \in N$. Further,

$f_j^{n+\frac{1}{2}} = \frac{1}{\tau\sqrt{h}} \left(\beta_j(t_{n+1}) - \beta_j(t_n)\right), \quad j = -J+1,\cdots,J-1,$

$f_{-J}^{n+\frac{1}{2}} = \frac{1}{\tau\sqrt{h/2}} \left(\beta_{-J}(t_{n+1}) - \beta_{-J}(t_n)\right), \quad f_J^{n+\frac{1}{2}} = \frac{1}{\tau\sqrt{h/2}} \left(\beta_J(t_{n+1}) - \beta_J(t_n)\right).$

Since $(\beta_j(t_{n+1}) - \beta_j(t_n))/\sqrt{\tau}$ are independent random variables with the standard normal distribution $N(0,1)$, we select independent standard normal random variables $(\chi_j^{n+1/2})_{n \ge 0,\, j = -J,\cdots,J}$. For each time increment, $f^{n+\frac{1}{2}}$ can then be simulated by the vector $(\chi_{-J}^{n+1/2}, \cdots, \chi_J^{n+1/2})$, scaled so that $f_j^{n+\frac{1}{2}} = \chi_j^{n+1/2}/\sqrt{\tau h}$ at the interior nodes.
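In code, the noise vector for one time increment can be drawn as follows (a minimal sketch; the function name and the NumPy generator are assumptions), using $f_j^{n+\frac{1}{2}} = \chi_j^{n+1/2}/\sqrt{\tau h}$ at interior nodes and the $\sqrt{h/2}$ scaling at the two half-width boundary cells:

```python
import numpy as np

def noise_vector(J, h, tau, rng):
    """Simulate f^{n+1/2} from independent standard normal draws chi_j:
    f_j = chi_j / sqrt(tau*h) at interior nodes, and chi_j / sqrt(tau*h/2)
    at the two half-width boundary cells j = -J and j = J."""
    chi = rng.standard_normal(2 * J + 1)       # indices j = -J, ..., J
    f = chi / np.sqrt(tau * h)
    f[0] /= np.sqrt(0.5)                       # j = -J: scale 1/sqrt(tau*h/2)
    f[-1] /= np.sqrt(0.5)                      # j = +J
    return f
```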

3. Numerical Simulation

Although our purpose is to simulate the solution of the stochastic K-S equation and study its properties, we must first verify that the scheme described above is effective. We therefore first simulate the initial value problem for (1.1) on the region $I \times R = [0,1] \times [-5,5]$, with the initial condition

$u_0(x) = -\frac{c}{k} + \frac{60}{19} k (-38\beta k^2 + \alpha) \tanh kx + 120\beta k^3 \tanh^3 kx$ (3.1)

where $k = \sqrt{\frac{11\alpha}{76\beta}}$. This problem has the following solitary wave solution [4]:

$u(x,t) = -\frac{c}{k} + \frac{60}{19} k (-38\beta k^2 + \alpha) \tanh(kx + ct) + 120\beta k^3 \tanh^3(kx + ct)$ (3.2)
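Before using (3.2) as a benchmark, one can verify numerically that it satisfies (1.1). The sketch below is not from the paper (the finite-difference step and sample points are illustrative); it evaluates the residual $u_t + uu_x + \alpha u_{xx} + \beta u_{xxxx}$ with central differences, which stays at the level of the truncation error, far smaller than the individual terms.

```python
import numpy as np

alpha, beta, c = 0.1, 0.1, 1.0
k = np.sqrt(11 * alpha / (76 * beta))

def u_exact(x, t):
    """Solitary wave (3.2) of the deterministic K-S equation (1.1)."""
    T = np.tanh(k * x + c * t)
    return (-c / k + (60 / 19) * k * (-38 * beta * k**2 + alpha) * T
            + 120 * beta * k**3 * T**3)

def residual(x, t, d=5e-3):
    """Central-difference residual of u_t + u u_x + alpha u_xx + beta u_xxxx."""
    u = u_exact
    ut = (u(x, t + d) - u(x, t - d)) / (2 * d)
    ux = (u(x + d, t) - u(x - d, t)) / (2 * d)
    uxx = (u(x + d, t) - 2 * u(x, t) + u(x - d, t)) / d**2
    uxxxx = (u(x - 2 * d, t) - 4 * u(x - d, t) + 6 * u(x, t)
             - 4 * u(x + d, t) + u(x + 2 * d, t)) / d**4
    return ut + u(x, t) * ux + alpha * uxx + beta * uxxxx
```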

Taking $\alpha = 0.1$, $\beta = 0.1$, space step $h = 0.1$ and time step $\tau = 0.01$, Figure 1 shows the numerical solution and the exact solution, and Table 1 and Table 2 list the absolute errors between them. The numerical solution reproduces the exact solution well, indicating that the scheme described in this paper is valid.

We now simulate Equation (1.8) numerically, using the method described above and the same initial condition.

When the amplitude of the noise is small, $\lambda = 10^{-3}$, Figure 2(a) shows the solution for $t$ in the interval $[0,3]$, with the other parameters as before. The solitary wave is not destroyed, and the noise does not stop the propagation of the wave. The same phenomenon is observed for different noise trajectories.

In order to further study the stability of the solitary wave, we increase the amplitude of the noise to $\lambda = 0.5 \times 10^{-2}$, so that its impact is stronger. The initial conditions are the same as before. As shown in Figure 2(b), the solitary wave is not destroyed, but its amplitude increases as the wave propagates, which makes it clear that the noise enhances the amplitude of the wave.

Figure 1. Comparison of the numerical solution (a) and the exact solution (b); here $\alpha = 0.1$, $\beta = 0.1$, $c = 1$.

Figure 2. (a) $\lambda = 0.001$, (b) $\lambda = 0.005$, (c) $\lambda = 0.01$; the contour curves of $d(u)$. ($\alpha = 0.1$, $\beta = 0.1$, $c = 1$).

Table 1. Absolute errors between the numerical solutions and the exact solutions; here $\alpha = 0.1$, $\beta = 0.1$, $c = 1$.

Table 2. Absolute errors between the numerical solutions and the exact solutions; here $\alpha = 0.1$, $\beta = 0.1$, $c = 1$.

Increasing the amplitude of the noise again, to $\lambda = 0.01$ as shown in Figure 2(c), the whole solitary wave is still intact. To study this phenomenon further, we use another representation of the solution: as shown in Figure 2(d), we draw the contour curves of the solution. Although the amplitude of the noise is now quite large, the amplitude of the wave clearly increases during the propagation of the solitary wave. We believe this is the effect of the energy injected by the noise, which increases the amplitude of the solitary wave.

4. Conclusion

In this paper, the finite difference method is used to carry out numerical experiments on the solution of the stochastic K-S equation. The results show that the noise does not stop the propagation of the solitary wave, but it can enhance the solitary wave's amplitude. This is similar to the phenomenon observed for stochastic KdV equations [5].

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Gao, P., Cai, C.J. and Liu, X.Y. (2018) Numerical Simulation of Stochastic Kuramoto-Sivashinsky Equation. Journal of Applied Mathematics and Physics, 6, 2363-2369. https://doi.org/10.4236/jamp.2018.611198

References

- 1. Kuramoto, Y. and Tsuzuki, T. (1975) On the Formation of Dissipative Structures in Reaction-Diffusion Systems. Progress of Theoretical Physics, 54, 687-699. https://doi.org/10.1143/PTP.54.687
- 2. Sivashinsky, G.I. (1977) Nonlinear Analysis of Hydrodynamic Instability in Laminar Flames. Acta Astronautica, 4, 1177-1206. https://doi.org/10.1016/0094-5765(77)90096-0
- 3. Duan, J. and Ervin, V.J. (2001) On the Stochastic Kuramoto-Sivashinsky Equation. Nonlinear Analysis, 44, 205-216. https://doi.org/10.1016/S0362-546X(99)00259-X
- 4. Khater, A.H. and Temsah, R.S. (2008) Numerical Solutions of the Generalized Kuramoto-Sivashinsky Equation by Chebyshev Spectral Collocation Methods. Computers and Mathematics with Applications, 56, 1465-1472. https://doi.org/10.1016/j.camwa.2008.03.013
- 5. Debussche, A. and Di Menza, L. (2002) Numerical Simulation of Focusing Stochastic Nonlinear Schrödinger Equations. Physica D, 162, 131-154. https://doi.org/10.1016/S0167-2789(01)00379-7