
Open Journal of Statistics
Vol.07 No.06(2017), Article ID:81295,14 pages
10.4236/ojs.2017.76072

Parameter Estimation for the Continuous Time Stochastic Logistic Diffusion Model*

Zhiwei Zheng1, Huisheng Shu1#, Xiu Kan2†, Yingyi Fang1, Xin Zhang1

1School of Science, Donghua University, Shanghai, China

2School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China

Received: October 27, 2017; Accepted: December 22, 2017; Published: December 25, 2017

ABSTRACT

In this paper, the parameter estimation problem is investigated for the continuous time stochastic logistic diffusion system. A new continuous process is built based on the likelihood ratio scheme; the Radon-Nikodym derivative and explicit expressions for the estimation error are given under this new continuous process. By using random time transformations, the law of large numbers for martingales, the law of iterated logarithm and the stationary distribution of the solution, the consistency property is proved for the estimation error. Finally, a numerical simulation is presented to demonstrate the effectiveness of the proposed method.

Keywords:

Parameter Estimation, Stationary Distribution, Likelihood Ratio, Random Time Transformations

1. Introduction

In the past few decades, the parameter estimation problem for stochastic differential equations has been studied by many scholars, and most results are based on discrete observations. To obtain more accurate estimators, the process should be observed in continuous time. Deterministic models, whose parameters are fixed irrespective of environmental fluctuations, are usually used to describe the overall impact of interactions between different factors; such models obviously impose limitations in the mathematical modeling of whole real systems. In the real world, many random factors (earthquakes, typhoons, car accidents and other unforeseen events) may turn the parameters into random variables. It is therefore more reasonable to use stochastic differential equations to describe real systems disturbed by random noise. For example, the stochastic logistic diffusion model has been widely applied to social life, including applied economics, biology, power engineering and so on. Very recently, considerable research results have been reported on parameter estimation based on discrete observations. Specifically, one line of work used the least squares method to estimate the parameters, and obtained point estimators, confidence intervals and joint confidence regions. Another used conditional least squares and weighted conditional least squares to study the parameter estimation of two-type continuous-state branching processes with immigration, based on low-frequency observations at equidistant time points. A further study examined the asymptotic behaviour of a parametric estimator for the nonstationary reflected Ornstein-Uhlenbeck process by applying maximum likelihood estimation.
For the stochastic logistic diffusion model, one study worked out the optimization problem with respect to the stationary probability density and provided a new equivalent formulation, using an ergodic method to show the almost sure equivalence between the time-averaged yield and the sustainable yield. Another considered a stochastic logistic growth model involving both birth and death rates in the drift and diffusion coefficients, and established the associated complete Fokker-Planck equation to describe the law of the process. Further work focused on stochastic dynamics involving continuous states as well as discrete events, obtained weak convergence of the underlying system, and used the structure of the limit system as a bridge to investigate stochastic permanence of the original system driven by a singular Markov chain with a large number of states. Other studies presented basic aspects of adequate numerical analysis for random extensions, such as numerical regularity and mean square convergence, or analyzed two mathematically tractable cases: the limit of a large number of individuals and the limit of a large basic reproduction ratio. With discrete observations, the time interval must tend to 0 to obtain more accurate results; this suggests that parametric inference based on continuous-time observation is much more accurate for the parameter estimation problem. During the estimation process, two important tools have been used for estimation based on continuous observation in the existing literature: one expresses the Radon-Nikodym derivative as a likelihood ratio, and the other uses the stationary distribution of the solution.

The stochastic logistic diffusion model can be described by the following stochastic differential equation:

$\text{d}{X}_{t}=\left(\alpha {X}_{t}-\beta {X}_{t}^{2}\right)\text{d}t+\epsilon {X}_{t}\text{d}{W}_{t},\text{ }{X}_{0}=x$ (1.1)

where ${X}_{t}$ represents the population size at time t, $\alpha >0$ represents the natural birth rate, $\beta >0$ represents the mortality rate, and $\frac{\alpha }{\beta }$ represents the carrying capacity, i.e. the largest population that environmental resources can support. $\epsilon >0$ represents the dynamic effect of noise on ${X}_{t}$ , and ${W}_{t}$ is a Wiener process modelling the random factor. Previous work studied the existence, uniqueness and global attractivity of positive solutions for model (1.1), and established a maximum likelihood estimator for the parameters; it was also proved that no matter how small $\epsilon >0$ is, the solution will not explode in finite time.

In this paper, continuous observations shall be used to obtain more accurate results than discrete observations, and the likelihood ratio will be employed to obtain the Radon-Nikodym derivative, which can be used to solve the parameter estimation problem for the logistic diffusion model. The logistic diffusion model is a diffusion process; for a general diffusion model $\text{d}{X}_{t}=\mu \left({X}_{t}|\theta \right)\text{d}t+\sigma \left({X}_{t}|\theta \right)\text{d}{W}_{t}$ , the parameter $\theta$ enters into the description of ${X}_{t}$ through $\mu$ or $\sigma$ or both. However, the nature of diffusions allows us to evaluate $\sigma$ exactly from a given continuous record, via the formula ${\sum }_{j=1}^{{2}^{n}}{\left({X}_{js{2}^{-n}}-{X}_{\left(j-1\right)s{2}^{-n}}\right)}^{2}\to {\int }_{0}^{s}{\sigma }^{2}\left({X}_{u}\right)\text{d}u$ a.s. as $n\to +\infty$ (a.s. means almost surely), as pointed out in the literature. This result can be rewritten descriptively as ${\int }_{0}^{s}{\left(\text{d}{X}_{u}\right)}^{2}={\int }_{0}^{s}{\sigma }^{2}\left({X}_{u}\right)\text{d}u$ a.s. Thus a parameter involved in $\sigma$ may be considered known given a single realization, which means we need only consider the estimation of $\theta$ in $\mu =\mu \left({X}_{s}|\theta \right)$ . Then the likelihood ratio, as a Radon-Nikodym derivative, and the expressions of all estimators are obtained. Since the stationary distribution of ${X}_{t}$ has been studied in the literature, based on this result, the strong law of large numbers for martingales and the law of iterated logarithm, the strong consistency and asymptotic normality of the estimation error shall be proved.

This paper is organized as follows. In Section 2, a new method of estimating parameters is given and the estimators are obtained. In Section 3, the strong consistency of the estimators and the asymptotic normality of the estimation error are proved. In Section 4, a numerical example for the estimators and the errors between the estimators and the true values is given to demonstrate the effectiveness of the proposed results. The conclusion is given in Section 5.

2. Preliminaries

In this paper, the parameter estimation problem shall be studied for the logistic diffusion model described by the stochastic differential Equation (1.1). In this model, $\alpha$ and $\beta$ are unknown parameters. We can calculate $\epsilon$ using the method described below, so $\epsilon$ can be treated as a known parameter. Because of the complicated transition density function of this model, it is difficult to obtain the usual closed-form expressions for the unknown parameters. Therefore, we will calculate the likelihood ratio (with respect to ${P}_{\alpha ,\beta }$ , where $\alpha$ and $\beta$ are the true parameters) for a finite set of time points $0={t}_{0}<{t}_{1}<\cdots <{t}_{n}=t$ , and then let the number of time points tend to infinity to obtain the likelihood function. From now on we shall work under the assumptions below.

Assumption 1: $\alpha ,\beta$ and $\epsilon$ are positive, and ${X}_{0}$ is positive and independent of ${W}_{t}$ .

Assumption 2: $2\alpha >{\epsilon }^{2}$ , which implies that ${X}_{t}$ cannot reach zero.

Assumption 3: ${X}_{0}$ is a positive random variable, and there is a $Q>2$ such that $E\left[{X}_{0}^{Q}\right]<\infty$ holds.

Next, the specific steps with respect to derivations of the likelihood function and parameter estimators are given below.

Assume that ${X}_{t}$ can be observed continuously throughout the time interval $0\le s\le t$ . Observations in this detail enable the true diffusion parameter ${\epsilon }^{2}$ to be determined exactly through the following result:

$\sum _{j=1}^{{2}^{n}}{\left({X}_{js{2}^{-n}}-{X}_{\left(j-1\right)s{2}^{-n}}\right)}^{2}\to {\int }_{0}^{s}\text{ }{\sigma }^{2}\left({X}_{u}\right)\text{d}u\text{ }a.s.\text{ }\text{as}\text{\hspace{0.17em}}n\to +\infty .$

For all $s\in \left[0,t\right]$ , the above equation can be rewritten descriptively as follows:

${\int }_{0}^{t}{\left(\text{d}{X}_{s}\right)}^{2}={\int }_{0}^{t}{\left(\epsilon {X}_{s}\right)}^{\text{2}}\text{d}s\text{ }a.s..$

Then ${\epsilon }^{2}$ may be regarded as known, via:

${\epsilon }^{2}=\frac{{\int }_{0}^{t}{\left(\text{d}{X}_{s}\right)}^{2}}{{\int }_{0}^{t}\text{ }{X}_{s}^{2}\text{d}s}.$ (2.1)
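Formula (2.1) can be checked on a simulated path. The following Python/NumPy sketch (the paper's own simulations use MATLAB) discretizes (1.1) by the Euler-Maruyama scheme and compares the ratio in (2.1) with the true ${\epsilon }^{2}$; all numerical choices here (parameter values, initial state, horizon, step count, seed) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical parameter values for illustration only
alpha, beta, eps, x0 = 0.5, 1.0, 0.2, 0.5
T, n = 100.0, 100_000
dt = T / n
rng = np.random.default_rng(0)

# Euler-Maruyama discretization of dX = (alpha*X - beta*X^2) dt + eps*X dW
X = np.empty(n + 1)
X[0] = x0
dW = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    X[i + 1] = X[i] + (alpha * X[i] - beta * X[i] ** 2) * dt + eps * X[i] * dW[i]

# (2.1): eps^2 = (sum of squared increments) / (integral of X_s^2 ds)
eps2_hat = np.sum(np.diff(X) ** 2) / np.sum(X[:-1] ** 2 * dt)
print(eps2_hat)  # close to eps**2 = 0.04 for a fine time grid
```

The quadratic variation in the numerator is dominated by the diffusion term, which is why the drift plays no role in identifying ${\epsilon }^{2}$ from a continuous record.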

The parameters $\alpha$ and $\beta$ enter into the description of ${X}_{t}$ in (1.1), while $\epsilon$ can be obtained exactly from (2.1). We begin with a family of probability spaces $\left(\Omega ,F,{P}_{\alpha ,\beta }\right)$ , where the real stochastic process $X=\left\{{X}_{s};s\ge 0\right\}$ on $\left(\Omega ,F\right)$ evolves according to one of the probability laws ${P}_{\alpha ,\beta }$ . For each $t\ge 0$ define

${F}_{t}=\sigma \left({X}_{s}:0\le s\le t,N\right),$ (2.2)

the σ-field generated by the sets $\left\{w\in \Omega :{X}_{s}\left(w\right)\in B\right\}$ , where B is a Borel set on R, together with the class N of ${P}_{\alpha ,\beta }$ -null sets of $F$ .

Define

$\rho \left(t\right)={\int }_{0}^{t}{\left(\epsilon {X}_{s}\right)}^{2}\text{d}s,\text{ }t\ge 0$ (2.3)

and

${Y}_{\rho \left(t\right)}={X}_{t},\text{ }t\ge 0.$ (2.4)

From the theory of random time transformations, we have

$\text{d}{Y}_{t}=\frac{\alpha {Y}_{t}-\beta {Y}_{t}^{2}}{{\left(\epsilon {Y}_{t}\right)}^{2}}\text{d}t+\text{d}{{W}^{\prime }}_{t}$ (2.5)

where ${{W}^{\prime }}_{t}$ is a standard Brownian motion. In (1.1), ${W}_{t}$ is a Brownian motion with respect to the filtration ${F}_{t}$ and ${X}_{0}=x$ is an ${F}_{0}$ -measurable random variable with distribution $\mu$ . Then, for the process of optional times (2.3), we can create the inverse process ${X}_{t}={Y}_{\rho \left(t\right)}$ with filtration ${G}_{t}={F}_{\rho \left(t\right)}$ , and we can find a Brownian motion ${{W}^{\prime }}_{t}$ with respect to ${G}_{t}$ ; the existence and uniqueness of ${{W}^{\prime }}_{t}$ can be found in the references. From the result of Kailath and Zakai, we know

$\begin{array}{c}\frac{\text{d}{P}_{\alpha ,\beta }^{t}}{\text{d}{{W}^{\prime }}_{t}}=\mathrm{exp}\left\{{\int }_{0}^{\rho \left(t\right)}\frac{\alpha {Y}_{s}-\beta {Y}_{s}^{2}}{{\epsilon }^{2}{Y}_{s}^{2}}\text{d}{Y}_{s}-\frac{1}{2}{\int }_{0}^{\rho \left(t\right)}\frac{{\left(\alpha {Y}_{s}-\beta {Y}_{s}^{2}\right)}^{2}}{{\epsilon }^{4}{Y}_{s}^{4}}\text{d}s\right\}\\ =\mathrm{exp}\left\{{\int }_{0}^{t}\frac{\alpha {X}_{s}-\beta {X}_{s}^{2}}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}{X}_{s}-\frac{1}{2}{\int }_{0}^{t}\frac{{\left(\alpha {X}_{s}-\beta {X}_{s}^{2}\right)}^{2}}{{\epsilon }^{4}{X}_{s}^{4}}\text{d}\rho \left(s\right)\right\}\end{array}$ (2.6)

which, on substituting $\text{d}\rho \left(s\right)={\epsilon }^{2}{X}_{s}^{2}\text{d}s$ ,

$=\mathrm{exp}\left\{{\int }_{0}^{t}\frac{\left(\alpha {X}_{s}-\beta {X}_{s}^{2}\right)}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}{X}_{s}-\frac{1}{2}{\int }_{0}^{t}\frac{{\left(\alpha {X}_{s}-\beta {X}_{s}^{2}\right)}^{2}}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}s\right\}.$ (2.7)

Similarly, we can get,

$\frac{\text{d}{P}_{\stackrel{^}{\alpha },\stackrel{^}{\beta }}^{t}}{\text{d}{{W}^{\prime }}_{t}}=\mathrm{exp}\left\{{\int }_{0}^{t}\frac{\left(\stackrel{^}{\alpha }{X}_{s}-\stackrel{^}{\beta }{X}_{s}^{2}\right)}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}{X}_{s}-\frac{1}{2}{\int }_{0}^{t}\frac{{\left(\stackrel{^}{\alpha }{X}_{s}-\stackrel{^}{\beta }{X}_{s}^{2}\right)}^{2}}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}s\right\}.$ (2.8)

Thus,

$\frac{\text{d}{P}_{\stackrel{^}{\alpha },\stackrel{^}{\beta }}^{t}}{\text{d}{P}_{\alpha ,\beta }^{t}}=\left[\frac{\text{d}{P}_{\stackrel{^}{\alpha },\stackrel{^}{\beta }}^{t}}{\text{d}{{W}^{\prime }}_{t}}\right]/\left[\frac{\text{d}{P}_{\alpha ,\beta }^{t}}{\text{d}{{W}^{\prime }}_{t}}\right].$ (2.9)

Writing ${P}_{\alpha ,\beta }^{t}$ for the restriction of ${P}_{\alpha ,\beta }$ to ${F}_{t}$ , we can now define the following likelihood function as a Radon-Nikodym derivative:

$\begin{array}{l}{L}_{t}\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)=\frac{\text{d}{p}_{\stackrel{^}{\alpha },\stackrel{^}{\beta }}^{t}}{\text{d}{p}_{\alpha ,\beta }^{t}}=\mathrm{exp}\left\{{\int }_{0}^{t}\frac{\left(\stackrel{^}{\alpha }{X}_{s}-\stackrel{^}{\beta }{X}_{s}^{2}\right)-\left(\alpha {X}_{s}-\beta {X}_{s}^{2}\right)}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}{X}_{s}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1}{2}{\int }_{0}^{t}\frac{{\left(\stackrel{^}{\alpha }{X}_{s}-\stackrel{^}{\beta }{X}_{s}^{2}\right)}^{2}-{\left(\alpha {X}_{s}-\beta {X}_{s}^{2}\right)}^{2}}{{\epsilon }^{2}{X}_{s}^{2}}\text{d}s\right\},\text{ }\forall t>0.\end{array}$ (2.10)

Let ${l}_{t}\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)=\mathrm{log}{L}_{t}\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)$ (log denotes the natural logarithm). Solving the following equations

$\left\{\begin{array}{l}\frac{\partial {l}_{t}\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)}{\partial \stackrel{^}{\alpha }}=0\hfill \\ \frac{\partial {l}_{t}\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)}{\partial \stackrel{^}{\beta }}=0,\hfill \end{array}$ (2.11)

we can obtain the estimators as follows:

$\left\{\begin{array}{l}\stackrel{^}{\alpha }=\frac{{\int }_{0}^{t}\frac{\text{d}{X}_{s}}{{X}_{s}}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-\left({X}_{t}-{X}_{0}\right){\int }_{0}^{t}{X}_{s}\text{d}s}{t{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-{\left({\int }_{0}^{t}{X}_{s}\text{d}s\right)}^{2}},\\ \stackrel{^}{\beta }=\frac{{\int }_{0}^{t}\frac{\text{d}{X}_{s}}{{X}_{s}}{\int }_{0}^{t}{X}_{s}\text{d}s-t\left({X}_{t}-{X}_{0}\right)}{t{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-{\left({\int }_{0}^{t}{X}_{s}\text{d}s\right)}^{2}}.\end{array}$ (2.12)
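The estimators (2.12) are straightforward to evaluate on a discretized path. The sketch below (Python/NumPy rather than the MATLAB used in Section 4) simulates one Euler-Maruyama path of (1.1), approximates the stochastic and Riemann integrals in (2.12) by Itô-type left-point sums, and evaluates $\stackrel{^}{\alpha }$ and $\stackrel{^}{\beta }$; the true values $\alpha =0.3,\beta =0.6$ follow Section 4, while $\epsilon$, the initial state and the step size are hypothetical choices.

```python
import numpy as np

# True values alpha=0.3, beta=0.6 follow Section 4; eps and x0 are hypothetical
alpha, beta, eps, x0 = 0.3, 0.6, 0.1, 0.5
T, n = 500.0, 100_000
dt = T / n
rng = np.random.default_rng(1)

# Euler-Maruyama path of (1.1)
X = np.empty(n + 1)
X[0] = x0
dW = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    X[i + 1] = X[i] + (alpha * X[i] - beta * X[i] ** 2) * dt + eps * X[i] * dW[i]

# Ito-type (left-point) discretizations of the integrals in (2.12)
I1 = np.sum(np.diff(X) / X[:-1])   # int_0^t dX_s / X_s
S1 = np.sum(X[:-1] * dt)           # int_0^t X_s ds
S2 = np.sum(X[:-1] ** 2 * dt)      # int_0^t X_s^2 ds
D = T * S2 - S1 ** 2               # common denominator in (2.12)

alpha_hat = (I1 * S2 - (X[-1] - X[0]) * S1) / D
beta_hat = (I1 * S1 - T * (X[-1] - X[0])) / D
print(alpha_hat, beta_hat)  # close to 0.3 and 0.6 for large t
```

Left-point sums are essential here: the integrals in (2.12) are Itô integrals, so evaluating the integrand at the right endpoint would introduce a systematic bias.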

3. Main Results and Proofs

In this section, we shall give the asymptotic distribution of the estimation errors and the corresponding proofs. It is easy to know that the solution of Equation (1.1) has the following expression:

${X}_{t}=\frac{\mathrm{exp}\left\{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)t+\epsilon {W}_{t}\right\}}{{x}^{-1}+\beta {\int }_{0}^{t}\mathrm{exp}\left\{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)s+\epsilon {W}_{s}\right\}\text{d}s}.$ (3.1)
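The explicit form (3.1) can be sanity-checked numerically: evaluate it along one simulated Brownian path and compare with an Euler-Maruyama solution driven by the same increments. The Python sketch below does this; all parameter values and grid choices are illustrative assumptions.

```python
import numpy as np

# Hypothetical parameter values for illustration only
alpha, beta, eps, x0 = 0.5, 1.0, 0.2, 0.3
T, n = 10.0, 100_000
dt = T / n
rng = np.random.default_rng(2)
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))  # Brownian path at grid points
t = np.linspace(0.0, T, n + 1)

# Closed form (3.1): X_t = e^{(alpha-eps^2/2)t + eps W_t}
#                        / (x0^{-1} + beta * int_0^t e^{(alpha-eps^2/2)s + eps W_s} ds)
E = np.exp((alpha - eps ** 2 / 2) * t + eps * W)
integral = np.concatenate(([0.0], np.cumsum(E[:-1] * dt)))  # left-point Riemann sum
X_closed = E / (1.0 / x0 + beta * integral)

# Euler-Maruyama solution driven by the SAME Brownian increments
X = np.empty(n + 1)
X[0] = x0
for i in range(n):
    X[i + 1] = X[i] + (alpha * X[i] - beta * X[i] ** 2) * dt + eps * X[i] * dW[i]

print(np.max(np.abs(X - X_closed)))  # small discretization error
```

At $t=0$ the denominator of (3.1) reduces to ${x}^{-1}$, so ${X}_{0}=x$ as required.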

Firstly, let us give the following four lemmas.

Lemma 1 (The law of iterated logarithm) 

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{sup}\frac{{W}_{t}}{\sqrt{2t\mathrm{log}\mathrm{log}t}}=1\text{ }a.s..$ (3.2)

Lemma 2  Assume that there is a positive number C such that

$-\lambda :={\lambda }_{\mathrm{max}}^{+}\left(-2C\beta \right)=\underset{{X}_{t}\in {R}_{+},|{X}_{t}|=1}{\mathrm{sup}}\left(-2C\beta \right){X}_{t}^{2}<0.$ (3.3)

Then, for any given initial value $x\in {ℝ}_{+}$ , the solution ${X}_{t}$ of equation $\text{d}{X}_{t}=\left(\alpha {X}_{t}-\beta {X}_{t}^{2}\right)\text{d}t+\epsilon {X}_{t}\text{d}{W}_{t}$ has the following properties

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{sup}{\int }_{t}^{t+1}\text{ }\text{ }\mathbb{E}{|X\left(s\right)|}^{2}\text{d}s\le \frac{4{C}^{2}|\alpha |}{{\lambda }^{2}}\left(1+\frac{|C\alpha |}{C}\right),$ (3.4)

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{sup}\mathbb{E}\left(\underset{t\le s\le t+1}{\mathrm{sup}}|{X}_{s}|\le \frac{2{C}^{2}|\alpha |}{C}\left(1+\frac{|C\alpha |}{C}\right)+\frac{6|C\epsilon |}{C}\sqrt{\frac{{C}^{2}|\alpha |}{{\lambda }^{2}}\left(1+\frac{|C\alpha |}{C}\right)}\right),$ (3.5)

and

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{sup}\frac{\mathrm{log}\left(|X\left(t\right)|\right)}{\mathrm{log}t}\le 1\text{ }a.s..$ (3.6)

Lemma 3  If the conditions ${\lambda }_{\mathrm{max}}^{+}\left(-2C\beta \right)<0$ and $\alpha >\frac{{\epsilon }^{2}}{2}$ hold, then for any given initial value $x\in {ℝ}_{+}$ , the solution $X\left(t\right)$ of $\text{d}{X}_{t}=\left(\alpha {X}_{t}-\beta {X}_{t}^{2}\right)\text{d}t+\epsilon {X}_{t}\text{d}{W}_{t}$ has the property that

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{inf}\frac{\mathrm{log}\left(|X\left(t\right)|\right)}{\mathrm{log}\left(t\right)}\ge -\frac{{\epsilon }^{2}}{2\alpha -{\epsilon }^{2}}\text{ }a.s..$ (3.7)

Lemma 4 Assume that ${X}_{t}$ is a solution to the stochastic differential Equation (1.1) and Assumptions 1 - 3 hold. Then, we have

$\underset{t\to +\infty }{\mathrm{lim}}\frac{1}{t}{\int }_{0}^{t}{X}_{s}\text{d}s=\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }.$ (3.8)

Proof: It is known from Lemma 2 that the solution (3.1) obeys

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{sup}\frac{\mathrm{log}\left({X}_{t}\right)}{\mathrm{log}t}\le 1\text{ }a.s..$

Meanwhile, from Assumption 2 we have $2\alpha >{\epsilon }^{2}$ . Then, by Lemma 3, the solution (3.1) satisfies

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{inf}\frac{\mathrm{log}\left({X}_{t}\right)}{\mathrm{log}t}\ge -\frac{{\epsilon }^{2}}{2\alpha -{\epsilon }^{2}}\text{ }a.s..$

Consequently,

$\underset{t\to +\infty }{\mathrm{lim}}\frac{1}{t}\mathrm{log}\left({X}_{t}\right)=0\text{ }a.s..$

By the Itô formula, it is easy to know that

$\mathrm{log}\left({X}_{t}\right)=\mathrm{log}\left(x\right)+\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)t-\beta {\int }_{0}^{t}{X}_{s}\text{d}s+\epsilon {W}_{t}.$

Dividing both sides by t and then letting $t\to +\infty$ , we obtain

$\underset{t\to +\infty }{\mathrm{lim}}\frac{1}{t}{\int }_{0}^{t}{X}_{s}\text{d}s=\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }\text{ }a.s..$ (3.9)

The proof is complete. □

Remark: (The one-dimensional Itô formula) Let $x\left(t\right)$ be an Itô process on $t\ge 0$ with the stochastic differential

$\text{d}x\left(t\right)=f\left(t\right)\text{d}t+g\left(t\right)\text{d}B\left(t\right),$ (3.10)

where $f\in {L}^{1}\left({R}_{+};R\right)$ and $g\in {L}^{2}\left({R}_{+};R\right)$ . Let $V\in {C}^{2,1}\left(R×{R}_{+};R\right)$ . Then $V\left(x\left(t\right),t\right)$ is again an Itô process with the stochastic differential given by

$\begin{array}{c}\text{d}V\left(x\left(t\right),t\right)=\left[{V}_{t}\left(x\left(t\right),t\right)+{V}_{x}\left(x\left(t\right),t\right)f\left(t\right)+\frac{1}{2}{V}_{xx}\left(x\left(t\right),t\right){g}^{2}\left(t\right)\right]\text{d}t\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{V}_{x}\left(x\left(t\right),t\right)g\left(t\right)\text{d}{B}_{t}\text{ }a.s..\end{array}$ (3.11)
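The Itô identity used in the proof of Lemma 4, $\mathrm{log}{X}_{t}=\mathrm{log}x+\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)t-\beta {\int }_{0}^{t}{X}_{s}\text{d}s+\epsilon {W}_{t}$, can also be verified numerically. The sketch below compares both sides on one Euler-Maruyama path; the parameter values and grid are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical parameter values satisfying Assumption 2 (2*alpha > eps^2)
alpha, beta, eps, x0 = 0.5, 1.0, 0.2, 0.5
T, n = 50.0, 100_000
dt = T / n
rng = np.random.default_rng(3)

# Euler-Maruyama path of (1.1)
X = np.empty(n + 1)
X[0] = x0
dW = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    X[i + 1] = X[i] + (alpha * X[i] - beta * X[i] ** 2) * dt + eps * X[i] * dW[i]

# Ito formula applied to V(x) = log(x):
# log X_t = log x + (alpha - eps^2/2) t - beta int_0^t X_s ds + eps W_t
W_T = np.sum(dW)
S1 = np.sum(X[:-1] * dt)  # int_0^t X_s ds
lhs = np.log(X[-1])
rhs = np.log(x0) + (alpha - eps ** 2 / 2) * T - beta * S1 + eps * W_T
print(lhs, rhs)  # the two sides agree up to discretization error
```

The $-\frac{{\epsilon }^{2}}{2}t$ term is exactly the Itô correction $\frac{1}{2}{V}_{xx}{g}^{2}$ from (3.11); dropping it makes the two sides drift apart linearly in t.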

Theorem 1 Let ${X}_{t}$ be a solution to the stochastic differential Equation (1.1) and Assumptions 1-3 hold. Then, we have

$\underset{t\to +\infty }{lim}\frac{1}{t}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s=\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}.$ (3.12)

Proof: It follows from (1.1) that ${X}_{t}-x=\alpha {\int }_{0}^{t}{X}_{s}\text{d}s-\beta {\int }_{0}^{t}{X}_{s}^{2}\text{d}s+\epsilon {\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}$ . Dividing both sides by t and then letting $t\to +\infty$ , one has

$\underset{t\to +\infty }{\mathrm{lim}}\frac{{X}_{t}-x}{t}=\underset{t\to +\infty }{\mathrm{lim}}\frac{\alpha }{t}{\int }_{0}^{t}{X}_{s}\text{d}s-\underset{t\to +\infty }{lim}\frac{\beta }{t}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s+\underset{t\to +\infty }{lim}\frac{\epsilon }{t}{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}.$

Since

$\begin{array}{c}\mathbb{E}\left[{\int }_{0}^{t}\text{ }{X}_{s}\text{d}{W}_{s}\right]={\int }_{0}^{t}\text{ }\mathbb{E}\left[{X}_{s}\text{d}{W}_{s}\right]\\ ={\int }_{0}^{t}\text{ }\mathbb{E}\left[\mathbb{E}\left[{X}_{s}\text{d}{W}_{s}|{F}_{s}\right]\right]\\ ={\int }_{0}^{t}\text{ }\mathbb{E}\left[{X}_{s}\mathbb{E}\left[\text{d}{W}_{s}\right]\right]\\ =0,\end{array}$

and

$\begin{array}{c}\mathbb{E}\left[{\int }_{0}^{t}\text{ }{X}_{s}\text{d}{W}_{s}|{F}_{{t}_{-}}\right]={\int }_{0}^{{t}_{-}}\text{ }{X}_{s}\text{d}{W}_{s}+\mathbb{E}\left[{\int }_{{t}_{-}}^{t}{X}_{s}\text{d}{W}_{s}|{F}_{{t}_{-}}\right]\\ ={\int }_{0}^{{t}_{-}}\text{ }{X}_{s}\text{d}{W}_{s}+{\int }_{{t}_{-}}^{t}{X}_{s}\left[\text{d}{W}_{s}\right]\\ ={\int }_{0}^{{t}_{-}}\text{ }{X}_{s}\text{d}{W}_{s}.\end{array}$

${\int }_{0}^{{t}_{-}}{X}_{s}\text{d}{W}_{s}$ is a martingale with zero mean with respect to the σ-algebra ${F}_{{t}_{-}}$ . Moreover, according to (1.1), ${X}_{{t}_{i}}-{X}_{{t}_{i-1}}=\left(\alpha {X}_{{t}_{i-1}}-\beta {X}_{{t}_{i-1}}^{2}\right)\Delta +\epsilon {X}_{{t}_{i-1}}\sqrt{\Delta }{ϵ}_{{t}_{i}}$ . The function $f\left(x\right)=x+\left(\alpha x-\beta {x}^{2}\right)\Delta$ (where $\Delta =\mathrm{max}|{t}_{i}-{t}_{i-1}|$ , $0\le {t}_{i}\le t$ ,

${ϵ}_{{t}_{i}}~N\left(0,1\right)$ ) attains its maximum at $x=\frac{\alpha \Delta +1}{2\beta }$ , thus,

${X}_{t}\le \frac{{\left(\alpha \Delta +1\right)}^{2}}{4\beta }<\infty .$

Therefore, $\mathbb{E}\left[{X}_{s}^{2}\right]$ is bounded. It then follows that

$\underset{t\to +\infty }{\mathrm{lim}}\mathrm{sup}\frac{{\int }_{0}^{t}\text{ }{X}_{s}^{2}\text{d}s}{t}<\infty \text{ }a.s..$ (3.13)

By the strong law of large numbers of martingales, we have

$\underset{t\to +\infty }{\mathrm{lim}}\frac{{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{t}=0\text{ }a.s..$

Together with (3.1) and Lemma 1, we obtain

$\begin{array}{l}0<\underset{t\to +\infty }{\mathrm{lim}}{X}_{t}=\underset{t\to +\infty }{\mathrm{lim}}\frac{\mathrm{exp}\left\{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)t+2\sqrt{t\mathrm{log}\mathrm{log}t}\right\}}{{x}^{-1}+\beta {\int }_{0}^{t}\mathrm{exp}\left\{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)s+2\sqrt{s\mathrm{log}\mathrm{log}s}\right\}\text{d}s}.\hfill \end{array}$ (3.14)

By L'Hospital's rule, we get

$\underset{t\to +\infty }{\mathrm{lim}}\frac{\mathrm{exp}\left\{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)t+2\sqrt{t\mathrm{log}\mathrm{log}t}\right\}}{{x}^{-1}+\beta {\int }_{0}^{t}\mathrm{exp}\left\{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)s+2\sqrt{s\mathrm{log}\mathrm{log}s}\right\}\text{d}s}=\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }.$

Then,

$\underset{t\to +\infty }{\mathrm{lim}}\frac{{X}_{t}}{t}=0\text{ }a.s..$

Substituting these limits into the expression obtained from (1.1), we get

$\underset{t\to +\infty }{\mathrm{lim}}\frac{1}{t}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s=\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}\text{ }a.s..$ (3.15)

The proof is now complete. □

Remark: (L'Hospital's rule) The general form of L'Hospital's rule covers many cases. Let c and L be extended real numbers. The real-valued functions f and g are assumed to be differentiable on an open interval with endpoint c, and additionally ${g}^{\prime }\left(x\right)\ne 0$ on the interval. It is also assumed that ${\mathrm{lim}}_{x\to c}\frac{{f}^{\prime }\left(x\right)}{{g}^{\prime }\left(x\right)}=L$ . Thus the rule applies to situations in which the ratio of the derivatives has a finite or infinite limit, and not to situations in which that ratio fluctuates permanently as x gets closer and closer to c.

If either

$\underset{x\to c}{\mathrm{lim}}f\left(x\right)=\underset{x\to c}{\mathrm{lim}}g\left(x\right)=0$ (3.16)

or

$\underset{x\to c}{\mathrm{lim}}|f\left(x\right)|=\underset{x\to c}{\mathrm{lim}}|g\left(x\right)|=\infty ,$ (3.17)

then

$\underset{x\to c}{\mathrm{lim}}\frac{f\left(x\right)}{g\left(x\right)}=L.$ (3.18)
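Before turning to consistency, the two ergodic limits (3.8) and (3.12) can be illustrated numerically: on a long Euler-Maruyama path, the time averages of ${X}_{s}$ and ${X}_{s}^{2}$ should approach $\frac{\alpha -{\epsilon }^{2}/2}{\beta }$ and $\frac{{\alpha }^{2}-\alpha {\epsilon }^{2}/2}{{\beta }^{2}}$ respectively. The parameter values below are hypothetical and satisfy Assumption 2.

```python
import numpy as np

# Hypothetical values with 2*alpha > eps^2 (Assumption 2)
alpha, beta, eps, x0 = 0.5, 1.0, 0.2, 0.5
T, n = 1000.0, 200_000
dt = T / n
rng = np.random.default_rng(4)

# Long Euler-Maruyama path of (1.1)
X = np.empty(n + 1)
X[0] = x0
dW = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    X[i + 1] = X[i] + (alpha * X[i] - beta * X[i] ** 2) * dt + eps * X[i] * dW[i]

m1 = np.sum(X[:-1] * dt) / T        # (1/t) int_0^t X_s ds
m2 = np.sum(X[:-1] ** 2 * dt) / T   # (1/t) int_0^t X_s^2 ds

m1_theory = (alpha - eps ** 2 / 2) / beta                    # limit (3.8)
m2_theory = (alpha ** 2 - alpha * eps ** 2 / 2) / beta ** 2  # limit (3.12)
print(m1, m1_theory)
print(m2, m2_theory)
```

Note that $m2\_theory = \alpha \cdot m1\_theory/\beta$, which is exactly the balance relation obtained by dividing (1.1) by t and letting the martingale term vanish.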

Theorem 2 Under Assumptions 1-3, $\stackrel{^}{\alpha }$ and $\stackrel{^}{\beta }$ are strongly consistent.

Proof: Substituting (1.1) into the expression of $\stackrel{^}{\alpha }$ yields

$\stackrel{^}{\alpha }-\alpha =\frac{\epsilon {W}_{t}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-\epsilon {\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}{\int }_{0}^{t}{X}_{s}\text{d}s}{t{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-{\left({\int }_{0}^{t}{X}_{s}\text{d}s\right)}^{2}}.$ (3.19)

Letting $t\to +\infty$ , and according to Lemma 4 and Theorem 1, we have

$\begin{array}{c}\stackrel{^}{\alpha }-\alpha =\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\int }_{0}^{t}{X}_{s}^{2}\text{d}s}{t}-\frac{\epsilon }{t}{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}\frac{1}{t}{\int }_{0}^{t}{X}_{s}\text{d}s}{\frac{1}{t}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-{\left(\frac{1}{t}{\int }_{0}^{t}{X}_{s}\text{d}s\right)}^{2}}\\ =\underset{t\to +\infty }{\mathrm{lim}}\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}-\frac{\epsilon }{t}{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }}{\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}-\frac{{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)}^{2}}{{\beta }^{2}}}\\ =\frac{2\alpha }{\epsilon }\underset{t\to +\infty }{\mathrm{lim}}\frac{{W}_{t}}{t}-\frac{2\beta }{\epsilon }\underset{t\to +\infty }{\mathrm{lim}}\frac{{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{t}.\end{array}$

Since $\underset{t\to +\infty }{\mathrm{lim}}\frac{{W}_{t}}{t}=0$ and $\underset{t\to +\infty }{\mathrm{lim}}\frac{{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{t}=0$ a.s., it follows that

$\stackrel{^}{\alpha }-\alpha \to 0\text{ }a.s..$ (3.20)

Substituting (1.1) into the expression of $\stackrel{^}{\beta }$ yields

$\stackrel{^}{\beta }-\beta =\frac{\epsilon {W}_{t}{\int }_{0}^{t}{X}_{s}\text{d}s-t\epsilon {\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{t{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-{\left({\int }_{0}^{t}{X}_{s}\text{d}s\right)}^{2}}.$ (3.21)

Similarly, we have

$\begin{array}{c}\stackrel{^}{\beta }-\beta =\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}-\frac{\epsilon }{t}{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{\frac{1}{t}{\int }_{0}^{t}{X}_{s}^{2}\text{d}s-{\left(\frac{1}{t}{\int }_{0}^{t}{X}_{s}\text{d}s\right)}^{2}}=\underset{t\to +\infty }{\mathrm{lim}}\frac{\frac{\epsilon {W}_{t}}{t}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }-\frac{\epsilon }{t}{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}-\frac{{\left(\alpha -\frac{{\epsilon }^{2}}{2}\right)}^{2}}{{\beta }^{2}}}\\ =\frac{2\beta }{\epsilon }\underset{t\to +\infty }{\mathrm{lim}}\frac{{W}_{t}}{t}-\frac{4{\beta }^{2}}{2\alpha \epsilon -{\epsilon }^{3}}\underset{t\to +\infty }{\mathrm{lim}}\frac{{\int }_{0}^{t}{X}_{s}\text{d}{W}_{s}}{t}.\end{array}$

Similarly, it follows that

$\stackrel{^}{\beta }-\beta \to 0\text{ }a.s..$ (3.22)

Thus, $\stackrel{^}{\alpha }$ and $\stackrel{^}{\beta }$ are strongly consistent. The proof is complete. □

Theorem 3 Under Assumptions 1-3, we have

$\begin{array}{l}\sqrt{\frac{\epsilon t}{2\alpha }}\left(\stackrel{^}{\alpha }-\alpha \right)\stackrel{L}{\to }N\left(0,1\right),\\ \sqrt{\frac{\epsilon t}{2\beta }}\left(\stackrel{^}{\beta }-\beta \right)\stackrel{L}{\to }N\left(0,1\right).\end{array}$ (3.23)

Proof: It follows from (1.1) and (2.12) that

$\stackrel{^}{\alpha }-\alpha =\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\int }_{0}^{t}{X}_{s}^{2}\text{d}s}{t}-\frac{{X}_{t}-x}{t}\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}+\alpha {\left(\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}\right)}^{2}-\beta \frac{{\int }_{0}^{t}{X}_{s}^{2}\text{d}s}{t}\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}}{\frac{{\int }_{0}^{t}{X}_{s}^{2}\text{d}s}{t}-{\left(\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}\right)}^{2}}.$

Substituting (3.9) and (3.15) into the above expression and then letting $t\to +\infty$ , we have

$\stackrel{^}{\alpha }-\alpha =\underset{t\to +\infty }{lim}\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}-\frac{{X}_{t}-x}{t}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }}{\frac{2\alpha {\epsilon }^{2}-{\epsilon }^{4}}{4{\beta }^{2}}}.$

According to (3.13), one has

$\underset{t\to +\infty }{lim}\sqrt{t}\frac{{X}_{t}-x}{t}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }=0\text{ }a.s..$

Then,

$\underset{t\to +\infty }{lim}\sqrt{\frac{\epsilon t}{2\alpha }}\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\alpha }^{2}-\frac{\alpha {\epsilon }^{2}}{2}}{{\beta }^{2}}-\frac{{X}_{t}-x}{t}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }}{\frac{2\alpha {\epsilon }^{2}-{\epsilon }^{4}}{4{\beta }^{2}}}\stackrel{L}{\to }N\left(0,1\right).$ (3.24)

Similarly, it follows easily from (1.1) and (2.12) that

$\stackrel{^}{\beta }-\beta =\frac{\frac{\epsilon {W}_{t}}{t}\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}-\frac{{X}_{t}-{x}_{0}}{t}+\frac{\alpha {\int }_{0}^{t}{X}_{s}\text{d}s}{t}-\frac{\beta {\int }_{0}^{t}{X}_{s}^{2}\text{d}s}{t}}{\frac{{\int }_{0}^{t}{X}_{s}^{2}\text{d}s}{t}-{\left(\frac{{\int }_{0}^{t}{X}_{s}\text{d}s}{t}\right)}^{2}}.$

Substituting (3.9) and (3.15) into the above expression and then letting $t\to +\infty$ yields

$\stackrel{^}{\beta }-\beta =\underset{t\to +\infty }{\mathrm{lim}}\frac{\frac{\epsilon {W}_{t}}{t}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }-\frac{{X}_{t}-x}{t}}{\frac{2\alpha {\epsilon }^{2}-{\epsilon }^{4}}{4{\beta }^{2}}}$ (3.25)

Therefore,

$\underset{t\to +\infty }{lim}\sqrt{\frac{\epsilon t}{2\beta }}\frac{\frac{\epsilon {W}_{t}}{t}\frac{\alpha -\frac{{\epsilon }^{2}}{2}}{\beta }-\frac{{X}_{t}-x}{t}}{\frac{2\alpha {\epsilon }^{2}-{\epsilon }^{4}}{4{\beta }^{2}}}\stackrel{L}{\to }N\left(0,1\right).$ (3.26)

The proof is complete. □

4. Simulation

In this section, a numerical simulation example shall be presented to demonstrate the effectiveness of the proposed approach.

The simulation is based on (2.6), (2.7) and (2.12). First, according to (2.6) and (2.7), for given values of $\alpha ,\beta$ and $t$ , such as $\alpha =0.3,\beta =0.6$ and $t=500$ , we generate the sample values based on the likelihood ratio estimation using MATLAB. Then, substituting the sample values into (2.12), the values of $\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)$ can be obtained. Subsequently, we calculate the average values of the estimators. Finally, the average errors between the estimators and the true values can also be calculated. Simulation results are shown in Table 1, where the time is denoted by "t" and the likelihood ratio estimator by "LR". Table 1 lists the values of "$\stackrel{^}{\alpha }$-LR", "$\stackrel{^}{\beta }$-LR" and the average errors of "LR". Table 1 illustrates that the average errors of $\alpha ,\beta$ depend on the size of the given values of $\alpha ,\beta$ . Under the hypothesis of normal distribution, no obvious difference can be found between the estimators and the true values, so the estimators are good. From Table 1 we can see clearly that the estimators get closer and closer to the true values as the time t increases, and the continuous-time estimation performs better than discrete observation. (The data come from the Statistical Data website and were simulated in MATLAB. The confidence intervals are $\left[0.29,0.31\right]$ for $\alpha =0.3$ and $\left[0.59,0.61\right]$ for $\beta =0.6$ ; $\left[0.39,0.41\right]$ for $\alpha =0.4$ and $\left[0.69,0.71\right]$ for $\beta =0.7$ ; $\left[0.49,0.51\right]$ for $\alpha =0.5$ and $\left[0.79,0.81\right]$ for $\beta =0.8$ ; $\left[0.59,0.61\right]$ for $\alpha =0.6$ and $\left[0.89,0.91\right]$ for $\beta =0.9$ .)

Table 1. Likelihood ratio estimator simulation results of $\alpha ,\beta$ .

5. Conclusion

In this paper, the parameter estimation problem has been studied for the continuous time stochastic logistic diffusion model by using the likelihood ratio. Explicit expressions for the estimation errors have been given, and the corresponding asymptotic properties have been proved by applying the law of the iterated logarithm, random time transformations, the stationary distribution of solutions of stochastic differential equations, and the law of large numbers for martingales. To obtain more accurate results, the continuous observation method has been used, and the proposed estimators are closer to the true values, as demonstrated by a simulation example. In future research, we will consider the state estimation problem for nonlinear systems with incomplete observation and nonlinear systems with random disturbance caused by Lévy jumps or Poisson jumps.

Cite this paper

Zheng, Z.W., Shu, H.S., Kan, X., Fang, Y.Y. and Zhang, X. (2017) Parameter Estimation for the Continuous Time Stochastic Logistic Diffusion Model. Open Journal of Statistics, 7, 1039-1052. https://doi.org/10.4236/ojs.2017.76072

References

1. Makate, N. and Sattayatham, P. (2015) Stochastic Volatility Jump-Diffusion Model for Option Pricing. Journal of Huaiyin Teachers College, 01, 90-97.

2. Lochstoer, L.A., Craine, R. and Syrtveit, K. (2000) Estimation of a Stochastic-Volatility Jump-Diffusion Model. Revista de Análisis Económico, 15, 61-87.

3. Intarasit, A. and Sattayatham, P. (2011) Option Pricing for a Jump Diffusion Model with Fractional Stochastic Volatility. Journal of Nonlinear Analysis and Optimization, 11, 239-251.

4. Roman-Roman, P. and Torres-Ruiz, F. (2012) Modelling Logistic Growth by a New Diffusion Process: Application to Biological Systems. Biosystems, 110, 9-21. https://doi.org/10.1016/j.biosystems.2012.06.004

5. Moilanen, A. (2004) SPOMSIM: Software for Stochastic Patch Occupancy Models of Metapopulation Dynamics. Ecological Modelling, 179, 533-550. https://doi.org/10.1016/j.ecolmodel.2004.04.019

6. Moller, J., Bergmann, K., Christiansen, L. and Madsen, H. (2012) Development of a Restricted State Space Stochastic Differential Equation Model for Bacterial Growth in Rich Media. Journal of Theoretical Biology, 305, 78-87. https://doi.org/10.1016/j.jtbi.2012.04.015

7. Knape, J. and De Valpine, P. (2012) Fitting Complex Population Models by Combining Particle Filters with Markov Chain Monte Carlo. Ecology, 93, 256. https://doi.org/10.1890/11-0797.1

8. Ji, L. (2016) Forecasting Petroleum Consumption in China: Comparison of Three Models. Journal of the Energy Institute, 84, 34-37. https://doi.org/10.1179/014426011X12901840102526

9. Sakai, Y. (2008) Diffusion in Turbulent Pipe Flow Using a Stochastic Model. Bulletin of the JSME, 39, 667-675.

10. Chen, J. and Chang, W. (1998) Modeling Differential Diffusion Effects in Turbulent Nonreacting/Reacting Jets with Stochastic Mixing Models. Combustion Science and Technology, 133, 343-375. https://doi.org/10.1080/00102209808952039

11. Pan, J., Gray, A., Greenhalgh, D. and Mao, X. (2014) Parameter Estimation for the Stochastic SIS Epidemic Model. Statistical Inference for Stochastic Processes, 17, 75-98. https://doi.org/10.1007/s11203-014-9091-8

12. Xu, W. (2014) Parameter Estimation in Two-Type Continuous-State Branching Processes with Immigration. Statistics and Probability Letters, 91, 124-134. https://doi.org/10.1016/j.spl.2014.04.021

13. Zang, Q. (2016) Asymptotic Behaviour of Parametric Estimation for Nonstationary Reflected Ornstein-Uhlenbeck Processes. Journal of Mathematical Analysis and Applications, 444, 839-851. https://doi.org/10.1016/j.jmaa.2016.06.067

14. Zou, X. and Wang, K. (2014) Optimal Harvesting for a Stochastic Regime-Switching Logistic Diffusion System with Jumps. Nonlinear Analysis Hybrid Systems, 13, 32-44. https://doi.org/10.1016/j.nahs.2014.01.001

15. Campillo, F., Joannides, M. and Larramendyvalverde, I. (2013) Estimation of the Parameters of a Stochastic Logistic Growth Model. Statistics, 1-30.

16. Li, X. and Yin, G. (2016) Switching Diffusion Logistic Models Involving Singularly Perturbed Markov Chains: Weak Convergence and Stochastic Permanence. Stochastic Analysis and Applications, 35, 364-389.

17. Schurz, H. (2007) Modelling, Analysis and Discretization of Stochastic Logistic Equations. International Journal of Numerical Analysis and Modeling, 4, 178-197.

18. Ovaskainen, O. (2001) The Quasistationary Distribution of the Stochastic Logistic Model. Journal of Applied Probability, 38, 898-907. https://doi.org/10.1017/S0021900200019112

19. Pang, S., Deng, F. and Mao, X. (2008) Asymptotic Properties of Stochastic Population Dynamics. Dynamics of Continuous Discrete and Impulsive Systems Series A, 15, 603-620.

20. Mao, X., Marion, G. and Renshaw, E. (2002) Environmental Brownian Noise Suppresses Explosions in Population Dynamics. Stochastic Processes and Their Applications, 97, 95-110. https://doi.org/10.1016/S0304-4149(01)00126-0

21. Brown, B.M. and Hewitt, J. (1975) Asymptotic Likelihood Theory for Diffusion Process. Journal of Applied Probability, 12, 228-238. https://doi.org/10.1017/S0021900200047914

22. Peterson, A. (2014) Random Time Change with Some Applications. Auburn University, Auburn.

23. Kailath, T. and Zakai, M. (1971) Absolute Continuity and Radon-Nikodym Derivatives for Certain Measures Relative to Wiener Measure. Annals of Mathematical Statistics, 42, 130-140. https://doi.org/10.1214/aoms/1177693500

24. Mao, X. (2008) Stochastic Differential Equations and Applications. 2nd Edition, Horwood, Chichester. https://doi.org/10.1533/9780857099402

25. Liu, M. and Zhang, D. (2015) A Dynamic Logistic Model for Medical Resources Allocation in an Epidemic Control with Demand Forecast Updating. Journal of the Operational Research Society, 67, 841-852. https://doi.org/10.1057/jors.2015.105

26. Sun, X. and Wang, Y. (2008) Stability Analysis of a Stochastic Logistic Model with Nonlinear Diffusion Term. Applied Mathematical Modelling, 32, 2067-2075. https://doi.org/10.1016/j.apm.2007.07.012

27. Liu, M., Deng, M. and Du, B. (2015) Analysis of a Stochastic Logistic Model with Diffusion. Applied Mathematics and Computation, 266, 169-182. https://doi.org/10.1016/j.amc.2015.05.050

NOTES

*This work was supported in part by the National Natural Science Foundation of China under Grants No. 61673103 and No. 61403248.