
In this article, we discuss the long-time dynamical behavior of stochastic non-autonomous nonclassical diffusion equations with linear memory and additive white noise in a weak topological space. By a decomposition method for the solution, we establish the asymptotic compactness of the solutions and then prove the existence of a random attractor, while the time-dependent forcing term only satisfies an integral condition.

In this article, we investigate the asymptotic behavior of solutions to the following stochastic nonclassical diffusion equations driven by additive noise and linear memory:

$$\left\{\begin{array}{ll}
u_t - \Delta u_t - \Delta u - \displaystyle\int_0^{\infty} k(s)\Delta u(t-s)\,ds + \lambda u + f(x,u) = g(x,t) + h\dot{W}, & x\in\Omega,\ t>0,\\
u(x,t) = 0, & x\in\partial\Omega,\\
u(x,\tau) = u_0(x,\tau), & x\in\Omega,\ \tau\le 0,
\end{array}\right. \tag{1.1}$$

where $\Omega$ is a bounded domain in $\mathbb{R}^n$ ($n \ge 3$), the initial datum $u_0 \in H_0^1(\Omega)$, $u = u(x,t)$ is a real-valued function of $x \in \Omega$, $t \in \mathbb{R}$, $h \in H_0^1(\Omega)\cap H^2(\Omega)$, $g \in L_b^2(\mathbb{R}; L^2(\Omega))$, $\lambda > 0$, and $\dot{W}(t)$ is the generalized time derivative of an infinite-dimensional Wiener process $W(t)$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Here $\Omega = \{\omega \in C(\mathbb{R},\mathbb{R}) : \omega(0) = 0\}$, $\mathcal{F}$ is the $\sigma$-algebra of Borel sets induced by the compact-open topology of $\Omega$, and $\mathbb{P}$ is the corresponding Wiener measure on $\mathcal{F}$, for which the canonical Wiener process $W(t)$ is such that both $W(t)|_{t\ge 0}$ and $W(t)|_{t\le 0}$ are usual one-dimensional Brownian motions. We identify $W(t)$ with $\omega(t)$, that is, $W(t) = W(t,\omega) = \omega(t)$ for all $t \in \mathbb{R}$.

To consider system (1.1), we assume that the memory kernel satisfies

$$k \in C^2(\mathbb{R}^+), \qquad k(s) \ge 0, \qquad k'(s) \le 0, \qquad \forall s \in \mathbb{R}^+, \tag{1.2}$$

and there exists a constant $\delta > 0$ such that the function $\mu(s) = -k'(s)$ satisfies

$$\mu \in C^1(\mathbb{R}^+)\cap L^1(\mathbb{R}^+), \qquad \mu'(s) \le 0, \qquad \mu'(s) + \delta\mu(s) \le 0, \qquad \forall s \ge 0. \tag{1.3}$$
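For orientation, a standard example satisfying (1.2)-(1.3) (used here only as an illustration) is the exponential kernel:

$$k(s) = e^{-\delta s}, \qquad \mu(s) = -k'(s) = \delta e^{-\delta s}, \qquad \mu'(s) + \delta\mu(s) = -\delta^2 e^{-\delta s} + \delta^2 e^{-\delta s} = 0 \le 0,$$

so $k \in C^2(\mathbb{R}^+)$, $k \ge 0$, $k' \le 0$, $\mu \in C^1(\mathbb{R}^+)\cap L^1(\mathbb{R}^+)$, $\mu' \le 0$, and the last condition of (1.3) holds with equality.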

Suppose further that the nonlinearity satisfies the following conditions:

$f(x,s) = f_1(x,s) + f_2(x,s)$ for $s \in \mathbb{R}$, and for every fixed $x \in \Omega$, $f_1(x,\cdot) \in C(\mathbb{R},\mathbb{R})$ satisfies

$$f_1(x,s)s \ge \alpha_1 |s|^p - \psi_1(x), \qquad \psi_1 \in L^1(\Omega)\cap L^{\frac{2n}{n-2}}(\Omega), \tag{1.4}$$

$$|f_1(x,s)| \le \beta_1 |s|^{p-1} + \psi_2(x), \qquad \psi_2 \in L^2(\Omega)\cap L^q(\Omega), \tag{1.5}$$

and $f_2(x,\cdot) \in C(\mathbb{R},\mathbb{R})$ satisfies

$$f_2(x,s)s \ge \alpha_2 |s|^p - \gamma, \tag{1.6}$$

$$|f_2(x,s)| \le \beta_2 |s|^{p-1} + \delta, \tag{1.7}$$

where $\alpha_i, \beta_i$ ($i = 1,2$), $\gamma$, $\delta$ and $l$ are positive constants, and $q$ is the conjugate exponent of $p$.

In addition, we assume that $2 \le p \le \frac{2n}{n-2}$ for $n \ge 3$, and $p > 2$ for $n = 1, 2$.

We assume that the time-dependent external forcing term $g(x,t)$ satisfies

$$\int_{-\infty}^{t} e^{\sigma s}\|g(\cdot,s)\|^2\,ds < \infty \quad \text{for all } t \in \mathbb{R}, \tag{1.8}$$

for some constant $\sigma > 0$ to be specified later.

Equation (1.1) has its physical background in the mathematical description of viscoelastic materials. It is well known that viscoelastic materials exhibit natural damping, which is due to the special property of these materials of retaining a memory of their past history. From the materials point of view, the memory property comes from the memory kernel $k(s)$, which decays to zero at an exponential rate. Many authors have constructed mathematical models from concrete examples; see [

The long-time behavior of Equation (1.1) without additive white noise and with $\mu \equiv 0$ has been considered by many researchers; for the case of a bounded domain see, e.g., [

To the best of our knowledge, Equation (1.1) on a bounded domain, in the weak topological space and with a time-dependent forcing term, has not been considered before.

The article is organized as follows. In Section 2, we recall fundamental results on some basic function spaces and on the existence of random attractors. In Section 3, we first define a continuous random dynamical system and prove the existence and uniqueness of solutions; we then prove the existence of a closed random absorbing set, establish the asymptotic compactness of the random dynamical system, and finally prove the existence of a $\mathcal{D}$-random attractor.

In this section, we recall some basic concepts and results related to function spaces and to the existence of random attractors for RDSs. For a comprehensive exposition of this topic, there is a large volume of literature; see [

Let $A = -\Delta$ with domain $D(A) = H_0^1(\Omega)\cap H^2(\Omega)$, and consider the fractional power spaces $D(A^{\frac{r}{2}})$, $r \in \mathbb{R}$, with inner product $(\cdot,\cdot)_{D(A^{r/2})}$ and norm $\|\cdot\|_{D(A^{r/2})}$. For convenience, we write $H_r = D(A^{\frac{r}{2}})$ with norm $\|\cdot\|_{H_r} = \|\cdot\|_{D(A^{r/2})}$; in particular, $H_0 = L^2(\Omega)$ and $H_1 = H_0^1(\Omega)$.

Similar to [

$$\langle \phi_1, \phi_2 \rangle_{H_r,\mu} = \int_0^\infty \mu(s) \langle \phi_1(s), \phi_2(s) \rangle_{H_r}\,ds, \qquad \|\phi\|_{H_r,\mu}^2 = \int_0^\infty \mu(s) \|\phi(s)\|_{H_r}^2\,ds. \tag{2.1}$$

Define the space

$$H_\mu^1(\mathbb{R}^+; H_r) = \left\{ \phi \;:\; \phi(s), \partial_s\phi(s) \in L_\mu^2(\mathbb{R}^+; H_r) \right\}$$

with the inner product

$$\langle \phi_1, \phi_2 \rangle_{H_\mu^1(\mathbb{R}^+;H_r)} = \int_0^\infty \mu(s) \langle \phi_1(s), \phi_2(s) \rangle_{H_r}\,ds + \int_0^\infty \mu(s) \langle \partial_s\phi_1(s), \partial_s\phi_2(s) \rangle_{H_r}\,ds,$$

and the norm

$$\|\phi\|_{H_\mu^1(\mathbb{R}^+;H_r)}^2 = \|\phi\|_{L_\mu^2(\mathbb{R}^+;H_r)}^2 + \|\phi'\|_{L_\mu^2(\mathbb{R}^+;H_r)}^2.$$

We also introduce the family of Hilbert spaces $M_r = H_r \times L_\mu^2(\mathbb{R}^+; H_r)$, endowed with the norm

$$\|z\|_{M_r}^2 = \|(u,\upsilon)\|_{M_r}^2 = \frac{1}{2}\left( \|u\|_{H_r}^2 + \|\upsilon\|_{H_r,\mu}^2 \right).$$

In the remainder of this article, we write $\|\cdot\|_{H_r,\mu}^2 := \|\cdot\|_{r,\mu}^2$. See [

Let $\Omega = \{\omega \in C(\mathbb{R},\mathbb{R}) : \omega(0) = 0\}$, let $\mathcal{F}$ be the Borel $\sigma$-algebra on $\Omega$, and let $\mathbb{P}$ be the corresponding Wiener measure. Define

$$\theta_t\omega(\cdot) = \omega(\cdot + t) - \omega(t), \qquad \omega \in \Omega,\ t \in \mathbb{R}.$$

Then $\theta = (\theta_t)_{t\in\mathbb{R}}$ is a family of measurable maps, $\theta_0$ is the identity on $\Omega$, and $\theta_{t+s} = \theta_t \circ \theta_s$ for all $s,t \in \mathbb{R}$; that is, $(\Omega, \mathcal{F}, \mathbb{P}, (\theta_t)_{t\in\mathbb{R}})$ is a metric dynamical system.

Definition 2.1. $(\Omega, \mathcal{F}, \mathbb{P}, (\theta_t)_{t\in\mathbb{R}})$ is called a metric dynamical system if $\theta: \mathbb{R}\times\Omega \to \Omega$ is $(\mathcal{B}(\mathbb{R})\times\mathcal{F}, \mathcal{F})$-measurable, $\theta_0$ is the identity on $\Omega$, $\theta_{t+s} = \theta_t \circ \theta_s$ for all $s,t \in \mathbb{R}$, and $\theta_t\mathbb{P} = \mathbb{P}$ for all $t \in \mathbb{R}$.

Definition 2.2. A continuous random dynamical system (RDS) on $X$ over a metric dynamical system $(\Omega, \mathcal{F}, \mathbb{P}, (\theta_t)_{t\in\mathbb{R}})$ is a mapping

$$\phi: \mathbb{R}^+ \times \Omega \times X \to X, \qquad (t,\omega,x) \mapsto \phi(t,\omega,x),$$

which is $(\mathcal{B}(\mathbb{R}^+) \times \mathcal{F} \times \mathcal{B}(X), \mathcal{B}(X))$-measurable and satisfies, for $\mathbb{P}$-a.e. $\omega \in \Omega$:

1) $\phi(0,\omega,\cdot)$ is the identity on $X$;

2) $\phi(t+s,\omega,\cdot) = \phi(t,\theta_s\omega,\cdot) \circ \phi(s,\omega,\cdot)$ for all $t,s \in \mathbb{R}^+$;

3) $\phi(t,\omega,\cdot): X \to X$ is continuous for all $t \in \mathbb{R}^+$.

Definition 2.3. A random bounded set $B = \{B(\omega)\}_{\omega\in\Omega}$ of nonempty subsets of $X$ is called tempered with respect to $(\theta_t)_{t\in\mathbb{R}}$ if for $\mathbb{P}$-a.e. $\omega \in \Omega$ and all $\beta > 0$,

$$\lim_{|t|\to\infty} e^{-\beta|t|} d(B(\theta_t\omega)) = 0,$$

where $d(B) = \sup_{x\in B} \|x\|_X$.

Definition 2.4. Let $\mathcal{D}$ be the collection of all tempered random sets in $X$. A set $K = \{K(\omega)\}_{\omega\in\Omega} \in \mathcal{D}$ is called a random absorbing set for the RDS $\phi$ in $\mathcal{D}$ if for every $B \in \mathcal{D}$ and $\mathbb{P}$-a.e. $\omega \in \Omega$, there exists $t_B(\omega) > 0$ such that for all $t \ge t_B(\omega)$,

$$\phi(t, \theta_{-t}\omega, B(\theta_{-t}\omega)) \subseteq K(\omega).$$

Definition 2.5. Let $\mathcal{D}$ be the collection of all tempered random subsets of $X$. Then $\phi$ is said to be asymptotically compact in $X$ if for $\mathbb{P}$-a.e. $\omega \in \Omega$, the sequence $\{\phi(t_n, \theta_{-t_n}\omega, x_n)\}_{n=1}^\infty$ has a convergent subsequence in $X$ whenever $t_n \to \infty$ and $x_n \in B(\theta_{-t_n}\omega)$ with $\{B(\omega)\}_{\omega\in\Omega} \in \mathcal{D}$.

Definition 2.6. (See [

1) $A(\omega)$ is compact, and $\omega \mapsto d(x, A(\omega))$ is measurable for every $x \in X$;

2) $\{A(\omega)\}_{\omega\in\Omega}$ is invariant, that is, $\phi(t,\omega,A(\omega)) = A(\theta_t\omega)$ for all $t \ge 0$;

3) $\{A(\omega)\}_{\omega\in\Omega}$ attracts every set in $\mathcal{D}$, that is, for every

$$B = \{B(\omega)\}_{\omega\in\Omega} \in \mathcal{D}, \qquad \lim_{t\to\infty} d\left( \phi(t, \theta_{-t}\omega, B(\theta_{-t}\omega)), A(\omega) \right) = 0,$$

where d is the Hausdorff semi-metric given by

$$d(Z,Y) = \sup_{z\in Z}\inf_{y\in Y} \|z - y\|_X$$

for any $Z \subseteq X$ and $Y \subseteq X$.

Theorem 2.1. Let $\phi$ be a continuous random dynamical system with state space $X$ over $(\Omega, \mathcal{F}, \mathbb{P}, (\theta_t)_{t\in\mathbb{R}})$. If there is a closed random absorbing set $B(\omega)$ of $\phi$ and $\phi$ is asymptotically compact in $X$, then $\{A(\omega)\}_{\omega\in\Omega}$ is a random attractor of $\phi$, where

$$A(\omega) = \bigcap_{t\ge 0}\overline{\bigcup_{\tau\ge t} \phi(\tau, \theta_{-\tau}\omega)B(\theta_{-\tau}\omega)}, \qquad \omega \in \Omega.$$

Moreover, $\{A(\omega)\}_{\omega\in\Omega}$ is the unique random attractor of $\phi$.

As mentioned in [

$$\eta^t(x,s) = \int_0^s u(x,t-r)\,dr, \qquad s \ge 0. \tag{2.2}$$

Hence,

$$\eta_t^t + \eta_s^t = u, \qquad s \ge 0. \tag{2.3}$$
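Indeed, (2.3) follows by differentiating (2.2) directly:

$$\partial_s\eta^t(x,s) = u(x,t-s), \qquad \partial_t\eta^t(x,s) = \int_0^s \partial_t u(x,t-r)\,dr = -\int_0^s \partial_r\left[ u(x,t-r) \right] dr = u(x,t) - u(x,t-s),$$

so that $\eta_t^t + \eta_s^t = u(x,t)$ for all $s \ge 0$.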

Therefore, we can rewrite (1.1) as follows.

$$\left\{\begin{array}{ll}
u_t - \Delta u_t - \Delta u - \displaystyle\int_0^\infty \mu(s)\Delta\eta^t(s)\,ds + \lambda u + f(x,u) = g(x,t) + h\dot{W}, & \\
\partial_t\eta^t(x,s) = u(x,t) - \partial_s\eta^t(x,s), & \\
u(x,t) = 0, \quad \eta^t(x,s) = 0, & (x,t) \in \partial\Omega\times\mathbb{R}^+,\ t \ge 0,\\
u(x,0) = u_0(x,0), & x \in \Omega,\\
\eta^0(x,s) = \eta_0(x,s) = \displaystyle\int_0^s u_0(x,-r)\,dr, & (x,s) \in \Omega\times\mathbb{R}^+,
\end{array}\right. \tag{2.4}$$

where the initial datum $u_0$ satisfies the following condition: there exist two positive constants $C$ and $k$ such that

$$\int_0^\infty e^{-ks}\|\nabla u_0(-s)\|^2\,ds \le C. \tag{2.5}$$

Lemma 2.1. ( [

where the embeddings are compact. Let $K \subset L_\mu^2(\mathbb{R}^+; B_1)$ satisfy:

1) $K$ is bounded in $L_\mu^2(\mathbb{R}^+; B_0) \cap H_\mu^1(\mathbb{R}^+; B_2)$;

2) $\sup_{\eta\in K} \|\eta(s)\|_{B_1}^2 \le N$ a.s. for some $N \ge 0$.

Then $K$ is relatively compact in $L_\mu^2(\mathbb{R}^+; B_1)$.

In this section, we prove that the stochastic nonclassical diffusion problem (2.4) has a $\mathcal{D}$-random attractor. First, we convert system (2.4), which contains a random perturbation term and linear memory, into a deterministic system with a random parameter $\omega$. For this purpose, we introduce the Ornstein-Uhlenbeck process

$$z(t) = z(\theta_t\omega) := -\int_{-\infty}^0 e^s(\theta_t\omega)(s)\,ds, \qquad t \in \mathbb{R},$$

where $\omega(t) = W(t)$ is the one-dimensional Wiener process defined in the introduction. Furthermore, $z$ satisfies the stochastic differential equation

$$dz + z\,dt = dW(t) \quad \text{for all } t \in \mathbb{R}. \tag{3.1}$$
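For completeness, a standard integration-by-parts computation (sketched here; it is not needed in the sequel) identifies $z(\theta_t\omega)$ with the stationary solution of (3.1):

$$\int_{-\infty}^{t} e^{-(t-s)}\,d\omega(s) = \omega(t) - \int_{-\infty}^{t} e^{s-t}\omega(s)\,ds = -\int_{-\infty}^{0} e^{r}\bigl( \omega(r+t) - \omega(t) \bigr)\,dr = -\int_{-\infty}^{0} e^{r}(\theta_t\omega)(r)\,dr = z(\theta_t\omega),$$

where we substituted $r = s - t$ and used $\int_{-\infty}^0 e^r\,dr = 1$; differentiating the left-hand side in $t$ recovers $dz = -z\,dt + dW(t)$.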

It is known that there exists a $\theta_t$-invariant set $\bar{\Omega} \subseteq \Omega$ of full $\mathbb{P}$-measure such that $t \mapsto z(\theta_t\omega)$ is continuous for every $\omega \in \bar{\Omega}$, and the random variable $|z(\theta_t\omega)|$ is tempered; see, e.g., [

Let $Y(\theta_t\omega)$ denote the Ornstein-Uhlenbeck process above, so that $dY + Y\,dt = dW$, and set $\Delta z(\theta_t\omega) = hY(\theta_t\omega)$; then

$$d(\Delta z) = h\,dY = -\Delta z\,dt + h\,dW(t), \tag{3.2}$$

where $\Delta$ is the Laplacian with domain $H_0^1(\Omega)\cap H^2(\Omega)$. Using the change of variable $\upsilon(t) = u(t) - z(\theta_t\omega)$, we find that $\upsilon(t)$ satisfies the equation (which depends on the random parameter $\omega$)

$$\left\{\begin{array}{l}
\upsilon_t - \Delta\upsilon_t - \Delta\upsilon - \displaystyle\int_0^\infty \mu(s)\Delta\eta^t(s)\,ds + \lambda(\upsilon + z(\theta_t\omega)) + f(x, \upsilon + z(\theta_t\omega)) = g(x,t) + \Delta z(\theta_t\omega),\\
\eta_t^t + \eta_s^t = \upsilon + z(\theta_t\omega),\\
\upsilon(x,0) =: \upsilon_0(x) = u_0(x,0) - z(\omega), \qquad \eta^t(x,0) = 0,\\
\eta^0(x,s) = \eta_0(x,s) = \displaystyle\int_0^s \upsilon_0(x,-r)\,dr,\\
\upsilon(x,t) = 0, \quad \eta^t(x,s) = 0, \qquad (x,t) \in \partial\Omega\times\mathbb{R}^+,\ t \ge 0.
\end{array}\right. \tag{3.3}$$

By the Galerkin method as in [

Throughout this article, we always write

$$u(t,\omega,u_0) = \upsilon(t,\omega,u_0) + z(\theta_t\omega). \tag{3.4}$$

If $u$ is the solution of problem (1.1) in some sense, we can define a continuous random dynamical system

$$\Upsilon(t,\omega,u_0) = u(t,\omega,u_0) = \upsilon(t,\omega,u_0) + z(\theta_t\omega). \tag{3.5}$$

In order to prove the asymptotic compactness and the existence of the random attractor, we give the following results.

Lemma 3.1. ( [

$$\langle \eta^t, \eta_s^t \rangle_{\mu,H_r} \ge \frac{\delta}{2}\|\eta^t\|_{\mu,H_r}^2. \tag{3.6}$$
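For completeness, we recall the standard computation behind (3.6): since $\eta^t(0) = 0$ and $\mu \ge 0$, integration by parts in $s$ together with (1.3) gives

$$\langle \eta^t, \eta_s^t \rangle_{\mu,H_r} = \frac{1}{2}\int_0^\infty \mu(s)\,\partial_s\|\eta^t(s)\|_{H_r}^2\,ds = -\frac{1}{2}\int_0^\infty \mu'(s)\|\eta^t(s)\|_{H_r}^2\,ds \ge \frac{\delta}{2}\int_0^\infty \mu(s)\|\eta^t(s)\|_{H_r}^2\,ds = \frac{\delta}{2}\|\eta^t\|_{\mu,H_r}^2,$$

where the boundary terms vanish and $-\mu'(s) \ge \delta\mu(s)$ by (1.3).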

We first show that the random dynamical system ϒ has a closed random absorbing set in D , and then prove that ϒ is asymptotically compact.

Lemma 3.2. Assume that $h \in H_0^1(\Omega)\cap H^2(\Omega)$ and (1.2)-(1.8) hold. Let $B = \{B(\omega)\}_{\omega\in\Omega} \in \mathcal{D}$. Then for $\mathbb{P}$-a.e. $\omega \in \Omega$, there are a positive random function $r_1(\omega)$ and a constant $T = T(B,\omega) > 0$ such that for all $t \ge T$ and all

$$z_0 = (\upsilon_0(\theta_{-t}\omega), \eta_0(\theta_{-t}\omega)) \in B(\theta_{-t}\omega),$$

the solution of (3.3) satisfies the uniform estimate

$$\|\upsilon(t,\theta_{-t}\omega,\upsilon_0(\omega))\|^2 + \|\nabla\upsilon(t,\theta_{-t}\omega,\upsilon_0(\omega))\|^2 + \|\eta^t(t,\theta_{-t}\omega,\eta_0(\omega))\|_{1,\mu}^2 \le r_1(\omega). \tag{3.7}$$

Proof. Taking the inner product in $L^2(\Omega)$ of the first equation of (3.3) with $\upsilon$, we have

$$\frac{1}{2}\frac{d}{dt}\left( \|\upsilon\|^2 + \|\nabla\upsilon\|^2 \right) + \|\nabla\upsilon\|^2 + \lambda\|\upsilon\|^2 - \int_0^\infty \mu(s)(\Delta\eta^t(s), \upsilon)\,ds = (-f(x, \upsilon + z(\theta_t\omega)), \upsilon) + (g(x,t) + \Delta z(\theta_t\omega), \upsilon). \tag{3.8}$$

From (2.2) and (2.3), we obtain

$$\begin{aligned}
-\int_0^\infty \mu(s)(\Delta\eta^t(s), \upsilon)\,ds
&= -\int_0^\infty \mu(s)(\Delta\eta^t(s), \eta_t^t + \eta_s^t)\,ds + \int_0^\infty \mu(s)(\Delta\eta^t(s), z(\theta_t\omega))\,ds\\
&= \frac{1}{2}\frac{d}{dt}\|\eta^t\|_{1,\mu}^2 + \langle \eta^t, \eta_s^t \rangle_{1,\mu} - \int_0^\infty \mu(s)(\nabla\eta^t(s), \nabla z(\theta_t\omega))\,ds. \tag{3.9}
\end{aligned}$$

Hence, using Lemma 3.1, we can rewrite (3.8) as

$$\frac{1}{2}\frac{d}{dt}\left( \|\upsilon\|^2 + \|\nabla\upsilon\|^2 + \|\eta^t\|_{1,\mu}^2 \right) + \|\nabla\upsilon\|^2 + \lambda\|\upsilon\|^2 + \delta\|\eta^t\|_{1,\mu}^2 - \int_0^\infty \mu(s)(\nabla\eta^t(s), \nabla z(\theta_t\omega))\,ds \le (-f(x, \upsilon + z(\theta_t\omega)), \upsilon) + (g(x,t) + \Delta z(\theta_t\omega), \upsilon). \tag{3.10}$$

By the Young inequality, we get

$$\int_0^\infty \mu(s)(\nabla\eta^t(s), \nabla z(\theta_t\omega))\,ds \le \frac{\delta}{2}\|\eta^t\|_{1,\mu}^2 + \frac{1}{2\delta}\|\nabla z(\theta_t\omega)\|^2. \tag{3.11}$$

For the first term on the right-hand side of (3.8), write $f = f_1 + f_2$. First we estimate $f_1$: by (1.4)-(1.5), and using arguments similar to (4.2) in [

$$\begin{aligned}
(f_1(x, \upsilon + z(\theta_t\omega)), \upsilon) &= (f_1(x,u), u - z(\theta_t\omega)) = (f_1(x,u), u) - (f_1(x,u), z(\theta_t\omega))\\
&\ge \frac{\alpha_1}{2}\|u\|_p^p - c\left( \|z(\theta_t\omega)\|_p^p + \|z(\theta_t\omega)\|^2 \right) - c\left( \|\psi_1\|_1 + \|\psi_2\|^2 \right). \tag{3.12}
\end{aligned}$$

By using (1.6)-(1.7), we arrive at

$$(f_2(x, \upsilon + z(\theta_t\omega)), \upsilon) = (f_2(x,u), u - z(\theta_t\omega)) \ge \alpha_2\int_\Omega |u|^p\,dx - \gamma|\Omega| - \beta_2\int_\Omega |u|^{p-1}|z(\theta_t\omega)|\,dx - \delta\int_\Omega |z(\theta_t\omega)|\,dx. \tag{3.13}$$

By the Young inequality, we see that

$$\beta_2\int_\Omega |u|^{p-1}|z(\theta_t\omega)|\,dx \le \frac{\alpha_2}{2}\int_\Omega |u|^p\,dx + c\int_\Omega |z(\theta_t\omega)|^p\,dx, \tag{3.14}$$

$$\delta\int_\Omega |z(\theta_t\omega)|\,dx \le \int_\Omega |z(\theta_t\omega)|^2\,dx + \frac{\delta^2}{4}|\Omega|, \tag{3.15}$$

where $c = c(\alpha_2, \beta_2, p)$. Then it follows from (3.13)-(3.15) that

$$(f_2(x, \upsilon + z(\theta_t\omega)), \upsilon) \ge \frac{\alpha_2}{2}\|u\|_p^p - c\left( \|z(\theta_t\omega)\|_p^p + \|z(\theta_t\omega)\|^2 \right) - c. \tag{3.16}$$

On the other hand, we have

$$(g, \upsilon) \le \frac{\lambda}{2}\|\upsilon\|^2 + \frac{1}{2\lambda}\|g(t)\|^2. \tag{3.17}$$

For the last term of (3.8), we obtain

$$(\Delta z(\theta_t\omega), \upsilon) \le \frac{1}{2}\|\nabla z(\theta_t\omega)\|^2 + \frac{1}{2}\|\nabla\upsilon\|^2. \tag{3.18}$$

Substituting (3.11), (3.12) and (3.16)-(3.18) into (3.10), we conclude that

$$\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\left( \|\upsilon\|^2 + \|\nabla\upsilon\|^2 + \|\eta^t\|_{1,\mu}^2 \right) + \frac{1}{2}\|\nabla\upsilon\|^2 + \frac{\lambda}{2}\|\upsilon\|^2 + \delta\|\eta^t\|_{1,\mu}^2 + \frac{\alpha_1}{2}\|u\|_p^p + \frac{\alpha_2}{2}\|u\|_p^p\\
&\quad\le \frac{\delta}{2}\|\eta^t\|_{1,\mu}^2 + \frac{1}{2\delta}\|\nabla z(\theta_t\omega)\|^2 + \frac{1}{2\lambda}\|g(t)\|^2 + \frac{1}{2}\|\nabla z(\theta_t\omega)\|^2 + 2c\left( \|z(\theta_t\omega)\|_p^p + \|z(\theta_t\omega)\|^2 \right) + c\left( \|\psi_1\|_1 + \|\psi_2\|^2 \right) + c,
\end{aligned}$$

and hence

$$\frac{1}{2}\frac{d}{dt}\left( \|\upsilon\|^2 + \|\nabla\upsilon\|^2 + \|\eta^t\|_{1,\mu}^2 \right) + \frac{1}{2}\|\nabla\upsilon\|^2 + \frac{\lambda}{2}\|\upsilon\|^2 + \frac{\delta}{2}\|\eta^t\|_{1,\mu}^2 + \frac{\alpha_1 + \alpha_2}{2}\|u\|_p^p \le \frac{1}{2\lambda}\|g(t)\|^2 + C\left( \|z(\theta_t\omega)\|_p^p + \|z(\theta_t\omega)\|^2 + \|\nabla z(\theta_t\omega)\|^2 \right) + C. \tag{3.19}$$

Furthermore, let

$$2\sigma = \min\{1, \lambda, \delta\}. \tag{3.20}$$

Then (3.19)-(3.20) imply

$$\frac{d}{dt}\left( \|\upsilon\|^2 + \|\nabla\upsilon\|^2 + \|\eta^t\|_{1,\mu}^2 \right) + 2\sigma\left( \|\upsilon\|^2 + \|\nabla\upsilon\|^2 + \|\eta^t\|_{1,\mu}^2 \right) \le C\left( 1 + |Y(\theta_t\omega)|^2 + |Y(\theta_t\omega)|^p \right) + \frac{1}{\lambda}\|g(t)\|^2, \tag{3.21}$$

where $Y(\theta_t\omega)$ is the scalar Ornstein-Uhlenbeck process with $\Delta z(\theta_t\omega) = hY(\theta_t\omega)$.

According to Gronwall's lemma, we obtain

$$\begin{aligned}
&\|\upsilon(t,\omega,\upsilon_0(\omega))\|^2 + \|\nabla\upsilon(t,\omega,\upsilon_0(\omega))\|^2 + \|\eta^t(t,\omega,\eta_0(\omega))\|_{1,\mu}^2\\
&\quad\le e^{-2\sigma t}\left( \|\upsilon_0(\omega)\|^2 + \|\nabla\upsilon_0(\omega)\|^2 + \|\eta_0(\omega)\|_{1,\mu}^2 \right) + C\int_0^t e^{-2\sigma(t-s)}\left( 1 + |Y(\theta_s\omega)|^2 + |Y(\theta_s\omega)|^p \right)ds + \frac{1}{\lambda}\int_0^t e^{-2\sigma(t-s)}\|g(s)\|^2\,ds. \tag{3.22}
\end{aligned}$$

Replacing $\omega$ by $\theta_{-t}\omega$ in (3.22), we obtain

$$\begin{aligned}
&\|\upsilon(t,\theta_{-t}\omega,\upsilon_0(\theta_{-t}\omega))\|^2 + \|\nabla\upsilon(t,\theta_{-t}\omega,\upsilon_0(\theta_{-t}\omega))\|^2 + \|\eta^t(t,\theta_{-t}\omega,\eta_0(\theta_{-t}\omega))\|_{1,\mu}^2\\
&\quad\le e^{-2\sigma t}\left( \|\upsilon_0(\theta_{-t}\omega)\|^2 + \|\nabla\upsilon_0(\theta_{-t}\omega)\|^2 + \|\eta_0(\theta_{-t}\omega)\|_{1,\mu}^2 \right) + C\int_{-t}^0 e^{2\sigma r}\left( 1 + |Y(\theta_r\omega)|^2 + |Y(\theta_r\omega)|^p \right)dr + \frac{1}{\lambda}\int_{-t}^0 e^{2\sigma r}\|g(r)\|^2\,dr. \tag{3.23}
\end{aligned}$$

Recalling that $z_0 = (\upsilon_0(\theta_{-t}\omega), \eta_0(\theta_{-t}\omega)) \in B(\theta_{-t}\omega)$ and that $B$ is tempered, we have

$$\lim_{t\to+\infty} e^{-2\sigma t}\left( \|\upsilon_0(\theta_{-t}\omega)\|^2 + \|\nabla\upsilon_0(\theta_{-t}\omega)\|^2 + \|\eta_0(\theta_{-t}\omega)\|_{1,\mu}^2 \right) = 0. \tag{3.24}$$

Note that $|Y(\theta_s\omega)|$ is tempered and $z(\theta_t\omega) = \Delta^{-1}hY(\theta_t\omega)$ with $h \in H_0^1(\Omega)\cap H^2(\Omega)$, so we can choose

$$r_1(\omega) = 2C\int_{-\infty}^0 e^{2\sigma r}\left( 1 + |Y(\theta_r\omega)|^2 + |Y(\theta_r\omega)|^p \right)dr + \frac{1}{\lambda}\int_{-\infty}^0 e^{2\sigma r}\|g(r)\|^2\,dr. \tag{3.25}$$

Then $r_1(\omega)$ is tempered since $|Y(\theta_s\omega)|$ has at most linear growth at infinity. The proof is complete.
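As an aside, the scalar Gronwall step behind (3.21)-(3.22) can be checked numerically. The following sketch is purely illustrative and not part of the argument: the forcing `G`, the nonnegative `slack` term and all numerical values are our own hypothetical choices; it integrates a toy differential inequality with Euler's method and verifies the Gronwall bound along the trajectory.

```python
import math

# Toy check: if a scalar quantity E satisfies E'(t) + 2*sigma*E(t) <= G(t),
# Gronwall's lemma gives
#   E(t) <= e^{-2 sigma t} E(0) + int_0^t e^{-2 sigma (t-s)} G(s) ds.
def gronwall_demo(sigma=0.5, e0=2.0, T=5.0, dt=1e-4):
    G = lambda t: 1.0 + 0.5 * math.sin(t) ** 2          # sample forcing
    slack = lambda t: 0.1 * (1.0 + math.cos(t) ** 2)    # >= 0, enforces the inequality strictly
    E = e0
    I = 0.0  # I(t) = int_0^t e^{-2 sigma (t-s)} G(s) ds solves I' = -2 sigma I + G, I(0) = 0
    t = 0.0
    for _ in range(int(T / dt)):
        E += dt * (-2.0 * sigma * E + G(t) - slack(t))  # Euler step for E' = -2 sigma E + G - slack
        I += dt * (-2.0 * sigma * I + G(t))             # Euler step for the Gronwall integral
        t += dt
        if E > math.exp(-2.0 * sigma * t) * e0 + I + 1e-9:
            return False  # Gronwall bound violated (should never happen)
    return True
```

Calling `gronwall_demo()` returns `True`: the Euler trajectory of the dissipative toy inequality never exceeds the Gronwall bound.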

To prove the asymptotic compactness of the solutions, we decompose the solution $x(t) = (u(t), \eta^t)$ of (3.3) as follows [

$$x(t) = x_1(t) + x_2(t), \qquad u(t) = u_1(t) + u_2(t), \qquad \eta^t = \eta_1^t + \eta_2^t,$$

where $x_1(t) = (u_1(t), \eta_1^t)$ and $x_2(t) = (u_2(t), \eta_2^t)$ satisfy the following problems, respectively:

$$\left\{\begin{array}{l}
u_{1t} - \Delta u_{1t} - \Delta u_1 - \displaystyle\int_0^\infty \mu(s)\Delta\eta_1^t(s)\,ds + \lambda u_1 + f_1(x,u_1) = g(x,t) - g_1(x,t) + (h - h_1)\dot{W},\\
\partial_t\eta_1^t(x,s) = u_1(x,t) - \partial_s\eta_1^t(x,s),\\
u_1(x,t) = 0, \quad \eta_1^t(x,s) = 0, \qquad (x,t) \in \partial\Omega\times\mathbb{R}^+,\ t \ge 0,\\
u_1(x,0) = u_0(x,0) - z_1(\omega), \qquad x \in \Omega,\\
\eta_1^0(x,s) = \eta_0(x,s), \qquad (x,s) \in \Omega\times\mathbb{R}^+,
\end{array}\right. \tag{3.26}$$

and

$$\left\{\begin{array}{l}
u_{2t} - \Delta u_{2t} - \Delta u_2 - \displaystyle\int_0^\infty \mu(s)\Delta\eta_2^t(s)\,ds + \lambda u_2 + f(x,u) - f_1(x,u_1) = g_1(x,t) + h_1\dot{W},\\
\partial_t\eta_2^t(x,s) = u_2(x,t) - \partial_s\eta_2^t(x,s),\\
u_2(x,t) = 0, \quad \eta_2^t(x,s) = 0, \qquad (x,t) \in \partial\Omega\times\mathbb{R}^+,\ t \ge 0,\\
u_2(x,0) = z_1(\omega), \qquad x \in \Omega,\\
\eta_2^0(x,s) = 0, \qquad (x,s) \in \Omega\times\mathbb{R}^+,
\end{array}\right. \tag{3.27}$$

where the nonlinearity $f = f_1 + f_2$ satisfies (1.4)-(1.7), the terms $h, h_1 \in H_0^1(\Omega)\cap H^2(\Omega)$, and the forcing terms satisfy (1.8) and $g_1(x,t) \in L_b^2(\mathbb{R}; L^2(\Omega))$; moreover, for any $\epsilon > 0$, $g_1$ and $h_1$ can be chosen such that

$$\|g - g_1\| < \epsilon, \qquad \|h - h_1\|_{H^2(\Omega)} < \epsilon. \tag{3.28}$$

Set $\Delta z_1(\theta_t\omega) = h_1 Y(\theta_t\omega)$; then

$$d(\Delta z_1) = h_1\,dY = -\Delta z_1\,dt + h_1\,dW. \tag{3.29}$$

Let $\upsilon_1(t,\omega) = u_1(t,\omega) - z(\theta_t\omega) + z_1(\theta_t\omega)$, where $u_1(t,\omega)$ satisfies (3.26), and $\upsilon_2(t,\omega) = u_2(t,\omega) - z_1(\theta_t\omega)$, where $u_2(t,\omega)$ is the solution of (3.27). Then $\upsilon_1(t,\omega)$ and $\upsilon_2(t,\omega)$ satisfy

$$\left\{\begin{array}{l}
\upsilon_{1t} - \Delta\upsilon_{1t} - \Delta\upsilon_1 - \displaystyle\int_0^\infty \mu(s)\Delta\eta_1^t(s)\,ds + \lambda(\upsilon_1 + z(\theta_t\omega)) + f_1(x, \upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega))\\
\qquad = g(x,t) - g_1(x,t) + \Delta z(\theta_t\omega) - z_1(\theta_t\omega),\\
\partial_t\eta_1^t(x,s) = \upsilon_1 - z_1(\theta_t\omega) - \partial_s\eta_1^t(x,s),\\
\upsilon_1(x,0) := \upsilon_{10}(x,0) = u_0(x,0) - z(\omega) + z_1(\omega), \qquad \eta_1^t(x,0) := \eta_{10}^t = 0,\\
\eta_1^0(x,s) = \eta_{10}(x,s) = \displaystyle\int_0^s u_0(x,-r)\,dr,\\
\upsilon_1(x,t) = 0, \quad \eta_1^t(x,s) = 0, \qquad (x,t) \in \partial\Omega\times\mathbb{R}^+,\ t \ge 0,
\end{array}\right. \tag{3.30}$$

and

$$\left\{\begin{array}{l}
\upsilon_{2t} - \Delta\upsilon_{2t} - \Delta\upsilon_2 - \displaystyle\int_0^\infty \mu(s)\Delta\eta_2^t(s)\,ds + \lambda(\upsilon_2 + z(\theta_t\omega)) + f(x, \upsilon + z(\theta_t\omega)) - f_1(x, \upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega))\\
\qquad = g_1(x,t) + \Delta z(\theta_t\omega) + z_1(\theta_t\omega),\\
\partial_t\eta_2^t(x,s) = \upsilon_2 + z_1(\theta_t\omega) - \partial_s\eta_2^t(x,s),\\
\upsilon_2(x,0) := \upsilon_{20}(x,0) = -z_1(\omega), \qquad \eta_2^t(x,0) := \eta_{20}^t = 0, \qquad \eta_2^0(x,s) = \eta_{20}(x,s) = 0,\\
\upsilon_2(x,t) = 0, \quad \eta_2^t(x,s) = 0, \qquad (x,t) \in \partial\Omega\times\mathbb{R}^+,\ t \ge 0.
\end{array}\right. \tag{3.31}$$

As for problem (3.3), we have the corresponding existence and uniqueness of solutions for (3.30) and (3.31). For convenience, we denote the solution operators of (3.30) and (3.31) by $\{S_1(t)\}_{t\ge 0}$ and $\{S_2(t)\}_{t\ge 0}$, respectively. Then, for every $z_0 \in M_1$, we have

$$z(t,\omega) = S(t)z_0 = S_1(t)z_0 + S_2(t)z_0, \qquad \forall t \ge 0.$$

Next, we give some lemmas needed to prove the asymptotic compactness.

Lemma 3.3. Assume that the conditions on $f, f_1, f_2, g, g_1$ hold. Let $B = \{B(\omega)\}_{\omega\in\Omega} \in \mathcal{D}$. Then for $\mathbb{P}$-a.e. $\omega \in \Omega$ and every $\epsilon > 0$, there is a constant $T_2 = T_2(B,\omega) > 0$ such that if

$$z_{10} = (\upsilon_{10}(\theta_{-t}\omega), \eta_{10}(\theta_{-t}\omega)) \in B(\theta_{-t}\omega),$$

then for all $t \ge T_2$ the solution of (3.30) satisfies the uniform estimate

$$\|S_1(t)z_{10}(\omega)\|_{M_1}^2 \le e^{-2\sigma t}\|z_{10}\|^2 + \epsilon r_1(\omega), \tag{3.32}$$

where the positive random function r 1 ( ω ) is defined in Lemma 3.2.

Proof. In (3.10), we replace $f$, $g$ and $z(\theta_t\omega)$ by $f_1$, $g - g_1$ and $z(\theta_t\omega) - z_1(\theta_t\omega)$, respectively. Arguing as in the proof of Lemma 3.2, we compute

$$\begin{aligned}
&\|\upsilon_1(t,\theta_{-t}\omega,\upsilon_{10}(\theta_{-t}\omega))\|^2 + \|\nabla\upsilon_1(t,\theta_{-t}\omega,\upsilon_{10}(\theta_{-t}\omega))\|^2 + \|\eta_1^t(t,\theta_{-t}\omega,\eta_{10}(\theta_{-t}\omega))\|_{1,\mu}^2\\
&\quad\le e^{-2\sigma t}\left( \|\upsilon_{10}(\theta_{-t}\omega)\|^2 + \|\nabla\upsilon_{10}(\theta_{-t}\omega)\|^2 + \|\eta_{10}(\theta_{-t}\omega)\|_{1,\mu}^2 \right) + C\int_{-t}^0 e^{2\sigma r}\left( 1 + |Y(\theta_r\omega)|^2 + |Y(\theta_r\omega)|^p \right)dr + \frac{1}{\lambda}\int_{-t}^0 e^{2\sigma r}\|g(r)\|^2\,dr. \tag{3.33}
\end{aligned}$$

Since $z_{10} = (\upsilon_{10}(\theta_{-t}\omega), \eta_{10}(\theta_{-t}\omega)) \in B(\theta_{-t}\omega)$ and $|Y(\theta_s\omega)|$ is tempered, we can choose $T_2 > 0$ such that (3.32) holds for all $t > T_2$.

Lemma 3.4. Assume that the conditions on $f, f_1, f_2, g, g_1, h, h_1$ hold. Let $B = \{B(\omega)\}_{\omega\in\Omega} \in \mathcal{D}$. Then for $\mathbb{P}$-a.e. $\omega \in \Omega$, there is a positive random function $r_2(\omega)$ such that if

$$z_{10} = (\upsilon_0(\theta_{-t}\omega), \eta_0(\theta_{-t}\omega)) \in B(\theta_{-t}\omega),$$

then for every given $T \ge 0$, the solution of (3.31) satisfies the uniform estimate

$$\|S_2(T, z_0(\omega))\|_{M_{1+l}}^2 \le r_2(\omega), \tag{3.34}$$

where $l = \min\left\{ 1, \frac{2n - p(n-2)}{2} \right\}$.

Proof. Multiplying (3.31) by $A^l\upsilon_2$ and integrating over $\Omega$, we get

$$\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\left( \|A^{\frac{l}{2}}\upsilon_2\|^2 + \|A^{\frac{l+1}{2}}\upsilon_2\|^2 \right) + \lambda\|A^{\frac{l}{2}}\upsilon_2\|^2 + \|A^{\frac{l+1}{2}}\upsilon_2\|^2 - \int_0^\infty \mu(s)(\Delta\eta_2^t(s), A^l\upsilon_2)\,ds\\
&\quad + (f(x, \upsilon + z(\theta_t\omega)), A^l\upsilon_2) - (f_1(x, \upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega)), A^l\upsilon_2) = (g_1(x,t) + \Delta z(\theta_t\omega) + z_1(\theta_t\omega), A^l\upsilon_2). \tag{3.35}
\end{aligned}$$

From (2.2) and (3.31), we obtain

$$\begin{aligned}
-\int_0^\infty \mu(s)(\Delta\eta_2^t(s), A^l\upsilon_2)\,ds
&= -\int_0^\infty \mu(s)\left( \Delta\eta_2^t(s), A^l\left( \eta_{2t}^t(s) + \eta_{2s}^t(s) - z_1(\theta_t\omega) \right) \right)ds\\
&= \frac{1}{2}\frac{d}{dt}\|\eta_2^t\|_{1+l,\mu}^2 + \langle \eta_2^t, \eta_{2s}^t \rangle_{1+l,\mu} + \int_0^\infty \mu(s)(\nabla\eta_2^t(s), A^l z_1(\theta_t\omega))\,ds, \tag{3.37}
\end{aligned}$$

hence

$$\left| \int_0^\infty \mu(s)(\nabla\eta_2^t(s), A^l z_1(\theta_t\omega))\,ds \right| \le \varepsilon\|\eta_2^t\|_{1+l,\mu}^2 + C\|A^{\frac{l+1}{2}}z_1(\theta_t\omega)\|^2,$$

and

$$-\langle \eta_2^t, \eta_{2s}^t \rangle_{1+l,\mu} \le -\frac{\delta}{2}\|\eta_2^t\|_{1+l,\mu}^2.$$

By the conditions on $f, f_1, f_2$, (1.7) and the mean value theorem, we have

$$(f(x, \upsilon + z(\theta_t\omega)), A^l\upsilon_2) - (f_1(x, \upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega)), A^l\upsilon_2) = (f_2(x, \upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega)), A^l\upsilon_2) \le \beta_2\int_\Omega |\upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega)|^{p-1}|A^l\upsilon_2|\,dx + \delta|\Omega|. \tag{3.38}$$

Using the embedding theorem, we have

$$\begin{aligned}
\beta_2\int_\Omega |\upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega)|^{p-1}|A^l\upsilon_2|\,dx + \delta|\Omega|
&\le C\|\upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega)\|_{L^{\frac{2n(p-1)}{n+2-2l}}}^{p-1}\|A^l\upsilon_2\|_{L^{\frac{2n}{n+2-2l}}} + \delta|\Omega|\\
&\le C\|\nabla(\upsilon_1 + z(\theta_t\omega) - z_1(\theta_t\omega))\|^{p-1}\|A^{\frac{1+l}{2}}\upsilon_2\| + \delta|\Omega|, \tag{3.39}
\end{aligned}$$

where we have used the inequality $\frac{(n-2)(p-1)}{n+2-2l} \le 1$, which holds since $l \le \frac{2n - p(n-2)}{2}$ and $p \le \frac{2n}{n-2}$, together with the embeddings

$$H_1 = D(A^{\frac{1}{2}}) \hookrightarrow L^{\frac{2n}{n-2}}, \qquad H_{1+l} = D(A^{\frac{1+l}{2}}) \hookrightarrow L^{\frac{2n}{n-2(1+l)}}, \qquad H_{1-l} = D(A^{\frac{1-l}{2}}) \hookrightarrow L^{\frac{2n}{n-2(1-l)}}.$$
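The exponent bookkeeping behind the choice of $l$ can be verified mechanically. The following sketch is illustrative only (the function names are ours, not the paper's); it checks in exact rational arithmetic that $l = \min\{1, (2n - p(n-2))/2\}$ always yields $(n-2)(p-1) \le n + 2 - 2l$ for $2 \le p \le \frac{2n}{n-2}$, $n \ge 3$:

```python
from fractions import Fraction

# Illustrative sanity check of the exponent condition used for (3.39):
# with l = min{1, (2n - p(n-2))/2}, one has (n-2)(p-1) <= n + 2 - 2l
# for all 2 <= p <= 2n/(n-2) and n >= 3.
def l_exponent(n, p):
    """The regularity gain l = min{1, (2n - p(n-2))/2} from Lemma 3.4."""
    return min(Fraction(1), (2 * n - p * (n - 2)) / Fraction(2))

def embedding_condition_holds(n, p):
    """Check (n-2)(p-1) <= n + 2 - 2l in exact rational arithmetic."""
    l = l_exponent(n, p)
    return (n - 2) * (p - 1) <= n + 2 - 2 * l
```

For instance, $n = 3$, $p = 5$ gives $l = \frac{1}{2}$ and $(n-2)(p-1) = 4 = n + 2 - 2l$, so the condition is sharp at the subcritical endpoint.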

Note that

$$(g_1(x,t) + \Delta z(\theta_t\omega) + z_1(\theta_t\omega), A^l\upsilon_2) \le C_\varepsilon\left( \|g_1(x,t)\|^2 + \|\Delta z(\theta_t\omega)\|^2 + \|z_1(\theta_t\omega)\|^2 \right) + \varepsilon\|A^{\frac{1+l}{2}}\upsilon_2\|^2. \tag{3.40}$$

Thanks to Lemma 3.1, the properties of the solutions of (3.3) and (3.26), and (3.36)-(3.40), we conclude that

$$\frac{d}{dt}\left( \|A^{\frac{l}{2}}\upsilon_2\|^2 + \|A^{\frac{l+1}{2}}\upsilon_2\|^2 + \|\eta_2^t\|_{1+l,\mu}^2 \right) + 2\beta\left( \|A^{\frac{l}{2}}\upsilon_2\|^2 + \|A^{\frac{l+1}{2}}\upsilon_2\|^2 + \|\eta_2^t\|_{1+l,\mu}^2 \right) \le C\left( 1 + \|\Delta z(\theta_t\omega)\|^2 + \|z_1(\theta_t\omega)\|^2 + \|g_1(t)\|^2 \right), \tag{3.42}$$

where $2\beta = \min\{2, 2\lambda, \delta - 2\varepsilon\}$, with $\varepsilon > 0$ chosen small enough that $\delta - 2\varepsilon > 0$.

By Gronwall's lemma,

$$\begin{aligned}
\|z_2(t,\omega)\|_{M_{1+l}}^2 &\le \|A^{\frac{l}{2}}\upsilon_2\|^2 + \|A^{\frac{l+1}{2}}\upsilon_2\|^2 + \|\eta_2^t\|_{1+l,\mu}^2\\
&\le e^{-2\beta t}\left( \|A^{\frac{l}{2}}\upsilon_{20}(\omega)\|^2 + \|A^{\frac{l+1}{2}}\upsilon_{20}(\omega)\|^2 + \|\eta_{20}^t(\omega)\|_{1+l,\mu}^2 \right) + C\int_0^t e^{-2\beta(t-s)}\left( 1 + |Y(\theta_s\omega)|^2 \right)ds + C\int_0^t e^{-2\beta(t-s)}\|g_1(s)\|^2\,ds\\
&\le C\int_0^t e^{-2\beta(t-s)}\left( 1 + |Y(\theta_s\omega)|^2 \right)ds + C\int_0^t e^{-2\beta(t-s)}\|g_1(s)\|^2\,ds. \tag{3.43}
\end{aligned}$$

Thus, for every given $T > 0$, we get

$$\|S_2(T, z_0(\omega))\|_{M_{1+l}}^2 \le r_2(\omega), \tag{3.44}$$

where $r_2(\omega) = C\int_0^T e^{-2\beta(T-s)}\left( 1 + |Y(\theta_s\omega)|^2 \right)ds + C\int_0^T e^{-2\beta(T-s)}\|g_1(s)\|^2\,ds$ is a random function.

The proof is complete.

Since $\eta^t(x,s) = \int_0^s u(x,t-r)\,dr$, $s \ge 0$, and by (3.31), it follows that

$$\eta_2^t(x,s) = \begin{cases} \displaystyle\int_0^s u_2(t-r)\,dr, & 0 < s \le t,\\[2mm] \displaystyle\int_0^t u_2(t-r)\,dr, & s > t; \end{cases} \tag{3.45}$$

for more information on η t ( x , s ) , see [

Lemma 3.5. Let $\Pi: H_1 \times L_\mu^2(\mathbb{R}^+, H_1) \to L_\mu^2(\mathbb{R}^+, H_1)$ be the projection operator, and set $\Gamma_2^T := \Pi S_2(T, B_0(\omega))$, where $B_0(\omega)$ is the random bounded absorbing set from Lemma 3.4 and $S_2(T,\cdot)$ is the solution operator of (3.31). Under the assumptions of Lemma 3.4, there is a positive random function $r_3(\omega)$, depending on $T$, such that

1) $\Gamma_2^T$ is bounded in $L_\mu^2(\mathbb{R}^+, H_{1+l}) \cap L_\mu^2(\mathbb{R}^+, H_1)$;

2) $\sup_{\eta\in\Gamma_2^T} \|\eta(s)\|_{H_1}^2 \le r_3(\omega)$.

Proof. This follows from the random translation, (3.44) and Lemma 3.4.

Therefore, Lemma 2.1 implies that $\Gamma_2^T$ is relatively compact in $L_\mu^2(\mathbb{R}^+, H_1)$. Using the compact embedding $H_{1+l} \hookrightarrow H_1$, we obtain:

Lemma 3.6. Let $S_2(t,\cdot)$ be the solution operator of (3.31), and let the assumptions of Lemmas 3.4 and 3.5 hold. Then for any $T > 0$, $S_2(T, B_0(\omega))$ is relatively compact in $M_1$.

We are now in a position to prove the existence of a random attractor for the stochastic nonclassical diffusion equation with linear memory and additive white noise.

Theorem 3.1. Let $\{S(t)\}_{t\ge 0}$ be the solution operator of equation (3.3), and let the conditions of Lemma 3.6 hold. Then the random dynamical system $\Upsilon$ has a unique random attractor in $M_1$.

Proof. Notice that $\Upsilon$ has a closed random absorbing set $B = \{B(\omega)\}_{\omega\in\Omega} \in \mathcal{D}$ by Lemma 3.2, and that $\Upsilon$ is asymptotically compact in $M_1$ by Lemmas 3.3 and 3.6. Hence the existence of a unique $\mathcal{D}$-random attractor follows immediately from Theorem 2.1.

This work was supported by the NSFC (11561064), and NWNU-LKQN-14-6.

The authors declare no conflicts of interest regarding the publication of this paper.

Mohamed, A.E., Ma, Q.Z. and Bakhet, M.Y.A. (2018) Random Attractors of Stochastic Non-Autonomous Nonclassical Diffusion Equations with Linear Memory on a Bounded Domain. Applied Mathematics, 9, 1299-1314. https://doi.org/10.4236/am.2018.911085