Applied Mathematics
Vol.08 No.12(2017), Article ID:81519,20 pages
10.4236/am.2017.812134

Variation of Parameters for Causal Operator Differential Equations

Reza R. Ahangar

Mathematics Department, Texas A & M University Kingsville, Kingsville, USA

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: November 7, 2017; Accepted: December 26, 2017; Published: December 29, 2017

ABSTRACT

The operator T from a domain D into the space of measurable functions is called a nonanticipating (causal) operator if the past information is independent of the future outputs. We study the solution x(t) of a nonlinear operator differential equation in which the rate of change depends on a causal operator T, a semigroup of operators generated by A(t), and the initial parameters $(t_0, x_0)$. The initial information is described by $x(t) = \phi(t)$ for almost all $t \le t_0$ and $\phi(t_0) = \phi_0$. We study the nonlinear variation of parameters (NVP) for this type of nonanticipating operator differential equation and develop an Alekseev type NVP formula.

Keywords:

Nonlinear Operator Differential Equations (NODE), Variation of Parameters, Nonanticipating (Causal), Alekseev Theorem

1. Definitions and Example of Nonanticipative Operators

An important feature of ordinary differential equations is that the future behavior of solutions depends only upon the present (initial) values of the solution. There are many physical and social phenomena which have hereditary dependence. That means the future state of the system depends not only upon the present state, but also upon past information (see [1] - [6] ).

Twins share all of their genetic history up to the time of conception and may follow different paths in their future lives. We are going to study phenomena that can be formulated by the principle that the "present" events are independent of the "future". These kinds of events are called nonanticipating, or causal, events.

Definition 1.1: Continuous Nonanticipating System

A mapping T from the space of functions Y into itself is said to be a nonanticipating mapping if, for every fixed s in the real line $\mathbb{R}$, $(Tx)(t) = (Ty)(t)$ for all $t < s$ whenever $x(t) = y(t)$ for all $t < s$.

Example 1.1: All of the delay operators and integral operators are nonanticipating. All compositions or Cartesian products of the nonanticipating operators are nonanticipating.
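As an illustrative sketch (the grid, the delay length, and the test signals below are assumptions, not taken from the text), one can discretize a delay operator and an integral operator in Python and check the nonanticipating property numerically: inputs that agree before a time s produce outputs that agree before s.

```python
import numpy as np

# Uniform grid on [0, 10]; the grid, delay, and signals are illustrative.
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]

def delay_op(x, r=1.0):
    """Delay operator (T1 x)(t) = x(t - r), with zero history before t = 0."""
    shift = int(round(r / dt))
    out = np.zeros_like(x)
    out[shift:] = x[:len(x) - shift]
    return out

def integral_op(x):
    """Integral operator (T2 x)(t) = integral of x from 0 to t (left Riemann sums)."""
    return np.concatenate(([0.0], np.cumsum(x[:-1]) * dt))

# Two inputs that coincide for t < s but differ afterwards.
s = 4.0
x = np.sin(t)
y = np.where(t < s, np.sin(t), np.cos(3 * t))

for T in (delay_op, integral_op):
    past = t < s
    # Causality: the outputs agree wherever the inputs agreed.
    assert np.allclose(T(x)[past], T(y)[past])
print("delay and integral operators act causally on the grid")
```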

Example 1.2: Let $I = [t_0, a] \subset \mathbb{R}$ be a compact subset of the real line and f a function from the interval $I \times Y$ into Y. The knowledge of the state of the system

$$y'(t) = f(t, y(t)) \quad (1.1)$$

at a given initial point $(t_0, y_0)$ is sufficient to determine its state at any future time. This system has no after-effect, or "no memory".

Example 1.3: In a dynamic system

$$y'(t) = f(t, y(t), T(y)(t)) \quad (1.2)$$

where T is a nonanticipating operator, we need information about the initial function $y(t) = \phi(t)$ for $t < 0$ in order to determine the state curve y(t) of the solution.

The following discrete-time formulation of causal operators follows Naylor and Sell 1982 [7].

Definition 1.2: A mapping T of Y into itself is said to be causal if, for each integer N, whenever two inputs $x = \{x_n\}$ and $y = \{y_n\}$ satisfy $x_n = y_n$ for $n \le N$, it follows that

$$T(x_n) = T(y_n), \quad \text{for } n \le N,$$

where

$$T(x) = \{\ldots, T(x_{-1}), T(x_0), T(x_1), T(x_2), \ldots\}$$

and

$$T(y) = \{\ldots, T(y_{-1}), T(y_0), T(y_1), T(y_2), \ldots\}.$$

In other words, if the inputs x and y agree up to some time N, then the outputs T(x) and T(y) agree up to time N. In particular, T(x) and T(y) agree up to time N no matter what the inputs x and y are in the future beyond N. The events in the past and present are independent from the future.

Example 1.4: Consider $Z = L^2(\mathbb{R})$, and let T be a mapping of Z into itself represented by a convolution integral of the form

$$(Tx)(t) = \int_{-\infty}^{\infty} h(t - u)\, x(u)\, du.$$

This is a nonanticipating mapping if and only if $h(\tau) = 0$ for almost all $\tau < 0$. This Volterra integral mapping shows that $(Tx)(s)$ is independent of the values $x(t)$ for $t > s$.

Notice that when a mapping is not nonanticipating it is called an anticipating mapping, meaning that the past and present outputs may depend on the future inputs.

Anticipating (Anticausal) Mapping: This is a mapping whose future output is independent of the past input; that is, the mapping $T : Z \to Z$ is said to be anticausal (anticipating) if, for each fixed s in $I = [0, a]$, $(Tx)(t) = (Ty)(t)$ for $t > s$ whenever $x(t) = y(t)$ for $t > s$.

Example 1.5: Let T be an operator from the space of square-summable functions $L^2(\mathbb{R})$ into $L^2(\mathbb{R})$. We can show that the following mapping is anticausal:

$$y(t) = \int_{t}^{\infty} e^{t-u}\, x(u)\, du.$$

For a fixed real number s, the fact that $x_1(t) = x_2(t)$ for $t > s$ implies $y_1(t) = y_2(t)$ for $t > s$; the future output $\{y(t) : t > s\}$ is determined by the future input alone. Therefore, this is an anticausal operator.
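A companion sketch, under the same kind of illustrative assumptions (a truncated grid standing in for the half-line and arbitrary test inputs), checks the anticausal property of this forward integral numerically: inputs that agree for t > s produce outputs that agree for t > s.

```python
import numpy as np

# Truncate the half-line to [0, 20]; grid and test signals are illustrative.
t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]

def anticausal_op(x):
    """(Tx)(t) = integral from t to 20 of e^(t-u) x(u) du (right Riemann sums)."""
    out = np.empty_like(x)
    for i in range(len(t)):
        out[i] = np.sum(np.exp(t[i] - t[i:]) * x[i:]) * dt
    return out

s = 8.0
x1 = np.sin(t)
x2 = np.where(t > s, np.sin(t), 5.0 + t)     # the two inputs agree only for t > s

future = t > s
# Anticausality: the future outputs coincide because the operator never reads the past.
assert np.allclose(anticausal_op(x1)[future], anticausal_op(x2)[future])
print("the forward integral operator is anticausal on the grid")
```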

2. Nonanticipating Operator Differential Equation

Notations. Let $S = \{t \in \mathbb{R} : t \le t_0\}$ be the interval of past times, let I be the compact interval $[t_0, a]$, and define $J = S \cup I$. Assume Y, Z, and U are Banach spaces. Let $M(I, Y)$ be the space of all essentially bounded Bochner measurable functions, with respect to classical Lebesgue measure, from the interval I into the Banach space Y. Denote by $L(J, Y)$ the space of all Lipschitzian functions y from J into Y that are strongly differentiable almost everywhere.

Let φ be a fixed initial function from the space $L(S, Y)$. Denote by $D(\phi, Y)$ the subset of the Lip-space $L(J, Y)$ consisting of all functions y such that $y(t) = \phi(t)$ for all t in S.

According to these two definitions, $D(\phi, Y) \subset L(J, Y)$.

For any Banach spaces Y and Z, let $Lip(I; Y, Z)$ denote the space of all functions $f(t, y)$ from the product $I \times Y$ into Z that are Lipschitzian in y and such that, for every fixed y, the function $f(\cdot, y)$ belongs to the space $M(I, Z)$. This space is called the Lip-space.

We apply the definition of nonanticipative operators (Definition 1.1) to the initial domain. An operator T from the initial domain $D(\phi, Y)$ into $M(I, Z)$ will be called a nonanticipating operator if, for every two functions y and z in $D(\phi, Y)$ and every point $s \in I$, the fact that

$y(t) = z(t)$ for almost all $t < s$ implies that $T(y)(t) = T(z)(t)$ for almost all $t < s$.

An operator P from a subset D of Y into Z is said to be Lipschitzian if there exists a constant b such that

$$|P(y_1) - P(y_2)| \le b\, |y_1 - y_2| \quad (2.1)$$

for every $y_1, y_2 \in D$. For $f \in Lip(I, Y; Z)$ the operator

$$F : M(I, Y) \to M(I, Z) \quad \text{defined by} \quad F(y)(t) = f(t, y(t)) \quad (2.2)$$

is called the operator induced by f (the induced operator generated by the function f).

The Lipschitzian space (or simply the Lip-space), denoted by $Lip(K, Y; Z)$, is the set of all functions $f : K \times Y \to Z$ such that $f(t, y)$ is uniformly bounded, Lipschitzian in y, and measurable in t. That is, $Lip(K, Y; Z) = \{ f : K \times Y \to Z \mid f \text{ is Lipschitzian in } y \text{ and } f(\cdot, y) \in M(K, Z) \}$. The infimum of all Lipschitz constants of f will be denoted by $\|f\|$.

On the space $M(I, Y)$ we shall introduce a family of norms, called k-norms, by the formula

$$\|y\|_k = \operatorname{ess\,sup}\, \{\, e^{-kt}\, |y(t)| : t \in I \,\}$$

for any fixed real number k. Observe that this definition yields the inequality

$$|y(t)| \le \|y\|_k\, e^{kt}$$

for almost all t in I. Notice that for every k, the norms $\|\cdot\|_k$ and $\|\cdot\|_0$ are equivalent.
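For example (a one-line verification added for clarity), on the compact interval $I = [t_0, a]$ and for $k > 0$ the weight $e^{-kt}$ satisfies $e^{-ka} \le e^{-kt} \le e^{-kt_0}$, so

$$e^{-ka}\, \|y\|_0 \le \|y\|_k \le e^{-kt_0}\, \|y\|_0,$$

which gives the claimed equivalence of $\|\cdot\|_k$ and $\|\cdot\|_0$ (with the two bounds interchanged when $k < 0$).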

A Lipschitzian operator P from a subset D of $M(J, Y)$ into the space $M(I, U)$ is called an operator of exponential type if, for some constants b and $k_0$,

$$\|P(y) - P(z)\|_k \le b\, \|y - z\|_k$$

for all y and z in the domain D and all $k \ge k_0$.

Example 2.1: The operator $T(y)(t) = \sin\big(y(t - r)\big)$, for a constant real number r, is an induced operator composed with a delay. Thus for any function $y \in M(I, Y)$ the operator T is nonanticipating and Lipschitzian.

Properties of the nonlinear operator F in (2.2) induced by the function f have been studied by Bogdan 1981 [11] and 1982 [12]. In particular, it is known that for $f \in Lip(I, Y; Z)$ and $y \in M(I, Y)$ the function $g : I \to Z$ defined by $g(t) = f(t, y(t))$ belongs to the space of measurable functions $M(I, Z)$ (see Ahangar 1989, [1] - [6]).

When an operator T is nonanticipating, the future values of the input have no effect on the present state. One can prove that the composition and the Cartesian product of nonanticipating and Lipschitzian operators are nonanticipating and Lipschitzian. Furthermore, the operator F induced by the function f is a well-defined, nonanticipating, and Lipschitzian operator.

Definition 2.1 (Direct Sum Operators): Let $T_i$ ($i = 1, 2$) be operators from the domain $D \subset M(I, Y)$ into the space $M(I, Z)$. Define the direct sum operator $T = T_1 \oplus T_2$ by

$$(T_1 \oplus T_2)(y)(t) = T_1(y)(t) + T_2(y)(t)$$

for every y in $M(J, Y)$ and t in I.

Lemma 2.1: A direct sum operator of two nonanticipating and Lipschitzian operators is nonanticipating and Lipschitzian.

Proof: First let us prove that the direct sum operator is nonanticipating. Assume that two functions y and z are in the space $M(J, Y)$ and that for some point s in the interval I we have $y(t) = z(t)$ for almost all $t < s$. Since $T_1$ and $T_2$ are nonanticipative,

$T_1(y)(t) = T_1(z)(t)$ for almost all $t < s$,

$T_2(y)(t) = T_2(z)(t)$ for almost all $t < s$.

These two equalities imply that $\{T_1(y) + T_2(y)\}(t) = \{T_1(z) + T_2(z)\}(t)$ for almost all $t < s$. Thus $(T_1 \oplus T_2)(y)(t) = (T_1 \oplus T_2)(z)(t)$ for almost all $t < s$; that is, $T(y)(t)$ and $T(z)(t)$ coincide for almost all $t < s$.

Now let us prove that the operator $T = T_1 \oplus T_2$ is Lipschitzian. According to the definition,

$$|T(y)(t) - T(z)(t)| = |(T_1 \oplus T_2)(y)(t) - (T_1 \oplus T_2)(z)(t)| = |(T_1(y)(t) + T_2(y)(t)) - (T_1(z)(t) + T_2(z)(t))| \le |T_1(y)(t) - T_1(z)(t)| + |T_2(y)(t) - T_2(z)(t)|.$$

Since both operators are Lipschitzian, the right-hand side is bounded by

$$L_1\, \|y - z\|_0 + L_2\, \|y - z\|_0.$$

Notice that $\|\cdot\|_0$ represents the ess.sup norm in the space of measurable functions $M(I, Y)$. If we let $L = L_1 + L_2$ and take the essential supremum norm on the left-hand side of the above relation, then

$$\|T(y) - T(z)\|_0 \le L\, \|y - z\|_0 \quad (2.3)$$

for all y and z in the domain D. This proves that the direct sum operator is Lipschitzian. Q.E.D.

Example 2.2: Assume that $T_1(y)(t) = y(t - r)$ for a constant real number r and $T_2(y)(t) = \int_{t_0}^{t} y(s)\, ds$. The operator $T(y) = (T_1 \oplus T_2)(y)$ is nonanticipating and Lipschitzian.

Nonanticipating Deterministic Dynamical System: Assume that the operator T is nonanticipating and Lipschitzian. The dynamic system

$$y'(t) = f(t, y(t), T(y)(t)) \quad (2.4)$$

is known as an after-effect differential equation with the initial domain $D(\phi, Y)$.

Given that $f \in Lip(I, Y \times Z; Y)$, there exists a unique solution y to the system (2.4).

Equations of this type arise in many mathematical modeling problems. In the simplest case, T can be taken to be a constant-delay operator (see Hale 1977 [8] and Driver 1977 [9]). The following is a single-species growth model with time delay.

Example 2.3: A single-species model with delay can be described by

$$y'(t) = r\, y(t)\left[1 - \frac{y(t - \tau)}{K}\right]$$

where r is the growth rate of the species y and K is the environmental carrying capacity for y.
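To make the delay effect concrete, the following sketch integrates this delayed logistic (Hutchinson) equation with a fixed-step Euler scheme and a history buffer; the parameter values, the constant initial history, and the step size are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters: growth rate r, capacity K, delay tau, Euler step h.
r, K, tau, h = 1.0, 10.0, 2.0, 0.001
n_steps = int(60.0 / h)
lag = int(round(tau / h))            # number of steps corresponding to the delay

y = np.empty(n_steps + 1)
y[0] = 0.5                           # constant initial history phi(t) = 0.5 for t <= 0

for i in range(n_steps):
    y_delayed = y[i - lag] if i >= lag else y[0]   # y(t_i - tau), read from the history
    # Euler step for y'(t) = r y(t) [1 - y(t - tau) / K].
    y[i + 1] = y[i] + h * r * y[i] * (1.0 - y_delayed / K)

# For r * tau > pi/2 the equilibrium y = K loses stability and the computed
# trajectory settles into sustained oscillations around K.
print("final values:", y[-5:])
```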

The chaotic behavior induced by time delays was presented by Yang Kuang 1996. The global existence for the general single-species model with stage structure, described by a system of the form

$$y'(t) = f(y(t - \tau)) - g(y(t)),$$

has been studied (see Kuang 1996, p. 173, [10]).

Example 2.4: Let T be the operator defined in Example 2.2. One can verify the existence and uniqueness of the solution of the system $y'(t) = T(y)(t)$ with the initial data function $y(t) = \phi(t)$ for $t_0 - r \le t < t_0$. Our goal is to investigate the conditions which guarantee the solution of the system (2.4) when there is a random perturbation in the system.
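A minimal numerical sketch of Example 2.4 (the interval, the delay r, the step size, and the constant initial function are assumptions made only for illustration): since $T(y)(t) = y(t - r) + \int_{t_0}^{t} y(s)\, ds$ uses only values of y that are already known at time t, a forward Euler march is well defined.

```python
import numpy as np

t0, a, r, h = 0.0, 5.0, 1.0, 0.001      # illustrative interval, delay, and step size
n = int((a - t0) / h)
lag = int(round(r / h))

def phi(t):
    """Illustrative constant initial function on [t0 - r, t0]."""
    return 1.0

y = np.empty(n + 1)
y[0] = phi(t0)
running_integral = 0.0                   # approximates the integral of y from t0 to t_i

for i in range(n):
    ti = t0 + i * h
    delayed = y[i - lag] if i >= lag else phi(ti - r)
    # T(y)(t_i) = (T1 + T2)(y)(t_i) = y(t_i - r) + integral of y over [t0, t_i]:
    # only already-computed (past) values of y are needed, reflecting causality.
    y[i + 1] = y[i] + h * (delayed + running_integral)     # forward Euler step
    running_integral += h * y[i]                            # left Riemann update

print("approximate y(a):", y[-1])
```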

Solution to the Nonanticipating Operator Differential Equations: The following operator differential equation, where G is a nonanticipating operator from the initial domain $D(\phi, Y)$ into the Banach space Z, is called a nonanticipating differential equation:

$$\begin{cases} y'(t) = G(y)(t), & t > t_0 \\ y(t) = \phi(t), & t \le t_0 \end{cases} \quad (2.5)$$

for almost all t in the interval I. A function y from the space $M(I, Y)$ is a solution to the nonanticipating operator differential equation if it is strongly differentiable and satisfies the system (2.5) (see Bogdan 1981 [11], Bogdan 1982 [12], Ahangar 1989 [1], and Ahangar 1986 [2]). We accept the following theorem without proof.

Theorem 2.1: Given a nonanticipating and Lipschitzian operator G from the initial domain $D(\phi, Y)$ into the space of Bochner measurable functions $M(I, Y)$, there exists a unique solution $y \in D(\phi, Y)$ that satisfies the nonlinear operator system (2.5).

Note: The purpose of this paper is to develop a generalized nonlinear variation of parameters formula, analogous to Alekseev's result (see Alekseev 1961 [13]). The generalizations are listed below:

1) The classical existence and uniqueness theorem for the solution of abstract Cauchy problems no longer holds if the underlying space is an infinite dimensional Banach space (See Lakshmikantham 1972, [14] [15] and [16] ).

2) The nonlinear system in this paper includes all evolutionary equations governed by a $C_0$-semigroup of operators.

3) Instead of requiring continuity of the nonlinear function $f(t, y(t), T(y)(t))$, we work in the more general Banach space setting in which f is Bochner measurable in t and Lipschitzian in y. As regularity conditions, we assume that the nonlinear operator involved in the nonlinear system is nonanticipating and Lipschitzian.

4) The solution functions x and y are assumed to be strongly differentiable.

3. Strong Solution to the Perturbed Nonanticipating Operator Differential Equations

Definition 3.1: By Nant-Lip we mean nonanticipating and Lipschitzian operators.

The operator G in the system (2.5) is nonanticipating and Lipschitzian. We need to clarify the meaning of the solution to the nonlinear system of operator differential Equation (2.5). The important point is that we accept some other principles indirectly hidden in the proof of Theorem 2.1. In fact we use the equivalence between (2.5) and the integral equation

$$y(t) = \phi(t_0) + \int_{t_0}^{t} G(y)(s)\, ds.$$

Notice that this equivalence requires the absolute continuity of the function y and the summability of the operator G, which imply the differentiability of y. The above nonlinear operator system can similarly be presented by the following operator differential equation

$$x'(t) = f(t, x(t), T(x)(t)) \quad \text{for almost all } t > t_0 \quad (3.1)$$

together with the initial function φ for the past time interval $S = \{t \in \mathbb{R} : t \le t_0\}$. The solution of the system (3.1) is denoted by x(t); it depends on the initial time $t_0$ and the initial function φ, and can be described by $x(t, t_0, \phi)$, which is called the strong solution to the system.

Definition 3.2: A function x(t) is said to be a strong solution to the system (3.1) if it satisfies the following conditions:

1) x is strongly differentiable,

2) x satisfies the system (3.1) almost everywhere in the interval I,

3) there exists a function $\phi \in L(S, Y)$ such that $x(t) = \phi(t)$ for almost all $t \le t_0$.

The following proposition shows the existence and uniqueness of the solution to the perturbed operator differential Equation (3.1). For introductory perturbation theory see Brauer 1966 [18] and Brauer 1967 [19].

Proposition 3.1: Assume that the operator T is Nant-Lip and that the functions f and g belong to Lip-spaces, that is, $f \in Lip(I, Y \times Z; Y)$ and $g \in Lip(I, Y; Y)$.

1) If g is the perturbation to Equation (3.1), then there is a unique strong solution y(t) in the initial domain $D(\phi, Y)$ which satisfies the perturbed system of differential equations

$$y'(t) = f(t, y(t), T(y)(t)) + g(t, y(t)) \quad (3.2)$$

2) Given a solution $x(t, t_0, \phi)$ of (3.1), the solution to the perturbed equation satisfies the integral equation

$$y(t) = x(t, t_0, \phi) + \int_{t_0}^{t} g(s, y(s))\, ds \quad (3.3)$$

Proof: 1) Let $P_1(y)(t) = f(t, y(t), T(y)(t))$ and $P_2(y)(t) = g(t, y(t))$, and define the direct sum operator $G = P_1 \oplus P_2$.

By Lemma 2.1, the operator G is Nant-Lip and the differential Equation (3.2) takes the form

$$y'(t) = G(y)(t) \quad (3.4)$$

for almost all t in I. According to Bogdan's theorem (see Bogdan 1981 [11] and 1982 [12]), there exists a unique solution y(t) in $D(\phi, Y)$ to Equation (3.4).

Proof of 2): The equivalent integral equation of (3.4) is

$$y(t) = \phi(t_0) + \int_{t_0}^{t} G(y)(s)\, ds \quad (3.5)$$

Applying the direct sum operators $P_1$ and $P_2$, we obtain the conclusion (3.3).

Substituting the unperturbed solution $x(t, t_0, \phi) = \phi(t_0) + \int_{t_0}^{t} f(s, x(s), T(x)(s))\, ds$ of (3.1) into (3.3) gives

$$y(t) = \phi(t_0) + \int_{t_0}^{t} f(s, x(s), T(x)(s))\, ds + \int_{t_0}^{t} g(s, y(s))\, ds \quad (3.6)$$

This completes the proof of part 2). Q.E.D.

4. Generalized Operator Differential Equations

Introduction to the mild (weak) solutions: In the definition of the strong solution in the previous section, we assumed an equivalence between the differential and integral forms. This assumption required the differentiability of the solution, a condition which may fail for a large class of partial differential equations. We are going to review the difficulties of applying the concept of strong solution to operator differential equations. The following are some examples.

The free oscillations of an infinite string are governed by the wave equation

$$\frac{\partial^2 u}{\partial t^2} = c^2\, \frac{\partial^2 u}{\partial x^2},$$

whose solutions take the form $u(t, x) = \phi(x + ct) + \psi(x - ct)$, where φ and ψ are twice differentiable functions. Notice that at the vertices of these solutions, $u(x, t)$ will not be differentiable. Notice also that the Lipschitzian condition for the nonlinear operator G, which is required for the unique solution to the system (2.5), may not hold for unbounded operators in evolutionary equations. Thus, we need a new concept which includes nondifferentiable solutions for unbounded operators. We are going to demonstrate this study with a linear abstract Cauchy problem

$$\frac{\partial u}{\partial t} = \alpha_1 u + f(u) \quad (4.1)$$

for $u \in H_0^1$, $f \in L^2(\Omega)$, where $\alpha_1$ may be an unbounded operator in the space X. Assume that the domain of this operator is denoted by $D(\alpha_1) \subset X$. We are looking for a solution space $Y \subset X$. One way to get the solution space Y is to work from A and show that it generates a $C_0$-semigroup.

When the operator arises from a PDE it may be unbounded; thus a solution of (4.1) in the strong sense may not be well defined.

We use a test function $\varphi \in H^1$ such that

$$\langle \partial_t u, \varphi \rangle = \langle \alpha_1 u, \varphi \rangle + \langle f(u), \varphi \rangle \quad (4.2)$$

We define a weak solution ("mild solution") u such that the relation (4.2) and the following integral representation are equivalent:

$$u(t) = e^{At} u_0 + \int_0^t e^{A(t-s)} f(u(s))\, ds \quad (4.3)$$

Most of the physical models can be described by a PDE system with evolution equations. One can interpret the solution as an ODE solution in an appropriate infinite dimensional space.
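In a finite dimensional setting the mild-solution formula (4.3) can be evaluated directly. The sketch below is only an illustration under assumptions not made in the paper: a 2 × 2 matrix A standing in for the generator, a globally Lipschitz nonlinearity f, trapezoidal quadrature, and a fixed number of Picard sweeps for the fixed-point iteration.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative data: a damped rotation A and a bounded Lipschitz nonlinearity f.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
f = lambda v: np.array([0.0, np.sin(v[0])])
u0 = np.array([1.0, 0.0])

t = np.linspace(0.0, 3.0, 301)
dt = t[1] - t[0]
E = [expm(A * s) for s in t]                  # e^{A t_j}, precomputed on the grid

def picard_sweep(u):
    """One sweep of u(t) = e^{At} u0 + integral_0^t e^{A(t-s)} f(u(s)) ds."""
    fu = np.array([f(u[j]) for j in range(len(t))])
    new = np.empty_like(u)
    for i in range(len(t)):
        # Trapezoidal quadrature of e^{A(t_i - t_j)} f(u(t_j)) over 0 <= j <= i.
        vals = np.array([E[i - j] @ fu[j] for j in range(i + 1)])
        w = np.full(i + 1, dt)
        w[0] = w[-1] = dt / 2 if i > 0 else 0.0
        new[i] = E[i] @ u0 + (w[:, None] * vals).sum(axis=0)
    return new

u = np.tile(u0, (len(t), 1))                  # initial guess: the constant function u0
for _ in range(20):                           # a fixed number of Picard iterations
    u = picard_sweep(u)
print("mild solution at t = 3:", u[-1])
```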

Nonlinear Operator Differential Equations (NODE):

Suppose X is a Banach space, $A : D(A) \subset X \to X$ is the generator of a $C_0$-semigroup on X, $U \subset \mathbb{R} \times X$ is open, and $f : U \to X$ is a continuous function such that $x \mapsto f(t, x)$ is differentiable and $(t, x_0) \mapsto D_x f(t, x_0)$ is continuous on U.

For $(t_0, \phi(t_0)) \in U$, we denote by $x(t, t_0, \phi)$ the mild solution to the Cauchy problem

$$\begin{cases} x'(t) = A x + f(t, x(t), T(x)(t)), & \text{for } t > t_0 \\ x(t) = \phi(t), & \text{for } t \le t_0 \end{cases} \quad (4.4)$$

which has not been defined yet. We define it by employing a similar argument and using the integral form of the system (4.4):

$$x(t) = e^{A(t - t_0)}\, \phi(t_0) + \int_{t_0}^{t} e^{A(t-s)} f(s, x(s), T(x)(s))\, ds \quad (4.5)$$

Definition 4.1: We define the function x(t) to be a mild solution to the system (4.4) on $I = [t_0, a]$ if it satisfies (4.5) and $x(t) \in D(A)$ for all t in I.

Lemma 4.1: Every semigroup of operators generated by the operator A is a nonanticipating and Lipschitzian operator.

Proof: Assume that the semigroup $T_t$ generated by A is given. Thus, by the definition of the semigroup, for every y in D(A),

$$T_t y(\xi) = T_t T_\xi(y) = y(t + \xi), \quad \text{for } t \ge 0,\ \xi \ge 0.$$

Suppose that y and $\bar{y}$ in D(A) satisfy $y(\xi) = \bar{y}(\xi)$ for every $\xi \le t$. Then the equality $y(t + \xi) = \bar{y}(t + \xi)$ implies that $T_t(y)(\xi) = T_t(\bar{y})(\xi)$ for all $\xi \le t$. This proves that the semigroup operator $T_t$ is nonanticipating.

Remarks: 1) The converse is not true: a nonanticipating operator need not be a semigroup.

2) It would be interesting to find out what conditions one may impose on a nonanticipating operator so that it generates a semigroup.

Theorem 4.2: (Existence and Uniqueness of the Solution)

Let the operator A be the generator of a semigroup and T be nonanticipating and Lipschitzian. Assume that $f \in Lip(I, Y \times Z; Y)$. Then the system (4.4) has a unique solution in the initial domain $D(\phi, Y)$.

Proof: The homogeneous part of the solution is provided by the semigroup of operators and equals $e^{A(t - t_0)} \phi(t_0)$. The unique solution of the entire system (4.4) is then obtained from the nonanticipating and Lipschitzian properties of T and Theorem 2.1.

These types of problems arise in a variety of physical models like heat conduction, population dynamics, and chemical reactions.

5. Variation of Parameters for Perturbed Operator Differential Equations

Suppose X is a Banach space, $A : D(A) \subset X \to X$ is the generator of a $C_0$-semigroup on X, $U \subset \mathbb{R} \times X \times Y$ is open, and $f : U \to Y$ is a continuous function such that $x \mapsto f(t, x, z)$ is differentiable and $(t_0, x_0, \phi_0) \mapsto D_x f(t, x_0, z_0)$ is continuous on U, where $z = T(x)$ and $z_0 = T(x_0) = \phi_0$.

For $(t_0, x_0, z_0) \in U$, we denote by $x(t, t_0, \phi_0)$ the mild solution to the following Cauchy problem:

$$\begin{cases} x'(t) = A(t)\, x(t) + f(t, x(t), T(x)(t)), & \text{for almost all } t > t_0 \\ x(t) = \phi(t), & \text{for almost all } t \le t_0 \end{cases} \quad (5.1)$$

Assume also that y(t) is a solution to the following perturbed system

$$\begin{cases} y'(t) = A(t)\, y(t) + f(t, y(t), T(y)(t)) + g(t, y(t)), & t > t_0 \\ y(t, t_0, \phi_0) = x(t, t_0, \phi), & t \le t_0 \end{cases} \quad (5.2)$$

The solutions of the system (5.1) are then related by the evolutionary property

$$x(t; t_0, \phi_0) = x\big(t; s, x(s; t_0, \phi_0)\big)$$

for all $t_0 \le s \le t$. The initial function φ depends on t, $t_0$, and $x_0$; it is denoted by $\phi(t, t_0, x_0)$. The solution to the system says that the future is determined completely by the present, with the past being involved only insofar as it determines the present. This is a deterministic version of the Markov property.

We make use of the following theorem in developing the variation formula for nonlinear differential equations. Alekseev's formula for $C_0$-semigroups was generalized by Hale 1992 [17]. In addition, F. Brauer 1966 [18] and 1967 [19] studied the perturbation of nonlinear systems of differential equations.

We will use the same approach to develop the Nonlinear Variation of Parameter (NVP) for operator differential equations.

Let X be a Banach space, let the operator $A : D(A) \subset X \to X$ be the generator of a $C_0$-semigroup on X, and let $f \in Lip(I, Y \times Z; Y)$ be continuously differentiable with respect to x.

Let us summarize our conditions in the following hypotheses:

(H1) The operator A(t) in (5.1) and (5.2) generates a $C_0$-semigroup.

(H2) The functions f and g belong to the following Lip-spaces; that is, they are Bochner measurable in the first variable and Lipschitzian in the other variables:

$$f \in Lip(J, Y \times Z; Y), \qquad g \in Lip(J, Y; Y)$$

(H3) $x(t, t_0, \phi_0)$ is a mild solution to the unperturbed operator differential Equation (5.1).

(H4) $y(t, t_0, \phi)$ is a solution to the perturbed nonlinear operator differential Equation (5.2).

Lemma 5.1: Assume that all conditions for the existence of the solution to the unperturbed nonlinear operator system hold. Then

1) The derivative

$$\frac{\partial}{\partial x_0}\, x(t, t_0, x_0, \phi_0) \equiv U(t, t_0, x_0, \phi_0) \quad (5.3)$$

exists; it is also denoted by $\partial_2 x(t, t_0, x_0, \phi_0)$, the partial derivative with respect to the second parameter $x_0$. It satisfies the following nonlinear operator equation:

$$\begin{cases} U'(t) = \dfrac{dU}{dt} = f_x\left[t, x(t, t_0, x_0, \phi_0)\right] U, & \text{for } t > t_0 \\ U(t_0) = I & \end{cases} \quad (5.4)$$

The relation (5.4) shows how fast the unperturbed solution x(t) changes with respect to its initial position $x_0$ and its initial function φ. It is a partial derivative with respect to the variable $x(s)$ at the new initial value $x(s = t_0)$.

2) Assume also that the function x(t) is Fréchet differentiable with respect to the first parameter $t_0$. Then

$$\frac{\partial}{\partial t_0}\, x(t, t_0, x_0, \phi_0) \equiv V(t, t_0, x_0, \phi_0)$$

exists; it is also denoted by $\partial_1 x(t, t_0, x_0, \phi_0)$.

It satisfies the second kind of operator differential equation

$$\begin{cases} V'(t) = f_x\left[t, x(t, t_0, x_0, \phi_0)\right] V, & \text{for } t > t_0 \\ V(t_0) = -f(t_0, x_0, \phi_0) & \end{cases} \quad (5.6)$$

Furthermore,

$$V(t, t_0, x_0, \phi_0) = -U(t, t_0, x_0, \phi_0)\, f(t_0, x_0, \phi_0) \quad (5.7)$$

Proof:

Part 1): We assume that the transformation T applied to the solution function x(t) produces at $(t_0, x_0)$ the initial function $\phi_0$. Thus the unperturbed solution can be described by $x(t) = x(t, t_0, x_0, \phi_0)$. Let us take the derivative of both sides of (5.3) with respect to the variable t:

$$\frac{d}{dt}\, U(t, t_0, x_0, \phi_0) = \frac{d}{dt}\, \frac{\partial}{\partial x_0}\, x(t, t_0, x_0, \phi_0) = \frac{\partial}{\partial x_0}\, \frac{d}{dt}\, x(t, t_0, x_0, \phi_0) = \frac{\partial}{\partial x_0}\left[f(t, x(t), T(x)(t))\right] = \frac{\partial f}{\partial x}\big(t, x(t), T(x)(t)\big)\, \frac{\partial x}{\partial x_0}.$$

Substituting its equivalent from (5.3), we conclude that

$$U'(t, t_0, x_0, \phi_0) = f_x\left[t, x(t, t_0, x_0, \phi_0)\right] U \quad \text{for } t > t_0,$$

where the second part of the relation (5.4) can be interpreted as an identity operator:

$$U(t_0) \equiv U(t_0, t_0, x_0, \phi_0) = I, \qquad \frac{d}{dt}\, U(t)\Big|_{t = t_0} = 0 \quad (5.8)$$

2) Notice that, at the starting point $t = t_0$, we can re-evaluate the rate of change of the solution with respect to the initial moment:

$$\frac{\partial}{\partial t_0}\, x(t, t_0, x_0, \phi_0) \equiv V(t, t_0, x_0, \phi_0).$$

A few notes are important. For a vector solution,

$$x(t_0, t_0, x_0, \phi_0) = \phi(t_0, x_0) = \phi_0, \qquad \frac{\partial}{\partial x_0}\, x(t, t_0, x_0, \phi_0)\Big|_{t = t_0} = I \quad (5.9)$$

Similar to (5.8),

$$V(t_0, t_0, x_0, \phi_0) = I, \qquad \frac{d}{dt}\, V(t, t_0, x_0, \phi_0)\Big|_{t = t_0} = 0.$$

Notice that

$$\frac{d}{dt}\, V(t, t_0, x_0, \phi_0) = \frac{d}{dt}\, \frac{\partial}{\partial t_0}\, x(t, t_0, x_0, \phi_0) = \frac{\partial}{\partial t_0}\, \frac{d}{dt}\, x(t, t_0, x_0, \phi_0) = \frac{\partial}{\partial t_0}\, f\big(t, x(t, t_0, x_0, \phi_0)\big) = f_x\, \frac{\partial x}{\partial t_0} = f_x\big(t, x(t, t_0, x_0, \phi_0)\big)\, V(t, t_0, x_0, \phi_0)$$

for all $t \ge t_0$; that is,

$$\frac{d}{dt}\, V(t, t_0, x_0, \phi_0) = f_x\big(t, x(t, t_0, x_0, \phi_0)\big)\, V(t, t_0, x_0, \phi_0).$$

This completes the proof of the first part of 2).

To prove the second part of 2), we allow $t_0(s)$ to vary with $s \in [t_0, t]$.

Let us take the derivative of both sides of (5.9) with respect to $t_0$. Since $x(t_0, t_0, x_0, \phi_0) = \phi_0$ does not depend on $t_0$,

$$\left[\frac{\partial}{\partial t}\, x(t, t_0, x_0, \phi_0) + \frac{\partial}{\partial t_0}\, x(t, t_0, x_0, \phi_0)\right]\Big|_{t = t_0} = 0,$$

that is,

$$x'(t_0) + I\, V(t_0) = 0 \quad \Longrightarrow \quad V(t_0) = -x'(t_0, t_0, x_0),$$

and therefore

$$\frac{d}{dt}\, x(t, t_0, x_0, \phi_0)\Big|_{t = t_0} = -V(t_0) = f(t_0, x_0, \phi_0) \quad (5.10)$$

This completes the second part of the result in (2).

Proof of the last part of 2): using the definitions of the operators U and V,

$$V(t, t_0, x_0, \phi_0) = \frac{\partial}{\partial t_0}\, x(t, t_0, x_0, \phi_0) = \frac{\partial}{\partial x_0}\, x(t, t_0, x_0, \phi_0)\, \frac{\partial x_0}{\partial t_0} = -U(t, t_0, x_0, \phi_0)\, \frac{d x(t)}{dt}\Big|_{t = t_0}.$$

Substituting (5.10) yields

$$V(t, t_0, x_0, \phi_0) = -U(t, t_0, x_0, \phi_0)\, f(t_0, x_0, \phi_0). \quad \text{Q.E.D.}$$
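As a finite dimensional sanity check of the relation (5.7), the following sketch works under illustrative assumptions only: a scalar equation without the operator T, SciPy's solve_ivp as the integrator, and central differences approximating the partial derivatives U and V.

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: np.sin(t) - x ** 2          # illustrative scalar right-hand side

def x_of(t, t0, x0):
    """Solution value x(t, t0, x0) of x' = f(t, x), x(t0) = x0."""
    sol = solve_ivp(lambda s, v: [f(s, v[0])], (t0, t), [x0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

t, t0, x0, eps = 2.0, 0.0, 0.3, 1e-6

U = (x_of(t, t0, x0 + eps) - x_of(t, t0, x0 - eps)) / (2 * eps)   # approx. dx/dx0
V = (x_of(t, t0 + eps, x0) - x_of(t, t0 - eps, x0)) / (2 * eps)   # approx. dx/dt0

# Relation (5.7): V(t, t0, x0) = -U(t, t0, x0) f(t0, x0); the two numbers should agree.
print(V, -U * f(t0, x0))
```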

Theorem (5.1): Alekseev Type Variation of Parameters Theorem for NODE Systems:

Let $x(t, t_0, x_0, \phi_0)$ and $y(t, t_0, x_0, \phi_0)$ be the solutions of the NODE systems (5.1) and (5.2), respectively, through the initial conditions $(t_0, x_0, \phi_0)$. Then for $t \ge t_0$

$$y(t, t_0, x_0, \phi_0) = x(t, t_0, x_0, \phi_0) + \int_{t_0}^{t} U\left[t, s, y(s, t_0, x_0, \phi_0)\right] g\left[s, y(s, t_0, x_0, \phi_0)\right] ds,$$

that is,

$$y(t, t_0, x_0, \phi_0) = x(t, t_0, x_0, \phi_0) + \int_{t_0}^{t} \frac{\partial}{\partial x_0}\, x\left[t, s, y(s, t_0, x_0, \phi_0)\right] g\left[s, y(s, t_0, x_0, \phi_0)\right] ds. \quad (5.11)$$

Notice: As seen in Equations (5.1) and (5.2), the perturbation acts only after the initial moment $t = t_0$ with $x(t_0) = x_0$ and after the initial function $\phi(t)$. Up to the initial moment both functions x(t) and y(t) carry the same past history, and they are identical at $t = t_0$.

Proof: The variation of the unperturbed solution x(t) along the perturbed solution y(t), as the initial data of the moving state change with the independent variable $s \in [t_0, t]$, is given by the chain rule:

$$\frac{d}{ds}\, x\left[t, s, y(s)\right] = \frac{\partial x}{\partial s} + \frac{\partial x}{\partial y}\, \frac{dy}{ds} = \frac{\partial}{\partial s}\, x\left[t, s, y(s)\right] + \frac{\partial}{\partial x_0}\, x\left[t, s, y(s)\right]\, y'(s).$$

Substituting (5.3) and the perturbed derivative y'(s) from (5.2), this becomes

$$= V\left[t, s, y(s)\right] + U\left[t, s, y(s)\right] \left\{ f(s, y(s)) + g(s, y(s)) \right\}.$$

Substituting for V by (5.7),

$$= -U\left[t, s, y(s)\right] f(s, y(s)) + U\left[t, s, y(s)\right] \left[ f(s, y(s)) + g(s, y(s)) \right].$$

As a result of these substitutions we obtain the relation

$$\frac{d}{ds}\, x\left[t, s, y(s)\right] = U\left[t, s, y(s)\right] g(s, y(s)),$$

which we integrate over $s \in [t_0, t]$:

$$\int_{t_0}^{t} \frac{d}{ds}\, x\left[t, s, y(s)\right] ds = \int_{t_0}^{t} U\left[t, s, y(s)\right] g(s, y(s))\, ds,$$

$$x\left[t, t, y(t)\right] - x\left[t, t_0, x_0\right] = \int_{t_0}^{t} \frac{\partial}{\partial x_0}\, x\left[t, s, y(s, t_0, x_0)\right] g(s, y(s))\, ds,$$

$$x\left[t, t, y(t)\right] = x\left[t, t_0, x_0, \phi_0\right] + \int_{t_0}^{t} \frac{\partial}{\partial x_0}\, x\left[t, s, y(s, t_0, x_0, \phi_0)\right] g(s, y(s))\, ds.$$

Now the question is: what is $x[t, t, y(t)]$? It is the unperturbed solution evaluated with the perturbed state as initial condition, namely $t_0 = t$ and $x(t_0) = y(t)$. Thus $x[t, t, y(t)] = y(t, t_0, x_0)$, and the above relation can be written as

$$y\left[t, t_0, x_0, \phi_0\right] = x\left[t, t_0, x_0, \phi_0\right] + \int_{t_0}^{t} \frac{\partial}{\partial x_0}\, x\left[t, s, y(s, t_0, x_0, \phi_0)\right] g(s, y(s))\, ds.$$

This is a conclusion of the Alekseev type Theorem for Nonlinear Operator Differential Equations.
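The following sketch checks the Alekseev type formula (5.11) numerically on a scalar example; everything in it is an illustrative assumption rather than part of the paper (no operator T, a smooth f and perturbation g, solve_ivp as the integrator, finite differences for $\partial x/\partial x_0$, and the trapezoidal rule for the integral).

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

f = lambda t, x: -x + np.sin(t)              # unperturbed field (illustrative)
g = lambda t, x: 0.3 * np.cos(x)             # perturbation (illustrative)
t0, x0, T = 0.0, 1.0, 3.0

def x_of(t, s, xs):
    """Unperturbed solution x(t, s, xs) of x' = f(t, x) through x(s) = xs."""
    if t == s:
        return xs
    sol = solve_ivp(lambda u, v: [f(u, v[0])], (s, t), [xs], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Perturbed solution y(t, t0, x0) of y' = f(t, y) + g(t, y).
y_sol = solve_ivp(lambda u, v: [f(u, v[0]) + g(u, v[0])], (t0, T), [x0],
                  dense_output=True, rtol=1e-10, atol=1e-12)
y = lambda s: y_sol.sol(s)[0]

def U(t, s, xs, eps=1e-6):
    """Central-difference approximation of the derivative of x(t, s, x0) in x0."""
    return (x_of(t, s, xs + eps) - x_of(t, s, xs - eps)) / (2 * eps)

# Right-hand side of (5.11): x(t, t0, x0) + integral of U(t, s, y(s)) g(s, y(s)) ds.
s_grid = np.linspace(t0, T, 201)
integrand = np.array([U(T, s, y(s)) * g(s, y(s)) for s in s_grid])
rhs = x_of(T, t0, x0) + trapezoid(integrand, s_grid)

print("perturbed solution y(T):", y(T))
print("Alekseev formula (5.11):", rhs)
```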

6. Generalized Alekseev’s VOP of NODE with Initial Functions

When the operator A is unbounded, one cannot expect to derive the same result for every $x_0 \in X$, since $x(t, t_0, x_0, \phi_0)$ in general is not differentiable with respect to $t_0$. We also need the differentiability of the solution $x(t, t_0, x_0, \phi_0)$ with respect to the parameters $(t_0, x_0, \phi_0)$. The variation of parameters was investigated with respect to the parameters $(t_0, x_0)$ in the previous section; in this section it is investigated with respect to $\phi_0$.

The relation (5.7) has been generalized in Hale 1992 [17] to the infinite dimensional variational operator when $f \in C^1$:

$$\partial_1 x(t, t_0, x_0, \phi_0) = -\partial_3 x(t, t_0, x_0, \phi_0)\left[A x_0 + f(t_0, x_0, T(x_0))\right] \quad (6.1)$$

Assume that $\dfrac{\partial}{\partial \phi}\, y(t, t_0, x_0, \phi) \equiv W(t, t_0, x_0, \phi)$ exists; then

$$W' = \frac{dW}{dt} = \frac{d}{dt}\left(\frac{\partial y}{\partial \phi}\right) = \frac{\partial}{\partial \phi}\, \frac{dy}{dt} = \frac{\partial}{\partial \phi}\big(A x_0 + f(t, y, T(y))\big) = f_y\, \frac{\partial y}{\partial \phi} = f_y\, W.$$

This argument leads to the fact that if $f_y \in Lip(I, Y \times Z; Y)$, then the system

$$\begin{cases} W' = f_y\left[t, y(t, t_0, \phi)\right] W, & t \ge t_0 \\ W(t_0) = I, & \text{for } t \le t_0 \end{cases} \quad (6.2)$$

has a unique solution. The system (6.2) is called the variational equation. Notice that for all $t \le t_0$, $y(t, t_0, \phi) = \phi(t)$, so that

$$W(t, t_0, \phi) = \frac{\partial}{\partial \phi}\, y(t, t_0, \phi) = \frac{\partial}{\partial \phi}\, \phi(t) = I.$$

Using the chain rule for abstract functions, we get

$$\frac{d}{ds}\, y\left[t, t_0, \phi_0 + s(\psi_0 - \phi_0)\right] = W\big(t, t_0, \phi_0 + s(\psi_0 - \phi_0)\big)\, (\psi_0 - \phi_0).$$

Thus, by integrating in s,

$$y(t, t_0, \psi) - y(t, t_0, \phi) = \int_0^1 W\big(t, t_0, \phi_0 + s(\psi_0 - \phi_0)\big)\, (\psi_0 - \phi_0)\, ds \quad (6.3)$$

Proposition 6.1 (Alekseev's Theorem for Operator Differential Equations): Suppose $f : U \subset \mathbb{R} \times X \to X$ and $g : U \subset \mathbb{R} \times X \to X$ are of class $C^1$. If $x(t, t_0, \phi_0)$ is the solution of Equation (5.1) through the initial state $(t_0, x_0, \phi_0)$ and $y(t, t_0, x_0, \psi_0)$ is the perturbed solution of

$$\begin{cases} y'(t) = A y + f(t, y(t), T(y)(t)) + g(t, y(t)), & t > t_0 \\ y(t) = \phi(t), & \text{for } t \le t_0 \end{cases} \quad (6.4)$$

through $(t_0, \phi_0)$, then for any $\phi_0 \in D(A) \cap D(\phi, Y)$ we have

$$y(t, t_0, x_0, \phi_0) = x(t, t_0, x_0, \phi_0) + \int_{t_0}^{t} \frac{\partial}{\partial \phi_0}\, x\big(t, s, y(s, t_0, \phi_0)\big)\, g\big(s, y(s, t_0, \phi_0)\big)\, ds \quad (6.5)$$

Proof: For $\phi_0 \in D(A) \cap D(\phi, Y)$, consider the function $s \mapsto x(t, s, y(s))$, where $y(s) = y(s, t_0, \phi_0)$ is the perturbed solution. Differentiating with respect to s and using the relation (6.1) together with the perturbed Equation (6.4),

$$\frac{d}{ds}\, x(t, s, y(s)) = \partial_1 x(t, s, y(s)) + \partial_2 x(t, s, y(s))\, y'(s) = -\partial_2 x(t, s, y(s))\left[A y(s) + f(s, y(s), T(y)(s))\right] + \partial_2 x(t, s, y(s))\left[A y(s) + f(s, y(s), T(y)(s)) + g(s, y(s))\right] = \partial_2 x(t, s, y(s))\, g(s, y(s)).$$

Integrating from t0 to t, we will conclude that

$$x(t, t, y(t)) - x(t, t_0, y(t_0)) = \int_{t_0}^{t} \partial_2 x(t, s, y(s))\, g(s, y(s))\, ds$$

Therefore,

$$y(t, t_0, \phi) = x(t, t_0, \phi(t_0)) + \int_{t_0}^{t} \partial_2 x(t, s, y(s))\, g(s, y(s))\, ds$$

This proves the theorem for $\phi_0 \in D(A) \cap D(\phi, Y)$.

Assume that for the initial function $\phi_0 \in Y$ the maximal interval of existence of the solution $y(t, t_0, \phi)$ is $[t_0, a)$.

For $t_0 \le t \le \tau$, define the ball $B_\delta(\tau) = \{\psi \in Y : \|\psi - \phi\| \le \delta(\tau)\}$ and the operators

$$F_1(\cdot) : B_\delta(\tau) \to C([t_0, \tau], Y), \qquad F_2(\cdot) : B_\delta(\tau) \to C([t_0, \tau], Y)$$

by the following relations:

$$F_1(\phi_0)(t) = x(t, t_0, \phi(t_0)) + \int_{t_0}^{t} \partial_2 x(t, s, y(s))\, g(s, y(s))\, ds,$$

$$F_2(\phi_0)(t) = e^{A(t - t_0)} \phi_0 + \int_{t_0}^{t} e^{A(t-s)} \left[ f(s, y(s), T(y)(s)) + g(s, y(s)) \right] ds.$$

Since both operators are well defined and continuous on $B_\delta(\tau)$ and coincide on $D(A) \cap B_\delta(\tau)$, they must coincide on $B_\delta(\tau)$. This proves the theorem.

The next theorem will provide the variation of parameters formula for operator differential equations.

Theorem 6.1 (Variation of Parameters for NODE): The solutions of the systems (5.1) and (5.2) satisfy

$$y(t, t_0, x_0, \phi) = x(t, t_0, x_0, \phi) + \int_{t_0}^{t} W\big(t, s, y(s, t_0, \phi)\big)\, g\big(s, y(s, t_0, \phi)\big)\, ds \quad (6.6)$$

where $W\big(t, s, y(s, t_0, \phi)\big) = \dfrac{\partial x(t, t_0, \phi)}{\partial \phi}$ and the inverse operator $W^{-1}(t, t_0, v(t))$ is assumed to exist.

Proof: In the variation of parameters, we determine a function v(t) such that

$$\begin{cases} y(t, t_0, \phi) = x(t, t_0, v(t)), & \text{for } t > t_0 \\ v(t) = \phi_0(t), & \text{for } t \le t_0 \end{cases} \quad (6.7)$$

is a solution process for the system (5.2). From the system (5.2) and differentiation of (6.7) we get

$$y'(t) = \frac{\partial x(t, t_0, v(t))}{\partial t} + \frac{\partial x(t, t_0, v(t))}{\partial v}\, v'(t) = A y + f(t, y(t), T(y)(t)) + g(t, y(t)), \quad t > t_0. \quad (6.8)$$

Since $x(t, t_0, v(t))$ is a solution of (5.1), it follows that

$$g(t, y(t)) = \frac{\partial x(t, t_0, v(t))}{\partial \phi}\, v'(t) \quad (6.9)$$

Provided the inverse operator $W^{-1}(t, t_0, v(t))$ exists, then

$$v'(t) = W^{-1}(t, t_0, v(t))\, g\big(t, x(t, t_0, v(t))\big) \quad (6.10)$$

By integrating we obtain

$$v(t) = \phi(t_0) + \int_{t_0}^{t} W^{-1}(s, t_0, v(s))\, g\big(s, x(s, t_0, v(s))\big)\, ds \quad (6.11)$$

Differentiation with respect to the second independent variable s, $t_0 \le s \le t$, implies that

$$\frac{d\, x(t, t_0, v(s))}{ds} = \frac{\partial x(t, t_0, v(s))}{\partial v}\, \frac{\partial v(s)}{\partial s} = W(t, t_0, v(s))\, v'(s).$$

Substituting (6.10) for v'(s), the right-hand side becomes

$$W(t, t_0, v(s))\, W^{-1}(s, t_0, v(s))\, g\big(s, x(s, t_0, v(s))\big),$$

and integrating this relation in s over $[t_0, t]$ implies

$$x(t, t_0, v(t)) = x(t, t_0, \phi(t_0)) + \int_{t_0}^{t} W(t, t_0, v(s))\, W^{-1}(s, t_0, v(s))\, g\big(s, x(s, t_0, v(s))\big)\, ds.$$

Using the variation definition (6.7) in the above relation, we now obtain the variation of parameters formula for the nonlinear operator differential equation:

$$y(t, t_0, \phi) = x(t, t_0, \phi(t_0)) + \int_{t_0}^{t} W(t, t_0, v(s))\, W^{-1}(s, t_0, v(s))\, g\big(s, y(s, t_0, \phi)\big)\, ds \quad (6.12)$$

The operator T in the differential Equations (5.1) and (5.2) could be any delay, integral, composition, or Cartesian product of nonanticipating and Lipschitzian operators, and it affects the unperturbed solution $x(t, t_0, x_0, \phi)$. The variation formula (6.12) is affected by the operator T through these changes.

Assuming that the variation of parameters is given, we will investigate some of the properties of this formula through the following conclusions for particular cases.

Corollary 6.1: Suppose that the conditions of Theorem 6.1 are satisfied and guarantee the existence and uniqueness of the solution of the system (5.2). Assume also that $\phi(t_0) = \phi_0$ is the initial state of the system $x' = A x$. Then the relation (5.11) becomes

$$y(t, t_0, \phi_0) = x(t, t_0, \phi_0) + \int_{t_0}^{t} f\big(s, x(s), T(x)(s)\big)\, ds + \int_{t_0}^{t} \frac{\partial}{\partial x_0}\, x\big(t, s, y(s, t_0, x_0, \phi_0)\big)\, g\big(s, y(s, t_0, x_0, \phi_0)\big)\, ds \quad (6.13)$$

Proof: Assuming that $x(t, t_0, \phi_0)$ is a solution to the homogeneous equation $x' - A x = 0$, then by direct integration of the system (5.1) and application of the variation of parameters formula (5.11) to the nonlinear system (5.2), we obtain the formula (6.13).

Corollary 6.2: Suppose that the conditions (H1) through (H4) guarantee the existence and uniqueness of the solutions of (5.1) and (5.2). Assume also the particular case $f \equiv 0$ and $g(t, x(t)) \equiv g(t)$; then the Alekseev formula (5.11) reduces to the variation of parameters formula

$$x(t) = \Phi(t, t_0)\, x_0 + \int_{t_0}^{t} \Phi(t, s)\, g(s)\, ds \quad (6.16)$$

for the linear differential equation $x'(t) = A(t)\, x(t) + g(t)$.

Proof: Assuming that $x(t, t_0, x_0)$ is a solution to the homogeneous equation $x'(t) - A(t)\, x(t) = 0$, the fundamental matrix of the homogeneous system gives

$$x(t, t_0, x_0) = \Phi(t, t_0)\, x_0.$$

By considering

$$x(t_0, t_0, x_0) = \Phi(t_0, t_0)\, x_0 = x_0 \qquad \text{and} \qquad x\big(t, s, y(s, t_0, x_0)\big) = \Phi(t, s)\, y(s, t_0, x_0),$$

we conclude that

$$\frac{\partial}{\partial x_0}\, x\big(t, s, y(s, t_0, x_0)\big) = \Phi(t, s)\, \frac{\partial}{\partial x_0}\, y(s, t_0, x_0) = \Phi(t, s)\, \frac{\partial x_0}{\partial x_0} = \Phi(t, s) \quad (6.17)$$

{It can be verified that $\partial y(s, t_0, x_0)/\partial x_0 = I$.}

Notice that the deterministic function f is identically equal to zero, $f \equiv 0$. This establishes the variation of constants formula (6.16) for the linear system.

Corollary 6.3: Suppose that in the differential Equation (5.1) we have $A = 0$. Then the general solution of (5.2) about the equilibrium solution $y_0$ is

$$y(t, t_0, x_0) = x_0 + \int_{t_0}^{t} f\left[s, x(s)\right] ds + \int_{t_0}^{t} g\big(s, y(s, t_0, x_0)\big)\, ds \quad (6.18)$$

Proof: Since the operator A = 0, the unperturbed solution $x(t, t_0, x_0) = x_0$ is a constant function; therefore $\frac{\partial}{\partial x_0}\, x(t, t_0, x_0) = I$. To find the perturbed solution

of the system (5.2), we use the conclusion of Theorem 5.1 for the unperturbed solution of the system (5.1) to obtain the relation (6.18).

We will study the variation of parameters for operator differential equations disturbed by forcing operator functions. These nonlinear operators can involve the following types: delays, integrals, compositions, or Cartesian products of nonanticipating and Lipschitzian operators.

7. Conclusions

Assume that A ( t ) is a matrix function on I × Y into the space M ( I , Z ) . Suppose that Φ ( t ) represents the fundamental matrix solution process of a differential equation

$$x'(t) = A(t)\, x(t), \qquad x(t_0) = x_0 \quad (7.1)$$

Then

$$\Phi'(t) = A(t)\, \Phi(t), \qquad \Phi(t_0) = I \ (\text{the identity matrix}) \quad (7.2)$$

$$\det \Phi(t) = \exp\left[\int_{t_0}^{t} \operatorname{tr} A(s)\, ds\right], \quad t \in I \quad (7.3)$$

A method of variation of parameters for the systems (7.1) - (7.3) is presented by G. S. Ladde and V. Lakshmikantham 1980 [16].

Suppose A(t) is Lebesgue summable from I into $M(I, Z)$ and let $f \in Lip(I, Y; Z)$ be a perturbation in the system (7.1); then the solution process $y(t) = y(t, t_0, y_0)$ of the following nonlinear system

$$y'(t) = A(t)\, y(t) + f(t, y(t)), \qquad y(t_0) = y_0 \quad (7.4)$$

will satisfy the following integral equation

$$y(t) = x(t, t_0, x_0) + \int_{t_0}^{t} \Phi(t)\, \Phi^{-1}(s)\, f(s, y(s))\, ds \quad (7.5)$$

for all $t \ge t_0$. Further study of this general form of the variation of parameters for nonlinear operator differential equations should be very interesting. These nonlinear operators can involve many types of operators: delays, integrals, compositions, or Cartesian products of nonanticipating and Lipschitzian operators.
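For a constant matrix A the fundamental matrix is $\Phi(t) = e^{A(t - t_0)}$ and $\Phi(t)\Phi^{-1}(s) = e^{A(t-s)}$, so (7.5) can be checked directly. In the sketch below the matrix A, the nonlinearity f, and the quadrature rule are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[0.0, 1.0], [-2.0, -1.0]])                # illustrative constant matrix
f = lambda t, y: np.array([0.0, 0.5 * np.sin(y[0])])    # Lipschitz perturbation
t0, T = 0.0, 4.0
y0 = np.array([1.0, 0.0])

# Perturbed system y' = A y + f(t, y), y(t0) = y0, solved directly.
sol = solve_ivp(lambda t, y: A @ y + f(t, y), (t0, T), y0,
                dense_output=True, rtol=1e-10, atol=1e-12)
y = lambda s: sol.sol(s)

# Right-hand side of (7.5): Phi(t) y0 + integral of Phi(t) Phi(s)^{-1} f(s, y(s)) ds,
# with Phi(t) Phi(s)^{-1} = e^{A (t - s)} for a constant matrix A.
s_grid = np.linspace(t0, T, 401)
integrand = np.array([expm(A * (T - s)) @ f(s, y(s)) for s in s_grid])
rhs = expm(A * (T - t0)) @ y0 + trapezoid(integrand, s_grid, axis=0)

print("direct integration :", y(T))
print("formula (7.5)      :", rhs)
```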

The classical nonlinear system $y'(t) = f(t, y(t))$ for $t > t_0$, $y(t_0) = y_0$, in (1.1) is well known and has been extensively studied. The variation of parameters discovered by Alekseev is a great tool for studying this kind of nonlinear system and for using its conclusion in the analysis of stability and asymptotic behavior. The solutions to nonlinear operator differential equations of type (1.2), which include all operators T satisfying the nonanticipating and Lipschitzian conditions and which were also reviewed here, have a wide range of applications.

For the operator differential equations in this paper we proved and demonstrated a general form of the Alekseev theorem when the nonlinear system (5.1) includes the generator A of a $C_0$-semigroup.

All important conditions in (H1) through (H4) connect the nonanticipating property of T, the semigroup property of A(t), and the Lipschitzian property of f. The variation of parameters helped us to find the solution to the perturbed system. This perturbed solution for nonanticipating dynamic systems will help us in the future to study the stability and asymptotic behavior of the system. Two major issues related to the variation of parameters can be developed for nonlinear operator differential equations.

First, a numerical algorithm and computational program to produce the solution for such a general form of the nonlinear variation of parameters method. Second, a generalization of the stability applications of nonlinear systems to operator differential equations.

Cite this paper

Ahangar, R.R. (2017) Variation of Parameters for Causal Operator Differential Equations. Applied Mathematics, 8, 1883-1902. https://doi.org/10.4236/am.2017.812134

References

1. Ahangar, R.R. (1989) Nonanticipating Dynamical Model and Optimal Control. Applied Mathematics Letters, 2, 15-18. https://doi.org/10.1016/0893-9659(89)90106-7

2. Ahangar, R.R. (1986) Existence of Optimal Controls for Generalized Dynamical Systems Satisfying Nonanticipating-Operator Differential Equations. Dissertation, Department of Mathematics, The Catholic University of America, Washington DC.

3. Ahangar, R.R. and Salehi, E. (2002) Automatic Controls for Nonlinear Dynamical Systems with Lipschitzian Trajectories. Journal of Mathematical Analysis and Applications, 268, 400-405. https://doi.org/10.1006/jmaa.2000.7060

4. Ahangar, R.R. (2005) Optimal Control Solution to Operator Differential Equations Using Dynamic Programming. Proceedings of the 2005 International Conference on Scientific Computing, Las Vegas, 20-23 June 2005, 16-22.

5. Ahangar, R.R. and Salehi, E. (2006) Optimal Automatic Controls Solution to Nonlinear Operator Dynamical Systems. Proceedings of the International Conference on Scientific Computing, Las Vegas, 26-29 June 2006.

6. Ahangar, R.R. (2008) Optimal Control Solution to Nonlinear Causal Operator Systems with Target State. FCS (Foundations of Computer Science), WORLDCOMP, 218-223.

7. Naylor, A.W. and Sell, G.R. (1982) Linear Operator Theory in Engineering and Science. Applied Mathematical Sciences, Vol. 40, Springer-Verlag, Berlin.

8. Hale, J.K. (1977) Theory of Functional Differential Equations. Springer-Verlag, Berlin.

9. Driver, R.D. (1977) Ordinary and Delay Differential Equations. Applied Mathematical Sciences, Vol. 20, Springer-Verlag, Berlin. https://doi.org/10.1007/978-1-4684-9467-9

10. Kuang, Y. (1996) Delay Differential Equations with Applications in Population Dynamics. Mathematics in Science and Engineering, Vol. 191, Academic Press, Cambridge.

11. Bogdan, V.M. (1981) Existence and Uniqueness of Solution for a Class of Nonlinear Operator Differential Equations Arising in Automatic Spaceship Navigation. NASA Technical Administration, Springfield.

12. Bogdan, V.M. (1982) Existence and Uniqueness of Solution to Nonlinear Operator Differential Equations Generalizing Dynamical Systems of Automatic Spaceship Navigation. Nonlinear Phenomena in Mathematical Sciences, Academic Press, Cambridge, 123-136.

13. Alekseev, V.M. (1961) An Estimate for the Perturbations of the Solution of Ordinary Differential Equations. Vestnik Moskov. Univ. Ser. 1, Mat. Mekh., No. 2, 28-36.

14. Lakshmikantham, V. and Leela, S. (1981) Nonlinear Differential Equations in Abstract Spaces. Pergamon Press, Oxford.

15. Lakshmikantham, V. and Ladas, G.E. (1972) Differential Equations in Abstract Spaces. Academic Press, Cambridge.

16. Lakshmikantham, V. and Ladde, G.S. (1980) Random Differential Equations. Academic Press, Cambridge.

17. Hale, J., Arrieta, J. and Carvalho, A.N. (1992) A Damped Hyperbolic Equation with Critical Exponent. Communications in Partial Differential Equations, 17, 841-866.

18. Brauer, F. (1966) Perturbations of Nonlinear Systems of Differential Equations. Journal of Mathematical Analysis and Applications, 14, 198-206. https://doi.org/10.1016/0022-247X(66)90021-7

19. Brauer, F. (1967) Perturbations of Nonlinear Systems of Differential Equations II. Journal of Mathematical Analysis and Applications, 17, 418-434. https://doi.org/10.1016/0022-247X(67)90132-1