Applied Mathematics
Vol.10 No.04(2019), Article ID:91699,23 pages
10.4236/am.2019.104014

Optimal Control of Cancer Growth

Jens Christian Larsen

Vanløse Alle 50 2. mf. tv, 2720 Vanløse, Copenhagen, Denmark

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 15, 2019; Accepted: April 8, 2019; Published: April 11, 2019

ABSTRACT

The purpose of the present paper is to apply the Pontryagin Minimum Principle to mathematical models of cancer growth. In [1] I presented a discrete affine model T of cancer growth in the variables C for cancer, GF for growth factors and GI for growth inhibitors. One can sometimes find an affine vector field X on $\mathbb{R}^3$ whose time-one map is T. It is to this vector field that we apply the Pontryagin Minimum Principle. We also apply the Discrete Pontryagin Minimum Principle to the model T. We prove that maximal chemotherapy can be optimal, but also that it need not be, depending on the spectral properties of the matrix A (see below). In section five we determine an optimal strategy for chemo or immune therapy.

Keywords:

Cancer, Models of Cancer Growth, Pontryagin Minimum Principle

1. Introduction

Our reference for optimal control theory of ODEs is [2] and for control theory of discrete systems [3]. For a review of optimal control theory applied to cancer, see [4]. The model we consider here is from [1] and is defined by

$$A = \begin{pmatrix} 1+\gamma & \alpha & \beta \\ \delta & 1+\mu_F & 0 \\ \sigma & 0 & 1+\mu_I \end{pmatrix}, \qquad T(y) = Ay + g \quad (1)$$

where $y = (C, G_F, G_I)^T$, $g = (g_C, g_F, g_I)^T \in \mathbb{R}^3$ and $T$ denotes transpose. Here $\alpha \in \mathbb{R}_+$ and $\beta, \delta, \sigma, \mu_F, \mu_I < 0$. We assume that $\alpha\delta + \beta\sigma \neq 0$.

The matrix A has characteristic polynomial p ( λ ) where

$$p(\lambda) = \lambda^3 - \lambda^2(3+\gamma+\mu_F+\mu_I) - \lambda\big(\alpha\delta + \beta\sigma - (1+\mu_F)(1+\mu_I) - (1+\gamma)(2+\mu_F+\mu_I)\big) \quad (2)$$

$$+\ \alpha\delta(1+\mu_I) + \beta\sigma(1+\mu_F) - (1+\gamma)(1+\mu_F)(1+\mu_I) \quad (3)$$

and this polynomial can have (i) three real roots or (ii) one real root and two complex conjugate roots. It turns out that setting $\mu = \mu_F = \mu_I$ simplifies matters considerably. Then

$$p(\lambda) = \big(\lambda - (1+\mu)\big)\big(\lambda^2 - (2+\gamma+\mu)\lambda + (1+\gamma)(1+\mu) - \alpha\delta - \beta\sigma\big) \quad (4)$$

So the eigenvalues are $1+\mu$ and

$$\lambda_\pm = 1 + \frac{\gamma+\mu}{2} \pm \frac{1}{2}\sqrt{(\gamma-\mu)^2 + 4(\alpha\delta+\beta\sigma)} \quad (5)$$
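The eigenvalue formula (5) can be checked numerically. The following sketch uses illustrative parameter values of my own choosing (the paper keeps them symbolic), with $\mu_F = \mu_I = \mu$:

```python
import numpy as np

# Illustrative parameter values (my own assumption; the paper keeps them
# symbolic), with mu_F = mu_I = mu.
gamma, mu = 0.5, -0.3
alpha, beta, delta, sigma = 0.2, -0.1, -0.4, -0.2

A = np.array([[1 + gamma, alpha,  beta  ],
              [delta,     1 + mu, 0.0   ],
              [sigma,     0.0,    1 + mu]])

# Predicted spectrum: 1 + mu together with lambda_+/- from Equation (5).
disc = (gamma - mu) ** 2 + 4 * (alpha * delta + beta * sigma)
lam_plus  = 1 + (gamma + mu) / 2 + 0.5 * np.sqrt(disc)
lam_minus = 1 + (gamma + mu) / 2 - 0.5 * np.sqrt(disc)

predicted = sorted([1 + mu, lam_plus, lam_minus])
computed  = sorted(np.linalg.eigvals(A).real)
print(np.allclose(predicted, computed))  # True
```

With these values the expression under the square root is positive, so we are in case (i); making $\alpha\delta + \beta\sigma$ sufficiently negative instead produces case (ii).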

We assume that $1+\mu > 0$, $\mu < 0$. In case (i), if the eigenvalues $\lambda_+, \lambda_-, \tilde\lambda$ of A are positive and distinct (and we assume this), one can find an affine vector field X on $\mathbb{R}^3$ whose time-one map is

$$\Phi_1^X = T \quad (6)$$

see [1] and below. In case (ii), if the eigenvalues of A are $1+\mu$ and $a \pm ib$ with $a > 0$, $1+\mu > 0$, $b \neq 0$ (and we assume this), one can also find an affine vector field X on $\mathbb{R}^3$ such that

$$\Phi_1^X = T \quad (7)$$

In case (i) define a matrix of eigenvectors

$$D = \begin{pmatrix} 1+\mu-\lambda_+ & 1+\mu-\lambda_- & 0 \\ -\delta & -\delta & -\beta \\ -\sigma & -\sigma & \alpha \end{pmatrix} \quad (8)$$

and in case (ii) define

$$U = \begin{pmatrix} 1+\mu-a & b & 0 \\ -\delta & 0 & -\beta \\ -\sigma & 0 & \alpha \end{pmatrix} \quad (9)$$

Then in case (i)

$$\Lambda = D^{-1}AD = \begin{pmatrix} \lambda_+ & 0 & 0 \\ 0 & \lambda_- & 0 \\ 0 & 0 & \tilde\lambda \end{pmatrix} \quad (10)$$

if the eigenvalues are distinct and positive, and in case (ii)

$$\Lambda = U^{-1}AU = \begin{pmatrix} a & -b & 0 \\ b & a & 0 \\ 0 & 0 & 1+\mu \end{pmatrix} \quad (11)$$

see [1]. Here $D^{-1}$ denotes the inverse of D. To find an X in case (i), when the eigenvalues are real, positive and distinct, define the affine vector field

$$Y(x) = \begin{pmatrix} \ln\lambda_+ & 0 & 0 \\ 0 & \ln\lambda_- & 0 \\ 0 & 0 & \ln\tilde\lambda \end{pmatrix}x + d \quad (12)$$

Then with the right choice of d

$$\Phi_1^Y(x) = D^{-1}ADx + D^{-1}g \quad (13)$$

hence if we let

$$X(y) = DY(D^{-1}(y)) \quad (14)$$

we get

$$\Phi_1^X = T \quad (15)$$

To find an X in case (ii), when $a > 0$, $b \neq 0$, $1+\mu > 0$, define

$$Y(x) = \begin{pmatrix} a_1 & -b_1 & 0 \\ b_1 & a_1 & 0 \\ 0 & 0 & \ln(1+\mu) \end{pmatrix}x + \tilde d \quad (16)$$

Then with the right choice of $a_1, b_1$ and $\tilde d \in \mathbb{R}^3$ we get

$$\Phi_1^Y(x) = U^{-1}AUx + U^{-1}g \quad (17)$$

hence with

$$X(y) = UY(U^{-1}(y)) \quad (18)$$

we have that the time-one map is

$$\Phi_1^X(y) = T(y) \quad (19)$$

See [1] for details of the above and also below.
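The construction of X can also be checked numerically. The sketch below (assumed parameter values again) builds a real logarithm $M = D(\ln\Lambda)D^{-1}$ of A directly from an eigendecomposition, rather than through the explicit D of (8), and verifies that the resulting affine field has time-one map T:

```python
import numpy as np

# Sketch of the time-one-map construction (assumed parameter values).
# With real, positive, distinct eigenvalues, M = D (ln Lambda) D^{-1} is a
# real logarithm of A, and the affine field X(y) = M y + c has time-one map
# Phi_1(y) = e^M y + M^{-1}(e^M - I) c.  Choosing c = M (A - I)^{-1} g
# therefore makes Phi_1 = T.
gamma, mu = 0.5, -0.3
alpha, beta, delta, sigma = 0.2, -0.1, -0.4, -0.2
A = np.array([[1 + gamma, alpha,  beta  ],
              [delta,     1 + mu, 0.0   ],
              [sigma,     0.0,    1 + mu]])
g = np.array([0.1, 0.05, 0.2])

w, V = np.linalg.eig(A)                       # columns of V play the role of D
M = (V @ np.diag(np.log(w)) @ np.linalg.inv(V)).real
c = M @ np.linalg.solve(A - np.eye(3), g)

y0 = np.array([1.0, 0.5, 0.3])
# e^M = A by construction, so Phi_1(y0) = A y0 + M^{-1}(A - I) c
phi1 = A @ y0 + np.linalg.solve(M, (A - np.eye(3)) @ c)
print(np.allclose(phi1, A @ y0 + g))  # True
```

Since M and $A - I$ are both functions of A they commute, which is why the choice $c = M(A-I)^{-1}g$ reproduces the affine part g exactly.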

In section two we solve the problem: minimize $C(T)$, $T > 0$ (fixed), subject to

$$\begin{pmatrix} C \\ G_F \\ G_I \end{pmatrix}' = X\begin{pmatrix} C \\ G_F \\ G_I \end{pmatrix} + e_3u(t) \quad (20)$$

$u(t) \in [0, g_{I0}]$, $g_{I0} > 0$, by first solving it for Y and then inferring the solution for X. Here

$$e_3 = B = (0,0,1)^T \quad (21)$$

and $u(t)$ is piecewise continuous. In section three we apply the discrete Pontryagin minimum principle to the difference equation

$$x_{k+1} = Ax_k + Bu_k + g$$

where

$$x_k = (C(k), G_F(k), G_I(k))^T \quad (22)$$

$u_k \in [0, g_{I0}]$, $g_{I0} > 0$, $k = 0, 1, \ldots, N-1$, where $N \in \mathbb{N}$, $N \geq 2$, $x_0 = x$, with the objective to minimize $C(N)$. There are again the two cases (i) and (ii) above to consider. If $\mu = \mu_F = \mu_I$ and the eigenvalues are positive and distinct, maximal chemotherapy is optimal. But in case (ii) it is not always optimal.

When $\mu_F \neq \mu_I$ and the eigenvalues are real and distinct, we produce a counterexample to maximal chemotherapy being optimal, see section four. Some solid tumors grow like Gompertz functions, see [5]. There are several important monographs in mathematics and medicine, see [6] - [11]. [12] - [19] are my latest papers on cancer and mathematics. [20] is our reference for roots of cubic polynomials. [21] proves continuous dependence of the roots of a polynomial on its coefficients.

In section five we consider optimality of the discrete model T when $\mu_F \neq \mu_I$. Here we also determine optimal control of the map T.

2. Optimal Control of X

The purpose of this section is to minimize $C(T)$ subject to

$$\begin{pmatrix} C \\ G_F \\ G_I \end{pmatrix}' = X\begin{pmatrix} C \\ G_F \\ G_I \end{pmatrix} + e_3u(t) \quad (23)$$

$T > 0$ fixed and with $\mu = \mu_F = \mu_I$. Let us consider (ii) first. We assume that there is a real eigenvalue $1+\mu$ and two complex eigenvalues $a \pm ib$, $a > 0$, $1+\mu > 0$, $b \neq 0$.

Now define the two-by-two matrix

$$L_1 = \begin{pmatrix} a_1 & -b_1 \\ b_1 & a_1 \end{pmatrix} \quad (24)$$

where

$$b_1 = \tan^{-1}\left(\frac{b}{a}\right) \quad (25)$$

$$a_1 = \ln\sqrt{a^2+b^2} \quad (26)$$

This implies that

$$\exp\begin{pmatrix} a_1 & -b_1 \\ b_1 & a_1 \end{pmatrix} = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} = L \quad (27)$$

Also let

$$\tilde B = \begin{pmatrix} a_1 & -b_1 & 0 \\ b_1 & a_1 & 0 \\ 0 & 0 & \ln(1+\mu) \end{pmatrix} \quad (28)$$

and define the vector field

$$Y(x,v) = \tilde Bx + d + U^{-1}e_3v = \tilde Y(x) + U^{-1}e_3v \quad (29)$$

which is affine when $v = 0$. Here $e_1, e_2, e_3$ is the canonical basis of $\mathbb{R}^3$, $x, d \in \mathbb{R}^3$ and $v \in [0, g_{I0}]$, $g_{I0} > 0$. Put

$$\tilde X(y) = U\tilde Y(U^{-1}(y)) \quad (30)$$

Let

$$X(y,v) = \tilde X(y) + e_3v = UY(U^{-1}(y), v) \quad (31)$$

Define $d_1, d_2, d_3$ by

$$(U^{-1}g)_{1,2} = L_1^{-1}(L - \mathrm{id})d_{1,2} \quad (32)$$

and

$$\frac{\mu}{\ln(1+\mu)}d_3 = (U^{-1}g)_3 \quad (33)$$

Then

$$\Phi_1^X = T \quad (34)$$

when $v = 0$. To this vector field, with $v \in [0, g_{I0}]$, associate the Hamiltonian

$$H(x,p,v,t) = p^TY(x,v) \quad (35)$$

Then we have the adjoint equations

$$\dot p_1 = -\frac{\partial H}{\partial x_1} = -p_1a_1 - p_2b_1 \quad (36)$$

$$\dot p_2 = -\frac{\partial H}{\partial x_2} = p_1b_1 - p_2a_1 \quad (37)$$

$$\dot p_3 = -\frac{\partial H}{\partial x_3} = -p_3\ln(1+\mu) \quad (38)$$

So

$$\dot p_{1,2} = \begin{pmatrix} -a_1 & -b_1 \\ b_1 & -a_1 \end{pmatrix}\begin{pmatrix} p_1 \\ p_2 \end{pmatrix} \quad (39)$$

which has flow

$$p_{1,2}(t) = e^{-a_1t}\begin{pmatrix} \cos(b_1t) & -\sin(b_1t) \\ \sin(b_1t) & \cos(b_1t) \end{pmatrix}p_{1,2}(0) \quad (40)$$

Define

$$d(t) = L_1^{-1}\big(\exp(L_1t) - \mathrm{id}\big)d_{1,2} \quad (41)$$

Then the flow of Y is, for $v = 0$,

$$\Phi^Y(t,x)_{1,2} = \exp(L_1t)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + d(t) \quad (42)$$

$$\Phi^Y(t,x)_3 = e^{\ln(1+\mu)t}x_3 + \frac{e^{\ln(1+\mu)t} - 1}{\ln(1+\mu)}d_3 \quad (43)$$

Define

$$S_1(x) = (1+\mu-a)x_1 + bx_2 \quad (44)$$

Then we have the transversality conditions

$$p_1(T) = \frac{\partial S_1}{\partial x_1} = 1+\mu-a \quad (45)$$

$$p_2(T) = \frac{\partial S_1}{\partial x_2} = b \quad (46)$$

$$p_3(T) = \frac{\partial S_1}{\partial x_3} = 0 \quad (47)$$

This means that $p_3(t) = 0$ and

$$p_{1,2}(T) = e^{-a_1T}\begin{pmatrix} \cos(b_1T) & -\sin(b_1T) \\ \sin(b_1T) & \cos(b_1T) \end{pmatrix}p_{1,2}(0) = \begin{pmatrix} 1+\mu-a \\ b \end{pmatrix} \quad (48)$$

The rotation matrix in this equation has inverse

$$\begin{pmatrix} \cos(b_1T) & \sin(b_1T) \\ -\sin(b_1T) & \cos(b_1T) \end{pmatrix} \quad (49)$$

So

$$p_{1,2}(0) = e^{a_1T}\begin{pmatrix} \cos(b_1T) & \sin(b_1T) \\ -\sin(b_1T) & \cos(b_1T) \end{pmatrix}\begin{pmatrix} 1+\mu-a \\ b \end{pmatrix} \quad (50)$$

But then we have

$$p_{1,2}(t) = e^{a_1(T-t)}\begin{pmatrix} \cos(b_1(t-T)) & -\sin(b_1(t-T)) \\ \sin(b_1(t-T)) & \cos(b_1(t-T)) \end{pmatrix}\begin{pmatrix} 1+\mu-a \\ b \end{pmatrix} \quad (51)$$

Notice that the cofactors $U^{31}$, $U^{32}$ of U satisfy

$$U^{31} = -b\beta \quad (52)$$

$$U^{32} = (1+\mu-a)\beta \quad (53)$$

If

$$H(x^*(t), p(t), u^*(t), t) \leq H(x^*(t), p(t), u, t) \quad (54)$$

for all $u \in U = [0, g_{I0}]$, then $(x^*(t), u^*(t))$ is optimal, see Equations (55) to (60). But this amounts to the inequality

$$\big(-p_1(t)b\beta u^* + p_2(t)(1+\mu-a)\beta u^*\big)\frac{1}{\det(U)} \quad (55)$$

$$= e^{a_1(T-t)}\Big(\big(\cos(b_1(t-T))(1+\mu-a) - b\sin(b_1(t-T))\big)(-b\beta u^*) \quad (56)$$

$$+ \big(\sin(b_1(t-T))(1+\mu-a) + b\cos(b_1(t-T))\big)(1+\mu-a)\beta u^*\Big)\frac{1}{\det(U)} \quad (57)$$

$$= e^{a_1(T-t)}\sin(b_1(t-T))\big(b^2 + (1+\mu-a)^2\big)\beta u^*\frac{1}{\det(U)} \quad (58)$$

$$= -e^{a_1(T-t)}(\alpha\delta+\beta\sigma)\beta u^*\sin(b_1(t-T))\frac{1}{\det(U)} \quad (59)$$

$$\leq -e^{a_1(T-t)}\sin(b_1(t-T))\frac{\beta u}{b} \quad (60)$$

where $\det(U)$ denotes the determinant of U. We have used that

$$1+\mu-a = 1+\mu-\left(1+\frac{\gamma+\mu}{2}\right) = \frac{\mu-\gamma}{2} \quad (61)$$

and

$$\lambda_+ = 1+\frac{\gamma+\mu}{2} + \frac{1}{2}\sqrt{(\gamma-\mu)^2+4(\alpha\delta+\beta\sigma)} = a+ib \quad (62)$$

so that

$$(1+\mu-a)^2 + b^2 = -(\alpha\delta+\beta\sigma) \quad (63)$$

and also that $\det U = b(\alpha\delta+\beta\sigma)$. So if

$$\frac{\beta\sin(b_1(t-T))}{b} > 0 \quad (64)$$

then

u * ( t ) = g I 0 (65)

is optimal and if the reverse inequality holds then

u * ( t ) = 0 (66)

is optimal. We shall now consider the case where all the eigenvalues $\lambda_+, \lambda_-, \tilde\lambda$ are real, positive and distinct and $\mu = \mu_F = \mu_I$. Here we let

$$Y(x,u) = \tilde\Lambda x + d + D^{-1}e_3u \quad (67)$$

where

$$\tilde\Lambda = \mathrm{diag}\big(\ln\lambda_+, \ln\lambda_-, \ln(1+\mu)\big) \quad (68)$$

Also define d by

$$\frac{\lambda_+-1}{\ln\lambda_+}d_1 = (D^{-1}g)_1 \quad (69)$$

$$\frac{\lambda_--1}{\ln\lambda_-}d_2 = (D^{-1}g)_2 \quad (70)$$

$$\frac{\mu}{\ln(1+\mu)}d_3 = (D^{-1}g)_3 \quad (71)$$

$d = (d_1, d_2, d_3)^T \in \mathbb{R}^3$. Now define the Hamiltonian

H ( x , p , u , t ) = p T Y ( x , u ) (72)

Then we get the adjoint equations

$$\dot p_1 = -\frac{\partial H}{\partial x_1} = -\ln(\lambda_+)\,p_1 \quad (73)$$

$$\dot p_2 = -\frac{\partial H}{\partial x_2} = -\ln(\lambda_-)\,p_2 \quad (74)$$

$$\dot p_3 = -\frac{\partial H}{\partial x_3} = -\ln(1+\mu)\,p_3 \quad (75)$$

see [2]. Now define

$$S_1(x) = (1+\mu-\lambda_+)x_1 + (1+\mu-\lambda_-)x_2 \quad (76)$$

Then we have the transversality conditions

$$p(T) = \frac{\partial S_1}{\partial x} = \begin{pmatrix} 1+\mu-\lambda_+ \\ 1+\mu-\lambda_- \\ 0 \end{pmatrix} \quad (77)$$

by [2] and see below. Now observe that

$$\det(D) = (\lambda_+-\lambda_-)(\alpha\delta+\beta\sigma) \quad (78)$$

Because

$$1+\mu-\lambda_\pm = \frac{\mu-\gamma}{2} \mp \frac{1}{2}\sqrt{(\mu-\gamma)^2+4(\alpha\delta+\beta\sigma)} \quad (79)$$

we find

$$(1+\mu-\lambda_+)(1+\mu-\lambda_-) = -(\alpha\delta+\beta\sigma) \quad (80)$$

We shall need

$$D^{-1} = \begin{pmatrix} -(\alpha\delta+\beta\sigma) & -\alpha(1+\mu-\lambda_-) & -\beta(1+\mu-\lambda_-) \\ \alpha\delta+\beta\sigma & \alpha(1+\mu-\lambda_+) & \beta(1+\mu-\lambda_+) \\ 0 & -\sigma(\lambda_+-\lambda_-) & \delta(\lambda_+-\lambda_-) \end{pmatrix}\frac{1}{\det D} \quad (81)$$

We have

$$\begin{pmatrix} C(T) \\ G_F(T) \\ G_I(T) \end{pmatrix} = D\begin{pmatrix} x_1(T) \\ x_2(T) \\ x_3(T) \end{pmatrix} \quad (82)$$

and since $D_{13} = 0$ this gives $C(T) = (1+\mu-\lambda_+)x_1(T) + (1+\mu-\lambda_-)x_2(T)$; hence the definition of $S_1$. Now

$$p(T) = \frac{\partial S_1}{\partial x} \quad (83)$$

thus

$$p_1(T) = p_1(0)e^{-\ln(\lambda_+)t}\big|_{t=T} = p_1(0)e^{-\ln(\lambda_+)T} = 1+\mu-\lambda_+ \quad (84)$$

$$p_2(T) = p_2(0)e^{-\ln(\lambda_-)t}\big|_{t=T} = p_2(0)e^{-\ln(\lambda_-)T} = 1+\mu-\lambda_- \quad (85)$$

$$p_3(T) = p_3(0)e^{-\ln(1+\mu)t}\big|_{t=T} = p_3(0)e^{-\ln(1+\mu)T} = 0 \quad (86)$$

So

$$p_1(t) = (1+\mu-\lambda_+)e^{\ln(\lambda_+)(T-t)} \quad (87)$$

$$p_2(t) = (1+\mu-\lambda_-)e^{\ln(\lambda_-)(T-t)} \quad (88)$$

$$p_3(t) = 0 \quad (89)$$
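As a numerical check (with assumed values of $\mu$, $\lambda_+$ and T), the costate (87) indeed solves the adjoint Equation (73) with the terminal condition (84):

```python
import numpy as np

# Check (assumed values of mu, lambda_+, T) that p1(t) of Equation (87)
# solves the adjoint equation p1' = -ln(lambda_+) p1 of Equation (73)
# with terminal value p1(T) = 1 + mu - lambda_+ of Equation (84).
mu, lam_plus, T = -0.3, 1.4, 2.0
p1 = lambda t: (1 + mu - lam_plus) * np.exp(np.log(lam_plus) * (T - t))

t, h = 0.7, 1e-6
deriv = (p1(t + h) - p1(t - h)) / (2 * h)   # central difference
print(np.isclose(deriv, -np.log(lam_plus) * p1(t)))  # True
print(np.isclose(p1(T), 1 + mu - lam_plus))          # True
```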

If

$$H(x^*(t), p(t), u^*(t), t) \leq H(x^*(t), p(t), u, t) \quad (90)$$

for all $u \in U$, then $(x^*(t), u^*(t))$ is optimal. And this is equivalent to

$$p(t)^TD^{-1}e_3u^*(t) \leq p(t)^TD^{-1}e_3u \quad (91)$$

which again is equivalent to

$$(1+\mu-\lambda_+)e^{\ln(\lambda_+)(T-t)}(-\beta)(1+\mu-\lambda_-)\frac{u^*(t)}{\det(D)} \quad (92)$$

$$+\ (1+\mu-\lambda_-)e^{\ln(\lambda_-)(T-t)}\beta(1+\mu-\lambda_+)\frac{u^*(t)}{\det(D)} \quad (93)$$

$$= u^*(t)\beta\frac{\alpha\delta+\beta\sigma}{(\lambda_+-\lambda_-)(\alpha\delta+\beta\sigma)}\big(e^{\ln(\lambda_+)(T-t)} - e^{\ln(\lambda_-)(T-t)}\big) \quad (94)$$

$$\leq u\beta\big(e^{\ln(\lambda_+)(T-t)} - e^{\ln(\lambda_-)(T-t)}\big)\frac{1}{\lambda_+-\lambda_-} \quad (95)$$

for all $u \in U$. Since $\beta < 0$ and $e^{\ln(\lambda_+)(T-t)} > e^{\ln(\lambda_-)(T-t)}$ for $t < T$, the coefficient of u is negative, and it follows that

$$u^*(t) = g_{I0} \quad (96)$$

is optimal. We have the two Hamiltonians

$$H_Y(x,p,u,t) = p^TY(x,u) = \langle p, Y(x,u)\rangle \quad (97)$$

$$H_X(y,q,v,t) = q^TX(y,v) = \langle q, X(y,v)\rangle \quad (98)$$

where $\langle\cdot,\cdot\rangle$ denotes the canonical inner product. It follows that, when the eigenvalues are $a \pm ib$ and $1+\mu$ with $a > 0$, $1+\mu > 0$, $b \neq 0$,

y ( t ) = U x ( t ) (99)

p ( t ) = U T q ( t ) (100)

if

y ( 0 ) = U x ( 0 ) (101)

p ( T ) = U T q ( T ) (102)

where $y(t)$ is an integral curve of X and $x(t)$ an integral curve of Y. Because

$$\dot q = -\frac{\partial H_X}{\partial y} = -(U^{-1})^T\tilde B^TU^Tq \quad (103)$$

hence

$$(U^Tq)' = -\tilde B^TU^Tq \quad (104)$$

and

$$\dot p = -\tilde B^Tp \quad (105)$$

We now get that

$$H_X(y(t), q(t), v(t), t) = \langle q(t), X(y(t), v(t))\rangle \quad (106)$$

$$= \langle q(t), UY(U^{-1}y(t), v(t))\rangle \quad (107)$$

$$= \langle U^Tq(t), Y(U^{-1}(y(t)), v(t))\rangle \quad (108)$$

$$= \langle p(t), Y(x(t), v(t))\rangle \quad (109)$$

$$= H_Y(x(t), p(t), v(t), t) \quad (110)$$

So

H X ( y ( t ) , q ( t ) , v ( t ) , t ) = H Y ( x ( t ) , p ( t ) , v ( t ) , t ) (111)

When the eigenvalues of A are real, distinct and positive

y ( t ) = D x ( t ) (112)

p ( t ) = D T q ( t ) (113)

if

y ( 0 ) = D x ( 0 ) (114)

p ( T ) = D T q ( T ) (115)

We also have

$$\dot q = -(D^{-1})^T\tilde\Lambda^TD^Tq \quad (116)$$

and

$$(D^Tq)' = -\tilde\Lambda^TD^Tq \quad (117)$$

thus

$$\dot p = -\tilde\Lambda^Tp \quad (118)$$

Hence

H X ( y ( t ) , q ( t ) , v ( t ) , t ) = H Y ( x ( t ) , p ( t ) , v ( t ) , t ) (119)

We need the following theorem, which is well known.

Theorem 1 The following statements about a $C^2$ function

$$f: \mathbb{R}^n \to \mathbb{R} \quad (120)$$

where n is a positive integer, are equivalent:

(i) $f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y)$, where $\lambda \in [0,1]$;

(ii) $f(y) \geq f(x) + f_x(x)(y-x)$;

(iii) $\sum_{i,j=1}^n \gamma_i\frac{\partial^2f}{\partial x_i\partial x_j}\gamma_j \geq 0$;

where $x, y, \gamma \in \mathbb{R}^n$.

$(x(t), u(t))$ is admissible by definition if $0 \leq u(t) \leq g_{I0}$ and

$$\dot x(t) = Y(x(t), u(t)), \quad x(0) = x_0 \in \mathbb{R}^3 \quad (121)$$

To see that

$$S_1(x^*(T)) \leq S_1(x(T)) \quad (122)$$

for $(x^*(t), u^*(t))$ an optimal candidate and $(x(t), u(t))$ admissible, argue as in [2]:

$$\Delta = S_1(x^*(T)) - S_1(x(T)) \quad (123)$$

$$= S_1(x^*(T)) - S_1(x(T)) \quad (124)$$

$$+ \int_0^T\big(H_Y(x^*(t),p(t),u^*(t),t) - p(t)^T\dot x^*(t)\big) - \big(H_Y(x(t),p(t),u(t),t) - p(t)^T\dot x(t)\big)\,dt \quad (125)$$

where the added integral vanishes, since $H_Y = p^T\dot x$ along trajectories.

We have the following inequality, which follows from theorem 1:

$$H_Y(x^*,p,u^*,t) - H_Y(x,p,u,t) \leq \frac{\partial H_Y}{\partial x}(x^*,p,u^*,t)(x^*-x) + \frac{\partial H_Y}{\partial u}(x^*,p,u^*,t)(u^*-u) \quad (126)$$

We also have

$$\dot p = -\frac{\partial H_Y}{\partial x}(x^*,p,u^*,t) \quad (127)$$

We can now estimate

$$\Delta \leq S_1(x^*(T)) - S_1(x(T)) + \int_0^T\dot p^T(x-x^*) + p^T(\dot x-\dot x^*)\,dt \quad (128)$$

$$+ \int_0^T\frac{\partial H_Y}{\partial u}(x^*,p,u^*,t)(u^*-u)\,dt \leq \int_0^T\frac{d}{dt}\big(p^T(x-x^*)\big)(t)\,dt + S_1(x^*(T)) - S_1(x(T)) \quad (129)$$

$$= p(T)^T(x(T)-x^*(T)) + S_1(x^*(T)) - S_1(x(T)) \quad (130)$$

$$= \frac{\partial S_1}{\partial x}(x^*(T))(x(T)-x^*(T)) + S_1(x^*(T)) - S_1(x(T)) \quad (131)$$

$$\leq 0 \quad (132)$$

because $S_1$ is convex, by theorem 1, and because $x(0) = x^*(0)$. We have also used that we have arranged that

$$\frac{\partial H}{\partial u}(x^*,p,u^*,t)(u^*-u) \leq 0 \quad (133)$$

for all $u \in U$, by the choice of $u^*$.

We have optimality.

3. Optimal Control of T

In this section we consider the problem: minimize $C(N)$ subject to

$$y_{k+1} = Ay_k + Bu_k + g = f(y_k, u_k) \quad (134)$$

$k = 0, \ldots, N-1$, $N \in \mathbb{N}$, $N \geq 2$, $u_k \in U = [0, g_{I0}]$, $g_{I0} > 0$, $\mu = \mu_F = \mu_I$, where A is as in the introduction.

Also

$$y_k = (C(k), G_F(k), G_I(k))^T, \quad g \in \mathbb{R}^3 \quad (135)$$

Here

$$B = (0,0,1)^T, \quad g = (g_C, g_F, g_I)^T \quad (136)$$

Assume (i) the eigenvalues of A are real and distinct.

In the Discrete Pontryagin Minimum Principle applied to T one defines the Hamiltonian by (138), computes

$$\frac{\partial H}{\partial u_k}(x_k^*, u_k^*, \lambda_k)(u_k^* - u_k) \quad (137)$$

and minimizes it to find the optimal control $u_k^*$. It is optimal due to computations (157) to (163) below.

Define then the Hamiltonian

$$H(x_k, u_k, \lambda_k) = \lambda_k^T(\tilde Ax_k + \hat Bu_k + D^{-1}g) \quad (138)$$

where $\lambda_k \in \mathbb{R}^3$ (139) and

$$\hat B = D^{-1}B \quad (140)$$

$$\tilde A = D^{-1}AD \quad (141)$$

Then we have the adjoint equation

$$\lambda_{k-1} = \frac{\partial H}{\partial x_k}(x_k, u_k, \lambda_k) = \tilde A^T\lambda_k \quad (142)$$

Inductively

$$\lambda_{N-k-1} = (\tilde A^T)^k\lambda_{N-1} \quad (143)$$

In particular

$$\lambda_0 = (\tilde A^T)^{N-1}\lambda_{N-1} \quad (144)$$

For $k = 0$ we have

$$H(x_0^*, u_0^*, \lambda_0) = (\lambda_0)^T(\tilde Ax_0^* + \hat Bu_0^* + D^{-1}g) \leq H(x_0^*, u, \lambda_0) = (\lambda_0)^T(\tilde Ax_0^* + \hat Bu + D^{-1}g) \quad (145)$$

which is equivalent to

$$(\lambda_0)^T\hat Bu_0^* \leq (\lambda_0)^T\hat Bu \quad (146)$$

Define

$$S_1(x) = F(x) = (1+\mu-\lambda_+)x_1 + (1+\mu-\lambda_-)x_2 \quad (147)$$

Now

$$\lambda_0 = (\tilde A^T)^{N-1}\lambda_{N-1} \quad (148)$$

where

$$\lambda_{N-1} = \frac{\partial F}{\partial x}(x_N) \quad (149)$$

Thus

$$(\lambda_0)^T\hat Bu_0^* = \big((\tilde A^T)^{N-1}\lambda_{N-1}\big)^T\hat Bu_0^* = (\lambda_{N-1})^T\tilde A^{N-1}\hat Bu_0^* = \frac{\partial F}{\partial x}(x_N)^T\tilde A^{N-1}\hat Bu_0^* \quad (150)$$

Assume that (i) holds and that $\lambda_+, \lambda_-, 1+\mu$ are distinct, where $\mu = \mu_F = \mu_I$. Then

$$\hat B = \begin{pmatrix} D^{31} \\ D^{32} \\ D^{33} \end{pmatrix}\frac{1}{\det(D)} = \begin{pmatrix} -\beta(1+\mu-\lambda_-) \\ \beta(1+\mu-\lambda_+) \\ \delta(\lambda_+-\lambda_-) \end{pmatrix}\frac{1}{\det(D)} = D^{-1}e_3 \quad (151)$$

where the $D^{3j}$ are cofactors of D. So now we get that (146) amounts to

$$(1+\mu-\lambda_+, 1+\mu-\lambda_-, 0)\begin{pmatrix} \lambda_+^{N-1} & 0 & 0 \\ 0 & \lambda_-^{N-1} & 0 \\ 0 & 0 & (1+\mu)^{N-1} \end{pmatrix}\hat B \quad (152)$$

$$= \beta(\alpha\delta+\beta\sigma)\frac{\lambda_+^{N-1}}{\det(D)} - \beta(\alpha\delta+\beta\sigma)\frac{\lambda_-^{N-1}}{\det(D)} \quad (153)$$

$$= \frac{\alpha\delta+\beta\sigma}{(\lambda_+-\lambda_-)(\alpha\delta+\beta\sigma)}\beta\big(\lambda_+^{N-1} - \lambda_-^{N-1}\big) \quad (154)$$

$$= \frac{\beta}{\lambda_+-\lambda_-}\big(\lambda_+^{N-1} - \lambda_-^{N-1}\big) \leq 0 \quad (155)$$

Similarly, for $k = 1, \ldots, N-2$,

$$\lambda_k^T\hat Bu_k^* = (1+\mu-\lambda_+, 1+\mu-\lambda_-, 0)\begin{pmatrix} \lambda_+^{N-k-1} & 0 & 0 \\ 0 & \lambda_-^{N-k-1} & 0 \\ 0 & 0 & (1+\mu)^{N-k-1} \end{pmatrix}\hat Bu_k^* < 0 \quad (156)$$

For $k = N-1$ we have $\lambda_{N-1}^T\hat B = F_x^T\hat B = 0$, so the last control does not affect $C(N)$. This means that maximal chemotherapy is optimal, because, similar to (128) to (132), we get

$$\Delta = S_1(x_N^*) - S_1(x_N) \quad (157)$$

$$= S_1(x_N^*) - S_1(x_N) + \sum_{k=0}^{N-1}\big(H(x_k^*,u_k^*,\lambda_k) - H(x_k,u_k,\lambda_k) - \lambda_k^Tx_{k+1}^* + \lambda_k^Tx_{k+1}\big) \quad (158)$$

$$\leq S_1(x_N^*) - S_1(x_N) + \sum_{k=0}^{N-1}\Big(\frac{\partial H}{\partial x_k}(x_k^*,u_k^*,\lambda_k)(x_k^*-x_k) \quad (159)$$

$$+ \frac{\partial H}{\partial u_k}(x_k^*,u_k^*,\lambda_k)(u_k^*-u_k) - \lambda_k^Tx_{k+1}^* + \lambda_k^Tx_{k+1}\Big) \quad (160)$$

$$\leq S_1(x_N^*) - S_1(x_N) + \sum_{k=0}^{N-1}\big(\lambda_{k-1}^T(x_k^*-x_k) - \lambda_k^Tx_{k+1}^* + \lambda_k^Tx_{k+1}\big) \quad (161)$$

$$= S_1(x_N^*) - S_1(x_N) - \lambda_{N-1}^T(x_N^*-x_N) \quad (162)$$

$$= S_1(x_N^*) - S_1(x_N) - \frac{\partial S_1}{\partial x}(x_N^*)(x_N^*-x_N) \leq 0 \quad (163)$$

by theorem 1, since $S_1$ is convex. (The sum added in (158) vanishes because $H(x_k, u_k, \lambda_k) = \lambda_k^Tx_{k+1}$, and (161) telescopes since $x_0^* = x_0$.)

Then we have

$$C^*(N) - C(N) = S_1(x_N^*) - S_1(x_N) \leq 0 \quad (164)$$
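The conclusion can be illustrated by brute force. The sketch below (with assumed parameter values satisfying $\beta < 0$ and real, distinct, positive eigenvalues) simulates the difference equation over all bang-bang dose schedules; since $C(N)$ is affine in each $u_k$, this scan suffices:

```python
import numpy as np
from itertools import product

# Assumed parameters (real, distinct, positive eigenvalues; beta < 0).
gamma, mu = 0.5, -0.3
alpha, beta, delta, sigma = 0.2, -0.1, -0.4, -0.2
A = np.array([[1 + gamma, alpha,  beta  ],
              [delta,     1 + mu, 0.0   ],
              [sigma,     0.0,    1 + mu]])
g = np.array([0.1, 0.05, 0.2])
B = np.array([0.0, 0.0, 1.0])
gI0, N = 1.0, 4
x0 = np.array([1.0, 0.5, 0.3])

def terminal_C(controls):
    x = x0.copy()
    for u in controls:                    # x_{k+1} = A x_k + B u_k + g
        x = A @ x + B * u + g
    return x[0]                           # C(N)

# C(N) is affine in each u_k, so scanning bang-bang controls suffices.
best = min(product([0.0, gI0], repeat=N), key=terminal_C)
print(best[:N - 1])   # (1.0, 1.0, 1.0)
```

The minimizer applies the maximal dose at every step except the last, which cannot affect $C(N)$ because $B = e_3$ only feeds $G_I$.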

As above we get

$$\tilde H(y_k, v_k, \zeta_k) = \zeta_k^T(Ay_k + g + e_3v_k) \quad (165)$$

and

$$y_{k+1} = Ay_k + g + e_3v_k \quad (166)$$

Also

$$\zeta_{k-1} = A^T\zeta_k \quad (167)$$

and

$$\lambda_{k-1} = \tilde A^T\lambda_k \quad (168)$$

Thus

$$\lambda_{k-1} = D^T\zeta_{k-1} \quad (169)$$

When

y 0 = D x 0 (170)

then

y k = D x k (171)

Hence

H ˜ ( y k , v k , ζ k ) = H ( x k , v k , λ k ) (172)

Now consider the case where A has complex eigenvalues $a \pm ib$, $a > 0$, $b \neq 0$, $1+\mu > 0$. We need the following well-known formulas:

$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}^{2p+1} = \begin{pmatrix} A_p & -B_p \\ B_p & A_p \end{pmatrix} \quad (173)$$

where $p \geq 0$ and

$$A_p = \sum_{q=0}^p\binom{2p+1}{2q}(-1)^qb^{2q}a^{2p+1-2q} \quad (174)$$

$$B_p = \sum_{q=0}^p\binom{2p+1}{2q+1}(-1)^qb^{2q+1}a^{2p-2q} \quad (175)$$

and, for $p \in \mathbb{N}$,

$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}^{2p} = \begin{pmatrix} C_p & -D_p \\ D_p & C_p \end{pmatrix} \quad (176)$$

where

$$C_p = \sum_{q=0}^p\binom{2p}{2q}(-1)^qb^{2q}a^{2p-2q} \quad (177)$$

$$D_p = \sum_{q=0}^{p-1}\binom{2p}{2q+1}(-1)^qb^{2q+1}a^{2p-1-2q} \quad (178)$$

One can prove them by induction. We have

$$H(x_k, v_k, \lambda_k) = \langle\lambda_k, \tilde Bx_k + U^{-1}e_3v_k + U^{-1}g\rangle = \tilde H(y_k, v_k, \zeta_k) = \langle\zeta_k, Ay_k + g + e_3v_k\rangle \quad (179)$$

where

$$\tilde B = \begin{pmatrix} a & -b & 0 \\ b & a & 0 \\ 0 & 0 & 1+\mu \end{pmatrix} \quad (180)$$

As above

$$\lambda_k = (\tilde B^T)^{N-k-1}\lambda_{N-1} = (\tilde B^T)^{N-k-1}F_x \quad (181)$$

where

$$S_1(x) = F(x) = (1+\mu-a)x_1 + bx_2 \quad (182)$$

So

$$\lambda_k^T\hat B = (1+\mu-a, b, 0)\,\tilde B^{N-k-1}\,\big(-\beta b, \beta(1+\mu-a), *\big)^T\frac{1}{\det(U)} \quad (183)$$

For $k \geq 0$ we find, when $N-k-1 = 2p+1$, $k \neq N-1$,

$$\frac{\partial H}{\partial v_k} = (1+\mu-a, b)\begin{pmatrix} A_p & -B_p \\ B_p & A_p \end{pmatrix}\beta\begin{pmatrix} -b \\ 1+\mu-a \end{pmatrix}\frac{1}{\det(U)} \quad (184)$$

$$= \beta\Big(-b\big((1+\mu-a)A_p + bB_p\big) + (1+\mu-a)\big(bA_p - (1+\mu-a)B_p\big)\Big)\frac{1}{\det(U)} \quad (185)$$

$$= -\beta\big((1+\mu-a)^2 + b^2\big)B_p\frac{1}{\det(U)} \quad (186)$$

$$= \beta(\alpha\delta+\beta\sigma)B_p\frac{1}{\det(U)} \quad (187)$$

$$= \frac{\beta B_p}{b} \quad (188)$$

Here $\det(U) = b(\alpha\delta+\beta\sigma)$. When $N-k-1 = 2p$, $k \neq N-1$,

$$\frac{\partial H}{\partial v_k} = (1+\mu-a, b)\begin{pmatrix} C_p & -D_p \\ D_p & C_p \end{pmatrix}\beta\begin{pmatrix} -b \\ 1+\mu-a \end{pmatrix}\frac{1}{\det(U)} = \frac{\beta D_p}{b} \quad (189)$$

If

$$\frac{\beta B_p}{b} < 0 \quad (190)$$

let

$$v_k^* = g_{I0} \quad (191)$$

and

$$v_k^* = 0 \quad (192)$$

if the reverse inequality holds. If

$$\frac{\beta D_p}{b} < 0 \quad (193)$$

let

$$v_k^* = g_{I0} \quad (194)$$

and

$$v_k^* = 0 \quad (195)$$

if the reverse inequality holds. Then $(x_k^*, v_k^*)$ is optimal, by (157) to (163). For $k = N-1$ we have $(1,0,0)B = 0$, so the last dose does not affect $C(N)$.
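The power formulas (174) to (178) behind this switching rule can be verified numerically; the sketch below (assumed values of a and b) compares them with direct matrix powers:

```python
import numpy as np
from math import comb

a, b = 0.8, 0.5   # assumed values, a > 0, b != 0
R = np.array([[a, -b], [b, a]])

def odd_power(p):   # Equations (174)-(175): R^(2p+1)
    Ap = sum(comb(2*p + 1, 2*q) * (-1)**q * b**(2*q) * a**(2*p + 1 - 2*q)
             for q in range(p + 1))
    Bp = sum(comb(2*p + 1, 2*q + 1) * (-1)**q * b**(2*q + 1) * a**(2*p - 2*q)
             for q in range(p + 1))
    return np.array([[Ap, -Bp], [Bp, Ap]])

def even_power(p):  # Equations (177)-(178): R^(2p)
    Cp = sum(comb(2*p, 2*q) * (-1)**q * b**(2*q) * a**(2*p - 2*q)
             for q in range(p + 1))
    Dp = sum(comb(2*p, 2*q + 1) * (-1)**q * b**(2*q + 1) * a**(2*p - 1 - 2*q)
             for q in range(p))
    return np.array([[Cp, -Dp], [Dp, Cp]])

print(all(np.allclose(np.linalg.matrix_power(R, 2*p + 1), odd_power(p)) and
          np.allclose(np.linalg.matrix_power(R, 2*p), even_power(p))
          for p in range(1, 6)))  # True
```

The formulas are just the binomial expansion of $(aI + bJ)^n$ with $J^2 = -I$, which is why the induction proof works.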

4. A Counter Example

We shall now present a counterexample to optimality of maximal chemotherapy when the eigenvalues are real and $\mu_F \neq \mu_I$, for the model

$$T(y) = Ay + g \quad (196)$$

of the introduction.

Recall the definition of the discriminant $\Delta$ of a cubic polynomial

$$p(\lambda) = \lambda^3 + a_1\lambda^2 + a_2\lambda + a_3 \quad (197)$$

namely

$$108\Delta = a_1^2a_2^2 - 27a_3^2 - 4a_2^3 - 4a_1^3a_3 + 18a_1a_2a_3 \quad (198)$$

Notice that the degree in $(1+\gamma)$ is four in $a_1^2a_2^2$ and $4a_1^3a_3$, while it is only two or three in $27a_3^2$, $4a_2^3$ and $18a_1a_2a_3$. We thus get

$$108\Delta = a_1^2a_2^2 - 4a_1^3a_3 + \text{lower order terms in }(1+\gamma) \quad (199)$$

$$= (3+\gamma+\mu_F+\mu_I)^2\big(\alpha\delta+\beta\sigma - (1+\mu_F)(1+\mu_I) - (1+\gamma)(2+\mu_F+\mu_I)\big)^2 \quad (200)$$

$$+ 4(3+\gamma+\mu_F+\mu_I)^3\big(\alpha\delta(1+\mu_I) + \beta\sigma(1+\mu_F) - (1+\gamma)(1+\mu_F)(1+\mu_I)\big) \quad (201)$$

$$+ \text{lower order terms in }(1+\gamma) \quad (202)$$

which becomes

( ( 1 + γ ) 2 + ( 1 + μ F ) 2 + ( 1 + μ I ) 2 + 2 ( 1 + γ ) ( 1 + μ F ) + 2 ( 1 + γ ) ( 1 + μ I ) (203)

+ 2 ( 1 + μ F ) ( 1 + μ I ) ) ( α δ + β σ ( 1 + μ F ) ( 1 + μ I ) ( 1 + γ ) ( 2 + μ F + μ I ) ) 2 (204)

+ 4 ( ( 1 + γ ) 3 + lowerordertermsin ( 1 + γ ) ) ( α δ ( 1 + μ I ) + β σ ( 1 + μ F ) ( 1 + γ ) ( 1 + μ F ) ( 1 + μ I ) ) 205)

= ( 1 + γ ) 2 ( 1 + γ ) 2 ( 1 + μ F + 1 + μ I ) 2 4 ( 1 + γ ) 4 ( 1 + μ F ) ( 1 + μ I ) + lowerordertermsin ( 1 + γ ) (206)

and this is

( 1 + γ ) 4 ( ( 1 + μ F ) 2 + ( 1 + μ I ) 2 + 2 ( 1 + μ F ) ( 1 + μ I ) 4 ( 1 + μ F ) ( 1 + μ I ) ) + lowerordertermsin ( 1 + γ ) (207)

= ( 1 + γ ) 4 ( μ F μ I ) 2 + lowerordertermsin ( 1 + γ ) (208)

It follows that for γ large

Δ < 0 (209)

so that there are three real distinct roots, see Uspensky [20]. But

But

$$\big(A^2\big)_{13} = \beta(1+\gamma) + \beta(1+\mu_I) > 0 \quad (210)$$

when $\gamma$ is large. So maximal chemotherapy is not optimal for $N = 3$. In fact

$$x_3 = A^3x_0 + A^2(Bu_0+g) + A(Bu_1+g) + Bu_2 + g \quad (211)$$

So $u_0 = 0$, $u_1 = g_{I0}$ gives an optimal trajectory.
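The counterexample can be illustrated by brute force as well. The parameter values below are my own assumptions, chosen so that $(A^2)_{13} > 0$ as in (210) (note this requires $\beta > 0$ here, purely for illustration); the scan shows that the minimizing schedule does not use the maximal dose at $k = 0$:

```python
import numpy as np
from itertools import product

# Illustrative parameters (assumed) with mu_F != mu_I and beta chosen so
# that (A^2)_{13} = beta(1 + gamma) + beta(1 + mu_I) > 0, the situation of
# the counterexample in Equation (210).
gamma, mu_F, mu_I = 4.0, -0.3, -0.6
alpha, beta, delta, sigma = 0.2, 0.1, -0.4, -0.2
A = np.array([[1 + gamma, alpha,    beta    ],
              [delta,     1 + mu_F, 0.0     ],
              [sigma,     0.0,      1 + mu_I]])
g = np.array([0.1, 0.05, 0.2])
B = np.array([0.0, 0.0, 1.0])
gI0, N = 1.0, 3
x0 = np.array([1.0, 0.5, 0.3])

def terminal_C(controls):
    x = x0.copy()
    for u in controls:                    # x_{k+1} = A x_k + B u_k + g
        x = A @ x + B * u + g
    return x[0]

best = min(product([0.0, gI0], repeat=N), key=terminal_C)
print(best[0] == 0.0)   # True: no maximal dose at k = 0 in the minimizer
```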

5. Optimality of T When $\mu_F \neq \mu_I$

Consider the model T from the introduction

$$T(y) = Ay + g + ue_3 \quad (212)$$

where

$$A = \begin{pmatrix} 1+\gamma & \alpha & \beta \\ \delta & 1+\mu_F & 0 \\ \sigma & 0 & 1+\mu_I \end{pmatrix} \quad (213)$$

and

$$y = (C, G_F, G_I)^T, \quad g = (g_C, g_F, g_I)^T \in \mathbb{R}^3 \quad (214)$$

Assume (i): the eigenvalues $\lambda_+, \lambda_-, \tilde\lambda = 1+\mu$ of A are real and distinct when $\mu = \mu_F = \mu_I$.

Theorem 2 There exists a Euclidean open ball $B_\rho(\mu,\mu) \subset \mathbb{R}^2$, $\rho > 0$, such that for $(\mu_F, \mu_I) \in B_\rho(\mu,\mu)$ maximal chemotherapy is optimal.

Also let

$$D = \begin{pmatrix} (1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+) & (1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-) & \beta(1+\mu_F-\tilde\lambda) \\ -\delta(1+\mu_I-\lambda_+) & -\delta(1+\mu_I-\lambda_-) & -\delta\beta \\ -\sigma(1+\mu_F-\lambda_+) & -\sigma(1+\mu_F-\lambda_-) & \alpha\delta - (1+\gamma-\tilde\lambda)(1+\mu_F-\tilde\lambda) \end{pmatrix} \quad (215)$$

be a matrix whose columns are eigenvectors for the eigenvalues $\lambda_+, \lambda_-, \tilde\lambda$ of A. We have ($\delta \neq 0$)

$$\det(D)\big|_{\mu_F=\mu_I=\mu} = (1+\mu-\lambda_+)(1+\mu-\lambda_-)\begin{vmatrix} 1+\mu-\lambda_+ & 1+\mu-\lambda_- & 0 \\ -\delta & -\delta & -\beta\delta \\ -\sigma & -\sigma & \alpha\delta \end{vmatrix} \quad (216)$$

Expanding the determinant and using (80), this equals

$$(\lambda_+-\lambda_-)(\alpha\delta+\beta\sigma)^2\delta \quad (219)$$

Define the Hamiltonian

$$H(x_k, u_k, \lambda_k) = \lambda_k^Tf_k(x_k, u_k) = \lambda_k^T(Ax_k + g + Bu_k) \quad (220)$$

$k = 0, \ldots, N-1$, and

$$F(x) = (1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)x_1 + (1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)x_2 + \beta(1+\mu_F-\tilde\lambda)x_3 \quad (221)$$

We have

$$\lambda_{N-k-1} = (A^T)^k\lambda_{N-1} \quad (222)$$

In particular

$$\lambda_0 = (A^T)^{N-1}\lambda_{N-1} = (A^T)^{N-1}\frac{\partial F}{\partial x}(x_N^*) \quad (223)$$

Now consider the system

$$y_{k+1} = D^{-1}ADy_k + D^{-1}g + D^{-1}e_3u_k \quad (224)$$

which is conjugate to the $x_k$ system. Now observe that with

$$\tilde H(y_k, v_k, \zeta_k) = \zeta_k^T(\Lambda y_k + D^{-1}g + D^{-1}e_3v_k) \quad (225)$$

and

$$\Lambda = D^{-1}AD \quad (226)$$

we have that

$$\zeta_k = (\Lambda^T)^{N-k-1}\zeta_{N-1} \quad (227)$$

and

$$\zeta_0 = (\Lambda^T)^{N-1}\zeta_{N-1} \quad (228)$$

The condition

$$\tilde H(y_0^*, v_0^*, \zeta_0) \leq \tilde H(y_0^*, v_0, \zeta_0) \quad (229)$$

is equivalent to

$$\zeta_0^TD^{-1}Bv_0^* \leq \zeta_0^TD^{-1}Bv_0 \quad (230)$$

So if

$$\Delta = \zeta_0^TD^{-1}B < 0 \quad (231)$$

then maximal chemotherapy is optimal, by a computation like (157) to (163). Here

$$(\zeta_{N-1})^T = \frac{\partial F}{\partial x}(x_N^*)^T = \big((1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+),\ (1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-),\ \beta(1+\mu_F-\tilde\lambda)\big) \quad (232)$$

But this amounts to

$$\Delta = (1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\lambda_+^{N-1}\frac{D^{31}}{\det(D)} \quad (233)$$

$$+ (1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\lambda_-^{N-1}\frac{D^{32}}{\det(D)} \quad (234)$$

$$+ \beta(1+\mu_F-\tilde\lambda)\tilde\lambda^{N-1}\frac{D^{33}}{\det(D)} \quad (235)$$

where the $D^{ij}$ are cofactors of D. Here

$$D^{31} = (1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\delta\beta - \delta\beta(1+\mu_I-\lambda_-)(1+\mu_F-\tilde\lambda) \quad (236)$$

$$D^{32} = -(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\delta\beta + \delta\beta(1+\mu_I-\lambda_+)(1+\mu_F-\tilde\lambda) \quad (237)$$

$$D^{33} = (1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\delta(1+\mu_I-\lambda_-) - \delta(1+\mu_I-\lambda_+)(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-) \quad (238)$$

Inserting into (233) to (235), we obtain that $\Delta$ becomes

$$\frac{1}{\det(D)}(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\lambda_+^{N-1}\big((1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\delta\beta - \delta\beta(1+\mu_I-\lambda_-)(1+\mu_F-\tilde\lambda)\big) \quad (239)$$

$$+ \frac{1}{\det(D)}(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\lambda_-^{N-1}\big(-(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\delta\beta + \delta\beta(1+\mu_I-\lambda_+)(1+\mu_F-\tilde\lambda)\big) \quad (240)$$

$$+ \frac{1}{\det(D)}\beta(1+\mu_F-\tilde\lambda)\tilde\lambda^{N-1}\big((1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\delta(1+\mu_I-\lambda_-) \quad (241)$$

$$- \delta(1+\mu_I-\lambda_+)(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\big) \quad (242)$$

Now observe that when $\mu_F = \mu_I = \mu$

$$\Delta = \frac{1}{\det(D)}(1+\mu-\lambda_+)^2(1+\mu-\lambda_-)^2\beta\delta\big(\lambda_+^{N-1} - \lambda_-^{N-1}\big) \quad (243)$$

$$= \frac{\beta(\alpha\delta+\beta\sigma)^2}{(\lambda_+-\lambda_-)(\alpha\delta+\beta\sigma)^2}\big(\lambda_+^{N-1} - \lambda_-^{N-1}\big) \quad (244)$$

So maximal chemotherapy is optimal in this case, as we have seen above. Now compute

$$\Delta = \frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\big(\lambda_+^{N-1} - \lambda_-^{N-1}\big) \quad (245)$$

$$+ \frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\big(-(1+\mu_I-\lambda_-)\big)(1+\mu_F-\tilde\lambda)\big(\lambda_+^{N-1} - \tilde\lambda^{N-1}\big) \quad (246)$$

$$+ \frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)(1+\mu_I-\lambda_+)(1+\mu_F-\tilde\lambda)\big(\lambda_-^{N-1} - \tilde\lambda^{N-1}\big) \quad (247)$$

Notice that for $k = 0, \ldots, N-2$

$$c_k = \frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\big(\lambda_+^{N-k-1} - \lambda_-^{N-k-1}\big) < 0$$

when $\mu = \mu_F = \mu_I$, due to the assumptions. Observe also that (246) = 0 and (247) = 0 when $\mu = \mu_F = \mu_I$, because then $\tilde\lambda = 1+\mu$ and the factor $1+\mu_F-\tilde\lambda$ vanishes. Now take $\rho > 0$ such that, for $(\mu_F, \mu_I) \in B_\rho(\mu,\mu)$ and $k = 0, \ldots, N-2$,

$$\left|\frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)\big(-(1+\mu_I-\lambda_-)\big)(1+\mu_F-\tilde\lambda)\big(\lambda_+^{N-k-1} - \tilde\lambda^{N-k-1}\big)\right| < -\frac{c_k}{3} \quad (248)$$

and

$$\left|\frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)(1+\mu_I-\lambda_+)(1+\mu_F-\tilde\lambda)\big(\lambda_-^{N-k-1} - \tilde\lambda^{N-k-1}\big)\right| < -\frac{c_k}{3} \quad (249)$$

And finally

$$\frac{1}{\det(D)}\delta\beta(1+\mu_F-\lambda_+)(1+\mu_I-\lambda_+)(1+\mu_F-\lambda_-)(1+\mu_I-\lambda_-)\big(\lambda_+^{N-k-1} - \lambda_-^{N-k-1}\big) < \frac{2}{3}c_k \quad (250)$$

But then $\Delta < 0$, and maximal chemotherapy is optimal. We have used that the roots of a polynomial depend continuously on its coefficients, see [21]. For $k = N-1$ notice that $(1,0,0)B = 0$, so $C(N)$ does not depend on $u_{N-1}$ and $u_{N-1} = g_{I0}$ is optimal.

In case (ii), define the following U by computing

$$v_+ = \begin{pmatrix} (1+\mu_F-a-ib)(1+\mu_I-a-ib) \\ -\delta(1+\mu_I-a-ib) \\ -\sigma(1+\mu_F-a-ib) \end{pmatrix} \quad (251)$$

$$= \begin{pmatrix} (1+\mu_F-a)(1+\mu_I-a) - b^2 - i\big((1+\mu_I-a)b + (1+\mu_F-a)b\big) \\ -\delta(1+\mu_I-a) + i\delta b \\ -\sigma(1+\mu_F-a) + i\sigma b \end{pmatrix} \quad (252)$$

$$= v_1 + iv_2 \quad (253)$$

So define U, with columns $v_1$, $-v_2$ and an eigenvector for $\tilde\lambda$, to be

$$U = \begin{pmatrix} (1+\mu_F-a)(1+\mu_I-a) - b^2 & (1+\mu_I-a)b + (1+\mu_F-a)b & \beta(1+\mu_F-\tilde\lambda) \\ -\delta(1+\mu_I-a) & -\delta b & -\delta\beta \\ -\sigma(1+\mu_F-a) & -\sigma b & \alpha\delta - (1+\gamma-\tilde\lambda)(1+\mu_F-\tilde\lambda) \end{pmatrix} \quad (254)$$

Then

$$U^{-1}AU = \begin{pmatrix} a & -b & 0 \\ b & a & 0 \\ 0 & 0 & \tilde\lambda \end{pmatrix} \quad (255)$$

Define

$$\tilde F(x) = \big((1+\mu_F-a)(1+\mu_I-a) - b^2\big)x_1 + \big((1+\mu_I-a)b + (1+\mu_F-a)b\big)x_2 + \beta(1+\mu_F-\tilde\lambda)x_3 \quad (256)$$

given by the first row of U, so that $C(N) = \tilde F(x_N)$.

Theorem 3 Suppose (ii), i.e. the complex eigenvalues of A are $a \pm ib$, $b \neq 0$. If $N-k-1 = 2p+1$, $k = 0, \ldots, N-1$, and

$$\langle e_1, A^{N-k-1}B\rangle = \tilde F_x^T\begin{pmatrix} A_p & -B_p & 0 \\ B_p & A_p & 0 \\ 0 & 0 & \tilde\lambda^{2p+1} \end{pmatrix}U^{-1}B < 0 \quad (257)$$

let $v_k^* = g_{I0}$, and if the reverse inequality holds let $v_k^* = 0$. If $N-k-1 = 2p$, $k = 0, \ldots, N-1$, and

$$\langle e_1, A^{N-k-1}B\rangle = \tilde F_x^T\begin{pmatrix} C_p & -D_p & 0 \\ D_p & C_p & 0 \\ 0 & 0 & \tilde\lambda^{2p} \end{pmatrix}U^{-1}B < 0 \quad (258)$$

let $v_k^* = g_{I0}$, and if the reverse inequality holds let $v_k^* = 0$. Then $(x_k^*, v_k^*)$ is optimal.

Proof. Define the Hamiltonian

$$\tilde H(y_k, v_k, \zeta_k) = \zeta_k^T(U^{-1}AUy_k + U^{-1}g + U^{-1}Bv_k) \quad (259)$$

Then with

$$\tilde B = U^{-1}AU = \begin{pmatrix} a & -b & 0 \\ b & a & 0 \\ 0 & 0 & \tilde\lambda \end{pmatrix} \quad (260)$$

we find

$$\zeta_k = (\tilde B^T)^{N-k-1}\zeta_{N-1} = (\tilde B^T)^{N-k-1}\tilde F_x \quad (261)$$

So

$$\frac{\partial\tilde H}{\partial v_k} = \tilde F_x^T\tilde B^{N-k-1}U^{-1}B = \tilde F_x^T\begin{pmatrix} A_p & -B_p & 0 \\ B_p & A_p & 0 \\ 0 & 0 & \tilde\lambda^{2p+1} \end{pmatrix}U^{-1}B \quad (262)$$

when $N-k-1 = 2p+1$, $k = 0, \ldots, N-1$, and

$$\frac{\partial\tilde H}{\partial v_k} = \tilde F_x^T\begin{pmatrix} C_p & -D_p & 0 \\ D_p & C_p & 0 \\ 0 & 0 & \tilde\lambda^{2p} \end{pmatrix}U^{-1}B \quad (263)$$

when $N-k-1 = 2p$, $k = 0, \ldots, N-1$.

Optimality follows from a computation like (157) to (163).

Theorem 4 Suppose (i), i.e. the eigenvalues of A are real and distinct. If

$$\langle e_1, A^{N-k-1}B\rangle = F_x^T\begin{pmatrix} \lambda_+^{N-k-1} & 0 & 0 \\ 0 & \lambda_-^{N-k-1} & 0 \\ 0 & 0 & \tilde\lambda^{N-k-1} \end{pmatrix}D^{-1}B < 0 \quad (264)$$

let $v_k^* = g_{I0}$, and if the reverse inequality holds let $v_k^* = 0$. Then $(x_k^*, v_k^*)$ is optimal.

Proof. Define the Hamiltonian

$$\tilde H(y_k, v_k, \zeta_k) = \zeta_k^T(D^{-1}ADy_k + D^{-1}g + D^{-1}Bv_k) \quad (265)$$

Then with

$$\Lambda = D^{-1}AD \quad (266)$$

we get

$$\zeta_k = (\Lambda^T)^{N-k-1}\zeta_{N-1} = (\Lambda^T)^{N-k-1}F_x \quad (267)$$

Then the partial derivative of $\tilde H$ with respect to $v_k$ is

$$F_x^T\begin{pmatrix} \lambda_+^{N-k-1} & 0 & 0 \\ 0 & \lambda_-^{N-k-1} & 0 \\ 0 & 0 & \tilde\lambda^{N-k-1} \end{pmatrix}D^{-1}B \quad (268)$$

Optimality follows from a computation like (157) to (163).
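The strategy of Theorems 3 and 4 reduces to inspecting the sign of $\langle e_1, A^{N-k-1}B\rangle$ for each k. A minimal sketch (assumed parameters; the rule itself only uses A, B and N) checks it against a brute-force scan of bang-bang schedules:

```python
import numpy as np
from itertools import product

# Sketch of the dosing strategy of Theorem 4 (and Theorem 3): the dose at
# step k is maximal exactly when <e1, A^(N-k-1) B> < 0.  Parameter values
# are assumed for illustration only.
gamma, mu_F, mu_I = 0.5, -0.3, -0.4
alpha, beta, delta, sigma = 0.2, -0.1, -0.4, -0.2
A = np.array([[1 + gamma, alpha,    beta    ],
              [delta,     1 + mu_F, 0.0     ],
              [sigma,     0.0,      1 + mu_I]])
g = np.array([0.1, 0.05, 0.2])
B = np.array([0.0, 0.0, 1.0])
gI0, N = 1.0, 4
x0 = np.array([1.0, 0.5, 0.3])

def optimal_doses():
    return tuple(gI0 if (np.linalg.matrix_power(A, N - k - 1) @ B)[0] < 0
                 else 0.0 for k in range(N))

def terminal_C(controls):
    x = x0.copy()
    for u in controls:                    # x_{k+1} = A x_k + B u_k + g
        x = A @ x + B * u + g
    return x[0]

# The sign rule attains the brute-force minimum over bang-bang schedules.
brute = min(terminal_C(c) for c in product([0.0, gI0], repeat=N))
print(np.isclose(terminal_C(optimal_doses()), brute))  # True
```

Because $C(N)$ is affine in each $u_k$ with coefficient $\langle e_1, A^{N-k-1}B\rangle$, the sign rule is optimal over the whole interval $[0, g_{I0}]$, not just over the bang-bang corners.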

6. Summary

In the present paper we applied the discrete Pontryagin Minimum Principle to a discrete model T of cancer growth, and the Pontryagin Minimum Principle to an affine vector field that generates T. When $\mu = \mu_F = \mu_I$ and the eigenvalues of A are real and distinct, maximal chemotherapy is optimal for the discrete model, while this is not necessarily so when the eigenvalues of A are $1+\mu$ and $a \pm ib$, $b \neq 0$, $1+\mu > 0$.

For the affine vector field that generates T we have proven similar statements when $\mu = \mu_F = \mu_I$: maximal chemotherapy is optimal when the eigenvalues of A are real, positive and distinct, and this is not necessarily so when there are complex eigenvalues. We finally considered what happens in the discrete model when $\mu_F \neq \mu_I$. In particular we have derived an optimal strategy for chemo or immune therapy.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Larsen, J.C. (2019) Optimal Control of Cancer Growth. Applied Mathematics, 10, 173-195. https://doi.org/10.4236/am.2019.104014

References

  1. Larsen, J.C. (2016) Models of Cancer Growth. Journal of Applied Mathematics and Computing, 53, 615-643.

  2. Seierstad, A. and Sydsaeter, K. (1988) Optimal Control Theory with Economic Applications. North-Holland, Amsterdam.

  3. Goodwin, G. (2005) Constrained Control and Estimation. An Optimization Approach. Springer Verlag, Berlin.

  4. Swierniak, A., Kimmel, M. and Smieja, J. (2009) Mathematical Modelling as a Tool for Planning Anti-Cancer Therapy. European Journal of Pharmacology, 625, 108-121. https://doi.org/10.1016/j.ejphar.2009.08.041

  5. Laird, A.K. (1964) Dynamics of Tumour Growth. British Journal of Cancer, 18, 490-502. https://doi.org/10.1038/bjc.1964.55

  6. Adam, J.A. and Bellomo, N. (1997) A Survey of Models for Tumor-Immune System Dynamics. Birkhauser, Boston.

  7. Geha, R. and Notarangelo, L. (2012) Case Studies in Immunology: A Clinical Companion. Garland Science, Hamden.

  8. Marks, F., Klingmüller, U. and Müller-Decker, K. (2009) Cellular Signal Processing. Garland Science, Hamden.

  9. Molina-París, C. and Lythe, G. (2011) Mathematical Models and Immune Cell Biology. Springer Verlag, Berlin. https://doi.org/10.1007/978-1-4419-7725-0

  10. Murphy, K. (2012) Immunobiology. 8th Edition, Garland Science, Hamden.

  11. Rees, R.C. (2014) Tumor Immunology and Immunotherapy. Oxford University Press, Oxford.

  12. Larsen, J.C. (2016) Hopf Bifurcations in Cancer Models. JP Journal of Applied Mathematics, 14, 1-31.

  13. Larsen, J.C. (2017) A Mathematical Model of Adoptive T Cell Therapy. JP Journal of Applied Mathematics, 15, 1-33.

  14. Larsen, J.C. (2016) Fundamental Concepts in Dynamics.

  15. Larsen, J.C. (2017) The Bistability Theorem in a Cancer Model. International Journal of Biomathematics, 11, Article ID: 1850004.

  16. Larsen, J.C. (2016) The Bistability Theorem in a Model of Metastatic Cancer. Applied Mathematics, 7, 1183-1206. https://doi.org/10.4236/am.2016.710105

  17. Larsen, J.C. (2017) A Study on Multipeutics. Applied Mathematics, 8, 746-773. https://doi.org/10.4236/am.2017.85059

  18. Larsen, J.C. (2017) A Mathematical Model of Immunity. JP Journal of Applied Mathematics.

  19. Larsen, J.C. (2018) Models of Cancer Growth Revisited. Applied Mathematics, 9, Article ID: 84308.

  20. Uspensky, J.V. (1948) Theory of Equations. McGraw-Hill, New York.

  21. Alexanderian, A. (2013) On Continuous Dependence of Roots of Polynomials on Coefficients.