Open Journal of Optimization
Vol. 08, No. 01 (2019), Article ID: 90779, 6 pages
10.4236/ojop.2019.81003

Solution of Second-Order Ordinary Differential Equations via Simulated Annealing

Abdulazeez Bilesanmi1, Ashiribo Senapon Wusu2, Akinwale Lewis Olutimo2

1Department of General Studies, Petroleum Training Institute, Delta, Nigeria

2Department of Mathematics, Lagos State University, Lagos, Nigeria

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 1, 2018; Accepted: February 24, 2019; Published: February 27, 2019

ABSTRACT

In this paper, we approach the problem of obtaining approximate solutions of second-order initial value problems by converting them to optimization problems. It is assumed that the solution can be approximated by a polynomial, whose coefficients are then optimized using the simulated annealing technique. Numerical examples show the accuracy of the proposed approach compared with some existing methods.

Keywords:

Simulated Annealing, Second-Order Ordinary Differential Equation, Polynomial, Optimization

1. Introduction

The use of techniques based on evolutionary algorithms for solving optimization problems has been gaining interest over the last few years. These algorithms use mechanisms inspired by biological evolution, such as reproduction, recombination, mutation, and selection. Since the work of Isaac Newton and Gottfried Leibniz in the late 17th century, differential equations (DEs) have been an important concept in many branches of science. Differential equations arise in physics, engineering, chemistry, biology, economics and many other fields. The idea of solving DEs via evolutionary algorithms has recently been on the increase: approximate solutions are obtained by converting the equations to optimization problems, which are then solved via optimization techniques. The use of the classical genetic algorithm to obtain approximate solutions of second-order initial value problems was considered in [1]. In [2], the author computed approximate solutions of first-order initial value problems via a combination of the collocation method and genetic algorithms. In a later work [3], genetic algorithms were combined with the Nelder-Mead method for solving second-order initial value problems of the form $y'' = f(x, y)$. An adaptation of neural networks for the solution of second-order initial value problems was proposed by the authors in [4]. A continuous genetic algorithm was used to compute the solution of two-point second-order ordinary differential equations in [5]. An adaptation of the differential evolution algorithm for the solution of second-order initial value problems of the form $y'' + p(t)y' + q(t)y = r(t)$ was proposed in [6]. The authors in [7] considered the approach of using a differential evolution algorithm to obtain approximate solutions of second-order two-point boundary value problems $u'' = f(t, u)$; $u(a) = \eta_1$, $u(b) = \eta_2$ with oscillatory/periodic behaviour. In this paper, we show that the simulated annealing algorithm can also be used to find very accurate approximate solutions of second-order initial value problems of the form

$y'' = f(t, y); \quad y(t_0) = y_0, \quad y'(t_0) = y'_0, \quad t \in [a, b]$. (1)

2. Basic Notions of Simulated Annealing Algorithm

Simulated annealing is a simple stochastic function minimizer. It is motivated by the physical process of annealing, in which a metal object is heated to a high temperature and allowed to cool slowly. The process allows the atomic structure of the metal to settle to a lower energy state, thus becoming a tougher metal. In optimization terminology, annealing allows the structure to escape from a local minimum, and to explore and settle on a better, hopefully global, minimum.

At each iteration, a new point, $x_{\text{new}}$, is generated in the neighborhood of the current point, $x$. The radius of the neighborhood decreases with each iteration. The best point found so far, $x_{\text{best}}$, is also tracked.

If $f(x_{\text{new}}) \le f(x_{\text{best}})$, then $x_{\text{new}}$ replaces $x_{\text{best}}$ and $x$. Otherwise, $x_{\text{new}}$ replaces $x$ with probability $\exp(b(i, \Delta f, f_0))$. Here $b$ is the Boltzmann exponent (the exponent of the acceptance-probability function), $i$ is the current iteration, $\Delta f$ is the change in the objective-function value, and $f_0$ is the value of the objective function from the previous iteration. The default definition of $b$ is

$b(i, \Delta f, f_0) := -\dfrac{\Delta f \, \log(i + 1)}{10}$.

Simulated annealing uses multiple starting points and finds an optimum starting from each of them. The default number of starting points, given by the parameter SearchPoints, is $\min(2d, 50)$, where $d$ is the number of variables; here $d = 1$, since there is only one independent variable.
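To make the procedure concrete, the following is a minimal Python sketch (not the authors' implementation) of the scheme just described: a shrinking neighborhood, best-point tracking, multiple starting points, and the default Boltzmann acceptance rule. The test function, the $1/i$ radius decay, and the iteration budget are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, radius=1.0, max_iter=500, seed=0):
    """Minimize a scalar function f of one variable, starting from x0."""
    rng = random.Random(seed)
    x = x_best = x0
    f_x = f_best = f(x0)
    for i in range(1, max_iter + 1):
        # New point in a neighborhood whose radius shrinks with the iteration.
        x_new = x + rng.uniform(-radius, radius) / i
        f_new = f(x_new)
        if f_new <= f_best:
            # x_new replaces both x_best and x.
            x_best, f_best = x_new, f_new
            x, f_x = x_new, f_new
        else:
            # Otherwise x_new replaces x with probability exp(b(i, df, f0)),
            # using the default b(i, df, f0) = -df * log(i + 1) / 10.
            df = f_new - f_x
            if rng.random() < math.exp(-df * math.log(i + 1) / 10.0):
                x, f_x = x_new, f_new
    return x_best, f_best

# Multiple starting points; with d = 1 variable, min(2d, 50) = 2 points.
starts = [-2.0, 2.0]
best = min((simulated_annealing(lambda u: (u - 1.0) ** 2, s) for s in starts),
           key=lambda r: r[1])
print(best)  # approximately (1.0, 0.0)
```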

3. Proposed Method

Consider the second-order initial value problem (1) and assume a solution of the form

$y(t) = \sum_{i=0}^{k} \psi_i t^i, \quad k \in \mathbb{Z}^{+}$ (2)

where the $\psi_i$ are parameters to be determined. Substituting (2) and its second derivative into (1) gives

$\sum_{i=2}^{k} i(i-1)\,\psi_i t^{i-2} = f(t, y(t))$ (3)

Using the initial conditions, we have the constraints

$\left[\sum_{i=0}^{k} \psi_i t^i\right]_{t=t_0} = y_0 \quad \text{and} \quad \left[\sum_{i=1}^{k} i\,\psi_i t^{i-1}\right]_{t=t_0} = y'_0$ (4)

At each node point $t_n$, we require that

$E_n(t) = \left[\sum_{i=2}^{k} i(i-1)\,\psi_i t^{i-2} - f(t, y(t))\right]_{t=t_n} \approx 0$ (5)

To solve the above problem, we need to find the set $\{\psi_i \mid i = 0(1)k\}$ which minimizes the expression

$\sum_{n=1}^{(b-a)/h} E_n^2(t)$ (6)

where $h$ is the steplength. We now formulate the problem as an optimization problem:

Minimize: $\sum_{n=1}^{(b-a)/h} E_n^2(t)$ (7)

Subject to: $\left[\sum_{i=0}^{k} \psi_i t^i\right]_{t=t_0} = y_0$ and $\left[\sum_{i=1}^{k} i\,\psi_i t^{i-1}\right]_{t=t_0} = y'_0$ (8)

Using the simulated annealing algorithm, we are able to obtain the set $\{\psi_i \mid i = 0, 1, \ldots, k\}$ which minimizes (7) subject to (8).
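As an illustration of (7)-(8), the following Python sketch builds the residual sum (6) for the special case $t_0 = 0$, where the constraints (8) reduce to fixing $\psi_0 = y_0$ and $\psi_1 = y'_0$, and minimizes over the remaining coefficients. SciPy's dual_annealing is used here only as a stand-in for the paper's simulated-annealing routine; the choices of $k$, the coefficient bounds, and the steplength are assumptions for the example.

```python
import numpy as np
from scipy.optimize import dual_annealing

def make_objective(f, y0, dy0, a, b, h, k):
    """Return the objective (6) for y'' = f(t, y) with t0 = a = 0."""
    n = round((b - a) / h)
    t = a + h * np.arange(1, n + 1)          # node points t_1, ..., t_n
    i = np.arange(2, k + 1)
    def objective(free):                     # free = (psi_2, ..., psi_k)
        psi = np.concatenate(([y0, dy0], free))   # constraints (8) at t0 = 0
        y = np.polyval(psi[::-1], t)         # y(t_n) from (2)
        # E_n from (5): sum_i i(i-1) psi_i t^{i-2} - f(t, y)
        ypp = (i * (i - 1) * psi[2:] * t[:, None] ** (i - 2)).sum(axis=1)
        return np.sum((ypp - f(t, y)) ** 2)  # the sum (6) of E_n^2
    return objective

# Example 1 data: y'' = y - t - 1, y(0) = 2, y'(0) = 2, on [0, 1].
k = 6
obj = make_objective(lambda t, y: y - t - 1.0, 2.0, 2.0, 0.0, 1.0, 0.01, k)
res = dual_annealing(obj, bounds=[(-1.0, 1.0)] * (k - 1), seed=0)
print(res.x)  # should approach the Taylor coefficients 1/2, 1/6, 1/24, ...
```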

4. Numerical Experiments

We now perform some numerical experiments confirming the theoretical expectations regarding the proposed method. In this section, the proposed algorithm is compared with the Runge-Kutta-Nyström method. The parameters needed to implement the simulated annealing are set as follows (they are also collected in the sketch after this list):

Exponent of the probability function (Boltzmann Exponent = 1).

Set of initial points (Initial Points = 1000).

Maximum number of iterations to stay at a given point (Level Iterations = 50).

Scale for the random jump (Perturbation Scale = 1.0).

Starting value for the random number generator (Random Seed = 0).

Number of starting points (Search Points = 0).

Tolerance for accepting constraint violations (Tolerance = 0.000001).
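The option names above appear to mirror those of a standard simulated-annealing implementation (they match, for instance, the option names of Mathematica's built-in "SimulatedAnnealing" method), though the paper does not state which tool was used. For a custom routine such as the sketch in Section 2, the settings can simply be collected as data; the key names below are illustrative, not the API of any particular library.

```python
# The settings listed above, collected as a plain dictionary. Key names are
# assumptions chosen to mirror the paper's list.
sa_settings = {
    "boltzmann_exponent": 1,    # exponent of the acceptance probability
    "initial_points": 1000,     # set of initial points
    "level_iterations": 50,     # max iterations to stay at a given point
    "perturbation_scale": 1.0,  # scale for the random jump
    "random_seed": 0,           # starting value for the RNG
    "search_points": 0,         # number of starting points
    "tolerance": 1e-6,          # tolerance for constraint violations
}
```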

4.1. Example 1

We examine the following linear equation

$y''(t) - y(t) = -t - 1; \quad y(0) = 2, \quad y'(0) = 2$ (9)

with the exact solution $y(t) = 1 + t + \exp(t)$.

Implementing the proposed scheme with $k = 10$, we obtain $\{\psi_i \mid i = 0(1)10\}$ as

$\left\{2,\; 2,\; \frac{416995243}{834315644},\; \frac{105164777}{636059757},\; \frac{69800031}{1811256752},\; \frac{6535788}{1275358859},\; \ldots\right\}$.

Using a steplength of $h = 0.01$, the absolute errors obtained by the proposed algorithm are compared with those produced by the well-known Runge-Kutta-Nyström method in Table 1. The comparison shows that our approach gives better results than the Runge-Kutta-Nyström method.
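As a quick sanity check, the reported coefficients can be evaluated against the exact solution $y(t) = 1 + t + \exp(t)$. Note that, as printed, the coefficient list appears truncated (a degree-10 polynomial has 11 coefficients), so the residual below reflects both the fit and the missing higher-order terms; the sketch assumes the printed values are exact.

```python
# Evaluate the reported (truncated) Example-1 coefficient list against the
# exact solution y(t) = 1 + t + exp(t) on [0, 1].
import numpy as np

psi = np.array([2.0, 2.0, 416995243/834315644, 105164777/636059757,
                69800031/1811256752, 6535788/1275358859])
t = np.linspace(0.0, 1.0, 101)
y_approx = np.polyval(psi[::-1], t)   # polyval expects highest degree first
y_exact = 1.0 + t + np.exp(t)
print(np.abs(y_approx - y_exact).max())
```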

4.2. Example 2

Consider the equation

$y''(t) = (1 + t^2)\,y(t); \quad y(0) = 1, \quad y'(0) = 0$ (10)

with the exact solution $y(t) = \exp\left(\frac{t^2}{2}\right)$.

Implementing the proposed scheme with $k = 11$, we obtain $\{\psi_i \mid i = 0(1)11\}$ as

$\left\{1,\; 0,\; \frac{1306409430}{2612828131},\; \frac{29397245}{1713857114397},\; \frac{3187969586}{25524507753},\; \frac{172091099}{436085951591},\; \frac{313833621}{15857243966},\; \frac{382909153}{201794651238},\; \frac{117010789}{496614906383},\; \frac{766119929}{389265107664},\; \frac{130287162}{172534329575},\; \frac{125796527}{456658410146}\right\}$.

Table 1. Absolute errors of $y(t)$ for problem (9): the proposed scheme compared with the Runge-Kutta-Nyström method.

Table 2. Absolute errors of $y(t)$ for problem (10): the proposed scheme compared with the Runge-Kutta-Nyström method.

Table 2 shows the absolute errors of the results obtained by our algorithm compared with those of the Runge-Kutta-Nyström method. Again, our approach gave smaller absolute errors.

5. Conclusion

In this paper, we have shown how the problem of obtaining an approximate solution to (1) can be converted to an optimization problem and then solved using simulated annealing. The results obtained compete favourably with those of the Runge-Kutta-Nyström method.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Bilesanmi, A., Wusu, A.S. and Olutimo, A.L. (2019) Solution of Second-Order Ordinary Differential Equations via Simulated Annealing. Open Journal of Optimization, 8, 32-37. https://doi.org/10.4236/ojop.2019.81003

References

1. George, D.M. (2006) On the Application of Genetic Algorithms to Differential Equations. Romanian Journal of Economic Forecasting, 3, 5-9.

2. Mastorakis, N.E. (2005) Numerical Solution of Non-Linear Ordinary Differential Equations via Collocation Method (Finite Elements) and Genetic Algorithms. Proceedings of the 6th WSEAS International Conference on Evolutionary Computing, Lisbon, 16-18 June 2005, 36-42.

3. Mastorakis, N.E. (2006) Unstable Ordinary Differential Equations: Solution via Genetic Algorithms and the Method of Nelder-Mead. Proceedings of the 6th WSEAS International Conference on Systems Theory & Scientific Computation, Elounda, 21-23 August 2006, 1-6.

4. Junaid, A., Raja, A.Z. and Qureshi, I.M. (2009) Evolutionary Computing Approach for the Solution of Initial Value Problems in Ordinary Differential Equations. World Academy of Science, Engineering and Technology, 55, 578-581.

5. Omar, A.A., Zaer, A., Shaher, M. and Nabil, S. (2012) Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm. Abstract and Applied Analysis, 2012, Article ID: 205391.

6. Bakre, O.F., Wusu, A.S. and Akanbi, M.A. (2015) Solving Ordinary Differential Equations with Evolutionary Algorithms. Open Journal of Optimization, 4, 69-73. https://doi.org/10.4236/ojop.2015.43009

7. Wusu, A.S. and Akanbi, M.A. (2016) Solving Oscillatory/Periodic Ordinary Differential Equations with Differential Evolution Algorithms. Communications in Optimization Theory, 2016, Article ID: 7.