American Journal of Operations Research
Vol. 7, No. 5 (2017), Article ID: 78659, 9 pages
10.4236/ajor.2017.75018

Optimal Designs Technique for Locating the Optimum of a Second Order Response Function

Idorenyin Etukudo

Department of Mathematics & Statistics, Akwa Ibom State University, Ikot Akpaden, Mkpat Enin, Akwa Ibom State, Nigeria

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: June 2, 2017; Accepted: August 20, 2017; Published: August 23, 2017

ABSTRACT

A more efficient method of locating the optimum of a second order response function was of interest in this work. In order to do this, the principles of optimal designs of experiment are invoked and used for this purpose. It was found that the noticeable pitfall in response surface methodology (RSM) is circumvented by this method, since the step length is obtained by taking the derivative of the response function rather than by intuition or trial and error as is the case in RSM. A numerical illustration shows that this method obtains the desired optimizer in just one move, which compares favourably with other known methods such as the Newton-Raphson method, which requires more than one iteration to reach the optimizer.

Keywords:

Optimal Designs of Experiment, Unconstrained Optimization, Response Surface Methodology, Modified Super Convergent Line Series Algorithm, Newton-Raphson Method

1. Introduction

The problem of locating the optimum of a second order response function has already been addressed by a method known as response surface methodology (RSM). RSM is simply a collection of mathematical and statistical techniques useful for analyzing problems in which several independent variables influence a dependent variable or response. The main objective here is to determine the optimum operating conditions for the system or to determine a region of the factor space in which operating requirements are satisfied [1]. See also [2] [3] [4] [5] and [6]. For instance, the interest of a chemical engineer may lie in optimizing a process yield that is influenced by two variables, reaction time, x_1, and reaction temperature, x_2. The observed response can be represented as a function of the two independent variables as

y = f(x_1, x_2) + ϵ (1)

where ϵ is the random error term while the expected response function is

η = E(y) = f(x_1, x_2) (2)

When the mathematical form of Equation (2) is not known, the expected response function can be approximated within the experimental region by a first order or a second order response function [7] .

According to [1], the initial estimate of the optimum operating conditions for the system is frequently far from the actual optimum. When this happens, the objective of the experimenter is to move rapidly to the general vicinity of the optimum, and the actual step size or step length is determined by the experimenter based on experience. Determining the step length that could guarantee rapid movement to the vicinity of the optimum by experience or trial and error is a pitfall. In order to advance the existing RSM procedure, [8] proposed a modification which utilizes a fusion of the Newton-Raphson and mean-centre algorithms for obtaining the optimum and for exploring near-optimal settings within the optimal region. The problem with this modification is that it retains over 90% of the steps of the previous method and then introduces several other steps, thereby increasing computer time and computer storage space, only to obtain a selection of near-optimal factor settings which is iterative in nature. In order to circumvent this pitfall, this article solves the problem by making use of the principles of optimal designs of experiment. By designing an experiment optimally, we mean selecting N support points within the experimental region so that the aim of the experimenter can be realized. Unlike RSM, where the step length is obtained by trial and error, [9] had already modified an algorithm by [10] to solve unconstrained optimization problems using the principles of optimal designs of experiment, where the step length is obtained by taking the derivative of the response function. In [9], a well-defined method of handling interaction effects in the case of quadratic surfaces was also provided. Since this new technique is a line search algorithm, it relies on a well-defined method of determining the direction of search as given by [11]. The algorithmic procedure, which is given in the next section, requires that the optimal support points that form the initial design matrix obtained from the entire experimental region be partitioned into r groups, r = 2, 3, ..., k. However, [12] has shown that optimal solutions are obtained with r = 2. This method of locating the optimum of a second order response function is an exact solution method, as against the iterative solution methods used in RSM and other traditional methods.

2. The Algorithm

The sequential steps involved in this algorithm are given below:

Initialization: Let the second order response function, f(x), be defined as

f(x) = a_0 + a_1 x_1 + a_2 x_2 + b_1 x_1^2 + b_2 x_1 x_2 + b_3 x_2^2

Select N support points such that

3r ≤ N ≤ 4r or 6 ≤ N ≤ 8

where r = 2 is the number of partitioned groups and by choosing N arbitrarily, make an initial design matrix

X = \begin{bmatrix} 1 & x_{11} & x_{12} \\ 1 & x_{21} & x_{22} \\ \vdots & \vdots & \vdots \\ 1 & x_{N1} & x_{N2} \end{bmatrix}

Step 1: Compute the optimal starting point,

x_1^* = \sum_{m=1}^{N} u_m^* x_m^T, \quad u_m^* > 0

\sum_{m=1}^{N} u_m^* = 1

u_m^* = \frac{a_m^{-1}}{\sum_{m=1}^{N} a_m^{-1}}, \quad m = 1, 2, \ldots, N

a_m = x_m x_m^T, \quad m = 1, 2, \ldots, N
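
For readers who prefer code, the short sketch below is one possible NumPy rendering of Step 1; the function name and the choice to drop the constant coordinate of the weighted sum are mine and are not part of the original algorithm statement.

```python
import numpy as np

def optimal_starting_point(X):
    """Step 1: weights u_m* and optimal starting point x1* from an N x 3 design matrix.

    Each row of X is a support point (1, x_m1, x_m2); a_m = x_m x_m^T is the squared
    norm of the m-th row, and the weights are the normalised reciprocals
    u_m* = a_m^{-1} / sum_m a_m^{-1}, so that u_m* > 0 and sum_m u_m* = 1.
    """
    a = np.sum(X ** 2, axis=1)            # a_m = x_m x_m^T
    u = (1.0 / a) / np.sum(1.0 / a)       # u_m*
    x1_star = X.T @ u                     # sum_m u_m* x_m^T (first entry is ~1)
    return u, x1_star[1:]                 # drop the constant coordinate
```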

Step 2: Partition X into r = 2 groups and calculate

1) M_i = X_i^T X_i, \quad i = 1, 2

2) M_i^{-1}

Step 3: Calculate the following:

1) The matrices of the interaction effect of the variables, X_{1I} and X_{2I}

2) Interaction vector of the response parameter,

g = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

3) Interaction vectors for the groups are

I_i = M_i^{-1} X_i^T X_{iI} g

4) Matrices of mean square error for the groups are

\bar{M}_i = M_i^{-1} + I_i I_i^T

5) The Hessian matrices, H_i, and the normalized Hessian matrices, H_i^*

6) The average information matrix, M(\xi_N)
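
Steps 2 and 3 are purely matrix computations once the design has been partitioned. The sketch below is a hedged NumPy rendering under the assumptions that r = 2 and that the H_i are built elementwise from the diagonals of the mean square error matrices, as in the numerical illustration of Section 3; the function and variable names are mine.

```python
import numpy as np

def average_information_matrix(X1, X2, g):
    """Steps 2-3: from the two design groups X1, X2 (each n x 3, first column 1)
    and the interaction vector g = (b1, b2, b3), build the average information
    matrix M(xi_N)."""
    def group_quantities(Xi):
        Mi = Xi.T @ Xi                                   # information matrix
        Mi_inv = np.linalg.inv(Mi)
        # interaction-effect matrix: columns x1^2, x1*x2, x2^2 for each support point
        XiI = np.column_stack((Xi[:, 1] ** 2, Xi[:, 1] * Xi[:, 2], Xi[:, 2] ** 2))
        Ii = Mi_inv @ Xi.T @ XiI @ g                     # interaction vector
        Mi_bar = Mi_inv + np.outer(Ii, Ii)               # mean square error matrix
        return Mi, Mi_bar

    M1, M1_bar = group_quantities(X1)
    M2, M2_bar = group_quantities(X2)

    # convex-combination coefficients from the diagonals of the MSE matrices
    d1, d2 = np.diag(M1_bar), np.diag(M2_bar)
    h1 = d1 / (d1 + d2)
    h2 = 1.0 - h1
    # normalise so that H1*^2 + H2*^2 = I on the diagonal
    scale = np.sqrt(h1 ** 2 + h2 ** 2)
    H1s, H2s = np.diag(h1 / scale), np.diag(h2 / scale)

    return H1s @ M1 @ H1s + H2s @ M2 @ H2s               # average information matrix
```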

Step 4: Obtain the response vector, z, and the direction vector, d.

Normalize d to obtain d^*.

Step 5: Make a move to the point

x_2^* = x_1^* - \rho_1 d^*

for a minimization problem or

x_2^* = x_1^* + \rho_1 d^*

for a maximization problem, where \rho_1 is the step length obtained from

\frac{d f(x_2^*)}{d \rho_1} = 0

Step 6: Termination criterion. Is | f(x_2^*) - f(x_1^*) | < ε, where ε = 0.0001?

1) Yes. Stop and set x_2^* = x_{min} or x_{max}, as the case may be.

2) No. Replace x_1^* by x_2^* and return to Step 5. If \rho_2 \cong 0, then implement Step 6(1).
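
Steps 4 to 6 can likewise be sketched in a few lines. Because f is a second order response function, φ(ρ) = f(x^* - ρ d^*) is an exact quadratic in ρ, so the step length that solves dφ/dρ = 0 can be obtained from three function evaluations instead of a symbolic derivative. The three-point fit and the loop structure below are my own illustrative choices, assuming the minimization form of Step 5 and that the response vector z is supplied as in Step 4.

```python
import numpy as np

def exact_step_length(f, x, d):
    """Solve d f(x - rho d) / d rho = 0 exactly for a quadratic response f.

    phi(rho) = f(x - rho d) is a quadratic a*rho^2 + b*rho + c, so evaluations
    at rho = -1, 0, 1 determine a and b, and the stationary point is -b / (2a)
    (assumes d is not a direction of zero curvature)."""
    phi_m, phi_0, phi_p = f(x + d), f(x), f(x - d)
    a = (phi_p + phi_m - 2.0 * phi_0) / 2.0
    b = (phi_p - phi_m) / 2.0
    return -b / (2.0 * a)

def locate_optimum(f, x1_star, M_xi, z, eps=1e-4):
    """Steps 4-6 for the minimisation form x2* = x1* - rho d*."""
    d = np.linalg.solve(M_xi, z)              # d = M^{-1}(xi_N) z
    d_star = d[1:] / np.linalg.norm(d[1:])    # normalised direction (d0 dropped)
    x = np.asarray(x1_star, dtype=float)
    while True:
        rho = exact_step_length(f, x, d_star)
        x_new = x - rho * d_star              # Step 5 (minimisation)
        if abs(f(x_new) - f(x)) < eps:        # Step 6 termination test
            return x_new
        x = x_new
```

Run with the quantities of Section 3, this loop should stop after the second pass at essentially the same optimizer reported there.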

3. Numerical Illustration

In this section, we give a numerical illustration of the optimal designs technique for locating the optimum of a second order response function.

min f(x) = (x_1 - 1)^2 + (x_2 - 2)^2 by the optimal designs technique.

Solution

Initialization: Given the response function, f(x) = (x_1 - 1)^2 + (x_2 - 2)^2, select N support points such that

3r ≤ N ≤ 4r or 6 ≤ N ≤ 8

where r = 2 is the number of partitioned groups and by choosing N arbitrarily, make an initial design matrix

X = \begin{bmatrix} 1 & -2 & 1.5 \\ 1 & -1 & 1 \\ 1 & -0.5 & 0.5 \\ 1 & 0 & 0 \\ 1 & 0.5 & -0.5 \\ 1 & 2 & -1 \end{bmatrix}

Step 1: Compute the optimal starting point,

x_1^* = \sum_{m=1}^{6} u_m^* x_m^T, \quad u_m^* > 0

\sum_{m=1}^{6} u_m^* = 1

u_m^* = \frac{a_m^{-1}}{\sum_{m=1}^{6} a_m^{-1}}, \quad m = 1, 2, \ldots, 6

a_m = x_m x_m^T, \quad m = 1, 2, \ldots, 6

a_1 = x_1 x_1^T = \begin{bmatrix} 1 & -2 & 1.5 \end{bmatrix} \begin{bmatrix} 1 & -2 & 1.5 \end{bmatrix}^T = 7.25, \quad a_1^{-1} = 0.1379

a_2 = x_2 x_2^T = \begin{bmatrix} 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 & 1 \end{bmatrix}^T = 3, \quad a_2^{-1} = 0.3333

a_3 = x_3 x_3^T = \begin{bmatrix} 1 & -0.5 & 0.5 \end{bmatrix} \begin{bmatrix} 1 & -0.5 & 0.5 \end{bmatrix}^T = 1.5, \quad a_3^{-1} = 0.6667

a_4 = x_4 x_4^T = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T = 1, \quad a_4^{-1} = 1.0000

a_5 = x_5 x_5^T = \begin{bmatrix} 1 & 0.5 & -0.5 \end{bmatrix} \begin{bmatrix} 1 & 0.5 & -0.5 \end{bmatrix}^T = 1.5, \quad a_5^{-1} = 0.6667

a_6 = x_6 x_6^T = \begin{bmatrix} 1 & 2 & -1 \end{bmatrix} \begin{bmatrix} 1 & 2 & -1 \end{bmatrix}^T = 6, \quad a_6^{-1} = 0.1667

\sum_{m=1}^{6} a_m^{-1} = 2.9713

Since

u_m^* = \frac{a_m^{-1}}{\sum_{m=1}^{6} a_m^{-1}}, \quad m = 1, 2, \ldots, 6

then

u_1^* = \frac{0.1379}{2.9713} = 0.0464, \quad u_2^* = \frac{0.3333}{2.9713} = 0.1122, \quad u_3^* = \frac{0.6667}{2.9713} = 0.2244,

u_4^* = \frac{1.0000}{2.9713} = 0.3366, \quad u_5^* = \frac{0.6667}{2.9713} = 0.2244, \quad u_6^* = \frac{0.1667}{2.9713} = 0.0561

Hence, the optimal starting point is

x_1^* = \sum_{m=1}^{6} u_m^* x_m^T = 0.0464 \begin{bmatrix} 1 \\ -2 \\ 1.5 \end{bmatrix} + 0.1122 \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} + 0.2244 \begin{bmatrix} 1 \\ -0.5 \\ 0.5 \end{bmatrix} + 0.3366 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + 0.2244 \begin{bmatrix} 1 \\ 0.5 \\ -0.5 \end{bmatrix} + 0.0561 \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 1.0001 \\ -0.0928 \\ 0.1257 \end{bmatrix}

That is, dropping the constant coordinate,

x_1^* = \begin{bmatrix} -0.0928 \\ 0.1257 \end{bmatrix}
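
These figures can be checked in a few lines; the sketch below assumes the signed design matrix reconstructed above.

```python
import numpy as np

X = np.array([[1, -2.0, 1.5], [1, -1.0, 1.0], [1, -0.5, 0.5],
              [1,  0.0, 0.0], [1,  0.5, -0.5], [1,  2.0, -1.0]])
a = np.sum(X ** 2, axis=1)          # a_m: 7.25, 3, 1.5, 1, 1.5, 6
u = (1.0 / a) / np.sum(1.0 / a)     # u_m*: 0.0464, 0.1122, 0.2244, 0.3366, 0.2244, 0.0561
print(np.round(X.T @ u, 4))         # approximately [1, -0.0928, 0.1257]
```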

Step 2: Partitioning X into 2 groups of equal number of support points, we obtain the design matrices,

X_1 = \begin{bmatrix} 1 & -2 & 1.5 \\ 1 & -1 & 1 \\ 1 & -0.5 & 0.5 \end{bmatrix} \quad \text{and} \quad X_2 = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0.5 & -0.5 \\ 1 & 2 & -1 \end{bmatrix}

The respective information matrices are

M_1 = X_1^T X_1 = \begin{bmatrix} 3 & -3.5 & 3 \\ -3.5 & 5.25 & -4.25 \\ 3 & -4.25 & 3.5 \end{bmatrix} \quad \text{and} \quad M_2 = X_2^T X_2 = \begin{bmatrix} 3 & 2.5 & -1.5 \\ 2.5 & 4.25 & -2.25 \\ -1.5 & -2.25 & 1.25 \end{bmatrix}

and their inverses are

M_1^{-1} = \begin{bmatrix} 5 & -8 & -14 \\ -8 & 24 & 36 \\ -14 & 36 & 56 \end{bmatrix} \quad \text{and} \quad M_2^{-1} = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 6 & 12 \\ 3 & 12 & 26 \end{bmatrix}

Step 3: Calculate the following:

1) The matrices of the interaction effect of the variables for the groups are

X_{1I} = \begin{bmatrix} x_{11}^2 & x_{11} x_{12} & x_{12}^2 \\ x_{21}^2 & x_{21} x_{22} & x_{22}^2 \\ x_{31}^2 & x_{31} x_{32} & x_{32}^2 \end{bmatrix} = \begin{bmatrix} 4 & -3 & 2.25 \\ 1 & -1 & 1 \\ 0.25 & -0.25 & 0.25 \end{bmatrix}

X_{2I} = \begin{bmatrix} x_{41}^2 & x_{41} x_{42} & x_{42}^2 \\ x_{51}^2 & x_{51} x_{52} & x_{52}^2 \\ x_{61}^2 & x_{61} x_{62} & x_{62}^2 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0.25 & -0.25 & 0.25 \\ 4 & -2 & 1 \end{bmatrix}

2) Interaction vector of the response parameter: expanding f(x) = (x_1 - 1)^2 + (x_2 - 2)^2 = 5 - 2x_1 - 4x_2 + x_1^2 + x_2^2 shows that the second order coefficients are b_1 = 1, b_2 = 0 and b_3 = 1, so that

g = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}

3) Interaction vectors for the groups are

I_1 = M_1^{-1} X_1^T X_{1I} g = \begin{bmatrix} -1 \\ -5.5 \\ -2.5 \end{bmatrix}

I_2 = M_2^{-1} X_2^T X_{2I} g = \begin{bmatrix} 0 \\ 4 \\ 3 \end{bmatrix}

4) Matrices of mean square error for the groups are

\bar{M}_1 = M_1^{-1} + I_1 I_1^T = \begin{bmatrix} 6 & -2.5 & -11.5 \\ -2.5 & 54.25 & 49.75 \\ -11.5 & 49.75 & 62.25 \end{bmatrix}

\bar{M}_2 = M_2^{-1} + I_2 I_2^T = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 22 & 24 \\ 3 & 24 & 35 \end{bmatrix}

5) Matrices of coefficient of convex combinations of the matrices of mean square error are

H_1 = \mathrm{diag} \left\{ \frac{6}{6+1}, \frac{54.25}{54.25+22}, \frac{62.25}{62.25+35} \right\} = \mathrm{diag} \{ 0.8571, 0.7115, 0.6401 \}

H_2 = I - H_1 = \mathrm{diag} \{ 0.1429, 0.2885, 0.3599 \}

and by normalizing H_i such that \sum H_i^* H_i^{*T} = I, we have

H_1^* = \mathrm{diag} \left\{ \frac{h_{11}}{\sqrt{\sum_i h_{i1}^2}}, \frac{h_{12}}{\sqrt{\sum_i h_{i2}^2}}, \frac{h_{13}}{\sqrt{\sum_i h_{i3}^2}} \right\} = \mathrm{diag} \left\{ \frac{0.8571}{\sqrt{0.8571^2 + 0.1429^2}}, \frac{0.7115}{\sqrt{0.7115^2 + 0.2885^2}}, \frac{0.6401}{\sqrt{0.6401^2 + 0.3599^2}} \right\} = \mathrm{diag} \{ 0.9864, 0.9267, 0.8717 \}

H_2^* = \mathrm{diag} \left\{ \frac{0.1429}{\sqrt{0.8571^2 + 0.1429^2}}, \frac{0.2885}{\sqrt{0.7115^2 + 0.2885^2}}, \frac{0.3599}{\sqrt{0.6401^2 + 0.3599^2}} \right\} = \mathrm{diag} \{ 0.1645, 0.3758, 0.4901 \}

6) The average information matrix is

M(\xi_N) = \sum H_i^* M_i H_i^{*T} = \begin{bmatrix} 3.0001 & -3.0448 & 2.4586 \\ -3.0448 & 5.1088 & -3.8476 \\ 2.4586 & -3.8476 & 2.9598 \end{bmatrix}
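
The whole chain from the partitioned design to the average information matrix can be verified numerically. The sketch below assumes the signed group matrices reconstructed above; the helper name is mine, and the result agrees with the matrix above to about three decimal places because the paper works with rounded intermediate values.

```python
import numpy as np

X1 = np.array([[1, -2.0, 1.5], [1, -1.0, 1.0], [1, -0.5, 0.5]])
X2 = np.array([[1,  0.0, 0.0], [1,  0.5, -0.5], [1,  2.0, -1.0]])
g = np.array([1.0, 0.0, 1.0])

def group(Xi):
    Mi = Xi.T @ Xi
    Mi_inv = np.linalg.inv(Mi)
    XiI = np.column_stack((Xi[:, 1] ** 2, Xi[:, 1] * Xi[:, 2], Xi[:, 2] ** 2))
    Ii = Mi_inv @ Xi.T @ XiI @ g
    return Mi, Mi_inv + np.outer(Ii, Ii)        # information and MSE matrices

M1, M1_bar = group(X1)
M2, M2_bar = group(X2)
h1 = np.diag(M1_bar) / (np.diag(M1_bar) + np.diag(M2_bar))   # 0.8571, 0.7115, 0.6401
h2 = 1.0 - h1
s = np.sqrt(h1 ** 2 + h2 ** 2)
H1s, H2s = np.diag(h1 / s), np.diag(h2 / s)
M_xi = H1s @ M1 @ H1s + H2s @ M2 @ H2s
print(np.round(M_xi, 4))
```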

Step 4: Obtain the response vector

z = \begin{bmatrix} z_0 \\ z_1 \\ z_2 \end{bmatrix}

where

z_0 = f(-3.0448, 2.4586) = 16.5707

z_1 = f(5.1088, 3.8476) = 11.0346

z_2 = f(-3.8476, 2.9598) = 24.4204

and hence, the direction vector

d = \begin{bmatrix} d_0 \\ d_1 \\ d_2 \end{bmatrix} = M^{-1}(\xi_N) z = \begin{bmatrix} -86.2355 \\ 521.3935 \\ 757.6757 \end{bmatrix}

and by normalizing d such that d^T d = 1, we have

d^* = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} \dfrac{521.3935}{\sqrt{521.3935^2 + 757.6757^2}} \\ \dfrac{757.6757}{\sqrt{521.3935^2 + 757.6757^2}} \end{bmatrix} = \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix}
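
The direction vector can be checked by solving M(ξ_N) d = z rather than by inverting explicitly. A small sketch, assuming the signed average information matrix reconstructed above and the printed response vector; small discrepancies in d come from the four-decimal rounding of the inputs, while d^* is essentially unaffected.

```python
import numpy as np

M_xi = np.array([[ 3.0001, -3.0448,  2.4586],
                 [-3.0448,  5.1088, -3.8476],
                 [ 2.4586, -3.8476,  2.9598]])
z = np.array([16.5707, 11.0346, 24.4204])
d = np.linalg.solve(M_xi, z)              # close to (-86.2355, 521.3935, 757.6757)
d_star = d[1:] / np.linalg.norm(d[1:])    # ~ (0.5669, 0.8238)
print(np.round(d, 1), np.round(d_star, 4))
```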

Step 5: Obtain the step length, \rho_1, from

x_2^* = x_1^* - \rho_1 d^* = \begin{bmatrix} -0.0928 \\ 0.1257 \end{bmatrix} - \rho_1 \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix} = \begin{bmatrix} -0.0928 - 0.5669 \rho_1 \\ 0.1257 - 0.8238 \rho_1 \end{bmatrix}

That is,

f(x_2^*) = f(-0.0928 - 0.5669 \rho_1, 0.1257 - 0.8238 \rho_1) = 4.7072 + 4.3271 \rho_1 + \rho_1^2

and

\frac{d f(x_2^*)}{d \rho_1} = 4.3271 + 2 \rho_1 = 0

Hence,

\rho_1 = -2.1636

and by making a move to the next point, we have

x_2^* = \begin{bmatrix} -0.0928 \\ 0.1257 \end{bmatrix} + 2.1636 \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix} = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix}
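
The step length and the resulting move can be reproduced directly from f. A short sketch, assuming the reconstructed starting point and direction reported above; since φ(ρ) = f(x_1^* - ρ d^*) is an exact quadratic, three evaluations determine it and dφ/dρ = 0 has a closed-form root, as in the Section 2 sketch.

```python
import numpy as np

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x1 = np.array([-0.0928, 0.1257])
d_star = np.array([0.5669, 0.8238])

phi = lambda rho: f(x1 - rho * d_star)          # quadratic in rho
a = (phi(1.0) + phi(-1.0) - 2.0 * phi(0.0)) / 2.0
b = (phi(1.0) - phi(-1.0)) / 2.0                # equals 4.3271 here
rho1 = -b / (2.0 * a)                           # ~ -2.1636
x2 = x1 - rho1 * d_star                         # ~ (1.1337, 1.9081)
print(round(rho1, 4), np.round(x2, 4), round(f(x2), 4))   # f(x2) ~ 0.0263
```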

Step 6: Since | f(x_2^*) - f(x_1^*) | = | 0.0263 - 4.7072 | = 4.6809 > ε, we make another move and replace x_1^* by x_2^*.

The new step length, \rho_2, is obtained as follows:

x_3^* = x_2^* - \rho_2 d^* = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix} - \rho_2 \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix} = \begin{bmatrix} 1.1337 - 0.5669 \rho_2 \\ 1.9081 - 0.8238 \rho_2 \end{bmatrix}

That is,

f(x_3^*) = f(1.1337 - 0.5669 \rho_2, 1.9081 - 0.8238 \rho_2) = 0.0263 - 0.0002 \rho_2 + \rho_2^2

\frac{d f(x_3^*)}{d \rho_2} = 2 \rho_2 - 0.0002 = 0

which gives

\rho_2 \cong 0

Since the new step length, \rho_2, is approximately zero, an optimizer has been located at the first move and hence

x_2^* = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix} \quad \text{and} \quad f(x_2^*) = 0.0263
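
Finally, the termination step can be confirmed by repeating the exact line search from x_2^*: the next step length is essentially zero, so no further progress is possible along d^*. A minimal check, assuming the values obtained above:

```python
import numpy as np

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x2 = np.array([1.1337, 1.9081])
d_star = np.array([0.5669, 0.8238])

phi = lambda rho: f(x2 - rho * d_star)
a = (phi(1.0) + phi(-1.0) - 2.0 * phi(0.0)) / 2.0
b = (phi(1.0) - phi(-1.0)) / 2.0
rho2 = -b / (2.0 * a)
print(round(rho2, 4), round(f(x2), 4))   # ~ 0.0001 and 0.0263: stop at x2
```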

4. Conclusion

By using the optimal designs technique, we have been able to locate the optimum of a second order response function in just one move. This method circumvents the noticeable pitfall in RSM by obtaining the step length from the derivative of the response function rather than by intuition or trial and error as is the case in RSM. The numerical illustration, which gives x_2^* = [1.1337, 1.9081]^T and f(x_2^*) = 0.0263 in just one move, compares favourably with other known methods such as the Newton-Raphson method, which requires more than one iteration to reach the optimizer.

Cite this paper

Etukudo, I. (2017) Optimal Designs Technique for Locating the Optimum of a Second Order Response Function. American Journal of Operations Research, 7, 263-271. https://doi.org/10.4236/ajor.2017.75018

References

1. Montgomery, D.C. (2001) Design and Analysis of Experiments. John Wiley & Sons, Inc., New York.

2. Dayananda, R., Shrikantha, R., Raviraj, S. and Rajesh, N. (2010) Application of Response Surface Methodology on Surface Roughness in Grinding of Aerospace Materials. ARPN Journal of Engineering and Applied Science, 5, 23-28.

3. Adinarayana, K. and Ellaiah, O. (2002) Response Surface Optimization of the Critical Medium Components for the Production of Alkaline Phosphate by Newly Isolated Basilus. Journal of Pharmaceutical Science, 5, 272-278.

4. Amayo, T. (2010) Response Surface Methodology and Its Application to Automative Suspension Designs.

5. Arokiyamany, A. and Sivakumaar, R.K. (2011) The Use of Response Surface Methodology in Optimization Process for Bacteriocin. International Journal of Biomedical Research, 2.

6. Bradley, D.N. (2007) The Response Surface Methodology. Master's Thesis, Indiana University of South Bend, South Bend. http://www.cs.iusb.edu/thesis/Nbradley.thesis.pdf

7. Cochran, W.G. and Cox, G.M. (1992) Experimental Designs. John Wiley & Sons, Inc., New York.

8. Akpan, S.S., Usen, J. and Ugbe, T.A. (2013) An Alternative Procedure for Solving Second Order Response Design Problems. International Journal of Scientific & Engineering Research, 4, 2233-2245.

9. Umoren, M.U. and Etukudo, I.A. (2010) A Modified Super Convergent Line Series Algorithm for Solving Unconstrained Optimization Problems. Journal of Modern Mathematics and Statistics, 4, 115-122. https://doi.org/10.3923/jmmstat.2010.115.122

10. Onukogu, I.B. (2002) Super Convergent Line Series in Optimal Design on Experimental and Mathematical Programming. AP Express Publisher, Nsukka.

11. Umoren, M.U. and Etukudo, I.A. (2009) A Modified Super Convergent Line Series Algorithm for Solving Quadratic Programming Problems. Journal of Mathematical Sciences, 20, 55-66.

12. Chigbu, P.E. and Ugbe, T.A. (2002) On the Segmentation of the Response Surfaces for Super Convergent Line Series Optimal Solutions of Constrained Linear and Quadratic Programming Problems. Global Journal of Mathematical Sciences, 1, 27-34. https://doi.org/10.4314/gjmas.v1i1.21319