American Journal of Operations Research
Vol. 07, No. 06 (2017), Article ID: 79673, 8 pages
DOI: 10.4236/ajor.2017.76024

The Optimum of a Quadratic Univariate Response Function Is Located at the Origin

Idorenyin Etukudo

Department of Mathematics & Statistics, Akwa Ibom State University, Ikot Akpaden, Nigeria

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: August 28, 2017; Accepted: October 15, 2017; Published: October 18, 2017

ABSTRACT

This work shows that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term lies at the origin. This is achieved by using the optimal designs technique for solving unconstrained optimization problems with quadratic surfaces. The objective of the work, namely $x_{\min} = 0$, is realized in just one move.

Keywords:

Fibonacci Search Technique, Golden Section Search Technique, Optimal Designs, Response Surface, Unconstrained Optimization

1. Introduction

This paper seeks to show that given a quadratic univariate response function with zero coefficients except that of the quadratic term, the optimum lies at the origin. [1] and [2] stated that even though very few problems exist in real life where managers are concerned with taking decisions involving only one decision variable, this kind of study is justified since it forms the basis of simple extensions that play a cardinal role in the development of a general multivariate algorithm (see [3]).

Traditional solution techniques for solving unconstrained optimization problems with a single variable abound. These techniques require many iterations involving very tedious computations [4]. The line search techniques in this group include the Fibonacci and Golden Section Search techniques. They merely identify the interval of uncertainty containing the optimum and seek to minimize this interval, without actually locating the exact optimum point, and the computational effort needed to achieve this is enormous. For instance, the procedure in the Fibonacci Search technique follows the numerical sequence known as the Fibonacci numbers, as shown by [1].

As stated by [5] and [6], the Golden Section Search technique is another efficient method of determining the interval of uncertainty in which the desired optimum must lie. [7] shows the superiority of the Golden Section technique over the Fibonacci Search technique, since the latter requires a priori specification of the resolution factor as well as the number of iterations, neither of which is necessary in the former; nevertheless, [8] and [9] posited, and actually proved, that the Fibonacci Search technique is the best traditional technique for solving the problem under consideration.
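To make this limitation concrete, the following minimal sketch (our illustration in Python, not taken from the cited works) implements a textbook golden-section search on the function $f(x) = 4x^2$ used later in this paper. Note that it only narrows the interval of uncertainty and returns its midpoint, never the exact minimizer.

```python
# A minimal golden-section search sketch (illustrative; all names are ours).
# It shrinks the interval of uncertainty [a, b] but never reports the exact
# optimum point, which is the limitation discussed above.

def golden_section(f, a, b, tol=1e-4):
    inv_phi = (5 ** 0.5 - 1) / 2                 # 1/golden ratio, ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                          # minimizer lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                    # minimizer lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2                           # midpoint of the final interval

print(golden_section(lambda x: 4 * x ** 2, -5.0, 5.0))   # ~0, after many shrinks
```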

However, [10] presented a new technique for obtaining an exact optimum of unconstrained optimization problems with univariate quadratic surfaces. This new technique derives from the super convergent line series algorithm, which uses the principles of optimal designs of experiments [11] [12], as modified by [13] (see also [14] and [15]). The algorithmic procedure used in realizing our objective in this work is as given by [10].

2. The Optimum of a Quadratic Univariate Response Function with Zero Coefficients except That of the Quadratic Term Is Located at the Origin

This section seeks to prove that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term is located at the origin.

Let the quadratic univariate response function, $f(x)$, having zero response parameters except that of the quadratic term, be

$$f(x) = bx^2$$

We are required to show that $x_{\min}^* = 0$. This is done using the algorithm as given by [10].

Initialization: Select N support points such that 3r ≤ N ≤ 4r, that is, 6 ≤ N ≤ 8, where r = 2 is the number of partitioned groups; choosing N arbitrarily, form an initial design matrix

$$X = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{bmatrix}$$

Step 1: Let the optimal starting point computed from X be $x_1^*$.

Step 2: Partition X into r = 2 groups to obtain the design matrices $X_i$, $i = 1, 2$, as well as the information matrices $M_i = X_i^T X_i$ and their inverses $M_i^{-1}$.

Step 3: Obtain the following:

1) The matrices of the interaction effect of the univariate for the groups

$$X_1^I = \begin{bmatrix} x_{11}^2 \\ x_{12}^2 \\ \vdots \\ x_{1k}^2 \end{bmatrix} \quad \text{and} \quad X_2^I = \begin{bmatrix} x_{2(k+1)}^2 \\ x_{2(k+2)}^2 \\ \vdots \\ x_{2N}^2 \end{bmatrix} \quad \text{where } k = \frac{N}{2}$$

2) Interaction vector of the response parameter,

$$g = [b]$$

3) Interaction vectors for the groups,

$$I_i = M_i^{-1} X_i^T X_i^I g$$

4) Matrices of mean square error for the groups

$$\bar{M}_i = M_i^{-1} + I_i I_i^T = \begin{bmatrix} \bar{v}_{i11} & \bar{v}_{i12} \\ \bar{v}_{i21} & \bar{v}_{i22} \end{bmatrix}$$

5) Matrices of coefficient of convex combinations of the matrices of mean square error

$$H_i = \operatorname{diag}\left\{\frac{\bar{v}_{i11}}{\sum_i \bar{v}_{i11}}, \frac{\bar{v}_{i22}}{\sum_i \bar{v}_{i22}}\right\} = \operatorname{diag}\{h_{i1}, h_{i2}\}$$

and by normalizing $H_i$ such that $\sum_i H_i^* H_i^{*T} = I$, we have

$$H_i^* = \operatorname{diag}\left\{\frac{h_{i1}}{\sqrt{\sum_i h_{i1}^2}}, \frac{h_{i2}}{\sqrt{\sum_i h_{i2}^2}}\right\}$$

6) The average information matrix

$$M(\xi_N) = \sum_i H_i^* M_i H_i^{*T} = \begin{bmatrix} \bar{m}_{11} & \bar{m}_{12} \\ \bar{m}_{21} & \bar{m}_{22} \end{bmatrix}$$

Step 4: Obtain the response vector

$$z = \begin{bmatrix} z_0 \\ z_1 \end{bmatrix}$$

where $z_0 = f(\bar{m}_{21})$ and $z_1 = f(\bar{m}_{22})$, and hence the direction vector

$$d = \begin{bmatrix} d_0 \\ d_1 \end{bmatrix} = M^{-1}(\xi_N) z$$

which gives $d^* = d_1$.

Step 5: We now make a move to the point

$$x_2^* = x_1^* - \rho_1 d_1$$

where $\rho_1$ is the step length. The value of the response function at this point is

$$f(x_2^*) = b\left(x_1^* - \rho_1 d_1\right)^2 = b\left[x_1^{*2} - 2 x_1^* \rho_1 d_1 + \rho_1^2 d_1^2\right]$$

Differentiating with respect to $\rho_1$ and setting the result to zero,

$$\frac{d f(x_2^*)}{d \rho_1} = -2 b x_1^* d_1 + 2 b \rho_1 d_1^2 = 0$$

which gives

$$\rho_1 = \frac{x_1^*}{d_1}$$

and hence

$$x_2^* = x_1^* - \frac{x_1^*}{d_1} d_1 = 0$$
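This step can also be checked symbolically. The following SymPy sketch (our illustration, not part of the algorithm in [10]) confirms that minimizing $f(x_1^* - \rho_1 d_1)$ over $\rho_1$ yields $\rho_1 = x_1^*/d_1$, so that the move lands exactly at the origin for any nonzero $b$ and $d_1$.

```python
import sympy as sp

# Symbolic check of Step 5 (illustrative): the optimal step length is x1/d1,
# and the move x2 = x1 - rho*d1 then lands exactly at the origin.
b, x1, d1, rho = sp.symbols('b x1 d1 rho', real=True, nonzero=True)
f2 = b * (x1 - rho * d1) ** 2                 # response at the trial point
rho_opt = sp.solve(sp.diff(f2, rho), rho)[0]  # -> x1/d1
x2 = sp.simplify(x1 - rho_opt * d1)           # -> 0
print(rho_opt, x2)
```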

Step 6: Since the true value of $x_1^*$ in $\left| f(x_2^*) - f(x_1^*) \right| = \left| 0 - b x_1^{*2} \right| = b x_1^{*2}$ is unknown, we assume that $b x_1^{*2} > \varepsilon$ and hence, we make a second move as follows:

$$x_3^* = x_2^* - \rho_2 d_2 = 0 - \rho_2 d_2 = -\rho_2 d_2$$

and

$$f(x_3^*) = b \rho_2^2 d_2^2$$

$$\frac{d f(x_3^*)}{d \rho_2} = 2 b \rho_2 d_2^2 = 0$$

But $b$ and $d_2$ cannot be zero, which means that $\rho_2 = 0$. Since $\rho_2 = 0$, there was no need for the second move, showing that the optimal solution was obtained at the first move.

Therefore,

$$x_2^* = x_{\min} = 0 \quad \text{and} \quad f(x_{\min}) = 0$$

3. Numerical Illustration

Consider the quadratic univariate response function,

$$f(x) = 4x^2$$

We are required to show that $x_{\min}^* = 0$. This is done as follows:

Initialization: Select N support points such that 6 ≤ N ≤ 8; choosing N = 6, we make the initial design matrix

$$X = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 4 \\ 1 & 5 \\ 1 & 6 \end{bmatrix}$$

Step 1: Compute the optimal starting point,

$$x_1^* = \sum_{m=1}^{6} u_m^* x_m^T, \quad u_m^* > 0, \quad \sum_{m=1}^{6} u_m^* = 1$$

where

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \qquad a_m = x_m x_m^T, \qquad m = 1, 2, \ldots, 6$$

$$a_1 = \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = 2, \; a_1^{-1} = 0.5; \qquad a_2 = \begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = 5, \; a_2^{-1} = 0.2$$

$$a_3 = \begin{bmatrix} 1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = 10, \; a_3^{-1} = 0.1; \qquad a_4 = \begin{bmatrix} 1 & 4 \end{bmatrix}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = 17, \; a_4^{-1} = 0.0588$$

$$a_5 = \begin{bmatrix} 1 & 5 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \end{bmatrix} = 26, \; a_5^{-1} = 0.0385; \qquad a_6 = \begin{bmatrix} 1 & 6 \end{bmatrix}\begin{bmatrix} 1 \\ 6 \end{bmatrix} = 37, \; a_6^{-1} = 0.027$$

$$\sum_{m=1}^{6} a_m^{-1} = 0.9243$$

Since

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \quad m = 1, 2, \ldots, 6$$

then

$$u_1^* = \frac{0.5}{0.9243} = 0.5409, \quad u_2^* = \frac{0.2}{0.9243} = 0.2164, \quad u_3^* = \frac{0.1}{0.9243} = 0.1082,$$

$$u_4^* = \frac{0.0588}{0.9243} = 0.0636, \quad u_5^* = \frac{0.0385}{0.9243} = 0.0417, \quad u_6^* = \frac{0.027}{0.9243} = 0.0292$$

Hence, the optimal starting point is

$$x_1^* = \sum_{m=1}^{6} u_m^* x_m^T = 0.5409 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 0.2164 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + 0.1082 \begin{bmatrix} 1 \\ 3 \end{bmatrix} + 0.0636 \begin{bmatrix} 1 \\ 4 \end{bmatrix} + 0.0417 \begin{bmatrix} 1 \\ 5 \end{bmatrix} + 0.0292 \begin{bmatrix} 1 \\ 6 \end{bmatrix} = \begin{bmatrix} 1.0000 \\ 1.9364 \end{bmatrix}$$

That is,

$$x_1^* = 1.9364$$
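This computation is easy to reproduce. The following NumPy sketch (our illustration; the variable names are ours) implements the weighting rule $u_m^* = a_m^{-1}/\sum a_m^{-1}$ with $a_m = x_m x_m^T$ used above.

```python
import numpy as np

# Sketch of Step 1: optimal starting point from inverse-norm convex weights.
X = np.column_stack([np.ones(6), np.arange(1.0, 7.0)])   # initial design matrix
a = (X * X).sum(axis=1)                                  # a_m = 1 + m^2
u = (1 / a) / (1 / a).sum()                              # convex weights, sum to 1
x_start = u @ X                                          # -> [1.0000, 1.9364]
print(x_start[1])                                        # approx 1.9364
```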

Step 2: By partitioning X into 2 groups, we obtain the design matrices

$$X_1 = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix} \quad \text{and} \quad X_2 = \begin{bmatrix} 1 & 4 \\ 1 & 5 \\ 1 & 6 \end{bmatrix}$$

The respective information matrices are

$$M_1 = X_1^T X_1 = \begin{bmatrix} 3 & 6 \\ 6 & 14 \end{bmatrix} \quad \text{and} \quad M_2 = X_2^T X_2 = \begin{bmatrix} 3 & 15 \\ 15 & 77 \end{bmatrix}$$

and their inverses are

$$M_1^{-1} = \begin{bmatrix} 2.3333 & -1 \\ -1 & 0.5 \end{bmatrix} \quad \text{and} \quad M_2^{-1} = \begin{bmatrix} 12.8333 & -2.5 \\ -2.5 & 0.5 \end{bmatrix}$$
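Continuing the sketch above (with X as defined there), Step 2 reduces to a few lines:

```python
# Sketch of Step 2: partition X and form the information matrices.
X1, X2 = X[:3], X[3:]                         # r = 2 groups of k = 3 points each
M1, M2 = X1.T @ X1, X2.T @ X2                 # [[3, 6], [6, 14]], [[3, 15], [15, 77]]
M1_inv, M2_inv = np.linalg.inv(M1), np.linalg.inv(M2)
```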

Step 3: Obtain the following (a sketch reproducing items 1) to 6) appears after the list):

1) The matrices of the interaction effect for the groups

$$X_1^I = \begin{bmatrix} 1 \\ 4 \\ 9 \end{bmatrix} \quad \text{and} \quad X_2^I = \begin{bmatrix} 16 \\ 25 \\ 36 \end{bmatrix}$$

2) Interaction vector of the response parameter,

$$g = [4]$$

3) Interaction vectors for the groups,

$$I_1 = \begin{bmatrix} -13.3333 \\ 16.0000 \end{bmatrix} \quad \text{and} \quad I_2 = \begin{bmatrix} -97.3333 \\ 40.0000 \end{bmatrix}$$

4) Matrices of mean square error for the groups

$$\bar{M}_1 = \begin{bmatrix} 180.1111 & -214.3333 \\ -214.3333 & 256.5000 \end{bmatrix} \quad \text{and} \quad \bar{M}_2 = \begin{bmatrix} 9486.6 & -3895.8 \\ -3895.8 & 1600.5 \end{bmatrix}$$

5) Matrices of coefficient of convex combinations of the matrices of mean square error

$$H_1 = \operatorname{diag}\left\{\frac{180.1111}{180.1111 + 9486.6}, \frac{256.5}{256.5 + 1600.5}\right\} = \operatorname{diag}\{0.0186, 0.1381\}$$

$$H_2 = I - H_1 = \operatorname{diag}\{0.9814, 0.8619\}$$

and by normalization, we have

$$H_1^* = \operatorname{diag}\left\{\frac{0.0186}{\sqrt{0.0186^2 + 0.9814^2}}, \frac{0.1381}{\sqrt{0.1381^2 + 0.8619^2}}\right\} = \operatorname{diag}\{0.0189, 0.1582\}$$

$$H_2^* = \operatorname{diag}\left\{\frac{0.9814}{\sqrt{0.0186^2 + 0.9814^2}}, \frac{0.8619}{\sqrt{0.1381^2 + 0.8619^2}}\right\} = \operatorname{diag}\{0.9998, 0.9874\}$$

6) The average information matrix

$$M(\xi_N) = \begin{bmatrix} 2.9999 & 14.8260 \\ 14.8260 & 75.4222 \end{bmatrix}$$
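Items 1) to 6) can be reproduced by continuing the same sketch; the small discrepancies against the hand-rounded figures above come from rounding of intermediate values.

```python
# Sketch of Step 3: interaction effects, interaction vectors, mean square
# error matrices, convex-combination coefficients, and the average
# information matrix (continues the Step 2 sketch).
b = 4.0                                       # interaction vector g = [b]
XI1 = (X1[:, 1] ** 2).reshape(-1, 1)          # [[1], [4], [9]]
XI2 = (X2[:, 1] ** 2).reshape(-1, 1)          # [[16], [25], [36]]
I1 = b * (M1_inv @ X1.T @ XI1)                # -> [[-13.3333], [16.0]]
I2 = b * (M2_inv @ X2.T @ XI2)                # -> [[-97.3333], [40.0]]
Mbar1 = M1_inv + I1 @ I1.T                    # mean square error, group 1
Mbar2 = M2_inv + I2 @ I2.T                    # mean square error, group 2

v11 = np.array([Mbar1[0, 0], Mbar2[0, 0]])    # [180.11, 9486.6]
v22 = np.array([Mbar1[1, 1], Mbar2[1, 1]])    # [256.5, 1600.5]
H = np.column_stack([v11 / v11.sum(), v22 / v22.sum()])  # row i holds diag of H_i
H_star = H / np.sqrt((H ** 2).sum(axis=0))    # normalize so sum of H*H*^T = I
M_avg = sum(np.diag(h) @ M @ np.diag(h)
            for h, M in zip(H_star, (M1, M2)))
print(M_avg)                                  # approx [[3.0, 14.83], [14.83, 75.42]]
```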

Step 4: Obtain the response vector

$$z = \begin{bmatrix} f(14.8260) \\ f(75.4222) \end{bmatrix} = \begin{bmatrix} 879.2411 \\ 22754.0330 \end{bmatrix}$$

and hence, the direction vector

$$d = \begin{bmatrix} d_0 \\ d_1 \end{bmatrix} = M^{-1}(\xi_N) z = \begin{bmatrix} -42039 \\ 8565 \end{bmatrix}$$

which gives $d^* = 8565$.
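Continuing the sketch, Step 4 is a single linear solve:

```python
# Sketch of Step 4: response vector from the second row of M_avg, then the
# direction vector d = M_avg^{-1} z (continues the Step 3 sketch).
f = lambda x: 4.0 * x ** 2
z = np.array([f(M_avg[1, 0]), f(M_avg[1, 1])])   # roughly [879, 22754]
d = np.linalg.solve(M_avg, z)                    # roughly [-4.2e4, 8.6e3]
d_star = d[1]
```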

Step 5: We now make a move to the point

$$x_2^* = x_1^* - \rho_1 d^*$$

where $\rho_1$ is the step length. The value of the response function at this point is

$$f(x_2^*) = b\left(x_1^* - \rho_1 d^*\right)^2 = b\left[x_1^{*2} - 2 x_1^* \rho_1 d^* + \rho_1^2 d^{*2}\right]$$

$$\frac{d f(x_2^*)}{d \rho_1} = -2 b x_1^* d^* + 2 b \rho_1 d^{*2} = 0$$

which gives

$$\rho_1 = \frac{x_1^*}{d^*} = 0.0002260828$$

since $d^* = 8565$ and $x_1^* = 1.9364$.

Hence

$$x_2^* = x_1^* - \rho_1 d^* = 1.9364 - 0.0002260828(8565) \approx 0$$
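The move itself, continuing the sketch, lands at the origin up to machine precision:

```python
# Sketch of Step 5: optimal step length and the move to the origin
# (continues the Step 4 sketch; x_start comes from the Step 1 sketch).
x1 = x_start[1]                   # approx 1.9364
rho1 = x1 / d_star                # approx 0.000226
x2 = x1 - rho1 * d_star           # 0.0 up to floating-point rounding
print(x2, f(x2))                  # both effectively 0
```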

Step 6: Since $\left| f(x_2^*) - f(x_1^*) \right| = \left| 0 - 14.9986 \right| = 14.9986 > \varepsilon = 0.0001$, we make a second move as follows:

$$x_3^* = x_2^* - 8565 \rho_2 = 0 - 8565 \rho_2 = -8565 \rho_2$$

and

$$f(x_3^*) = 4(-8565 \rho_2)^2 = 293436900 \rho_2^2$$

$$\frac{d f(x_3^*)}{d \rho_2} = 586873800 \rho_2 = 0$$

which gives $\rho_2 = 0$. Since $\rho_2 = 0$, there was no need for the second move, showing that the optimal solution was obtained at the first move.

Therefore,

$$x_2^* = x_{\min} = 0 \quad \text{and} \quad f(x_{\min}) = 0$$

4. Conclusion

We set out to show in this work that the optimum of a quadratic univariate response function with zero coefficients except that of the quadratic term is located at the origin. By using the optimal designs technique for solving unconstrained optimization problems with univariate quadratic surfaces, this primary objective has been achieved. In the course of the proof, we saw that the optimum, $x_2^* = x_{\min} = 0$, was obtained in just one move, with $f(x_{\min}) = 0$.

Cite this paper

Etukudo, I. (2017) The Optimum of a Quadratic Univariate Response Function Is Located at the Origin. American Journal of Operations Research, 7, 323-330. https://doi.org/10.4236/ajor.2017.76024

References

1. Eiselt, H.A., Pederzoli, G. and Sandblom, C.L. (1987) Continuous Optimization Models. Walter de Gruyter & Co., Berlin.

2. Taha, H.A. (2005) Operations Research: An Introduction. 7th Edition, Pearson Education, Singapore Pte. Ltd., Indian Branch, Delhi.

3. Etukudo, I. (2017) Optimal Designs Technique for Locating the Optimum of a Second Order Response Function. American Journal of Operations Research, 7, 263-271. https://doi.org/10.4236/ajor.2017.75018

4. Singh, S.K., Yadav, P. and Mukherjee (2015) Line Search Techniques by Fibonacci Search. International Journal of Mathematics and Statistics Invention, 3, 27-29.

5. Winston, W.L. (1994) Operations Research: Applications and Algorithms. 3rd Edition, Duxbury Press, Wadsworth Publishing Company, Belmont, CA.

6. Gerald, C.F. and Wheatley, P. (2004) Applied Numerical Analysis. 7th Edition, Addison-Wesley, Boston.

7. Taha, H.A. (2007) Operations Research: An Introduction. 8th Edition, Asoke K. Ghosh, Prentice Hall of India, Delhi.

8. Subasi, M., Yildirim, N. and Yildirim, B. (2004) An Improvement on Fibonacci Search Method in Optimization Theory. Applied Mathematics and Computation, 147, 893-901.

9. Hassin, R. (1981) On Maximizing Functions by Fibonacci Search.

10. Etukudo, I.A. (2017) Optimal Designs Technique for Solving Unconstrained Optimization Problems with Univariate Quadratic Surfaces. American Journal of Computational and Applied Mathematics, 7, 33-36.

11. Onukogu, I.B. (2002) Super Convergent Line Series in Optimal Design of Experiments and Mathematical Programming. AP Express Publisher, Nigeria.

12. Onukogu, I.B. (1997) Foundations of Optimal Exploration of Response Surfaces. Ephrata Press, Nsukka.

13. Etukudo, I.A. and Umoren, M.U. (2008) A Modified Super Convergent Line Series Algorithm for Solving Linear Programming Problems. Journal of Mathematical Sciences, 19, 73-88.

14. Umoren, M.U. and Etukudo, I.A. (2010) A Modified Super Convergent Line Series Algorithm for Solving Unconstrained Optimization Problems. Journal of Modern Mathematics and Statistics, 4, 115-122. https://doi.org/10.3923/jmmstat.2010.115.122

15. Umoren, M.U. and Etukudo, I.A. (2009) A Modified Super Convergent Line Series Algorithm for Solving Quadratic Programming Problems. Journal of Mathematical Sciences, 20, 55-66.