**American Journal of Operations Research**

Vol.07 No.05(2017), Article ID:78659,9 pages

10.4236/ajor.2017.75018

Optimal Designs Technique for Locating the Optimum of a Second Order Response Function

Idorenyin Etukudo

Department of Mathematics & Statistics, Akwa Ibom State University, Ikot Akpaden, Mkpat Enin, Akwa Ibom State, Nigeria

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: June 2, 2017; Accepted: August 20, 2017; Published: August 23, 2017

ABSTRACT

A more efficient method of locating the optimum of a second order response function is presented in this work. To achieve this, the principles of optimal designs of experiment are invoked. The noticeable pitfall in response surface methodology (RSM) is circumvented by this method, since the step length is obtained by taking the derivative of the response function rather than by intuition or trial and error, as is the case in RSM. A numerical illustration shows that this method obtains the desired optimizer in just one move, which compares favourably with other known methods such as the Newton-Raphson method, which requires more than one iteration to reach the optimizer.

**Keywords:**

Optimal Designs of Experiment, Unconstrained Optimization, Response Surface Methodology, Modified Super Convergent Line Series Algorithm, Newton-Raphson Method

1. Introduction

The problem of locating the optimum of a second order response function has already been addressed by a method known as response surface methodology (RSM). RSM is simply a collection of mathematical and statistical techniques useful for analyzing problems in which several independent variables influence a dependent variable or response. The main objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which the operating requirements are satisfied [1] . See also [2] [3] [4] [5] and [6] . For instance, a chemical engineer may seek to optimize a process yield that is influenced by two variables: reaction time, x_{1}, and reaction temperature, x_{2}. The observed response can be represented as a function of the two independent variables as

$y=f\left({x}_{1},{x}_{2}\right)+\epsilon$ (1)

where $\epsilon$ is the random error term while the expected response function is

$\eta =E\left(y\right)=f\left({x}_{1},{x}_{2}\right)$ (2)

When the mathematical form of Equation (2) is not known, the expected response function can be approximated within the experimental region by a first order or a second order response function [7] .

According to [1] , the initial estimate of the optimum operating conditions for the system is frequently far from the actual optimum. When this happens, the objective of the experimenter is to move rapidly to the general vicinity of the optimum, and the actual step size or step length is determined by the experimenter based on experience. The determination, by experience or trial and error, of a step length that could guarantee rapid movement to the vicinity of the optimum is a pitfall. In order to advance the existing RSM procedure, [8] proposed a modification which fuses the Newton-Raphson and mean-centre algorithms for obtaining the optimum and for exploring near-optimal settings within the optimal region. The problem with this modification is that it retains over 90% of the steps of the previous method and then introduces several other steps, thereby increasing computer time and storage space, only to obtain a selection of near-optimal factor settings which is itself iterative in nature. To circumvent this pitfall, this article solves the problem by making use of the principles of optimal designs of experiment. By designing an experiment optimally, we mean selecting N support points within the experimental region so that the aim of the experimenter can be realized. Unlike RSM, where the step length is obtained by trial and error, [9] modified an algorithm by [10] to solve unconstrained optimization problems using the principles of optimal designs of experiment, where the step length is obtained by taking the derivative of the response function. In [9], a well-defined method of handling interaction effects in the case of quadratic surfaces has also been provided. Since this new technique is a line search algorithm, it relies on a well-defined method of determining the direction of search as given by [11] .
The algorithmic procedure given in the next section requires that the optimal support points that form the initial design matrix, obtained from the entire experimental region, be partitioned into r groups, $r=2,3,\cdots ,k$ . However, [12] has shown that optimal solutions are obtained with r = 2. This method of locating the optimum of a second order response function is an exact solution method, as against the iterative solution methods used in RSM and other traditional methods.

2. The Algorithm

The sequential steps involved in this algorithm are given below:

Initialization: Let the second order response function, $f\left(x\right)$ be defined as

$f\left(x\right)={a}_{0}+{a}_{1}{x}_{1}+{a}_{2}{x}_{2}+{b}_{1}{x}_{1}^{2}+{b}_{2}{x}_{1}{x}_{2}+{b}_{3}{x}_{2}^{2}$

Select N support points such that

$3r\le N\le 4r\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}\text{\hspace{0.17em}}6\le N\le 8$

where r = 2 is the number of partitioned groups and by choosing N arbitrarily, make an initial design matrix

$X=\left[\begin{array}{ccc}1& {x}_{11}& {x}_{12}\\ 1& {x}_{21}& {x}_{22}\\ \vdots & \vdots & \vdots \\ 1& {x}_{N1}& {x}_{N2}\end{array}\right]$

Step 1: Compute the optimal starting point,

${x}_{1}^{*}={\displaystyle {\sum}_{m=1}^{N}{u}_{m}^{*}{x}_{m}^{\text{T}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{m}^{*}>0$

${\sum}_{m=1}^{N}{u}_{m}^{*}=1$

${u}_{m}^{*}=\frac{{a}_{m}^{-1}}{{\displaystyle \sum {a}_{m}^{-1}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,N$

${a}_{m}={x}_{m}{x}_{m}^{\text{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,N$

Step 2: Partition X into r = 2 groups and calculate

1) ${M}_{i}={X}_{i}^{\text{T}}{X}_{i},\text{\hspace{0.17em}}i=1,2$

2) ${M}_{i}^{-1}$

Step 3: Calculate the following:

1) The matrices of the interaction effect of the variables, X_{1I} and X_{2I}

2) Interaction vector of the response parameter,

$g=\left[\begin{array}{c}{b}_{1}\\ {b}_{2}\\ {b}_{3}\end{array}\right]$

3) Interaction vectors for the groups are

${I}_{i}={M}_{i}^{-1}{X}_{i}^{\text{T}}{X}_{iI}g$

4) Matrices of mean square error for the groups are

${\overline{M}}_{i}={M}_{i}^{-1}+{I}_{i}{I}_{i}^{\text{T}}$

5) The matrices of coefficients of convex combinations of the matrices of mean square error, H_{i}, and their normalized forms, ${H}_{i}^{*}$

6) The average information matrix, $M\left({\xi}_{N}\right)$

Step 4: Obtain the response vector, z and the direction vector, d.

Normalize d to have d*.

Step 5: Make a move to the point

${x}_{2}^{*}={x}_{1}^{*}-{\rho}_{1}{d}^{*}$

for a minimization problem or

${x}_{2}^{*}={x}_{1}^{*}+{\rho}_{1}{d}^{*}$

for a maximization problem where ${\rho}_{1}$ is the step length obtained from

$\frac{\text{d}f\left({x}_{2}^{*}\right)}{\text{d}{\rho}_{1}}=0$

Step 6: Termination criteria. Is $\left|f\left({x}_{2}^{*}\right)-f\left({x}_{1}^{*}\right)\right|<\epsilon $ where ε = 0.0001?

1) Yes. Stop and set ${x}_{2}^{*}={x}_{\mathrm{min}}$ or ${x}_{\mathrm{max}}$ as the case may be.

2) No. Replace ${x}_{1}^{*}$ by ${x}_{2}^{*}$ and return to Step 5. If ${\rho}_{2}\cong 0$ , then implement Step 6(1).

3. Numerical Illustration

In this section, we give a numerical illustration of the optimal designs technique for locating the optimum of a second order response function.

$\mathrm{min}\text{\hspace{0.17em}}f\left(x\right)={\left({x}_{1}-1\right)}^{2}+{\left({x}_{2}-2\right)}^{2}$ by the optimal designs technique.

Solution

Initialization: Given the response function, $f\left(x\right)={\left({x}_{1}-1\right)}^{2}+{\left({x}_{2}-2\right)}^{2}$ , select N support points such that

$3r\le N\le 4r\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}\text{\hspace{0.17em}}6\le N\le 8$

where r = 2 is the number of partitioned groups and by choosing N arbitrarily, make an initial design matrix

$X=\left[\begin{array}{ccc}1& -2& 1.5\\ 1& -1& 1\\ 1& -0.5& 0.5\\ 1& 0& 0\\ 1& 0.5& -0.5\\ 1& 2& -1\end{array}\right]$

Step 1: Compute the optimal starting point,

${x}_{1}^{*}={\displaystyle {\sum}_{m=1}^{6}{u}_{m}^{*}{x}_{m}^{\text{T}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{m}^{*}>0$

${\sum}_{m=1}^{6}{u}_{m}^{*}=1$

${u}_{m}^{*}=\frac{{a}_{m}^{-1}}{{\displaystyle \sum {a}_{m}^{-1}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,6$

${a}_{m}={x}_{m}{x}_{m}^{\text{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,6$

${a}_{1}={x}_{1}{x}_{1}^{\text{T}}=\left[\begin{array}{ccc}1& -2& 1.5\end{array}\right]\left[\begin{array}{c}1\\ -2\\ 1.5\end{array}\right]=7.25,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{1}^{-1}=0.1379$

${a}_{2}={x}_{2}{x}_{2}^{\text{T}}=\left[\begin{array}{ccc}1& -1& 1\end{array}\right]\left[\begin{array}{c}1\\ -1\\ 1\end{array}\right]=3,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{2}^{-1}=0.3333$

${a}_{3}={x}_{3}{x}_{3}^{\text{T}}=\left[\begin{array}{ccc}1& -0.5& 0.5\end{array}\right]\left[\begin{array}{c}1\\ -0.5\\ 0.5\end{array}\right]=1.5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{3}^{-1}=0.6667$

${a}_{4}={x}_{4}{x}_{4}^{\text{T}}=\left[\begin{array}{ccc}1& 0& 0\end{array}\right]\left[\begin{array}{c}1\\ 0\\ 0\end{array}\right]=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{4}^{-1}=1.0000$

${a}_{5}={x}_{5}{x}_{5}^{\text{T}}=\left[\begin{array}{ccc}1& 0.5& -0.5\end{array}\right]\left[\begin{array}{c}1\\ 0.5\\ -0.5\end{array}\right]=1.5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{5}^{-1}=0.6667$

${a}_{6}={x}_{6}{x}_{6}^{\text{T}}=\left[\begin{array}{ccc}1& 2& -1\end{array}\right]\left[\begin{array}{c}1\\ 2\\ -1\end{array}\right]=6,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{6}^{-1}=0.1667$

${\sum}_{m=1}^{6}{a}_{m}^{-1}=2.9713$

Since

${u}_{m}^{*}=\frac{{a}_{m}^{-1}}{{\displaystyle \sum {a}_{m}^{-1}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,6$

then

${u}_{1}^{*}=\frac{0.1379}{2.9713}=0.0464,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{2}^{*}=\frac{0.3333}{2.9713}=0.1122,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{3}^{*}=\frac{0.6667}{2.9713}=0.2244,$

${u}_{4}^{*}=\frac{1.0000}{2.9713}=0.3366,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{5}^{*}=\frac{0.6667}{2.9713}=0.2244,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{6}^{*}=\frac{0.1667}{2.9713}=0.0561$

Hence, the optimal starting point is

$\begin{array}{l}{x}_{1}^{*}={\displaystyle {\sum}_{m=1}^{6}{u}_{m}^{*}{x}_{m}^{\text{T}}}=0.0464\left[\begin{array}{c}1\\ -2\\ 1.5\end{array}\right]+0.1122\left[\begin{array}{c}1\\ -1\\ 1\end{array}\right]+0.2244\left[\begin{array}{c}1\\ -0.5\\ 0.5\end{array}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\text{}0.3366\left[\begin{array}{c}1\\ 0\\ 0\end{array}\right]+0.2244\left[\begin{array}{c}1\\ 0.5\\ -0.5\end{array}\right]+0.0561\left[\begin{array}{c}1\\ 2\\ -1\end{array}\right]=\left[\begin{array}{c}1.0001\\ -0.0928\\ 0.1257\end{array}\right]\end{array}$

That is,

${x}_{1}^{*}=\left[\begin{array}{c}-0.0928\\ 0.1257\end{array}\right]$
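As a check on Step 1, the weights $u_m^*$ and the optimal starting point can be reproduced in a few lines. The sketch below (plain Python, written for this article's design matrix; it is a verification aid, not part of the method's formal statement) recomputes $a_m$, $u_m^*$ and $x_1^*$:

```python
# Verification sketch of Step 1: weights u_m* inversely proportional to
# a_m = x_m x_m^T, and the optimal starting point x_1* (pure Python).

# Rows of the initial design matrix X chosen above
X = [
    [1, -2.0, 1.5],
    [1, -1.0, 1.0],
    [1, -0.5, 0.5],
    [1, 0.0, 0.0],
    [1, 0.5, -0.5],
    [1, 2.0, -1.0],
]

# a_m = x_m x_m^T is the squared norm of each support point
a = [sum(v * v for v in row) for row in X]           # [7.25, 3, 1.5, 1, 1.5, 6]

# u_m* = a_m^{-1} / sum_m a_m^{-1}; the weights sum to 1
inv = [1.0 / am for am in a]
u = [w / sum(inv) for w in inv]

# x_1* = sum_m u_m* x_m^T; the first entry is the constant term (= 1)
x1_star = [sum(u[m] * X[m][j] for m in range(len(X))) for j in range(3)]
print([round(v, 4) for v in x1_star])   # ≈ [1.0, -0.0928, 0.1257]
```

This agrees with the starting point derived above to the four decimal places quoted.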

Step 2: Partitioning X into 2 groups of equal number of support points, we obtain the design matrices,

${X}_{1}=\left[\begin{array}{ccc}1& -2& 1.5\\ 1& -1& 1\\ 1& -0.5& 0.5\end{array}\right]$ and ${X}_{2}=\left[\begin{array}{ccc}1& 0& 0\\ 1& 0.5& -0.5\\ 1& 2& -1\end{array}\right]$

The respective information matrices are

${M}_{1}={X}_{1}^{\text{T}}{X}_{1}=\left[\begin{array}{ccc}3& -3.5& 3\\ -3.5& 5.25& -4.25\\ 3& -4.25& 3.5\end{array}\right]$ and

${M}_{2}={X}_{2}^{\text{T}}{X}_{2}=\left[\begin{array}{ccc}3& 2.5& -1.5\\ 2.5& 4.25& -2.25\\ -1.5& -2.25& 1.25\end{array}\right]$

and their inverses are

${M}_{1}^{-1}=\left[\begin{array}{ccc}5& -8& -14\\ -8& 24& 36\\ -14& 36& 56\end{array}\right]$ and ${M}_{2}^{-1}=\left[\begin{array}{ccc}1& 1& 3\\ 1& 6& 12\\ 3& 12& 26\end{array}\right]$
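The information matrices and their inverses can be checked numerically; the sketch below assumes NumPy is available:

```python
# Reproducing the information matrices M_i = X_i^T X_i of Step 2 and
# checking the stated inverses (NumPy assumed available).
import numpy as np

X1 = np.array([[1, -2.0, 1.5], [1, -1.0, 1.0], [1, -0.5, 0.5]])
X2 = np.array([[1, 0.0, 0.0], [1, 0.5, -0.5], [1, 2.0, -1.0]])

M1, M2 = X1.T @ X1, X2.T @ X2
M1_inv, M2_inv = np.linalg.inv(M1), np.linalg.inv(M2)

print(np.round(M1_inv, 4))   # ≈ [[5, -8, -14], [-8, 24, 36], [-14, 36, 56]]
print(np.round(M2_inv, 4))   # ≈ [[1, 1, 3], [1, 6, 12], [3, 12, 26]]
```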

Step 3: Calculate the following:

1) The matrices of the interaction effect of the variables for the groups as

${X}_{1I}=\left[\begin{array}{ccc}{x}_{11}^{2}& {x}_{11}{x}_{12}& {x}_{12}^{2}\\ {x}_{21}^{2}& {x}_{21}{x}_{22}& {x}_{22}^{2}\\ {x}_{31}^{2}& {x}_{31}{x}_{32}& {x}_{32}^{2}\end{array}\right]=\left[\begin{array}{ccc}4& -3& 2.25\\ 1& -1& 1\\ 0.25& -0.25& 0.25\end{array}\right]$

${X}_{2I}=\left[\begin{array}{ccc}{x}_{41}^{2}& {x}_{41}{x}_{42}& {x}_{42}^{2}\\ {x}_{51}^{2}& {x}_{51}{x}_{52}& {x}_{52}^{2}\\ {x}_{61}^{2}& {x}_{61}{x}_{62}& {x}_{62}^{2}\end{array}\right]=\left[\begin{array}{ccc}0& 0& 0\\ 0.25& -0.25& 0.25\\ 4& -2& 1\end{array}\right]$

2) Interaction vector of the response parameter,

$g=\left[\begin{array}{c}{b}_{1}\\ {b}_{2}\\ {b}_{3}\end{array}\right]=\left[\begin{array}{c}1\\ 0\\ 1\end{array}\right]$

3) Interaction vectors for the groups are

${I}_{1}={M}_{1}^{-1}{X}_{1}^{\text{T}}{X}_{1I}g=\left[\begin{array}{c}-1\\ -5.5\\ -2.5\end{array}\right]$

${I}_{2}={M}_{2}^{-1}{X}_{2}^{\text{T}}{X}_{2I}g=\left[\begin{array}{c}0\\ 4\\ 3\end{array}\right]$

4) Matrices of mean square error for the groups are

${\overline{M}}_{1}={M}_{1}^{-1}+{I}_{1}{I}_{1}^{\text{T}}=\left[\begin{array}{ccc}6& -2.5& -11.5\\ -2.5& 54.25& 49.75\\ -11.5& 49.75& 62.25\end{array}\right]$

${\overline{M}}_{2}={M}_{2}^{-1}+{I}_{2}{I}_{2}^{\text{T}}=\left[\begin{array}{ccc}1& 1& 3\\ 1& 22& 24\\ 3& 24& 35\end{array}\right]$

5) Matrices of coefficient of convex combinations of the matrices of mean square error are

${H}_{1}=diag\left\{\frac{6}{6+1},\frac{54.25}{54.25+22},\frac{62.25}{62.25+35}\right\}=diag\left\{0.8571,0.7115,0.6401\right\}$

${H}_{2}=I-{H}_{1}=diag\left\{0.1429,0.2885,0.3599\right\}$

and by normalizing H_{i} such that $\Sigma {H}_{i}^{*}{H}_{i}^{*\text{T}}=I$, we have

$\begin{array}{c}{H}_{1}^{*}=diag\left\{\frac{{h}_{11}}{\sqrt{\Sigma {h}_{i1}^{2}}},\frac{{h}_{12}}{\sqrt{\Sigma {h}_{i2}^{2}}},\frac{{h}_{13}}{\sqrt{\Sigma {h}_{i3}^{2}}}\right\}\\ =diag\left\{\frac{0.8571}{\sqrt{{0.8571}^{2}+{0.1429}^{2}}},\frac{0.7115}{\sqrt{{0.7115}^{2}+{0.2885}^{2}}},\frac{0.6401}{\sqrt{{0.6401}^{2}+{0.3599}^{2}}}\right\}\\ =diag\left\{0.9864,\text{}0.9267,\text{}0.8717\right\}\end{array}$

$\begin{array}{c}{H}_{2}^{*}=diag\left\{\frac{0.1429}{\sqrt{{0.8571}^{2}+{0.1429}^{2}}},\frac{0.2885}{\sqrt{{0.7115}^{2}+{0.2885}^{2}}},\frac{0.3599}{\sqrt{{0.6401}^{2}+{0.3599}^{2}}}\right\}\\ =\text{}diag\left\{0.1645,0.3758,0.4901\right\}\end{array}$

6) The average information matrix is

$M\left({\xi}_{N}\right)=\Sigma {H}_{i}^{*}{M}_{i}{H}_{i}^{*\text{T}}=\left[\begin{array}{ccc}3.0001& -3.0448& 2.4586\\ -3.0448& 5.1088& -3.8476\\ 2.4586& -3.8476& 2.9598\end{array}\right]$
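The whole of Step 3 can be verified numerically. The sketch below (NumPy assumed available; the helper name `interaction_vector` is ours) rebuilds the interaction vectors, the mean square error matrices, the convex-combination coefficients and the average information matrix; because the text quotes values rounded to four decimals, agreement in the last decimal place is approximate:

```python
# Sketch of Step 3: interaction vectors I_i, mean square error matrices
# Mbar_i, convex-combination coefficients H_i and the average information
# matrix M(xi_N) (NumPy assumed available).
import numpy as np

X1 = np.array([[1, -2.0, 1.5], [1, -1.0, 1.0], [1, -0.5, 0.5]])
X2 = np.array([[1, 0.0, 0.0], [1, 0.5, -0.5], [1, 2.0, -1.0]])
g = np.array([1.0, 0.0, 1.0])   # (b1, b2, b3) for f(x) = (x1-1)^2 + (x2-2)^2

def interaction_vector(Xi, g):
    # X_iI has columns x1^2, x1*x2, x2^2 built from the columns of X_i
    x1, x2 = Xi[:, 1], Xi[:, 2]
    XiI = np.column_stack([x1**2, x1 * x2, x2**2])
    return np.linalg.inv(Xi.T @ Xi) @ Xi.T @ XiI @ g

M1, M2 = X1.T @ X1, X2.T @ X2
I1 = interaction_vector(X1, g)   # ≈ [-1, -5.5, -2.5]
I2 = interaction_vector(X2, g)   # ≈ [0, 4, 3]

# Mean square error matrices Mbar_i = M_i^{-1} + I_i I_i^T
Mbar1 = np.linalg.inv(M1) + np.outer(I1, I1)
Mbar2 = np.linalg.inv(M2) + np.outer(I2, I2)

# H_1 from the diagonals of the Mbar_i; H_2 = I - H_1; then normalize so
# that sum H_i* H_i*^T = I
d1, d2 = np.diag(Mbar1), np.diag(Mbar2)
h1 = d1 / (d1 + d2)
h2 = 1.0 - h1
norm = np.sqrt(h1**2 + h2**2)
H1s, H2s = np.diag(h1 / norm), np.diag(h2 / norm)

# Average information matrix M(xi_N) = sum H_i* M_i H_i*^T
M_avg = H1s @ M1 @ H1s.T + H2s @ M2 @ H2s.T
print(np.round(M_avg, 4))
```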

Step 4: Obtain the response vector

$z=\left[\begin{array}{c}{z}_{0}\\ {z}_{1}\\ {z}_{2}\end{array}\right]$

where

${z}_{0}=f\left(-3.0448,2.4586\right)=16.5707$

${z}_{1}=f\left(5.1088,-3.8476\right)=11.0346$

${z}_{2}=f\left(-3.8476,2.9598\right)=24.4204$

and hence, the direction vector

$d=\left[\begin{array}{c}{d}_{0}\\ {d}_{1}\\ {d}_{2}\end{array}\right]={M}^{-1}\left({\xi}_{N}\right)z=\left[\begin{array}{c}-86.2355\\ 521.3935\\ 757.6757\end{array}\right]$

and by normalizing d such that ${d}^{\ast \text{T}}{d}^{\ast}=1$ , we have

${d}^{*}=\left[\begin{array}{c}{d}_{1}\\ {d}_{2}\end{array}\right]=\left[\begin{array}{c}\frac{521.3935}{\sqrt{{521.3935}^{2}+{757.6757}^{2}}}\\ \frac{757.6757}{\sqrt{{521.3935}^{2}+{757.6757}^{2}}}\end{array}\right]=\left[\begin{array}{c}0.5669\\ 0.8238\end{array}\right]$
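Step 4 can be reproduced by solving the linear system $M\left({\xi}_{N}\right)d=z$ with the rounded values quoted above; since the inputs carry only four decimals, the recovered components agree with the text approximately (NumPy assumed available):

```python
# Sketch of Step 4: solve M(xi_N) d = z for the direction vector and
# normalize its coordinate part to unit length (NumPy assumed; the rounded
# values of M(xi_N) and z are taken from the text).
import numpy as np

M = np.array([[3.0001, -3.0448, 2.4586],
              [-3.0448, 5.1088, -3.8476],
              [2.4586, -3.8476, 2.9598]])
z = np.array([16.5707, 11.0346, 24.4204])

d = np.linalg.solve(M, z)                 # direction vector (d0, d1, d2)
d_star = d[1:] / np.linalg.norm(d[1:])    # drop d0, normalize the (x1, x2) part
print(np.round(d_star, 4))                # ≈ [0.5669, 0.8238]
```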

Step 5: Obtain the step length, ${\rho}_{1}$ from

$\begin{array}{c}{x}_{2}^{*}={x}_{1}^{*}-{\rho}_{1}{d}^{*}\\ =\left[\begin{array}{c}-0.0928\\ 0.1257\end{array}\right]-{\rho}_{1}\left[\begin{array}{c}0.5669\\ 0.8238\end{array}\right]\\ =\left[-0.0928-0.5669{\rho}_{1},0.1257-0.8238{\rho}_{1}\right].\end{array}$

That is,

$\begin{array}{c}f\left({x}_{2}^{*}\right)=f\left[-0.0928-0.5669{\rho}_{1},0.1257-0.8238{\rho}_{1}\right]\\ =4.7072+4.3271{\rho}_{1}+{\rho}_{1}^{2}\end{array}$

and

$\frac{\text{d}f\left({x}_{2}^{*}\right)}{\text{d}{\rho}_{1}}=4.3271+2{\rho}_{1}=0$

Hence,

${\rho}_{1}=-2.1636$

and by making a move to the next point, we have

${x}_{2}^{*}=\left[\begin{array}{c}-0.0928\\ 0.1257\end{array}\right]+2.1636\left[\begin{array}{c}0.5669\\ 0.8238\end{array}\right]=\left[\begin{array}{c}1.1337\\ 1.9081\end{array}\right]$
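Because $f$ is quadratic, the step length of Step 5 has a closed form: along the search line, $f\left({x}_{1}^{*}-\rho {d}^{*}\right)=A+B\rho +C{\rho}^{2}$, so the stationarity condition gives $\rho =-B/2C$. A pure-Python sketch of this move:

```python
# Sketch of Step 5: closed-form step length for a quadratic objective.
# f(x* - rho d*) = A + B*rho + C*rho^2, so df/drho = 0 gives rho = -B/(2C).
def f(x1, x2):
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

x = (-0.0928, 0.1257)    # optimal starting point x_1*
d = (0.5669, 0.8238)     # normalized direction vector d*

# Coefficients of the quadratic in rho, obtained by expanding f(x - rho d)
B = -2 * d[0] * (x[0] - 1) - 2 * d[1] * (x[1] - 2)
C = d[0] ** 2 + d[1] ** 2          # ≈ 1 since d* has unit length
rho1 = -B / (2 * C)                # ≈ -2.1636

x_next = (x[0] - rho1 * d[0], x[1] - rho1 * d[1])
print(round(rho1, 4), [round(v, 4) for v in x_next])
```

The negative step length simply reverses the sign in the update, reproducing the move to approximately (1.1337, 1.9081) above.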

Step 6: Since $\left|f\left({x}_{2}^{*}\right)-f\left({x}_{1}^{*}\right)\right|=\left|0.0263-4.7072\right|=4.6809>\epsilon $, we make another move and replace ${x}_{1}^{*}$ by ${x}_{2}^{*}$ .

The new step length, ${\rho}_{2}$ is obtained as follows:

$\begin{array}{c}{x}_{3}^{*}={x}_{2}^{*}-{\rho}_{2}{d}^{*}\\ =\left[\begin{array}{c}1.1337\\ 1.9081\end{array}\right]-{\rho}_{2}\left[\begin{array}{c}0.5669\\ 0.8238\end{array}\right]\\ =\left[1.1337-0.5669{\rho}_{2},1.9081-0.8238{\rho}_{2}\right].\end{array}$

That is,

$\begin{array}{c}f\left({x}_{3}^{*}\right)=f\left[1.1337-0.5669{\rho}_{2},1.9081-0.8238{\rho}_{2}\right]\\ =0.0263-0.0002{\rho}_{2}+{\rho}_{2}^{2}\end{array}$

$\frac{\text{d}f\left({x}_{3}^{*}\right)}{\text{d}{\rho}_{2}}=2{\rho}_{2}-0.0002=0$

which gives

${\rho}_{2}\approx 0$

Since the new step length, ${\rho}_{2}$, is approximately zero, the optimizer was located at the first move and hence

${x}_{2}^{*}=\left[\begin{array}{c}1.1337\\ 1.9081\end{array}\right]$ and $f\left({x}_{2}^{*}\right)=0.0263$
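The termination check can be verified the same way: at ${x}_{2}^{*}$ the next closed-form step length collapses to approximately zero (pure-Python sketch):

```python
# Sketch of Step 6: at x_2* the next closed-form step length is ~0, so the
# search terminates after a single move.
def f(x1, x2):
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

x2 = (1.1337, 1.9081)    # point reached after the first move
d = (0.5669, 0.8238)     # normalized direction vector d*

# Coefficients of the quadratic f(x2 - rho d) in rho
B = -2 * d[0] * (x2[0] - 1) - 2 * d[1] * (x2[1] - 2)
C = d[0] ** 2 + d[1] ** 2
rho2 = -B / (2 * C)

print(round(f(*x2), 4), round(rho2, 6))   # f(x_2*) ≈ 0.0263, rho_2 ≈ 0
```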

4. Conclusion

By using the optimal designs technique, we have been able to locate the optimum of a second order response function in just one move. This method circumvents the noticeable pitfall in RSM by taking the derivative of the response function to obtain the step length rather than doing so by intuition or trial and error, as is the case in RSM. A numerical illustration, which gives ${x}_{2}^{*}=\left[\begin{array}{c}1.1337\\ 1.9081\end{array}\right]$ and $f\left({x}_{2}^{*}\right)=0.0263$ in just one move, compares favourably with other known methods such as the Newton-Raphson method, which requires more than one iteration to reach the optimizer.

Cite this paper

Etukudo, I. (2017) Optimal Designs Technique for Locating the Optimum of a Second Order Response Function. American Journal of Operations Research, 7, 263-271. https://doi.org/10.4236/ajor.2017.75018

References

- 1. Montgomery, D.C. (2001) Design and Analysis of Experiments. John Wiley & Sons, Inc., New York.
- 2. Dayananda, R., Shrikantha, R., Raviraj, S. and Rajesh, N. (2010) Application of Response Surface Methodology on Surface Roughness in Grinding of Aerospace Materials. ARPN Journal of Engineering and Applied Science, 5, 23-28.
- 3. Adinarayana, K. and Ellaiah, P. (2002) Response Surface Optimization of the Critical Medium Components for the Production of Alkaline Protease by a Newly Isolated Bacillus sp. Journal of Pharmacy and Pharmaceutical Sciences, 5, 272-278.
- 4. Amayo, T. (2010) Response Surface Methodology and Its Application to Automotive Suspension Designs.
- 5. Arokiyamany, A. and Sivakumaar, R.K. (2011) The Use of Response Surface Methodology in Optimization Process for Bacteriocin. International Journal of Biomedical Research, 2.
- 6. Bradley, D.N. (2007) The Response Surface Methodology. Master's Thesis, Indiana University of South Bend, South Bend. http://www.cs.iusb.edu/thesis/Nbradley.thesis.pdf
- 7. Cochran, W.G. and Cox, G.M. (1992) Experimental Designs. John Wiley & Sons, Inc., New York.
- 8. Akpan, S.S., Usen, J. and Ugbe, T.A. (2013) An Alternative Procedure for Solving Second Order Response Design Problems. International Journal of Scientific & Engineering Research, 4, 2233-2245.
- 9. Umoren, M.U. and Etukudo, I.A. (2010) A Modified Super Convergent Line Series Algorithm for Solving Unconstrained Optimization Problems. Journal of Modern Mathematics and Statistics, 4, 115-122. https://doi.org/10.3923/jmmstat.2010.115.122
- 10. Onukogu, I.B. (2002) Super Convergent Line Series in Optimal Design on Experimental and Mathematical Programming. AP Express Publisher, Nsukka.
- 11. Umoren, M.U. and Etukudo, I.A. (2009) A Modified Super Convergent Line Series Algorithm for Solving Quadratic Programming Problems. Journal of Mathematical Sciences, 20, 55-66.
- 12. Chigbu, P.E. and Ugbe, T.A. (2002) On the Segmentation of the Response Surfaces for Super Convergent Line Series Optimal Solutions of Constrained Linear and Quadratic Programming Problems. Global Journal of Mathematical Sciences, 1, 27-34. https://doi.org/10.4314/gjmas.v1i1.21319