American Journal of Operations Research
Vol.07 No.03(2017), Article ID:76408,14 pages
10.4236/ajor.2017.73015

On Optimal Non-Overlapping Segmentation and Solutions of Three-Dimensional Linear Programming Problems through the Super Convergent Line Series

Thomas Ugbe1, Polycarp Chigbu2

1Department of Statistics, University of Calabar, Calabar, Nigeria

2Department of Statistics, University of Nigeria, Nsukka, Nigeria

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 28, 2017; Accepted: May 21, 2017; Published: May 24, 2017

ABSTRACT

Solutions of Linear Programming Problems were obtained by segmenting the cuboidal response surface and applying the Super Convergent Line Series methodology. The cuboidal response surface was segmented into up to four segments and explored. It was verified that the number of segments, S, for which optimal solutions are obtained is two (S = 2). Illustrative examples and a real-life problem were also given and solved.

Keywords:

Average Information Matrix, Experimental Space, Line Search Algorithm, Support Points, Optimal Solution

1. Introduction

Linear Programming (LP) problems belong to a class of constrained convex optimization problems which have been widely discussed by several authors: see [1] [2] [3] . The commonly used algorithms for solving Linear Programming problems are: the Simplex method which requires the use of artificial variables and surplus or slack variables, and the active set method which requires the use of artificial constraints and variables. Over the years, a variety of line search algorithms have been employed in locating the local optimizer of response surface methodology (RSM) problems: see [4] and [5] . Similarly, the active set and simplex methods which are available for solving linear programming problems also belong to the class of line search exchange algorithms.

The line search algorithm, which is built around the concept of super convergence, has several points of departure from the classical, gradient-based line series. The classical gradient-based line series often fail to converge to the optimum, but the Super Convergent Line Series (SCLS), which are also gradient-based techniques, locate the global optimum of response surfaces with certainty. The Super Convergent Line Series (SCLS) was introduced by [6] , and later used by [7] and [8] . [9] modified the Super Convergent Line Series (SCLS) and used it to solve Linear Programming Problems; [10] applied the Quick Convergent Inflow Algorithm to solve Constrained Linear Programming Problems on a segmented region; and [11] modified the Quick Convergent Inflow Algorithm and used it to solve Linear Programming Problems based on the variance of the predicted response. In [12] , it was verified and established that, for non-overlapping segmentation of the response surface, the best number of segments is two (S = 2) for Linear Programming Problems, four (S = 4) for Quadratic Programming Problems, and eight (S = 8) for Cubic Programming Problems. The above algorithms compared favourably with other line search algorithms that utilize the principles of experimental design.

Other recent studies on line search algorithms for optimization problems include the following. In [13] , a modified version of line search for global optimization was proposed; the line search uses randomly generated values for the direction and step-length of the search. Some numerical experiments were performed using popular optimization functions involving fifty dimensions, and comparisons with the standard line search, genetic algorithms and differential evolution were carried out. The empirical results show that the modified line search algorithm performs better than the standard line search and the other techniques for three or four of the test functions considered. [14] focused on line search algorithms for solving large-scale unconstrained optimization problems, such as quasi-Newton methods, truncated Newton and conjugate gradient methods. [15] proposed a line search algorithm based on the Majorize-Minimize principle; a tangent majorant function is built to approximate a scalar criterion containing a barrier function, which leads to a simple line search ensuring the convergence of several classical descent optimization strategies, including the most classical variants of non-linear conjugate gradient. [16] presented the fundamental ideas, concepts and theorems of the basic line search algorithm for solving linear programming problems, which can be regarded as an extension of the Simplex method; the basic line search algorithm can be used to find an optimal solution with only one iteration. [17] presented the performance of a one-dimensional search algorithm for solving general high-dimensional optimization problems, which uses line search algorithms as subroutines.

None of the aforementioned works has gone beyond solving problems in two-dimensional spaces with segmentation. This paper is concerned with obtaining optimal solutions of Linear Programming Problems, through segmentation, in the three-dimensional space of a cuboidal region.

2. Preliminaries

2.1. Three Dimensional Non-Overlapping Segmentation of the Response Surface

The space, $\tilde{X}$ (a cube), is partitioned into subspaces called segments. These segments are non-overlapping with common boundaries. The space $\tilde{X}$ is partitioned into S non-overlapping segments as follows:

In Figure 1(a), the cube (experimental space) is partitioned into two segments, S1 and S2, while in Figure 1(b) and Figure 1(c), the cubes are partitioned into three and four segments, respectively. From the above figures and their respective segments, support points will be picked to form their respective design matrices. The number of support points per segment, according to [18] , should not exceed $\frac{1}{2}p(p+1)+1$, where p is the number of parameters of the regression model under consideration. Therefore, $p \le n \le \frac{1}{2}p(p+1)+1$, where n is


Figure 1. (a): A vertical line, Ƨ, drawn through the middle of a cube [two segments (S = 2)]. (b): A vertical line, Ʈ, and a horizontal line, ƥ, drawn through the middle of a cube [three segments (S = 3)]. (c): A vertical line, Δ, and a horizontal line, Ԓ, drawn through the middle of a cube [four segments (S = 4)].

the maximum number of support points per segment. The number of support points per segment as given by [6] is $n+1 \le N_k \le \frac{1}{2}n(n+1)+1$, where n is the number of variables in the model and $N_k$ is the number of support points in the kth segment. The support points per segment are arbitrarily chosen provided they satisfy the constraint equations and do not lie outside the feasible region.
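
As a small numerical illustration of this bound (the helper function below is ours, not from the source), each segment of a three-variable model may carry between 4 and 7 support points:

```python
# A minimal check of the support-point bound n + 1 <= N_k <= n(n+1)/2 + 1
# quoted above, for n = 3 variables (the three-dimensional case of this paper).
def support_point_bounds(n):
    """Return the (lower, upper) bounds on the number of support points per segment."""
    return n + 1, n * (n + 1) // 2 + 1

print(support_point_bounds(3))  # (4, 7): between 4 and 7 support points per segment
```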

2.2. Rationale of the Segmentation

Design matrices are formed from the support points obtained from each of the segments created above. The segmentation of the response surface according to [6] is a rapid way of improving the average information matrix and obtaining the optimum direction vector. This is achieved by obtaining the linear combination of the information matrices from the different segments. The improved average information matrix (resultant matrix) is used to compute the optimum direction vector, which locates the optimum direction and the optimizer in a very short period or with one iteration. Without segmentation, information leading to the optimizer would have been obtained from only a fraction of the entire response surface.

With segmentation, more support points are available at the boundary of the feasible region. [18] [19] [20] have shown that a design formed with support points taken at the boundary of the feasible region is better than any other design with support points taken at the interior of the feasible region.

Theorem: The average information matrix resulting from pooling the segments using matrices of coefficients of convex combination is

$$M(\zeta_N) = \sum_{k=1}^{s} H_k X_k^T X_k H_k^T.$$

Proof:

$$\underline{X}^T \underline{X} = \operatorname{diag}\{X_1^T X_1, X_2^T X_2, \ldots, X_S^T X_S\} = \begin{bmatrix} X_1^T X_1 & 0 & \cdots & 0 \\ 0 & X_2^T X_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & X_S^T X_S \end{bmatrix},$$

where $H_k$ is the matrix of coefficients of convex combination and $X_k^T X_k$ is the information matrix,

$$H_k = \begin{bmatrix} h_{0k} & 0 & \cdots & 0 \\ 0 & h_{1k} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{nk} \end{bmatrix} = H_k^T, \quad \text{and}$$

$$H_k H_k^T = \begin{bmatrix} h_{0k}^2 & 0 & \cdots & 0 \\ 0 & h_{1k}^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & h_{nk}^2 \end{bmatrix}.$$

Thus,

$$\sum_{k=1}^{s} H_k H_k^T = H_1 H_1^T + H_2 H_2^T + \cdots + H_S H_S^T = \operatorname{diag}\left\{\sum_{k=1}^{s} h_{0k}^2,\ \sum_{k=1}^{s} h_{1k}^2,\ \ldots,\ \sum_{k=1}^{s} h_{nk}^2\right\}.$$

Therefore, $\sum_{k=1}^{s} H_k H_k^T = \operatorname{diag}\{1, 1, \ldots, 1\} = I$, since $\sum_{k=1}^{s} h_{ik}^2 = 1$ for each $i = 0, 1, \ldots, n$.

Therefore, $M(\zeta_N) = \sum_{k=1}^{s} H_k X_k^T X_k H_k^T$, which completes the proof.
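
The following is a minimal numerical illustration of the theorem, with hypothetical two-segment design matrices and coefficients chosen so that the squared coefficients sum to one across segments:

```python
import numpy as np

# Two hypothetical 3x3 design matrices, one per segment (s = 2).
X1 = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
X2 = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 0.0]])

# Diagonal convex-combination matrices with h_{i1}^2 + h_{i2}^2 = 1 for each i.
H1 = np.diag([0.6, 0.8, 0.5])
H2 = np.diag([0.8, 0.6, np.sqrt(0.75)])

# The sum of H_k H_k^T is the identity, as used in the proof above.
print(np.allclose(H1 @ H1.T + H2 @ H2.T, np.eye(3)))     # True

# Pooled (average) information matrix M(zeta_N) = sum_k H_k X_k^T X_k H_k^T.
M = H1 @ X1.T @ X1 @ H1.T + H2 @ X2.T @ X2 @ H2.T
print(M)
```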

3. Methodology

3.1. The Theory of Super Convergent Line Series

3.1.1. Definitions and Preliminaries

The Super Convergent Line Series (SCLS) is defined by [6] as

$$\underline{X} = \bar{\underline{X}} - \rho \underline{d} \qquad (1.1)$$

$\underline{X}$ is the vector of the optimal values;

$\bar{\underline{X}} = \sum_{m=1}^{N} w_m x_m$ is the optimal starting point, where $w_m > 0$, $\sum_{m=1}^{N} w_m = 1$, $w_m = \dfrac{a_m^{-1}}{\sum_{m=1}^{N} a_m^{-1}}$, and

$a_m = x_m^T x_m$, $m = 1, 2, \ldots, N$;

$\underline{d}$ is the direction vector, defined as $\underline{d} = M_A^{-1}(\zeta_N)\,\underline{Z}(\cdot)$, where $\underline{Z}(\cdot) = (Z_0, Z_1, \ldots, Z_n)^T$ is an (n + 1)-component vector of responses, $Z_i = f(\underline{m}_i)$, $\underline{m}_i$ is the ith row of the average information matrix $M_A(\zeta_N)$, and $M_A^{-1}(\zeta_N)$ is the inverse of the average information matrix;

$\rho$ is the step-length, defined as $\rho = \min_i \left\{ \dfrac{\underline{C}_i^T \bar{\underline{X}} - b_i}{\underline{C}_i^T \underline{d}} \right\}$, where $\underline{d}$ is the direction vector, $\underline{C}_i^T$ is the vector of coefficients of the ith linear inequality, $\bar{\underline{X}}$ is the starting point and $b_i$ is the scalar right-hand side of the ith linear inequality;

$\zeta_N$ is an N-point design measure whose support points may or may not have equal weights;

Support points are pairs of points marked on the boundary and interior of the partitioned space, which are picked to form design matrices;

$\tilde{X}$ is the experimental space of the response surface, which can be partitioned into segments such that every pair of support points in a segment is a subset of $\tilde{X}$;

$M(\zeta_{N_k}) = X_k^T X_k$ is the information matrix and $M^{-1}(\zeta_{N_k}) = (X_k^T X_k)^{-1}$ is the inverse information matrix;

S1 is segment 1 and S2 is segment 2;

$\det M(\zeta_{N_k})$ is the determinant of the information matrix;

$H_i$ is the matrix of the coefficients of convex combination and is defined as

$H_i = \operatorname{diag}(h_{i1}, h_{i2}, \ldots, h_{i,n+1})$, $i = 1, 2, \ldots, k$;

With i = 1, 2 segments, the coefficients of convex combinations, $H_i$, of the segments are:

$$H_1 = \operatorname{diag}\left\{ \frac{V_{111}}{V_{111}+V_{211}},\ \frac{V_{122}}{V_{122}+V_{222}},\ \frac{V_{133}}{V_{133}+V_{233}} \right\} = \operatorname{diag}\{h_{11}, h_{12}, h_{13}\} \qquad (1.2)$$

for the inverse information matrix in segment 1, and

$$H_2 = \operatorname{diag}\left\{ \frac{V_{211}}{V_{111}+V_{211}},\ \frac{V_{222}}{V_{122}+V_{222}},\ \frac{V_{233}}{V_{133}+V_{233}} \right\} = \operatorname{diag}\{h_{21}, h_{22}, h_{23}\} \qquad (1.3)$$

for the inverse information matrix in segment 2, where $V_{111}$, $V_{122}$, $V_{133}$ are the variances (diagonal elements) of the inverse information matrix of segment 1 and $V_{211}$, $V_{222}$, $V_{233}$ are those of segment 2, respectively.
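
A minimal sketch of this construction (the helper name and the design matrices below are ours, chosen only for illustration) computes the coefficients of Equations (1.2) and (1.3) directly from the diagonals of the two inverse information matrices:

```python
import numpy as np

def convex_coefficients(X1, X2):
    """Build H1 and H2 of Equations (1.2)-(1.3) from two segment design matrices."""
    V1 = np.diag(np.linalg.inv(X1.T @ X1))   # variances V_{1ii} of segment 1
    V2 = np.diag(np.linalg.inv(X2.T @ X2))   # variances V_{2ii} of segment 2
    H1 = np.diag(V1 / (V1 + V2))             # Equation (1.2)
    H2 = np.diag(V2 / (V1 + V2))             # Equation (1.3)
    return H1, H2

# Hypothetical 4-point, 3-variable design matrices for the two segments.
X1 = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.5], [1.0, 0.5, 0.0], [0.5, 1.0, 1.0]])
X2 = np.array([[1.0, 1.0, 0.0], [0.5, 0.0, 1.0], [1.0, 0.5, 0.5], [0.0, 1.0, 1.0]])
H1, H2 = convex_coefficients(X1, X2)
print(np.diag(H1) + np.diag(H2))             # each pair of coefficients sums to 1
```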

The average information matrix, $M_A(\zeta_N)$, is the sum of the products of the k information matrices and the k matrices of the coefficients of convex combinations; thus

$$M_A(\zeta_N) = \sum_{k=1}^{s} H_k X_k^T X_k H_k^T; \quad \text{see [6]} \qquad (1.4)$$

Segmentation is the partitioning of the experimental space, $\tilde{X}$, into segments. Segmentation can be non-overlapping or overlapping, and support points are selected from each segment to form design matrices.

An unbiased response function is defined by

$$f(x_1, x_2) = a_{00} + a_{10} x_1 + a_{20} x_2 \qquad (1.5)$$

3.1.2. Algorithm for Super Convergent Line Series

The algorithm follows the following sequence of steps:

1) Partition the experimental space (cube) into $k = 1, 2, \ldots, s$ segments and select $N_k$ support points from the kth segment; hence, make up an N-point design,

$$\zeta_N^{(1)} = \begin{Bmatrix} \underline{x}_1, & \underline{x}_2, & \ldots, & \underline{x}_N \\ w_1, & w_2, & \ldots, & w_N \end{Bmatrix}; \qquad N = \sum_{k=1}^{s} N_k.$$

2) Compute the starting point $\bar{\underline{X}}$, the direction vector $\underline{d}$ and the step-length $\rho$.

3) Move to the point $\underline{X} = \bar{\underline{X}} - \rho \underline{d}$.

4) Is $\underline{X} = \underline{X}_f$ (where $\underline{X}_f$ is the optimizer of $f(\cdot)$)?

Yes: stop,

No: then go back to 1) above until the optimal solution is obtained.

5) Identify the segment in which the optimal solution is obtained.

3.2. The Average Information Matrix, the Direction Vector, the Starting Point and the Step-Length

3.2.1. The Average Information Matrix

The average information matrix, $M_A(\zeta_N)$, is the sum of the products of the k information matrices and the k matrices of the coefficients of convex combinations, given by

$$M_A(\zeta_N) = \sum_{k=1}^{s} H_k X_k^T X_k H_k^T.$$

For two segments, the average information matrix is

$$M_A(\zeta_N) = H_1 X_1^T X_1 H_1^T + H_2 X_2^T X_2 H_2^T = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}.$$

3.2.2. The Direction Vector

The direction vector defined in Section 3.1.1 is computed as follows:

If f(x) is the response function, then the response vector, Z, is given by

$$\underline{Z} = \begin{pmatrix} z_0 \\ z_1 \\ z_2 \\ \vdots \\ z_n \end{pmatrix},$$

where $z_0 = f(m_{12}, m_{13}, \ldots, m_{1,n+1})$, $z_1 = f(m_{22}, m_{23}, \ldots, m_{2,n+1})$, $\ldots$, $z_n = f(m_{n+1,2}, m_{n+1,3}, \ldots, m_{n+1,n+1})$.

Hence, the direction vector defined in Section 3.1.1 is computed as

$$\underline{d} = M_A^{-1}(\zeta_N)\,\underline{Z} = \begin{pmatrix} d_0 \\ d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}.$$

By normalizing such that $\underline{d}^{*T} \underline{d}^{*} = 1$, we have

$$\underline{d}^{*} = \begin{pmatrix} \dfrac{d_1}{\sqrt{d_1^2 + d_2^2 + \cdots + d_n^2}} \\ \dfrac{d_2}{\sqrt{d_1^2 + d_2^2 + \cdots + d_n^2}} \\ \vdots \\ \dfrac{d_n}{\sqrt{d_1^2 + d_2^2 + \cdots + d_n^2}} \end{pmatrix},$$

where d0 = 1 is discarded.
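
A minimal sketch of this computation follows; the response function is the objective of Problem 1 below, and the 4 × 4 average information matrix shown (with an intercept row and column) is a hypothetical value assumed only for illustration:

```python
import numpy as np

# Response function f(x1, x2, x3) = 2*x1 + x2 + 2*x3 (objective of Problem 1).
f = lambda x1, x2, x3: 2 * x1 + x2 + 2 * x3

# Hypothetical 4x4 average information matrix (intercept plus three variables).
M_A = np.array([[1.0, 0.4, 0.3, 0.2],
                [0.4, 0.9, 0.1, 0.2],
                [0.3, 0.1, 0.8, 0.1],
                [0.2, 0.2, 0.1, 0.7]])

# z_i = f(.) evaluated at the non-intercept entries of the i-th row of M_A.
Z = np.array([f(*row[1:]) for row in M_A])

d = np.linalg.solve(M_A, Z)              # d = M_A^{-1} Z
d_star = d[1:] / np.linalg.norm(d[1:])   # discard d0 and normalize to unit length
print(d_star)
```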

3.2.3. Optimal Starting Point

The optimal starting point is obtained from the design matrices of the segments considered. The optimal starting point defined in Section 3.1.1 is obtained as follows:

$$\bar{\underline{X}} = \sum_{m=1}^{N} w_m x_m; \quad w_m > 0; \quad \sum_{m=1}^{N} w_m = 1; \quad w_m = \frac{a_m^{-1}}{\sum_{m=1}^{N} a_m^{-1}};$$

$$a_m = x_m^T x_m, \quad m = 1, 2, \ldots, N.$$

Using a 4-point design matrix in each of the two segments (so that N = 8 support points in all), $\bar{\underline{X}} = \sum_{m=1}^{8} w_m x_m$, with $w_m = \dfrac{a_m^{-1}}{\sum_{m=1}^{8} a_m^{-1}}$, $m = 1, 2, \ldots, 8$.
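
A minimal sketch of the starting-point computation (the eight pooled support points below are hypothetical, and the intercept column of the design matrices is omitted here for simplicity):

```python
import numpy as np

# Eight hypothetical support points pooled from two 4-point segment designs.
support_points = np.array([[0.0, 1.0, 0.0],
                           [0.0, 1.0, 1.0],
                           [0.0, 0.0, 1.0],
                           [0.5, 0.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 0.0, 0.5],
                           [1.0, 0.0, 0.0],
                           [1.0, 0.5, 0.0]])

a = np.einsum('ij,ij->i', support_points, support_points)   # a_m = x_m^T x_m
w = (1.0 / a) / np.sum(1.0 / a)                              # w_m = a_m^{-1} / sum a_m^{-1}
x_bar = w @ support_points                                   # optimal starting point
print(x_bar)
```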

3.2.4. The Step-Length

The step-length is defined by

$$\rho = \min_i \left\{ \frac{\underline{C}_i^T \bar{\underline{X}} - b_i}{\underline{C}_i^T \underline{d}} \right\},$$

where $\rho$ is the optimal step-length, $\underline{d}$ is the normalized direction vector, $\underline{C}_i^T$ is the vector of coefficients of the ith linear inequality, $\bar{\underline{X}}$ is the starting point and $b_i$ is the scalar right-hand side of the ith linear inequality.
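
A minimal sketch of this rule in code, using the constraint system of Problem 1 below together with a hypothetical starting point and normalized direction:

```python
import numpy as np

# Constraint system of Problem 1 below: C x <= b with x >= 0.
C = np.array([[4.0, 3.0, 8.0],
              [4.0, 1.0, 12.0],
              [4.0, -1.0, 3.0]])
b = np.array([12.0, 8.0, 8.0])

x_bar = np.array([0.30, 0.28, 0.15])      # hypothetical starting point
d = np.array([0.8944, 0.4472, 0.8944])    # hypothetical normalized direction

ratios = (C @ x_bar - b) / (C @ d)        # (C_i^T x_bar - b_i) / (C_i^T d), per constraint
rho = ratios.min()                        # step-length rho = min over the constraints
x_new = x_bar - rho * d                   # move to the point X = X_bar - rho d
print(rho, x_new)
```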

4. Results and Discussion

4.1. Comparison of Results Obtained Using the Segmentation Procedure with Existing Results in the Literature

Problem 1 ( [21] , Problem 7.2B, Question 2b, p. 304):

Maximize Z = 2x1 + x2 + 2x3

Subject to 4x1 + 3x2 + 8x3 ≤ 12,

4x1 + x2 + 12x3 ≤ 8,

4x1 - x2 + 3x3 ≤ 8,

x1, x2, x3 ≥ 0.

Support points are picked from the boundaries of the partitioned segments (Figure 2) provided they do not violate the constraint equations.

Thus X1 = {(0, 1, 0), (0, 1, 1), (0, 0, 1), (1/2, 0, 0), …, (1/4, 0, 0)} and

X2 = {(1, 0, 1), (0, 0, 1/2), (1, 0, 0), (1, 1/2, 0), (1, 1, 0), …, (1/2, 0, 0)},

where X1 and X2 are obtained from S1 and S2, respectively.

Thus, the design and inverse matrices are given as follows (from Figure 2):

$$X_1 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1/2 \\ 1 & 0 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \end{pmatrix}; \quad X_2 = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \\ 1 & 0 & 0 & 1/2 \end{pmatrix},$$

Figure 2. Using 2 segments (S = 2).

$$(X_1^T X_1)^{-1} = \begin{pmatrix} 5 & 10 & 6 & 10 \\ 10 & 24 & 12 & 20 \\ 6 & 12 & 8 & 12 \\ 10 & 20 & 12 & 24 \end{pmatrix}; \quad (X_2^T X_2)^{-1} = \begin{pmatrix} 9 & 14 & 6 & 18 \\ 14 & 24 & 12 & 28 \\ 6 & 12 & 8 & 12 \\ 18 & 28 & 12 & 40 \end{pmatrix}$$

The direction vector is $\underline{d} = (2.000, 1.000, 2.000)^T$; by normalizing $\underline{d}$, we get $\underline{d}^{*} = (0.8944, 0.4472, 0.8944)^T$ (see Section 3.2.2).

$\bar{\underline{X}} = \sum_{i=1}^{N} w_i x_i = (0.2990, 0.2758, 0.1516)^T$; the step-length is $\rho = 1.1396$; and $\underline{X} = \bar{\underline{X}} - \rho \underline{d} = (1.318, 0.7854, 1.1709)^T$.

Therefore, Max Z = 5.46.

With S = 2 (two segments), the value of Z is Max Z = 5.46 (obtained in one iteration), which is close to the optimal value Max Z = 5.00 (obtained in 3 iterations) reported by [21] , Problem 7.2B, Question 2b, p. 304. The maximum values of Z for this problem using 3 and 4 segments are: 5.81 for (x1, x2, x3) = (1.265, 0.7891, 1.2431), and 5.77 for (x1, x2, x3) = (1.0008, 0.6606, 1.5523). These values are not optimal because they do not compare favourably with the existing solution obtained by [21] using the simplex method.
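
As an independent cross-check of the simplex value quoted above, Problem 1 can be solved with SciPy's linprog (a minimal sketch assuming SciPy is available; linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

c = [-2, -1, -2]                                 # maximize 2x1 + x2 + 2x3
A_ub = [[4, 3, 8], [4, 1, 12], [4, -1, 3]]       # constraint coefficients
b_ub = [12, 8, 8]                                # right-hand sides

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)                           # LP optimum for comparison with the values above
```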

Problem 2 ( [22] , Ex. 2.4, Q. 14(ii), p. 215):

Maximize Z = 5x1 + 3x2 + 7x3

Subject to x1 + x2 + 2x3 ≤ 26,

3x1 + 2x2 + x3 ≤ 26,

x1 + x2 + x3 ≤ 18,

x1, x2, x3 ≥ 0.

Support points are picked from the boundaries of the partitioned segments (from Figure 3) provided they do not violate the constraint equations.

Thus X1 = {(0, 1, 0), (0, 1, 1), (0, 0, 1), (1/2, 0, 0), …, (1/4, 0, 0)} and

X2 = {(1, 0, 1), (0, 0, 1/2), (1, 1, 1), (1, 1/2, 0), (1, 1, 0), …, (1/2, 0, 0)}.

Thus, the design and inverse matrices are given as follows (from Figure 3):

$$X_1 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1/4 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}; \quad X_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \end{pmatrix},$$

$$(X_1^T X_1)^{-1} = \begin{pmatrix} 3 & 12 & 2 & 2 \\ 12 & 64 & 8 & 8 \\ 2 & 8 & 2 & 1 \\ 2 & 8 & 1 & 2 \end{pmatrix}; \quad (X_2^T X_2)^{-1} = \begin{pmatrix} 9 & 14 & 6 & 5 \\ 14 & 24 & 12 & 10 \\ 6 & 12 & 8 & 6 \\ 5 & 10 & 6 & 6 \end{pmatrix}$$

The direction vector is $\underline{d} = (5.0002, 2.9998, 6.9999)^T$; by normalizing $\underline{d}$, we get $\underline{d}^{*} = (0.5488, 0.3293, 0.7683)^T$ (see Section 3.2.2).

$\bar{\underline{X}} = \sum_{i=1}^{N} w_i x_i = (0.3812, 0.3318, 0.2787)^T$; the step-length is $\rho = 10.3306$; and $\underline{X} = \bar{\underline{X}} - \rho \underline{d} = (6.0506, 3.7337, 8.2157)^T$.

Therefore, Max Z = 98.96.

With S = 2 (two segments), the value of Z is Max Z = 98.96 (obtained in one iteration), which is close to the

Figure 3. Using 2 Segments (S = 2).

optimal value Max Z = 98.80, obtained in two iterations by [22] , Ex. 2.4, Q. 14(ii), p. 215, using the simplex method. The maximum values of Z for this problem using 3 and 4 segments are: 99.06 for (x1, x2, x3) = (6.0213, 3.725, 8.2537) and 99.15 for (x1, x2, x3) = (6.0746, 3.675, 8.2503).

4.2. Illustrative Problem and Application

A producer of leather shoes makes three types of shoes, X, Y and Z, which are processed on three machines, K1, K2 and K3. The daily capacities of the machines are given in Table 1 as follows.

The profit gained from shoe X is ₦3 per unit, shoe Y is ₦5 per unit and shoe Z is ₦4 per unit. What is the maximum profit for the three types of shoe produced?

Solution: Let x1 be the number of units of type X, x2 the number of units of type Y and x3 the number of units of type Z.

Maximize Z = 3x1 + 5x2 + 4x3

Subject to 2x1 + 3x2 ≤ 8,

2x2 + 5x3 ≤ 10,

3x1 + 2x2 + 4x3 ≤ 15,

x1, x2, x3 ≥ 0.

In a similar manner, the design and inverse matrices are given as follows [from Figure 4].

Table 1. The daily capacity of the machines.

Figure 4. Using 2 segments (S = 2).

$$X_1 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 1/4 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}; \quad X_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1/2 & 0 \\ 1 & 1/2 & 0 & 0 \end{pmatrix},$$

$$(X_1^T X_1)^{-1} = \begin{pmatrix} 3 & 12 & 2 & 2 \\ 12 & 64 & 8 & 8 \\ 2 & 8 & 2 & 1 \\ 2 & 8 & 1 & 2 \end{pmatrix}; \quad (X_2^T X_2)^{-1} = \begin{pmatrix} 9 & 14 & 6 & 5 \\ 14 & 24 & 12 & 10 \\ 6 & 12 & 8 & 6 \\ 5 & 10 & 6 & 6 \end{pmatrix}.$$

The direction vector is $\underline{d} = (3, 5, 4)^T$; by normalizing $\underline{d}$, we get $\underline{d}^{*} = (0.4243, 0.7071, 0.5657)^T$; $\bar{\underline{X}} = \sum_{i=1}^{N} w_i x_i = (0.3027, 0.2953, 0.2483)^T$; the step-length is $\rho = 2.1916$; and $\underline{X} = \bar{\underline{X}} - \rho \underline{d} = (1.2326, 1.845, 1.4881)^T$. Therefore, the maximum value of Z is ₦18.88.

This value, obtained in one iteration, is close to the optimum value obtained by the simplex method approach (in three iterations). When 3 and 4 segments were used, the maximum values of Z for this problem were ₦21.25, with corresponding values (x1, x2, x3) = (1.37, 2.08, 1.68), and ₦21.03, with corresponding values (x1, x2, x3) = (1.43, 2.01, 1.67). These values are not optimal because they do not compare favourably with the simplex method solution, which is Max Z = ₦18.66.
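
A similar cross-check of the shoe-production model (again a minimal sketch assuming SciPy's linprog is available, with the profit vector negated because linprog minimizes) should reproduce the simplex optimum of ₦18.66 quoted above:

```python
from scipy.optimize import linprog

c = [-3, -5, -4]                                  # maximize 3x1 + 5x2 + 4x3
A_ub = [[2, 3, 0], [0, 2, 5], [3, 2, 4]]          # machine-capacity constraints
b_ub = [8, 10, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)                            # expected: about (2.17, 1.22, 1.51) and 18.66
```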

5. Conclusion

Three-dimensional Linear Programming problems have been solved using the line search equation, $\underline{X} = \bar{\underline{X}} - \rho \underline{d}$, of the Super Convergent Line Series, by segmenting the cuboidal response surface into 2, 3 and 4 segments. A real-life problem was also used to achieve the desired result. It was found that the optimal solution is attained at 2 segments (S = 2), and in one iteration or move, even though up to 4 segments (S = 4) were considered, whereas the comparable simplex method results were obtained in 2 and 3 iterations. Hence, as the name implies, the Super Convergent Line Series (SCLS) locates the optimizer in one iteration, and better still with segmentation.

Cite this paper

Ugbe, T. and Chigbu, P. (2017) On Optimal Non-Overlapping Segmentation and Solutions of Three-Dimensional Linear Programming Problems through the Super Convergent Line Series. American Journal of Operations Research, 7, 225-238. https://doi.org/10.4236/ajor.2017.73015

References

1. Gass, S.I. (1958) Linear Programming Methods and Applications. McGraw-Hill, New York.

2. Dantzig, G.B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton. https://doi.org/10.1515/9781400884179

3. Philip, D.T., Walter, M. and Wright, M.H. (1981) Practical Optimization. Academic Press, London.

4. Wilde, D.J. and Beightler, C.S. (1967) Foundations of Optimization. Prentice Hall Inc., Upper Saddle River.

5. Myers, R.H. (1971) Response Surface Methodology. Allyn & Bacon, Boston.

6. Onukogu, I.B. and Chigbu, P.E. (2002) Super Convergent Line Series (in Optimal Design of Experiment and Mathematical Programming). AP Express Publishing, Nsukka.

7. Chigbu, P.E. and Ugbe, T.A. (2002) On the Segmentation of the Response Surfaces for Super Convergent Line Series Optimal Solutions of Constrained Linear and Quadratic Programming Problem. Global Journal of Mathematical Sciences, 1, 27-34.

8. Chigbu, P.E. and Ukaegbu, E.C. (2007) On the Precision and Mean Square Error Matrices Approaches in Obtaining the Average Information Matrices via the Super Convergent Line Series. Journal of Nigerian Statistical Association, 19, 4-18.

9. Etukudo, I.A. and Umoren, M.U. (2008) A Modified Super Convergent Line Series Algorithm for Solving Linear Programming Problems. Journal of Mathematical Sciences, 19, 73-88.

10. Iwundu, M.P. and Hezekiah, J.E. (2014) Algorithmic Approach to Solving Linear Programming Problems on Segmented Regions. Asian Journal of Mathematics and Statistics, 7, 40-59. https://doi.org/10.3923/ajms.2014.40.59

11. Iwundu, M.P. and Ebong, D.W. (2014) Modified Quick Convergent Inflow Algorithm for Solving Linear Programming Problems. International Journal of Probability and Statistics, 3, 54-66. https://doi.org/10.5539/ijsp.v3n4p54

12. Ugbe, T.A. and Chigbu, P.E. (2014) On Non-Overlapping Segmentation of the Response Surfaces for Solving Constrained Programming Problems through Super Convergent Line Series. Communications in Statistics—Theory and Methods, 43, 306-320. https://doi.org/10.1080/03610926.2012.661510

13. Grosan, C. and Abraham, A. (2007) Modified Line Search Method for Global Optimization. Proceedings of the 1st Asia International Conference on Modeling and Simulation, Phuket, 27-30 March 2007, 415-420. https://doi.org/10.1109/AMS.2007.68

14. Andrei, N. (2008) Performance Profiles of Line Search Algorithm for Unconstrained Optimization. Research Institute for Informatics, Centre for Advanced Modeling and Optimization, ICI Technical Report. https://www.ici.ro/neculai/p12a08

15. Chouzenoux, E., Moussaoui, S. and Idier, J. (2009) A Majorize-Minimize Line Search Algorithm for Barrier Function Optimization. 17th European Signal Processing Conference, Glasgow, 24-28 August 2009, 1379-1383.

16. Zhu, S., Ruan, G. and Huang, X. (2010) Some Fundamental Issues of Basic Line Search Algorithm for Linear Programming Problems. Optimization, 59, 1283-1295. https://doi.org/10.1080/02331930903395873

17. Gardeux, V., Chelouah, R., Siarry, P. and Glover, F. (2011) EM323: A Line Search Algorithm for Solving High-Dimensional Continuous Non-Linear Optimization Problems. Soft Computing, 15, 2275-2285. https://doi.org/10.1007/s00500-010-0651-6

18. Pazman, A. (1987) Foundations of Optimum Experimental Design. D. Reidel Publishing Company, Dordrecht.

19. Onukogu, I.B. (1997) Foundation of Optimal Exploration of Response Surfaces. Ephrata Press, Nsukka.

20. Smith, W.F. (2005) Experimental Design for Formulation (ASA-SIAM Series on Statistics and Applied Probability). SIAM, Philadelphia; ASA, Alexandria, VA.

21. Taha, H.A. (2006) Operations Research: An Introduction. 7th Edition, Macmillan Publishing Company, New York.

22. Gupta, P.K. and Hira, D.S. (2008) Operations Research. S. Chand and Company Limited, New Delhi.