**American Journal of Operations Research**

Vol.07 No.03(2017), Article ID:76408,14 pages

10.4236/ajor.2017.73015

On Optimal Non-Overlapping Segmentation and Solutions of Three-Dimensional Linear Programming Problems through the Super Convergent Line Series

Thomas Ugbe^{1}, Polycarp Chigbu^{2}

^{1}Department of Statistics, University of Calabar, Calabar, Nigeria

^{2}Department of Statistics, University of Nigeria, Nsukka, Nigeria

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 28, 2017; Accepted: May 21, 2017; Published: May 24, 2017

ABSTRACT

Solutions of Linear Programming Problems are obtained by segmenting the cuboidal response surface and applying the Super Convergent Line Series methodologies. The cuboidal response surface was partitioned into up to four segments and explored. It was verified that the number of segments, S, for which optimal solutions are obtained is two (S = 2). Illustrative examples and a real-life problem are also given and solved.

**Keywords:**

Average Information Matrix, Experimental Space, Line Search Algorithm, Support Points, Optimal Solution

1. Introduction

Linear Programming (LP) problems belong to a class of constrained convex optimization problems which have been widely discussed by several authors: see [1] [2] [3] . The commonly used algorithms for solving Linear Programming problems are the Simplex method, which requires the use of artificial variables and surplus or slack variables, and the active set method, which requires the use of artificial constraints and variables. Over the years, a variety of line search algorithms have been employed in locating the local optimizer of response surface methodology (RSM) problems: see [4] and [5] . Similarly, the active set and simplex methods, which are available for solving linear programming problems, also belong to the class of line search exchange algorithms.

The line search algorithm, which is built around the concept of super convergence, has several points of departure from the classical, gradient-based line series. These gradient-based line series often fail to converge to the optimum, but the Super Convergent Line Series (SCLS), which are also gradient-based techniques, locate the global optimum of response surfaces with certainty. The Super Convergent Line Series (SCLS) was introduced by [6] , and later used by [7] and [8] . [9] modified the Super Convergent Line Series and used it to solve Linear Programming Problems; [10] applied the Quick Convergent Inflow Algorithm to solve Constrained Linear Programming Problems on a segmented region; and [11] modified the “Quick Convergent Inflow Algorithm” and used it to solve Linear Programming Problems based on the variance of the predicted response. In [12] , it was verified and established that, for non-overlapping segmentation of the response surface, the best number of segments is two (S = 2) for Linear Programming Problems, four (S = 4) for Quadratic Programming Problems and eight (S = 8) for Cubic Programming Problems. The above algorithms compared favourably with other line search algorithms that utilize the principles of experimental design.

Other recent studies on line search algorithms for optimization problems include the following. In [13] , a modified version of line search for global optimization was proposed; the line search uses a technique for the determination of randomly generated values for the direction and step-length of the search. Some numerical experiments were performed using popular optimization functions involving fifty dimensions, and comparisons with the standard line search, genetic algorithms and differential evolution were made; empirical results illustrate that the modified line search algorithm performs better than the standard line search and the other techniques for three or four of the test functions considered. [14] focused on line search algorithms for solving large-scale unconstrained optimization problems, such as quasi-Newton methods, truncated Newton and conjugate gradient. [15] proposed a line search algorithm based on the Majorize-Minimize principle; here, a tangent majorant function is built to approximate a scalar criterion containing a barrier function, which leads to a simple line search ensuring the convergence of several classical descent optimization strategies, including the most classical variants of non-linear conjugate gradient. [16] presented the fundamental ideas, concepts and theorems of a basic line search algorithm for solving linear programming problems, which can be regarded as an extension of the Simplex method; the basic line search algorithm can find an optimal solution in only one iteration. [17] presented the performance of a one-dimensional search algorithm for solving general high-dimensional optimization problems, which uses line search algorithms as subroutines.

None of the aforementioned works has gone beyond solving problems in two-dimensional spaces with segmentation. This paper is concerned with obtaining optimal solutions, via segmentation, of Linear Programming Problems in the three-dimensional space of a cuboidal region.

2. Preliminaries

2.1. Three Dimensional Non-Overlapping Segmentation of the Response Surface

The space, $\widehat{X}$ , (the shape of a cube) is partitioned into subspaces called segments. These segments are non-overlapping with common boundaries. The space, $\widehat{X}$ , is partitioned into S non-overlapping segments as follows:

In Figure 1(a), the cube (experimental space) is partitioned into two segments, S_{1} and S_{2}, while in Figure 1(b) and Figure 1(c), the cubes are partitioned into three and four segments, respectively. From these figures and their respective segments, support points are picked to form the respective design matrices. The number of support points per segment, according to [18] , should not exceed $1/2p\left(p+1\right)+1$ , where p is the number of parameters of the regression model under consideration. Therefore, $p\le n\le 1/2p\left(p+1\right)+1$ , where n is the maximum number of support points per segment.

Figure 1. (a) A vertical line, Ƨ, drawn through the middle of a cube [two segments (S = 2)]; (b) a vertical line, Ʈ, and a horizontal line, ƥ, drawn through the middle of a cube [three segments (S = 3)]; (c) a vertical line, Δ, and a horizontal line, Ԓ, drawn through the middle of a cube [four segments (S = 4)].

The number of support points per segment as given by [6] is $n+1\le {N}_{k}\le 1/2n\left(n+1\right)+1$ , where n is the number of variables in the model and N_{k} is the number of support points in the kth segment. The support points per segment are arbitrarily chosen, provided they satisfy the constraint equations and do not lie outside the feasible region.

2.2. Rationale of the Segmentation

Design matrices are formed from the support points obtained from each of the segments created above. The segmentation of the response surface, according to [6] , is a rapid way of improving the average information matrix and obtaining the optimum direction vector. This is achieved by taking a linear combination of the information matrices from the different segments. The improved average information matrix (resultant matrix) is used to compute the optimum direction vector, which locates the optimum direction and the optimizer in a very short time, often in one iteration. Without segmentation, information leading to the optimizer would be obtained from only a fraction of the entire response surface.

With segmentation, more support points are available at the boundary of the feasible region. [18] [19] [20] have shown that a design formed with support points taken at the boundary of the feasible region is better than any other design with support points taken at the interior of the feasible region.

Theorem: The average information matrix resulting from pooling the segments using the matrices of coefficients of convex combination is

$M\left({\zeta}_{n}\right)={\displaystyle \sum _{k=1}^{s}{H}_{k}{X}_{k}^{\text{T}}{X}_{k}{H}_{k}^{\text{T}}}.$

Proof:

$\begin{array}{c}{\underset{\_}{X}}^{\text{T}}\underset{\_}{X}=\text{diag}\left\{{X}_{1}^{\text{T}}{X}_{1},{X}_{2}^{\text{T}}{X}_{2},\cdots ,{X}_{S}^{\text{T}}{X}_{S}\right\}\\ =\left[\begin{array}{cccc}{X}_{1}^{\text{T}}{X}_{1}& 0& \cdots & 0\\ 0& {X}_{2}^{\text{T}}{X}_{2}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {X}_{S}^{\text{T}}{X}_{S}\end{array}\text{}\text{}\right],\end{array}$

where ${H}_{k}$ is the matrix of coefficients of convex combination and ${X}_{k}^{\text{T}}{X}_{k}$ is the information matrix of the kth segment,

${H}_{K}=\left[\begin{array}{cccc}{h}_{0k}& 0& \cdots & 0\\ 0& {h}_{1k}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{nk}\end{array}\text{}\text{}\right]$ , ${H}_{K}^{\text{T}}=\left[\begin{array}{cccc}{h}_{0k}& 0& \cdots & 0\\ 0& {h}_{1k}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{nk}\end{array}\text{}\text{}\right]$ and

${H}_{K}{H}_{K}^{\text{T}}=\left[\begin{array}{cccc}{h}_{0k}^{2}& 0& \cdots & 0\\ 0& {h}_{1k}^{2}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{nk}^{2}\end{array}\text{}\text{}\right]$ .

Thus,

$\begin{array}{c}{\displaystyle \sum _{k=1}^{s}{H}_{K}{H}_{K}^{\text{T}}}={H}_{1}{H}_{1}^{\text{T}}+{H}_{2}{H}_{2}^{\text{T}}+\cdots +{H}_{S}{H}_{S}^{\text{T}}\\ =\left[\begin{array}{cccc}{h}_{01}^{2}& 0& \cdots & 0\\ 0& {h}_{11}^{2}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{n1}^{2}\end{array}\text{}\text{}\right]+\left[\begin{array}{cccc}{h}_{02}^{2}& 0& \cdots & 0\\ 0& {h}_{12}^{2}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{n2}^{2}\end{array}\text{}\text{}\right]+\cdots +\left[\begin{array}{cccc}{h}_{0s}^{2}& 0& \cdots & 0\\ 0& {h}_{1s}^{2}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{ns}^{2}\end{array}\text{}\text{}\right]\\ =\left[\begin{array}{cccc}{h}_{01}^{2}+{h}_{02}^{2}+\cdots +{h}_{0s}^{2}& 0& \cdots & 0\\ 0& {h}_{11}^{2}+{h}_{12}^{2}+\cdots +{h}_{1s}^{2}& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & {h}_{n1}^{2}+{h}_{n2}^{2}+\cdots +{h}_{ns}^{2}\end{array}\text{}\text{}\right]\end{array}$

Therefore, $\sum _{k=1}^{s}{H}_{k}{H}_{k}^{\text{T}}=\left[\begin{array}{cccc}1& 0& \cdots & 0\\ 0& 1& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & 1\end{array}\right]=I$ , since $\sum _{k=1}^{s}{h}_{ik}^{2}=1$ for each $i=0,1,\cdots ,n$ .

Therefore, $M\left({\zeta}_{n}\right)={\displaystyle \sum _{k=1}^{s}{H}_{K}{X}_{K}^{\text{T}}{X}_{K}{H}_{K}^{\text{T}}}$
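The identity used in the last step can be checked numerically. In this sketch the coefficients h_{ik} and the design matrices are illustrative choices of ours, constructed so that the squared coefficients sum to one for each i:

```python
import numpy as np

# Coefficients chosen so that h_{i1}^2 + h_{i2}^2 = 1 for every i (s = 2).
h1 = np.array([0.6, 0.8, 0.28])
h2 = np.sqrt(1.0 - h1 ** 2)
H1, H2 = np.diag(h1), np.diag(h2)

# Illustrative segment design matrices (intercept in the first column).
X1 = np.array([[1.0, 0.0, 1.0], [1.0, 0.5, 0.0], [1.0, 0.0, 0.5]])
X2 = np.array([[1.0, 1.0, 0.0], [1.0, 0.5, 0.5], [1.0, 1.0, 1.0]])

# Sum of H_k H_k^T is the identity, and M(zeta_n) pools the two segments.
M = H1 @ X1.T @ X1 @ H1.T + H2 @ X2.T @ X2 @ H2.T
```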

3. Methodology

3.1. The Theory of Super Convergent Line Series

3.1.1. Definitions and Preliminaries

The Super Convergent Line Series (SCLS) is defined by [6] as

$\underset{\_}{X}=\overline{\underset{\_}{X}}-\rho \underset{\_}{d}$ (1.1)

$\underset{\_}{X}$ is the vector of the optimal values,

$\overline{\underset{\_}{X}}={\displaystyle \sum _{m=1}^{N}{w}_{m}{x}_{m}}$ is the optimal starting point, where ${w}_{m}>0$ ; $\sum _{m=1}^{N}{w}_{m}=1$ , ${w}_{m}=\frac{{a}_{m}^{-1}}{{\displaystyle \sum _{m=1}^{N}{a}_{m}^{-1}}}$ ,

${a}_{m}={X}_{m}^{\text{T}}{X}_{m},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,N.$

$\underset{\_}{d}$ is the direction vector defined as $\underset{\_}{d}={M}_{A}^{-1}\left({\zeta}_{N}\right)\underset{\_}{Z}(.)$ , where $\underset{\_}{Z}(.)={\left({Z}_{0},{Z}_{1},\cdots ,{Z}_{n}\right)}^{\text{T}}$ is an (n + 1)-component vector of responses, ${Z}_{i}=f\left({m}_{i}\right)$ and ${m}_{i}$ is the ith row of the average information matrix, ${M}_{A}\left({\zeta}_{N}\right)$ ; ${M}_{A}^{-1}\left({\zeta}_{N}\right)$ is the inverse of the average information matrix;

$\rho $ is the step-length defined as $\rho =\mathrm{min}\left\{\frac{{\underset{\_}{C}}_{i}^{\text{T}}\underset{\_}{\overline{X}}-{b}_{i}}{{\underset{\_}{C}}_{i}^{\text{T}}\underset{\_}{d}}\right\}$ , where $\underset{\_}{d}$ is the direction vector, ${\underset{\_}{C}}_{i}^{\text{T}}$ is the vector of coefficients of the ith linear inequality, $\underset{\_}{\overline{X}}$ is the starting point and ${b}_{i}$ is the scalar right-hand side of the ith linear inequality;

${\zeta}_{N}$ is an N-point design measure whose support points may or may not have equal weights;

Support points are pairs of points marked on the boundary and interior of the partitioned space which are picked to form design matrices;

$\tilde{X}$ is the experimental space of the response surface that can be partitioned into segments such that every pair of support points in the segment is a subset of $\tilde{X}$ ;

$M\left({\zeta}_{nk}\right)=\left({X}_{k}^{\text{T}}{X}_{k}\right)$ is the information matrix, ${M}^{-1}\left({\zeta}_{nk}\right)={\left({X}_{k}^{\text{T}}{X}_{k}\right)}^{-1}$ is the inverse information matrix;

S_{1} is segment 1, S_{2} is segment 2,

$\mathrm{det}M\left({\zeta}_{nk}\right)$ is the determinant of the information matrix;

H_{i} is the matrix of the coefficients of convex combination and is defined as

${H}_{i}=\text{diag}\left({h}_{i1},{h}_{i2},\cdots ,{h}_{i,n+1}\right),\text{\hspace{0.17em}}i=1,2,\cdots ,k;$

With i = 1, 2 segments, the coefficients of convex combinations, H_{i}, of the segments are:

${H}_{1}=\text{diag}\left\{\frac{{V}_{111}}{{V}_{111}+{V}_{211}},\frac{{V}_{122}}{{V}_{122}+{V}_{222}},\frac{{V}_{133}}{{V}_{133}+{V}_{233}}\right\}=\text{diag}\left\{{h}_{11},{h}_{12},{h}_{13}\right\}$ (1.2)

for the inverse information matrix in segment 1,

${H}_{2}=\text{diag}\left\{\frac{{V}_{211}}{{V}_{111}+{V}_{211}},\frac{{V}_{222}}{{V}_{122}+{V}_{222}},\frac{{V}_{233}}{{V}_{133}+{V}_{233}}\right\}=\text{diag}\left\{{h}_{21},{h}_{22},{h}_{23}\right\}$ (1.3)

for the inverse information matrix in segment 2,

where V_{111}, V_{122}, V_{133} are the variances (diagonal entries) of the inverse information matrix of segment 1 and V_{211}, V_{222}, V_{233} are the variances of the inverse information matrix of segment 2, respectively.
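Equations (1.2) and (1.3) can be sketched directly; the variance values below are illustrative numbers of ours, not taken from the paper:

```python
import numpy as np

def convex_coefficients(inv_M1, inv_M2):
    """H1, H2 from the diagonal variances of the two inverse information
    matrices, following Eqs. (1.2) and (1.3)."""
    v1 = np.diag(inv_M1).astype(float)
    v2 = np.diag(inv_M2).astype(float)
    return np.diag(v1 / (v1 + v2)), np.diag(v2 / (v1 + v2))

# Illustrative inverse information matrices (only the diagonals matter here).
inv_M1 = np.diag([5.0, 24.0, 8.0])
inv_M2 = np.diag([9.0, 24.0, 8.0])
H1, H2 = convex_coefficients(inv_M1, inv_M2)
```

By construction each pair of coefficients sums to one coordinatewise, so the combination of the two segments is convex.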

The average information matrix, ${M}_{A}\left({\zeta}_{N}\right)$ , is the sum of the product of the k information matrices and the k matrices of the coefficients of convex combinations, thus

${M}_{A}\left({\zeta}_{N}\right)={\displaystyle \sum _{k=1}^{s}{H}_{k}{X}_{k}^{\text{T}}{X}_{k}{H}_{k}^{\text{T}}};$ see [6] (1.4)

Segmentation is the partitioning of the experimental space, $\tilde{X}$ , into segments. Segmentation can be non-overlapping or overlapping, and support points are selected from each segment to form design matrices.

An unbiased (first-order) response function in the three variables is defined by

$f\left({x}_{1},{x}_{2},{x}_{3}\right)={a}_{00}+{a}_{10}{x}_{1}+{a}_{20}{x}_{2}+{a}_{30}{x}_{3}$ (1.5)

3.1.2. Algorithm for Super Convergent Line Series

The algorithm proceeds in the following sequence of steps:

1) Partition the experimental space (cube) into $s$ segments and select ${N}_{k}$ support points from the kth segment, $k=1,2,\cdots ,s$ ; hence, make up an N-point design,

${\zeta}_{N}^{\left(1\right)}=\left\{\begin{array}{c}{\underset{\_}{x}}_{1},{\underset{\_}{x}}_{2},\cdots ,{\underset{\_}{x}}_{N}\\ {w}_{1},{w}_{2},\cdots ,{w}_{N}\end{array}\right\};\text{\hspace{0.17em}}\text{\hspace{0.17em}}N={\displaystyle \sum _{k=1}^{s}{N}_{k}.}$

2) Compute the vectors, ${\overline{\underset{\_}{X}}}^{\ast},{d}^{\ast},{\rho}^{\ast}.$

3) Move to the point, ${\underset{\_}{X}}^{\ast}={\overline{\underset{\_}{X}}}^{\ast}-{\rho}^{\ast}{d}^{\ast}.$

4) Is ${\underset{\_}{X}}^{\ast}={\underset{\_}{X}}_{f}^{\ast}$ ? (where ${\underset{\_}{X}}_{f}^{\ast}$ is the optimizer of $f(\cdot )$ ).

Yes: stop,

No: then go back to 1) above until the optimal solution is obtained.

5) Identify the segment in which the optimal solution is obtained.
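For a linear objective f(x) = c^T x maximized over constraints Cx ≤ b, the steps above can be sketched in one routine. This is our reading of Sections 3.1-3.2, with the H_k built as in Eqs. (1.2)-(1.3) and the response vector z taken from the rows of the average information matrix; the function and variable names are ours:

```python
import numpy as np

def scls_iteration(X1, X2, c, C, b):
    """One SCLS pass (steps 1-3) on two segments; X1, X2 are the segment
    design matrices with the intercept in the first column (a sketch)."""
    # Step 1 has already chosen the support points: the rows of X1 and X2.
    v1 = np.diag(np.linalg.inv(X1.T @ X1))
    v2 = np.diag(np.linalg.inv(X2.T @ X2))
    H1 = np.diag(v1 / (v1 + v2))               # as in Eqs. (1.2)-(1.3)
    H2 = np.diag(v2 / (v1 + v2))
    M_A = H1 @ X1.T @ X1 @ H1.T + H2 @ X2.T @ X2 @ H2.T

    # Step 2: starting point (weights from a_m = x_m^T x_m over the pooled
    # design rows), direction vector and step-length.
    rows = np.vstack([X1, X2])
    a = np.einsum('ij,ij->i', rows, rows)
    w = (1.0 / a) / np.sum(1.0 / a)
    x_bar = (w @ rows)[1:]                     # drop the intercept coordinate

    z = M_A[:, 1:] @ c                         # z_i = f(m_{i2}, ..., m_{i,n+1})
    d = np.linalg.solve(M_A, z)[1:]            # discard the intercept component
    d /= np.linalg.norm(d)

    rho = np.min((C @ x_bar - b) / (C @ d))
    # Step 3: move to the new point x* = x_bar - rho d*.
    return x_bar - rho * d

# Applied to the Problem 1 data of Section 4.1:
X1 = np.array([[1, 0, 1, 0], [1, 0, 0, .5], [1, 0, .5, 0], [1, .5, 0, 0]])
X2 = np.array([[1, 1, 1, 0], [1, 1, .5, 0], [1, .5, 0, 0], [1, 0, 0, .5]])
C = np.array([[4, 3, 8], [4, 1, 12], [4, -1, 3]], dtype=float)
b = np.array([12.0, 8.0, 8.0])
x_opt = scls_iteration(X1, X2, np.array([2.0, 1.0, 2.0]), C, b)
```

On these inputs the sketch reproduces the optimizer reported for Problem 1, approximately (1.318, 0.785, 1.171).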

3.2. The Average Information Matrix, the Direction Vector, the Starting Point and the Step-Length

3.2.1. The Average Information Matrix

The average information matrix, ${M}_{A}\left({\zeta}_{N}\right)$ , is the sum of the products of the k information matrices and the k matrices of the coefficients of convex combinations, given by

${M}_{A}\left({\zeta}_{N}\right)={\displaystyle \sum _{k=1}^{s}{H}_{k}{X}_{k}^{\text{T}}{X}_{k}{H}_{k}^{\text{T}}}.$

For two segments, the average information matrix is ${M}_{A}\left({\zeta}_{N}\right)={H}_{1}^{\ast}{X}_{1}^{\text{T}}{X}_{1}{H}_{1}^{\ast \text{T}}+{H}_{2}^{\ast}{X}_{2}^{\text{T}}{X}_{2}{H}_{2}^{\ast \text{T}}=\left(\begin{array}{ccc}{m}_{11}& {m}_{12}& {m}_{13}\\ {m}_{12}& {m}_{22}& {m}_{23}\\ {m}_{13}& {m}_{23}& {m}_{33}\end{array}\right)$ .
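As a concrete sketch, the two-segment pooling can be carried out with the Problem 1 design matrices of Section 4.1, taking the H matrices from the diagonals of the inverse information matrices printed there:

```python
import numpy as np

# Problem 1 design matrices (Section 4.1), intercept in the first column.
X1 = np.array([[1, 0, 1, 0], [1, 0, 0, .5], [1, 0, .5, 0], [1, .5, 0, 0]], dtype=float)
X2 = np.array([[1, 1, 1, 0], [1, 1, .5, 0], [1, .5, 0, 0], [1, 0, 0, .5]], dtype=float)

v1 = np.diag(np.linalg.inv(X1.T @ X1))     # diagonals (5, 24, 8, 24)
v2 = np.diag(np.linalg.inv(X2.T @ X2))     # diagonals (9, 24, 8, 40)
H1 = np.diag(v1 / (v1 + v2))
H2 = np.diag(v2 / (v1 + v2))
M_A = H1 @ X1.T @ X1 @ H1.T + H2 @ X2.T @ X2 @ H2.T
```

The computed diagonals agree with the printed inverse information matrices, and the resulting average information matrix is symmetric.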

3.2.2. The Direction Vector

The direction vector defined in Section 3.1.1 is computed as follows:

If f(x) is the response function, then the response vector, Z, is given by

$Z=\left(\begin{array}{c}{z}_{0}\\ {z}_{1}\\ {z}_{2}\\ \vdots \\ {z}_{n}\end{array}\right)$ , where

$\begin{array}{l}{z}_{0}=f\left({m}_{12},{m}_{13},\cdots ,{m}_{1,n+1}\right),\\ {z}_{1}=f\left({m}_{22},{m}_{23},\cdots ,{m}_{2,n+1}\right),\\ \text{\hspace{1em}}\vdots \\ {z}_{n}=f\left({m}_{n+1,2},{m}_{n+1,3},\cdots ,{m}_{n+1,n+1}\right).\end{array}$

Hence, the direction vector defined in Section 3.1.1 is computed as

$\underset{\_}{d}={M}_{A}^{-1}\left({\zeta}_{N}\right)Z=\left(\begin{array}{c}{d}_{0}\\ {d}_{1}\\ {d}_{2}\\ \vdots \\ {d}_{n}\end{array}\right).$

By normalizing such that ${d}^{*\text{T}}{d}^{*}=1$ , we have ${d}^{*}=\left(\begin{array}{c}\frac{{d}_{1}}{\sqrt{{d}_{1}^{2}+{d}_{2}^{2}+\cdots +{d}_{n}^{2}}}\\ \frac{{d}_{2}}{\sqrt{{d}_{1}^{2}+{d}_{2}^{2}+\cdots +{d}_{n}^{2}}}\\ \vdots \\ \frac{{d}_{n}}{\sqrt{{d}_{1}^{2}+{d}_{2}^{2}+\cdots +{d}_{n}^{2}}}\end{array}\right)$ ,

where d_{0} = 1 is discarded.
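A sketch of this computation (the function name and the illustrative inputs are ours):

```python
import numpy as np

def direction_vector(M_A, z):
    """d = M_A^{-1} z; the intercept component d0 is discarded and the
    remainder normalised so that d*^T d* = 1 (a sketch of Section 3.2.2)."""
    d = np.linalg.solve(M_A, z)[1:]        # drop d0
    return d / np.linalg.norm(d)

# Illustrative inputs: an invertible M_A and a response vector z.
M_A = np.diag([1.0, 2.0, 4.0, 8.0])
z = np.array([1.0, 4.0, 4.0, 16.0])
d_star = direction_vector(M_A, z)          # the normalised form of (2, 1, 2)
```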

3.2.3. Optimal Starting Point

The optimal starting point is obtained from the design matrices of the segments considered. The optimal starting point defined in Section 3.1.1 is obtained as follows:

$\overline{\underset{\_}{X}}={\displaystyle \sum _{m=1}^{N}{w}_{m}{x}_{m}};\text{\hspace{0.17em}}\text{\hspace{0.17em}}{w}_{m}\ge 0;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\displaystyle \sum _{m=1}^{N}{w}_{m}}=1;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{w}_{m}=\frac{{a}_{m}^{-1}}{{\displaystyle \sum _{m=1}^{N}{a}_{m}^{-1}}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,N.$

${a}_{m}={x}_{m}^{\text{T}}{x}_{m},\text{\hspace{0.17em}}\text{\hspace{0.17em}}m=1,2,\cdots ,N.$

Using the two 4-point design matrices of the segments (so that N = 8), $\overline{\underset{\_}{X}}={\displaystyle \sum _{m=1}^{8}{w}_{m}{x}_{m}},\text{\hspace{0.17em}}{w}_{m}=\frac{{a}_{m}^{-1}}{{\displaystyle \sum _{m=1}^{8}{a}_{m}^{-1}}},\text{\hspace{0.17em}}m=1,2,\cdots ,8.$
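Taking a_m = x_m^T x_m over the rows of the pooled design matrices (intercept included), the weights and the starting point can be sketched as follows; with the Problem 1 design matrices of Section 4.1 this reproduces the starting point reported there:

```python
import numpy as np

def optimal_starting_point(design_rows):
    """x_bar = sum_m w_m x_m with w_m = a_m^{-1} / sum_m a_m^{-1} and
    a_m = x_m^T x_m, the x_m being rows of the pooled design matrices
    (a sketch of Section 3.2.3)."""
    X = np.asarray(design_rows, dtype=float)
    a = np.einsum('ij,ij->i', X, X)            # a_m = x_m^T x_m
    w = (1.0 / a) / np.sum(1.0 / a)            # weights sum to one
    return (w @ X)[1:]                         # drop the intercept coordinate

# The two 4-point design matrices of Problem 1 (Section 4.1), N = 8 rows:
X1 = [[1, 0, 1, 0], [1, 0, 0, .5], [1, 0, .5, 0], [1, .5, 0, 0]]
X2 = [[1, 1, 1, 0], [1, 1, .5, 0], [1, .5, 0, 0], [1, 0, 0, .5]]
x_bar = optimal_starting_point(np.vstack([X1, X2]))   # approx. (0.2990, 0.2758, 0.1516)
```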

3.2.4. The Step-Length

The step-length is defined by

${\rho}^{\ast}=\mathrm{min}\left\{\frac{{\underset{\_}{C}}_{i}^{\text{T}}{\underset{\_}{\overline{X}}}^{\ast}-{b}_{i}}{{\underset{\_}{C}}_{i}^{\text{T}}{\underset{\_}{d}}^{\ast}}\right\}$ , where ${\rho}^{\ast}$ is the optimal step-length, ${\underset{\_}{d}}^{\ast}$ is the normalized direction vector, ${\underset{\_}{C}}_{i}^{\text{T}}$ is the vector of coefficients of the ith linear inequality, ${\underset{\_}{\overline{X}}}^{\ast}$ is the starting point and ${b}_{i}$ is the scalar right-hand side of the ith linear inequality.
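The minimum-ratio rule can be sketched directly; using the constraint system of Problem 1 (Section 4.1) together with the starting point and normalised direction reported there, it reproduces the step-length ρ* = −1.1396:

```python
import numpy as np

def step_length(C, b, x_bar, d_star):
    """rho* = min_i (C_i^T x_bar - b_i) / (C_i^T d*) over the rows of the
    constraint system Cx <= b (a sketch of Section 3.2.4)."""
    C = np.asarray(C, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.min((C @ x_bar - b) / (C @ d_star))

# Problem 1 (Section 4.1): constraints, starting point and direction.
C = [[4, 3, 8], [4, 1, 12], [4, -1, 3]]
b = [12, 8, 8]
x_bar = np.array([0.2990, 0.2758, 0.1516])
d_star = np.array([0.8944, 0.4472, 0.8944])
rho = step_length(C, b, x_bar, d_star)        # approx. -1.1396
```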

4. Results and Discussion

4.1. Comparison of Results Obtained Using the Segmentation Procedure with Existing Results in the Literature

Problem 1: [ [21] , Problem 7.2B, Question 2b, p. 304]

Maximize $Z=2{x}_{1}+{x}_{2}+2{x}_{3}$

Subject to $4{x}_{1}+3{x}_{2}+8{x}_{3}\le 12$

$4{x}_{1}+{x}_{2}+12{x}_{3}\le 8$

$4{x}_{1}-{x}_{2}+3{x}_{3}\le 8$

${x}_{1},{x}_{2},{x}_{3}\ge 0$

Support points are picked from the boundaries of the partitioned segments (Figure 2) provided they do not violate the constraint equations.

Thus ${X}_{1}=\left\{\left(0,1,0\right),\left(0,1,1\right),\left(0,0,1\right),\left(1/2,0,0\right),\cdots ,\left(1/4,0,0\right)\right\}$ and

${X}_{2}=\left\{\left(1,0,1\right),\left(0,0,1/2\right),\left(1,0,0\right),\left(1,1/2,0\right),\left(1,1,0\right),\cdots ,\left(1/2,0,0\right)\right\}$ ,

where X_{1} and X_{2} are obtained from S_{1} and S_{2}, respectively.

Thus, the design and inverse matrices are given as follows (from Figure 2):

${X}_{1}=\left(\begin{array}{cccc}1& 0& 1& 0\\ 1& 0& 0& \frac{1}{2}\\ 1& 0& \frac{1}{2}& 0\\ 1& \frac{1}{2}& 0& 0\end{array}\right)$ ; ${X}_{2}=\left(\begin{array}{cccc}1& 1& 1& 0\\ 1& 1& \frac{1}{2}& 0\\ 1& \frac{1}{2}& 0& 0\\ 1& 0& 0& \frac{1}{2}\end{array}\right)$ ,

Figure 2. Using 2 segments (S = 2).

${\left({X}_{1}^{\text{T}}{X}_{1}\right)}^{-1}=\left(\begin{array}{cccc}5& -10& -6& -10\\ -10& 24& 12& 20\\ -6& 12& 8& 12\\ -10& 20& 12& 24\end{array}\right)$ ; ${\left({X}_{2}^{\text{T}}{X}_{2}\right)}^{-1}=\left(\begin{array}{cccc}9& -14& 6& -18\\ -14& 24& -12& 28\\ 6& -12& 8& -12\\ -18& 28& -12& 40\end{array}\right)$

The direction vector, $\underset{\_}{d}=\left(\begin{array}{c}2.000\\ 1.000\\ 2.000\end{array}\right)$ ; by normalizing $\underset{\_}{d}$ , we get ${\underset{\_}{d}}^{\ast}=\left(\begin{array}{c}0.8944\\ 0.4472\\ 0.8944\end{array}\right)$ ,

(See Section 3.2.2)

${\overline{\underset{\_}{X}}}^{\ast}={\displaystyle \sum _{i=1}^{N}{w}_{i}{x}_{i}}=\left(\begin{array}{c}0.2990\\ 0.2758\\ 0.1516\end{array}\right)$ , the step-length, ${\rho}^{\ast}=-1.1396$ , ${\underset{\_}{X}}^{\ast}={\underset{\_}{\overline{X}}}^{\ast}-{\rho}^{\ast}{d}^{\ast}=\left(\begin{array}{c}1.318\\ 0.7854\\ 1.1709\end{array}\right)$ .

Therefore, Max Z = 5.46.

With S = 2 (two segments), the value of Z is Max Z = 5.46 (in one iteration), which is close to the optimal value obtained by [21] , Problem 7.2B, Question 2b, p. 304, as Max Z = 5.00 (in three iterations). The maximum values of Z for this problem using 3 and 4 segments are: 5.81 for (x_{1}, x_{2}, x_{3}) = (1.265, 0.7891, 1.2431) and 5.77 for (x_{1}, x_{2}, x_{3}) = (1.0008, 0.6606, 1.5523). These values are not optimal because they do not compare favourably with the existing solution obtained by [21] using the simplex method.

Problem 2: [ [22] , Ex. 2.4, Q. 14(ii), p. 215]

Maximize $Z=5{x}_{1}+3{x}_{2}+7{x}_{3}$

Subject to ${x}_{1}+{x}_{2}+2{x}_{3}\le 26$

$3{x}_{1}+2{x}_{2}+{x}_{3}\le 26$

${x}_{1}+{x}_{2}+{x}_{3}\le 18$

${x}_{1},{x}_{2},{x}_{3}\ge 0$

Support points are picked from the boundaries of the partitioned segments (from Figure 3) provided they do not violate the constraint equations.

Thus ${X}_{1}=\left\{\left(0,1,0\right),\left(0,1,1\right),\left(0,0,1\right),\left(1/2,0,0\right),\cdots ,\text{}\left(1/4,0,0\right)\right\}$ and

${X}_{2}=\left\{\left(1,0,1\right),\left(0,0,1/2\right),\left(1,1,1\right),\left(1,1/2,0\right),\left(1,1,0\right),\cdots ,\left(1/2,0,0\right)\right\}$ ,

Thus, the design and inverse matrices are given as follows (from Figure 3):

${X}_{1}=\left(\begin{array}{cccc}1& 0& 1& 0\\ 1& 0& 1& 1\\ 1& \frac{1}{4}& 0& 0\\ 1& 0& 0& 1\end{array}\right)$ ; ${X}_{2}=\left(\begin{array}{cccc}1& 1& 1& 1\\ 1& 1& 0& 1\\ 1& 1& \frac{1}{2}& 0\\ 1& \frac{1}{2}& 0& 0\end{array}\right)$ ,

${\left({X}_{1}^{\text{T}}{X}_{1}\right)}^{-1}=\left(\begin{array}{cccc}3& -12& -2& -2\\ -12& 64& 8& 8\\ -2& 8& 2& 1\\ -2& 8& 1& 2\end{array}\right)$ ; ${\left({X}_{2}^{\text{T}}{X}_{2}\right)}^{-1}=\left(\begin{array}{cccc}9& -14& 6& 5\\ -14& 24& -12& -10\\ 6& -12& 8& 6\\ 5& -10& 6& 6\end{array}\right)$

The direction vector, $\underset{\_}{d}=\left(\begin{array}{c}5.0002\\ 2.9998\\ 6.9999\end{array}\right)$ , by normalizing $\underset{\_}{d}$ , we get ${\underset{\_}{d}}^{\ast}=\left(\begin{array}{c}0.5488\\ 0.3293\\ 0.7683\end{array}\right)$

(See Section 3.2.2) ${\overline{\underset{\_}{X}}}^{\ast}={\displaystyle \sum _{i=1}^{N}{w}_{i}{x}_{i}}=\left(\begin{array}{c}0.3812\\ 0.3318\\ 0.2787\end{array}\right)$ , the step-length ${\rho}^{\ast}=-10.3306$ , ${\underset{\_}{X}}^{\ast}={\underset{\_}{\overline{X}}}^{\ast}-{\rho}^{\ast}{d}^{\ast}=\left(\begin{array}{c}6.0506\\ 3.7337\\ 8.2157\end{array}\right)$ .

Therefore, Max Z = 98.96.
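The arithmetic of this example can be checked end-to-end from the reported quantities (a sketch; the step-length is the value reported above):

```python
import numpy as np

# Problem 2: normalise the direction (5, 3, 7), apply x* = x_bar - rho* d*
# and evaluate the objective.
c = np.array([5.0, 3.0, 7.0])
d_star = c / np.linalg.norm(c)            # approx. (0.5488, 0.3293, 0.7683)
x_bar = np.array([0.3812, 0.3318, 0.2787])
rho = -10.3306                            # reported step-length
x_star = x_bar - rho * d_star             # approx. (6.0506, 3.7337, 8.2157)
Z = c @ x_star                            # approx. 98.96
```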

With S = 2 (two segments), the value of Z is Max Z = 98.96 (in one iteration), which is close to the optimal value obtained by [22] , Ex. 2.4, Q. 14(ii), p. 215, as Max Z = 98.80 (in two iterations) using the simplex method. The maximum values of Z for this problem using 3 and 4 segments are: 99.06 for (x_{1}, x_{2}, x_{3}) = (6.0213, 3.725, 8.2537) and 99.15 for (x_{1}, x_{2}, x_{3}) = (6.0746, 3.675, 8.2503).

Figure 3. Using 2 segments (S = 2).

4.2. Illustrative Problem and Application

A producer of leather shoes makes three types of shoes, X, Y and Z, which are processed on three machines, K_{1}, K_{2} and K_{3}. The daily capacities of the machines are given in Table 1 as follows.

The profit gained from shoe X is ₦3 per unit, shoe Y is ₦5 per unit and shoe Z is ₦4 per unit. What is the maximum profit for the three types of shoe produced?

Solution: Let x_{1} be the number of units of type X, x_{2} the number of units of type Y and x_{3} the number of units of type Z.

Maximize $Z=3{x}_{1}+5{x}_{2}+4{x}_{3}$

Subject to $2{x}_{1}+3{x}_{2}\le 8$

$2{x}_{2}+5{x}_{3}\le 10$

$3{x}_{1}+2{x}_{2}+4{x}_{3}\le 15$

${x}_{1},{x}_{2},{x}_{3}\ge 0$

In a similar manner, the design and inverse matrices are given as follows [from Figure 4].

Table 1. The daily capacity of the machines.

Figure 4. Using 2 segments (S = 2).

${X}_{1}=\left(\begin{array}{cccc}1& 0& 1& 0\\ 1& 0& 1& 1\\ 1& \frac{1}{4}& 0& 0\\ 1& 0& 0& 1\end{array}\right)$ ; ${X}_{2}=\left(\begin{array}{cccc}1& 1& 1& 1\\ 1& 1& 0& 1\\ 1& 1& \frac{1}{2}& 0\\ 1& \frac{1}{2}& 0& 0\end{array}\right)$ ,

${\left({X}_{1}^{\text{T}}{X}_{1}\right)}^{-1}=\left(\begin{array}{cccc}3& -12& -2& -2\\ -12& 64& 8& 8\\ -2& 8& 2& 1\\ -2& 8& 1& 2\end{array}\right)$ ; ${\left({X}_{2}^{\text{T}}{X}_{2}\right)}^{-1}=\left(\begin{array}{cccc}9& -14& 6& 5\\ -14& 24& -12& -10\\ 6& -12& 8& 6\\ 5& -10& 6& 6\end{array}\right)$ .

The direction vector, $\underset{\_}{d}=\left(\begin{array}{c}3\\ 5\\ 4\end{array}\right)$ ; by normalizing $\underset{\_}{d}$ , we get ${\underset{\_}{d}}^{\ast}=\left(\begin{array}{c}0.4243\\ 0.7071\\ 0.5657\end{array}\right)$ , ${\overline{\underset{\_}{X}}}^{\ast}={\displaystyle \sum _{i=1}^{N}{w}_{i}{x}_{i}}=\left(\begin{array}{c}0.3027\\ 0.2953\\ 0.2483\end{array}\right)$ , step-length ${\rho}^{\ast}=-2.1916$ , ${\underset{\_}{X}}^{\ast}={\underset{\_}{\overline{X}}}^{\ast}-{\rho}^{\ast}{d}^{\ast}=\left(\begin{array}{c}1.2326\\ 1.845\\ 1.4881\end{array}\right)$ . Therefore, the maximum value of Z is

₦18.88. This value, obtained in one iteration, is close to the optimal value, Max Z = ₦18.66, obtained by the simplex method approach (in three iterations). When 3 and 4 segments were used, the maximum values of Z for this problem are ₦21.25 with corresponding values (x_{1}, x_{2}, x_{3}) = (1.37, 2.08, 1.68) and ₦21.03 with corresponding values (x_{1}, x_{2}, x_{3}) = (1.43, 2.01, 1.67). These values are not optimal because they do not compare favourably with the simplex method solution.
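As with the earlier examples, the reported figures can be checked from the stated quantities (a sketch; the step-length is the value reported above):

```python
import numpy as np

# Shoe-production example: normalise the profit direction (3, 5, 4),
# apply the move x* = x_bar - rho* d* and evaluate the profit.
c = np.array([3.0, 5.0, 4.0])
d_star = c / np.linalg.norm(c)            # approx. (0.4243, 0.7071, 0.5657)
x_bar = np.array([0.3027, 0.2953, 0.2483])
rho = -2.1916                             # reported step-length
x_star = x_bar - rho * d_star             # approx. (1.2326, 1.8450, 1.4881)
profit = c @ x_star                       # approx. 18.88 (naira)
```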

5. Conclusion

Three-dimensional Linear Programming problems have been solved using the line search equation, ${\underset{\_}{X}}^{\ast}={\underset{\_}{\overline{X}}}^{\ast}-{\rho}^{\ast}{\underset{\_}{d}}^{\ast}$ , of the Super Convergent Line Series, by segmenting the cuboidal response surface into 2, 3 and 4 segments. A real-life problem was also used to achieve the desired result. It was found that the optimal solution is attained at two segments (S = 2), in one iteration or move, even though up to four segments (S = 4) were considered, whereas the simplex method required two to three iterations to reach comparable solutions. Hence, as the name implies, the Super Convergent Line Series (SCLS) locates the optimizer in one iteration, and better still with segmentation.

Cite this paper

Ugbe, T. and Chigbu, P. (2017) On Optimal Non-Overlapping Segmentation and Solutions of Three-Dimensional Linear Programming Problems through the Super Convergent Line Series. American Journal of Operations Research, 7, 225-238. https://doi.org/10.4236/ajor.2017.73015

References

- 1. Gass, S.I. (1958) Linear Programming Methods and Applications. McGraw-Hill, New York.
- 2. Dantzig, G.B. (1963) Linear Programming and Extension. Princeton University Press, Princeton. https://doi.org/10.1515/9781400884179
- 3. Philip, D.T., Walter, M. and Wright, M.H. (1981) Practical Optimization. Academic Press, London.
- 4. Wilde, D.J. and Beightler, C.S. (1967) Foundations of Optimization. Prentice Hall Inc., Upper Saddle River.
- 5. Myers, R.H. (1971) Response Surface Methodology. Allyn & Bacon, Boston.
- 6. Onukogu, I.B. and Chigbu, P.E. (2002) Super Convergent Line Series (in Optimal Design of Experiment and Mathematical Programming). AP Express Publishing, Nsukka.
- 7. Chigbu, P.E. and Ugbe, T.A. (2002) On the Segmentation of the Response Surfaces for Super Convergent Line Series Optimal Solutions of Constrained Linear and Quadratic Programming Problem. Global Journal of Mathematical Sciences, 1, 27-34.
- 8. Chigbu, P.E. and Ukaegbu, E.C. (2007) On the Precision and Mean Square Error Matrices Approaches in Obtaining the Average Information Matrices via the Super Convergent Line Series. Journal of Nigerian Statistical Association, 19, 4-18.
- 9. Etukudo, I.A. and Umoren, M.U. (2008) A Modified Super Convergent Line Series Algorithm for Solving Linear Programming Problems. Journal of Mathematical Sciences, 19, 73-88.
- 10. Iwundu, M.P. and Hezekiah, J.E. (2014) Algorithmic Approach to Solving Linear Programming Problems on Segmented Regions. Asian Journal of Mathematics and Statistics, 7, 40-59. https://doi.org/10.3923/ajms.2014.40.59
- 11. Iwundu, M.P. and Ebong, D.W. (2014) Modified Quick Convergent Inflow Algorithm for Solving Linear Programming Problems. International Journal of Probability and Statistics, 3, 54-66. https://doi.org/10.5539/ijsp.v3n4p54
- 12. Ugbe, T.A. and Chigbu, P.E. (2014) On Non-Overlapping Segmentation of the Response Surfaces for Solving Constrained Programming Problems through Super Convergent Line Series. Communications in Statistics—Theory and Methods, 43, 306-320. https://doi.org/10.1080/03610926.2012.661510
- 13. Grosan, C. and Abraham, A. (2007) Modified Line Search Method for Global Optimization. Proceeding of the 1st Asia International Conference of Modeling and Simulation, Phuket, 27-30 March 2007, 415-420. https://doi.org/10.1109/AMS.2007.68
- 14. Andrei, N. (2008) Performance Profiles of Line Search Algorithm for Unconstrained Optimization. Research Institute for Informatics. Centre for Advance Modeling and Optimization, ICI Technical report. https://www.ici.ro/neculai/p12a08
- 15. Chouzenoux, E., Moussaoui, S. and Idier, J. (2009) A Majorize-Minimize Line Search Algorithm for Barrier Function Optimization. 17th European Signal Processing Conference, Glasgow, 24-28 August 2009, 1379-1383.
- 16. Zhu, S., Ruan, G. and Huang, X. (2010) Some Fundamental Issues of Basic Line Search Algorithm for Linear Programming Problems. Optimization, 59, 1283-1295. https://doi.org/10.1080/02331930903395873
- 17. Gardeux, V., Chelouah, R., Siarry, P. and Glover, F. (2011) EM323: A Line Search Algorithm for Solving High Dimensional Continuous Non-Linear Optimization Problems. Soft Computing, 15, 2275-2285. https://doi.org/10.1007/s00500-010-0651-6
- 18. Pazman, A. (1987) Foundation of Optimum Experimental Design. D. Riedel Publishing Company, Dordrecht.
- 19. Onukogu, I.B. (1997) Foundation of Optimal Exploration of Response Surfaces. Ephrata Press, Nsukka.
- 20. Smith, W.F. (2005) Experimental Design for Formulation (ASA-SIAM Series on Statistics and Applied Probability). SIAM, Philadelphia, ASA, Alexandria, VA.
- 21. Taha, H.A. (2006) Operations Research: An Introduction. 7th Edition, Macmillan Publishing Company, New York.
- 22. Gupta, P.K. and Hira, D.S. (2008) Operations Research. S. Chand and Company Limited, New Delhi.