Journal of Applied Mathematics and Physics
Vol. 07, No. 01 (2019), Article ID: 89771, 8 pages
DOI: 10.4236/jamp.2019.71003

Easy Simplex (AHA Simplex) Algorithm

A. H. Ansari

Kejriwal Institute of Management and Development Studies, Ranchi, India

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution-NonCommercial International License (CC BY-NC 4.0).

http://creativecommons.org/licenses/by-nc/4.0/

Received: December 3, 2018; Accepted: January 7, 2019; Published: January 10, 2019

ABSTRACT

The purpose of this research paper is to introduce the Easy Simplex Algorithm developed by the author. The simplex algorithm, first presented by G. B. Dantzig, is generally used for solving a linear programming problem (LPP). One of the important steps of the simplex algorithm is to convert all inequality constraints into equations by adding slack variables and then to proceed to a basic solution. Our new algorithm i) solves the LPP without converting the constraints into equations and ii) definitely reaches the optimal solution in less time. The goal of the suggested algorithm is to improve the simplex algorithm so that the time required to solve an LPP is definitely less than with the simplex algorithm. With this Easy Simplex (AHA Simplex) Algorithm the Big M method is not required.

Keywords:

Linear Programming, Simplex Algorithm, Optimal Solution, Easy Simplex Algorithm, AHA Simplex Algorithm

1. Introduction

One of the most dynamic areas of applied mathematics is linear programming (LP). The simplex method was first implemented by George Bernard Dantzig during the Second World War and subsequently published in [1]. The simplex algorithm is an iterative procedure for solving a linear programming problem in a finite number of steps. It consists of moving from one vertex of the feasible region to a neighboring vertex in such a manner that the value of the objective function at the new vertex is closer to the optimum than at the previous vertex. The procedure is repeated until the optimal solution is reached; since the number of vertices is finite, the method arrives at an optimal vertex (optimal solution) in a finite number of repetitions, or indicates that the solution is unbounded or infeasible. The method tests adjacent vertices of the feasible set (which is a polytope) in succession, so that each new vertex improves the objective function value or leaves it unchanged. The simplex method is very efficient in practice. However, in [2], Klee and Minty gave an example of a linear programming problem in which the polytope is a distortion of an n-dimensional cube, and showed that the number of arithmetic operations in the simplex algorithm can grow exponentially with the number of variables. In [3], Khachiyan proposed the first polynomial-time algorithm, the ellipsoid method. In [4], Karmarkar suggested a second polynomial-time algorithm, which appears to be more efficient than the simplex method, especially when the problem size grows beyond some thousands of variables [5]. These authors motivated me to think of a different procedure by which an LPP can be solved in less time. One of the important steps of the simplex algorithm is, of course, the selection of the basis-entering variable.
Dantzig's simplex method still seems to be the most efficient procedure for the majority of practical problems, especially small ones.

In this research paper, I propose an easy simplex algorithm, named the AHA simplex algorithm (after the author's initials). The algorithm will definitely take less time to solve an LPP. It starts from the assumption that there are no basic variables at the initial stage; at this stage the objective function value is zero with all variables non-basic, just as in the simplex method. The idea is to improve the objective function value in each iteration by entering a non-basic variable into the initially blank set of basic variables. The calculations then proceed according to the easy simplex algorithm, which is explained with illustrations in the coming sections.

2. Notation

The general form of an LPP is

Max z = c_1 x_1 + c_2 x_2 + … + c_n x_n

Subject to
a_11 x_1 + a_12 x_2 + … + a_1n x_n ≤ b_1,
a_21 x_1 + a_22 x_2 + … + a_2n x_n ≤ b_2,
…
a_m1 x_1 + a_m2 x_2 + … + a_mn x_n ≤ b_m.   (LPP 2.1)

Non-negativity condition: x_1, x_2, …, x_n ≥ 0

It can be written in Matrix form as:

Max z = c^T X

Subject to A X ≤ b, X ≥ 0   (LPP 2.2)

where

X = n × 1 column vector,

A = m × n coefficient matrix,

b = m × 1 column vector,

c = n × 1 column vector (so that c^T is a 1 × n row vector).

If the above problem has only two variables, it can be solved easily by the graphical method. If the LPP has more than two variables, the simplex method is an appropriate and powerful method for solving such problems.

3. Proposed AHA Simplex Simulation Algorithm

3.1. The Simplex Algorithm

To solve any LPP using simplex algorithm, the existence of an initial basic feasible solution is always assumed. Consider the LPP (2.2). The steps for the computation of an optimum solution are as follows:

Step 1. Check whether the objective function of the given LPP is to be maximized or minimized. If it is to be minimized, convert it into a maximization problem by using the result Maximum z = −Minimum (−z).

Step 2. Check whether all b_i (i = 1, 2, …, m) are non-negative. If any b_i is negative, multiply the corresponding inequation by −1 so that all b_i (i = 1, 2, …, m) become non-negative.

Step 3. Convert all the inequations of the constraints into equations by introducing slack and/or surplus variables in the constraints. Put the costs of these variables equal to zero.

Step 4. Obtain an initial basic feasible solution to the problem in the form x_B = B^{-1} b and put it in the first column of the simplex tableau.

Step 5. Compute the net evaluations z_j − c_j (j = 1, 2, …, n) using the relation z_j − c_j = C_B y_j − c_j, where y_j = B^{-1} a_j, and examine the sign of z_j − c_j:

a) If all z_j − c_j ≥ 0, then the initial basic feasible solution is an optimal basic feasible solution.

b) If at least one z_j − c_j < 0, proceed to the next step.

Step 6. If there is more than one z_j − c_j < 0, choose the most negative of them. Let it occur for j = r.

a) If all y_ir ≤ 0 (i = 1, 2, …, m), then the given problem has an unbounded solution.

b) If at least one y_ir > 0 (i = 1, 2, …, m), then the corresponding vector y_r, known as the pivotal column, enters the basis y_B.

Step 7. Compute the ratios { x_Bi / y_ir : y_ir > 0 } and choose the minimum. Suppose the minimum ratio occurs for the vector y_k; its row, known as the pivotal row, leaves the basis y_B. The common element of the pivotal column and the pivotal row is the pivotal element y_kr.

Step 8. Convert the pivotal element to unity by dividing its row by the pivotal element y_kr, and reduce all other elements of the pivotal column to zero, by making use of the relations:

y_kj(New) = y_kj(Old) / y_kr(Old);   j = 1, 2, …, n (pivotal row, i = k)

y_ij(New) = y_ij(Old) − y_kj(Old) × y_ir(Old) / y_kr(Old);   i = 1, 2, …, m, i ≠ k, j = 1, 2, …, n

Step 9. Go to step 5 and repeat the computational procedure until either an optimum solution is obtained or there is an unbounded solution.
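The nine steps above can be sketched in code. The following is a minimal Python sketch (not the author's implementation) of the classical tableau simplex for max c^T x subject to A x ≤ b, x ≥ 0 with all b_i ≥ 0; the function name `dantzig_simplex` and the data layout are illustrative choices, and exact fractions are used so the pivots match hand computation.

```python
from fractions import Fraction

def dantzig_simplex(c, A, b):
    """Sketch of the classical simplex (Steps 1-9) for  max c^T x,  A x <= b,
    x >= 0, with all b_i >= 0.  Slack variables are appended (Step 3), which
    gives the initial basic feasible solution x_B = b (Step 4)."""
    m, n = len(A), len(c)
    # Constraint rows of the tableau: [x_1..x_n | s_1..s_m | b]
    T = [[Fraction(v) for v in Ai]
         + [Fraction(int(i == j)) for j in range(m)]
         + [Fraction(bi)]
         for i, (Ai, bi) in enumerate(zip(A, b))]
    cost = [Fraction(cj) for cj in c] + [Fraction(0)] * m  # slacks cost 0
    basis = list(range(n, n + m))                # slacks form the initial basis
    while True:
        # Step 5: net evaluations z_j - c_j = C_B y_j - c_j
        zc = [sum(cost[basis[i]] * T[i][j] for i in range(m)) - cost[j]
              for j in range(n + m)]
        r = min(range(n + m), key=lambda j: zc[j])   # Step 6: most negative
        if zc[r] >= 0:                               # Step 5a: optimal
            break
        cand = [i for i in range(m) if T[i][r] > 0]
        if not cand:                                 # Step 6a: unbounded
            raise ValueError("unbounded")
        k = min(cand, key=lambda i: T[i][-1] / T[i][r])  # Step 7: min ratio
        p = T[k][r]
        T[k] = [v / p for v in T[k]]                 # Step 8: unit pivot
        for i in range(m):
            if i != k and T[i][r]:
                f = T[i][r]
                T[i] = [T[i][j] - f * T[k][j] for j in range(n + m + 1)]
        basis[k] = r                                 # Step 9: repeat
    x = [Fraction(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    z = sum(cost[j] * T[i][-1] for i, j in enumerate(basis))
    return z, x
```

For example, max 3x_1 + 2x_2 subject to x_1 + x_2 ≤ 4 and x_1 + 3x_2 ≤ 6 gives z = 12 at (4, 0).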

3.2. The Easy Simplex (AHA Simplex) Algorithm

Consider the same LPP (2.2). Without loss of generality the objective function can be written as an objective inequation:

Max: z ≥ c_1 x_1 + c_2 x_2 + … + c_n x_n   (3.2.1)

or

Max: [1  −c^T] [z  x]^T ≥ 0   (3.2.2)

Subject to [0  A] [z  x]^T ≤ b   (3.2.3)

X ≥ 0   (3.2.4)

Steps for the computation of an optimum solution are as follows:

Step 1. Check whether the objective function of the given LPP is to be maximized or minimized. If it is to be minimized, convert it into a maximization problem by using the result Maximum z = −Minimum (−z).

Step 2. Check whether all b_i (i = 1, 2, …, m) are non-negative. If any b_i is negative, multiply the corresponding inequation by −1 so that all b_i (i = 1, 2, …, m) become non-negative.

Step 3. Examine the sign of the coefficient c_j of each decision variable in the objective inequation:

a) If all c_j ≥ 0, then the current solution is optimal, with z = 0 and x_j = 0 (j = 1, 2, …, n).

b) If at least one c_j < 0, proceed to the next step.

Step 4. If there is more than one c_j < 0, choose the most negative of them. Let it occur for j = r.

a) If all a_ir ≤ 0 (i = 1, 2, …, m), then the given problem has an unbounded solution.

b) If at least one a_ir > 0 (i = 1, 2, …, m), then the corresponding vector x_r, known as the pivotal column, enters the basis y_B.

Step 5. Compute the ratios { b_i / a_ir : a_ir > 0, i = 1, 2, …, m } and choose the minimum. Suppose b_k / a_kr (a_kr > 0) is the minimum ratio; then row k, known as the pivotal row, whose basis slot is initially blank, is filled by entering the variable x_r into the basis y_B. The common element of the pivotal column and the pivotal row is the pivotal element a_kr.

Step 6. Convert the pivotal element to unity by dividing its row by the pivotal element, a_kj(New) = a_kj(Old) / a_kr(Old), and reduce all other elements of the pivotal column to zero by making use of the relations:

c_j(New) = c_j(Old) − c_r(Old) × a_kj(New), for every c_j;

a_ij(New) = a_ij(Old) − a_ir(Old) × a_kj(New), for every a_ij with i ≠ k.

Step 7. Go to step 3 and repeat the computational procedure until either an optimum solution is obtained or there is an unbounded solution.
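The steps above can be sketched as a short Python routine. As a hedged illustration (not the author's code), the tableau holds only [A | b] plus the objective inequation as Row 0 — no slack or surplus variables — and the basis slots start out blank, as the algorithm prescribes; the function name `aha_simplex` and the use of exact fractions are our own choices.

```python
from fractions import Fraction

def aha_simplex(c, A, b):
    """Sketch of the Easy (AHA) simplex for  max c^T x  s.t.  A x <= b,
    x >= 0, with all b_i >= 0.  It works directly on [A | b]; `basis` tracks
    which variable occupies each row, with None marking a still-blank slot."""
    m, n = len(A), len(c)
    row0 = [Fraction(-cj) for cj in c] + [Fraction(0)]   # objective inequation
    rows = [[Fraction(v) for v in Ai] + [Fraction(bi)]
            for Ai, bi in zip(A, b)]
    basis = [None] * m                                   # initially blank
    while True:
        r = min(range(n), key=lambda j: row0[j])         # Step 4: most negative c_j
        if row0[r] >= 0:                                 # Step 3a: optimal
            break
        cand = [i for i in range(m) if rows[i][r] > 0]
        if not cand:                                     # Step 4a: unbounded
            raise ValueError("unbounded")
        k = min(cand, key=lambda i: rows[i][-1] / rows[i][r])  # Step 5: min ratio
        p = rows[k][r]
        rows[k] = [v / p for v in rows[k]]               # Step 6: unit pivot
        for row in [row0] + [rows[i] for i in range(m) if i != k]:
            f = row[r]
            if f:
                for j in range(n + 1):
                    row[j] -= f * rows[k][j]
        basis[k] = r                                     # x_r fills the basis slot
    x = [Fraction(0)] * n
    for i, r in enumerate(basis):
        if r is not None:
            x[r] = rows[i][-1]
    return row0[-1], x                                   # optimal z and x
```

On max 3x_1 + 2x_2 subject to x_1 + x_2 ≤ 4 and x_1 + 3x_2 ≤ 6, this sketch terminates after a single pivot with z = 12 at (4, 0), without ever introducing slack variables.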

4. Numerical Illustrations

Illustration 1: Solve the following LPP.

Max z = 12x_1 + 20x_2 + 18x_3 + 40x_4

Subject to 4x_1 + 9x_2 + 7x_3 + 10x_4 ≤ 6000, x_1 + x_2 + 3x_3 + 40x_4 ≤ 4000, and x_1, x_2, x_3, x_4 ≥ 0

Solution: Since the objective function is of maximization type and all constraints are of less-than type, we use the AHA Simplex method as

Max: z − 12x_1 − 20x_2 − 18x_3 − 40x_4 ≥ 0

Subject to 4x_1 + 9x_2 + 7x_3 + 10x_4 ≤ 6000, x_1 + x_2 + 3x_3 + 40x_4 ≤ 4000, and x_1, x_2, x_3, x_4 ≥ 0

Initial Table (Table 1)

The most negative coefficient in the objective inequation (Row 0) of Table 1 is −40, for x_4, so the entering variable is x_4. The minimum positive ratio is 100, so x_4 enters into Row 2. Therefore the pivotal element is 40.

R_2(New) = R_2(Old) ÷ 40

R_0(New) = R_0(Old) + 40 × R_2(New)

R_1(New) = R_1(Old) + (−10) × R_2(New)

First Iteration (Table 2)

The most negative coefficient in the objective inequation (Row 0) of Table 2 is −19, for x_2, so the entering variable is x_2. The minimum positive ratio is 5000 / (35/4) ≈ 571.429, so x_2 enters into Row 1. Therefore the pivotal element is 35/4.

R_1(New) = R_1(Old) ÷ (35/4)

R_0(New) = R_0(Old) + 19 × R_1(New)

R_2(New) = R_2(Old) + {−(1/40)} × R_1(New)

Second Iteration (Table 3)

The most negative coefficient in the objective inequation (Row 0) of Table 3 is −20/7, for x_1, so the entering variable is x_1. The minimum positive ratio is (4000/7) / (3/7) ≈ 1333.33, so x_1 enters into Row 1. Therefore the pivotal element is 3/7.

R_1(New) = R_1(Old) ÷ (3/7)

R_0(New) = R_0(Old) + (20/7) × R_1(New)

R_2(New) = R_2(Old) + {−(1/70)} × R_1(New)

Third Iteration (Table 4)

Since all the coefficients of x_j in the objective inequation (Row 0) of Table 4 are either zero or positive, the optimal solution has been achieved, and the optimal solution always lies at an extreme point. Thus the optimal solution is z = 56000/3, x_1 = 4000/3, x_4 = 200/3 (with x_2 = x_3 = 0).
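The reported optimum can be spot-checked by direct substitution. In the following short Python check (our own, not part of the paper), the x_4 coefficient of the first constraint is taken as 10, which is what the printed ratio 5000 / (35/4) and the subsequent pivots 35/4 and 3/7 imply; under that reading, both constraints are tight at the reported point and z = 56000/3.

```python
from fractions import Fraction as F

# Spot-check of the reported optimum of Illustration 1 by substitution.
# Assumption: the first constraint reads 4x1 + 9x2 + 7x3 + 10x4 <= 6000,
# consistent with the ratios and pivots shown in the iterations.
x1, x2, x3, x4 = F(4000, 3), F(0), F(0), F(200, 3)
assert 4*x1 + 9*x2 + 7*x3 + 10*x4 == 6000   # first constraint is tight
assert x1 + x2 + 3*x3 + 40*x4 == 4000       # second constraint is tight
z = 12*x1 + 20*x2 + 18*x3 + 40*x4
print(z)
```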

Illustration 2: Solve the following LPP.

Max z = 4x_1 + 3x_2

Subject to x_1 + x_2 ≤ 50, x_1 + 2x_2 ≥ 80, 3x_1 + 2x_2 ≥ 40 and x_1, x_2 ≥ 0

Solution: The objective function is of maximization type, but some constraints are of greater-than type, so we subtract surplus variables as follows.

Max z = 4x_1 + 3x_2 + 0x_3 + 0x_4

Subject to x_1 + x_2 ≤ 50, x_1 + 2x_2 − x_3 ≤ 80, 3x_1 + 2x_2 − x_4 ≤ 40 and x_1, x_2, x_3, x_4 ≥ 0

Then we use the AHA Simplex method as

Initial Table (Table 5)

The most negative coefficient in the objective inequation (Row 0) of Table 5 is −4, for x_1, so the entering variable is x_1. The minimum positive ratio is 40/3, so x_1 enters into Row 3. Therefore the pivotal element is 3.

R_3(New) = R_3(Old) ÷ 3

R_0(New) = R_0(Old) + 4 × R_3(New)

R_1(New) = R_1(Old) + (−1) × R_3(New)

R_2(New) = R_2(Old) + (−1) × R_3(New)

First Iteration (Table 6)

The most negative coefficient in the objective inequation (Row 0) of Table 6 is −4/3, for x_4, so the entering variable is x_4. The minimum positive ratio is (110/3) / (1/3) = 110, so x_4 enters into Row 1. Therefore the pivotal element is 1/3.

R_1(New) = R_1(Old) ÷ (1/3)

R_0(New) = R_0(Old) + (4/3) × R_1(New)

R_2(New) = R_2(Old) + {−(1/3)} × R_1(New)

R_3(New) = R_3(Old) + (1/3) × R_1(New)

Second Iteration (Table 7)

Since all the coefficients of x_j in the objective inequation (Row 0) of Table 7 are either zero or positive, the optimal solution has been achieved, and the optimal solution always lies at an extreme point. Thus the optimal solution is z = 200, x_1 = 50, x_4 = 110.

5. Conclusion

The Simplex Algorithm solves a Linear Programming Problem (LPP) with more than two variables (and is also applicable to two variables) in such a manner that the optimal solution is achieved after a finite number of iterations. The proposed Easy (AHA) Simplex Algorithm is a better way to solve an LPP, because it requires less calculation (the number of variables is smaller), in some cases fewer steps, and therefore less time than the Simplex Algorithm.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Ansari, A.H. (2019) Easy Simplex (AHA Simplex) Algorithm. Journal of Applied Mathematics and Physics, 7, 23-30. https://doi.org/10.4236/jamp.2019.71003

References

1. Dantzig, G.B. (1951) Linear Programming. Applied Mathematics Series 15, National Bureau of Standards, 18-21.

2. Klee, V. and Minty, G.J. (1972) How Good Is the Simplex Algorithm? In: Shisha, O., Ed., Inequalities, III, Academic Press, New York, 159-175.

3. Khachiyan, L.G. (1979) A Polynomial Algorithm in Linear Programming. Soviet Mathematics Doklady, 20, 191-194.

4. Karmarkar, N. (1984) A New Polynomial-Time Algorithm for Linear Programming. Combinatorica, 4, 373-395. https://doi.org/10.1007/BF02579150

5. Cichocki, A., Unbehauen, R., Weinzierl, K. and Holzel, R. (1996) A New Neural Network for Solving Linear Programming Problems. European Journal of Operational Research, 93, 244-256. https://doi.org/10.1016/0377-2217(96)00044-6