American Journal of Operations Research
Vol. 07 No. 01 (2017), Article ID: 73373, 15 pages
DOI: 10.4236/ajor.2017.71002

Posterior Constraint Selection for Nonnegative Linear Programming

H. W. Corley*, Alireza Noroziroshan, Jay M. Rosenberger

Center on Stochastic Modeling, Optimization, & Statistics (COSMOS), The University of Texas at Arlington, Arlington, TX, USA

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: November 17, 2016; Accepted: January 8, 2017; Published: January 11, 2017

ABSTRACT

Posterior constraint optimal selection techniques (COSTs) are developed for nonnegative linear programming problems (NNLPs), and a geometric interpretation is provided. The posterior approach is used in both a dynamic and non-dynamic active-set framework. The computational performance of these methods is compared with the CPLEX standard linear programming algorithms, with two most-violated constraint approaches, and with previously developed COST algorithms for large-scale problems.

Keywords:

Linear Programming, Nonnegative Linear Programming, Large-Scale Problems, Active Set Methods, Constraint Selection, Posterior Method, COSTs

1. Introduction

1.1. The Nonnegative Linear Programming Problem

Consider the linear programming (LP) problem

$(P)\qquad \text{Maximize } z = c^T x$ (1)

subject to

$Ax \le b$ (2)

$x \ge 0,$ (3)

where $c$ and $x$ are $n$-dimensional column vectors of objective coefficients and variables, respectively; $A$ is an $m \times n$ matrix $[a_{ij}]$ with $1 \times n$ row vectors $a_i$, $i = 1, \ldots, m$; $b$ is an $m \times 1$ column vector; and $0$ is an $n \times 1$ vector of zeros.

The non-polynomial simplex methods and the polynomial interior-point barrier-function algorithms are currently the two principal approaches for solving problem $P$, but for either there are problem instances on which it performs poorly [1]. Since the principal use of LP in industrial applications is within binary and integer programming algorithms, however, pivoting algorithms with efficient post-optimality analysis are frequently preferable to interior-point methods. On the other hand, simplex methods often cannot solve large-scale LPs at the speed required by many current applications. The purpose here is to develop an approach for solving a certain class of LPs faster than existing methods.

In this paper we consider the nonnegative linear programming problem (NNLP), which is the special case of $P$ with $a_i \ge 0$ but $a_i \ne 0$, $i = 1, \ldots, m$; $b > 0$; and $c > 0$. NNLPs model a large number of linear programming applications, such as determining an optimal driving path for navigation systems using traffic data [2], updating flight status due to weather conditions [3], and detecting errors in DNA sequences [4]. NNLPs have the following two important properties:

1) the origin $x = 0$ is feasible,

2) $x_j \le \min_{i = 1, \ldots, m} \left\{ \frac{b_i}{a_{ij}} : a_{ij} > 0 \right\}$, $j = 1, \ldots, n$.

Thus NNLPs have a bounded feasible region and bounded objective function if and only if no column of A is a zero vector. It follows that the boundedness of an NNLP objective function is easily verifiable without computation.
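To make property 2 concrete, the following minimal sketch (ours, not from the paper) computes the implied variable bounds and flags a zero column of $A$, assuming dense NumPy arrays:

import numpy as np

def nnlp_variable_bounds(A, b):
    """Implied bounds x_j <= min_i { b_i / a_ij : a_ij > 0 } from property 2.
    Returns None if some column of A is all zeros, i.e., the NNLP is unbounded."""
    m, n = A.shape
    bounds = np.empty(n)
    for j in range(n):
        pos = A[:, j] > 0
        if not pos.any():
            return None  # zero column: x_j can grow without bound
        bounds[j] = (b[pos] / A[pos, j]).min()
    return bounds

# Example: A = [[2, 0], [1, 3]], b = [4, 6] gives bounds [2.0, 2.0].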

1.2. Background

We propose here an active-set method to solve nonnegative linear programming problems faster than current approaches. Our method divides the constraints of problem $P$ into operative and inoperative constraints at each active-set iteration. Operative constraints are those present in the current relaxed subproblem $P_r$, $r = 1, 2, \ldots$, of $P$ at iteration $r$, while the inoperative ones are the constraints of $P$ not present in $P_r$. In our active-set method we iteratively solve $P_r$, $r = 1, 2, \ldots$, after adding one or more violated inoperative constraints from (2) to $P_{r-1}$, until the solution $x_r^*$ to $P_r$ is a solution to $P$.

Active-set methods have been studied by Stone [5], Thompson et al. [6], Adler et al. [7], Zeleny [8], Myers and Shih [9], Curet [10], and Bixby et al. [11], among others. The term "constraint selection technique" was introduced in [9], while the approaches of [7] and [8] illustrate two distinct classes of active-set methods. When the constraint selection metric for choosing violated inoperative constraints to be added to $P_r$ does not depend on the solution $x_r^*$, the associated active-set method is called a prior method. On the other hand, if the constraint selection at $P_r$ does depend on $x_r^*$, it is called a posterior method. Adler et al. [7] developed a prior method in which a violated inoperative constraint was chosen randomly at each active-set iteration. Zeleny [8] proposed a posterior method in which the inoperative constraint most violated by $x_r^*$ was added. This method is a classical cutting-plane generation technique and is called VIOL here. VIOL is also used as a pricing rule in delayed column generation [12], as an approach for adding multiple constraints in the interior point cutting plane method of [13], and as part of the sifting algorithm of [11] for column generation.

More recently, Corley et al. [14] developed a geometric prior active-set method for $P$ called the cosine simplex method. At each active-set iteration $r$, a single violated constraint maximizing the cosine of the angle between $a_i$ and $c$ is added to the operative set for $P_r$. This cosine constraint selection criterion is equivalent to the "most-obtuse-angle" pivot rule for the modified simplex algorithm introduced by Pan [15], where it was applied to the dual problem of $P$. Junior and Lins [16] also utilized a cosine criterion to choose an initial basis for the simplex algorithm on $P$, resulting in fewer simplex iterations.

References [17] [18] [19] [20] are most directly related to the current work and involve the present authors. In [17], Corley and Rosenberger proposed the constraint selection metric maximizing

$\mathrm{RAD}(a_i, b_i, c) = \dfrac{a_i c}{b_i}$ (4)

for NNLPs. RAD is a geometric constraint selection criterion for determining the constraints most likely to be binding at optimality. In the associated active-set algorithm of [18], all constraints of (2) are initially ordered by decreasing value of RAD before an initial bounded problem $P_0$ is solved by the primal simplex. The dual simplex is then used as violated inoperative constraints are added according to their RAD values. In computational experiments, RAD proved superior to existing linear programming methods for NNLPs. A similar constraint selection metric, GRAD, was developed in [19] to solve general linear programs (LPs). Finally, in [20] a dynamic active-set method was developed that adds a varying number of violated constraints at $P_r$ based on the progress made at $P_{r-1}$. It was incorporated into both RAD and GRAD to improve the computational results of [18] and [19].
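As an illustration (a sketch of ours, not the implementation of [18]), the initial RAD ordering is a one-line computation for dense data:

import numpy as np

def rad_order(A, b, c):
    # Constraint indices sorted by decreasing RAD(a_i, b_i, c) = a_i c / b_i of (4).
    return np.argsort(-(A @ c) / b)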

1.3. Overview

In this paper a posterior constraint selection metric NVRAD is developed for NNLPs. NVRAD may be considered a posterior version of RAD and is implemented here in the dynamic framework of [20]. For the active-set method NVRAD, we provide extensive computational experiments showing that it solves NNLPs faster than other computational methods, including RAD and various versions of the existing posterior active-set method VIOL described above.

More specifically, in Section 2 we state the posterior constraint selection metric NVRAD and provide a geometric interpretation. A dynamic version of NVRAD for NNLPs is then developed, and NVRAD is extended to a hybrid approach HYBR in which RAD and NVRAD are alternated. In Section 3, computational results are presented. NVRAD is shown to be significantly faster for NNLPs than all CPLEX solvers, as well as faster than VIOL and RAD; HYBR appears slightly faster than NVRAD. In Section 4, we present conclusions. Throughout the paper, both a constraint selection metric and the associated active-set algorithm are identified by the same name, RAD or NVRAD, for example; the intended use should be clear from context. The active-set algorithm itself is called a COST, i.e., a "Constraint Optimal Selection Technique".

2. NVRAD

2.1. Definition and Interpretation

Let $x_r^*$ be the current optimal solution for some $P_r$, with perpendicular distance $d = \frac{a_i x_r^* - b_i}{\|a_i\|}$ to a violated hyperplane $a_i x = b_i$. It follows that

$\frac{d}{b_i / \|a_i\|} = \frac{a_i x_r^* - b_i}{b_i}.$ (5)

Note that $b_i / \|a_i\|$ is the perpendicular distance of the hyperplane $a_i x = b_i$ from the origin. Consequently, choosing a violated hyperplane $a_i x = b_i$ with a maximum value of $\frac{a_i x_r^* - b_i}{b_i}$ on the right side of (5) can be interpreted, from the left side of (5), as selecting the violated constraint giving the deepest cut based on information derived from $x_r^*$. But from [18], the expression $\frac{a_i c}{b_i}$ on the right side of (4) is inversely proportional to the distance from the origin to the hyperplane $a_i x = b_i$ along the vector $c$, i.e., the direction of steepest ascent for the objective function (1) of the NNLP $P$. For this reason, in [18] the inoperative constraint maximizing $\mathrm{RAD}(a_i, b_i, c)$ is deemed the best constraint to add to $P_r$ based on prior information. We combine this prior information (4) with the posterior information on the right side of (5) by multiplying them to give

$\mathrm{NVRAD}(a_i, b_i, c, x_r^*) = \dfrac{a_i c}{b_i^2} \left( a_i x_r^* - b_i \right).$ (6)

Equation (6) thus combines global information from RAD with local information at $x_r^*$, and our posterior active-set method adds to $P_r$ an inoperative constraint $i^*$ for which

$i^* \in \arg\max_{i \notin \mathrm{OPERATIVE}} \left\{ \dfrac{a_i c}{b_i^2} \left( a_i x_r^* - b_i \right) : a_i x_r^* > b_i \right\}.$ (7)

We mention that the term $b_i^2$ in the denominator of (7) works better than simply $b_i$. This fact was established in computational results not reported here but obtained to support the above derivation.
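A minimal sketch of the selection rule (7) follows (ours, assuming dense NumPy arrays and a boolean mask `operative` marking the constraints already in $P_r$; the parameter `k` generalizes (7) to the top-$k$ violated constraints, as the dynamic version below requires):

import numpy as np

def nvrad_select(A, b, c, x, operative, k=1):
    """Indices of up to k inoperative constraints violated at x, ranked by
    NVRAD(a_i, b_i, c, x) = (a_i c / b_i**2)(a_i x - b_i) as in (6)-(7)."""
    slack = A @ x - b                       # a_i x - b_i
    candidates = (slack > 0) & ~operative   # violated and inoperative
    scores = np.where(candidates, (A @ c) / b**2 * slack, -np.inf)
    ranked = np.argsort(-scores)
    return ranked[:min(k, int(candidates.sum()))]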

2.2. The Dynamic Active-Set Algorithm

A dynamic version of RAD was developed by the authors in [20]. A similar approach is now used for NVRAD. Let $x_r^*$ be the optimal extreme point for $P_r$, with $\theta_r$ the angle between $x_r^*$ and $c$. Then

$\cos \theta_r = \dfrac{c^T x_r^*}{\|x_r^*\| \, \|c\|}$ (8)

is nonnegative since $P_r$ is also an NNLP. We would like to decrease $\theta_r$ at each active-set iteration so that $x_r^*$ points more nearly in the direction of the gradient $c$ of the objective function in (1). We adapt our dynamic heuristic of [20], which adds a varying number of violated inoperative constraints to $P_r$ according to the progress made at $P_{r-1}$ in reducing the angle $\theta_{r-1}$.

As our ideal goal, let $\theta_r = 0$ in (8) to give

$c^T x_r^* = \|x_r^*\| \, \|c\|.$ (9)

When $\theta_r = 0$, it follows from (9) that

$\dfrac{\sum_{j=1}^n c_j x_{rj}^*}{\sqrt{\sum_{j=1}^n c_j^2}} = \sqrt{\sum_{j=1}^n (x_{rj}^*)^2}.$ (10)

Letting $|\cdot|$ denote absolute value, define

$\delta_r(x_r^*) = \left| \dfrac{\sum_{j=1}^n c_j x_{rj}^*}{\sqrt{\sum_{j=1}^n c_j^2}} - \sqrt{\sum_{j=1}^n (x_{rj}^*)^2} \right|$ (11)

as a measure of the performance of our active-set method at iteration $r$. The value of $\delta_r(x_r^*)$ decreases as $\theta_r$ decreases. Such a decrease usually occurs as $x_r^*$ approaches an optimal extreme point of $P$ itself.

The dynamic COST NVRAD for solving NNLPs is described as follows. Constraints are initially ordered by the RAD constraint selection metric (4). To construct $P_0$, we choose constraints from (2) in descending order of RAD (since there is as yet no $x_r^*$) until the matrix $A_0$ of $P_0$ has no zero column, i.e., until each variable $x_j$ has some $a_{ij} > 0$. These selected constraints become the constraints of $P_0$, and we say that the variables are covered by the inequality constraints of the initial problem. $P_0$ is then solved by the primal simplex to obtain an initial solution $x_0^*$, and $\delta_0(x_0^*)$ is calculated. At iteration $r$, let $\gamma_r$ be the number of constraints of problem $P$ violated by $x_r^*$. Then at iterations $r-1$ and $r$, the values of $\delta_{r-1}(x_{r-1}^*)$ and $\delta_r(x_r^*)$ are calculated, and the percentage improvement made in reducing the angle between the vectors $x_r^*$ and $c$ at iteration $r$ is measured by

$\omega_r = \max\left\{ 0, \dfrac{\delta_{r-1}(x_{r-1}^*) - \delta_r(x_r^*)}{\delta_{r-1}(x_{r-1}^*)} \right\} \times 100, \quad r = 1, 2, \ldots.$ (12)

With $[\,\cdot\,]$ denoting the greatest integer function, let

$\varphi_{r+1} = \begin{cases} \varphi_r \times \left( 1 + \left[ (\ln \omega_r)^{-1} \right] \right), & \text{if } \omega_r > 1 \\ \gamma_r, & \text{if } \omega_r \le 1, \end{cases} \quad r = 1, 2, \ldots,$ (13)

where $\varphi_1 = 200$. The value of $\varphi_r$ is an upper bound on the number of violated constraints that can be added at active-set iteration $r$; the actual number added is $\min\{\varphi_{r+1}, \gamma_r\}$. The bound $\varphi_r$ generally grows over the iterations since the optimal value of the objective function for $P_r$ is usually less affected by a constraint with a small value of (6) than by one with a large value. Hence, more violated constraints should be added as $r$ increases, and Equation (13) represents one approach for doing so. If $\omega_r > e$ (Euler's number), for example, then $\varphi_{r+1} = \varphi_r$. If $\omega_r = 1.01$, then $\varphi_{r+1} = 101 \varphi_r$; in other words, a much larger number, and perhaps all, of the remaining violated constraints could be added. NVRAD stops when $\gamma_r = 0$, i.e., when there are no more violated constraints.
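In code, the update (11)-(13) amounts to a few lines. The following sketch (ours) mirrors our reading of the formulas, with the caller responsible for adding min{φ_{r+1}, γ_r} constraints:

import math

def delta(c, x):
    """delta_r(x_r*) of (11)."""
    norm_c = math.sqrt(sum(v * v for v in c))
    norm_x = math.sqrt(sum(v * v for v in x))
    return abs(sum(cj * xj for cj, xj in zip(c, x)) / norm_c - norm_x)

def next_phi(phi_r, delta_prev, delta_curr, gamma_r):
    """phi_{r+1} from (12)-(13); phi_1 = 200 is set by the caller."""
    omega = max(0.0, (delta_prev - delta_curr) / delta_prev) * 100.0   # (12)
    if omega > 1.0:
        return phi_r * (1 + math.floor(1.0 / math.log(omega)))        # (13), omega > 1
    return gamma_r                                                     # (13), omega <= 1

# Example: omega = 1.01 gives phi_{r+1} = 101 * phi_r; any omega > e leaves phi unchanged.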

The pseudocode for the dynamic NVRAD algorithm is as follows.

Step 1―Identify constraints to initially bound the problem.

1: $a^* \leftarrow 0$, BOUNDING $\leftarrow \emptyset$

2: while $a^* \not> 0$ do

3: Let $i^* \in \arg\max_{i \notin \mathrm{EXPLORED}} \mathrm{RAD}(a_i, b_i, c)$ and EXPLORED $\leftarrow$ EXPLORED $\cup \{i^*\}$.

4: if $\exists j$ such that $a_j^* = 0$ and $a_{i^* j} > 0$ then

5: BOUNDING $\leftarrow$ BOUNDING $\cup \{i^*\}$

6: end if

7: $a^* \leftarrow a^* + a_{i^*}$

8: Optimized $\leftarrow$ false

9: end while

Step 2―Using the primal simplex method, obtain an optimal x 0 * for the initial problem.

$(P_0)$ Maximize $z = c^T x$ subject to $a_i x \le b_i$, $i \in \mathrm{BOUNDING}$; $x \ge 0$.

Step 3―Perform the following iterations until an answer to problem P is found.

1: $r \leftarrow 0$

2: while Optimized = false do

3: Calculate $\delta_r(x_r^*)$.

4: if $r \ge 1$ then $\omega_r \leftarrow \max\left\{ 0, \frac{\delta_{r-1}(x_{r-1}^*) - \delta_r(x_r^*)}{\delta_{r-1}(x_{r-1}^*)} \right\} \times 100$

5: if $\omega_r > 1$ then $\varphi_{r+1} \leftarrow \varphi_r \times (1 + [(\ln \omega_r)^{-1}])$

6: else if $\omega_r \le 1$ then $\varphi_{r+1} \leftarrow \gamma_r$

7: end if

8: else $\varphi_{r+1} \leftarrow \varphi_1 = 200$

9: end if

10: if $a_i x_r^* > b_i$ for some $i = 1, \ldots, m$ then

11: $\gamma_r \leftarrow \#\{ i : a_i x_r^* > b_i, \; i = 1, \ldots, m \}$

12: Let $i^* \in \arg\max_{i \notin \mathrm{OPERATIVE}} \left\{ \mathrm{NVRAD}(a_i, b_i, c, x_r^*) = \frac{a_i c}{b_i^2}(a_i x_r^* - b_i) : a_i x_r^* > b_i \right\}$.

13: for the $\min\{\varphi_{r+1}, \gamma_r\}$ constraints with the largest such values do OPERATIVE $\leftarrow$ OPERATIVE $\cup \{i^*\}$ end for

14: Solve the updated $P_r$ by the dual simplex method to obtain $x_r^*$.

15: $r \leftarrow r + 1$

16: Go to 3

17: else Optimized $\leftarrow$ true // $x_r^*$ is an optimal solution to $P$.

18: end if

19: end while
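For readers who want to experiment, the loop above can be prototyped end to end with an off-the-shelf LP solver standing in for the CPLEX primal and dual simplex. The toy sketch below is ours, using scipy.optimize.linprog rather than the authors' CPLEX implementation, and it re-solves each subproblem from scratch instead of warm-starting with the dual simplex:

import numpy as np
from scipy.optimize import linprog

def dynamic_nvrad(A, b, c, phi1=200, tol=1e-9, max_iter=10000):
    """Toy dynamic NVRAD: grow an operative set until no constraint of P is violated."""
    m, n = A.shape
    # Step 1: add constraints in decreasing RAD order until every variable is covered.
    operative = np.zeros(m, dtype=bool)
    covered = np.zeros(n, dtype=bool)
    for i in np.argsort(-(A @ c) / b):
        if ((A[i] > 0) & ~covered).any():
            operative[i] = True
            covered |= A[i] > 0
        if covered.all():
            break
    phi, delta_prev = phi1, None
    for _ in range(max_iter):
        # Steps 2 and 14: solve the relaxed subproblem (maximize c.x = minimize -c.x).
        res = linprog(-c, A_ub=A[operative], b_ub=b[operative],
                      bounds=[(0, None)] * n)
        x = res.x
        slack = A @ x - b
        violated = (slack > tol) & ~operative
        gamma = int(violated.sum())
        if gamma == 0:
            return x                                             # optimal for P
        d = abs(c @ x / np.linalg.norm(c) - np.linalg.norm(x))   # (11)
        if delta_prev is not None and delta_prev > 0:
            omega = max(0.0, (delta_prev - d) / delta_prev) * 100.0           # (12)
            phi = phi * (1 + int(1.0 / np.log(omega))) if omega > 1.0 else gamma  # (13)
        delta_prev = d
        scores = np.where(violated, (A @ c) / b**2 * slack, -np.inf)          # NVRAD (6)
        operative[np.argsort(-scores)[:min(phi, gamma)]] = True
    raise RuntimeError("iteration limit reached")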

2.3. A Hybrid Approach

A reasonable conjecture is that combining the global information of RAD and the local information of NVRAD might be advantageous. We therefore also consider an approach that alternates the dynamic RAD and NVRAD metrics in a single algorithm at even and odd iterations, respectively, yielding a hybrid COST designated here as HYBR. The results obtained for HYBR demonstrate that combining posterior and prior COSTs may be superior to either a prior or a posterior approach by itself.
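In the prototype above, HYBR changes only the scoring line; a sketch of the alternation (RAD at even iterations, NVRAD at odd, per the scheme just described):

import numpy as np

def hybr_scores(A, b, c, slack, r):
    """Constraint scores at iteration r: prior RAD (4) when r is even,
    posterior NVRAD (6) when r is odd; slack = A @ x - b at the current x."""
    if r % 2 == 0:
        return (A @ c) / b
    return (A @ c) / b**2 * slack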

3. Computational Experiments

Dynamic NVRAD is compared in this section with the CPLEX primal simplex, dual simplex, and barrier methods. It is also compared with the prior active-set method RAD and the standard posterior active-set method VIOL, as well as with a normalized version of VIOL called NVIOL that proved superior to VIOL in computational results not reported here. Both dynamic and multi-bound, multi-cut versions of NVRAD were compared with dynamic and multi-bound, multi-cut versions of the other active-set methods for insight into the individual merits of the dynamic and posterior approaches.

3.1. Problem Instances

Five sets of NNLPs from [18] are used to evaluate the performance of the dynamic posterior COST NVRAD. Each of Sets 1 - 4 contains 105 randomly generated NNLPs: five problem instances at each of 21 density levels ranging from 0.005 to 1. The four sets correspond to four ratios of ($m$ constraints)/($n$ variables): 200, 20, 2, and 1 for Sets 1 - 4, respectively. In these problem sets, randomly generated real numbers between 1 and 5, 1 and 10, and 1 and 10 were assigned to the elements of $A$, $b$, and $c$, respectively. To prevent any constraint of $P$ from having the form of an upper bound on a single variable, each constraint is required to have at least two nonzero $a_{ij}$. Problem Set 5 consists of large-scale NNLPs with 5000 variables and 1,000,000 constraints. In this set, real numbers between 1 and 100 are assigned to the elements of $b$ and $c$, with densities $p$ ranging from 0.0004 to 0.06. Again, each constraint is required to have at least two nonzero $a_{ij}$. A generator in this style is sketched below.
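This is a hedged sketch of such a generator (our reconstruction of the description above; the per-row rounding of the nonzero count is an assumption):

import numpy as np

def random_nnlp(m, n, density, lo_hi_A=(1, 5), lo_hi_bc=(1, 10), seed=0):
    """Random NNLP in the style of Sets 1-4: each row of A has max(2, round(density*n))
    nonzeros drawn from lo_hi_A; b and c are drawn from lo_hi_bc."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    k = max(2, int(round(density * n)))     # at least two nonzero a_ij per constraint
    for i in range(m):
        cols = rng.choice(n, size=k, replace=False)
        A[i, cols] = rng.uniform(*lo_hi_A, size=k)
    b = rng.uniform(*lo_hi_bc, size=m)
    c = rng.uniform(*lo_hi_bc, size=n)
    return A, b, c

# Set 5 would use lo_hi_bc=(1, 100) with densities between 0.0004 and 0.06.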

3.2. CPLEX Preprocessing

Two CPLEX preprocessing parameters affect how linear programs are solved: the pre-solve indicator (PREIND) and the dual setting (PREDUAL). The pre-solver is enabled with PREIND = 1 (ON), which reduces both the number of variables and the number of constraints before any algorithm is applied, and it is disabled with PREIND = 0 (OFF). The second parameter, PREDUAL, also affects computational speed: with PREDUAL = 0 (ON), CPLEX automatically selects whether to solve the original LP or its dual, while PREDUAL = −1 (OFF) disables this choice.

Both PREIND and PREDUAL were turned off when CPLEX was used as a subproblem solver within NVRAD or HYBR. However, all computational results reported here for any individual CPLEX solver had PREIND and PREDUAL turned on; in other words, NVRAD was compared to CPLEX at its fastest settings. CPLEX would choose automatically whether to solve the primal or the dual, whichever seemed best, and preprocessing would substantially reduce the size of any problem $P$ by removing appropriate rows or columns of the constraint matrix $A$ before applying the primal simplex, dual simplex, or interior-point barrier method. In fact, much of the speed of the CPLEX solvers is due to their proprietary preprocessing routines.
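For reference, these switches look roughly as follows in the CPLEX Python API (a sketch; the authors used the C callable library, and the model-building step is elided here):

import cplex  # IBM ILOG CPLEX Python API; requires a CPLEX installation

cpx = cplex.Cplex()
# ... populate the objective (1), the rows of (2), and nonnegativity (3) here ...

# Settings used when CPLEX served as the subproblem solver inside NVRAD/HYBR:
cpx.parameters.preprocessing.presolve.set(0)  # PREIND = 0 (OFF)
cpx.parameters.preprocessing.dual.set(-1)     # PREDUAL = -1 (OFF)

# Settings used for the stand-alone CPLEX runs reported below:
# cpx.parameters.preprocessing.presolve.set(1)  # PREIND = 1 (ON)
# cpx.parameters.preprocessing.dual.set(0)      # PREDUAL = 0 (automatic)

cpx.solve()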

3.3. Computational Results

The experiments were performed on an Intel® Core™ 2 Duo X9650 3.00 GHz processor with a Linux 64-bit operating system and 8 GB of RAM. The COST NVRAD uses the IBM CPLEX 12.5 callable library to solve $P_0$ by the primal simplex and then $P_r$, $r = 1, 2, \ldots$, by the dual simplex as selected constraints are added to $P_{r-1}$. The CPU times shown in the tables below represent the average computation time over five problem instances at each density level.

The results of Table 1 for Set 1 compare NVRAD to VIOL, as well as to dynamic and non-dynamic versions of NVIOL. In addition, the dynamic NVRAD described in Section 2.2 was compared to a non-dynamic NVRAD that applies the multi-cut and multi-bound technique of [18]; the dynamic version was significantly faster. The efficacy of the dynamic approach was further demonstrated by the fact that on higher-density problems a dynamic version of NVIOL was up to 21 times faster than the multi-cut, multi-bound NVIOL. Overall, dynamic NVRAD was faster than VIOL and NVIOL on every problem instance.

In Table 2, the CPU times of the test problems solved by dynamic NVRAD are compared with those for RAD. In problem Set 1, RAD is slightly faster than NVRAD over all densities, averaging 3.98 seconds compared to 4.55 seconds. However, in problem Set 2, the average computation times for RAD and dynamic

Table 1. CPU times for multi-cut, multi-bound and dynamic active-set approaches on problem Set 1 for random NNLPs with 1000 variables, 200,000 constraints, and $a_{ij} \in [1, 5]$, $b_i \in [1, 10]$, $c_j \in [1, 10]$.

++Average of 5 instances at each density. −−Used CPLEX presolve = OFF and predual = OFF.

NVRAD over all densities are 19.07 and 16.86 seconds, respectively. For Set 3, dynamic NVRAD is superior to RAD, averaging 38.91 seconds compared to 41.87 seconds. Similarly, for Set 4 the averages are 41.87 seconds for NVRAD as compared to 46.98 seconds for RAD. Thus the results of Table 2 affirm NVRAD's ability to add appropriate constraints at each iteration. The results for Set 1 simply reflect how well the prior COST RAD performs when $m$ is very much larger than $n$.

Table 3 presents the CPU times for problem Sets 1 - 4 solved by dynamic versions of both RAD and HYBR. In Table 3, HYBR is superior to RAD. Moreover, a comparison of Table 3 with Table 2 shows that HYBR is also slightly better than dynamic NVRAD on these problem sets. These observations suggest that combining the global information of RAD with the local information of NVRAD gives better performance than either RAD or NVRAD by itself. We note further that HYBR can probably be improved. However, it is not our goal to seek the optimal combination of RAD and NVRAD in HYBR, since an optimal combination would likely depend on factors such as density and the ratio $m/n$.

Table 4, taken from [18], provides a comparison of the posterior COST NVRAD with the standard CPLEX solvers. Comparing the results of Table 4 for the CPLEX solvers with the results for NVRAD in Table 2 shows that NVRAD was significantly faster across virtually all ratios $m/n$ and all densities. For example, the primal simplex was the most robust CPLEX solver, but on average across all densities it took approximately 3 to 14 times more CPU time than NVRAD over the different ratios $m/n$. For the dual simplex, the average

Table 2. CPU times for multi-cut, multi-bound and dynamic active-set approaches on problem Sets 1 - 4 for random NNLPs with $a_{ij} \in [1, 5]$, $b_i \in [1, 10]$, $c_j \in [1, 10]$.

++Average of 5 instances of LP at each density. −−Used CPLEX presolve = OFF and predual = OFF.

CPU time across all densities was approximately 15 to 50 times greater than NVRAD's over the different ratios. However, the CPLEX barrier method was slightly faster than NVRAD on problem instances with $m/n = 20$ and densities less than 0.02. On the other hand, when the density reached 0.08 for $m/n = 20$, NVRAD was already more than ten times faster than the barrier solver. Furthermore, note that average CPU times in Table 4 greater than 3000 seconds (50 minutes) at any density were not reported. This situation occurred for the CPLEX barrier solver for the ratios 1, 2, 20, and 200 with densities of at least 0.3, 0.4, 0.5, and 0.75, respectively.

Finally, for large-scale, low-density test problems with $n = 5000$ and

Table 3. CPU times for dynamic HYBR and dynamic RAD on problem Sets 1 - 4 for random NNLPs with $a_{ij} \in [1, 5]$, $b_i \in [1, 10]$, $c_j \in [1, 10]$.

++Average of 5 instances of LP at each density. −−Used CPLEX presolve = OFF and predual = OFF.

$m = 1,000,000$, Table 5 compares dynamic NVRAD to multi-cut, multi-bound versions of RAD, VIOL, NVIOL, and NVRAD, as well as to the CPLEX primal simplex, dual simplex, and barrier solvers. Only the prior COST RAD was competitive: NVRAD averaged 63.45 seconds overall compared to 71.79 seconds for RAD. It should be noted that the highest density used in problem Set 5 was 0.0600, since the CPLEX solvers could not solve denser problems of such magnitude in a reasonable amount of time. Average CPU times greater than 2400 seconds (40 minutes) at any density were not reported in Table 5; this situation occurred beginning at some individual threshold density level for each CPLEX solver.

Table 4. CPU times from [18] for CPLEX solvers on problem Sets 1 - 4 for random NNLPs with $a_{ij} \in [1, 5]$, $b_i \in [1, 10]$, $c_j \in [1, 10]$.

+CPLEX presolve = ON and predual = ON. ++Average of 5 instances at each density. ᵇRuns with CPU times > 3000 s are not reported.

Table 5. CPU times for NVRAD versus RAD, VIOL, NVIOL, and the CPLEX solvers on problem Set 5 for random NNLPs with 5000 variables, 1,000,000 constraints, and $a_{ij} \in [1, 5]$, $b_i \in [1, 100]$, $c_j \in [1, 100]$.

++Average of 5 instances at each density. ᵇRuns with CPU times > 2400 s are not reported. −−Used CPLEX presolve = OFF and predual = OFF. +Used CPLEX presolve = ON and predual = ON.

4. Conclusion

An efficient posterior COST called NVRAD was developed here for NNLPs to utilize both prior global information and posterior local information. The associated constraint selection metric NVRAD is a heuristic, so a geometric interpretation was presented to offer insight into its performance. NVRAD's inherent active-set efficiency was enhanced by a dynamic approach varying the number of constraints added at each iteration. In addition to NVRAD, a dynamic active-set approach HYBR was also proposed; HYBR alternates between the posterior method NVRAD and the prior method RAD. To assess their performance, both NVRAD and HYBR were used to solve five sets of large-scale NNLPs. Dynamic NVRAD outperformed the previously developed COST RAD, as well as the standard posterior cutting-plane method VIOL, and it significantly outperformed the CPLEX primal simplex, dual simplex, and barrier solvers. Moreover, HYBR appears slightly faster than NVRAD or RAD. The results of this paper provide further evidence that active-set methods may be the fastest approach for solving linear programming problems.

Cite this paper

Corley, H.W., Noroziroshan, A. and Rosenberger, J.M. (2017) Posterior Constraint Selection for Nonnegative Linear Programming. American Journal of Operations Research, 7, 26-40. http://dx.doi.org/10.4236/ajor.2017.71002

References

1. Todd, M.J. (2002) The Many Facets of Linear Programming. Mathematical Programming, 91, 417-436. https://doi.org/10.1007/s101070100261

2. Dare, P. and Saleh, H. (2000) GPS Network Design: Logistics Solution Using Optimal and Near-Optimal Methods. Journal of Geodesy, 74, 467-478. https://doi.org/10.1007/s001900000104

3. Rosenberger, J.M., Johnson, E.L. and Nemhauser, G.L. (2003) Rerouting Aircraft for Airline Recovery. Transportation Science, 37, 408-421. https://doi.org/10.1287/trsc.37.4.408.23271

4. Li, H.-L. and Fu, C.-J. (2005) A Linear Programming Approach for Identifying a Consensus Sequence on DNA Sequences. Bioinformatics, 21, 1838-1845. https://doi.org/10.1093/bioinformatics/bti286

5. Stone, J.J. (1958) The Cross-Section Method, an Algorithm for Linear Programming. DTIC Document, P-1490.

6. Thompson, G.L., Tonge, F.M. and Zionts, S. (1966) Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems. Management Science, 12, 588-608. https://doi.org/10.1287/mnsc.12.7.588

7. Adler, I., Karp, R. and Shamir, R. (1986) A Family of Simplex Variants Solving an m × d Linear Program in Expected Number of Pivot Steps Depending on d Only. Mathematics of Operations Research, 11, 570-590. https://doi.org/10.1287/moor.11.4.570

8. Zeleny, M. (1986) An External Reconstruction Approach (ERA) to Linear Programming. Computers & Operations Research, 13, 95-100. https://doi.org/10.1016/0305-0548(86)90067-5

9. Myers, D.C. and Shih, W. (1988) A Constraint Selection Technique for a Class of Linear Programs. Operations Research Letters, 7, 191-195. https://doi.org/10.1016/0167-6377(88)90027-2

10. Curet, N.D. (1993) A Primal-Dual Simplex Method for Linear Programs. Operations Research Letters, 13, 233-237. https://doi.org/10.1016/0167-6377(93)90045-I

11. Bixby, R.E., Gregory, J.W., Lustig, I.J., Marsten, R.E. and Shanno, D.F. (1992) Very Large-Scale Linear Programming: A Case Study in Combining Interior Point and Simplex Methods. Operations Research, 40, 885-897. https://doi.org/10.1287/opre.40.5.885

12. Barnhart, C., Johnson, E., Nemhauser, G., Savelsbergh, M. and Vance, P. (1998) Branch-and-Price: Column Generation for Solving Huge Integer Programs. Operations Research, 46, 316-329. https://doi.org/10.1287/opre.46.3.316

13. Mitchell, J.E. (2000) Computational Experience with an Interior Point Cutting Plane Algorithm. SIAM Journal on Optimization, 10, 1212-1227. https://doi.org/10.1137/S1052623497324242

14. Corley, H.W., Rosenberger, J., Yeh, W.-C. and Sung, T.K. (2006) The Cosine Simplex Algorithm. The International Journal of Advanced Manufacturing Technology, 27, 1047-1050. https://doi.org/10.1007/s00170-004-2278-1

15. Pan, P.-Q. (1990) Practical Finite Pivoting Rules for the Simplex Method. Operations-Research-Spektrum, 12, 219-225. https://doi.org/10.1007/BF01721801

16. Junior, H.V. and Lins, M.P.E. (2005) An Improved Initial Basis for the Simplex Algorithm. Computers & Operations Research, 32, 1983-1993. https://doi.org/10.1016/j.cor.2004.01.002

17. Corley, H.W. and Rosenberger, J.M. (2011) System, Method and Apparatus for Allocating Resources by Constraint Selection. US Patent No. 8082549.

18. Saito, G., Corley, H.W., Rosenberger, J.M., Sung, T.-K. and Noroziroshan, A. (2015) Constraint Optimal Selection Techniques (COSTs) for Nonnegative Linear Programming Problems. Applied Mathematics and Computation, 251, 586-598. https://doi.org/10.1016/j.amc.2014.11.080

19. Saito, G., Corley, H.W. and Rosenberger, J. (2013) Constraint Optimal Selection Techniques (COSTs) for Linear Programming. American Journal of Operations Research, 3, 53-64. https://doi.org/10.4236/ajor.2013.31004

20. Noroziroshan, A., Corley, H.W. and Rosenberger, J. (2015) A Dynamic Active-Set Method for Linear Programming. American Journal of Operations Research, 5, 526-535. https://doi.org/10.4236/ajor.2015.56041