American Journal of Operations Research
Vol. 08, No. 01 (2018), Article ID: 82129, 12 pages
DOI: 10.4236/ajor.2018.81005

Shedding Light on Non Binding Constraints in Linear Programming: An Industrial Application

Alireza Tehrani Nejad Moghaddam1*, Thibault Monier2

1Energy Economist, 22 Rue d’arcueil, Paris, France

2Energy Economist, 10 Rue Mathis, Paris, France

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 2, 2018; Accepted: January 27, 2018; Published: January 30, 2018

ABSTRACT

In Linear Programming (LP) applications, unexpected non binding constraints are among the "why" questions that can cause a great deal of debate. These are constraints that were expected to be active based on price signals, market drivers or managers' experience. In such situations, users have to solve many auxiliary LP problems in order to grasp the underlying technical reasons. This practice, however, is cumbersome and time-consuming in large scale industrial models. This paper suggests a simple solution-assisted methodology, based on known concepts in LP, to detect a subset of active constraints that have the most preventive impact on any non binding constraint at the optimal solution. The approach is based on the marginal rates of substitution that are available in the final simplex tableau. A numerical example followed by a real-type case study is provided for illustration.

Keywords:

Linear Programming, Solution Interpretation, Non Binding Constraint, Decision Support

1. Introduction

Linear programming (LP) has found practical applications in all facets of business due to the computational efficiency of the simplex method and the availability of cheap and high-speed digital computers (for instance, see [1] ). The oil refining industry is an illustrative example of such applications since 1952 [2] .

The rapid evolution of easy-to-use software has made model building and LP solving accessible to everyone. For instance, engineers can construct refinery models by graphically drawing the process units, connecting them to sophisticated external simulators (for non linear computations), designing intermediate tanks, blending final products and monitoring the storage capacities. User interfaces are also configured to easily adapt the problem at hand into a multi-objective, multi-period or multi-location model. The commercial software imports the required input data from external sources, generates thousands of material and quality balance equations, points out data inconsistencies, debugs mathematical formulations, removes redundancies, linearizes the problem during the recursion passes (if necessary) and solves the problem in a fraction of the time. Modern visualization technologies are also provided to report the solution in a comprehensible manner.

Consequently, the size and complexity of production planning models have increased to represent real operations more accurately. Today, we can optimize far more complex problems than we can understand. Powerful report generators provide numerical results without explaining "why the solution is what it is". However, in strategic decision making and production planning, managers are interested in the whys, not the numbers [3] . Managers need to interrogate the model's output in terms of arbitrage and orientations before becoming confident in its use for decision making. The answers to these questions, which emerge from a deep understanding of the solution, are seldom evident from the output reports. Depending on users' experience, ability and patience, analyzing the solution and preparing support for decision making can take up to several days. Surprisingly, this practical need continues to be neglected by commercial optimization software whose evolution, driven primarily by computing speed and algorithm enhancement, has produced a new generation of LP users with much less OR background.

The need to explain the meaning of an optimal solution to non expert users is not new. During the 1980s and 1990s, Greenberg promoted and developed the project of Intelligent Mathematical Programming Systems (IMPS), initially motivated by the US Department of Energy and sponsored by a consortium of industries [4] [5] [6] [7] [8] . The analysis module of IMPS, called ANALYZE, includes a collection of algorithms and rule-based heuristics to find causal substructures supporting analyses such as infeasibility diagnosis, suggestive reasoning about redundancy and the interpretation of dual values associated with binding constraints.

In industrial practice, analyzing unexpected non binding constraints also requires substantial support. These are the constraints that were expected to be active based on price signals, market drivers or managers' know-how. For instance, within a profitable environment, when a key production unit is not operating at its full hydraulic capacity, the associated optimal solution is usually regarded as suspicious. In this situation, if the LP result is not supplemented by comprehensive explanations, decision makers will most likely revert to intuition or other modes of simple analysis. Assuming the model is validated for what it intends to represent, a plausible answer to this technical question could be formulated as follows. Production units operate under hydraulic capacity and operational condition constraints, such as a catalytic limitation that varies inversely with the poly-aromatics content of the feed. Due to reliability issues related to upstream logistics, the poly-aromatics content of the feed has been substantially increased. In this specific circumstance, the key production unit has reached its catalytic limitation before its hydraulic capacity constraint. This path-tracing analysis, which reveals the root cause of the non binding hydraulic constraint, requires advanced knowledge of the industrial process, the LP model scheme and its sensitivity to various input parameters. To the best of our knowledge, this kind of "why" question related to non binding constraints has not been tackled in the literature.

The dual variables, which identify the bottlenecks of the model from the objective function viewpoint, are not a proper tool for non binding constraint analysis. In Sections 5 and 6, we illustrate some cases where dual variables provide counter intuitive indications. Moreover, the traditional sensitivity analysis on the right hand side parameters cannot explain either why a given constraint is non binding. In the absence of any theoretical framework, users usually force the suspicious non binding constraints to operate at full capacity and then investigate by comparing the output results. Most often, many more auxiliary LP runs are required before grasping the meaning and consistency of the solution behavior. This practice, however, is cumbersome and time-consuming when the problem is of anything but trivial size and complexity and expert resources are not available.

This practice-oriented paper aims to answer a clearly stated question: what insight can the model provide about why a given constraint is not active at the optimal solution? We suggest a solution-assisted procedure that can be considered as a kind of local sensitivity analysis carried out in a deterministic framework. The procedure tracks the marginal rates of technical substitution (partial derivatives) between the slack variable of the given non binding constraint and all the binding constraints in the final simplex tableau. Under the assumption of proportional perturbation, these marginal coefficients are transformed into elasticity measures capable of shedding light on the directions that prevent a given non binding constraint from further saturation. These elasticities are comparable to the sensitivity measures of Samuelson's comparative statics, which play an important role in supporting decision makers [9] . Under the non degeneracy assumption, these elasticities are uniquely defined. The procedure can be easily automated through a post processor that interacts with the user. Surprisingly, few LP practitioners are aware that the final simplex tableau is easily accessible in commercial solvers such as LAMPS, Cplex or Xpress. The remainder of this paper is organized as follows. In Section 2, we review some basic definitions and introduce our notation. Section 3 delineates our suggested methodology. Section 4 extrapolates the results to non linear models solved using sequential linearization techniques. In Section 5, a numerical example is provided to illustrate the procedure. In Section 6, we present a real-type case study from the oil refining industry. Finally, Section 7 concludes.

2. Definitions and Notations

Let us suppose the following profit maximization LP problem,

$\max \{ c^T x \mid A x \le b, \; x \ge 0 \}$ (1)

where A is a given m × n linear technology matrix with full row rank. The m-vector b represents the resource availabilities, called the right-hand-side (RHS) terms, and the n-vector c represents the constant output market prices, called the objective function coefficients. The n-vector x represents the non negative decision variables. The dual problem associated with model (1) can be formulated as follows,

$\min \{ b^T y \mid A^T y - d = c, \; y \ge 0, \; d \ge 0 \}$ (2)

where the m-vector y corresponds to the shadow input prices and the n-vector d represents the opportunity (or reduced) costs associated with the primal variables. The dual constraint indicates that an equilibrium solution is achieved only when the difference between marginal cost $A^T y$ and marginal revenue $c$ is nonnegative for all the decision variables characterizing the production plan. If $x$ is primal feasible, $(y, d)$ is dual feasible and the complementary slackness conditions $x^T d = 0$ and $y^T (b - Ax) = 0$ hold, then they are primal-dual optimal solutions. Throughout this paper, we assume that the primal and dual problems are not degenerate, so that the optimal variables are unique.
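As a concrete restatement of this optimality certificate, the short sketch below checks primal feasibility, dual feasibility and complementary slackness for a candidate pair $(x, y)$. It is a minimal sketch under the paper's $\max\{c^T x \mid Ax \le b, x \ge 0\}$ convention; the function name and the numerical tolerance are our own assumptions.

```python
import numpy as np

def is_primal_dual_optimal(A, b, c, x, y, tol=1e-8):
    """Check the optimality certificate of Section 2 for max{c'x | Ax <= b, x >= 0}:
    primal feasibility, dual feasibility of (y, d) with d = A'y - c, and the
    complementary slackness conditions x'd = 0 and y'(b - Ax) = 0.
    A sketch; the function name and tolerance are not from the paper.
    """
    d = A.T @ y - c                                   # reduced costs
    primal_ok = np.all(A @ x <= b + tol) and np.all(x >= -tol)
    dual_ok = np.all(y >= -tol) and np.all(d >= -tol)
    slackness_ok = abs(x @ d) <= tol and abs(y @ (b - A @ x)) <= tol
    return primal_ok and dual_ok and slackness_ok
```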

We partition the technology matrix $A$ into a basic component $B$ and a non basic component $A_N$. The other vectors are partitioned accordingly, $c = [c_B, c_N]$ and $x = [x_B, x_N]$, where the basic variables $x_B > 0$ and the non basic variables $x_N = 0$. We denote by $I_B$ and $I_N$ the sets of basic and non basic indices, respectively. Rewriting the constraint equation of model (1) and pre-multiplying both sides by the inverse of the (non singular) basis matrix gives

$x_B = B^{-1} b - (B^{-1} A_N)\, x_N$. (3)

Relation (3) provides a simultaneous system of equations showing how all of the basic variables $x_B$ are affected by marginal changes in the value of the non basic variables $x_N$. In relation (3), the expression $(B^{-1} A_N)$ corresponds to the matrix of marginal rates of technical substitution $MRTS(x_B, x_N)$ between the non basic variables and all the basic variables involved in the production plan [10] . More precisely, the MRTS matrix shows the rate at which the basic variables should be efficiently adjusted whenever the RHS values of the binding constraints are increased individually within their validity ranges. The non degeneracy assumption on the primal and dual problems guarantees the uniqueness of the MRTS matrix. Commercially available software for large scale models includes specific commands to extract these marginal coefficients from the optimal simplex tableau. For instance, the corresponding command in LAMPS (Linear and Mathematical Programming System) is TRANSFORMCOLUMNS. Equivalent instructions exist for the Cplex and Xpress solvers. For other practical applications of the MRTS, see Tehrani Nejad (2007), Tehrani Nejad and Michelot (2009) and Tehrani Nejad and Saint-Antonin (2014) [11] [12] [13] .
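For readers who wish to recompute these coefficients outside the solver, the following minimal NumPy sketch forms $B^{-1} A_N$ from the technology matrix and the set of optimal basic column indices. The function name and the convention of appending one slack column per row are our own assumptions; in practice the basic index set would be read from the solver's final tableau.

```python
import numpy as np

def mrts_matrix(A, basic_idx):
    """Recompute the MRTS matrix B^{-1} A_N of relation (3) for max{c'x | Ax <= b, x >= 0}.

    A         : m x n technology matrix (structural columns only)
    basic_idx : column indices of the optimal basis in the augmented matrix [A | I],
                as read from the solver's final tableau
    Returns the MRTS matrix and the list of non basic column indices.
    """
    m, n = A.shape
    A_aug = np.hstack([A, np.eye(m)])              # one slack column per constraint
    nonbasic_idx = [j for j in range(n + m) if j not in basic_idx]
    B = A_aug[:, basic_idx]                        # optimal (non singular) basis
    A_N = A_aug[:, nonbasic_idx]                   # non basic columns
    return np.linalg.solve(B, A_N), nonbasic_idx   # B^{-1} A_N, column by column
```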

3. Method

At the optimal solution, let us suppose that the i-th constraint is non binding. Following relation (3), its positive slack variable $x_{s_i}$ can be stated as below,

$x_{s_i} = x_{s_i}^{*} - \sum_{k \in I_N} M_i(x_{s_i}, x_{s_k})\, x_{s_k}$, (4)

where $x_{s_i}^{*}$ is the optimal level of the basic slack variable and $x_{s_k}$, $k \in I_N$, refer to the non basic slack variables. In relation (4), the MRTS row-vector $M_i(x_{s_i}, x_{s_k})$ corresponds to the optimal adjustment of the basic slack variable $x_{s_i}$ in response to a marginal impulse in the RHS of the k-th binding constraint, $b_k$. In other words, $M_i(x_{s_i}, x_{s_k})$ is the rate at which $x_{s_i}$ varies per unit increase in $b_k$ at the optimal solution. These marginal rates provide a kind of importance measure that can be used to rank the influence of the active constraints on the saturation level of the i-th non binding constraint. For notational convenience, let $M_{ik} = M_i(x_{s_i}, x_{s_k})$. Since the marginal coefficients $M_{ik}$ belong to the basis inverse $B^{-1}$, they are free in sign. Depending on the optimal solution, negative or positive $M_{ik}$ refer respectively to those active constraints whose relaxation would increase or decrease the utilization rate of the i-th non binding constraint. For the purpose of this paper, active constraints with negative $M_{ik}$ are of interest.

It is evident that the $M_{ik}$ extracted from the final simplex tableau are not directly comparable, for two reasons. First, the LP constraints are usually expressed in different units of measurement (ton, %, °C, ...). Second, depending on the constraint structure, some $M_{ik}$ may need adjustment. To circumvent this limitation, we propose the following procedure, which can be readily automated through post processors (a code sketch follows the list).

1) Extract the row-vector $M_{ik}$ associated with the i-th non binding constraint from the final simplex tableau.

2) If the k-th active constraint is originally of the ratio type $\sum_j q_j x_j / x_i \le \alpha$, then the associated $M_{ik}$ must be multiplied by $x_i$ (see [14] ).

3) The sign of the $M_{ik}$ associated with an active greater-than-or-equal constraint must be reversed in order to be in line with a relaxation perturbation.

4) To render the units of measurement homogeneous and to scale the variations according to the size of the input constraints, each $M_{ik}$ must be converted into a cross elasticity indicator $E_{ik}$ at the point of measure:

$E_{ik} = M_{ik} \dfrac{b_k}{x_{s_i}}, \quad k \in I_N$, (5)

where $b_k$ corresponds to the RHS of the k-th binding constraint. The cross elasticity $E_{ik}$ measures the responsiveness of the slack variable of the i-th non binding constraint with respect to the marginal relaxation of each active constraint individually. This local and individual sensitivity measurement, called the criticality importance measure in reliability analysis [9] , can efficiently assist the user in a more in-depth analysis of the LP solution.

5) Rank the $E_{ik}$ from the most negative value upward.

The most negative cross elasticity points out the active constraint that has the most preventive impact on the saturation level of the i-th non binding constraint. The user then has to evaluate the precision and justification of the identified constraint. In practice, some of these active constraints can be relaxed after technical discussion with production engineers. Others, however, remain true bottlenecks of the plant and have to be communicated to the managers for investment projects.
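The five steps above can be wrapped in a small post-processing routine. The sketch below is a minimal Python/NumPy version under our own naming conventions (the paper does not prescribe an implementation); the MRTS row, the RHS values and the slack level are assumed to have been extracted from the solver beforehand, for instance with the `mrts_matrix` helper sketched in Section 2.

```python
import numpy as np

def rank_preventive_constraints(M_i, b_active, slack_i, senses=None, ratio_scales=None):
    """Steps 1) to 5) of the procedure, applied to one non binding constraint.

    M_i          : MRTS row of its slack variable, one entry per active constraint
    b_active     : RHS values b_k of the active constraints
    slack_i      : optimal slack x_si of the non binding constraint
    senses       : optional list; '>=' marks active >= constraints (step 3)
    ratio_scales : optional list; x_i multipliers for constraints originally of
                   ratio type (step 2), None meaning no adjustment
    Returns the cross elasticities E_ik and the active-constraint indices ranked
    from the most negative elasticity (most preventive) onward.
    """
    M = np.asarray(M_i, dtype=float).copy()
    if ratio_scales is not None:
        for k, scale in enumerate(ratio_scales):
            if scale is not None:
                M[k] *= scale                      # step 2: ratio-type constraints
    if senses is not None:
        for k, s in enumerate(senses):
            if s == '>=':
                M[k] = -M[k]                       # step 3: align with a relaxation
    E = M * np.asarray(b_active, dtype=float) / slack_i   # step 4: elasticities (5)
    ranking = np.argsort(E)                        # step 5: most negative first
    return E, ranking
```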

4. Non Linear Models

Industrial optimization models also include non linear constraints. These models can be summarized as follows,

$\max \{ c^T x \mid g(x) \le b, \; h(x) = 0, \; x \ge 0 \}$ (6)

In relation (6), the constraints are segregated into linear functions $g(x)$ and non linear functions $h(x)$. The industrial practice consists in linearizing the non linear functions using a first order Taylor expansion around a base point $x_0$. That is,

$\max \{ c^T x \mid g(x) \le b, \; h(x_0) + \nabla h(x_0)(x - x_0) = 0, \; x \ge 0 \}$ (7)

This approximation is valid only in the neighborhood of the base point. The base point $x_0$ and the derivatives $\nabla h(x_0)$ must be reevaluated through recursion steps until the convergence criteria are reached. The LP problem (7) can then be solved using the simplex method, and relation (4) applies to its non binding constraints.
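To make the recursion step explicit, the sketch below builds the linearized version of a single nonlinear equality constraint around a base point. The helper name and the toy constraint are illustrative assumptions, not part of the paper's model.

```python
import numpy as np

def linearize_equality(h, grad_h, x0):
    """First order Taylor model of a nonlinear equality h(x) = 0 around x0.

    Returns (a, rhs) such that the linearized constraint reads a @ x = rhs, since
    h(x0) + grad_h(x0) @ (x - x0) = 0  <=>  grad_h(x0) @ x = grad_h(x0) @ x0 - h(x0).
    In a sequential linearization scheme, x0 and grad_h(x0) are re-evaluated at
    every recursion pass until convergence.
    """
    a = np.asarray(grad_h(x0), dtype=float)
    rhs = a @ x0 - h(x0)
    return a, rhs

# Toy usage: h(x) = x1 * x2 - 10 linearized around x0 = (2, 5)
h = lambda x: x[0] * x[1] - 10.0
grad_h = lambda x: np.array([x[1], x[0]])
a, rhs = linearize_equality(h, grad_h, np.array([2.0, 5.0]))   # a = [5, 2], rhs = 20
```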

5. Numerical Illustration

Let us suppose a stylized problem of producing gasoline $x_G$, diesel $x_D$ and fuel oil $x_F$, whose market prices are respectively $100, $150 and $55 per ton. The producer can purchase five different grades of crude oil $x_1, x_2, x_3, x_4, x_5$, whose market prices are respectively $90, $70, $80, $96 and $75. We assume that the availability of crude oils $x_2$, $x_3$ and $x_5$ is limited to 75 tons each and that the production capacity of diesel is limited to 150 tons. Due to corrosion issues, the sum of crudes $x_1$ and $x_4$ must remain lower than 30% of the total crude mix. Processing each ton of crude generates respectively 0.03, 0.05, 0.03, 0.02 and 0.04 tons of CO2. The regional authorities require that the total CO2 emissions not exceed 11.5 tons. Finally, all the activity levels must be nonnegative. The profit maximization LP model can then be stated as follows,

$$
\begin{aligned}
\max\ & 100 x_G + 150 x_D + 55 x_F - 90 x_1 - 70 x_2 - 80 x_3 - 96 x_4 - 75 x_5 \\
\text{s.t.}\ & 0.40 x_1 + 0.40 x_2 + 0.45 x_3 + 0.16 x_4 + 0.35 x_5 - x_G = 0 && \text{(Gasoline production)} \\
& 0.40 x_1 + 0.25 x_2 + 0.25 x_3 + 0.55 x_4 + 0.30 x_5 - x_D = 0 && \text{(Diesel production)} \\
& 0.20 x_1 + 0.35 x_2 + 0.30 x_3 + 0.29 x_4 + 0.35 x_5 - x_F = 0 && \text{(Fuel oil production)} \\
& 0.03 x_1 + 0.05 x_2 + 0.03 x_3 + 0.02 x_4 + 0.04 x_5 \le 11.5 && \text{(CO}_2\text{ emissions)} \\
& 0.70 x_1 - 0.30 x_2 - 0.30 x_3 + 0.70 x_4 - 0.30 x_5 \le 0 && \text{(Corrosion limitation)} \\
& x_2 \le 75, \; x_3 \le 75, \; x_5 \le 75, \; x_D \le 150 && \text{(Crude availability, diesel capacity)}
\end{aligned}
$$
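For reproducibility, this model can be restated with any off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog with the HiGHS backend, which is our choice rather than the paper's; the comments quote the rounded optimal values reported in the text.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: x1..x5 (crudes), xG, xD, xF (products); default bounds give x >= 0.
c_max = np.array([-90, -70, -80, -96, -75, 100, 150, 55])   # profit coefficients

A_eq = np.array([
    [0.40, 0.40, 0.45, 0.16, 0.35, -1, 0, 0],    # gasoline balance
    [0.40, 0.25, 0.25, 0.55, 0.30, 0, -1, 0],    # diesel balance
    [0.20, 0.35, 0.30, 0.29, 0.35, 0, 0, -1],    # fuel oil balance
])
b_eq = np.zeros(3)

A_ub = np.array([
    [0.03, 0.05, 0.03, 0.02, 0.04, 0, 0, 0],     # CO2 emissions
    [0.70, -0.30, -0.30, 0.70, -0.30, 0, 0, 0],  # corrosion limitation
    [0, 1, 0, 0, 0, 0, 0, 0],                    # crude 2 availability
    [0, 0, 1, 0, 0, 0, 0, 0],                    # crude 3 availability
    [0, 0, 0, 0, 1, 0, 0, 0],                    # crude 5 availability
    [0, 0, 0, 0, 0, 0, 1, 0],                    # diesel production capacity
])
b_ub = np.array([11.5, 0.0, 75, 75, 75, 150])

res = linprog(-c_max, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print("profit      :", -res.fun)        # about 7174.8, as reported in the text
print("crude intake:", res.x[:5])       # about 57.1, 75, 75, 39.3, 75 tons
print("diesel slack:", res.slack[5])    # about 45.5 tons, i.e. the constraint is non binding
```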

At the optimum, the total profit amounts to $7174.8. This level of performance is obtained by processing the five crude oils at 57.1, 75, 75, 39.3 and 75 tons respectively. The output products are consequently equal to 119.1, 104.5 and 97.8 tons of gasoline, diesel and fuel oil respectively. Crude oils 2, 3 and 5 are processed at their maximum availability, leading to positive opportunity costs of $19.7, $17.1 and $19.8 respectively. Finally, the CO2 emissions and corrosion blending constraints are both binding at the optimum. This information is summarized in Table 1, which contains part of the final simplex tableau.

In Table 1, the leftmost column lists the basic activities as they appear in the basic index set $I_B$. Their optimal values are read in the rightmost column. The first row corresponds to the slack variables associated with the binding constraints. The coefficients inside the tableau represent the MRTS between the basic variables and the non basic slack variables. The last row reports the dual optimal variables and the optimal value of the objective function.

Despite the higher relative price of diesel, its production capacity constraint is not fully utilized. This unexpected result needs to be explained before recommending the production plan to the refinery's operators. The MRTS coefficients that link the basic slack variable $x_{s_D}$ to the binding constraints, i.e., the bold row in Table 1, provide valuable insights into this question.

Table 1. Part of the final simplex tableau.

Following the suggested procedure in Section 3, we convert the extracted MRTS into elasticity measures at the optimal solution. The blending constraint needs an extra adjustment: $E = (-1.214) \times \left( \sum_{i=1}^{5} x_i \right) \times (0.3 / 45.5) = -2.57$. In words, at the optimal solution, a 1% increase in the crude blending limitation would increase the saturation level of the diesel production by 2.57%. Table 2 ranks the computed elasticities from the most negative values.
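For readers who wish to retrace this adjustment, the short computation below reproduces the reported figure from the quantities quoted in the text (the MRTS value of −1.214, the optimal crude intakes and the diesel slack of 45.5 tons); it is a check on the published numbers, not new output.

```python
# Quantities quoted in Section 5: MRTS of the diesel slack w.r.t. the corrosion
# constraint (-1.214), the optimal crude intakes and the diesel slack (45.5 tons).
total_crude = 57.1 + 75 + 75 + 39.3 + 75        # step 2: multiply by x_i = total crude mix
M_corrosion = -1.214
E_corrosion = M_corrosion * total_crude * 0.3 / 45.5   # step 4 with b_k = 0.3
print(round(E_corrosion, 2))                    # about -2.57
```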

Several interesting remarks are in order. First, the corrosion issue is the most preventive constraint for diesel production. Without this insight, the user would have had to relax all the binding constraints one by one in order to identify the most responsive one. Second, despite having the largest marginal value, the CO2 constraint has a negative impact on diesel production: increasing the CO2 pollution rights would alter the optimal crude processing by replacing crude 4 with crude 1, which has a higher CO2 content and a lower diesel yield. This optimal substitution, which is a consequence of the Rybczynski theorem in economics [10] , increases gasoline output (+24 tons) to the detriment of diesel (−15 tons). This counter intuitive example confirms the limitation of marginal values for non binding constraint analysis. Third, according to the diesel production equation, crude oils 2 and 3 have the same average yield in terms of diesel output (25%). However, the computed elasticities reveal that crude oil 2 has a higher marginal yield and is, therefore, a more suitable candidate for increasing the diesel output.

6. Case Study

In the previous section, we provided a very simple numerical example to detail the procedure. In this section, we apply the suggested methodology to a real-type refinery LP model which contains nearly 5000 linear constraints and more than 7000 continuous variables. In this LP model, the constraints are categorized into material and quality balance constraints, product specification constraints, crude availability constraints and process unit capacity constraints. The linear objective function maximizes the net profit of the oil refining operations (for more details, see Tehrani Nejad and Saint-Antonin [15] ). The solver used is Cplex.

6.1. Model Overview

The general scheme of the model is given in Figure 1. In non technical words, the crude distillation unit (CDU) separates crude oils into various fractions according to their boiling points. Light fractions are used to make gasoline and naphtha, whilst middle fractions are used to produce kerosene and diesel. The heaviest fractions are sent to the vacuum distillation unit (VDU) to produce vacuum distillate and vacuum residue. The major part of the vacuum residue is fed to a visbreaker to reduce the viscosity of the fuel oil products.

Table 2. Sensitivity importance measures.

Figure 1. Oil refinery scheme.

The vacuum distillate is converted by a fluid catalytic cracker unit (FCCU) into a gasoline blending component and light cycle oil for blending into the diesel pool. Here, the FCCU is combined with an Alkylation unit to produce high value gasoline components called alkylate. The sulfur specifications for gasoline, middle and heavy oil products require the use of a hydro-desulfurization unit (HDS). On the other side, a reforming unit converts low-octane naphthas into high-octane gasoline blending components. Most often, the reformer's output is separated into light and heavy components by a fractionation unit called FDP. The oil product categories considered are propane, butane, naphtha, gasoline, jet fuel, diesel, heating oils, heavy fuel oils and different bitumen grades.

6.2. Results and Discussion

Based on market indicators, the manager expects the crude distillation unit (CDU) to be fully utilized. However, the optimal solution recommends an average utilization rate of 91.5%. Table 3 summarizes the main active constraints at the optimal solution. These active constraints are illustrated by bold arrows in Figure 1; non bold arrows correspond to part of the non active constraints. To identify what prevents the CDU from further processing, we compute the cross elasticities $E_{ik}$ between these active constraints and the slack variable of the CDU capacity constraint.

Table 3. Marginal values, MRTS and elasticities measures.

The results are reported in the last two columns of Table 3.

Several remarks are in order. First, the HDS feed rate constraint is directly identified as the most preventive constraint with respect to the CDU throughput. We verified this result by individually increasing the RHS of the active constraints reported in Table 3. By relaxing only the HDS capacity, up to 60%, the utilization rate of the crude unit increases steeply from 91.5% to 96.6% and then flattens. Second, given the high marginal values of the gasoline-related units, i.e., Alkylation and the FCC effluents (HCCS and LCO), the LP practitioner would most plausibly have been disoriented by first inspecting those constraints. Third, although the Alkylation unit has the highest marginal value, its cross elasticity with respect to the crude distillation unit is zero. This simply implies that additional Alkylation capacity would significantly increase the overall net margin of the refinery without affecting the crude intake. Fourth, the relatively high marginal value of the HDS capacity constraint confirms that the low utilization rate of the crude unit, suggested by the optimal solution, is not simply due to low price effects.

7. Conclusion

Communicating results is the most crucial step in any modeling-based study. Due to the complexity of industrial LP models, providing managers with an enriched set of technical explanations has become a formidable task for non OR experts. Explaining unexpected non binding constraints is one such example. The objective of this paper was to propose a simple method, based on known concepts in LP, to detect and rank a subset of active constraints that have the most preventive impact on any non binding constraint at the optimal solution. The distinguishing feature of this approach is that it requires no more information than what is provided by the final simplex tableau. A numerical example and a real-type oil refining case study were provided to illustrate the procedure. The simplicity of this method, we believe, constitutes its elegance.

Acknowledgments

Without implicating them in any remaining errors, the first author would like to express his gratitude to Quirino Paris, Harvey Greenberg and Nastaran Manouchehri for their valuable comments on this paper.

Cite this paper

Tehrani Nejad, A. and Monier, T. (2018) Shedding Light on Non Binding Constraints in Linear Programming: An Industrial Application. American Journal of Operations Research, 8, 50-61. https://doi.org/10.4236/ajor.2018.81005

References

  1. Bixby, R.E. (2012) A Brief History of Linear and Mixed-Integer Programming Computation. Documenta Mathematica, Extra Volume ISMP, 107-121.

  2. Charnes, A., Cooper, W.W. and Mellon, B. (1952) Blending Aviation Gasoline—A Study in Programming Interdependent Activities in an Integrated Oil Company. Econometrica, 20, 135-139. https://doi.org/10.2307/1907844

  3. Geoffrion, A. (1976) The Purpose of Mathematical Modeling Is Insight, Not Numbers. Interfaces, 7, 81-92. https://doi.org/10.1287/inte.7.1.81

  4. Greenberg, H.J. (1987) ANALYZE: A Computer-Assisted Analysis System for Linear Programming Models. Operations Research Letters, 76, 249-255. https://doi.org/10.1016/0167-6377(87)90057-5

  5. Greenberg, H.J. (1991) Intelligent Analysis Support for Linear Programs. Computers & Chemical Engineering, 16, 659-673. https://doi.org/10.1016/0098-1354(92)80015-2

  6. Greenberg, H.J. (1991) A Primer for ANALYZE: A Computer-Assisted Analysis System for Mathematical Programming Models and Solutions. Operations Research Society of America, Baltimore, MD.

  7. Greenberg, H.J. (1993) How to Analyze the Results of Linear Programs, Part 2: Price Interpretation. Interfaces, 23, 97-114. https://doi.org/10.1287/inte.23.5.97

  8. Greenberg, H.J. (1996) A Bibliography for the Development of an Intelligent Mathematical Programming System. Annals of Operations Research, 65, 55-90. https://doi.org/10.1007/BF02187327

  9. Borgonovo, E. and Plischke, E. (2016) Sensitivity Analysis: A Review of Recent Advances. European Journal of Operational Research, 248, 869-887. https://doi.org/10.1016/j.ejor.2015.06.032

  10. Paris, Q. (2013) An Economic Interpretation of Linear Programming. Revised Edition, Palgrave Macmillan.

  11. Tehrani Nejad, M.A. (2007) Allocation of CO2 Emissions in Petroleum Refineries to Petroleum Joint Products: A Linear Programming Model for Practical Application. Energy Economics, 29, 974-977. https://doi.org/10.1016/j.eneco.2006.11.005

  12. Tehrani Nejad, M.A. and Michelot, Ch. (2009) A Contribution to the Linear Programming Approach in Joint Cost Allocation: Methodology and Application. European Journal of Operational Research, 197, 999-1011. https://doi.org/10.1016/j.ejor.2007.12.043

  13. Tehrani Nejad, M.A. and Saint-Antonin, V. (2014) Factors Driving Refinery CO2 Intensity with Allocation into Products: Comments. International Journal of Life Cycle Assessment, 19, 24-28. https://doi.org/10.1007/s11367-013-0634-9

  14. Rosenthal, R.E. (2015) GAMS, A User's Guide. GAMS Development Corporation, Washington DC.

  15. Tehrani Nejad, M.A. and Saint-Antonin, V. (2008) Impact of Tightening the Sulfur Specifications on the Automotive Fuels' CO2 Contribution: A French Refinery Case Study. Energy Policy, 36, 2293-2770.