Reduction and Analysis of a Max-Plus Linear System to a Constraint Satisfaction Problem for Mixed Integer Programming

American Journal of Operations Research
Vol. 07, No. 02 (2017), Article ID: 74725, 8 pages
DOI: 10.4236/ajor.2017.72008


Hajime Yokoyama, Hiroyuki Goto

Department of Industrial & System Engineering, Hosei University, Tokyo, Japan

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 28, 2017; Accepted: March 12, 2017; Published: March 15, 2017

ABSTRACT

This research develops a solution method for project scheduling represented in a max-plus-linear (MPL) form. Max-plus-linear representation is an approach to model and analyze a class of discrete-event systems in which the behavior of a target system is represented by linear equations in max-plus algebra. Several types of MPL equations can be reduced to a constraint satisfaction problem (CSP) for mixed integer programming. The resulting formulation is flexible and easy to use for project scheduling; for example, we can obtain the earliest output times, latest task-starting times, and latest input times from an MPL form. We also develop a key method for identifying critical tasks within the framework of CSP. The developed methods are validated through a numerical example.

Keywords:

Max-Plus Algebra, Scheduling, Critical Path, Constraint Satisfaction Problems, Mixed Integer Programming

1. Introduction

This research develops a solution method for project scheduling using max-plus algebra (MPA). Goto  developed a solution method for two types of linear equations in MPA: $A\otimes x=b$ and $x=A\otimes x\oplus b$ , where $\oplus$ and $\otimes$ are the max and plus operations in MPA. These equations, also referred to as max-plus-linear (MPL) equations, can be reduced to constraint satisfaction problems (CSPs) for mixed integer programming (MIP)  . MPA   is an algebraic system wherein the max and plus operations are defined as addition and multiplication, respectively. MPL representation is an approach to model and analyze a class of discrete-event systems with structures of non-concurrency, synchronization, parallel processing of multiple tasks, and so on. Such systems include production systems and project scheduling, the behavior of which is represented by linear equations in MPA. In project scheduling, the linear equation $x=A\otimes x\oplus b$ is used to obtain the earliest start times $x$ , while the equation $A\otimes x=b$ is used to obtain the latest event occurrence times $x$ . If both matrix $A$ and vector $b$ consist of constants only, then the optimal solution is a unique constant vector, and solution methods for such cases already exist  . On the other hand, if matrix $A$ or vector $b$ contains variables, then the optimal solution becomes a function of those variables; for example, matrix $A$ may include system parameters that correspond to the duration times. If matrix $A$ or vector $b$ is incorporated into other constraints, the existing solution methods cannot be used. Accordingly, Goto  developed a solution method for these equations by reducing the two MPL equations into CSPs for MIP. However, that reference does not provide a solution method for calculating other essential quantities in project scheduling.

Therefore, we newly develop a solution method for calculating the earliest output times at all output transitions, the latest task-starting times, and the latest input times, all of which are represented by an MPL form. In addition, we develop a method to identify critical tasks within the framework of CSP, which is the key method in this article.

2. Max-Plus Algebra

We define a set ${ℝ}_{\mathrm{max}}=ℝ\cup \left\{-\infty \right\}$ , where $ℝ$ is the whole real line. Then, for $x,y\in {ℝ}_{\mathrm{max}}$ , we define the following two basic operators:

$x\oplus y=\mathrm{max}\left(x,y\right),$ (1)

$x\otimes y=x+y.$ (2)

Additionally, we define a set ${\stackrel{¯}{ℝ}}_{\mathrm{max}}=ℝ\cup \left\{-\infty \right\}\cup \left\{+\infty \right\}$ , on which we introduce the following two complementary operators:

$x\wedge y=\mathrm{min}\left(x,y\right),$ (3)

$x\odot y=-x+y.$ (4)

The priority of operators $\otimes$ and $\odot$ is higher than that of $\oplus$ and $\wedge$ . We shall denote the zero and unit elements for operators $\oplus$ and $\otimes$ by $\epsilon \left(=-\infty \right)$ and $e\left(=0\right)$ , respectively, and the unit element for operator $\wedge$ by $\stackrel{¯}{\epsilon }\left(=+\infty \right)$ . If $X,Y\in {ℝ}_{\mathrm{max}}^{n×m}$ , and $Z\in {ℝ}_{\mathrm{max}}^{m×q}$ , then
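The four scalar operators (1)-(4) and the zero and unit elements can be sketched in ordinary Python arithmetic, with `float('-inf')` standing in for $\epsilon$ and `float('inf')` for $\stackrel{¯}{\epsilon }$ . This illustration and its names are ours, not part of the original formulation:

```python
# Scalar max-plus operators; float('-inf') plays the role of the zero
# element eps for (+), and 0.0 is the unit element e for (x).
EPS = float('-inf')     # zero element for oplus (max)
E = 0.0                 # unit element for otimes (plus)
EPS_BAR = float('inf')  # unit element for the min operator

def oplus(x, y):        # x (+) y = max(x, y)
    return max(x, y)

def otimes(x, y):       # x (x) y = x + y
    return x + y

def wedge(x, y):        # x /\ y = min(x, y)
    return min(x, y)

def odot(x, y):         # x (.) y = -x + y
    return -x + y
```

Note that `EPS` absorbs under `otimes` (since `-inf + a == -inf`), mirroring the algebraic role of $\epsilon$ as the zero element.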

${\left[X\oplus Y\right]}_{ij}={\left[X\right]}_{ij}\oplus {\left[Y\right]}_{ij},$ (5)

${\left[X\otimes Z\right]}_{ij}={\mathrm{max}}_{k=1}^{m}\left({\left[X\right]}_{ik}\otimes {\left[Z\right]}_{kj}\right).$ (6)

Moreover,

${\left[X\wedge Y\right]}_{ij}={\left[X\right]}_{ij}\wedge {\left[Y\right]}_{ij},$ (7)

${\left[X\odot Z\right]}_{ij}={\mathrm{min}}_{k=1}^{m}\left(-{\left[X\right]}_{ik}+{\left[Z\right]}_{kj}\right),$ (8)

${X}^{\text{T}}\odot Z=X\backslash Z.$ (9)

The symbol $\epsilon$ also denotes a matrix whose elements are all $\epsilon$ , and $e$ a matrix whose diagonal elements are $e$ and whose off-diagonal elements are $\epsilon$ . If $X\in {ℝ}_{\mathrm{max}}^{n×n}$ , then ${X}^{\ast }$ represents the Kleene star operation defined below:

${X}^{\ast }=e\oplus X\oplus {X}^{\otimes 2}\oplus \cdots \oplus {X}^{\otimes \left(s-1\right)}$ (10)

where $s\left(1\le s\le n\right)$ is an integer that satisfies ${X}^{\otimes \left(s-1\right)}\ne \epsilon$ and ${X}^{\otimes s}=\epsilon$ .
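When the precedence graph underlying $X$ is acyclic, as in project scheduling, the powers ${X}^{\otimes s}$ eventually become the all- $\epsilon$ matrix, so the Kleene star (10) is a finite sum. A minimal sketch in Python (our own illustration; `mp_mul`, `mp_add`, and `kleene_star` are hypothetical helper names):

```python
EPS = float('-inf')

def mp_mul(X, Z):
    # Max-plus matrix product (6): [X (x) Z]_ij = max_k(X_ik + Z_kj).
    n, m, q = len(X), len(Z), len(Z[0])
    return [[max(X[i][k] + Z[k][j] for k in range(m)) for j in range(q)]
            for i in range(n)]

def mp_add(X, Y):
    # Max-plus matrix addition (5): elementwise max.
    return [[max(a, b) for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def kleene_star(X):
    # X* = e (+) X (+) X^2 (+) ...; the loop stops once X^s is all-eps,
    # which holds for some s <= n when the precedence graph is acyclic.
    n = len(X)
    e = [[0.0 if i == j else EPS for j in range(n)] for i in range(n)]
    result, power = e, e
    for _ in range(n):
        power = mp_mul(power, X)
        if all(v == EPS for row in power for v in row):
            break
        result = mp_add(result, power)
    return result
```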

3. Reduction of the Relevant Operators to CSPs

The addition and multiplication operators are reduced to CSPs for MIP. For $x,y,z\in {ℝ}_{\mathrm{max}}$ , we focus on the two equations

$z=x\oplus y,$ (11)

$z=x\otimes y.$ (12)

First, Equation (11) is reduced to a CSP. The resulting formulation is:

$\begin{array}{l}z\ge x,\\ z\ge y,\\ z\le x+M\left(1-{s}_{1}\right),\\ z\le y+M\left(1-{s}_{2}\right),\\ \left({s}_{1},{s}_{2}\right)\in \left\{0,1\right\},\\ {s}_{1}+{s}_{2}\ge 1,\end{array}$ (13)

where ${s}_{1},{s}_{2}$ are binary variables and $M$ is a large positive constant called big-M. The big-M together with ${s}_{1}$ and ${s}_{2}$ switches which inequality holds with equality, so that $z$ equals at least one of $x$ and $y$ . By generalizing this result to multiple numbers, the addition $z={\oplus }_{i=1}^{n}{x}_{i}$ can be reduced to a CSP as follows:

$\begin{array}{l}z\ge {x}_{i}\text{}\forall i\in {\mathcal{V}}_{n},\\ z\le {x}_{i}+M\left(1-{s}_{i}\right)\text{}\forall i\in {\mathcal{V}}_{n},\\ {s}_{i}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{n},\\ {\sum }_{k=1}^{n}{s}_{k}\ge 1,\end{array}$ (14)
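The role of the binaries can be checked by brute force: for fixed $x$ , $y$ , and $M$ , enumerating the binary choices shows that the only $z$ satisfying the constraint system (13) is $\mathrm{max}\left(x,y\right)$ . A verification sketch (ours, with a hypothetical function name and an integer scan grid chosen for illustration):

```python
from itertools import product

def feasible_z_values(x, y, M=1000):
    # Collect every integer z on a scan grid that satisfies the big-M
    # system (13) for some admissible binary choice (s1, s2).
    feasible = set()
    for z in range(-50, 51):
        for s1, s2 in product((0, 1), repeat=2):
            if s1 + s2 < 1:          # at least one binary must be active
                continue
            if (z >= x and z >= y
                    and z <= x + M * (1 - s1)
                    and z <= y + M * (1 - s2)):
                feasible.add(z)
    return feasible
```

Only $z=\mathrm{max}\left(x,y\right)$ survives: the lower bounds force $z\ge \mathrm{max}\left(x,y\right)$ , while the active binary pins $z$ to one of the two terms from above.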

where ${\mathcal{V}}_{n}=\left\{1,2,\cdots ,n\right\}$ . If $X,Y\in {ℝ}_{\mathrm{max}}^{n×m}$ , and $Z\in {ℝ}_{\mathrm{max}}^{m×r}$ , then the addition $P=X\oplus Y$ can be formulated as follows:

$\begin{array}{l}{p}_{ij}\ge {x}_{ij}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {p}_{ij}\ge {y}_{ij}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {p}_{ij}\le {x}_{ij}+M\left(1-{s}_{ij1}\right)\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {p}_{ij}\le {y}_{ij}+M\left(1-{s}_{ij2}\right)\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ \left({s}_{ij1},{s}_{ij2}\right)\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {s}_{ij1}+{s}_{ij2}\ge 1\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m}.\end{array}$ (15)

With respect to Equation (12), the multiplication of two numbers in MPA can be formulated as follows:

$z=x+y.$ (16)

Then, the multiplication of two matrices $Q=Y\otimes Z$ can be reduced to:

$\begin{array}{l}{q}_{ij}\ge {y}_{ik}+{z}_{kj}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{r},\forall k\in {\mathcal{V}}_{m},\\ {q}_{ij}\le {y}_{ik}+{z}_{kj}+M\left(1-{s}_{ijk}\right)\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{r},\forall k\in {\mathcal{V}}_{m},\\ {s}_{ijk}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{r},\forall k\in {\mathcal{V}}_{m},\\ {\sum }_{l=1}^{m}{s}_{ijl}\ge 1\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{r}.\end{array}$ (17)
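The same mechanism extends entrywise to (17): for each $\left(i,j\right)$ , at least one binary ${s}_{ijk}$ must select a maximizing index $k$ . A small enumeration check for a single entry, where `terms` holds the values ${y}_{ik}+{z}_{kj}$ over $k$ (our illustration, not the paper's solver formulation):

```python
from itertools import product

def feasible_q(terms, M=1000):
    # terms = [y_ik + z_kj for each k]; find all integers q on a scan
    # grid satisfying the big-M system (17) for some binary vector s.
    m = len(terms)
    feasible = set()
    for q in range(-50, 51):
        for s in product((0, 1), repeat=m):
            if sum(s) < 1:
                continue
            if all(q >= t for t in terms) and \
               all(q <= t + M * (1 - sk) for t, sk in zip(terms, s)):
                feasible.add(q)
    return feasible
```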

Next, we focus on the two complementary operators:

$z=x\wedge y,$ (18)

$z=x\odot y.$ (19)

In a similar manner to the reduction of $x\oplus y$ , Equation (18) can be reduced to:

$\begin{array}{l}z\le x,\\ z\le y,\\ z\ge x-M\left(1-{s}_{1}\right),\\ z\ge y-M\left(1-{s}_{2}\right),\\ \left({s}_{1},{s}_{2}\right)\in \left\{0,1\right\},\\ {s}_{1}+{s}_{2}\ge 1.\end{array}$ (20)

By generalizing this result for multiple numbers, the minimization $z={\wedge }_{i=1}^{n}{x}_{i}$ can be reduced as follows:

$\begin{array}{l}z\le {x}_{i}\text{}\forall i\in {\mathcal{V}}_{n},\\ z\ge {x}_{i}-M\left(1-{s}_{i}\right)\text{}\forall i\in {\mathcal{V}}_{n},\\ {s}_{i}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{n},\\ {\sum }_{k=1}^{n}{s}_{k}\ge 1.\end{array}$ (21)

If $X,Y\in {ℝ}_{\mathrm{max}}^{n×m}$ , and $Z\in {ℝ}_{\mathrm{max}}^{n×r}$ , then the minimization of two matrices $P=X\wedge Y$ can be formulated as follows:

$\begin{array}{l}{p}_{ij}\le {x}_{ij}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {p}_{ij}\le {y}_{ij}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {p}_{ij}\ge {x}_{ij}-M\left(1-{s}_{ij1}\right)\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {p}_{ij}\ge {y}_{ij}-M\left(1-{s}_{ij2}\right)\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ \left({s}_{ij1},{s}_{ij2}\right)\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m},\\ {s}_{ij1}+{s}_{ij2}\ge 1\text{}\forall i\in {\mathcal{V}}_{n},\forall j\in {\mathcal{V}}_{m}.\end{array}$ (22)

Next, we focus on Equation (19), which can be rewritten in a straightforward manner from the definition of $\odot$ as:

$z=-x+y.$ (23)

Then, the pseudo-division operation $Q={X}^{\text{T}}\odot Z$ can be formulated as follows:

$\begin{array}{l}{q}_{ij}\le -{x}_{ki}+{z}_{kj}\text{}\forall i\in {\mathcal{V}}_{m},\forall j\in {\mathcal{V}}_{r},\forall k\in {\mathcal{V}}_{n},\\ {q}_{ij}\ge -{x}_{ki}+{z}_{kj}-M\left(1-{s}_{ijk}\right)\text{}\forall i\in {\mathcal{V}}_{m},\forall j\in {\mathcal{V}}_{r},\forall k\in {\mathcal{V}}_{n},\\ {s}_{ijk}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{m},\forall j\in {\mathcal{V}}_{r},\forall k\in {\mathcal{V}}_{n},\\ {\sum }_{l=1}^{n}{s}_{ijl}\ge 1\text{}\forall i\in {\mathcal{V}}_{m},\forall j\in {\mathcal{V}}_{r}.\end{array}$ (24)
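The pseudo-division ${X}^{\text{T}}\odot Z$ can also be computed directly in min-plus arithmetic, which gives a reference value against which a CSP of the form (24) can be checked. A sketch (ours; `mp_ldiv` is a hypothetical name):

```python
def mp_ldiv(X, Z):
    # Left division X \ Z = X^T (.) Z: [Q]_ij = min_k(-X_ki + Z_kj),
    # where X is n x m and Z is n x r, giving an m x r result.
    n, m, r = len(X), len(X[0]), len(Z[0])
    return [[min(-X[k][i] + Z[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]
```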

4. Solution Methods for MPL Equations

4.1. MPL Representation

After defining the following relevant matrices and vectors, we introduce the MPL equations taken up in references  and  :

$n$ : number of tasks;

$p$ : number of external outputs;

$q$ : number of external inputs;

$B\in {ℝ}_{\mathrm{max}}^{n×q}$ : input matrix, ${\left[B\right]}_{ij}=$ { $e$ : if task $i$ has an input transition $j$ , $\epsilon$ : otherwise};

$C\in {ℝ}_{\mathrm{max}}^{p×n}$ : output matrix, ${\left[C\right]}_{ij}=$ { $e$ : if task $j$ has an output transition $i$ , $\epsilon$ : otherwise};

$d\in {ℝ}_{\mathrm{max}}^{n}$ : system parameter, ${\left[d\right]}_{i}$ : duration time in task $i$ ;

$u\in {ℝ}_{\mathrm{max}}^{q}$ : input vector, ${\left[u\right]}_{i}$ : input time to external input $i$ ;

$y\in {ℝ}_{\mathrm{max}}^{p}$ : output vector, ${\left[y\right]}_{i}$ : output time from external output $i$ ;

$x\in {ℝ}_{\mathrm{max}}^{n}$ : state vector, ${\left[x\right]}_{i}$ : start or completion time of task $i$ .

The earliest task-completion times of all tasks, ${x}_{E}$ , are calculated using

${x}_{E}=A\otimes {x}_{E}\oplus b,$ (25)

where matrix $A\in {ℝ}_{\mathrm{max}}^{n×n}$ is the weighted transition matrix and vector $b\in {ℝ}_{\mathrm{max}}^{n}$ is the weighted input vector, which satisfies $b=P\otimes B\otimes u$ ; here $P$ is the matrix whose diagonal elements are the duration times ${\left[d\right]}_{i}$ and whose off-diagonal elements are $\epsilon$ . The earliest output times at all output transitions, ${y}_{E}$ , are then calculated by

${y}_{E}=C\otimes {x}_{E}.$ (26)

Then, the latest task-starting times, ${x}_{L}$ , are calculated using

${x}_{L}={A}^{\text{T}}\odot {x}_{L}\wedge c$ (27)

where vector $c\in {ℝ}_{\mathrm{max}}^{n}$ is the weighted output vector, which satisfies $c=P\backslash \left(C\backslash {y}_{E}\right)$ . The latest input times, ${u}_{L}$ , are calculated using ${x}_{L}$ :

${u}_{L}={B}^{\text{T}}\odot {x}_{L}.$ (28)

As a consequence, the total floats of all tasks can be calculated using Equations (25) and (27):

$m=\left({x}_{L}+d\right)-{x}_{E}.$ (29)

All tasks can subsequently be classified into two types according to whether ${\left[m\right]}_{i}=0$ or ${\left[m\right]}_{i}>0$ : tasks with ${\left[m\right]}_{i}=0$ are critical, whereas tasks with ${\left[m\right]}_{i}>0$ are non-critical.

4.2. Reduction of MPL Equations to CSPs

We consider a solution method for Equation (25). A simple approach is to relax the equation into the following inequality:

${x}_{E}\ge A\otimes {x}_{E}\oplus b.$ (30)

The solution with the smallest elements satisfying Equation (30), also called the least solution, is given by ${x}_{E}={A}^{\ast }\otimes b$ . We can reduce Equation (30) to the following CSP for MIP:

$\begin{array}{l}{x}_{Ei}\ge {a}_{ik}+{x}_{Ek}\text{}\forall \left(i,k\right)\in {\epsilon }_{A},\\ {x}_{Ei}\le {a}_{ik}+{x}_{Ek}+M\left(1-{s}_{ik}\right)\text{}\forall \left(i,k\right)\in {\epsilon }_{A},\\ {x}_{Ei}\ge {b}_{i}\text{}\forall \left(i,k\right)\notin {\epsilon }_{A},\\ {x}_{Ei}\le {b}_{i}+M\left(1-{s}_{ik}\right)\text{}\forall \left(i,k\right)\notin {\epsilon }_{A},\\ {s}_{ik}\in \left\{0,1\right\}\text{}\forall i,k\in {\mathcal{V}}_{n},\\ {\sum }_{l=1}^{n}{s}_{il}\ge 1\text{}\forall i\in {\mathcal{V}}_{n}.\end{array}$ (31)

where ${\epsilon }_{A}$ denotes the set of index pairs $\left(i,k\right)$ such that ${a}_{ik}\ne \epsilon$ ; if ${a}_{ik}=\epsilon$ , then ${a}_{ik}+{x}_{Ek}=\epsilon$ follows, so such pairs impose no constraint. It is notable that we can compute ${A}^{\ast }\otimes b$ directly without calculating ${A}^{\ast }$ . After relaxing the equation, we reduce Equation (26) to a CSP for MIP with the help of Equation (17):
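As a cross-check on this reduction, the least solution ${x}_{E}={A}^{\ast }\otimes b$ can be obtained without forming ${A}^{\ast }$ by iterating $x\leftarrow A\otimes x\oplus b$ until it stabilizes, which takes at most $n$ steps for an acyclic precedence matrix. A sketch (ours; `earliest_times` is a hypothetical name):

```python
EPS = float('-inf')

def earliest_times(A, b):
    # Least solution of x = A (x) x (+) b via fixed-point iteration;
    # converges within n steps when the precedence graph of A is acyclic.
    n = len(A)
    x = list(b)
    for _ in range(n):
        new = [max(max(A[i][k] + x[k] for k in range(n)), b[i])
               for i in range(n)]
        if new == x:
            break
        x = new
    return x
```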

$\begin{array}{l}{y}_{Ei}\ge {C}_{ik}+{x}_{Ek}\text{}\forall i\in {\mathcal{V}}_{p},\forall k\in {\mathcal{V}}_{n},\\ {y}_{Ei}\le {C}_{ik}+{x}_{Ek}+M\left(1-{s}_{ik}\right)\text{}\forall i\in {\mathcal{V}}_{p},\forall k\in {\mathcal{V}}_{n},\\ {s}_{ik}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{p},\forall k\in {\mathcal{V}}_{n},\\ {\sum }_{l=1}^{n}{s}_{il}\ge 1\text{}\forall i\in {\mathcal{V}}_{p}.\end{array}$ (32)

Next, we consider a solution method for Equation (27). A simple approach is to relax the equation into the following inequality:

${x}_{L}\le {A}^{\text{T}}\odot {x}_{L}\wedge c.$ (33)

The solution with the largest elements satisfying Equation (33), also called the greatest solution, is given by ${x}_{L}={A}^{\ast }\backslash c$ . We can reduce Equation (33) to the following CSP for MIP:

$\begin{array}{l}{x}_{Li}\le -{a}_{ik}+{x}_{Lk}\text{}\forall \left(i,k\right)\in {\epsilon }_{A},\\ {x}_{Li}\ge -{a}_{ik}+{x}_{Lk}-M\left(1-{s}_{ik}\right)\text{}\forall \left(i,k\right)\in {\epsilon }_{A},\\ {x}_{Li}\le {c}_{i}\text{}\forall \left(i,k\right)\notin {\epsilon }_{A},\\ {x}_{Li}\ge {c}_{i}-M\left(1-{s}_{ik}\right)\text{}\forall \left(i,k\right)\notin {\epsilon }_{A},\\ {s}_{ik}\in \left\{0,1\right\}\text{}\forall i,k\in {\mathcal{V}}_{n},\\ {\sum }_{l=1}^{n}{s}_{il}\ge 1\text{}\forall i\in {\mathcal{V}}_{n}.\end{array}$ (34)

If ${a}_{ik}=\epsilon$ , then $-{a}_{ik}+{x}_{Lk}=\stackrel{¯}{\epsilon }$ follows, so such pairs impose no constraint. Here, it is again remarkable that Equation (34) computes ${A}^{\ast }\backslash c$ directly without calculating ${A}^{\ast }$ . After relaxing the equation, we reduce Equation (28) to a CSP for MIP with the help of Equation (24):

$\begin{array}{l}{u}_{Li}\le -{B}_{ki}+{x}_{Lk}\text{}\forall i\in {\mathcal{V}}_{q},\forall k\in {\mathcal{V}}_{n},\\ {u}_{Li}\ge -{B}_{ki}+{x}_{Lk}-M\left(1-{s}_{ik}\right)\text{}\forall i\in {\mathcal{V}}_{q},\forall k\in {\mathcal{V}}_{n},\\ {s}_{ik}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{q},\forall k\in {\mathcal{V}}_{n},\\ {\sum }_{l=1}^{n}{s}_{il}\ge 1\text{}\forall i\in {\mathcal{V}}_{q}.\end{array}$ (35)

Lastly, we focus on reducing Equation (29), the resulting formulation of which is as follows:

${m}_{i}=\left({x}_{Li}+{d}_{i}\right)-{x}_{Ei}\text{}\forall i\in {\mathcal{V}}_{n}.$ (36)

Then, we reduce Equation (36) to a CSP for MIP as follows.

$\begin{array}{l}{m}_{i}-M{\alpha }_{i}\le 0\text{}\forall i\in {\mathcal{V}}_{n},\\ {m}_{i}-\left(1/M\right){\alpha }_{i}\ge 0\text{}\forall i\in {\mathcal{V}}_{n},\\ {\alpha }_{i}\in \left\{0,1\right\}\text{}\forall i\in {\mathcal{V}}_{n}.\end{array}$ (37)

Since the total float ${m}_{i}$ is a real number, Equation (37) distinguishes the two cases by introducing the small positive constant $1/M$ . If ${\alpha }_{i}=0$ , then task $i$ is critical since ${m}_{i}=0$ follows. Conversely, if ${\alpha }_{i}=1$ , then task $i$ is non-critical because $1/M\le {m}_{i}\le M$ holds. This is the key technique for classifying every task as either critical or non-critical.
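The switching behavior of (37) can be illustrated directly: with ${m}_{i}$ fixed, ${\alpha }_{i}=0$ is feasible only when ${m}_{i}=0$ , and ${\alpha }_{i}=1$ only when $1/M\le {m}_{i}\le M$ . A sketch of the resulting classification, using the same big-M constant as Section 5 (the function name is ours):

```python
def classify(floats, M=70):
    # Feasible alpha_i under (37): alpha_i = 0 forces m_i = 0 (both
    # m_i <= M*0 and m_i >= (1/M)*0 hold with equality), while any
    # positive float m_i >= 1/M requires alpha_i = 1.
    alphas = []
    for m in floats:
        if m <= 0:
            alphas.append(0)   # critical task
        else:
            alphas.append(1)   # non-critical task
    return alphas
```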

5. Numerical Example

A numerical example is presented to validate the developed framework. We use a personal computer with the following execution environment:

・ machine: Dell Optiplex 9020;

・ CPU: Intel® Core™ i7-4790 3.60 GHz;

・ OS: Microsoft Windows 7 Professional;

・ memory: 4.0 GB.

To solve the CSPs, we use SCIP version 3.2.1. Figure 1 shows a simple manufacturing system with five processes, two inputs, and one output, which is the same system as in reference  . Each node represents a task whose processing time equals the node number. Raw materials are fed to inputs 1 and 2 at $t=3$ and $t=0$ , respectively.

$d=\left[\begin{array}{c}1\\ 2\\ 3\\ 4\\ 5\end{array}\right],\text{}A=\left[\begin{array}{ccccc}\epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\ \epsilon & \epsilon & \epsilon & \epsilon & \epsilon \\ 3& \epsilon & \epsilon & \epsilon & \epsilon \\ \epsilon & 4& \epsilon & \epsilon & \epsilon \\ \epsilon & \epsilon & 5& 5& \epsilon \end{array}\right],\text{}b=\left[\begin{array}{c}4\\ 2\\ \epsilon \\ \epsilon \\ \epsilon \end{array}\right],\text{}c=\left[\begin{array}{c}\stackrel{¯}{\epsilon }\\ \stackrel{¯}{\epsilon }\\ \stackrel{¯}{\epsilon }\\ \stackrel{¯}{\epsilon }\\ 7\end{array}\right].$

Figure 1. A simple manufacturing system with five processes (the same as  ).

The constant vectors and matrix $d$ , $A$ , $b$ , and $c$ reflect the processing times, precedence constraints, locations of the external inputs, and locations of the external outputs, respectively; their definitions appear in Section 4.1. Since the solver cannot treat the zero element $\epsilon =-\infty$ directly, the two constants big-M and $\epsilon$ must be set carefully. In accordance with the procedure in reference  , we set the constants to $M=70$ and $\epsilon =-20$ . The resulting values from the solver are as follows: the earliest task-completion times ${x}_{E}={\left[4\text{}2\text{}7\text{}6\text{}12\right]}^{\text{T}}$ , the earliest output time ${y}_{E}=12$ , the latest task-starting times ${x}_{L}={\left[3\text{}1\text{}4\text{}3\text{}7\right]}^{\text{T}}$ , the latest input times ${u}_{L}={\left[3\text{}1\right]}^{\text{T}}$ , the total floats $m={\left[0\text{}1\text{}0\text{}1\text{}0\right]}^{\text{T}}$ , and the criticalities of all tasks $\alpha ={\left[0\text{}1\text{}0\text{}1\text{}0\right]}^{\text{T}}$ . We were thus able to obtain the solutions by reducing the MPL equations to CSPs for MIP.
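As an independent check outside the MIP solver, the quantities above can be recomputed in plain Python with direct forward and backward recursions. This sketch is ours: the backward pass computes latest task-completion times and subtracts the durations $d$ to obtain the latest start times, rather than solving the CSPs of Section 4.

```python
EPS = float('-inf')
EPS_BAR = float('inf')

d = [1, 2, 3, 4, 5]
A = [[EPS] * 5, [EPS] * 5,
     [3, EPS, EPS, EPS, EPS],
     [EPS, 4, EPS, EPS, EPS],
     [EPS, EPS, 5, 5, EPS]]
b = [4, 2, EPS, EPS, EPS]      # b = P (x) B (x) u with u = [3, 0]

# Forward pass: earliest completion times x_E = A* (x) b.
x_E = list(b)
for _ in range(5):
    x_E = [max(max(A[i][k] + x_E[k] for k in range(5)), b[i])
           for i in range(5)]
y_E = x_E[4]                   # task 5 feeds the single output

# Backward pass: latest completion times xc, then latest starts x_L.
# Task k must complete before each successor i starts, i.e. by xc_i - d_i.
xc = [EPS_BAR] * 4 + [y_E]     # only task 5 has an output deadline
for _ in range(5):
    xc = [min(min((xc[i] - d[i]) if A[i][k] != EPS else EPS_BAR
              for i in range(5)), xc[k]) for k in range(5)]
x_L = [xc[k] - d[k] for k in range(5)]

u_L = [x_L[0], x_L[1]]         # inputs feed tasks 1 and 2
m = [x_L[k] + d[k] - x_E[k] for k in range(5)]
```

The recomputed values agree with the solver output reported above, including the total floats that identify tasks 1, 3, and 5 as critical.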

6. Conclusion

This research has developed a solution method for a project scheduling problem represented by a max-plus-linear form. The problem was reduced to constraint satisfaction problems for mixed integer programming, through which we calculated the earliest output times, latest task-starting times, latest input times, and critical tasks. Moreover, we developed a method to classify all tasks as either critical or non-critical by introducing a small positive constant $1/M$ . Future work includes a proper setting of this small positive constant as well as the consideration of resource contentions.

Cite this paper

Yokoyama, H. and Goto, H. (2017) Reduction and Analysis of a Max-Plus Linear System to a Constraint Satisfaction Problem for Mixed Integer Programming. American Journal of Operations Research, 7, 113-120. https://doi.org/10.4236/ajor.2017.72008

References

1. Goto, H. (2017) Reduction of Max-Plus Algebraic Equations to Constraint Satisfaction Problems for Mixed Integer Programming. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Science, E100-A, 427-430.

2. Heidergott, B., Olsder, G.J. and van der Woude, J. (2006) Max Plus at Work: Modeling and Analysis of Synchronized Systems. Princeton University Press, New Jersey.

3. Baccelli, F., Cohen, G., Olsder, G.J. and Quadrat, J.P. (1992) Synchronization and Linearity: An Algebra for Discrete Event Systems. John Wiley & Sons, New York.

4. Goto, H. and Masuda, S. (2004) On the Properties of the Greatest Subsolution for Linear Equations in the Max-Plus Algebra. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Science, E87-A, 424-432.

5. Goto, H. (2013) Address a Project Scheduling Problem using Max-Plus Algebra. Journal of the Society of Instrument and Control Engineers, 52, 1083-1089.

6. Yokoyama, H. and Goto, H. (2016) Resolution of Resource Contentions in the CCPM-MPL Using Simulated Annealing and Genetic Algorithm. American Journal of Operations Research, 6, 480-488. https://doi.org/10.4236/ajor.2016.66044