
To reach an acceptable controller strategy and tuning, it is important to state what is considered “good”. To do so, one can set up a closed-loop specification or formulate an optimal control problem. It is an interesting question whether the two can be equivalent. In this article two controller strategies, model predictive control (MPC) and constrained direct inversion (CDI), are compared in controlling the model of a pilot-scale water heater. Simulation experiments show that the two methods behave similarly if the manipulator movements are penalized only lightly in the MPC, and practically identically when a filtered reference signal is applied. Even if the same model is used, the tuning parameters must still be chosen appropriately to achieve similar results with both strategies. CDI uses an analytic approach, while MPC relies on numeric optimization; CDI is therefore more computationally efficient and can be used either as a standalone controller or to supplement numeric optimization.

Every local control problem is an inverse task. The desired output of the system, the set-point, is prescribed, and the input of the system, the MV, has to be found within the feasible range. MPC solves this inverse task as a constrained optimization problem, while CDI uses an analytical rule to calculate the MV, yet the two methods can be very similar. Goodwin [

The direct synthesis method for tuning PID controllers is similar in concept to constrained direct inversion [

In the literature, several PID tuning methods are described with one degree of freedom in tuning. One notable example is DS-d tuning [ ], which gives suggestions for the τ_{c} values. Unfortunately, there is no single rule for the decision.

The SIMC method described by Skogestad [ ] gives a suggestion for τ_{c} in fast control, and also a suggestion for slower controller tuning. That article has the advantage that one can override the suggestions, because the formulas are also provided.

Tuning an MPC is even more complicated: the three time horizons (control, prediction, and model horizons), the weighting factors of the manipulator punishment and, in the case of MIMO control, the weighting of the controlled variables are the most apparent parameters [

In [

This paper does not aim to answer the question of the best objective function or closed-loop time constant. Here we compare controller strategies with two different philosophies to reveal equivalences and differences. The article is built around a case study, which is introduced in the first section. The following parts of the article introduce the controller strategies, which are then compared. Finally, the conclusions are drawn.

The example system chosen for this control study is a pilot-scale water heater [

Modeling the process relies on first principles. Some assumptions are made: only a heat balance equation is needed, in which convection and heat transfer from the heater rod towards the flowing water are accounted for; heat loss towards the environment is neglected; perfect plug flow is assumed; and the temperature dependence of the material properties is neglected. The model can be written in the following form:

Solving a partial differential equation can be hard, so division into discrete units (cascades) is a good approximation with ordinary differential equations. It should also be noted that the signal of the differential pressure meter is transformed so that it is in linear correlation with the actual flow rate. By merging the constants of the equation, the following form results:

where

This equation describes the process well, but the MV appears in it only indirectly. To create a connection between the valve position and the flow rate, the steady-state characteristic was obtained and used as a lookup table. The relationship between u_{v} and p is zero order with dead time:

The last step before identification is to discretize the model. Previous studies revealed that identification may benefit from converting the model from continuous to discrete form. The model takes the following form:

Index i marks the number of the cascade, while index k marks the time instance.
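As an illustration, one discrete time step of such a cascade model might be sketched as below. This is an assumed form for demonstration only, not the identified model: the constants `a`, `b`, the number of units `n`, the flow signal `p`, and the heater power `P_heat` are hypothetical names, and the dead times are omitted for brevity.

```python
import numpy as np

def cascade_step(T, T_in, p, P_heat, a, b, n):
    """One discrete time step of an n-unit cascade model (illustrative
    structure): each unit receives convection from its upstream neighbour
    plus an equal share of the heater power."""
    T_new = np.empty_like(T)
    upstream = T_in
    for i in range(n):
        # convective transport scaled by the flow signal p, plus heat input
        T_new[i] = T[i] + a * p * (upstream - T[i]) + b * P_heat / n
        upstream = T[i]
    return T_new
```

With zero heating power and a uniform temperature profile equal to the inlet temperature, the state remains unchanged, which is a quick sanity check of the structure.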

The constants a, b, q and the dead times t_{d,p}, t_{d,q} and t_{d} were subject to identification. A measurement was carried out in which all the inputs were changed, but only one at a time. The resulting data set was used for identification.

The studied object is a system of relative order one, thus a first order specification is prescribed:

τ_{c} is the closed-loop time constant. The smaller τ_{c} is, the faster and more aggressive the control becomes. By substituting the derivative from the model equation, we get an algebraic equation, which can be rearranged to express the signal p, which is in direct connection with the MV:

The valve position is looked up from the previously obtained steady-state characteristic.
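A minimal sketch of this inversion rule is given below, assuming a simple convection heat balance for the last cascade unit. All symbols (`a`, `b`, `n`, `P_heat`) and the helper name `cdi_p` are illustrative, not the paper's identified model; the clipping step is what makes the inversion constrained.

```python
import numpy as np

def cdi_p(T_sp, T_out, T_prev, P_heat, tau_c, a, b, n, p_min, p_max):
    """Constrained direct inversion: prescribe first-order closed-loop
    behaviour dT/dt = (T_sp - T_out)/tau_c, substitute the model ODE of
    the last unit, and solve algebraically for the flow signal p."""
    desired_dTdt = (T_sp - T_out) / tau_c
    denom = a * (T_prev - T_out)
    if abs(denom) < 1e-9:           # avoid division by zero when temperatures coincide
        return p_min
    p = (desired_dTdt - b * P_heat / n) / denom
    return float(np.clip(p, p_min, p_max))   # enforce the MV constraints
```

The returned flow signal p would then be converted to a valve position via the steady-state characteristic, e.g. with `np.interp` over the measured lookup table.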

This paper is not meant to present a novel MPC method, but rather to exploit the flexibility of well-known elements. The model behind the MPC is the discrete nonlinear model discussed above. The optimizer is the built-in MATLAB fmincon function, set to use the SQP algorithm. If the value of the MV were optimized at every discrete time instance, the control horizon would be either too small for efficient control or too large for reaching optimality. To overcome this difficulty, only some of the points were optimized, while those between them were interpolated. The MV after the control horizon was the steady-state value, calculated from the steady-state model. The starting guess of the optimization was the sequence found to be optimal at the previous time instance, shifted in time accordingly. Modeling error is not studied here, thus feedback is not included in the algorithm.
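The warm-started optimization step described above can be sketched as follows. SciPy's SLSQP solver is used here in place of MATLAB's fmincon with SQP, the `cost` callable is assumed to wrap the model simulation and the objective function, and only the `n_opt` interpolation nodes of the MV sequence are decision variables; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(cost, u_prev_opt, n_opt, u_min, u_max):
    """One MPC iteration: optimize only n_opt interpolation nodes of the
    MV sequence, warm-started from the previous optimum shifted by one
    sample (the last node is duplicated to fill the gap)."""
    x0 = np.roll(u_prev_opt, -1)
    x0[-1] = x0[-2]                      # time-shifted warm start
    res = minimize(cost, x0, method="SLSQP",
                   bounds=[(u_min, u_max)] * n_opt)
    return res.x
```

In the actual controller, the intermediate MV samples would be interpolated between these nodes before simulating the prediction.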

The most widely used objective functions are the sum of squared errors and the sum of absolute errors, although there are numerous other possibilities. Here the squared error is studied, because the main effects are similar with the absolute error. As a recent example, [

Let the cost function be the following:

where λ is the weighting factor between the control error term and the MV punishment term:

pr denotes the prediction horizon, c the control horizon, and k the index of the present time sample.

It is important to note that the errors are averaged in time; thus differences between the control and prediction horizons do not affect the weighting between the two terms. Because of this, the same function can be used for the evaluation of the whole time series.
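A time-averaged quadratic cost consistent with this description might be implemented as below. Since the paper's exact equation is not reproduced in this extract, the function is an assumed form: `y` and `r` are the predicted CV and the reference over the prediction horizon, `du` the MV moves over the control horizon, and `lam` the weighting factor λ.

```python
import numpy as np

def mpc_cost(y, r, du, lam, pr, c):
    """Time-averaged quadratic cost: averaging each term over its own
    horizon keeps the weighting between the terms independent of the
    horizon lengths, so the same function can score a whole time series."""
    err = np.mean((r[:pr] - y[:pr]) ** 2)   # control error, averaged over pr samples
    move = np.mean(du[:c] ** 2)             # MV movement, averaged over c samples
    return err + lam * move
```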

Although MPC optimizes the MV over only a short part of the time series, it is still very close to the optimum of the whole time series, as the MV found to be optimal has little effect on the CV (and the cost function) outside the prediction horizon.

The question is how these results can be compared to constrained direct inversion. Constrained inversion of a relative first order system has one degree of freedom, the filtering time constant of the closed-loop specification. By varying this time constant, different cost function values can be obtained for the time series. Further, if the λ weighting factor changes, the value of the cost function also changes. The obtained surface is represented in the corresponding figure, marking the τ_{c} with the lowest cost function value at a given λ.

Here we can see the relationship between the closed-loop time constant and the λ weighting factor. It is visible that there is some kind of discontinuity between λ values of 0.35 and 0.4. The reason is that there are multiple local minima, and the location of the global minimum is suddenly transferred from one local minimum to another. To understand this effect, let the derivative of the cost function equal zero, which is a necessary condition of optimality:

By rearranging we get:

The τ_{c} values for a given cost function value, i.e. the values that are achievable, are presented in the corresponding figure.

The main reason for this effect is that the system, which is nonlinear by nature, is forced to act as a linear system by the direct inversion closed-loop specification. Sometimes this means that the system cannot act as fast as desired and the MV hits the constraints, while in other cases the system would have reserves to act faster, but because of the specification this is not exploited. The result is generally more rapid handling of the MV, which means that low cost function values are hard to achieve.

The two approaches become similar when the manipulator constraints dominate the controlled system.

Although our primary idea was that the two approaches are very similar in their inversion capabilities, many differences have been found so far. To resolve these differences, some further changes have to be made to the objective function of the MPC. It is generally accepted to filter the set-point signal to obtain the reference signal in the objective function. The reason for this is the same as for including a manipulator punishment term: preserving the stability of the MPC algorithm.

If λ is set to zero, then only the control error is punished. This way any reference signal can be followed, as long as the MV constraints are not hit. If the same filtering and time shift are applied as in the closed-loop specification of the constrained inversion, then the difference seems to disappear. Still, there are some minor sources of difference: the numeric inaccuracy of the optimizer and the different handling of constraints. These results are summed up in Figures 9 and 10.
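The set-point filtering mentioned above can be sketched as a discrete first-order filter; the discretization below (zero-order-hold form of a first-order lag with time constant τ_{c}) is an assumed implementation, and the names `sp`, `tau_c`, `dt` are illustrative.

```python
import numpy as np

def filter_reference(sp, tau_c, dt):
    """Discrete first-order filter turning the raw set-point sequence sp
    into the reference signal, matching a first-order closed-loop
    specification with time constant tau_c and sample time dt."""
    alpha = 1.0 - np.exp(-dt / tau_c)       # ZOH discretization of the lag
    r = np.empty_like(sp, dtype=float)
    r[0] = sp[0]
    for k in range(1, len(sp)):
        r[k] = r[k - 1] + alpha * (sp[k] - r[k - 1])
    return r
```

Any dead-time compensation would additionally shift this reference in time, as described for the closed-loop specification.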

Simulation results show that the objective function is a key point in optimal control. Even the parameters of the simple objective function introduced here can cause significant differences in the behavior of the closed loop. It was also shown that, although there are some differences, with a well-chosen closed-loop time constant the CDI can act in a very similar way to the MPC. It is evident that MPC usually performs better in reaching optimality, as the process is evaluated with the same objective function that was used in its optimization, while CDI follows a rule that is not meant to be an optimal solution. Modifying the reference signal resolves most of the differences. When the MV constraints are not hit, MPC and CDI act almost the same.

Finally, the differences between the two methods can come from different filtering of the reference signal, constraint handling, and the difference between the numeric and analytic approaches. For lower-level control it is advantageous to use a controller that is fast and as simple as possible. CDI is able to calculate the MV analytically, which is a great advantage compared to the computationally less efficient or less precise numeric optimization. For more complex systems, CDI can also support the optimization of the process by providing an initial guess that is already close to optimality.

This work was supported by the European Union and co-financed by the European Social Fund in the frame of the TAMOP-4.2.1/B-09/1/KONV-2010-0003 and TAMOP-4.2.2/B-10/1-2010-0025 projects.