
ROBUST MPC FOR STABLE LINEAR SYSTEMS

M. A. Rodrigues and D. Odloak* (*To whom correspondence should be addressed)

Department of Chemical Engineering, University of São Paulo.

C.P. 61548, 05424-970, São Paulo, Brazil.

E-mail: odloak@usp.br

(Received: October 4, 2001; Accepted: December 27, 2001)

Abstract - In this paper, a new model predictive controller (MPC), which is robust for a class of model uncertainties, is developed. Systems with stable dynamics and time-invariant model uncertainty are treated. The development proposed herein is focused on real industrial systems where the controller is part of an on-line optimization scheme and works in the output-tracking mode. In addition, the system has a time-varying number of degrees of freedom, since some of the manipulated inputs may become constrained. Moreover, the number of controlled outputs may also vary during system operation. Consequently, the actual system may show operating conditions with a number of controlled outputs larger than the number of available manipulated inputs. The proposed controller uses a state-space model aimed at the representation of the predicted output trajectory. Based on this model, a cost function is proposed whereby the output error is integrated along an infinite prediction horizon. The case of multiple operating points is considered, in which the controller stabilizes a set of models corresponding to different operating conditions of the system. It is shown that closed-loop stability is guaranteed by the feasibility of a linear matrix optimization problem.

Keywords: Model Predictive Control, Robust Stability, Constrained Control.

INTRODUCTION

In the model predictive control strategy, at each sampling instant an optimal sequence of control inputs is computed by minimizing an open-loop cost function subject to hard constraints on the inputs and soft constraints on the outputs or states. In the output-tracking problem, the cost function is defined as a quadratic sum of the differences between the predicted outputs and the output reference values. The sum of the squared input moves over the control horizon is also included.

An important issue concerning MPC is robust stability, and recent reviews have focused on this subject (Mayne et al., 2000; Morari and Lee, 1999). The goal is to design a controller that is stable independent of operating conditions, which usually alter the model parameters, and of the adopted tuning parameters.

For the nominal case, where only the most likely model is considered, there are several methods to obtain a stable MPC. A popular approach assumes that the output prediction and control horizons are finite and stability is obtained by the inclusion of a terminal state constraint (Keerthi and Gilbert, 1988; Mayne and Michalska, 1990; Meadows et al., 1995; Polak and Yang, 1993). This method cannot usually be extended to the robust output-tracking problem because the terminal constraint cannot be simultaneously obeyed by different models. Another usual method to obtain a stable MPC considers an infinite prediction horizon (Rawlings and Muske, 1993). For stable systems, the infinite horizon open-loop objective function can be expressed as a finite horizon function in which a terminal state penalty matrix appears and has to be determined by the solution of a Lyapunov equation. The extension of this approach to the robust multi-operating point MPC design problem was proposed by Badgwell (1997) under the assumption that the cost function approaches zero at infinite time for all the process models. This assumption is not usually true for the output-tracking MPC problem. The existence of unknown disturbances invalidates the argument that the tracking problem could be converted into the standard regulator problem by defining shifted variables $\tilde{y}_k = y_k - y_s$ and $\tilde{u}_k = u_k - u_s$, where $y_s$ and $u_s$ are respectively the output and input steady-state values. Kothare et al. (1996) proposed a min-max predictive control algorithm, where the worst-case optimal linear state feedback is calculated as an LMI problem. This approach was extended to polytopic linear parameter varying systems and to the MPC scheduling problem by Lu and Arkun (2000). Lee and Cooley (2000) extended the approach of Rawlings and Muske (1993) to the robust case with a time-varying input matrix. In these cases, it is also assumed that the cost function approaches zero at infinite time for all system models.

The proposed method can be considered an extension of the IHMPC proposed by Rawlings and Muske (1993) to the robust MPC output-tracking problem of stable systems. When compared to the strategy of Badgwell (1997), this controller has the following advantages:

i) The state-space model used by the controller is different from the state-space models usually adopted in the MPC literature. With the model used here, the cost function of the infinite horizon MPC can be partitioned into two separate terms, and the term extending to infinite time can be integrated analytically. With this approach, there is no need to solve a Lyapunov equation to compute the weight of the terminal state. The on-line solution of this equation can be time-consuming for large systems.

ii) The cost function is modified to guarantee that it remains bounded for all models considered by the controller. This is done by the inclusion of slack variables in the output-predicted error. With this approach, it can be proved that the control law drives the process outputs to the desired reference values.

iii) The proposed method is applied to the output-tracking problem of stable systems without the need to assume that the process gain is the same regardless of operating conditions.

The paper is organized as follows: in Section 2, the adopted state-space formulation is presented and a nominal stabilizing MPC is derived, based on this representation. In Section 3, the nominal infinite horizon MPC is extended to the robust case with multi-operating points, and in Section 4 the application of the proposed controller to a typical chemical process is exemplified. Finally, Section 5 concludes the paper.

THE NOMINALLY STABLE MPC

The usual approach of MPC to predict the output trajectory is to consider a discrete-time version of the system model as follows:

where $x \in \mathbb{R}^{n_x}$ is the vector of states, $u \in \mathbb{R}^{n_u}$ is the vector of inputs, $\Delta u_k = u_k - u_{k-1}$ is the input increment, $k$ is the present time and $y \in \mathbb{R}^{n_y}$ is the vector of outputs. $A$, $B$ and $C$ are matrices with appropriate dimensions. In this paper, the study of the robustness of MPC assumes multi-operating points to approximate model uncertainties, i.e., the triplet $(A, B, C)$ is assumed to be any member of a finite set $\{A^j, B^j, C^j\}$, $j = 1, 2, \dots, L$. Each linear model $j$ corresponds to a specific operating point of the actual process.
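A plausible form of the discrete-time model of Eqs. (1)-(2), written here with the incremental input (an assumption; the positional input $u_k$ could appear instead):

$$
x_{k+1} = A\,x_k + B\,\Delta u_k, \qquad y_k = C\,x_k .
$$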

Now, our focus is on the development of a nominally stable predictive controller. In the output-tracking case, the infinite horizon MPC (IHMPC; Rawlings and Muske, 1993) minimizes the following open-loop quadratic cost function:

where $[e(jT)]_k = [y(jT)]_k - r$, $m$ is the control horizon, $[y(jT)]_k$ is the output prediction at instant $(k+j)T$, $T$ is the sampling time, $Q \in \mathbb{R}^{n_y \times n_y}$ and $R \in \mathbb{R}^{n_u \times n_u}$ are positive definite weighting matrices and $r$ is the desired output reference. In most of the papers found in the MPC literature for stable systems, the infinite horizon problem is handled by computing a terminal state-weighting matrix, which can be calculated by the following Lyapunov equation (Muske and Rawlings, 1993; Lee and Cooley, 2000):
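A standard form of the IHMPC cost and of the associated Lyapunov equation, consistent with the description above (the terminal weight is denoted $\bar{Q}$ here, which is our notation):

$$
J_k = \sum_{j=1}^{\infty} [e(jT)]_k^T\, Q\, [e(jT)]_k \;+\; \sum_{j=0}^{m-1} \Delta u_{k+j}^T\, R\, \Delta u_{k+j},
\qquad
\bar{Q} = C^T Q\, C + A^T \bar{Q}\, A ,
$$

so that, in the regulator case, the infinite output-error sum beyond the control horizon can be replaced by the terminal term $x_{k+m}^T\,\bar{Q}\,x_{k+m}$.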

Thus, the cost function can be written as follows:

Industrial MPCs usually allow the operator to include or exclude one or more outputs from the optimization problem that computes the control action. This is particularly useful when the operator detects a faulty output measurement. In this case, Eq. (4) has to be solved at every sampling instant k to determine the terminal weighting matrix corresponding to the set of valid outputs. This procedure may be time-consuming for actual applications where matrix A is large. In this paper, a different approach is followed to calculate the infinite horizon cost function. This is done using a state-space model where the parameters of the output prediction function are the model states. This model is designated here as the Output-Prediction-Oriented Model (OPOM). A simple example illustrates how this model can be constructed:

Example 1

Consider a system represented by

The corresponding step response of this system is $S(t) = K - K e^{-t/\tau} = d^0 + d^1 e^{-t/\tau}$. With a sampling time $T = 1$, the output-prediction-oriented model for this system can be written as follows:

where

Observe that C(t) depends on the prediction time t and is a continuous function. For a process with complex poles, some components of the state vector will be complex.
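A sketch of what these matrices look like for Example 1, reconstructed from the step-response coefficients above (our reconstruction; the exact arrangement in the original equation may differ):

$$
x_k = \begin{bmatrix} x^s_k \\ x^d_k \end{bmatrix},\qquad
x_{k+1} = \begin{bmatrix} 1 & 0 \\ 0 & e^{-T/\tau} \end{bmatrix} x_k
+ \begin{bmatrix} d^0 \\ e^{-T/\tau}\, d^1 \end{bmatrix} \Delta u_k,\qquad
\hat{y}(t) = C(t)\,x_k = \begin{bmatrix} 1 & e^{-t/\tau} \end{bmatrix} x_k ,
$$

with $d^0 = K$ and $d^1 = -K$. Here $x^s$ is the predicted steady-state output and $x^d$ is the amplitude of the decaying mode.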

Now, consider a SISO system where the relation between input u and output y is described by the following transfer function:

where $n_a, n_b \in \mathbb{N}$ and $n_b < n_a$. Here one assumes that the system has only stable poles with single multiplicity, but the method can be extended to the case of integrating and multiple poles. For this system, the unit step response at any time $t$ can be written as follows:

where $r_l$, $l = 1, 2, \dots, n_a$, are the stable poles of the system. The coefficients are obtained by partial fraction expansion of the system transfer function. Using the notation introduced in (1) and (2), the following equations can be written for the system states at sampling step $k+1$ and the output prediction at time $t$:
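A plausible form of the unit step response implied by this description (the coefficient symbols $d^0$ and $d^{\,l}$ follow Example 1 and are our notation):

$$
S(t) = d^0 + \sum_{l=1}^{n_a} d^{\,l}\, e^{r_l t}, \qquad \mathrm{Re}(r_l) < 0 ,
$$

and the corresponding state equations mirror those of Example 1, with one decaying state component per pole $r_l$.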

Analogously, for a MIMO system with nu inputs and ny outputs, the transfer function between input uj and output yi is represented by

The prediction of output i at time t can be written as follows:

where

It is convenient to separate the state vector into two components as follows: $x = [\,x^s \;\; x^d\,]^T$, where $x^s = [x_1, \dots, x_{n_y}]^T \in \mathbb{R}^{n_y}$ corresponds to the prediction of the system output at steady state and $x^d = [\,x_{n_y+1} \;\; x_{n_y+2} \;\cdots\; x_{n_y(n_u n_a + 1)}\,]^T \in \mathbb{R}^{n_y n_u n_a}$ is related to the stable modes of the system. With these definitions and using vector notation, Eq. (8) can be written as follows:

where

Analogously, equations (8), (9) and (10) can be written as follows:

where

Finally, equations (6) and (7) can be written as follows:

where
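A sketch of the resulting state-space form of the OPOM (referred to later in the text as Eqs. (15) and (16)); the symbols $D^0$, $D^d$, $F$ and $\Psi$ are our notation:

$$
\begin{aligned}
x^s_{k+1} &= x^s_k + D^0\,\Delta u_k,\\
x^d_{k+1} &= F\,x^d_k + D^d\,\Delta u_k,\\
\hat{y}(t) &= x^s_k + \Psi(t)\,x^d_k,
\end{aligned}
\qquad
F = \mathrm{diag}\!\left(e^{r_1 T}, e^{r_2 T}, \dots\right),
$$

where $D^0$ is the matrix of steady-state gains, $D^d$ collects the partial-fraction (step-response) coefficients, and $\Psi(t)$ collects the exponential modes $e^{r_l t}$.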

Development of a modified version of the infinite horizon MPC takes into account the following expression for the cost function that can be considered equivalent to Eq. (5):

which can be written as

where $r$ is the desired output reference value.
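A plausible form of the modified cost of Eq. (18), consistent with the discussion that follows (the split point of the integral, taken here as $mT$, is an assumption):

$$
J_{k,\infty} = \sum_{j=1}^{m-1} e(k+j)^T Q\, e(k+j)
\;+\; \int_{mT}^{\infty} e(t)^T Q\, e(t)\,dt
\;+\; \sum_{j=0}^{m-1} \Delta u_{k+j}^T R\, \Delta u_{k+j} .
$$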

In the integral term on the right-hand side of Eq. (18), e(t) is a function of continuous time t. However, inside the control horizon, k + j (j = 1, 2, ..., m-1) is a discrete variable. Using OPOM, the predicted output inside the control horizon can be given by

where

and nd was defined in Eq.(12).

If one defines $[e^s]_k = [x^s]_k - r$, then Eq. (19) gives

and the first term on the right-hand side of Eq.(18) can be written as follows:

To develop the integral term of Eq. (18), consider the prediction error at time $t$ such that $t > mT$:

where

Thus, the integral term becomes

At this point, some considerations about the terms in Eq. (20) are necessary. Firstly, for stable systems $\Psi(t)$ approaches zero exponentially as $t$ approaches infinity; secondly, the term involving the predicted steady-state error $e^s$ will be bounded only if this error is zero at the end of the control horizon, which is the condition expressed by Eq. (21).

Substituting Eq. (21) into Eq. (20) produces

where matrix $G$ is given by an integral of the decaying modes, which has an analytical closed form. Therefore, its numerical evaluation is quite simple.
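A sketch of the resulting expressions, using the OPOM notation introduced above (our reconstruction):

$$
\int_{mT}^{\infty} e(t)^T Q\, e(t)\,dt = [x^d]^T_{k+m}\, G\, [x^d]_{k+m},
\qquad
G = \int_{mT}^{\infty} \Psi(t)^T Q\, \Psi(t)\,dt ,
$$

whose entries are integrals of terms of the form $q\,e^{(r_i + r_j)t}$ and therefore evaluate to $-\,q\,e^{(r_i + r_j) mT}/(r_i + r_j)$, since $\mathrm{Re}(r_i + r_j) < 0$.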

Finally, the IHMPC objective function becomes

Thus, for stable systems, the output-tracking problem of IHMPC can be summarized by the following theorem:

Theorem 1: For stable systems, if there is a feasible solution to Problem P1) below, then the resulting control law is also stabilizing.

Problem P1)

where

subject to

where
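A hedged reconstruction of Problem P1), based on the cost and constraints discussed above; the pairing of the last two constraint lines with the numbers (24) and (25) cited in the proof, and the use of the terminal state $[x^d]_{k+m}$, are our assumptions:

$$
\begin{aligned}
\min_{\Delta u_k,\dots,\Delta u_{k+m-1}}\quad & \sum_{j=1}^{m-1} e(k+j)^T Q\, e(k+j) + [x^d]^T_{k+m}\, G\, [x^d]_{k+m} + \sum_{j=0}^{m-1} \Delta u_{k+j}^T R\, \Delta u_{k+j}\\
\text{subject to}\quad & [x^s]_{k+m} - r = 0 \quad \text{(Eq. (21))},\\
& |\Delta u_{k+j}| \le \Delta u_{\max}, \qquad j = 0,\dots,m-1,\\
& u_{\min} \le u_{k-1} + \textstyle\sum_{i=0}^{j} \Delta u_{k+i} \le u_{\max}, \qquad j = 0,\dots,m-1 .
\end{aligned}
$$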

Proof: It is easy to show that for stable systems $H > 0$. Consequently, if Problem P1) is feasible, the optimal solution is unique and obeys the constraints defined by Eqs. (24), (25) and (21). Let this solution be represented by $\Delta u_k^* = [\Delta u^*(k|k), \dots, \Delta u^*(k+m-1|k)]$ and let the corresponding cost function value be designated $J_{k,\infty}^*$. Next, assuming that the control action $\Delta u^*(k|k)$ is implemented at the true plant and using the receding horizon concept, Problem P1) is solved at sampling instant $k+1$. Borrowing the idea of Rawlings and Muske (1993), consider the value of the cost function corresponding to the shifted control sequence $\tilde{\Delta u}_{k+1} = [\Delta u^*(k+1|k), \dots, \Delta u^*(k+m-1|k), 0]$ and designate this value $\tilde{J}_{k+1,\infty}$. It is easy to show that the integral term in $\tilde{J}_{k+1,\infty}$ is equal to the integral term in $J_{k,\infty}^*$ and

It is clear that $\tilde{\Delta u}_{k+1}$ obeys the constraints represented by inequalities (24) and (25). It remains to be shown that it also obeys Eq. (21). Since $\Delta u_k^*$ is a solution of Problem P1), we have

From Eq. (13), it is clear that

Consequently, the modified cost function $J_{k,\infty}$ (defined in Eq. (18)) is a Lyapunov function for the infinite horizon MPC and the theorem is proved.

In the formulation of Problem P1), constraints on the outputs were not taken into account. The usual practice (Badgwell, 1997) is to assume that these constraints are soft and to penalize the constraint violations in the cost function. This strategy is equivalent to increasing the number of controlled outputs. Consequently, Problem P1) could be considered the general case, but with a number of controlled outputs larger than the number of manipulated inputs ($n_y > n_u$). In this case, Eq. (21) is not easily satisfied, and consequently Problem P1) often becomes infeasible. This means that IHMPC is not asymptotically stable and the system output does not converge to the reference value. Industrial MPCs have to face the situation where $n_y > n_u$ quite frequently. In this case, the assumption that the cost approaches zero as time tends to infinity (Rawlings and Muske, 1993; Kothare et al., 1996; Badgwell, 1997; Ralhan and Badgwell, 2000) is not true, and the cost function of IHMPC is not bounded. For stable systems with $n_y > n_u$, the open-loop optimization strategy of MPC produces permanent errors in the controlled outputs and $J_{k,\infty}$ tends to infinity. These permanent errors can be represented as slack variables in the IHMPC optimization problem and the cost function is redefined as follows:
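A plausible form of this redefined cost (our reconstruction; the sign convention for the slack and the slack weight $Q_\delta$, which corresponds to the $Q_2$ used in the case study, are assumptions):

$$
J_{k,\infty} = \sum_{j=1}^{m-1} \big(e(k+j)-\delta_k\big)^T Q\,\big(e(k+j)-\delta_k\big)
+ \int_{mT}^{\infty} \big(e(t)-\delta_k\big)^T Q\,\big(e(t)-\delta_k\big)\,dt
+ \sum_{j=0}^{m-1} \Delta u_{k+j}^T R\, \Delta u_{k+j}
+ \delta_k^T Q_\delta\, \delta_k ,
$$

with the terminal condition of Eq. (21) relaxed accordingly to $[x^s]_{k+m} - r = \delta_k$.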

where $\delta_k \in \mathbb{R}^{n_y}$ is the vector of slack variables. The model states can also be redefined, with state $[x^s]_k$ substituted by $[x^s + \delta]_k$ in Equation (15). The output-tracking problem can be reformulated as shown in the theorem below:

Theorem 2: For stable systems with any number of controlled outputs and manipulated inputs, if there is a feasible solution to Problem P2) below, then this solution is also stabilizing.

Problem P2)

where

subject to

Proof: The proof follows the same steps as those of the proof of Theorem 1. In particular, it can be shown that if $(\Delta u_k^*, \delta_k^*)$ corresponds to the optimal solution to Problem P2) at instant $k$, then the shifted sequence $(\tilde{\Delta u}_{k+1}, \delta_k^*)$ is a feasible solution at instant $k+1$. The control action is defined as in Theorem 1. Analogously, it can be shown that the corresponding cost function is smaller than the optimal cost function at instant $k$. Thus, the cost function defined in Eq. (26) is a Lyapunov function and there is asymptotic convergence.

Remark 1: Depending on the set point changes and/or disturbance magnitude, Problem P1) may require a large control horizon to be feasible. The inclusion of slack variables, as shown above, enlarges the feasibility region of IHMPC and produces a feasible solution even when a very short control horizon is employed.

In the next section, the IHMPC optimization problem will be written as a conventional LMI optimization problem, which can be extended to the case of model uncertainty.

A STABLE MPC FOR UNCERTAIN SYSTEMS

Before developing the equations for the robust IHMPC, let us focus on the cost function of Problem P2). Using some algebra one can redefine Problem P2) as follows:

Problem P2a)

subject to

and eqs. (29), (24) and (25).

When ny = nu, Inequality (31) can be written as follows:

where
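The reformulation referred to here is the standard Schur-complement step that converts a quadratic cost bound into a linear matrix inequality. A generic sketch, in our notation, with stacked decision vector $z$ and cost $z^T H z + 2 c_f^T z + \bar{c}$:

$$
\gamma \ \ge\ z^T H z + 2 c_f^T z + \bar{c}
\quad\Longleftrightarrow\quad
\begin{bmatrix} \gamma - 2 c_f^T z - \bar{c} & z^T H^{1/2} \\ H^{1/2} z & I \end{bmatrix} \succeq 0 ,
$$

which is linear in the decision variables $(z, \gamma)$ and can therefore be handled by LMI solvers.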

As presented in the Introduction of this paper, we assume herein that the uncertain model can be approximated by a set of local linear models. This type of mismatch representation has been called multi-model. It is assumed that the plant model is not precisely known, but it lies within a set $\Omega$ of $L$ locally linear stable models:

where each one of the members of set $\Omega$ corresponds to a different operating point of the system. This kind of uncertainty was also considered by Badgwell (1997), and our corresponding version of the robust IHMPC for the output-tracking case will be presented next.
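Based on the multi-model description of the previous section, the uncertainty set can be written as (a sketch in our notation):

$$
\Omega = \big\{\, (A^j, B^j, C^j) \;:\; j = 1, 2, \dots, L \,\big\},
$$

where each triplet defines one locally linear stable model of the plant, or equivalently one set of matrices of the OPOM of Eqs. (15)-(16).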

The state-space model given by equations (15) and (16) is used to represent the plant as follows:

We can now consider an IHMPC based on the following "min-max" problem:

Problem P3)

subject to

Observe that Equation (36) and Inequality (37) are written for each model of set $\Omega$, and $\gamma$ is an upper bound on all the cost functions corresponding to the models of set $\Omega$. Thus, Problem P3) can be reformulated as follows:

Problem P4)

subject to
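A hedged reconstruction of Problem P4), consistent with the description above and with the proof of Theorem 3; the pairing of the decision variables with the constraint numbers (39) and (40) cited in the proof is our assumption:

$$
\begin{aligned}
\min_{\gamma,\ \Delta u_k,\ \delta_k^1,\dots,\delta_k^L}\quad & \gamma \\
\text{subject to}\quad & J^{\,j}_{k,\infty}\big(\Delta u_k, \delta_k^j\big) \le \gamma, \qquad j = 1,\dots,L, \\
& [x^{s,j}]_{k+m} - r = \delta_k^j, \qquad j = 1,\dots,L, \\
& |\Delta u_{k+i}| \le \Delta u_{\max}, \quad u_{\min} \le u_{k-1} + \textstyle\sum_{l=0}^{i}\Delta u_{k+l} \le u_{\max}, \qquad i = 0,\dots,m-1,
\end{aligned}
$$

where $J^{\,j}_{k,\infty}$ is the cost of Eq. (26) evaluated with model $j$, and each cost bound can be written as an LMI by the Schur-complement step shown earlier.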

Robust stability of the controller defined by Problem P3) or P4) is summarized in the following theorem:

Theorem 3: Assume that the plant is stable and belongs to set $\Omega$, which is defined by the possible operating points of the system. If there is a feasible solution to Problem P4), then the IHMPC corresponding to this problem stabilizes all the plants contained in $\Omega$.

Proof: Since Problem P4) is convex, the existence of a feasible solution means that there is an optimal solution to this problem and it obeys constraints (39), (40), (24) and (25). Assume that Problem P4) is solved at time k and the optimal solution corresponds to the following set of decision variables:

where

Now assume that the first move of this optimal solution, $\Delta u^*(k|k)$, is implemented at each plant of set $\Omega$. Then, at time $k+1$, the following set of variables is a feasible solution to Problem P4):

where

Since $R$ and $Q$ are positive definite matrices, it is clear that the value of $\gamma$ corresponding to this feasible solution, say $\tilde{\gamma}_{k+1}$, satisfies $\tilde{\gamma}_{k+1} \le \gamma_k^*$, where $\gamma_k^*$ is the optimal value at time $k$.

Now, solving Problem P4) at time $k+1$ will result in $\gamma_{k+1}^* \le \tilde{\gamma}_{k+1}$ and consequently $\gamma_{k+1}^* \le \gamma_k^*$. Thus, $\gamma$, which is an upper bound on the cost functions of the plants defined by set $\Omega$, is a Lyapunov function for all the plants in $\Omega$. Consequently, the proposed IHMPC stabilizes the actual plant, which is unknown but belongs to $\Omega$, and the proof is complete.

Remark 2: In Theorem 3, the stability of the robust IHMPC, but not asymptotic stability, was proved. However, if one assumes that the steady-state gain matrix corresponding to the unsaturated inputs has full row rank and that the number of unsaturated inputs is at least equal to the number of outputs, then asymptotic stability of the robust IHMPC represented by Problem P4) can be assured. Here $\bar{\sigma}(\cdot)$ denotes the largest singular value of its argument and $\sigma_j(\cdot)$ its $j$th largest singular value.

Remark 3: Theorem 3 extends the infinite horizon MPC proposed by Rawlings and Muske (1993) to the case of output tracking of uncertain systems. With uncertain models, the output-tracking problem cannot be transformed into a regulatory problem for which the origin, $(y, u) = (0, 0)$, is the steady state of the system. In this case, a change of variables of the form $\tilde{y} = y - y_s$, $\tilde{u} = u - u_s$ (as suggested by Kothare et al., 1996) is not possible, since in the real system the presence of unmeasured disturbances makes it impossible to calculate $u_s$. Then, the hypothesis that for stable systems $J_{k,\infty}$ is bounded when $\Delta u(k+j) = 0$ $(j = 1, \dots, m-1)$ is no longer true. This assumption has been adopted by the existing approaches (Kothare et al., 1996; Badgwell, 1997; Ralhan and Badgwell, 2000; Lee and Cooley, 2000).

CASE STUDY

This section presents the simulation results of the robust IHMPC previously proposed. The system adopted in these simulations was borrowed from Badgwell (1997). It is an adiabatic CSTR, where a single irreversible first-order reaction is carried out. The reactor volume, fluid density, heat capacity, heat of reaction and flowrate are assumed constant, and the system can be represented by the following linearized continuous-time model:

The controlled outputs are the dimensionless exit concentration of component A ($x_1$) and the dimensionless exit temperature ($x_2$). The manipulated inputs are the dimensionless feed concentration ($u_1$) and the dimensionless feed temperature ($u_2$). $Da$ is the Damköhler number, $\alpha$ is the dimensionless activation energy and $\beta$ is the dimensionless heat of reaction. For the nominal model, it is assumed that $\alpha = 1$ and $Da = 20$. Under normal operating conditions $\beta = -0.45$, but occasional feed impurities can produce side reactions that can reduce $\beta$ to values as low as $-0.90$.

Based on the state-space model represented by Eq. (41), one can derive the following transfer function matrix:

The uncertain domain $\Omega$ was built using two different values for the heat of reaction: $\beta_1 = -0.45$ and $\beta_2 = -0.90$. The robust IHMPC of Problem P4) was implemented with sampling time $T = 1$, control horizon $m = 2$, output and input weights $Q_1 = \mathrm{diag}(1,1)$ and $R = \mathrm{diag}(10^{-4}, 10^{-4})$, respectively, and slack weights $Q_2 = \mathrm{diag}(10,10)$. The constraints on the manipulated inputs and on the input increments were $u_{\max} = 0.15$, $u_{\min} = 0.65$ and $\Delta u_{\max} = 0.05$. We adopted the MATLAB LMI Control Toolbox® as the optimization solver. In the first simulated case, output tracking was studied by applying set point changes of $+1$ to the first controlled output and $-1$ to the second controlled output.
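The authors solved Problem P4) with the MATLAB LMI Control Toolbox; their code is not reproduced here. The sketch below only illustrates the structure of the min-max problem (a common upper bound gamma on all model costs, one slack vector per model, and input rate/amplitude constraints) in Python with cvxpy, using entirely made-up prediction matrices; it is a conceptual illustration under stated assumptions, not the paper's controller.

```python
import numpy as np
import cvxpy as cp

# --- made-up problem data (NOT the CSTR of the case study) -----------------
np.random.seed(0)
ny, nu, m, npred, L = 2, 2, 2, 30, 2       # outputs, inputs, control horizon, prediction points, models
Q  = np.eye(ny)                            # output weight
R  = 1e-4 * np.eye(nu)                     # move-suppression weight
Qs = 10.0 * np.eye(ny)                     # slack weight
du_max, u_min, u_max = 0.05, -0.65, 0.65   # illustrative bounds (assumed values)
u_prev = np.zeros(nu)                      # last implemented input

# For each model j, the predicted error is assumed affine in the moves:
#   e_j = e0_j + S_j @ du   (e0_j: free response error, S_j: dynamic sensitivity)
e0  = [np.random.randn(npred * ny) for _ in range(L)]
S   = [0.5 * np.random.randn(npred * ny, m * nu) for _ in range(L)]
D0  = [np.array([[1.0, 0.5], [0.3, 1.2]]),     # steady-state gains, model 1
       np.array([[1.0, 0.5], [0.3, 2.4]])]     # steady-state gains, model 2
es0 = [np.random.randn(ny) for _ in range(L)]  # current steady-state errors

# --- decision variables -----------------------------------------------------
du    = cp.Variable(m * nu)                  # stacked input moves
delta = [cp.Variable(ny) for _ in range(L)]  # one slack vector per model
gamma = cp.Variable()                        # common upper bound on all costs

Qbar = np.kron(np.eye(npred), Q)
Rbar = np.kron(np.eye(m), R)
constraints = []
for j in range(L):
    # quadratic cost of model j (sum over prediction grid + moves + slack)
    cost_j = (cp.quad_form(e0[j] + S[j] @ du, Qbar)
              + cp.quad_form(du, Rbar)
              + cp.quad_form(delta[j], Qs))
    constraints.append(cost_j <= gamma)      # epigraph form of the min-max bound
    # relaxed terminal condition: predicted steady-state error equals the slack
    du_sum = sum(du[i * nu:(i + 1) * nu] for i in range(m))
    constraints.append(es0[j] + D0[j] @ du_sum == delta[j])

# input-move and input-amplitude constraints
for i in range(m):
    du_i = du[i * nu:(i + 1) * nu]
    constraints.append(cp.abs(du_i) <= du_max)
    u_i = u_prev + sum(du[l * nu:(l + 1) * nu] for l in range(i + 1))
    constraints += [u_i >= u_min, u_i <= u_max]

prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve()
print("worst-case cost bound gamma* =", gamma.value)
print("first input move to implement:", du.value[:nu])
```

The solver handles the convex quadratic bounds internally (effectively the Schur-complement/second-order-cone step discussed above); in a receding-horizon loop only the first move would be applied and the problem re-solved at the next sampling instant.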

Figure 1 shows the input and output profiles for this case. One can verify that the proposed robust controller performs suitably, driving the system outputs to their reference values smoothly, and satisfies the input constraints. It can be verified that the standard QDMC algorithm with these tuning parameters and plant models is unstable.


In the second simulated case, the behavior of the proposed controller was studied for a load disturbance. One assumes that at sampling step $k = 2$ the feed temperature is increased to 0.5 and the desired reference value is kept constant. The robust IHMPC defined by Problem P4) was used with the same tuning parameters as those in the previous case, but the constraint bounds were modified to $u_{\max} = 0.50$, $u_{\min} = -0.15$ and $\Delta u_{\max} = 0.20$.

The results for the load disturbance are plotted in Figure 2. It can be seen that the robust IHMPC can reject the disturbance in u2 quite easily, bringing the process outputs to their nominal steady state.


CONCLUSION

In this paper, a robust stable predictive controller based on the infinite horizon MPC proposed by Rawlings and Muske (1993) was presented. The paper focused on the output-tracking problem, and sufficient conditions were derived to guarantee stability for specific types of uncertainties, which are usually found in real world applications. The proposed strategy is based on a state-space output-prediction-oriented model (OPOM), which is built for the purpose of predicting the system output along a time horizon. This model representation allows integration of the squared error of the output prediction. The method was developed for the case in which model inaccuracy can be described by a discrete set of models, each one corresponding to a different operating point. The approach was illustrated by the application of the proposed method to a typical chemical process. Simulation results showed that the robust predictive controller proposed herein is able to take the system to new reference values and reject load disturbances for large model mismatches.

ACKNOWLEDGMENTS

Support for this work was provided by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) under grant 96/08087-0 and by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under grant 300860/97-9.

  • Badgwell, T.A. (1997). Robust Model Predictive Control of Stable Linear Systems, International Journal of Control, 68, 797-818.
  • Keerthi, S.S. and E.G. Gilbert (1988). Optimal Infinite-Horizon Feedback Laws for a General Class of Constrained Discrete-Time Systems: Stability and Moving-Horizon Approximations, Journal of Optimization Theory and Applications, 57, 265-293.
  • Kothare, M.V., V. Balakrishnan and M. Morari (1996). Robust Constrained Model Predictive Control Using Linear Matrix Inequalities, Automatica, 32, 1361-1379.
  • Lee, J.H. and B.L. Cooley (2000). Min-Max Predictive Control Techniques for a Linear State-Space System with a Bounded Set of Input Matrices, Automatica, 36, 463-473.
  • Lu, Y. and Y. Arkun (2000). Quasi-Min-Max MPC Algorithms for LPV Systems, Automatica, 36, 527-540.
  • Mayne, D.Q. and H. Michalska (1990). Receding Horizon Control of Nonlinear Systems, IEEE Transactions on Automatic Control, 35, 7, 814-824.
  • Mayne, D.Q., J.B. Rawlings, C.V. Rao and P.O.M. Scokaert (2000). Constrained Model Predictive Control: Stability and Optimality, Automatica, 36, 789-814.
  • Meadows, E.S., T.H. Henson, J.W. Eaton and J.B. Rawlings (1995). Receding Horizon Control and Discontinuous State Feedback Stabilization, International Journal of Control, 61, 1217-1229.
  • Morari, M. and J.H. Lee (1999). Model Predictive Control: Past, Present and Future. Computers and Chemical Engineering, 23, 667-682.
  • Muske, K.R. and J.B. Rawlings (1993). Model Predictive Control with Linear Models. AIChE Journal, 39, 262-287.
  • Polak, E. and T.H. Yang (1993). Robust Receding Horizon Control of Linear Systems with Input Saturation and Plant Uncertainty, International Journal of Control, 58, 613-663.
  • Ralhan, S. and T.A. Badgwell (2000). Robust Control of Stable Linear Systems with Continuous Uncertainty, Computers and Chemical Engineering, 24, 2533-2544.
  • Rawlings, J.B. and K.R. Muske (1993). The Stability of Constrained Multivariable Receding Horizon Control, IEEE Transactions on Automatic Control, 38, 1512-1516.