
Model reduction in large scale MIMO dynamical systems via the block Lanczos method

M. HeyouniI; K. JbilouI; A. MessaoudiII; K. TabaaIII

IL.M.P.A, Université du Littoral, 50 rue F. Buisson BP 699, F-62228 Calais Cedex, France

IIÉcole Normale Supérieure Rabat, Département de Mathématiques, Rabat, Maroc

IIIFaculté des Sciences Agdal, Département de Mathématiques, Rabat, Maroc, E-mails: heyouni@lmpa.univ-littoral.fr / jbilou@lmpa.univ-littoral.fr / abdou.messaoudi@caramail.com / tabaa.khalid@caramail.com

ABSTRACT

In the present paper, we propose a numerical method for solving the coupled Lyapunov matrix equations

A P + P A^T + B B^T = 0   and   A^T Q + Q A + C^T C = 0

where A is an n × n real matrix and B, C^T are n × s real matrices with rank(B) = rank(C) = s and s << n. Such equations appear in control problems. The proposed method is a Krylov subspace method based on the nonsymmetric block Lanczos process. We use this process to produce low rank approximate solutions to the coupled Lyapunov matrix equations. We give some theoretical results, such as an upper bound for the residual norms, and perturbation results. By approximating the matrix transfer function F(z) = C (z I_n - A)^{-1} B of a Linear Time Invariant (LTI) system of order n by another one F_m(z) = C_m (z I_m - A_m)^{-1} B_m of order m, where m is much smaller than n, we construct a reduced order model of the original LTI system. We conclude this work by reporting some numerical experiments that show the numerical behavior of the proposed method.

Mathematical subject classification: 65F10, 65F30.

Key words: coupled Lyapunov matrix equations; Krylov subspace methods; nonsymmetric block Lanczos process; reduced order model; transfer functions.

1 Introduction

Consider a stable linear multi-input multi-output (MIMO) state-space model of the form:

where A ∈ ℝ^{n×n}, B, C^T ∈ ℝ^{n×s}, x(t) ∈ ℝ^n is the state vector, u(t) ∈ ℝ^s is the input vector and y(t) ∈ ℝ^s is the output vector of the system (1.1). When dealing with high-order models, it is reasonable to look for an approximate stable model

in which A_m ∈ ℝ^{m×m}, B_m, C_m^T ∈ ℝ^{m×s} and x_m(t) ∈ ℝ^m, y_m(t) ∈ ℝ^s, with m << n. Hence, the reduction problem consists in approximating the triplet {A, B, C} by another one {Â, B̂, Ĉ} of small size. Several approaches in this area have been used, such as Padé approximation [15, 33, 34], balanced truncation [29, 37], optimal Hankel norm [16, 17] and Krylov subspace methods [3, 6, 11, 12, 21, 22]. These approaches require the solution of coupled Lyapunov matrix equations [1, 13, 25, 27] of the form

where P, Q are the controllability and the observability Gramians of the system (1.1). For historical developments, applications and the importance of Lyapunov equations and related problems, we refer to [10, 13] and the references therein. Throughout the paper, we will assume that λ_i(A) + λ̄_j(A) ≠ 0 for all i, j = 1, ..., n, where λ_k(A) denotes an eigenvalue of A and λ̄_k(A) its complex conjugate. In this case, the equations (1.3) have unique solutions [26].

Direct methods for solving the Lyapunov matrix equations (1.3), such as those proposed in [5, 18, 24], are attractive if the matrices are of moderate size. These methods are based on the Schur or the Hessenberg decomposition. For large problems, several iterative methods have been proposed; see [14, 22, 23, 32]. These methods use Galerkin projection techniques to produce low-dimensional Sylvester or Lyapunov matrix equations that are solved by direct methods.

For the single-input single-output (SISO) case, i.e., s = 1 , two approaches based on Arnoldi and Lanczos processes were proposed in [21, 22] to solve large Lyapunov matrix equations. The Arnoldi and Lanczos processes were also applied in order to give an approximate reduced order model to (1.1) [2, 6, 11, 13].

Our purpose in this paper is to describe a method based on the nonsymmetric block Lanczos process [4, 15, 19, 36] for solving the coupled Lyapunov matrix equations (1.3). In this method, we project the initial equations onto block Krylov subspaces generated by the block Lanczos process to produce low-dimensional Lyapunov matrix equations that are solved by direct methods. By approximating the transfer function F(z) = C (z I_n - A)^{-1} B by another one F_m(z) = C_m (z I_m - A_m)^{-1} B_m, where A_m ∈ ℝ^{m×m}, B_m, C_m^T ∈ ℝ^{m×s}, and m is much smaller than n, we will construct an approximate reduced order model of the continuous time linear system (1.1).

The remainder of the paper is organized as follows. In the following section, we review the nonsymmetric block Lanczos process and give the exact solutions of the coupled Lyapunov matrix equations. In Section 3, we first show how to extract low rank approximate solutions to (1.3). Then, we give some theoretical results on the residual norms and demonstrate that the low rank approximate solutions are exact solutions to a pair of perturbed Lyapunov matrix equations. In Section 4, we consider the problem of obtaining reduced order models to LTI systems by approximating the associated transfer function. This approach is based on the nonsymmetric block Lanczos process. Finally, we will present some numerical experiments.

2 The block Lanczos method and coupled Lyapunov matrix equations

2.1 The nonsymmetric block Lanczos process

Let V ∈ ℝ^{n×s} and consider the block Krylov subspace K_m(A, V) = span{V, A V, ..., A^{m-1} V}. Notice that Z ∈ K_m(A, V) means that Z = Σ_{i=1}^{m} A^{i-1} V Ω_i for some blocks Ω_i ∈ ℝ^{s×s}.

We also recall that the minimal polynomial of A with respect to V is the nonzero monic polynomial of lowest degree q such that

Σ_{i=0}^{q} A^i V Ω_i = 0,

where Ω_i ∈ ℝ^{s×s} and Ω_q = I_s. The grade of V, denoted by grad(V), is the degree of this minimal polynomial; hence grad(V) = q.

In the sequel, we suppose that, given two matrices V, W ∈ ℝ^{n×s} of full rank, we compute the initial block vectors V_1 and W_1 using the QR decomposition of W^T V. Hence, if W^T V = δ β, where δ ∈ ℝ^{s×s} is an orthogonal matrix (i.e., δ^T δ = δ δ^T = I_s) and β ∈ ℝ^{s×s} is an upper triangular matrix, then

V_1 = V β^{-1}   and   W_1 = W δ.

Given an n × n matrix A and the initial n × s block vectors V, W, the block Lanczos process applied to the triplet (A, V, W) and described by Algorithm 1, generates sequences of n × s right and left block Lanczos vectors {V1, ..., Vm} and {W1, ..., Wm}. These block vectors form biorthonormal bases of the block Krylov subspaces Km(A, V1) and Km(AT, W1).

Algorithm 1. The nonsymmetric block Lanczos process [4]

Inputs : A an n × n matrix, V, W two n × s matrices and m an integer.

Step 0. Compute the QR decomposition of W^T V, i.e., W^T V = δ β;

V_1 = V β^{-1}; W_1 = W δ; V̂_2 = A V_1; Ŵ_2 = A^T W_1;

Step 1. For j = 1, ..., m

Specifically, after m steps, the block Lanczos procedure determines two block matrices 𝕍_m = (V_1, ..., V_m) ∈ ℝ^{n×ms} and 𝕎_m = (W_1, ..., W_m) ∈ ℝ^{n×ms}, and an ms × ms block tridiagonal matrix T_m (with diagonal blocks α_j, subdiagonal blocks β_j and superdiagonal blocks δ_j) that satisfy the following relations

A 𝕍_m = 𝕍_m T_m + V̂_{m+1} Ẽ_m,   (2.3)

A^T 𝕎_m = 𝕎_m T_m^T + Ŵ_{m+1} Ẽ_m,   (2.4)

and

𝕎_m^T 𝕍_m = I_ms,   (2.5)      𝕎_m^T A 𝕍_m = T_m,   (2.6)

where Ẽ_m = (0_s, ..., 0_s, I_s) ∈ ℝ^{s×ms}.

Notice that a breakdown may occur in Algorithm 1 if Ŵ_{j+1}^T V̂_{j+1} is singular, or if V̂_{j+1} or Ŵ_{j+1} is not of full rank. In [4], several strategies are proposed to overcome breakdowns and near-breakdowns in order to preserve the numerical stability of the block Lanczos process for eigenvalue problems. In the sequel, we assume that m < min{q, r}, where q and r are the degrees of the minimal polynomials of A with respect to V and of A^T with respect to W, respectively.
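As an illustration, the recurrence underlying Algorithm 1 can be sketched in Python/NumPy as follows. This is a minimal sketch only: it assumes no breakdown or near-breakdown occurs (so the triangular QR factors stay invertible) and omits the remedies of [4]; the function name and interface are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import qr

def block_lanczos(A, V, W, m):
    """Simplified nonsymmetric block Lanczos process (no breakdown handling)."""
    n, s = V.shape
    d, b = qr(W.T @ V)                      # Step 0: W^T V = delta * beta
    Vs = [V @ np.linalg.inv(b)]             # V_1 = V beta^{-1}
    Ws = [W @ d]                            # W_1 = W delta
    T = np.zeros((m * s, m * s))
    R, S = A @ Vs[0], A.T @ Ws[0]           # candidates for V_2, W_2
    for j in range(m):
        a = Ws[j].T @ R                     # alpha_j = W_j^T A V_j
        T[j*s:(j+1)*s, j*s:(j+1)*s] = a
        R -= Vs[j] @ a                      # unnormalised V_{j+1}
        S -= Ws[j] @ a.T                    # unnormalised W_{j+1}
        if j == m - 1:
            break
        d, b = qr(S.T @ R)                  # = delta_{j+1} beta_{j+1}
        Vs.append(R @ np.linalg.inv(b))
        Ws.append(S @ d)
        T[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = b     # subdiagonal block beta_{j+1}
        T[j*s:(j+1)*s, (j+1)*s:(j+2)*s] = d     # superdiagonal block delta_{j+1}
        R = A @ Vs[j+1] - Vs[j] @ d
        S = A.T @ Ws[j+1] - Ws[j] @ b.T
    # R, S now hold the unnormalised (m+1)-st blocks
    return np.hstack(Vs), np.hstack(Ws), T, R, S

# quick check on random data
rng = np.random.default_rng(0)
n, s, m = 30, 2, 4
A = rng.standard_normal((n, n))
Vm, Wm, T, R, S = block_lanczos(A, rng.standard_normal((n, s)),
                                rng.standard_normal((n, s)), m)
```

With these conventions one can verify numerically that the bases are biorthonormal and that projecting A onto them reproduces the block tridiagonal matrix, up to roundoff.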

2.2 Exact solutions of coupled Lyapunov matrix equations

Before deriving exact expressions for the solutions of the coupled Lyapunov matrix equations, let us give the following result which will be useful in the sequel.

Lemma 2.1. Let q = grad(B), r = grad(C^T) and m < min{q, r}. Assume that m steps of Algorithm 1 are applied to the triplet (A, B, C^T); then we have

A^j V_1 = 𝕍_m T_m^j Ẽ_1^T,  j = 0, ..., m - 1,   (2.7)

(A^T)^j W_1 = 𝕎_m (T_m^T)^j Ẽ_1^T,  j = 0, ..., m - 1,   (2.8)

where

Ẽ_1 = (I_s, 0_s, ..., 0_s) ∈ ℝ^{s×ms}.

Proof. Since T_m is a block tridiagonal matrix, for 0 ≤ j ≤ m - 2 the power T_m^j is a block band matrix with j upper and j lower block diagonals. Therefore, letting Ẽ_m = (0_s, ..., 0_s, I_s), we have

Ẽ_m T_m^j Ẽ_1^T = 0_s,  j = 0, ..., m - 2.

Using this last relation together with (2.3), we can prove (2.7) by induction.

Using the same arguments and (2.4), we obtain (2.8).

Using the previous lemma, we next give the low rank approximate solutions to (1.3). Let φ_q be the minimal polynomial of A with respect to B, where q = grad(B), and let ψ_r be the minimal polynomial of A^T with respect to C^T, where r = grad(C^T).

Define Φ_q and Ψ_r as the block companion matrices associated with φ_q and ψ_r, and denote by M_q and N_r the following Krylov matrices

M_q = (B, A B, ..., A^{q-1} B),   N_r = (C^T, A^T C^T, ..., (A^T)^{r-1} C^T).

Then

A M_q = M_q Φ_q   and   A^T N_r = N_r Ψ_r.   (2.9)

We now give the following theorem

Theorem 2.2. The general solutions P and Q of the coupled Lyapunov matrix equations (1.3) are given by

P = M_q X M_q^T   (2.10)      and      Q = N_r Y N_r^T,   (2.11)

where X ∈ ℝ^{qs×qs} and Y ∈ ℝ^{rs×rs} satisfy

Φ_q X + X Φ_q^T + Ẽ_1^T Ẽ_1 = 0   (2.12)      and      Ψ_r Y + Y Ψ_r^T + Ê_1^T Ê_1 = 0,   (2.13)

with

Ẽ_1 = (I_s, 0_s, ..., 0_s) ∈ ℝ^{s×qs} and Ê_1 = (I_s, 0_s, ..., 0_s) ∈ ℝ^{s×rs}.

Proof. Premultiplying (2.10) by A, postmultiplying it by A^T, and using (2.9) together with the fact that B = M_q Ẽ_1^T, we have

A P + P A^T + B B^T = M_q (Φ_q X + X Φ_q^T + Ẽ_1^T Ẽ_1) M_q^T = 0.

In the same way, we prove the result for Q.

Again using Lemma 2.1, we have the following result which states that the solutions of (1.3) could be obtained by using the block Lanczos process.

Theorem 2.3. Suppose that l = grad(B) = grad(C^T) and assume that l steps of Algorithm 1 have been run. Then the solutions P and Q of the coupled Lyapunov matrix equations (1.3) can be expressed as

P = 𝕍_l Γ 𝕍_l^T   (2.14)      and      Q = 𝕎_l Δ 𝕎_l^T,   (2.15)

where Γ and Δ are the solutions of the reduced Lyapunov matrix equations

T_l Γ + Γ T_l^T + Ẽ_1^T β β^T Ẽ_1 = 0   (2.16)      and      T_l^T Δ + Δ T_l + Ẽ_1^T Ẽ_1 = 0,   (2.17)

and

Ẽ_1 = (I_s, 0_s, ..., 0_s) ∈ ℝ^{s×ls}.

Proof. Since B = V_1 β, the general solution P of the first Lyapunov equation in (1.3) can be expressed as in Theorem 2.2. Using (2.7), we get

P = 𝕍_l Γ 𝕍_l^T,

where Ẽ_1 = (I_s, 0_s, ..., 0_s) is an s × ls matrix and Γ is the resulting ls × ls coefficient matrix, and we have

A P + P A^T + B B^T = 0.

Multiplying the last equality on the left by 𝕎_l^T, on the right by 𝕎_l, and using (2.5) and (2.6) with m = l, we obtain (2.16).

Note that (2.17) is obtained as above by using (2.8), (2.11) and the fact that C^T = W_1 δ^T and δ^T δ = δ δ^T = I_s.

Before ending this section, let us point out that in general grad(B) ≠ grad(C^T). Hence, if l = min{grad(B), grad(C^T)} and l steps of the nonsymmetric block Lanczos process have been run, then only (2.14) and (2.16) are satisfied when l = grad(B); similarly, only (2.15) and (2.17) are satisfied when l = grad(C^T).

3 Solving the coupled Lyapunov matrix equations by the block Lanczos process

Let q = grad(B), r = grad(C^T) and m < min{q, r}. Assuming that m steps of the nonsymmetric block Lanczos process have been run, we show how to extract low rank approximate solutions of the coupled Lyapunov matrix equations (1.3).

The results given in the previous section show that the matrices

P_m = 𝕍_m X_m 𝕍_m^T   (3.1)      and      Q_m = 𝕎_m Y_m 𝕎_m^T   (3.2)

can be considered as approximate solutions to (1.3), where X_m and Y_m are the solutions of the reduced Lyapunov equations

T_m X_m + X_m T_m^T + E_1 β β^T E_1^T = 0   (3.3)      and      T_m^T Y_m + Y_m T_m + E_1 E_1^T = 0,   (3.4)

and E_1 = (I_s, 0_s, ..., 0_s)^T ∈ ℝ^{ms×s}.

The low-dimensional Lyapunov equations (3.3) and (3.4) can be solved by direct methods such as those described in [5, 18, 24]. In the sequel, we assume that the eigenvalues λ_i(T_m) of the block tridiagonal matrix T_m constructed by the nonsymmetric block Lanczos process satisfy λ_i(T_m) + λ̄_j(T_m) ≠ 0 for i, j = 1, ..., ms. This condition ensures the existence and uniqueness of the solutions X_m and Y_m of the reduced Lyapunov equations, and that these solutions are symmetric and positive semidefinite.
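As a concrete illustration, the small equations can be handled with the Bartels-Stewart type solver available in SciPy (`scipy.linalg.solve_continuous_lyapunov`, which solves A X + X A^H = Q); the random matrices below are stand-ins for T_m and the projected right-hand sides, not data from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
k, s = 8, 2
# a small stable matrix: the -10 I shift pushes all eigenvalues into the
# open left half plane, so lambda_i + conj(lambda_j) != 0 is guaranteed
Tm = rng.standard_normal((k, k)) - 10.0 * np.eye(k)
Bm = rng.standard_normal((k, s))
Cm = rng.standard_normal((s, k))

# the solver handles A X + X A^H = Q, so pass Q = -Bm Bm^T (resp. -Cm^T Cm)
Xm = solve_continuous_lyapunov(Tm, -Bm @ Bm.T)
Ym = solve_continuous_lyapunov(Tm.T, -Cm.T @ Cm)

res1 = np.linalg.norm(Tm @ Xm + Xm @ Tm.T + Bm @ Bm.T)
res2 = np.linalg.norm(Tm.T @ Ym + Ym @ Tm + Cm.T @ Cm)
```

Both computed solutions are symmetric and, for a stable coefficient matrix, positive semidefinite, in agreement with the assumption made above.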

Next, we show how to compute an upper bound for the Frobenius residual norms, which will be used as a stopping criterion. Notice that the upper bound given below allows us to stop the algorithm without having to compute the approximate solutions P_m and Q_m. Hence, letting

R(P_m) = A P_m + P_m A^T + B B^T   (3.5)      and      R(Q_m) = A^T Q_m + Q_m A + C^T C   (3.6)

be the residuals associated to P_m and Q_m respectively, we have the following result.

Theorem 3.1. Let P_m, Q_m be the approximate solutions defined by (3.1), (3.3) and (3.2), (3.4) respectively, and let R(P_m) and R(Q_m) be the corresponding residuals defined by (3.5) and (3.6). Then

where X̃_m, Ỹ_m are the s × ms matrices formed by the last s rows of X_m and Y_m, respectively.

Proof. From (3.1) and (3.3), we have

then using (2.3) and the fact that B = V1β, we get

Since X_m is the solution of the reduced Lyapunov equation (3.3) and V̂_{m+1} = V_{m+1} β_{m+1}, then

As Xm is a symmetric matrix, it follows that

where

X̃_m = Ẽ_m X_m represents the last s rows of X_m.

Similarly, from (2.4), using C^T = W_1 δ^T, Ŵ_{m+1} = W_{m+1} δ_{m+1}^T, and the fact that Y_m is symmetric and is the solution of the reduced Lyapunov equation (3.4), we obtain the second inequality of (3.7).

To reduce the cost of the coupled Lyapunov block Lanczos method, the solutions of the low-order Lyapunov equations are computed only every k_0 iterations, where k_0 is a chosen parameter. Note also that the approximate solutions are computed only when the residual bounds fall below a chosen tolerance ε. Summarizing the previous results, we get the following algorithm.

Algorithm 2. The coupled Lyapunov block Lanczos algorithm (CLBL)

  • Inputs : A an n × n stable matrix, B an n × s matrix and C an s × n matrix.

  • Step 0 . Choose a tolerance ε > 0 and an integer parameter k_0; set k = 0; m = k_0;

  • Step 1 . For j = k + 1, k + 2, ..., m;

    construct the block tridiagonal matrix T_m, the biorthonormal

    bases {V_{k+1}, ..., V_m}, {W_{k+1}, ..., W_m}, and V̂_{m+1} and Ŵ_{m+1} by

    Algorithm 1 applied to the triplet (A, B, C^T);

    end For

  • Step 2 . Solve the low-dimensional Lyapunov equations:

  • Step 3 . Compute the upper bounds for the residual norms:

  • Step 4 . If r_m > ε or s_m > ε, set k = k + k_0; m = k + k_0 and go to Step 1.

  • Step 5 . The approximate solutions are represented by the matrix product:
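The core of Algorithm 2 (without the k_0-restart logic) can be sketched in Python as follows; `block_lanczos` is our simplified re-implementation of Algorithm 1 under a no-breakdown assumption, the small equation is solved with SciPy's Bartels-Stewart type solver, and the final check verifies the residual identity underlying the proof of Theorem 3.1. All names and test data are ours.

```python
import numpy as np
from scipy.linalg import qr, solve_continuous_lyapunov

def block_lanczos(A, V, W, m):
    # simplified Algorithm 1 (assumes no breakdown occurs)
    n, s = V.shape
    d, b = qr(W.T @ V)
    Vs, Ws = [V @ np.linalg.inv(b)], [W @ d]
    T = np.zeros((m * s, m * s))
    R, S = A @ Vs[0], A.T @ Ws[0]
    for j in range(m):
        a = Ws[j].T @ R
        T[j*s:(j+1)*s, j*s:(j+1)*s] = a
        R -= Vs[j] @ a
        S -= Ws[j] @ a.T
        if j == m - 1:
            break
        d, b = qr(S.T @ R)
        Vs.append(R @ np.linalg.inv(b))
        Ws.append(S @ d)
        T[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = b
        T[j*s:(j+1)*s, (j+1)*s:(j+2)*s] = d
        R = A @ Vs[j+1] - Vs[j] @ d
        S = A.T @ Ws[j+1] - Ws[j] @ b.T
    return np.hstack(Vs), np.hstack(Ws), T, R

rng = np.random.default_rng(2)
n, s, m = 80, 2, 6
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2.0 * np.eye(n)  # stable test matrix
B = rng.standard_normal((n, s))
C = rng.standard_normal((s, n))

Vm, Wm, Tm, Vhat = block_lanczos(A, B, C.T, m)
Bt = Wm.T @ B                                    # equals E_1 beta in exact arithmetic
Xm = solve_continuous_lyapunov(Tm, -Bt @ Bt.T)   # reduced equation of type (3.3)
Pm = Vm @ Xm @ Vm.T                              # low rank approximate solution

Res = A @ Pm + Pm @ A.T + B @ B.T
Xtil = Xm[-s:, :]                                # last s rows of Xm
Expr = Vhat @ Xtil @ Vm.T                        # residual expression
gap = np.linalg.norm(Res - (Expr + Expr.T))
```

The quantity `gap` should vanish up to roundoff: the true residual is carried entirely by the (m+1)-st Lanczos block and the last s rows of X_m, which is what makes the cheap stopping criterion of Steps 3-4 possible.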

We end this section with the following result, which shows that the approximate solutions P_m and Q_m are the exact solutions of two perturbed Lyapunov matrix equations.

Theorem 3.2. Suppose that m steps of Algorithm 2 have been run, and let P_m, Q_m be the approximate solutions defined by (3.1), (3.3) and (3.2), (3.4) respectively. Then P_m and Q_m are the exact solutions of the perturbed Lyapunov matrix equations

where

Proof. Multiplying (3.3) on the left by 𝕍_m and on the right by 𝕍_m^T, we obtain

Using (2.3), we have

and since 𝕎_m^T 𝕍_m = I_ms, then

Hence,

(A - Δ_1) P_m + P_m (A - Δ_1)^T + B B^T = 0,

where

We use the same arguments to show (3.9).

4 Reduced order models via the nonsymmetric block Lanczos process

In this section, we consider the following state formulation of a multi-input and multi-output linear time invariant system (LTI)

where x(t) ∈ ℝ^n is the state vector, u(t) ∈ ℝ^s is the input, y(t) ∈ ℝ^s is the output of interest, and A ∈ ℝ^{n×n}, B, C^T ∈ ℝ^{n×s}. Applying the Laplace transform to (4.1), we obtain

where X(z) , Y(z) and U(z) are the Laplace transforms of x(t) , y(t) and u(t) respectively.

The standard way of relating the input and output vectors of (4.1) is to use the associated transfer function F(z) such that Y(z) = F(z) U(z) . Hence, if we eliminate X(z) in the previous two equations, we get:

We recall that most model reduction techniques, like the moment-matching approaches, are based on this transfer function [2, 13, 15, 20, 22]. Moreover, if the number of state variables of the previous LTI system is very high (i.e., if the order n of A is large), direct computation of F(z) becomes inefficient or even prohibitive. Hence, it is reasonable to look for a low-order model that approximates the behavior of the original model (4.1). This low-order model can be expressed as follows

where A_m ∈ ℝ^{m×m}, B_m, C_m^T ∈ ℝ^{m×s}, with m << n.
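For moderate n, one can evaluate F(z) by a single linear solve per frequency point rather than forming the inverse explicitly; the snippet below (with arbitrary random data of ours) illustrates the point.

```python
import numpy as np

rng = np.random.default_rng(3)
n, s = 60, 2
A = rng.standard_normal((n, n)) - 4.0 * np.eye(n)   # a stable test matrix
B = rng.standard_normal((n, s))
C = rng.standard_normal((s, n))

def transfer(z):
    # F(z) = C (z I_n - A)^{-1} B, computed via one linear solve
    return C @ np.linalg.solve(z * np.eye(n) - A, B)

Fz = transfer(1.0 + 2.0j)   # one s x s sample of the transfer function
```

Even this solve costs O(n^3) per frequency point, which is exactly why the reduced model F_m(z), of size ms, is attractive when many frequency samples are needed.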

In [22] and for the single-input single-output case (i.e., s = 1 ), the authors proposed a method based on the classical Lanczos process to construct an approximate reduced order model to (4.1). The aim of this section is to generalize some of the results given in [22] to the multi-input multi-output case.

More precisely, let us see how to obtain an efficient reduced model of (4.1) by using the nonsymmetric block Lanczos process. This is done by computing an approximation F_m(z) of the original transfer function F(z). In fact, writing F(z) = C X where X = (z I_n - A)^{-1} B ∈ ℝ^{n×s} and considering the block linear system

(z I_n - A) X = B,

we see that approximating F(z) can be achieved by computing an approximate solution X_m of X using the block Lanczos method for solving linear systems with multiple right-hand sides [19].

Letting 𝕍_m, 𝕎_m and T_m be the biorthonormal bases and the block tridiagonal matrix, respectively, produced by the nonsymmetric block Lanczos process applied to the triplet (A, B, C^T), and starting from the initial guess X_0 = 0, we can show that

X_m = 𝕍_m (z I_ms - T_m)^{-1} E_1 β.

Since C 𝕍_m = δ Ẽ_1, the transfer function F(z) can then be approximated by

F_m(z) = δ Ẽ_1 (z I_ms - T_m)^{-1} E_1 β.

The above result allows us to suggest the following reduced order model for (4.1)

Next, we show that the reduced order model (4.5) proposed in the above approach approximates the behavior of the original model (4.1). Moreover, we give an upper bound for ||F_m(z) - F(z)|| which enables us to monitor the progress of the iterative process at each step. More precisely, we have the following results.

Theorem 4.1. The matrices T_m, β and δ generated by the block Lanczos process applied to the triplet (A, B, C^T) are such that the first 2m - 1 Markov parameters of the original and the reduced models are the same, that is,

C A^j B = δ Ẽ_1 T_m^j Ẽ_1^T β,  j = 0, 1, ..., 2m - 2.

Proof. For j ∈ {0, 1, ..., 2m - 2}, let j_1, j_2 ∈ {0, 1, ..., m - 1} be such that j = j_1 + j_2. Using the results of Lemma 2.1 and the facts that C = δ W_1^T and B = V_1 β, we have

C A^j B = δ ((A^T)^{j_2} W_1)^T (A^{j_1} V_1) β = δ Ẽ_1 T_m^{j_2} 𝕎_m^T 𝕍_m T_m^{j_1} Ẽ_1^T β = δ Ẽ_1 T_m^j Ẽ_1^T β.
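The matching property can be checked numerically without the Lanczos recurrence itself: any pair of bases of K_m(A, B) and K_m(A^T, C^T), biorthogonalized against each other, yields a reduced model with the same matched moments. A sketch of this equivalent oblique-projection construction (names and data are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n, s, m = 20, 2, 3
A = rng.standard_normal((n, n)) / n
B = rng.standard_normal((n, s))
C = rng.standard_normal((s, n))

# explicit bases of K_m(A, B) and K_m(A^T, C^T)
V = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(m)])
W = np.hstack([np.linalg.matrix_power(A.T, j) @ C.T for j in range(m)])

# oblique (two-sided) projection normalised so that W^T V = I implicitly
M = W.T @ V
Am = np.linalg.solve(M, W.T @ A @ V)
Bm = np.linalg.solve(M, W.T @ B)
Cm = C @ V

# the first 2m - 1 Markov parameters coincide
errs = [np.linalg.norm(C @ np.linalg.matrix_power(A, j) @ B
                       - Cm @ np.linalg.matrix_power(Am, j) @ Bm)
        for j in range(2 * m - 1)]
```

Explicit Krylov bases become ill-conditioned quickly, which is why the Lanczos recurrence, generating well-scaled biorthonormal bases of the same spaces, is preferred in practice.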

Before giving an upper bound for ||F_m(z) - F(z)||, we recall the definition of the Schur complement [35] and give the first matrix Sylvester identity [28].

Definition 4.2. Let M be a matrix partitioned into four blocks

M = ( E  F ; G  H ),

where the submatrix H is assumed to be square and nonsingular. The Schur complement of H in M, denoted by (M / H), is defined by

(M / H) = E - F H^{-1} G.

If H is not a square matrix, then a pseudo-Schur complement of H in M can still be defined [7, 8]. Now, considering two matrices M and N partitioned in this way,

we have the following property

Proposition 4.3. If the matrices M and N are square and nonsingular, then
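A standard consequence of such Schur complement identities, used repeatedly below, is that a corner block of the inverse of a partitioned matrix is the inverse of a Schur complement. A quick numerical check (random data of ours, generic nonsingularity assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
k = 3
E = rng.standard_normal((k, k)); F = rng.standard_normal((k, k))
G = rng.standard_normal((k, k)); H = rng.standard_normal((k, k)) + 4.0 * np.eye(k)

M = np.block([[E, F], [G, H]])
S = E - F @ np.linalg.solve(H, G)   # Schur complement (M / H)
Minv = np.linalg.inv(M)
# the leading k x k block of M^{-1} equals the inverse of (M / H)
```

This is what makes Schur complements useful here: blocks of (z I_ms - T_m)^{-1} can be obtained from small s × s Schur complements instead of inverting the full ms × ms matrix.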

Theorem 4.4. Let α_i, β, β_i, δ, δ_i (1 ≤ i ≤ m), V̂_{m+1} and Ŵ_{m+1} be the matrices obtained after m steps of the nonsymmetric block Lanczos process applied to the triplet (A, B, C^T). If (z I_ms - T_m) and (z I_n - A) are nonsingular and z is such that |z| > ||A||_2, then

where

and

D_m = (z I_s - α_m)^{-1},   D_{j-1} = (z I_s - α_{j-1} - δ_j D_j β_j)^{-1}   for j = m, ..., 2.

Proof. As C = δ W_1^T and B = V_1 β, we have

F(z) - F_m(z) = δ W_1^T G(z) β,

where G(z) = (z I_n - A)^{-1} V_1 - 𝕍_m (z I_ms - T_m)^{-1} E_1. Now, from (2.3), we have

(z I_n - A) 𝕍_m = 𝕍_m (z I_ms - T_m) - V̂_{m+1} Ẽ_m.

Multiplying the last relation on the left by (z I_n - A)^{-1}, on the right by (z I_ms - T_m)^{-1} E_1, we obtain

Similarly, by using (2.4), we have

𝕎_m^T (z I_n - A) = (z I_ms - T_m) 𝕎_m^T - Ẽ_m^T Ŵ_{m+1}^T,

and again, multiplying the last equality on the left by Ẽ_1 (z I_ms - T_m)^{-1} and on the right by (z I_n - A)^{-1} V̂_{m+1}, we have

Combining the formulas (4.10), (4.11) and letting

Γ_{m,1}(z) = Ẽ_m (z I_ms - T_m)^{-1} E_1,   Γ_{1,m}(z) = Ẽ_1 (z I_ms - T_m)^{-1} E_m,

where E_m = (0_s, ..., 0_s, I_s)^T ∈ ℝ^{ms×s}, we get

Γ(z) = Γ_{1,m}(z) Ŵ_{m+1}^T (z I_n - A)^{-1} V̂_{m+1} Γ_{m,1}(z),

and then

Finally, using the inequality ||(z I_n - A)^{-1}||_2 ≤ 1/(|z| - ||A||_2), valid for |z| > ||A||_2, we have

Next, set D_m = (z I_s - α_m)^{-1} and remark that Γ_{m,1}(z) is a Schur complement in the block tridiagonal matrix (z I_ms - T_m). Hence, using the result of Proposition 4.3, we have

where

Ẽ_1 = (I_s, 0_s, ..., 0_s) ∈ ℝ^{s×(m-1)s},   Ẽ_{m-1} = (0_s, ..., 0_s, I_s) ∈ ℝ^{s×(m-1)s},

and T̃_{m-1} is a block tridiagonal matrix having the same blocks as T_{m-1}, except the (m - 1, m - 1) block, which is equal to (α_{m-1} + δ_m D_m β_m).

Again, applying Proposition 4.3 to compute (z I_{(m-1)s} - T̃_{m-1})^{-1} E_1, and so on, we finally obtain (4.8). Similarly, we remark that Γ_{1,m}(z)^T is a Schur complement in (z I_ms - T_m)^T; then, using the same arguments as for Γ_{m,1}(z), we get (4.9).

Summarizing the previous results, we get the following algorithm

Algorithm 4. Model reduction via the block Lanczos process

  • Inputs : A the system matrix, B the input matrix, C the output matrix.

  • Step 0 . Choose a tolerance ε > 0 and an integer parameter k_0; set k = 0; m = k_0.

  • Step 1 . For j = k + 1, k + 2, ..., m

    construct the block tridiagonal matrix T_m, and the blocks V̂_{m+1} and Ŵ_{m+1}, by

    Algorithm 1 applied to the triplet (A, B, C^T);

    compute the matrices Γ1,m(z) and Γm,1(z) using (4.8), (4.9).

    end For

  • Step 2. Compute the upper bound for the residual norm:

  • Step 3 . If r_m > ε, set k = k + k_0; m = k + k_0 and go to Step 1.

  • Step 4 . The reduced order model is given by A_m = T_m, B_m = E_1 β and C_m = δ Ẽ_1.

5 Numerical experiments

In this section, we present some numerical experiments to illustrate the behavior of the block Lanczos process when applied to the solution of coupled Lyapunov equations and to model reduction in large scale dynamical systems. All the experiments were performed on a computer with an Intel Pentium 4 processor at 3.4 GHz and 1024 MB of RAM, using Matlab 6.5.

Experiment 1. In this first experiment, we compared the performance of the coupled Lyapunov block Lanczos (CLBL) and the coupled Lyapunov block Arnoldi (CLBA) algorithms [21]. Notice that:

  • In all the experiments, the parameter k0 used to compute the solutions of the low-order Lyapunov equations is k0 = 5.

  • For the coupled Lyapunov block Arnoldi algorithm, the tests were stopped when the residual given in [21] was less than ε = 10^{-6}.

  • For the coupled Lyapunov block Lanczos algorithm, the iterations were stopped when

We note that Res1 = ||A P_m + P_m A^T + B B^T||_F and Res2 = ||A^T Q_m + Q_m A + C^T C||_F are the exact Frobenius residual norms for the first and the second Lyapunov equation in (1.3), respectively. The number Iter of iterations required by CLBA corresponds to the total number of iterations needed for solving the two Lyapunov equations (1.3) separately by the Lyapunov block Arnoldi method.

The matrices A1 and A2 tested in this experiment come from the five-point discretization of the operators [30]

on the unit square [0, 1] × [0, 1] with homogeneous Dirichlet boundary conditions. The dimension of each matrix is n = n_0^2, where n_0 is the number of inner grid points in each direction. The obtained stiffness matrices A1 and A2 are sparse and nonsymmetric with a band structure [30]. The entries of the matrices B and C were random values uniformly distributed on [0, 1].
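The operators in [30] are of convection-diffusion type; the construction of a five-point stencil matrix of the stated size n = n_0^2 can be sketched as follows (pure diffusion part only, for illustration; adding a discretized convection term would make the matrix nonsymmetric, as for A1 and A2).

```python
import numpy as np
import scipy.sparse as sp

n0 = 10                                    # inner grid points per direction
h = 1.0 / (n0 + 1)                         # mesh size on the unit square
# 1D three-point second-difference operator, homogeneous Dirichlet conditions
T1 = sp.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n0, n0)) / h**2
I = sp.identity(n0)
# five-point stencil on the unit square: Kronecker sum, so n = n0**2
A = (sp.kron(I, T1) + sp.kron(T1, I)).tocsr()
```

The resulting matrix is sparse with a band structure of half-bandwidth n_0, exactly the kind of stiffness matrix described above.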

Experiment 2. The dynamical system used in this experiment is a nontrivial constructed model (FOM) from [9, 31]. Originally, the system obtained from the FOM model is SISO and of order n = 1006. So, in order to get a MIMO system, we modified the inputs and outputs. The state matrices are given by

where

and Ã4 = diag(-1, ..., -1000). The columns of B and C^T are chosen as random column vectors.
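For reference, the data of the standard FOM benchmark [9, 31] can be assembled as below. The three 2 × 2 rotation-type blocks are the usual benchmark choice (the paper's displayed formulas for Ã1, Ã2, Ã3 are assumed here to coincide with them), and the random B and C mimic the MIMO modification described above; the number of inputs s and the frequency point are ours.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(6)
# standard FOM benchmark blocks [9, 31] (assumed values)
A1 = np.array([[-1.0, 100.0], [-100.0, -1.0]])
A2 = np.array([[-1.0, 200.0], [-200.0, -1.0]])
A3 = np.array([[-1.0, 400.0], [-400.0, -1.0]])
A4 = np.diag(-np.arange(1.0, 1001.0))
A = block_diag(A1, A2, A3, A4)             # n = 1006, stable

s = 3                                      # random columns -> MIMO system
B = rng.random((1006, s))
C = rng.random((s, 1006))

def sigma_max(Fz):
    return np.linalg.norm(Fz, 2)           # largest singular value

w = 1.0e2                                  # one sample frequency
Fw = C @ np.linalg.solve(1j * w * np.eye(1006) - A, B)
peak = sigma_max(Fw)
```

Sweeping `w` over [10^{-1}, 10^5] and plotting `sigma_max` reproduces the kind of frequency response curve discussed next.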

The response plot and the error plot given below show the singular values σ_max(F(jω)) and σ_max(F_m(jω) - F(jω)), respectively, as functions of the frequency ω, where σ_max(·) denotes the largest singular value and ω ∈ [10^{-1}, 10^5]. As a stopping criterion, we used the upper bound (4.7). More precisely, we stopped the computation when

The frequency response (solid line) of the modified FOM model is shown at the top of Figure 5.1 and is compared with the frequency responses of the reduced models of order m = 12 obtained with the block Arnoldi process (dash-dotted line) and the block Lanczos process (dashed line). The exact errors ||F(z) - F_12(z)||_2 produced by the two processes are shown at the bottom of Figure 5.1.


6 Conclusion

In this paper, we applied the block Lanczos process for solving coupled Lyapunov matrix equations and also for model reduction. We gave some new theoretical results and showed the effectiveness of this process with some numerical examples.

Acknowledgments. We would like to thank the referees for their recommendations and helpful suggestions.

Received: 01/VII/07. Accepted: 05/XII/07.

#746/07.

  • [1] A.C. Antoulas and D.C. Sorensen, Projection methods for balanced model reduction, Technical Report, Rice University, Houston, TX, (2001).
  • [2] Z. Bai and Q. Ye, Error estimation of the Padé approximation of transfer functions via the Lanczos process, Elect. Trans. Numer. Anal., 7 (1998), 1-17.
  • [3] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, App. Num. Math., 43 (2002), 9-44.
  • [4] Z. Bai, D. Day and Q. Ye, ABLE: an adaptive block Lanczos method for non-Hermitian eigenvalue problems, SIAM J. Mat. Anal. Appl., 20(4) (1999), 1060-1082.
  • [5] R.H. Bartels and G.W. Stewart, Solution of the matrix equation AX + XB = C, Comm. ACM, 15 (1972), 820-826.
  • [6] D.L. Boley and B.N. Datta, Numerical methods for linear control systems, in: C. Byrnes, B. Datta, D. Gilliam and C. Martin (Eds.), Systems and Control in the Twenty-First Century, Birkhäuser, pp. 51-74, (1996).
  • [7] C. Brezinski and M.R. Zaglia, A Schur complement approach to a general extrapolation algorithm, Linear Algebra Appl., 368 (2003), 279-301.
  • [8] D. Carlson, What are Schur complements, anyway?, Linear Algebra Appl., 74 (1986), 257-275.
  • [9] Y. Chahlaoui and P. Van Dooren, A collection of benchmark examples for model reduction of linear time invariant dynamical systems, SLICOT Working Note 2002-2. http://www.win.tue.nl/niconet/NIC2/benchmodred.html
  • [10] B.N. Datta, Linear and numerical linear algebra in control theory: some research problems, Lin. Alg. Appl., 197-198 (1994), 755-790.
  • [11] B.N. Datta, Large-scale matrix computations in control, Applied Numerical Mathematics, 30 (1999), 53-63.
  • [12] B.N. Datta, Krylov subspace methods for large-scale matrix problems in control, Future Gener. Comput. Syst., 19(7) (2003), 1253-1263.
  • [13] B.N. Datta, Numerical Methods for Linear Control Systems Design and Analysis, Elsevier Academic Press, (2003).
  • [14] A. El Guennouni, K. Jbilou and A.J. Riquet, Block Krylov subspace methods for solving large Sylvester equations, Numer. Alg., 29 (2002), 75-96.
  • [15] P. Feldmann and R.W. Freund, Efficient linear circuit analysis by Padé approximation via the Lanczos process, IEEE Trans. on CAD of Integrated Circuits and Systems, 14 (1995), 639-649.
  • [16] K. Glover, All optimal Hankel-norm approximations of linear multivariable systems and their L-infinity error bounds, International Journal of Control, 39 (1984), 1115-1193.
  • [17] K. Glover, D.J.N. Limebeer, J.C. Doyle, E.M. Kasenally and M.G. Safonov, A characterisation of all solutions to the four block general distance problem, SIAM J. Control Optim., 29 (1991), 283-324.
  • [18] G.H. Golub, S. Nash and C. Van Loan, A Hessenberg-Schur method for the problem AX + XB = C, IEEE Trans. Autom. Control, AC-24 (1979), 909-913.
  • [19] G.H. Golub and R. Underwood, The block Lanczos method for computing eigenvalues, in: J.R. Rice (Ed.), Mathematical Software III, Academic Press, New York, pp. 361-377, (1977).
  • [20] E.J. Grimme, D.C. Sorensen and P. Van Dooren, Model reduction of state space systems via an implicitly restarted Lanczos method, Numer. Alg., 12 (1996), 1-32.
  • [21] I.M. Jaimoukha and E.M. Kasenally, Krylov subspace methods for solving large Lyapunov equations, SIAM J. Numer. Anal., 31(1) (1994), 227-251.
  • [22] I.M. Jaimoukha and E.M. Kasenally, Oblique projection methods for large scale model reduction, SIAM J. Matrix Anal. Appl., 16(2) (1995), 602-627.
  • [23] K. Jbilou and A.J. Riquet, Projection methods for large Lyapunov matrix equations, Linear Alg. Appl., 415(2-3) (2006), 344-358.
  • [24] S.J. Hammarling, Numerical solution of the stable, non-negative definite Lyapunov equation, IMA J. Numer. Anal., 2 (1982), 303-323.
  • [25] A.S. Hodel, Recent applications of the Lyapunov equation in control theory, in: R. Beauwens and P. de Groen (Eds.), Iterative Methods in Linear Algebra, Elsevier (North-Holland), pp. 217-227, (1992).
  • [26] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, (1991).
  • [27] J. LaSalle and S. Lefschetz, Stability by Liapunov's Direct Method with Applications, Academic Press, New York, (1961).
  • [28] A. Messaoudi, Recursive interpolation algorithm: a formalism for solving systems of linear equations, I. Direct methods, J. Comput. Appl. Math., 76 (1996), 31-53.
  • [29] B.C. Moore, Principal component analysis in linear systems: controllability, observability and model reduction, IEEE Trans. Automatic Contr., AC-26 (1981), 17-32.
  • [30] T. Penzl, LYAPACK: a MATLAB toolbox for large Lyapunov and Riccati equations, model reduction problems, and linear-quadratic optimal control problems. http://www.tu-chemintz.de/sfb393/lyapack
  • [31] T. Penzl, Algorithms for model reduction of large dynamical systems, Technical Report SFB393/99-40, TU Chemnitz, 1999. http://www.tu-chemintz.de/sfb393/sfb99pr.html
  • [32] Y. Saad, Numerical solution of large Lyapunov equations, in: M.A. Kaashoek, J.H. van Schuppen and A.C. Ran (Eds.), Signal Processing, Scattering, Operator Theory and Numerical Methods, Birkhäuser, Boston, pp. 503-511, (1990).
  • [33] Y. Shamash, Stable reduced-order models using Padé type approximations, IEEE Trans. Automatic Control, AC-19 (1974), 615-616.
  • [34] Y. Shamash, Model reduction using the Routh stability criterion and the Padé approximation technique, Internat. J. Control, 21 (1975), 475-484.
  • [35] I. Schur, Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind, J. Reine Angew. Math., 147 (1917), 205-232.
  • [36] Q. Ye, An adaptive block Lanczos algorithm, Num. Alg., 12 (1996), 97-110.
  • [37] K. Zhou, J.C. Doyle and K. Glover, Robust and Optimal Control, Prentice Hall, New Jersey, (1996).

Publication Dates

  • Publication in this collection
    21 July 2008
  • Date of issue
    2008

Sociedade Brasileira de Matemática Aplicada e Computacional - SBMAC, Rua Maestro João Seppe, nº 900, 16º andar - Sala 163, 13561-120 São Carlos - SP, Brazil. E-mail: sbmac@sbmac.org.br