Computational & Applied Mathematics
Online version ISSN 1807-0302
Comput. Appl. Math. vol.29 no.3 São Carlos 2010
http://dx.doi.org/10.1590/S1807-03022010000300012
The global Arnoldi process for solving the Sylvester-Observer equation
B.N. Datta^{I}; M. Heyouni^{II}; K. Jbilou^{II}
^{I}Department of Mathematical Sciences, Northern Illinois University, DeKalb, IL 60115, U.S.A.
^{II}L.M.P.A., Université du Littoral, 50 rue F. Buisson, BP 699, F-62228 Calais Cedex, France. E-mails: heyouni@lmpa.univ-littoral.fr / jbilou@lmpa.univ-littoral.fr
ABSTRACT
In this paper, we present a method and associated theory for solving the multi-output Sylvester-Observer equation arising in the construction of the Luenberger observer in control theory. The proposed method is a particular generalization of the algorithm described by Datta and Saad in 1991 to the multi-output case. We give some theoretical results and present some numerical experiments to show the accuracy of the proposed algorithm.
Mathematical subject classification: 65F10, 65F30.
Key words: global Arnoldi, eigenvalue assignment, Sylvester-Observer equation.
1 Introduction
Consider the time-invariant linear control system

ẋ(t) = A x(t) + B u(t),  y(t) = C^{T} x(t),   (1.1)

where A ∈ ℝ^{n×n} is a large nonsymmetric matrix, B ∈ ℝ^{n×p}, C ∈ ℝ^{n×q}, x(t) ∈ ℝ^{n}, and u(t) ∈ ℝ^{p}. In many practical situations, the initial state x_{0} and the states x(t) for t > 0 are not explicitly known.
To implement the state-feedback control law for basic control design and analysis, such as state-feedback stabilization, eigenvalue and eigenstructure assignment, the LQR, and state-feedback H_{∞} control (see [12] for details), one needs to know the state variables explicitly. Thus the unmeasured state variables must be estimated. There are two closely related approaches for state estimation: state estimation via eigenvalue assignment, and the Sylvester-Observer equation approach (see Chapter 12 of [12]). This paper deals with numerical solutions of the large-scale Sylvester-Observer equation, suitable for large and sparse problems.
The Sylvester-Observer equation is a variation of the classical Sylvester equation. It has the following form:

A X − X 𝕋 = C G,   (1.2)

where the state matrix A and the output matrix C are known. The matrices X ∈ ℝ^{n×q}, 𝕋 ∈ ℝ^{q×q} and G ∈ ℝ^{q×q} are to be found. Note that the matrix 𝕋 can be chosen to be asymptotically stable (that is, every eigenvalue of 𝕋 has negative real part), and in that case it can be shown that the error vector e(t) = z(t) − X^{T} x(t) converges to zero as t increases, where x(t) is the solution of (1.1) and z(t) is the state of the associated Luenberger observer.
A conventional way to solve (1.2) is to choose the matrices 𝕋 and G in a suitable manner. For example, 𝕋 can be chosen as a real Schur matrix and G can be chosen to be equal to the identity matrix I_{q}. In this case, the Hessenberg-Schur algorithm [15] is a natural choice for solving (1.2). Another widely used method for solving this equation is due to Van Dooren [27, 28]. The method is based on the reduction of the observable pair (A, C) to an observer-Hessenberg pair (H, C̄). That is, an orthogonal matrix P is computed such that H = P^{T} A P is a block upper Hessenberg matrix and C̄ = C P has the form C̄ = (0, …, 0, C_{1}).
Note that if the matrices A and 𝕋 have disjoint spectra, then the Sylvester equation (1.2) has a unique solution X [12, 19]. Sylvester equations play an important role in control and communication theory, model reduction, image restoration and numerical methods for ordinary differential equations; see [4, 7, 12, 13, 20] and the references therein.
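For readers who want to experiment, the unique-solvability statement can be checked on a small dense instance. The following NumPy sketch uses the Kronecker-vectorization approach; the stable matrix T and all dimensions are illustrative choices, not the method of this paper:

```python
import numpy as np

# Small dense illustration: when Sp(A) and Sp(T) are disjoint, the Sylvester
# equation A X - X T = C has a unique solution, obtained here from the
# Kronecker-vectorized system (I_q ⊗ A - T^T ⊗ I_n) vec(X) = vec(C),
# with column-stacking vec.
rng = np.random.default_rng(1)
n, q = 6, 3
A = rng.standard_normal((n, n))
T = -np.diag(np.arange(1.0, q + 1))   # stable; spectrum disjoint from Sp(A) a.s.
C = rng.standard_normal((n, q))

K = np.kron(np.eye(q), A) - np.kron(T.T, np.eye(n))
X = np.linalg.solve(K, C.flatten(order="F")).reshape((n, q), order="F")
residual = np.linalg.norm(A @ X - X @ T - C)
```

The dense n q × n q system is only practical for tiny problems; this is exactly why the paper develops a Krylov method for the large sparse case.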
In the remainder of the paper, we choose G = I_{q} and suppose that the matrix C has the following form: C = (0_{n×r}, …, 0_{n×r}, C̃), where the n×r matrix C̃ is of full rank, q = m r and r ≪ n. So, letting ℭ = (0_{n×r}, …, 0_{n×r}, C̃) ∈ ℝ^{n×mr}, the Sylvester-Observer equation (1.2) becomes

A X − X 𝕋 = ℭ,   (1.4)
where A ∈ ℝ^{n×n} and C̃ ∈ ℝ^{n×r} are given and arbitrary, while 𝕋 ∈ ℝ^{mr×mr} and X ∈ ℝ^{n×mr} are to be determined such that:

– 𝕋 is stable, i.e., all its eigenvalues have negative real parts;

– the spectrum of the matrix 𝕋 is disjoint from that of A;

– the pair (𝕋^{T}, G^{T}) is controllable, i.e., the matrix [G^{T}, 𝕋^{T} − λ I_{q}] has full rank for every λ ∈ ℂ.
We consider the case when A is large and sparse, so that standard techniques such as the Hessenberg-Schur method for solving a Sylvester equation cannot be applied. Based on the Arnoldi process, a solution method suitable for large and sparse computation was proposed for the Sylvester-Observer equation (1.4) by Datta and Saad [9]. The Datta-Saad method is, however, restricted to the single-output case only; that is, when the right-hand side matrix C is of rank one, i.e., r = 1. The matrix A is used only in matrix-vector product evaluations, and this makes the method well-suited for the solution of large and sparse Sylvester-Observer equations.
In this paper, we propose a particular generalization of the Datta-Saad method [9] to the multi-output case, with the right-hand side having the structure given by (1.4) and with the matrix 𝕋 obtained as a Kronecker product. This generalization is based on the global Arnoldi method proposed in [21]. Other methods for solving small to medium Sylvester-Observer equations (with special right-hand sides) were introduced in [1, 6, 10, 11]. The proposed method consists in choosing a starting n×r block vector appropriately, and then running m steps of the global Arnoldi process with this starting block vector. The method, like the Datta-Saad method, requires the solution of a special type of eigenvalue assignment problem, which is solved with the simple recursive method of Datta proposed in [7]. The new algorithm simultaneously computes an mr×mr block upper Hessenberg matrix having a set of m eigenvalues, each with multiplicity r, and an F-orthonormal matrix X solving the Sylvester-Observer equation. When dealing with multiple eigenvalues, it would be interesting to apply the block Arnoldi algorithm instead of the global Arnoldi process for solving these Sylvester-Observer equations, but this would require establishing new properties of that algorithm. The numerical experiments show that both the solution matrix X and the assigned eigenvalues are accurate up to machine precision. Furthermore, the matrix X has a low condition number. The accuracy in both cases was measured by the corresponding relative residual norms.
The remainder of the paper is organized as follows. In Section 2, we review some properties of the ⊗-product and the ◊-product introduced in [3]. In Section 3, we show how to apply the global Arnoldi process for solving the multi-output Sylvester-Observer equation. Section 4 is devoted to some numerical experiments.
2 Background and notations
We use the following notations. For two matrices X and Y in ℝ^{n×r}, we consider the Frobenius inner product defined by ‹X, Y›_{F} = tr(X^{T} Y), where tr(Z) denotes the trace of a square matrix Z. The associated norm is the Frobenius norm, denoted ║·║_{F}. The notation X ⊥_{F} Y means that ‹X, Y›_{F} = 0.
The Kronecker product of two matrices A and B is defined by A ⊗ B := [a_{i,j} B] and satisfies the following properties:

(A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D),
(A ⊗ B)^{T} = A^{T} ⊗ B^{T},
(A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1}, if A and B are invertible.
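These identities are easy to verify numerically; the following NumPy snippet (sizes chosen arbitrarily) checks the mixed-product and transpose rules:

```python
import numpy as np

# Numerical check of the mixed-product and transpose rules for the
# Kronecker product.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
C, D = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))

lhs = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
rhs = np.kron(A @ C, B @ D)           # (A C) ⊗ (B D)
```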
In the following, we recall the ◊-product defined in [3] and list some properties that will be useful later.
Definition 2.1 [3]. Let A = [A_{1}, A_{2}, …, A_{p}] and B = [B_{1}, B_{2}, …, B_{l}] be matrices of dimension n×pr and n×lr, respectively, where the blocks A_{i} and B_{j} (i = 1, …, p; j = 1, …, l) are n×r matrices. Then the p×l matrix A^{T} ◊ B is defined by (A^{T} ◊ B)_{i,j} = ‹A_{i}, B_{j}›_{F} = tr(A_{i}^{T} B_{j}).
We notice that:

– If r = 1, then A^{T} ◊ B = A^{T} B.

– If p = l = 1, then A^{T} ◊ B = ‹A, B›_{F} (note that if A = B, then A^{T} ◊ A = ║A║_{F}^{2}).

– The matrix A = [A_{1}, A_{2}, …, A_{p}] is F-orthonormal if and only if A^{T} ◊ A = I_{p}, i.e., ‹A_{i}, A_{j}›_{F} = δ_{i,j} for i, j = 1, …, p.

The following properties also hold for the ◊-product.
Proposition 2.2 [3]. Let A, B, C ∈ ℝ^{n×pr}, D ∈ ℝ^{n×n}, L ∈ ℝ^{p×p} and α ∈ ℝ. Then we have:
1. (A + B)^{T} ◊ C = A^{T} ◊ C + B^{T} ◊ C .
2. A^{T} ◊ ( B+ C) = A^{T} ◊ B + A^{T} ◊C .
3. (α A)^{T} ◊ C = α (A^{T} ◊ C) = A^{T} ◊ (α C) .
4. (A^{T} ◊ B)^{T} = B^{T} ◊ A .
5. (D A)^{T} ◊ B = A^{T} ◊ ( D^{T} B) .
6. A^{T} ◊ [B (L ⊗ I_{r})] = (A^{T} ◊ B) L .
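As an illustration, the ◊-product and property 6 above can be checked numerically with a small NumPy sketch (the helper `diamond` and all dimensions are our own illustrative choices):

```python
import numpy as np

# (A^T ◊ B)_{i,j} = <A_i, B_j>_F = tr(A_i^T B_j) for the n×r blocks A_i, B_j.
def diamond(A, B, r):
    p, l = A.shape[1] // r, B.shape[1] // r
    out = np.zeros((p, l))
    for i in range(p):
        for j in range(l):
            out[i, j] = np.trace(A[:, i * r:(i + 1) * r].T @ B[:, j * r:(j + 1) * r])
    return out

rng = np.random.default_rng(2)
n, r, p = 8, 2, 3
A = rng.standard_normal((n, p * r))
B = rng.standard_normal((n, p * r))
L = rng.standard_normal((p, p))

# Property 6: A^T ◊ [B (L ⊗ I_r)] = (A^T ◊ B) L
lhs = diamond(A, B @ np.kron(L, np.eye(r)), r)
rhs = diamond(A, B, r) @ L
```

Property 4, (A^{T} ◊ B)^{T} = B^{T} ◊ A, can be checked the same way.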
3 The global Arnoldi process for the Sylvester-Observer equation
The global Arnoldi process [21], and other global processes such as the global Lanczos [17] and the global Hessenberg process [16], were recently used in the context of iterative methods for large sparse matrix equations. Combined with a Galerkin orthogonality condition or with a minimizing norm condition, these algorithms were applied for large sparse linear systems with multiple righthand sides and related problems [2, 17, 22, 23].
Before describing the global Arnoldi process, we first give some definitions and remarks on matrix Krylov subspace methods [17, 21].
Let A ∈ ℝ^{n×n}, V ∈ ℝ^{n×r} and let m be a fixed integer. The matrix Krylov subspace 𝒦_{m}(A, V) = span{V, A V, …, A^{m−1} V} is the subspace spanned by the matrices V, A V, …, A^{m−1} V. Let 𝕂_{m} be the n×mr block matrix whose j-th block is A^{j−1} V (j = 1, …, m); then every Z ∈ 𝒦_{m}(A, V) can be written as Z = 𝕂_{m} (y ⊗ I_{r}) for some y ∈ ℝ^{m}.
3.1 The global Arnoldi process
Given an n×n matrix A, an n×r starting block vector V and an integer m < n, the global Arnoldi process applied to the pair (A, V) is described as follows:

Algorithm 1. The modified global Arnoldi process.

– Inputs: A an n×n matrix, V an n×r matrix and m an integer.

– Step 0. V_{1} = V/║V║_{F};

– Step 1. For j = 1, …, m
      W = A V_{j};
      for i = 1, …, j
          h_{i,j} = ‹V_{i}, W›_{F};
          W = W − h_{i,j} V_{i};
      endfor
      h_{j+1,j} = ║W║_{F};
      V_{j+1} = W/h_{j+1,j};
  endFor.
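A direct NumPy transcription of Algorithm 1 (a sketch for experimentation; the function name and dimensions are ours) also lets one verify the Arnoldi relation stated below:

```python
import numpy as np

def global_arnoldi(A, V, m):
    """m steps of the modified global Arnoldi process (Algorithm 1)."""
    Vs = [V / np.linalg.norm(V, "fro")]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):                # F-orthogonalize against V_1..V_j
            H[i, j] = np.trace(Vs[i].T @ W)   # h_{i,j} = <V_i, W>_F
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W, "fro")
        Vs.append(W / H[j + 1, j])
    return Vs, H

rng = np.random.default_rng(3)
n, r, m = 30, 2, 4
A = rng.standard_normal((n, n))
Vs, Hbar = global_arnoldi(A, rng.standard_normal((n, r)), m)

# Check: A V_m - V_m (H_m ⊗ I_r) equals h_{m+1,m} V_{m+1} in its last
# block column and zero elsewhere.
Vm = np.hstack(Vs[:m])
R = A @ Vm - Vm @ np.kron(Hbar[:m, :], np.eye(r))
```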
The above process computes a set of F-orthonormal block vectors V_{1}, V_{2}, …, V_{m}, V_{m+1}, i.e.,

‹V_{i}, V_{j}›_{F} = δ_{i,j},  i, j = 1, …, m+1,

where 𝕍_{j} is the n×jr matrix 𝕍_{j} = [V_{1}, …, V_{j}] (j = 1, …, m+1), and an (m+1)×m upper Hessenberg matrix H̄_{m} whose nonzero entries are the h_{i,j} defined in Algorithm 1. We also have the following relations

A 𝕍_{m} = 𝕍_{m+1} (H̄_{m} ⊗ I_{r})   (3.2)
       = 𝕍_{m} (H_{m} ⊗ I_{r}) + h_{m+1,m} V_{m+1} E_{m}^{T},   (3.3)

where

E_{m} = (e_{m} ⊗ I_{r}) = [0_{r}, …, 0_{r}, I_{r}]^{T} with e_{m} = (0, …, 0, 1)^{T} ∈ ℝ^{m},

and the matrix H_{m} is the m×m upper Hessenberg matrix obtained from H̄_{m} by removing its last row.
Notice that the global Arnoldi process breaks down at step j, i.e., V_{j+1} = 0, if and only if the degree of the minimal polynomial of V with respect to A is exactly j. Moreover, it is easy to establish the following result.
Proposition 3.1. Apart from a multiplicative scalar, the polynomial p_{m} such that V_{m}_{+1} = p_{m}(A)V_{1} is the characteristic polynomial of the Hessenberg matrix H_{m} . Moreover, this polynomial minimizes the norm ║q(A) V_{1}║_{F} over all monic polynomials of degree m .
Proof. The proof is similar to the one given in [26] for the case r = 1 with the classical Arnoldi process.
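Proposition 3.1 can be checked numerically. We use here, as an assumption, the standard Krylov identity that the proportionality constant is the product of the subdiagonal entries, i.e. p_{m}(A)V_{1} = (h_{2,1} ⋯ h_{m+1,m}) V_{m+1} with p_{m} the characteristic polynomial of H_{m}:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, m = 25, 2, 4
A = rng.standard_normal((n, n))
V1 = rng.standard_normal((n, r))
V1 = V1 / np.linalg.norm(V1, "fro")

# m steps of the global Arnoldi process (Algorithm 1)
Vs, H = [V1], np.zeros((m + 1, m))
for j in range(m):
    W = A @ Vs[j]
    for i in range(j + 1):
        H[i, j] = np.trace(Vs[i].T @ W)
        W = W - H[i, j] * Vs[i]
    H[j + 1, j] = np.linalg.norm(W, "fro")
    Vs.append(W / H[j + 1, j])

# p_m(A)V_1 with p_m the characteristic polynomial of H_m (factored form)
pV = V1.astype(complex)
for lam in np.linalg.eigvals(H[:m, :]):
    pV = A @ pV - lam * pV
pV = pV.real                     # imaginary parts cancel for a real polynomial
tau = np.prod(np.diag(H, -1))    # h_{2,1} h_{3,2} ... h_{m+1,m}
```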
3.2 Application of the global Arnoldi process to the solution of the Sylvester-Observer equation
We start by rewriting equation (3.3) as

A 𝕍_{m} − 𝕍_{m} (H_{m} ⊗ I_{r}) = h_{m+1,m} V_{m+1} E_{m}^{T},   (3.4)

and let us observe the resemblance between this equation and equation (1.4). Hence, in order to solve the Sylvester-Observer equation (1.4), we first need to find a block vector V_{1} such that the corresponding last block vector V_{m+1} is equal to C̃, apart from a multiplicative scalar, and then transform the upper Hessenberg matrix H_{m} into T_{m} such that Sp(T_{m}) = {µ_{1}, …, µ_{m}}, where µ_{1}, …, µ_{m} are some given scalars. In particular, these scalars can be chosen as numbers with negative real parts, if desired. We can then take, in (1.4), 𝕋 = T_{m} ⊗ I_{r}, and observe that Sp(𝕋) = Sp(T_{m}), where each eigenvalue of 𝕋 is of multiplicity r.
To find the block vector V_{1}, we will use the result of Proposition 3.1. In fact, since the matrix H_{m} given by the global Arnoldi process must be transformed by an eigenvalue assignment algorithm [8] into T_{m} having the preassigned spectrum {µ_{1}, …, µ_{m}}, we obtain V_{1} = Y/║Y║_{F}, where Y is the solution of the block linear system

q_{m}(A) Y = C̃,   (3.5)

and

q_{m}(z) = (z − µ_{1})(z − µ_{2}) ⋯ (z − µ_{m})   (3.6)

is the characteristic polynomial of T_{m}. Note that the partial fraction approach suggested in [9] can be used to solve (3.5). It consists in decomposing the above block system into the m independent linear systems

(A − µ_{i} I_{n}) Y_{i} = C̃,  i = 1, …, m.   (3.7)

The solution Y is then obtained as the following linear combination (see [9]):

Y = ∑_{i=1}^{m} λ_{i} Y_{i},   (3.8)

where

λ_{i} = 1/q′_{m}(µ_{i})  with  q′_{m}(µ_{i}) = ∏_{j≠i} (µ_{i} − µ_{j}).
For more details about obtaining (3.7) and (3.8), we refer to [9].
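The partial fraction solve can be sketched as follows; dense solves stand in for the iterative solver that would actually be used for large sparse problems, and the dimensions and shifts are illustrative:

```python
import numpy as np

# With distinct shifts mu_i, the solution of q_m(A) Y = C_tilde is
# Y = sum_i Y_i / q_m'(mu_i), where (A - mu_i I) Y_i = C_tilde.
rng = np.random.default_rng(5)
n, r, m = 40, 2, 4
A = rng.standard_normal((n, n))
Ctil = rng.standard_normal((n, r))
mu = np.array([-1.0, -2.0, -3.0, -4.0])

Y = np.zeros((n, r))
for i in range(m):
    qprime = np.prod([mu[i] - mu[j] for j in range(m) if j != i])
    Y += np.linalg.solve(A - mu[i] * np.eye(n), Ctil) / qprime

# Verify q_m(A) Y = C_tilde with q_m(z) = (z - mu_1)...(z - mu_m)
qY = Y.copy()
for s in mu:
    qY = A @ qY - s * qY
```

The correctness of the combination rests on the Lagrange identity ∑_{i} ∏_{j≠i}(x − µ_{j})/q′_{m}(µ_{i}) = 1.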
In order to solve the m shifted linear systems (3.7), we can apply l steps of the global GMRES algorithm [21]. In this case, we construct an F-orthonormal matrix 𝕍_{l} = [V_{1}, …, V_{l}] whose blocks form an F-orthonormal basis of the matrix Krylov subspace 𝒦_{l}(A, C̃) = span{C̃, A C̃, …, A^{l−1} C̃}, and then we solve m independent small (l+1)×l least squares problems involving the shifted Hessenberg matrices (H̄_{l} − µ_{i} Ī_{l}), where Ī_{l} denotes the first l columns of the identity matrix of order l+1. The bulk of the work is in generating 𝕍_{l}, and this is done only once [9].
When the solution Y of the block linear system (3.5) is obtained, we apply m steps of the global Arnoldi process to the pair (A, Y) to get an F-orthonormal matrix 𝕍_{m} and an upper Hessenberg matrix H_{m}. We then have to modify the last column of H_{m} in such a way that the resulting Hessenberg matrix T_{m} has the desired set of eigenvalues {µ_{1}, …, µ_{m}}. To achieve this task, we will use a variant of the pole-assignment method proposed in [8].
Let H_{m} = [h_{i,j}] be an m×m unreduced upper Hessenberg matrix, and define the following quantities:

s = q_{m}(H_{m}) e_{1},   (3.9)

α = (h_{2,1} h_{3,2} ⋯ h_{m,m−1})^{−1},   (3.10)

where q_{m} is defined by (3.6) and e_{1} = (1, 0, …, 0)^{T} ∈ ℝ^{m}. If the parameters µ_{1}, …, µ_{m} form a set of distinct complex numbers such that Sp(H_{m}) ∩ {µ_{j}}_{j = 1,…,m} = ∅, we can show, using Theorem 5.1 in [9], that

Sp(H_{m} − α s e_{m}^{T}) = {µ_{1}, …, µ_{m}}.   (3.11)
Moreover, if the set {µ_{j}}_{j = 1,...,m} is invariant under complex conjugation, then the matrix _{m} is real [5].
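The last-column update can be sketched with the closed form f = q(H) e_{1}/(h_{2,1} ⋯ h_{m,m−1}); the precise normalization of s and α used in the paper may differ, so treat this as an assumption-laden illustration rather than the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(6)
m = 5
# Unreduced upper Hessenberg matrix with well-scaled subdiagonal entries
H = np.triu(rng.standard_normal((m, m))) + np.diag(np.full(m - 1, 2.0), -1)
mu = -np.arange(1.0, m + 1)              # target spectrum {-1,...,-5}

# q(H) e_1 in factored form, q(z) = (z - mu_1)...(z - mu_m)
qe1 = np.zeros(m)
qe1[0] = 1.0
for s in mu:
    qe1 = H @ qe1 - s * qe1
f = qe1 / np.prod(np.diag(H, -1))        # divide by h_{2,1} ... h_{m,m-1}

T = H.copy()
T[:, -1] -= f                            # modify only the last column
assigned = np.sort(np.linalg.eigvals(T).real)
```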
Next, we give some results that will be used later.
Lemma 3.2. Suppose that 𝕍_{m}, H_{m} are obtained after applying m steps of the global Arnoldi process to the pair (A, V), and let p be a polynomial of degree less than m. Then

𝕍_{m}^{T} ◊ (p(A) V_{1}) = p(H_{m}) e_{1}.   (3.12)

Proof. By induction, it is clear that A^{j} V_{1} = 𝕍_{m} (H_{m}^{j} e_{1} ⊗ I_{r}), for 0 ≤ j ≤ m−1, where e_{1} = (1, 0, …, 0)^{T} ∈ ℝ^{m}. Hence, for any polynomial p of degree less than m, we have

p(A) V_{1} = 𝕍_{m} (p(H_{m}) e_{1} ⊗ I_{r}),

and we get (3.12) by premultiplying, with the ◊-product, the previous equality on the left by 𝕍_{m}^{T} and by using relation 6 of Proposition 2.2.
Lemma 3.3 [5]. Let H_{m+1} = [H_{i,j}]_{i,j = 1,…,(m+1)} ∈ ℝ^{(m+1)×(m+1)} be an upper Hessenberg matrix and p a monic polynomial of degree m. Then

e_{m+1}^{T} p(H_{m+1}) e_{1} = ∏_{i=1}^{m} H_{i+1,i}.   (3.13)
Now, let {µ_{j}}_{j = 1,…,m} be a set of distinct complex numbers such that Sp(A) ∩ {µ_{j}}_{j = 1,…,m} = ∅. Let Y be the unique solution of the block linear system of equations (3.5). The following results show how to combine the global Arnoldi process with the assignment procedure in order to solve the Sylvester-Observer equation (1.4).
Proposition 3.4. Let 𝕍_{m+1} = [𝕍_{m}, V_{m+1}] and H_{m} be the F-orthonormal matrix and the upper Hessenberg matrix obtained after applying m steps of the global Arnoldi process to the pair (A, Y), respectively. Define

T_{m} = H_{m} − f e_{m}^{T},  with  f = β_{m} (𝕍_{m}^{T} ◊ C̃)  and  β_{m} = h_{m+1,m}/‹V_{m+1}, C̃›_{F}.   (3.14)

Then

A 𝕍_{m} − 𝕍_{m} (T_{m} ⊗ I_{r}) = β_{m} ℭ,  where  ℭ = (0_{n×r}, …, 0_{n×r}, C̃).   (3.15)

Moreover

Sp(T_{m}) = {µ_{1}, …, µ_{m}}.   (3.16)
Proof. As the starting block vector Y ensures that V_{m+1} is equal to C̃ up to a scaling factor, then C̃ ∈ 𝒦_{m+1}(A, V_{1}). Now, since the block vectors V_{1}, …, V_{m+1} form a basis of 𝒦_{m+1}(A, V_{1}), it follows that there exist g ∈ ℝ^{m} and λ ∈ ℝ such that

C̃ = 𝕍_{m} (g ⊗ I_{r}) + λ V_{m+1}.   (3.17)

Premultiplying (3.17) on the left by 𝕍_{m}^{T} and V_{m+1}^{T} (with the ◊-product), respectively, and using the fact that 𝕍_{m+1} = [𝕍_{m}, V_{m+1}] is an F-orthonormal matrix, we get

g = 𝕍_{m}^{T} ◊ C̃   (3.18)

and

λ = V_{m+1}^{T} ◊ C̃ = tr(V_{m+1}^{T} C̃).   (3.19)
Using (3.17), the fact that f = β_{m} g and V_{m+1}^{T} ◊ C̃ = tr(V_{m+1}^{T} C̃), together with the definition of β_{m} in (3.14), we obtain

β_{m} C̃ = 𝕍_{m} (f ⊗ I_{r}) + β_{m} tr(V_{m+1}^{T} C̃) V_{m+1}
        = 𝕍_{m} (f ⊗ I_{r}) + h_{m+1,m} V_{m+1},

which gives

𝕍_{m} (f ⊗ I_{r}) + h_{m+1,m} V_{m+1} = β_{m} C̃.

Finally, combining this last equality with (3.3), we get (3.15).
Now, since V_{1} = Y/║Y║_{F}, then using (3.5), property 3 of Proposition 2.2 and Lemma 3.2, we get

g = 𝕍_{m}^{T} ◊ C̃ = 𝕍_{m}^{T} ◊ (q_{m}(A) Y) = ║Y║_{F} (𝕍_{m}^{T} ◊ (q_{m}(A) V_{1})) = ║Y║_{F} q_{m}(H_{m}) e_{1}.
Thus f = β_{m} ║Y║_{F} s, where s is defined by (3.9). Moreover, we have

λ = V_{m+1}^{T} ◊ C̃ = ║Y║_{F} (V_{m+1}^{T} ◊ (q_{m}(A) V_{1})),

and by using (3.13) and Lemma 3.3, we get

λ = ║Y║_{F} ∏_{j=1}^{m} h_{j+1,j},  so that  β_{m} = h_{m+1,m}/λ = (║Y║_{F} ∏_{j=1}^{m−1} h_{j+1,j})^{−1},

where the parameter α is defined by (3.10). Hence α = β_{m} ║Y║_{F}, and we obtain (3.16) by using (3.11).

Next, we give a new expression for the scaling factor β_{m} given in Proposition 3.4. Let D be the last block column of A 𝕍_{m} − 𝕍_{m} (T_{m} ⊗ I_{r}); then D = β_{m} C̃ and so

β_{m} = ‹C̃, D›_{F}/║C̃║_{F}^{2}.   (3.20)
We note that the new expression (3.20) gives better numerical results than the one given in Proposition 3.4.
Finally, the solution X of the Sylvester-Observer equation (1.4) is given by

X = (1/β_{m}) 𝕍_{m}.

In the single-output case, i.e. r = 1, the solution obtained by the Datta-Saad method is orthonormal, while in the multiple-output case the obtained solution is F-orthonormal.
Using the previous results, we summarize the global Arnoldi process for the multiple-output Sylvester-Observer equation as follows.

Algorithm 2. The global Arnoldi algorithm for the multiple-output Sylvester-Observer equation.

– Inputs: A an n×n matrix, C̃ an n×r matrix and m parameters µ_{1}, …, µ_{m}.

– Step 1. Solve the linear system q_{m}(A) Y = C̃, where q_{m} is given by (3.6); i.e.,
    solve the m independent linear systems
    (A − µ_{i} I_{n}) Y_{i} = C̃,  i = 1, …, m.
    Get the solution Y as the linear combination Y = ∑_{i=1}^{m} λ_{i} Y_{i},
    where λ_{i} = 1/q′_{m}(µ_{i}) and q′_{m}(µ_{i}) = ∏_{j≠i} (µ_{i} − µ_{j}).

– Step 2. Apply m steps of the global Arnoldi process to the pair (A, Y) to generate
    𝕍_{m+1} = [V_{1}, …, V_{m+1}] and H_{m};

– Step 3. Change the last column of H_{m} to get T_{m} such that
    Sp(T_{m}) = {µ_{1}, …, µ_{m}}, i.e.,
    compute f = α s, where α and s are given by (3.10) and (3.9);
    set T_{m} = H_{m} − f e_{m}^{T};

– Step 4. Compute D, the last block column of A 𝕍_{m} − 𝕍_{m} (T_{m} ⊗ I_{r});
    set β_{m} = ‹C̃, D›_{F}/║C̃║_{F}^{2};

– Step 5. Set X = (1/β_{m}) 𝕍_{m} and 𝕋 = T_{m} ⊗ I_{r}.
4 Numerical experiments
The numerical tests were run using Matlab 7.1 on an Intel Pentium workstation, with machine precision equal to 2.22×10^{−16}. In all our experiments, the n×r matrix C̃ is generated randomly using the Matlab function rand. The m linear systems in Step 1 of Algorithm 2 are solved using the global GMRES method, and a maximum of 50 iterations was allowed in the global Arnoldi process. The initial guess was (Y_{i})_{0} = 0_{n×r}. The relative tolerance used when solving (3.7) was ε = 10^{−10}.
Example 1. The matrix A = gearmat used in this first example is of size n = 10000. It is a tridiagonal matrix (with two additional nonzero corner entries); for a detailed description of the Gear matrix, we refer the reader to [14] and [18]. We choose µ_{k} = −4k, for k = 1, …, m. The results obtained for different values of r and m are given in Table 4.1.
The approximation X_{m} = (1/β_{m}) 𝕍_{m} was computed with the parameter β_{m} as given by (3.20).
Example 2. In this second experiment, we consider the following matrix of size n = 2p with p = 4000:

A = ( 0_{p×p}  I_{p} ; L  D ),

where L and D are diagonal matrices of size p×p. Letting L = diag(l_{1}, …, l_{p}) and D = diag(d_{1}, …, d_{p}), the eigenvalues of A are given by the solutions of the quadratic equations

x^{2} − d_{k} x − l_{k} = 0,  k = 1, …, p.

Hence, if d_{k} = 2α_{k} and l_{k} = −(α_{k}^{2} + β_{k}^{2}), then Sp(A) = {λ_{k}, λ̄_{k}}_{k = 1,…,p}, where λ_{k} = α_{k} + ι β_{k}. In this example, we took r = 4, and the parameters α_{k}, β_{k} were random values uniformly distributed in [−1, 0] and [0, 1], respectively. In order to show the influence of the parameters {µ_{1}, …, µ_{m}}, we give the results for two choices. For the first choice (Table 4.2), we consider the set {µ_{1}, …, µ_{m}}, invariant under complex conjugation, such that the real and imaginary parts of µ_{1}, …, µ_{m} are uniformly distributed in [−2, −1] and [0, 1], respectively. For the second choice (Table 4.3), the real and imaginary parts of µ_{1}, …, µ_{m} are uniformly distributed in [−4, −3] and [0, 1], respectively. To show that formula (3.20) gives better numerical results than the one defined in Proposition 3.4, we set X_{m} = (1/β_{m}) 𝕍_{m}, where β_{m} is given by (3.20), and X̃_{m} = (1/β̃_{m}) 𝕍_{m}, where β̃_{m} is given in Proposition 3.4. The results obtained with different values of m are listed in Table 4.2 and Table 4.3.
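Assuming the block companion structure A = [[0, I_p], [L, D]] (the displayed matrix did not survive extraction, so this structure is our reading of the eigenvalue statement), the spectrum construction can be verified on a tiny instance:

```python
import numpy as np

# Each diagonal pair (l_k, d_k) contributes the two roots of
# x^2 - d_k x - l_k = 0; with d_k = 2 a_k and l_k = -(a_k^2 + b_k^2)
# these roots are the conjugate pair a_k ± i b_k.
rng = np.random.default_rng(7)
p = 5
a = rng.uniform(-1.0, 0.0, p)            # real parts in [-1, 0]
b = rng.uniform(0.0, 1.0, p)             # imaginary parts in [0, 1]
d = 2 * a
l = -(a**2 + b**2)

A = np.block([[np.zeros((p, p)), np.eye(p)],
              [np.diag(l), np.diag(d)]])
eigs = np.sort_complex(np.linalg.eigvals(A))
expected = np.sort_complex(np.concatenate([a + 1j * b, a - 1j * b]))
```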
In Table 4.3, we report the Frobenius norms of the residuals corresponding to the approximations X_{m} and X̃_{m} for different values of the parameter m, the total number of iterations (in parentheses) and the required CPU time.

As can be seen from Table 4.3, the results obtained with X_{m} are more accurate than those given by the approximation X̃_{m}.
Example 3. The last example describes a model of heat flow with convection in the given domain. The matrix A is obtained from the centered finite-difference discretization of a convection-diffusion operator on the unit square [0,1]×[0,1] with homogeneous Dirichlet boundary conditions. The dimension of the matrix A is n = n_{0}^{2}, where n_{0} = 70 is the number of inner grid points in each direction. As the eigenvalues of A are large, we divided the matrix A by ║A║_{1}. For this experiment, we used µ_{i} = −i, i = 1, …, m, and different values of m and r. In Table 4.4, we list the relative residual norms with the total number of iterations (in parentheses) and also the obtained CPU time (in seconds).
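The printed form of the operator was lost in extraction; as an assumed stand-in, the following sketch builds a typical centered-difference discretization of a convection-diffusion operator −Δu + c ∂u/∂x on the unit square, using the Kronecker structure such discretizations have, and applies the 1-norm scaling mentioned above (n_0 and the convection coefficient c are illustrative, not the paper's values):

```python
import numpy as np

n0 = 10                                   # inner grid points per direction (70 in the paper)
h = 1.0 / (n0 + 1)
c = 10.0                                  # assumed convection coefficient

I = np.eye(n0)
T = (2 * np.eye(n0) - np.eye(n0, k=1) - np.eye(n0, k=-1)) / h**2   # -u'' stencil
Dx = (np.eye(n0, k=1) - np.eye(n0, k=-1)) / (2 * h)                # centered u'

# 2-D operator via Kronecker sums, then the 1-norm normalization from the text
A = np.kron(I, T) + np.kron(T, I) + c * np.kron(I, Dx)
A = A / np.linalg.norm(A, 1)
```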
5 Conclusion
A global Arnoldi method, suitable for large and sparse computing, is proposed for the solution of the multi-output Sylvester-Observer equation arising in state estimation in a linear time-invariant control system. The method can be considered a generalization of the Arnoldi method proposed earlier by Datta and Saad for the single-output case. The proposed method is developed by exploiting an interesting relationship between the initially chosen block vector and the block vector obtained after m steps of the global Arnoldi method. This relationship holds for the standard Arnoldi method but does not seem to hold, in general, for the standard block Arnoldi method. The method has the additional feature that the solution produced is F-orthonormal, and in the single-output case the obtained solution is the same as the one obtained by the standard Arnoldi method. A numerical stability analysis of the method, as was done in the single-output case by Calvetti, Reichel and coauthors, remains to be carried out.
REFERENCES
[1] C. Bischof, B.N. Datta and A. Purkayastha, A parallel algorithm for the Sylvester-Observer matrix equation. SIAM Journal on Scientific Computing, 17 (1996), 686–698.
[2] A. Bouhamidi and K. Jbilou, Sylvester Tikhonov-regularization methods in image restoration. J. Comput. Appl. Math., 206(1) (2007), 86–98.
[3] R. Bouyouli, K. Jbilou, R. Sadaka and H. Sadok, Convergence properties of some block Krylov subspace methods for multiple linear systems. J. Comput. Appl. Math., 196 (2006), 498–511.
[4] D. Calvetti and L. Reichel, Application of ADI iterative methods to the restoration of noisy images. SIAM J. Matrix Anal. Appl., 17 (1996), 165–186.
[5] D. Calvetti, B. Lewis and L. Reichel, On the solution of large Sylvester-Observer equations. Numer. Linear Algebra Appl., 8 (2001), 1–16.
[6] J. Carvalho and B.N. Datta, A block algorithm for the Sylvester-Observer equation arising in state estimation. Proc. IEEE Conf. Dec. Control, Orlando, Florida (2001).
[7] B.N. Datta and K. Datta, Theoretical and computational aspects of some linear algebra problems in control theory, in: C.I. Byrnes, A. Lindquist (Eds.), Computational and Combinatorial Methods in Systems Theory, Elsevier, Amsterdam, pp. 201–212 (1986).
[8] B.N. Datta, An algorithm to assign eigenvalues in a Hessenberg matrix: single-input case. IEEE Trans. Autom. Control, AC-32 (1987), 414–417.
[9] B.N. Datta and Y. Saad, Arnoldi methods for large Sylvester-like observer matrix equations, and an associated algorithm for partial spectrum assignment. Linear Algebra and its Applications, 154–156 (1991), 225–244.
[10] B.N. Datta and C. Hetti, Generalized Arnoldi methods for the Sylvester-Observer equation and the multi-input pole placement problem, in Proceedings of the 36th IEEE Conference on Decision and Control, IEEE, Piscataway, pp. 4379–4383 (1997).
[11] B.N. Datta and D. Sarkissian, Block algorithms for state estimation and functional observers, in Proceedings IEEE Conf. Control Appl. Comput. Aided Control Syst., pp. 19–23 (2000).
[12] B.N. Datta, Numerical Methods for Linear Control Systems Design and Analysis, Academic Press (2003).
[13] M. Epton, Methods for the solution of AXD − BXC = E and its applications in the numerical solution of implicit ordinary differential equations. BIT, 20 (1980), 341–345.
[14] C.W. Gear, A simple set of test matrices for eigenvalue programs. Math. Comp., 23 (1969), 119–125.
[15] G.H. Golub, S. Nash and C. Van Loan, A Hessenberg-Schur method for the problem AX + XB = C. IEEE Trans. Autom. Control, AC-24 (1979), 909–913.
[16] M. Heyouni, The global Hessenberg and global CMRH methods for linear systems with multiple right-hand sides. Numer. Alg., 26 (2001), 317–332.
[17] M. Heyouni and K. Jbilou, Matrix Krylov subspace methods for large scale model reduction problems. Applied Mathematics and Computation, 181(2) (2006), 1215–1228.
[18] N.J. Higham, The Matrix Computation Toolbox. http://www.ma.man.ac.uk/~higham/mctoolbox.
[19] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis. Cambridge Univ. Press, Cambridge (1991).
[20] C. Hyland and D. Bernstein, The optimal projection equations for fixed-order dynamic compensation. IEEE Trans. Autom. Control, AC-29 (1984), 1034–1037.
[21] K. Jbilou, A. Messaoudi and H. Sadok, Global FOM and GMRES algorithms for matrix equations. Appl. Num. Math., 31 (1999), 49–63.
[22] K. Jbilou and A.J. Riquet, Projection methods for large Lyapunov matrix equations. Linear Alg. Appl., 415 (2006), 344–358.
[23] K. Jbilou, An Arnoldi based algorithm for large algebraic Riccati equations. Applied Mathematics Letters, 19(5) (2006), 437–444.
[24] T. Kailath, Linear Systems. Prentice-Hall, Englewood Cliffs (1980).
[25] D.G. Luenberger, Observers for multivariable systems. IEEE Transactions on Automatic Control, AC-11 (1966), 190–197.
[26] Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput., 7 (1986), 856–869.
[27] P. Van Dooren, The generalized eigenstructure problem in linear system theory. IEEE Transactions on Automatic Control, AC-26 (1981), 111–129.
[28] P. Van Dooren, Reduced order observers: A new algorithm and proof. Systems Control Lett., 4 (1984), 243–251.
Received: 08/XI/09.
Accepted: 24/V/10.
#CAM150/09