## On-line version ISSN 1807-0302

### Comput. Appl. Math. vol.29 no.3 São Carlos  2010

#### http://dx.doi.org/10.1590/S1807-03022010000300012

The global Arnoldi process for solving the Sylvester-Observer equation

B.N. Datta^I; M. Heyouni^II; K. Jbilou^II

^I Department of Mathematical Sciences, Northern Illinois University, DeKalb, IL 60115, U.S.A.
^II L.M.P.A., Université du Littoral, 50 rue F. Buisson, BP 699, F-62228 Calais Cedex, France. E-mails: heyouni@lmpa.univ-littoral.fr / jbilou@lmpa.univ-littoral.fr

ABSTRACT

In this paper, we present a method and associated theory for solving the multi-output Sylvester-Observer equation arising in the construction of the Luenberger observer in control theory. The proposed method is a particular generalization of the algorithm described by Datta and Saad in 1991 to the multi-output case. We give some theoretical results and present numerical experiments to show the accuracy of the proposed algorithm.
Mathematical subject classification: 65F10, 65F30.

Key words: global Arnoldi, eigenvalue assignment, Sylvester-Observer equation.

1 Introduction

Consider the time-invariant linear control system

ẋ(t) = A x(t) + B u(t),  x(0) = x₀,
y(t) = Cᵀ x(t),    (1.1)

where A ∈ ℝ^{n×n} is a large nonsymmetric matrix, B ∈ ℝ^{n×p}, C ∈ ℝ^{n×q}, x(t) ∈ ℝⁿ, and u(t) ∈ ℝᵖ. In many practical situations, the initial state x₀ and the states x(t) for t > 0 are not explicitly known.

To implement the state-feedback control law for basic control design and analysis tasks, such as state-feedback stabilization, eigenvalue and eigenstructure assignment, the LQR problem, and state-feedback H-infinity control (see [12] for details), one needs to know the state variables explicitly. Thus the unmeasured state variables must be estimated. There are two closely related approaches for state estimation: state estimation via eigenvalue assignment and the Sylvester-Observer equation approach (see Chapter 12 of [12]). This paper deals with the numerical solution of the large-scale Sylvester-Observer equation, suitable for large and sparse problems.

The Sylvester-Observer equation is a variation of the classical Sylvester equation. It has the following form:

A X − X T = C G,    (1.2)

where the state matrix A and the output matrix C are known, and the matrices X ∈ ℝ^{n×q}, T ∈ ℝ^{q×q} and G ∈ ℝ^{q×q} are to be found. Note that the matrix T can be chosen to be asymptotically stable (that is, every eigenvalue of T has negative real part); in that case it can be shown that the error vector e(t) = z(t) − Xᵀ x(t) converges to zero as t increases, where x(t) is the solution of (1.1) and z(t) is the state of the associated Luenberger observer.

A conventional way to solve (1.2) is to choose the matrices T and G in a suitable manner. For example, T can be chosen as a real-Schur matrix and G can be chosen to be equal to the identity matrix I_q. In this case, the Hessenberg-Schur algorithm [15] is a natural choice for solving (1.2). Another widely used method for solving this equation is due to Van Dooren [27, 28]. It is based on the reduction of the observable pair (A, C) to an observer-Hessenberg pair (H, C̄); that is, an orthogonal matrix P is computed such that H = Pᵀ A P is a block upper Hessenberg matrix and C̄ = C P has the form C̄ = (0, …, 0, C₁).

Note that if the matrices A and T have disjoint spectra, then the Sylvester equation (1.2) has a unique solution X [12, 19]. Sylvester equations play an important role in control and communication theory, model reduction, image restoration and numerical methods for ordinary differential equations; see [4, 7, 12, 13, 20] and the references therein.

In the remainder of the paper, we choose G = I_q and suppose that the matrix C has the following form: C = (0_{n×r}, …, 0_{n×r}, C̃), where the n × r matrix C̃ is of full rank, q = m r and r ≪ n. So, letting Ē = (0_r, …, 0_r, I_r) ∈ ℝ^{r×mr}, the Sylvester-Observer equation (1.2) becomes

A X − X T = C̃ Ē,    (1.4)

where A ∈ ℝ^{n×n} and C̃ ∈ ℝ^{n×r} are given (and arbitrary), while T ∈ ℝ^{mr×mr} and X ∈ ℝ^{n×mr} are to be determined such that

• T is stable, i.e., all its eigenvalues have negative real parts;

• the spectrum of the matrix T is disjoint from that of A;

• the pair (Tᵀ, Gᵀ) is controllable, i.e., the matrix [Gᵀ, Tᵀ − λ I_q] is of full rank for every λ ∈ ℂ.

We consider the case when A is large and sparse, so that standard techniques, such as the Hessenberg-Schur method, for solving a Sylvester equation cannot be applied. Based on the Arnoldi process, a solution method suitable for large and sparse computation was proposed for the Sylvester-Observer equation (1.4) by Datta and Saad [9]. The Datta-Saad method is, however, restricted to the single-output case only; that is, when the right-hand side matrix C is of rank one, i.e., r = 1. The matrix A is used only in matrix-vector product evaluations, which makes the method well-suited for the solution of large and sparse Sylvester-Observer equations.

In this paper, we propose a particular generalization of the Datta-Saad method [9] to the multi-output case, with the right-hand side having the structure given by (1.4) and where the matrix T is obtained as a Kronecker product. This generalization is based on the global Arnoldi method proposed in [21]. Other methods for solving small to medium Sylvester-Observer equations (with special right-hand sides) were introduced in [1, 6, 10, 11]. The new method consists in choosing a starting n×r block vector appropriately, and then running m steps of the global Arnoldi process with this starting block vector. Like the Datta-Saad method, it requires the solution of a special type of eigenvalue assignment problem, which is solved with the simple recursive method of Datta proposed in [7]. The new algorithm simultaneously computes an mr×mr block upper Hessenberg matrix having a set of m eigenvalues, each with multiplicity r, and an F-orthonormal matrix X solving the Sylvester-Observer equation. When dealing with multiple eigenvalues, it would be interesting to apply the block Arnoldi algorithm, instead of the global Arnoldi process, for solving these Sylvester-Observer equations; this, however, requires new properties of that algorithm which remain to be established. The numerical experiments show that both the solution matrix X and the assigned eigenvalues are accurate up to computational precision, and that the matrix X has a low condition number. The accuracy in both cases was measured by the corresponding relative residual norms.

The remainder of the paper is organized as follows. We review, in Section 2, some properties of the ⊗-product and the ◊-product introduced in [3]. In Section 3, we show how to apply the global Arnoldi process for solving the multi-output Sylvester-Observer equation. Section 4 is devoted to some numerical experiments.

2 Background and notations

We use the following notations. For two matrices X and Y in ℝ^{n×r}, we consider the Frobenius inner product defined by ⟨X, Y⟩_F = tr(Xᵀ Y), where tr(Z) denotes the trace of a square matrix Z. The associated norm is the Frobenius norm, denoted by ‖·‖_F. The notation X ⊥_F Y means that ⟨X, Y⟩_F = 0.

The Kronecker product of two matrices A and B is defined by A ⊗ B := [a_{i,j} B] and satisfies the following properties:

(A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D),
(A ⊗ B)ᵀ = Aᵀ ⊗ Bᵀ,
(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹,

if A and B are invertible.
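These identities are easy to check numerically. The following sketch (ours, using NumPy; not part of the original paper) verifies them on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)

# Transpose and inverse distribute over the Kronecker product
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))
```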

In the following, we recall the ◊-product defined in [3] and list some properties that will be useful later.

Definition 2.1 [3]. Let A = [A₁, A₂, …, A_p] and B = [B₁, B₂, …, B_l] be matrices of dimension n×pr and n×lr, respectively, where the blocks A_i and B_j (i = 1, …, p; j = 1, …, l) are n×r matrices. Then the p×l matrix Aᵀ ◊ B is defined by Aᵀ ◊ B = [⟨A_i, B_j⟩_F]_{1 ≤ i ≤ p, 1 ≤ j ≤ l}.

We notice that

• if r = 1, then Aᵀ ◊ B = Aᵀ B;

• the matrix A = [A₁, A₂, …, A_p] is F-orthonormal if and only if Aᵀ ◊ A = I_p, i.e., ⟨A_i, A_j⟩_F = δ_{i,j} for i, j = 1, …, p;

• if p = l = 1, then Aᵀ ◊ B = ⟨A, B⟩_F (note that if A = B, then Aᵀ ◊ A = ‖A‖²_F);

and that the following properties hold for the ◊-product.

Proposition 2.2 [3]. Let A, B, C ∈ ℝ^{n×pr}, D ∈ ℝ^{n×n}, L ∈ ℝ^{p×p} and α ∈ ℝ. Then we have:

1. (A + B)ᵀ ◊ C = Aᵀ ◊ C + Bᵀ ◊ C.

2. Aᵀ ◊ (B + C) = Aᵀ ◊ B + Aᵀ ◊ C.

3. (α A)ᵀ ◊ C = α (Aᵀ ◊ C) = Aᵀ ◊ (α C).

4. (Aᵀ ◊ B)ᵀ = Bᵀ ◊ A.

5. (D A)ᵀ ◊ B = Aᵀ ◊ (Dᵀ B).

6. Aᵀ ◊ [B (L ⊗ I_r)] = (Aᵀ ◊ B) L.
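For concreteness, the ◊-product can be implemented in a few lines of NumPy. The sketch below is ours (not from [3]); it also checks property 6 numerically on random data:

```python
import numpy as np

def diamond(A, B, r):
    """Compute A^T ◊ B for A (n x p*r) and B (n x l*r), blockwise."""
    p = A.shape[1] // r
    l = B.shape[1] // r
    out = np.empty((p, l))
    for i in range(p):
        for j in range(l):
            Ai = A[:, i * r:(i + 1) * r]
            Bj = B[:, j * r:(j + 1) * r]
            out[i, j] = np.trace(Ai.T @ Bj)  # Frobenius inner product <A_i, B_j>_F
    return out

rng = np.random.default_rng(1)
n, p, l, r = 8, 3, 4, 2
A = rng.standard_normal((n, p * r))
B = rng.standard_normal((n, l * r))
L = rng.standard_normal((l, l))

# Property 6: A^T ◊ [B (L ⊗ I_r)] = (A^T ◊ B) L
lhs = diamond(A, B @ np.kron(L, np.eye(r)), r)
rhs = diamond(A, B, r) @ L
assert np.allclose(lhs, rhs)
```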

3 The global Arnoldi process for the Sylvester-Observer equation

The global Arnoldi process [21], and other global processes such as the global Lanczos [17] and the global Hessenberg process [16], were recently used in the context of iterative methods for large sparse matrix equations. Combined with a Galerkin orthogonality condition or with a minimizing norm condition, these algorithms were applied for large sparse linear systems with multiple right-hand sides and related problems [2, 17, 22, 23].

Before describing the global Arnoldi process, we first give some definitions and remarks on matrix Krylov subspace methods [17, 21].

Let A ∈ ℝ^{n×n}, V ∈ ℝ^{n×r} and let m be a fixed integer. The matrix Krylov subspace 𝒦_m(A, V) = span{V, A V, …, A^{m−1} V} is the subspace spanned by the block vectors V, A V, …, A^{m−1} V. Let 𝒱_m be the n × mr block matrix whose j-th block is A^{j−1} V (j = 1, …, m); then every Z ∈ 𝒦_m(A, V) can be written as Z = 𝒱_m (z ⊗ I_r) for some z ∈ ℝ^m.    (3.1)

3.1 The global Arnoldi process

Given an n ×n matrix A , an n × r starting block vector V and an integer m < n , the global Arnoldi process applied to the pair (A,V) is described as follows:

Algorithm 1. The modified global Arnoldi process.

• Inputs: A an n × n matrix, V an n × r matrix and m an integer.

• Step 0. V₁ = V/‖V‖_F;

• Step 1. For j = 1, …, m

W = A V_j;

for i = 1, …, j

h_{i,j} = ⟨V_i, W⟩_F;

W = W − h_{i,j} V_i;

endfor

h_{j+1,j} = ‖W‖_F;

V_{j+1} = W/h_{j+1,j};

end For.
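A direct translation of Algorithm 1 into NumPy (our sketch; the variable names mirror the algorithm, and the blocks V₁, …, V_{m+1} are stored side by side):

```python
import numpy as np

def global_arnoldi(A, V, m):
    """Modified global Arnoldi process: returns the blocks V_1..V_{m+1}
    (stacked in an n x (m+1)r matrix) and the (m+1) x m Hessenberg matrix."""
    n, r = V.shape
    Vs = np.zeros((n, (m + 1) * r))
    H = np.zeros((m + 1, m))
    Vs[:, :r] = V / np.linalg.norm(V, 'fro')          # Step 0: V_1 = V / ||V||_F
    for j in range(m):
        W = A @ Vs[:, j * r:(j + 1) * r]              # W = A V_j
        for i in range(j + 1):
            Vi = Vs[:, i * r:(i + 1) * r]
            H[i, j] = np.trace(Vi.T @ W)              # h_{i,j} = <V_i, W>_F
            W = W - H[i, j] * Vi
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        Vs[:, (j + 1) * r:(j + 2) * r] = W / H[j + 1, j]
    return Vs, H
```

The returned quantities satisfy the F-orthonormality and Arnoldi relations stated below.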

The above process computes a set of F-orthonormal block vectors V₁, V₂, …, V_m, V_{m+1}, i.e.,

⟨V_i, V_j⟩_F = δ_{i,j},  i, j = 1, …, m+1,

where 𝒱_j denotes the n × jr matrix 𝒱_j = [V₁, …, V_j] (j = 1, …, m+1), together with an (m+1) × m upper Hessenberg matrix H̄_m whose nonzero entries are the h_{i,j} defined in Algorithm 1. We also have the relations

A 𝒱_m = 𝒱_{m+1} (H̄_m ⊗ I_r),    (3.2)
A 𝒱_m = 𝒱_m (H_m ⊗ I_r) + h_{m+1,m} V_{m+1} E_mᵀ,    (3.3)

where

E_m = (e_m ⊗ I_r) = [0_r, …, 0_r, I_r]ᵀ, with e_m = (0, …, 0, 1)ᵀ ∈ ℝ^m,

and the matrix H_m is the m × m upper Hessenberg matrix obtained from H̄_m by removing its last row.

Notice that the global Arnoldi process breaks down at step j, i.e., V_{j+1} = 0, if and only if the degree of the minimal polynomial of V (with respect to A) is exactly j. Moreover, it is easy to establish the following result.

Proposition 3.1. Apart from a multiplicative scalar, the polynomial p_m such that V_{m+1} = p_m(A) V₁ is the characteristic polynomial of the Hessenberg matrix H_m. Moreover, this polynomial minimizes the norm ‖q(A) V₁‖_F over all monic polynomials q of degree m.

Proof. The proof is similar to the one given in [26] for the case r = 1 with the classical Arnoldi process.

3.2 Application of the global Arnoldi process to the solution of the Sylvester-Observer equation

We start by rewriting the equation (3.3) as

A 𝒱_m − 𝒱_m (H_m ⊗ I_r) = h_{m+1,m} V_{m+1} E_mᵀ,    (3.4)

and observe the resemblance between this equation and the equation (1.4). Hence, in order to solve the Sylvester-Observer equation (1.4), we first need to find a block vector V₁ such that the corresponding last block vector V_{m+1} is equal to C̃, apart from a multiplicative scalar, and then transform the upper Hessenberg matrix H_m into H̃_m such that Sp(H̃_m) = {µ₁, …, µ_m}, where µ₁, …, µ_m are some given scalars. In particular, these scalars can be chosen as numbers with negative real parts, if desired. We can then take T = H̃_m ⊗ I_r in (1.4), and observe that Sp(T) = Sp(H̃_m), where each eigenvalue of T is of multiplicity r.

To find the block vector V₁, we use the result of Proposition 3.1. In fact, since the matrix H_m given by the global Arnoldi process must be transformed by an eigenvalue assignment algorithm [8] into H̃_m having the pre-assigned spectrum {µ₁, …, µ_m}, we obtain V₁ = Y/‖Y‖_F, where Y is the solution of the block linear system

q_m(A) Y = C̃,    (3.5)

and

q_m(x) = (x − µ₁)(x − µ₂) ⋯ (x − µ_m)    (3.6)

is the characteristic polynomial of H̃_m. Note that the partial-fraction approach suggested in [9] can be used to solve (3.5). It consists in decomposing the above block system into the m independent linear systems

(A − µ_i I_n) Y_i = C̃,  i = 1, …, m.    (3.7)

The solution Y is then obtained as the following linear combination (see [9]):

Y = ∑_{i=1}^{m} λ_i Y_i,    (3.8)

where

λ_i = 1/q'_m(µ_i) = 1/∏_{j≠i} (µ_i − µ_j).

For more details about obtaining (3.7) and (3.8), we refer to [9].
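Under the assumption that the shifted systems (3.7) are solved exactly (here with a dense direct solver rather than the global GMRES used in the paper), the partial-fraction formulas (3.7)-(3.8) can be sketched as follows (our code):

```python
import numpy as np

def starting_block(A, C_tilde, mus):
    """Solve q(A) Y = C_tilde via the partial-fraction splitting
    Y = sum_i lambda_i Y_i, with (A - mu_i I) Y_i = C_tilde and
    lambda_i = 1 / q'(mu_i)."""
    n = A.shape[0]
    Y = np.zeros_like(C_tilde, dtype=complex)
    for i, mu in enumerate(mus):
        # lambda_i = 1 / prod_{j != i} (mu_i - mu_j)
        lam = 1.0 / np.prod([mu - mu_j for j, mu_j in enumerate(mus) if j != i])
        Yi = np.linalg.solve(A - mu * np.eye(n), C_tilde)
        Y += lam * Yi
    return Y.real if np.allclose(Y.imag, 0) else Y
```

The m solves are independent, which is what makes this splitting attractive in practice.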

In order to solve the m shifted linear systems (3.7), we can apply l steps of the global GMRES algorithm [21]. In this case, we construct an F-orthonormal matrix 𝒱_l = [V₁, …, V_l] whose blocks form an F-orthonormal basis of the matrix Krylov subspace 𝒦_l(A, C̃) = span{C̃, A C̃, …, A^{l−1} C̃}, and we then solve m independent small (l+1) × l least-squares problems with the matrices (H̄_l − µ_i Ī_l), where Ī_l is the l × l identity matrix appended with a zero row. The bulk of the work is in generating 𝒱_l, and this is done only once [9].

When the solution Y of the block linear system (3.5) is obtained, we apply m steps of the global Arnoldi process to the pair (A, Y) to get an F-orthonormal matrix 𝒱_m and an upper Hessenberg matrix H_m. We then modify the last column of H_m in such a way that the resulting Hessenberg matrix H̃_m has the desired set of eigenvalues {µ₁, …, µ_m}. To achieve this task, we use a variant of the pole-assignment method proposed in [8].

Let H_m = [h_{i,j}] be an m × m unreduced upper Hessenberg matrix, and define the following quantities:

s = q_m(H_m) e₁,    (3.9)
α = 1/(h_{2,1} h_{3,2} ⋯ h_{m,m−1}),    (3.10)

where q_m is defined by (3.6). If the parameters µ₁, …, µ_m form a set of distinct complex numbers such that Sp(H_m) ∩ {µ_j}_{j=1,…,m} = ∅, we can show, using Theorem 5.1 in [9], that

Sp(H_m − α s e_mᵀ) = {µ₁, …, µ_m}.    (3.11)

Moreover, if the set {µ_j}_{j=1,…,m} is invariant under complex conjugation, then the matrix H̃_m = H_m − α s e_mᵀ is real [5].
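This rank-one assignment step can be checked numerically. The sketch below is ours; s and α follow (3.9)-(3.10) as reconstructed above, and the subdiagonal of the test matrix is kept at 1 so that α is well scaled:

```python
import numpy as np

def assign_last_column(H, mus):
    """Modify the last column of an unreduced Hessenberg H so that the
    result has spectrum {mu_1, ..., mu_m}: H_tilde = H - (alpha * s) e_m^T."""
    m = H.shape[0]
    # s = q(H) e_1 with q(x) = prod_i (x - mu_i)   -- cf. (3.9)
    s = np.eye(m)[:, 0]
    for mu in mus:
        s = H @ s - mu * s
    # alpha = 1 / (h_{2,1} h_{3,2} ... h_{m,m-1})   -- cf. (3.10)
    alpha = 1.0 / np.prod(np.diag(H, -1))
    H_tilde = H.copy()
    H_tilde[:, -1] -= alpha * s
    return H_tilde
```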

Next, we give some results that will be used later.

Lemma 3.2. Suppose that 𝒱_m and H_m are obtained after applying m steps of the global Arnoldi process to the pair (A, V), and let p be a polynomial of degree less than m. Then

𝒱_mᵀ ◊ (p(A) V₁) = p(H_m) e₁.    (3.12)

Proof. By induction, it is clear that A^j V₁ = 𝒱_m ((H_m^j e₁) ⊗ I_r), for 0 ≤ j < m, where e₁ = (1, 0, …, 0)ᵀ ∈ ℝ^m. Hence, for any polynomial p of degree less than m, we have

p(A) V₁ = 𝒱_m ((p(H_m) e₁) ⊗ I_r),

and we get (3.12) by pre-multiplying, with the ◊-product, the previous equality on the left by 𝒱_mᵀ and by using relation 6 of Proposition 2.2.

Lemma 3.3 [5]. Let H_{m+1} = [h_{i,j}]_{i,j = 1,…,m+1} ∈ ℝ^{(m+1)×(m+1)} be an upper Hessenberg matrix and p a monic polynomial of degree m. Then

e_{m+1}ᵀ p(H_{m+1}) e₁ = ∏_{j=1}^{m} h_{j+1,j}.    (3.13)

Now, let {µ_j}_{j=1,…,m} be a set of distinct complex numbers such that Sp(A) ∩ {µ_j}_{j=1,…,m} = ∅, and let Y be the unique solution of the block linear system of equations (3.5). The following results show how to combine the global Arnoldi process with the eigenvalue assignment procedure in order to solve the Sylvester-Observer equation (1.4).

Proposition 3.4. Let 𝒱_{m+1} = [𝒱_m, V_{m+1}] and H_m be the F-orthonormal matrix and the upper Hessenberg matrix obtained after applying m steps of the global Arnoldi process to the pair (A, Y), respectively. Define

H̃_m = H_m − f e_mᵀ,  with f = β_m (𝒱_mᵀ ◊ C̃) and β_m = h_{m+1,m}/⟨V_{m+1}, C̃⟩_F.    (3.14)

Then

A 𝒱_m − 𝒱_m (H̃_m ⊗ I_r) = β_m C̃ E_mᵀ.    (3.15)

Moreover,

Sp(H̃_m) = {µ₁, …, µ_m}.    (3.16)

Proof. As the starting block vector Y satisfies q_m(A) Y = C̃, the block vector C̃ belongs to 𝒦_{m+1}(A, V₁). Now, since the block vectors V₁, …, V_{m+1} form a basis of 𝒦_{m+1}(A, V₁), it follows that there exist g ∈ ℝ^m and λ ∈ ℝ such that

C̃ = 𝒱_m (g ⊗ I_r) + λ V_{m+1}.    (3.17)

Pre-multiplying (3.17) on the left by 𝒱_mᵀ and V_{m+1}ᵀ, respectively, with the ◊-product, and using the fact that 𝒱_{m+1} = [𝒱_m, V_{m+1}] is an F-orthonormal matrix, we get

g = 𝒱_mᵀ ◊ C̃    (3.18)

and

λ = ⟨V_{m+1}, C̃⟩_F.    (3.19)

Using (3.17), the fact that f = β_m g and β_m λ = h_{m+1,m}, we obtain

β_m C̃ = 𝒱_m (β_m g ⊗ I_r) + β_m λ V_{m+1} = 𝒱_m (f ⊗ I_r) + h_{m+1,m} V_{m+1},

which gives

𝒱_m (f ⊗ I_r) = β_m C̃ − h_{m+1,m} V_{m+1}.

Finally, combining this last equality with (3.3), we get (3.15).

Now, since V₁ = Y/‖Y‖_F, using (3.5), property 3 of Proposition 2.2 and Lemma 3.2, we get

g = 𝒱_mᵀ ◊ C̃ = 𝒱_mᵀ ◊ (q_m(A) Y) = ‖Y‖_F (𝒱_mᵀ ◊ (q_m(A) V₁)) = ‖Y‖_F q_m(H_m) e₁.

Thus f = β_m ‖Y‖_F s, where s is defined by (3.9). Moreover, we have

λ = ⟨V_{m+1}, C̃⟩_F = ‖Y‖_F ⟨V_{m+1}, q_m(A) V₁⟩_F,

and by using (3.13) and Lemma 3.3, we get

λ = ‖Y‖_F ∏_{j=1}^{m} h_{j+1,j},

so that β_m = h_{m+1,m}/λ = 1/(‖Y‖_F ∏_{j=1}^{m−1} h_{j+1,j}), where the parameter α is defined by (3.10). Hence α = β_m ‖Y‖_F, and we obtain (3.16) by using (3.11).

Next, we give a new expression for the scaling factor β_m given in Proposition 3.4. Let D be the last block column of A 𝒱_m − 𝒱_m (H̃_m ⊗ I_r); from (3.15), we have D = β_m C̃, and so

β_m = ⟨C̃, D⟩_F/‖C̃‖²_F.    (3.20)

We note that the new expression (3.20) gives better numerical results than the one given in Proposition 3.4.

Finally, the solution X of the Sylvester-Observer equation (1.4) is given by

X = 𝒱_m/β_m,  together with T = H̃_m ⊗ I_r.

In the single-output case, i.e., r = 1, the solution obtained by the Datta-Saad method is orthonormal, while in the multiple-output case the obtained solution is F-orthonormal up to the scaling factor β_m.

Using the previous results, we summarize the global Arnoldi process for the multiple-output Sylvester-Observer equation as follows.

Algorithm 2. The global Arnoldi algorithm for the multiple-output Sylvester-Observer equation.

• Inputs: A an n × n matrix, C̃ an n × r matrix and m parameters µ₁, …, µ_m.

• Step 1. Solve the linear system q_m(A) Y = C̃, where q_m is given by (3.6); i.e.,

solve the m independent linear systems

(A − µ_i I_n) Y_i = C̃,  i = 1, …, m.

Get the solution Y as the linear combination Y = ∑_{i=1}^{m} λ_i Y_i,

where λ_i = 1/q'_m(µ_i) and q'_m(µ_i) = ∏_{j≠i} (µ_i − µ_j).

• Step 2. Apply m steps of the global Arnoldi process to the pair (A, Y) to generate

𝒱_{m+1} = [V₁, …, V_{m+1}] and H_m;

• Step 3. Change the last column of H_m to get H̃_m such that Sp(H̃_m) = {µ₁, …, µ_m}, i.e.,

compute f = α s, where α and s are given by (3.10) and (3.9);

set H̃_m = H_m − f e_mᵀ;

• Step 4. Compute D, the last block column of A 𝒱_m − 𝒱_m (H̃_m ⊗ I_r);

set β_m = ⟨C̃, D⟩_F/‖C̃‖²_F;

• Step 5. Set X = 𝒱_m/β_m and T = H̃_m ⊗ I_r.
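The whole pipeline of Algorithm 2 can be sketched end to end. The code below is our self-contained NumPy sketch: it uses dense shifted solves in Step 1 instead of the global GMRES of the paper, and its variable names mirror the algorithm:

```python
import numpy as np

def solve_sylvester_observer(A, C_tilde, mus):
    """Sketch of Algorithm 2 (dense solves in Step 1, real shifts assumed)."""
    n, r = C_tilde.shape
    m = len(mus)
    # Step 1: Y = sum_i Y_i / q'(mu_i), with (A - mu_i I) Y_i = C_tilde
    Y = np.zeros((n, r))
    for i, mu in enumerate(mus):
        lam = 1.0 / np.prod([mu - mu_j for j, mu_j in enumerate(mus) if j != i])
        Y += lam * np.linalg.solve(A - mu * np.eye(n), C_tilde)
    # Step 2: m steps of the global Arnoldi process on (A, Y)
    Vs = np.zeros((n, (m + 1) * r))
    H = np.zeros((m + 1, m))
    Vs[:, :r] = Y / np.linalg.norm(Y, 'fro')
    for j in range(m):
        W = A @ Vs[:, j * r:(j + 1) * r]
        for i in range(j + 1):
            Vi = Vs[:, i * r:(i + 1) * r]
            H[i, j] = np.trace(Vi.T @ W)
            W = W - H[i, j] * Vi
        H[j + 1, j] = np.linalg.norm(W, 'fro')
        Vs[:, (j + 1) * r:(j + 2) * r] = W / H[j + 1, j]
    Hm = H[:m, :]
    # Step 3: f = alpha * s with s = q(Hm) e_1, alpha = 1/(h_21 ... h_{m,m-1})
    s = np.eye(m)[:, 0]
    for mu in mus:
        s = Hm @ s - mu * s
    alpha = 1.0 / np.prod(np.diag(Hm, -1))
    Hm_tilde = Hm.copy()
    Hm_tilde[:, -1] -= alpha * s
    # Step 4: beta_m from the last block column of A Vm - Vm (Hm_tilde ⊗ I_r)
    Vm = Vs[:, :m * r]
    D = (A @ Vm - Vm @ np.kron(Hm_tilde, np.eye(r)))[:, (m - 1) * r:]
    beta = np.trace(C_tilde.T @ D) / np.linalg.norm(C_tilde, 'fro') ** 2
    # Step 5: solution pair (X, T)
    return Vm / beta, np.kron(Hm_tilde, np.eye(r))
```

On a small random instance, the returned pair satisfies A X − X T = (0, …, 0, C̃) to machine precision, and Sp(T) consists of the prescribed shifts, each with multiplicity r.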

4 Numerical experiments

The numerical tests were run using Matlab 7.1, on an Intel Pentium workstation, with machine precision equal to 2.22×10⁻¹⁶. In all our experiments, the n × r matrix C̃ is generated randomly using the Matlab function rand. The m linear systems in Step 1 of Algorithm 2 are solved using global GMRES, and a maximum of 50 iterations was allowed in the global Arnoldi process. The initial guess was (Y_i)₀ = 0_{n×r}, and the relative tolerance used when solving (3.7) was ε = 10⁻¹⁰.

Example 1. The matrix A = gearmat used in this first example is of size n = 10000; it is the tridiagonal Gear test matrix. For a detailed description of the Gear matrix, we refer the reader to [14] and to [18]. We choose µ_k = −4k, for k = 1, …, m. The obtained results for different values of r and m are given in Table 4.1.

The approximation X_m = 𝒱_m/β_m was computed with the parameter β_m as given by (3.20).

Example 2. In this second experiment, we consider the following matrix of size n = 2p with p = 4000:

A = [ 0_p  I_p ; L  D ],

where L and D are diagonal matrices of size p × p. Letting L = diag(l₁, …, l_p) and D = diag(d₁, …, d_p), the eigenvalues of A are given by the solutions of the quadratic equations

x² − d_k x − l_k = 0,  k = 1, …, p.

Hence, if d_k = 2 α_k and l_k = −(α_k² + β_k²), then Sp(A) = {λ_k, λ̄_k}_{k=1,…,p}, where λ_k = α_k + i β_k. In this example, we took r = 4, and the parameters α_k, β_k were random values uniformly distributed in [−1, 0] and [0, 1], respectively. In order to show the influence of the parameters {µ₁, …, µ_m}, we give the results for two choices. In the first choice (Table 4.2), we consider a set {µ₁, …, µ_m} invariant under complex conjugation and such that the real and imaginary parts of µ₁, …, µ_m are uniformly distributed in [−2, −1] and [0, 1], respectively. For the second choice (Table 4.3), the real and imaginary parts of µ₁, …, µ_m are uniformly distributed in [−4, −3] and [0, 1], respectively. To show that the formula (3.20) gives better numerical results than the one defined in Proposition 3.4, we set X_m = 𝒱_m/β_m, where β_m is given by (3.20), and X̃_m = 𝒱_m/β̃_m, where β̃_m is given in Proposition 3.4. The obtained results with different values of m are listed in Table 4.2 and Table 4.3.

In Table 4.3, we report the Frobenius norms of the residuals corresponding to the approximations X_m and X̃_m for different values of the parameter m, the total number of iterations (in parentheses) and the required cpu-time.

As can be seen from Table 4.3, the results obtained with X_m are more accurate than those given by the approximation X̃_m.

Example 3. The last example describes a model of heat flow with convection in the given domain. The matrix A is obtained from the centered finite-difference discretization of a convection-diffusion operator

on the unit square [0, 1] × [0, 1] with homogeneous Dirichlet boundary conditions. The dimension of the matrix A is n = n₀² = 4900, where n₀ = 70 is the number of inner grid points in each direction. As the eigenvalues of A are large, we scaled A by dividing it by ‖A‖₁. For this experiment, we used µ_i = −i, i = 1, …, m, and different values of m and r. In Table 4.4, we list the relative residual norms with the total number of iterations (in parentheses) and also the obtained cpu-time (in seconds).

5 Conclusion

A global Arnoldi method, suitable for large and sparse computations, is proposed for the solution of the multi-output Sylvester-Observer equation arising in state estimation for a linear time-invariant control system. The method can be considered a generalization of the Arnoldi method proposed earlier by Datta and Saad for the single-output case. It is developed by exploiting an interesting relationship between the initially chosen block vector and the block vector obtained after m steps of the global Arnoldi method. This relationship holds for the standard Arnoldi method but does not seem to hold, in general, for the standard block Arnoldi method. The method has the additional feature that the solution produced is F-orthonormal, and in the single-output case the obtained solution is the same as the one obtained by the standard Arnoldi method. A numerical stability analysis of the method, as was done in the single-output case by Calvetti, Reichel and co-authors, is in order.

REFERENCES

[1] C. Bischof, B.N. Datta and A. Purkyastha, A parallel algorithm for the Sylvester-Observer matrix equation. SIAM J. Sci. Comput., 17 (1996), 686-698.

[2] A. Bouhamidi and K. Jbilou, Sylvester Tikhonov-regularization methods in image restoration. J. Comput. Appl. Math., 206(1) (2007), 86-98.

[3] R. Bouyouli, K. Jbilou, R. Sadaka and H. Sadok, Convergence properties of some block Krylov subspace methods for multiple linear systems. J. Comput. Appl. Math., 196 (2006), 498-511.

[4] D. Calvetti and L. Reichel, Application of ADI iterative methods to the restoration of noisy images. SIAM J. Matrix Anal. Appl., 17 (1996), 165-186.

[5] D. Calvetti, B. Lewis and L. Reichel, On the solution of large Sylvester-Observer equations. Numer. Linear Algebra Appl., 8 (2001), 1-16.

[6] J. Carvalho and B.N. Datta, A block algorithm for the Sylvester-Observer equation arising in state estimation. Proc. IEEE Conf. Dec. Control, Orlando, Florida (2001).

[7] B.N. Datta and K. Datta, Theoretical and computational aspects of some linear algebra problems in control theory, in: C.I. Byrnes, A. Lindquist (Eds.), Computational and Combinatorial Methods in Systems Theory, Elsevier, Amsterdam, pp. 201-212 (1986).

[8] B.N. Datta, An algorithm to assign eigenvalues in a Hessenberg matrix: single-input case. IEEE Trans. Autom. Control, AC-32 (1987), 414-417.

[9] B.N. Datta and Y. Saad, Arnoldi methods for large Sylvester-like observer matrix equations, and an associated algorithm for partial spectrum assignment. Linear Algebra Appl., 154-156 (1991), 225-244.

[10] B.N. Datta and C. Hetti, Generalized Arnoldi methods for the Sylvester-Observer equation and the multi-input pole placement problem, in Proceedings of the 36th IEEE Conference on Decision and Control, IEEE, Piscataway, pp. 4379-4383 (1997).

[11] B.N. Datta and D. Sarkissian, Block algorithms for state estimation and functional observers, in Proceedings IEEE Conf. Control Appl. Comput. Aided Control Syst., pp. 19-23 (2000).

[12] B.N. Datta, Numerical Methods for Linear Control Systems Design and Analysis. Academic Press (2003).

[13] M. Epton, Methods for the solution of AXD−BXC = E and its applications in the numerical solution of implicit ordinary differential equations. BIT, 20 (1980), 341-345.

[14] C.W. Gear, A simple set of test matrices for eigenvalue programs. Math. Comp., 23 (1969), 119-125.

[15] G.H. Golub, S. Nash and C. Van Loan, A Hessenberg-Schur method for the problem AX+XB = C. IEEE Trans. Autom. Control, AC-24 (1979), 909-913.

[16] M. Heyouni, The global Hessenberg and global CMRH methods for linear systems with multiple right-hand sides. Numer. Algorithms, 26 (2001), 317-332.

[17] M. Heyouni and K. Jbilou, Matrix Krylov subspace methods for large scale model reduction problems. Appl. Math. Comput., 181(2) (2006), 1215-1228.

[18] N.J. Higham, The Matrix Computation Toolbox. http://www.ma.man.ac.uk/~higham/mctoolbox.

[19] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis. Cambridge Univ. Press, Cambridge (1991).

[20] C. Hyland and D. Bernstein, The optimal projection equations for fixed-order dynamic compensation. IEEE Trans. Autom. Control, AC-29 (1984), 1034-1037.

[21] K. Jbilou, A. Messaoudi and H. Sadok, Global FOM and GMRES algorithms for matrix equations. Appl. Numer. Math., 31 (1999), 49-63.

[22] K. Jbilou and A.J. Riquet, Projection methods for large Lyapunov matrix equations. Linear Algebra Appl., 415(2) (2006), 344-358.

[23] K. Jbilou, An Arnoldi based algorithm for large algebraic Riccati equations. Appl. Math. Lett., 19(5) (2006), 437-444.

[24] T. Kailath, Linear Systems. Prentice-Hall, Englewood Cliffs (1980).

[25] D.G. Luenberger, Observers for multivariable systems. IEEE Trans. Autom. Control, AC-11 (1966), 190-197.

[26] Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput., 7 (1986), 856-869.

[27] P. Van Dooren, The generalized eigenstructure problem in linear system theory. IEEE Trans. Autom. Control, AC-26 (1981), 111-129.

[28] P. Van Dooren, Reduced order observers: A new algorithm and proof. Syst. Control Lett., 4 (1984), 243-251.