Computational & Applied Mathematics

On-line version ISSN 1807-0302

Comput. Appl. Math. vol.28 no.3 São Carlos  2009

http://dx.doi.org/10.1590/S1807-03022009000300004 

An iterative method for solving a kind of constrained linear matrix equations system

 

 

Jing Cai^{I,II}; Guoliang Chen^{I}

^{I} Department of Mathematics, East China Normal University, Shanghai 200062, P.R. China. E-mail: caijing@hutc.zj.cn
^{II} School of Science, Huzhou Teachers College, Huzhou, Zhejiang 313000, P.R. China. E-mail: glchen@math.ecnu.cn

 

 


ABSTRACT

In this paper, an iterative method is constructed to solve the following constrained linear matrix equations system: [𝒜_1(X), 𝒜_2(X), ..., 𝒜_r(X)] = [E_1, E_2, ..., E_r], X ∈ I = {X | X = 𝒰(X)}, where 𝒜_i is a linear operator from C^{m×n} onto C^{p_i×q_i}, E_i ∈ C^{p_i×q_i}, i = 1, 2, ..., r, and 𝒰 is a linear self-conjugate involution operator. When the above constrained matrix equations system is consistent, for any initial matrix X_0 ∈ I, a solution can be obtained by the proposed iterative method in finitely many iteration steps in the absence of roundoff errors, and the least Frobenius norm solution can be derived when a special kind of initial matrix is chosen. Furthermore, the optimal approximation solution to a given matrix can be derived. Several numerical examples are given to show the efficiency of the presented iterative method.
Mathematical subject classification: 15A24, 65D99, 65F30.

Key words: iterative method, linear matrix equations system, linear operator, least Frobenius norm solution, optimal approximation.


 

 

1 Introduction

Throughout this paper, we need the following notations. C^{m×n} denotes the set of m × n complex matrices. For a matrix A ∈ C^{m×n}, we denote its transpose, conjugate transpose, trace, Frobenius norm and column space by A^T, A^H, tr(A), ║A║ and R(A), respectively. Let I_n and S_n denote the n × n unit matrix and reverse unit matrix, respectively. The symbol vec(·) stands for the vec operator, i.e., for A = (a_1, a_2, ..., a_n) ∈ C^{m×n}, where a_i (i = 1, 2, ..., n) denotes the ith column of A, vec(A) = (a_1^T, a_2^T, ..., a_n^T)^T. In the vector space C^{m×n}, we define the inner product as ⟨X, Y⟩ = tr(Y^H X) for all X, Y ∈ C^{m×n}. Two matrices X and Y are said to be orthogonal if ⟨X, Y⟩ = 0.
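These conventions can be illustrated concretely. In the short sketch below (NumPy, with random matrices chosen only for illustration), column-major reshaping plays the role of vec(·), and the trace inner product coincides with the ordinary inner product of the vectorized matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])      # a 3 x 2 matrix
vecA = A.reshape(-1, order='F')             # stack the columns: vec(A)
assert (vecA == np.array([1, 3, 5, 2, 4, 6])).all()

# <X, Y> = tr(Y^H X) agrees with the dot product of vec(X) and vec(Y)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
Y = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
ip = np.trace(Y.conj().T @ X)               # <X, Y>
assert np.isclose(ip, np.vdot(Y.reshape(-1, order='F'),
                              X.reshape(-1, order='F')))
```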

Let LC^{m×n,p×q} denote the set of linear operators from C^{m×n} onto C^{p×q}. In particular, when p = m and q = n, LC^{m×n,p×q} is denoted by LC^{m×n}. Let τ stand for the identity operator on C^{m×n}. For 𝒜 ∈ LC^{m×n,p×q}, its conjugate operator 𝒜* is the linear operator satisfying the following equality:

⟨𝒜(X), Y⟩ = ⟨X, 𝒜*(Y)⟩ for all X ∈ C^{m×n}, Y ∈ C^{p×q}.

For example, if 𝒜 is defined as 𝒜: X → AXB, then we have 𝒜*: Y → A^H Y B^H.
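This identity is easy to check numerically. The sketch below (NumPy; the sizes and random matrices are assumptions chosen only for illustration) verifies ⟨𝒜(X), Y⟩ = ⟨X, 𝒜*(Y)⟩ for 𝒜: X → AXB:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 3, 4, 5, 6
# A X B maps C^{m x n} to C^{p x q}, so A is p x m and B is n x q
A = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))
B = rng.standard_normal((n, q)) + 1j * rng.standard_normal((n, q))
X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Y = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))

def inner(U, V):
    # <U, V> = tr(V^H U)
    return np.trace(V.conj().T @ U)

lhs = inner(A @ X @ B, Y)                        # <A(X), Y>
rhs = inner(X, A.conj().T @ Y @ B.conj().T)      # <X, A*(Y)>
assert np.isclose(lhs, rhs)
```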

We note that linear matrix equations, such as the well-known Lyapunov matrix equation XA + A*X = H, the Sylvester equation AX ± XB = C, the Stein equation X ± AXB = D, AXB + CX^T D = E and [A_1XB_1, A_2XB_2] = [C_1, C_2], which have many applications in system theory (see [7-11, 15, 18] for instance), can all be rewritten as the following linear matrix equations system:

[𝒜_1(X), 𝒜_2(X), ..., 𝒜_r(X)] = [E_1, E_2, ..., E_r],     (1.1)

where 𝒜_i ∈ LC^{m×n,p_i×q_i} and E_i ∈ C^{p_i×q_i}, i = 1, 2, ..., r.

Recently, many papers have considered the matrix equations system (1.1) with various constraints on the solutions. For instance, Chu [4], Dai [5] and Chang and Wang [2] considered the symmetric solutions of the matrix equations AX = B, AXB = C and (A^T XA, B^T XB) = (C, D), respectively. Peng et al. [14] and Qiu et al. [16] discussed the reflexive and anti-reflexive solutions of the matrix equations A^H XB = C and AX = B, XC = D, respectively. Li et al. [10], Yuan and Dai [19] and Zhang et al. [20] considered the generalized reflexive and anti-reflexive solutions of the matrix equations (BX = C, XD = E), AXB = D and AX = B. In these works, the necessary and sufficient conditions for the consistency of the constrained matrix equations and the explicit expressions of the constrained solutions were derived by using the singular value decomposition (SVD), the generalized singular value decomposition (GSVD), the canonical correlation decomposition (CCD) or generalized inverses of matrices. On the other hand, Peng et al. [13] considered an iterative method for symmetric solutions of the system of matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2. In Huang [9], an iterative method was constructed to obtain the skew-symmetric solution of the matrix equation AXB = C. Dehghan and Hajarian [6] proposed an iterative algorithm to obtain the reflexive solution (namely, the generalized centro-symmetric solution) of the matrix equations AYB = E, CYD = F.

It should be remarked that the following common constraints on the solutions of matrix equations (see [3, 12, 14, 19, 20] for more details):

(1) Symmetric (skew-symmetric) constraint:

X = X^T (X = −X^T);

(2) Centrosymmetric (centroskew symmetric) constraint:

X = S_n X S_n (X = −S_n X S_n);

(3) Reflexive (anti-reflexive) constraint:

X = PXP (X = −PXP);

(4) Generalized reflexive (anti-reflexive) constraint:

X = PXQ (X = −PXQ);

(5) P orthogonal-symmetric (orthogonal-anti-symmetric) constraint:

(PX)^T = PX ((PX)^T = −PX),

where P^T = P = P^{−1} ≠ I_n and Q^T = Q = Q^{−1} ≠ I_n, are all special cases of the following constraint:

X = 𝒰(X),     (1.2)

where 𝒰 ∈ LC^{m×n} is a self-conjugate involution operator, i.e., 𝒰* = 𝒰 and 𝒰∘𝒰 = τ.
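As an illustration, constraint (2) fits this framework with 𝒰(X) = S_n X S_n. The short NumPy check below (random data assumed only for illustration) confirms that this 𝒰 is an involution, is self-conjugate with respect to the trace inner product, and that (1/2)(τ + 𝒰) maps any matrix into the constraint set:

```python
import numpy as np

n = 4
S = np.flipud(np.eye(n))          # reverse unit matrix S_n

def U(X):
    # centrosymmetric constraint operator U(X) = S_n X S_n
    return S @ X @ S

rng = np.random.default_rng(1)
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

assert np.allclose(U(U(X)), X)    # involution: U(U(X)) = X
# self-conjugate: <U(X), Y> = <X, U(Y)> in the trace inner product
assert np.isclose(np.trace(Y.T @ U(X)), np.trace(U(Y).T @ X))

P = 0.5 * (X + U(X))              # (1/2)(tau + U) applied to X
assert np.allclose(U(P), P)       # the result satisfies X = U(X)
```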

This motivates us to consider solving the more general matrix equations system (1.1) with the constraint (1.2), together with its associated optimal approximation problem (see the following two problems), by iterative methods that generalize the main results of [6, 9, 13].

Problem I. Let I denote the set of constrained matrices defined by (1.2). For given 𝒜_i ∈ LC^{m×n,p_i×q_i}, E_i ∈ C^{p_i×q_i}, i = 1, 2, ..., r, find X ∈ I such that

𝒜_i(X) = E_i, i = 1, 2, ..., r.

Problem II. Let S denote the solution set of Problem I. For a given X̄ ∈ C^{m×n}, find X̂ ∈ S such that

║X̂ − X̄║ = min_{X∈S} ║X − X̄║.

The rest of the paper is organized as follows. In Section 2, we propose an iterative algorithm to obtain a solution or the least Frobenius norm solution of Problem I and present some basic properties of the algorithm. In Section 3, we consider the iterative method for solving Problem II. Some numerical examples are given in Section 4 to show the efficiency of the proposed iterative method. Finally, we give some conclusions in Section 5.

 

2 Iterative method for solving Problem I

In this section, we first propose an iterative algorithm to solve Problem I, then present some basic properties of the algorithm. We also consider finding the least Frobenius norm solution of Problem I. In the sequel, the least norm solution always means the least Frobenius norm solution.

Algorithm 2.1.

Step 1. Input 𝒜_i ∈ LC^{m×n,p_i×q_i}, E_i ∈ C^{p_i×q_i}, i = 1, 2, ..., r, and an arbitrary X_0 ∈ I;

Step 2. Compute

R_0 = [E_1 − 𝒜_1(X_0), E_2 − 𝒜_2(X_0), ..., E_r − 𝒜_r(X_0)],
P_0 = (1/2)(τ + 𝒰)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_0))),
k := 0;

Step 3. If R_k = 0, or R_k ≠ 0 but P_k = 0, then stop; else, k := k + 1;

Step 4. Compute

X_k = X_{k−1} + (║R_{k−1}║² / ║P_{k−1}║²) P_{k−1},
R_k = [E_1 − 𝒜_1(X_k), E_2 − 𝒜_2(X_k), ..., E_r − 𝒜_r(X_k)],
P_k = (1/2)(τ + 𝒰)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_k))) + (║R_k║² / ║R_{k−1}║²) P_{k−1};

Step 5. Go to Step 3.
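The iteration above can be sketched numerically. The following minimal NumPy realization is an illustration, not the authors' code: the operators 𝒜_i, their conjugate operators and 𝒰 are passed as callables, and the test problem (symmetric solutions of AX = E with a random A) is an assumption chosen only to exercise the algorithm.

```python
import numpy as np

def solve_constrained(ops, adjs, U, Es, X0, tol=1e-10, max_iter=1000):
    """Sketch of the CG-type iteration of Algorithm 2.1.

    ops[i]  : callable X -> A_i(X); adjs[i]: its conjugate operator
    U       : self-conjugate involution; X0 must satisfy X0 = U(X0)
    """
    X = X0
    R = [E - op(X) for op, E in zip(ops, Es)]        # residual blocks
    r2 = sum(np.linalg.norm(Ri) ** 2 for Ri in R)
    G = sum(adj(Ri) for adj, Ri in zip(adjs, R))
    P = 0.5 * (G + U(G))                             # constrained search direction
    for _ in range(max_iter):
        if np.sqrt(r2) < tol or np.linalg.norm(P) < tol:
            break
        X = X + (r2 / np.linalg.norm(P) ** 2) * P
        R = [E - op(X) for op, E in zip(ops, Es)]
        r2_new = sum(np.linalg.norm(Ri) ** 2 for Ri in R)
        G = sum(adj(Ri) for adj, Ri in zip(adjs, R))
        P = 0.5 * (G + U(G)) + (r2_new / r2) * P
        r2 = r2_new
    return X

# Illustration: symmetric solution of A X = E; the operator X -> A X
# has conjugate operator Y -> A^T Y over real matrices.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
Xt = rng.standard_normal((n, n)); Xt = Xt + Xt.T     # make the system consistent
E = A @ Xt
X = solve_constrained([lambda M: A @ M], [lambda Y: A.T @ Y],
                      lambda M: M.T, [E], np.zeros((n, n)))
assert np.linalg.norm(A @ X - E) < 1e-6              # solves the system
assert np.allclose(X, X.T)                           # satisfies the constraint
```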

Some basic properties of Algorithm 2.1 are listed in the following lemmas.

Lemma 2.1. The sequences {X_i} and {P_i} generated by Algorithm 2.1 are contained in the constraint set I.

Proof. We prove the conclusion by induction. For i = 0, by 𝒰* = 𝒰, 𝒰∘𝒰 = τ and Algorithm 2.1, we have

𝒰(P_0) = (1/2)(𝒰 + 𝒰∘𝒰)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_0))) = (1/2)(𝒰 + τ)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_0))) = P_0,

i.e., P_0 ∈ I.

For i = 1, since X_0 ∈ I, by Algorithm 2.1, we have

𝒰(X_1) = 𝒰(X_0) + (║R_0║² / ║P_0║²) 𝒰(P_0) = X_0 + (║R_0║² / ║P_0║²) P_0 = X_1.

Moreover, we have

𝒰(P_1) = (1/2)(𝒰 + τ)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_1))) + (║R_1║² / ║R_0║²) 𝒰(P_0) = P_1,

i.e., P_1 ∈ I. Assume that the conclusion holds for i = s (s > 1), i.e., X_s, P_s ∈ I. Then

𝒰(X_{s+1}) = 𝒰(X_s) + (║R_s║² / ║P_s║²) 𝒰(P_s) = X_{s+1}

and

𝒰(P_{s+1}) = (1/2)(𝒰 + τ)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_{s+1}))) + (║R_{s+1}║² / ║R_s║²) 𝒰(P_s) = P_{s+1},

from which we get X_{s+1}, P_{s+1} ∈ I. By the principle of induction, we draw the conclusion.

Lemma 2.2. Assume that X* is a solution of Problem I. Then the sequences {X_i}, {R_i} and {P_i} generated by Algorithm 2.1 satisfy the following equality:

⟨P_i, X* − X_i⟩ = ║R_i║², i = 0, 1, 2, ....     (2.1)

Proof. We prove the conclusion by induction. For i = 0, it follows from X*, X_0 ∈ I that X* − X_0 ∈ I. Then by Algorithm 2.1, we have

⟨P_0, X* − X_0⟩ = ⟨(1/2)(τ + 𝒰)(Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_0))), X* − X_0⟩ = ⟨Σ_{i=1}^r 𝒜_i*(E_i − 𝒜_i(X_0)), X* − X_0⟩ = Σ_{i=1}^r ⟨E_i − 𝒜_i(X_0), 𝒜_i(X*) − 𝒜_i(X_0)⟩ = Σ_{i=1}^r ║E_i − 𝒜_i(X_0)║² = ║R_0║².

For i = 1, by Algorithm 2.1 and Lemma 2.1, we have

⟨P_1, X* − X_1⟩ = ║R_1║² + (║R_1║² / ║R_0║²)(⟨P_0, X* − X_0⟩ − (║R_0║² / ║P_0║²)║P_0║²) = ║R_1║².

Assume that the conclusion holds for i = s (s > 1), i.e., ⟨P_s, X* − X_s⟩ = ║R_s║². Then for i = s + 1, we have

⟨P_{s+1}, X* − X_{s+1}⟩ = ║R_{s+1}║² + (║R_{s+1}║² / ║R_s║²)(⟨P_s, X* − X_s⟩ − (║R_s║² / ║P_s║²)║P_s║²) = ║R_{s+1}║² + (║R_{s+1}║² / ║R_s║²)(║R_s║² − ║R_s║²) = ║R_{s+1}║².

This completes the proof by the principle of induction.

Remark 2.1. Lemma 2.2 implies that if Problem I is consistent, then R_i ≠ 0 implies P_i ≠ 0. Conversely, if there exists a positive integer k such that R_k ≠ 0 but P_k = 0, then Problem I must be inconsistent. Hence the solvability of Problem I can be determined by Algorithm 2.1 in the absence of roundoff errors.

Lemma 2.3. For the sequences {R_i} and {P_i} generated by Algorithm 2.1, it follows that

⟨R_{i+1}, R_j⟩ = ⟨R_i, R_j⟩ − (║R_i║² / ║P_i║²) ⟨P_i, P_j − (║R_j║² / ║R_{j−1}║²) P_{j−1}⟩,     (2.2)

where the term containing P_{j−1} is dropped when j = 0.

Proof. By Algorithm 2.1, Lemma 2.1 and 𝒰* = 𝒰, we have

⟨R_{i+1}, R_j⟩ = ⟨R_i − (║R_i║² / ║P_i║²)[𝒜_1(P_i), ..., 𝒜_r(P_i)], R_j⟩
= ⟨R_i, R_j⟩ − (║R_i║² / ║P_i║²) ⟨P_i, Σ_{l=1}^r 𝒜_l*(R_j^{(l)})⟩
= ⟨R_i, R_j⟩ − (║R_i║² / ║P_i║²) ⟨P_i, (1/2)(τ + 𝒰)(Σ_{l=1}^r 𝒜_l*(R_j^{(l)}))⟩
= ⟨R_i, R_j⟩ − (║R_i║² / ║P_i║²) ⟨P_i, P_j − (║R_j║² / ║R_{j−1}║²) P_{j−1}⟩.

This completes the proof.

Lemma 2.4. Assume that Problem I is consistent. If there exists a positive integer k such that R_i ≠ 0 for all i = 0, 1, 2, ..., k, then the sequences {R_i} and {P_i} generated by Algorithm 2.1 satisfy

⟨R_i, R_j⟩ = 0 and ⟨P_i, P_j⟩ = 0 (i, j = 0, 1, 2, ..., k, i ≠ j).     (2.3)

Proof. Note that ⟨A, B⟩ = \overline{⟨B, A⟩} holds for arbitrary matrices A and B. We only need to prove the conclusion for all 0 ≤ j < i ≤ k. For i = 1, by Lemma 2.3, we have

⟨R_1, R_0⟩ = ⟨R_0, R_0⟩ − (║R_0║² / ║P_0║²)⟨P_0, P_0⟩ = 0

and

⟨P_1, P_0⟩ = ⟨Σ_{l=1}^r 𝒜_l*(R_1^{(l)}), P_0⟩ + (║R_1║² / ║R_0║²)║P_0║² = (║P_0║² / ║R_0║²)⟨R_1, R_0 − R_1⟩ + (║R_1║² / ║R_0║²)║P_0║² = −(║P_0║² / ║R_0║²)║R_1║² + (║R_1║² / ║R_0║²)║P_0║² = 0.

Assume that ⟨R_s, R_j⟩ = 0 and ⟨P_s, P_j⟩ = 0 for all 0 ≤ j < s, 0 < s < k. We shall show that ⟨R_{s+1}, R_j⟩ = 0 and ⟨P_{s+1}, P_j⟩ = 0 hold for all 0 ≤ j < s + 1.

By the hypothesis and Lemma 2.3, for the case where 0 ≤ j < s, we have

⟨R_{s+1}, R_j⟩ = ⟨R_s, R_j⟩ − (║R_s║² / ║P_s║²)⟨P_s, P_j − (║R_j║² / ║R_{j−1}║²)P_{j−1}⟩ = 0

and

⟨P_{s+1}, P_j⟩ = ⟨Σ_{l=1}^r 𝒜_l*(R_{s+1}^{(l)}), P_j⟩ + (║R_{s+1}║² / ║R_s║²)⟨P_s, P_j⟩ = (║P_j║² / ║R_j║²)⟨R_{s+1}, R_j − R_{j+1}⟩ = 0.

For the case j = s, we have

⟨R_{s+1}, R_s⟩ = ⟨R_s, R_s⟩ − (║R_s║² / ║P_s║²)⟨P_s, P_s − (║R_s║² / ║R_{s−1}║²)P_{s−1}⟩ = ║R_s║² − ║R_s║² = 0

and

⟨P_{s+1}, P_s⟩ = ⟨Σ_{l=1}^r 𝒜_l*(R_{s+1}^{(l)}), P_s⟩ + (║R_{s+1}║² / ║R_s║²)║P_s║² = −(║P_s║² / ║R_s║²)║R_{s+1}║² + (║R_{s+1}║² / ║R_s║²)║P_s║² = 0.

Then by the principle of induction, we complete the proof.

Remark 2.2. Lemma 2.4 implies that when Problem I is consistent, the matrices P_0, P_1, P_2, ... are orthogonal to each other in the finite-dimensional matrix space I. Hence there exists a positive integer t ≤ mn such that P_t = 0. According to Lemma 2.2, we then have R_t = 0. Hence the iteration terminates in finitely many steps in the absence of roundoff errors.

Next we consider finding the least norm solution of Problem I. The following lemmas are needed for our derivation.

Lemma 2.5 [1]. Suppose that the consistent system of linear equations Ax = b has a solution x* ∈ R(A^H); then x* is the unique least norm solution of the system of linear equations.

Lemma 2.6 [8]. For 𝒜 ∈ LC^{m×n,p×q}, there exists a unique matrix M ∈ C^{pq×mn} such that vec(𝒜(X)) = M vec(X) for all X ∈ C^{m×n}.

According to Lemma 2.6 and the definition of conjugate operator, one can easily obtain the following corollary.

Corollary 2.1. Let 𝒜 and M be the same as those in Lemma 2.6, and let 𝒜* be the conjugate operator of 𝒜; then vec(𝒜*(Y)) = M^H vec(Y) for all Y ∈ C^{p×q}.

Theorem 2.1. Assume that Problem I is consistent. If we choose the initial matrix X_0 = (1/2)(τ + 𝒰)(Σ_{l=1}^r 𝒜_l*(H_l)), where H_l is an arbitrary matrix in C^{p_l×q_l}, l = 1, 2, ..., r, or more especially, X_0 = 0, then the solution X* obtained by Algorithm 2.1 is the least norm solution.

Proof. Consider the following matrix equations system:

𝒜_i(X) = E_i, 𝒜_i(𝒰(X)) = E_i, i = 1, 2, ..., r.     (2.4)

If Problem I has a solution X, then X must be a solution of the system (2.4). Conversely, if the system (2.4) has a solution X, let X̃ = (1/2)(X + 𝒰(X)); then it is easy to verify that X̃ ∈ I is a solution of Problem I. Therefore, the solvability of Problem I is equivalent to that of the system (2.4).

If we choose the initial matrix X_0 = (1/2)(τ + 𝒰)(Σ_{l=1}^r 𝒜_l*(H_l)), where H_l is an arbitrary matrix in C^{p_l×q_l}, then by Algorithm 2.1 and Remark 2.2, we can obtain a solution X* of Problem I within finitely many iteration steps, which can be represented as X* = (1/2)(τ + 𝒰)(Σ_{l=1}^r 𝒜_l*(Y_l)) for some matrices Y_l ∈ C^{p_l×q_l}. Since the solution set of Problem I is a subset of that of the system (2.4), we next prove that X* is the least norm solution of the system (2.4), which implies that X* is the least norm solution of Problem I.

Let M_l and N be the matrices such that vec(𝒜_l(X)) = M_l vec(X) and vec(𝒰(X)) = N vec(X) for all X ∈ C^{m×n}, l = 1, 2, ..., r. Then the system (2.4) is equivalent to the following system of linear equations:

[M_1; ...; M_r; M_1 N; ...; M_r N] vec(X) = [vec(E_1); ...; vec(E_r); vec(E_1); ...; vec(E_r)].     (2.5)

Note that, by Corollary 2.1 and the fact that 𝒰* = 𝒰 implies N^H = N,

vec(X*) = (1/2)(I + N^H) Σ_{l=1}^r M_l^H vec(Y_l) = (1/2) Σ_{l=1}^r (M_l^H vec(Y_l) + (M_l N)^H vec(Y_l)) ∈ R([M_1; ...; M_r; M_1 N; ...; M_r N]^H).

Then it follows from Lemma 2.5 that X* is the least norm solution of the system (2.4), which is also the least norm solution of Problem I.
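The vec/Kronecker reduction used in the proof can be illustrated for a single operator 𝒜: X → AXB, whose matrix is M = B^T ⊗ A. The sketch below (random data; NumPy's pseudoinverse standing in for the least norm solution) also checks that the minimum norm solution lies in R(M^H), as Lemma 2.5 requires:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))

vec = lambda M: M.reshape(-1, order='F')        # column-stacking vec
M = np.kron(B.T, A)                             # matrix of the operator X -> A X B
assert np.allclose(M @ vec(X), vec(A @ X @ B))  # vec(A X B) = (B^T kron A) vec(X)

# For a consistent right-hand side, the pseudoinverse solution of M x = vec(C)
# is the least norm solution; it can be written as x = M^H z, i.e. x in R(M^H).
C = A @ X @ B
x = np.linalg.pinv(M) @ vec(C)
z = np.linalg.pinv(M.conj().T) @ x
assert np.allclose(M.conj().T @ z, x)
```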

 

3 Iterative method for solving Problem II

In this section, we consider an iterative method for solving the matrix nearness Problem II. Let S denote the solution set of Problem I. For given X̄ ∈ C^{m×n} and arbitrary X ∈ S, we have

║X − X̄║² = ║X − (1/2)(X̄ + 𝒰(X̄)) − (1/2)(X̄ − 𝒰(X̄))║².     (3.1)

Note that X − (1/2)(X̄ + 𝒰(X̄)) ∈ I while 𝒰((1/2)(X̄ − 𝒰(X̄))) = −(1/2)(X̄ − 𝒰(X̄)), so that

⟨X − (1/2)(X̄ + 𝒰(X̄)), (1/2)(X̄ − 𝒰(X̄))⟩ = 0,

which implies that the two terms are orthogonal. Then (3.1) reduces to

║X − X̄║² = ║X − (1/2)(X̄ + 𝒰(X̄))║² + ║(1/2)(X̄ − 𝒰(X̄))║².

Hence min_{X∈S} ║X − X̄║ is equivalent to min_{X∈S} ║X − X̃║, where

X̃ = (1/2)(X̄ + 𝒰(X̄)) ∈ I.

Let Y = X − X̃ and Ẽ_i = E_i − 𝒜_i(X̃), i = 1, 2, ..., r. Then Y is a solution of the following constrained matrix equations system:

𝒜_i(Y) = Ẽ_i, i = 1, 2, ..., r, Y ∈ I.

Hence Problem II is equivalent to finding the least norm solution of the above constrained matrix equations system.

Based on the analysis above, we summarize that once the unique least norm solution Y* of the above constrained consistent matrix equations system is obtained by applying Algorithm 2.1 with the initial matrix chosen as in Theorem 2.1, the unique solution X̂ of Problem II can be obtained, where

X̂ = Y* + X̃.
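The orthogonal splitting underlying this reduction can be checked numerically. A small sketch for the symmetric case 𝒰(X) = X^T (random matrices assumed only for illustration):

```python
import numpy as np

# For any X with X = U(X) and Xtilde = (Xbar + U(Xbar))/2,
#   ||X - Xbar||^2 = ||X - Xtilde||^2 + ||Xtilde - Xbar||^2,
# so minimizing ||X - Xbar|| over the constraint set is the same as
# minimizing ||X - Xtilde||.
rng = np.random.default_rng(4)
n = 5
Xbar = rng.standard_normal((n, n))
X = rng.standard_normal((n, n)); X = 0.5 * (X + X.T)   # X in the constraint set

Xtilde = 0.5 * (Xbar + Xbar.T)                          # projection of Xbar
lhs = np.linalg.norm(X - Xbar) ** 2
rhs = np.linalg.norm(X - Xtilde) ** 2 + np.linalg.norm(Xtilde - Xbar) ** 2
assert np.isclose(lhs, rhs)
```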

 

4 Numerical examples

In this section, we give some numerical examples to illustrate the efficiency of Algorithm 2.1. All the tests are performed in MATLAB 7.6 with machine precision around 10^{−16}. Because of the influence of roundoff errors, we regard a matrix X as the zero matrix if ║X║ < 1.0e−010.

Example 4.1. Find the least norm symmetric solution of the following matrix equations system:

where

If we define 𝒜_1: X → A^T X + X^T A and 𝒜_2: X → BXB^T, then we have 𝒜_1*: Y → AY + AY^T and 𝒜_2*: Y → B^T Y B. Here we define 𝒰: X → X^T. Let X_0 = 0; then by using Algorithm 2.1 and iterating 16 steps, we obtain the least norm symmetric solution of the system (4.1) as follows:

with ║R_{16}║ = 1.4901e−018.

Example 4.2. Let S_E denote the solution set of the matrix equations system (4.1) with the symmetric constraint; for given

find X̂ ∈ S_E such that ║X̂ − X̄║ = min_{X∈S_E} ║X − X̄║.

In order to find the optimal approximation solution to the matrix X̄, let X̃ = (1/2)(X̄ + X̄^T), Ẽ_1 = E_1 − 𝒜_1(X̃) and Ẽ_2 = E_2 − 𝒜_2(X̃).

By applying Algorithm 2.1 to the new matrix equations system

𝒜_1(Y) = Ẽ_1, 𝒜_2(Y) = Ẽ_2

with the symmetric constraint, and choosing the initial matrix Y_0 = 0, we obtain its unique least norm symmetric solution Y* as follows:

with

Then we have the unique solution X̂ = Y* + X̃ of Problem II:

and

 

5 Conclusions

In this paper, an iterative algorithm is constructed to solve a kind of constrained matrix equations system, i.e., the matrix equations system (1.1) with the constraint (1.2), and its associated optimal approximation problem. By this algorithm, for any initial matrix X_0 satisfying the constraint (1.2), a solution X* can be obtained in finitely many iteration steps in the absence of roundoff errors, and the least norm solution can be obtained by choosing a special kind of initial matrix.

In addition, using this iterative method, the optimal approximation solution to a given matrix can be derived by first finding the least norm solution of a new corresponding constrained matrix equations system. The given numerical examples show that the proposed iterative algorithm is quite efficient.

Acknowledgement. Research supported by Shanghai Science and Technology Committee (No. 062112065) and Natural Science Foundation of Zhejiang Province (No. Y607136).

 

REFERENCES

[1] A. Ben-Israel and T.N.E. Greville, Generalized Inverses: Theory and Applications, Springer, New York (2003).

[2] X.W. Chang and J.S. Wang, The symmetric solution of the matrix equations AX + YA = C, AXA^T + BYB^T = C, and (A^T XA, B^T XB) = (C, D), Linear Algebra Appl., 179 (1993), 171-189.

[3] J.L. Chen and X.H. Chen, Special Matrices, Tsinghua University Press, Beijing (2001).

[4] K.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl., 119 (1989), 35-50.

[5] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl., 131 (1990), 1-7.

[6] M. Dehghan and M. Hajarian, An iterative algorithm for solving a pair of matrix equations AYB = E, CYD = F over generalized centro-symmetric matrices, Comput. Math. Appl., 56 (2008), 3246-3260.

[7] G.H. Golub, S. Nash and C. Van Loan, A Hessenberg-Schur method for the problem AX + XB = C, IEEE Trans. Auto. Control, 24 (1979), 909-913.

[8] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, New York (1991).

[9] G.X. Huang, F. Yin and K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C, J. Comput. Appl. Math., 212 (2008), 231-244.

[10] F.L. Li, X.Y. Hu and L. Zhang, The generalized anti-reflexive solutions for a class of matrix equations BX = C, XD = E, Comput. Appl. Math., 27 (2008), 31-46.

[11] S.K. Mitra, Common solutions to a pair of linear matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2, Proc. Cambridge Philos. Soc., 74 (1973), 213-216.

[12] X.Y. Peng, X.Y. Hu and L. Zhang, The orthogonal-symmetric or orthogonal-anti-symmetric least-square solutions of the matrix equation, Chinese J. Engi. Math., 23(6) (2006), 1048-1052.

[13] Y.X. Peng, X.Y. Hu and L. Zhang, An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2, Appl. Math. Comput., 183 (2006), 1127-1137.

[14] Y.X. Peng, X.Y. Hu and L. Zhang, The reflexive and anti-reflexive solutions of the matrix equation A^H XB = C, J. Comput. Appl. Math., 200 (2007), 749-760.

[15] F.X. Piao, Q.L. Zhang and Z.F. Wang, The solution to matrix equation AX + X^T C = B, J. Franklin Inst., 344 (2007), 1056-1062.

[16] Y.Y. Qiu, Z.Y. Zhang and J.F. Lu, The matrix equations AX = B, XC = D with PX = sXP constraint, Appl. Math. Comput., 189 (2007), 1428-1434.

[17] X.P. Sheng and G.L. Chen, A finite iterative method for solving a pair of linear matrix equations (AXB, CXD) = (E, F), Appl. Math. Comput., 189 (2007), 1350-1358.

[18] M.H. Wang, X.H. Cheng and M.S. Wei, Iterative algorithms for solving the matrix equation AXB + CX^T D = E, Appl. Math. Comput., 187 (2007), 622-629.

[19] Y.X. Yuan and H. Dai, Generalized reflexive solutions of the matrix equation AXB = D and associated optimal approximation problem, Comput. Math. Appl., 56 (2008), 1643-1649.

[20] J.C. Zhang, S.Z. Zhou and X.Y. Hu, The (P, Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX = B, Appl. Math. Comput., 209 (2009), 254-258.

 

 

Received: 13/III/09.
Accepted: 04/VI/09.

 

 
