
## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.28 no.3 São Carlos 2009

#### http://dx.doi.org/10.1590/S1807-03022009000300004

**An iterative method for solving a kind of constrained linear matrix equations system**

**Jing Cai^{I,II}; Guoliang Chen^{I}**

^{I}Department of Mathematics, East China Normal University, Shanghai 200062, P.R. China. E-mail: glchen@math.ecnu.cn

^{II}School of Science, Huzhou Teachers College, Huzhou, Zhejiang 313000, P.R. China. E-mail: caijing@hutc.zj.cn

**ABSTRACT**

In this paper, an iterative method is constructed to solve the following constrained linear matrix equations system: [*A*_{1}(*X*), *A*_{2}(*X*), ..., *A*_{r}(*X*)] = [*E*_{1}, *E*_{2}, ..., *E*_{r}], *X* ∈ *I* = {*X* | *X* = *U*(*X*)}, where each *A*_{i} is a linear operator from *C*^{m×n} onto *C*^{p_i×q_i}, *E*_{i} ∈ *C*^{p_i×q_i}, *i* = 1, 2, ..., *r*, and *U* is a linear self-conjugate involution operator. When the above constrained matrix equations system is consistent, for any initial matrix *X*_{0} ∈ *I*, a solution can be obtained by the proposed iterative method in finitely many iteration steps in the absence of roundoff errors, and the least Frobenius norm solution can be derived when a special kind of initial matrix is chosen. Furthermore, the optimal approximation solution to a given matrix can be derived. Several numerical examples are given to show the efficiency of the presented iterative method.

**Mathematical subject classification:** 15A24, 65D99, 65F30.

**Key words:** iterative method, linear matrix equations system, linear operator, least Frobenius norm solution, optimal approximation.

**1 Introduction**

Throughout this paper, we need the following notations. *C*^{m×n} denotes the set of *m* × *n* complex matrices. For a matrix *A* ∈ *C*^{m×n}, we denote its transpose, conjugate transpose, trace, Frobenius norm and column space by *A*^{T}, *A*^{H}, tr(*A*), ║*A*║ and *R*(*A*), respectively. Let *I*_{n} and *S*_{n} denote the *n* × *n* unit matrix and reverse unit matrix, respectively. The symbol vec(·) stands for the vec operator, i.e., for *A* = (*a*_{1}, *a*_{2}, ..., *a*_{n}) ∈ *C*^{m×n}, where *a*_{i} (*i* = 1, 2, ..., *n*) denotes the *i*th column of *A*, vec(*A*) = (*a*_{1}^{T}, *a*_{2}^{T}, ..., *a*_{n}^{T})^{T}. In the vector space *C*^{m×n}, we define the inner product as 〈*X*, *Y*〉 = tr(*Y*^{H}*X*) for all *X*, *Y* ∈ *C*^{m×n}. Two matrices *X* and *Y* are said to be orthogonal if 〈*X*, *Y*〉 = 0.
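For readers who want to check these definitions numerically, the following sketch (with small illustrative matrices, not taken from the paper) verifies that 〈*X*, *Y*〉 = tr(*Y*^{H}*X*) induces the Frobenius norm and is conjugate-symmetric:

```python
import numpy as np

def inner(X, Y):
    # <X, Y> = tr(Y^H X), the inner product defined above on C^{m x n}
    return np.trace(Y.conj().T @ X)

X = np.array([[1 + 1j, 0], [2, 1j]])
Y = np.array([[0, 1], [1j, 0]])

# The induced norm sqrt(<X, X>) is exactly the Frobenius norm of X.
assert np.isclose(np.sqrt(inner(X, X)).real, np.linalg.norm(X, 'fro'))
# <X, Y> = conj(<Y, X>): the inner product is conjugate-symmetric.
assert np.isclose(inner(X, Y), np.conj(inner(Y, X)))
```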

Let *LC*^{m×n,p×q} denote the set of linear operators from *C*^{m×n} onto *C*^{p×q}. Particularly, when *p* = *m* and *q* = *n*, *LC*^{m×n,p×q} is denoted by *LC*^{m×n}. Let τ stand for the identity operator on *C*^{m×n}. For 𝒜 ∈ *LC*^{m×n,p×q}, its conjugate operator 𝒜* is the linear operator satisfying the following equality:

〈𝒜(*X*), *Y*〉 = 〈*X*, 𝒜*(*Y*)〉 for all *X* ∈ *C*^{m×n}, *Y* ∈ *C*^{p×q}.

For example, if 𝒜 is defined as 𝒜: *X* → *AXB*, then we have 𝒜*: *Y* → *A*^{H}*YB*^{H}.
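The defining identity of the conjugate operator can be checked numerically. A sketch under assumed sizes (the matrices below are random placeholders, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
B = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
X = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
Y = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))

inner = lambda U_, V_: np.trace(V_.conj().T @ U_)  # <U, V> = tr(V^H U)

op = lambda X: A @ X @ B                           # the operator  X -> AXB
op_conj = lambda Y: A.conj().T @ Y @ B.conj().T    # its conjugate Y -> A^H Y B^H

# <op(X), Y> = <X, op*(Y)> characterizes the conjugate operator.
assert np.isclose(inner(op(X), Y), inner(X, op_conj(Y)))
```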

We note that the linear matrix equations, such as the well-known Lyapunov matrix equation *XA* + *A*^{H}*X* = *H*, the Sylvester equation *AX* ± *XB* = *C*, the Stein equation *X* ± *AXB* = *D*, *AXB* + *CX*^{T}*D* = *E* and [*A*_{1}*XB*_{1}, *A*_{2}*XB*_{2}] = [*C*_{1}, *C*_{2}], which have many applications in system theory (see [7-11, 15, 18] for instance), can all be rewritten as the following linear matrix equations system:

[𝒜_{1}(*X*), 𝒜_{2}(*X*), ..., 𝒜_{r}(*X*)] = [*E*_{1}, *E*_{2}, ..., *E*_{r}],     (1.1)

where 𝒜_{i} ∈ *LC*^{m×n,p_i×q_i} and *E*_{i} ∈ *C*^{p_i×q_i}, *i* = 1, 2, ..., *r*.

Recently, there have been many papers considering the matrix equations system (1.1) with various constraints on the solutions. For instance, Chu [4], Dai [5] and Chang and Wang [2] considered the symmetric solutions of the matrix equations *AX* = *B*, *AXB* = *C* and (*A*^{T}*XA*, *B*^{T}*XB*) = (*C*, *D*), respectively. Peng et al. [14] and Qiu et al. [16] discussed the reflexive and anti-reflexive solutions of the matrix equations *A*^{H}*XB* = *C* and *AX* = *B*, *XC* = *D*, respectively. Li et al. [10], Yuan and Dai [19] and Zhang et al. [20] considered the generalized reflexive and anti-reflexive solutions of the matrix equations (*BX* = *C*, *XD* = *E*), *AXB* = *D* and *AX* = *B*. In this literature, the necessary and sufficient conditions for the consistency of the constrained matrix equations and the explicit expressions of the constrained solutions were derived by using the singular value decomposition (SVD), generalized singular value decomposition (GSVD), canonical correlation decomposition (CCD) or the generalized inverses of matrices. On the other hand, Peng et al. [13] considered an iterative method for symmetric solutions of the system of matrix equations *A*_{1}*XB*_{1} = *C*_{1}, *A*_{2}*XB*_{2} = *C*_{2}. In Huang [9], an iterative method was constructed to obtain the skew-symmetric solution of the matrix equation *AXB* = *C*. Dehghan and Hajarian [6] proposed an iterative algorithm to obtain the reflexive solution (namely the generalized centro-symmetric solution) of the matrix equations *AYB* = *E*, *CYD* = *F*.

It should be remarked that the following common constraints on the solutions of matrix equations (see [3, 12, 14, 19, 20] for more details):

(1) symmetric (skew-symmetric) constraint: *X* = *X*^{T} (*X* = −*X*^{T});

(2) centrosymmetric (centroskew symmetric) constraint: *X* = *S*_{n}*XS*_{n} (*X* = −*S*_{n}*XS*_{n});

(3) reflexive (anti-reflexive) constraint: *X* = *PXP* (*X* = −*PXP*);

(4) generalized reflexive (anti-reflexive) constraint: *X* = *PXQ* (*X* = −*PXQ*);

(5) P-orthogonal-symmetric (orthogonal-anti-symmetric) constraint: (*PX*)^{T} = *PX* ((*PX*)^{T} = −*PX*), where *P*^{T} = *P* = *P*^{−1} ≠ *I*_{n} and *Q*^{T} = *Q* = *Q*^{−1} ≠ *I*_{n},

are all special cases of the following constraint:

*X* = 𝒰(*X*),     (1.2)

where 𝒰 ∈ *LC*^{m×n} is a self-conjugate involution operator, i.e., 𝒰* = 𝒰 and 𝒰^{2} = τ.

This motivates us to consider solving the more general matrix equations system (1.1) with the constraint (1.2), together with its associated optimal approximation problem (see the following two problems), by iterative methods, which generalizes the main results of [6, 9, 13].
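As a concrete instance of the constraint *X* = 𝒰(*X*), take the symmetric constraint (1) with 𝒰(*X*) = *X*^{T}. The following sketch (arbitrary example matrices) verifies that this 𝒰 is a self-conjugate involution and that (*X* + 𝒰(*X*))/2 always satisfies the constraint:

```python
import numpy as np

U = lambda X: X.T   # U(X) = X^T, so {X | X = U(X)} is the set of symmetric matrices

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

inner = lambda U_, V_: np.trace(V_.conj().T @ U_)  # <U, V> = tr(V^H U)

# involution: U(U(X)) = X
assert np.allclose(U(U(X)), X)
# self-conjugate: <U(X), Y> = <X, U(Y)>
assert np.isclose(inner(U(X), Y), inner(X, U(Y)))
# (X + U(X))/2 lies in the constraint set for every X
P = (X + U(X)) / 2
assert np.allclose(U(P), P)
```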

**Problem I.** Let ℐ denote the set of constrained matrices defined by (1.2). For given 𝒜_{i} ∈ *LC*^{m×n,p_i×q_i}, *E*_{i} ∈ *C*^{p_i×q_i}, *i* = 1, 2, ..., *r*, find *X* ∈ ℐ such that

[𝒜_{1}(*X*), 𝒜_{2}(*X*), ..., 𝒜_{r}(*X*)] = [*E*_{1}, *E*_{2}, ..., *E*_{r}].

**Problem II.** Let *S* denote the solution set of Problem I. For a given matrix X̄ ∈ *C*^{m×n}, find X̂ ∈ *S* such that ║X̂ − X̄║ = min_{X ∈ S} ║*X* − X̄║.

The rest of the paper is organized as follows. In Section 2, we propose an iterative algorithm to obtain a solution or the least Frobenius norm solution of Problem I and present some basic properties of the algorithm. In Section 3, we consider the iterative method for solving Problem II. Some numerical examples are given in Section 4 to show the efficiency of the proposed iterative method. Finally, we give some conclusions in Section 5.

**2 Iterative method for solving Problem I**

In this section, we first propose an iterative algorithm to solve Problem I, then present some basic properties of the algorithm. We also consider finding the least Frobenius norm solution of Problem I. In the sequel, the least norm solution always means the least Frobenius norm solution.

**Algorithm 2.1.**

Step 1. Input 𝒜_{i} ∈ *LC*^{m×n,p_i×q_i}, *E*_{i} ∈ *C*^{p_i×q_i}, *i* = 1, 2, ..., *r*, and an arbitrary *X*_{0} ∈ ℐ;

Step 2. Compute

Step 3. If the residual is zero, or it is nonzero but *P*_{k} = 0, then stop; else, set *k* := *k* + 1;

Step 4. Compute

Step 5. Go to Step 3.

Some basic properties of Algorithm 2.1 are listed in the following lemmas.

**Lemma 2.1.** *The sequences {X_{i}} and {P_{i}} generated by Algorithm* 2.1 *are contained in the constraint set* ℐ.

**Proof.** We prove the conclusion by induction. For *i* = 0, since 𝒰 is a self-conjugate involution, by Algorithm 2.1 we have

i.e., *P*_{0} ∈ ℐ.

For *i* = 1, since *X*_{0} ∈ ℐ, by Algorithm 2.1 we have

Moreover, we have

i.e., *P*_{1} ∈ ℐ. Assume that the conclusion holds for *i* = *s* (*s* ≥ 1), i.e., *X*_{s}, *P*_{s} ∈ ℐ. Then

and

from which we get *X*_{s+1}, *P*_{s+1} ∈ ℐ. By the principle of induction, we draw the conclusion.

**Lemma 2.2.** *Assume that X*^{*} *is a solution of Problem I. Then the sequences {X_{i}} and {P_{i}} generated by Algorithm* 2.1 *satisfy the following equality:*

**Proof.** We prove the conclusion by induction. For *i* = 0, it follows from *X*^{*}, *X*_{0} ∈ ℐ that *X*^{*} − *X*_{0} ∈ ℐ. Then by Algorithm 2.1, we have

For *i* = 1, by Algorithm 2.1 and Lemma 2.1, we have

Assume that the conclusion holds for *i* = *s* (*s* ≥ 1); then for *i* = *s* + 1, we have

This completes the proof by the principle of induction.

**Remark 2.1.** Lemma 2.2 implies that if Problem I is consistent, then a nonzero residual implies *P*_{i} ≠ 0. Conversely, if there exists a positive integer *k* such that the residual is nonzero but *P*_{k} = 0, then Problem I must be inconsistent. Hence the solvability of Problem I can be determined by Algorithm 2.1 in the absence of roundoff errors.

**Lemma 2.3.** *For the sequences generated by Algorithm* 2.1, *it follows that*

**Proof.** By Algorithm 2.1 and Lemma 2.1, we have

This completes the proof.

**Lemma 2.4.** *Assume that Problem I is consistent. If there exists a positive integer k such that the residuals are nonzero for all i =* 0, 1, 2, ..., *k, then the sequences generated by Algorithm* 2.1 *satisfy*

and 〈*P*_{i}, *P*_{j}〉 = 0, (*i*, *j* = 0, 1, 2, ..., *k*, *i* ≠ *j*). (2.3)

**Proof.** We only need to prove that the conclusion holds for all 0 ≤ *j* < *i* ≤ *k*. For *k* = 1, by Lemma 2.3, we have

and

Assume that the equalities hold for all 0 ≤ *j* < *s*, 0 < *s* ≤ *k*; we shall show that they hold for all 0 ≤ *j* < *s* + 1, 0 < *s* + 1 ≤ *k*.

By the hypothesis and Lemma 2.3, for the case where 0 ≤ *j* < *s*, we have

and

For the case *j* = *s*, we have

and

Then, by the principle of induction, we complete the proof.

**Remark 2.2.** Lemma 2.4 implies that when Problem I is consistent, the matrices *P*_{0}, *P*_{1}, *P*_{2}, ... are orthogonal to each other in the finite dimensional matrix space *C*^{m×n}. Hence there exists a positive integer *t* ≤ *mn* such that *P*_{t} = 0, and by Lemma 2.2 the corresponding iterate is a solution of Problem I. Hence the iteration terminates in finitely many steps in the absence of roundoff errors.

Next we consider finding the least norm solution of Problem I. The following lemmas are needed for our derivation.

**Lemma 2.5** [1]. *Suppose that the consistent system of linear equations Ax* = *b has a solution x*^{*} ∈ *R*(*A*^{H}); *then x*^{*} *is the unique least norm solution of the system of linear equations.*
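Lemma 2.5 can be illustrated with the Moore-Penrose pseudoinverse: for a consistent underdetermined system, *A*^{+}*b* lies in *R*(*A*^{H}) and is the least norm solution. An illustrative real-valued sketch (not taken from [1]):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 5))        # fat matrix: consistent, underdetermined
x_any = rng.standard_normal(5)         # some solution used to build b
b = A @ x_any

x_star = np.linalg.pinv(A) @ b         # x* = A^+ b

assert np.allclose(A @ x_star, b)      # x* solves the system
# x* lies in R(A^H) = R(A^T) in the real case: A^T y = x* is solvable
y, *_ = np.linalg.lstsq(A.T, x_star, rcond=None)
assert np.allclose(A.T @ y, x_star)
# minimal norm among solutions
assert np.linalg.norm(x_star) <= np.linalg.norm(x_any) + 1e-12
```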

**Lemma 2.6** [8]. *For* 𝒜 ∈ *LC*^{m×n,p×q}, *there exists a unique matrix M* ∈ *C*^{pq×mn} *such that* vec(𝒜(*X*)) = *M* vec(*X*) *for all X* ∈ *C*^{m×n}.

According to Lemma 2.6 and the definition of the conjugate operator, one can easily obtain the following corollary.

**Corollary 2.1.** *Let* 𝒜 *and M be the same as those in Lemma* 2.6, *and let* 𝒜* *be the conjugate operator of* 𝒜; *then* vec(𝒜*(*Y*)) = *M*^{H} vec(*Y*) *for all Y* ∈ *C*^{p×q}.

**Theorem 2.1.** *Assume that Problem I is consistent. If we choose a suitable initial matrix determined by an arbitrary matrix H in C*^{p_l×q_l}, *or, more especially, let X*_{0} = 0, *then the solution X*^{*} *obtained by Algorithm* 2.1 *is the least norm solution.*

**Proof.** Consider the following matrix equations system:

If Problem I has a solution *X*, then *X* must be a solution of the system (2.4). Conversely, if the system (2.4) has a solution *X*, it is easy to construct from *X* a solution of Problem I. Therefore, the solvability of Problem I is equivalent to that of the system (2.4).

If we choose the initial matrix as in the statement of the theorem, where *H* is an arbitrary matrix in *C*^{p_l×q_l}, then by Algorithm 2.1 and Remark 2.2 we can obtain a solution *X*^{*} of Problem I within finitely many iteration steps. Since the solution set of Problem I is a subset of that of the system (2.4), we shall prove that *X*^{*} is the least norm solution of the system (2.4), which implies that *X*^{*} is the least norm solution of Problem I.

Let *M*_{l} and *N* be the matrices such that vec(𝒜_{l}(*X*)) = *M*_{l} vec(*X*) and vec(𝒰(*X*)) = *N* vec(*X*) for all *X* ∈ *C*^{m×n}, *l* = 1, 2, ..., *r*. Then the system (2.4) is equivalent to the following system of linear equations:

Note that

Then it follows from Lemma 2.5 that *X*^{*} is the least norm solution of the system (2.4), which is also the least norm solution of Problem I.
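For the prototypical operator *X* → *AXB*, the matrix *M* of Lemma 2.6 is an explicit Kronecker product: with the column-stacking vec, vec(*AXB*) = (*B*^{T} ⊗ *A*) vec(*X*). A sketch with assumed sizes (real matrices for simplicity):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((4, 5))
X = rng.standard_normal((2, 4))
Y = rng.standard_normal((3, 5))

vec = lambda M_: M_.flatten('F')   # column-stacking vec operator

# M = B^T kron A represents X -> AXB:  vec(AXB) = M vec(X)
M = np.kron(B.T, A)
assert np.allclose(vec(A @ X @ B), M @ vec(X))
# Corollary 2.1 (real case): M^T represents the conjugate Y -> A^T Y B^T
assert np.allclose(vec(A.T @ Y @ B.T), M.T @ vec(Y))
```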

**3 Iterative method for solving Problem II**

In this section, we consider an iterative method for solving the matrix nearness Problem II. Let *S* denote the solution set of Problem I. For a given matrix X̄ ∈ *C*^{m×n} and arbitrary *X* ∈ *S*, we have

Note that

which implies that the two resulting terms are orthogonal. Then (3.1) reduces to

Hence the minimization problem is equivalent to a transformed least norm problem, where

With the transformed data defined as above, the transformed unknown is a solution of the following constrained matrix equations system:

Hence Problem II is equivalent to finding the least norm solution of the above constrained matrix equations system.

Based on the analysis above, we conclude that once the unique least norm solution of the above constrained consistent matrix equations system is obtained by applying Algorithm 2.1, the unique solution of Problem II can be obtained, where
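The reduction in this section can be sketched as follows (a reconstruction of the standard orthogonal-splitting argument, writing 𝒰 for the constraint operator of (1.2) and X̄ for the given matrix; the paper's displayed formulas did not survive extraction). For any *X* with *X* = 𝒰(*X*), since 𝒰 is a self-conjugate involution,

```latex
\Bigl\langle X-\tfrac{\bar X+\mathcal U(\bar X)}{2},\;
\tfrac{\bar X-\mathcal U(\bar X)}{2}\Bigr\rangle = 0,
\qquad\text{hence}\qquad
\|X-\bar X\|^{2}
=\Bigl\|X-\tfrac{\bar X+\mathcal U(\bar X)}{2}\Bigr\|^{2}
+\Bigl\|\tfrac{\bar X-\mathcal U(\bar X)}{2}\Bigr\|^{2}.
```

The second term does not depend on *X*, so minimizing ║*X* − X̄║ over *S* is equivalent to minimizing ║*X* − (X̄ + 𝒰(X̄))/2║, which, after shifting by a particular solution, can be recast as a least norm problem of the kind solved by Algorithm 2.1.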

**4 Numerical examples**

In this section, we give some numerical examples to illustrate the efficiency of Algorithm 2.1. All tests are performed in MATLAB 7.6 with machine precision around 10^{−16}. Because of the influence of roundoff errors, we regard a matrix *X* as the zero matrix if ║*X*║ < 1.0e−10.

**Example 4.1.** Find the least norm symmetric solution of the following matrix equations system:

where

If we define 𝒜_{1}: *X* → *A*^{T}*X* + *X*^{T}*A* and 𝒜_{2}: *X* → *BXB*^{T}, then we have 𝒜_{1}*: *Y* → *AY* + *AY*^{T} and 𝒜_{2}*: *Y* → *B*^{T}*YB*. Here we define 𝒰: *X* → *X*^{T}. Let *X*_{0} = 0; then, by using Algorithm 2.1 and iterating 16 steps, we obtain the least norm symmetric solution of the system (4.1) as follows:

with residual norm 1.4901*e* − 018.
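The matrices of system (4.1) did not survive extraction, so as a cross-check one can compute a least norm symmetric solution directly for small assumed matrices *A* and *B*, by vectorizing the two operator equations together with the symmetry constraint and applying the pseudoinverse. This verifies the quantity Algorithm 2.1 computes iteratively; it is not the algorithm itself, and the data below are placeholders:

```python
import numpy as np

n = 3
rng = np.random.default_rng(4)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = S + S.T   # symmetric matrix making the system consistent
E1 = A.T @ S + S.T @ A                         # A1(X) = A^T X + X^T A
E2 = B @ S @ B.T                               # A2(X) = B X B^T

vec = lambda M_: M_.flatten('F')
I = np.eye(n)
# commutation matrix K: K vec(X) = vec(X^T)
K = np.eye(n * n)[[j + i * n for j in range(n) for i in range(n)]]

M1 = np.kron(I, A.T) + np.kron(A.T, I) @ K     # vec(A^T X + X^T A) = M1 vec(X)
M2 = np.kron(B, B)                             # vec(B X B^T)       = M2 vec(X)
# stack the equations with the symmetry constraint (I - K) vec(X) = 0
Msys = np.vstack([M1, M2, np.eye(n * n) - K])
rhs = np.concatenate([vec(E1), vec(E2), np.zeros(n * n)])

x = np.linalg.pinv(Msys) @ rhs                 # least norm solution of the stacked system
Xmin = x.reshape((n, n), order='F')

assert np.allclose(Xmin, Xmin.T)               # symmetric
assert np.allclose(A.T @ Xmin + Xmin.T @ A, E1)
assert np.allclose(B @ Xmin @ B.T, E2)
```

Because the stacked system is consistent, the pseudoinverse solution is its unique least norm solution, matching the role of the special initial matrix in Theorem 2.1.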

**Example 4.2.** Let *S*_{E} denote the solution set of the matrix equations system (4.1) with the symmetric constraint. For a given matrix X̄,

find X̂ ∈ *S*_{E} such that ║X̂ − X̄║ = min_{X ∈ S_E} ║*X* − X̄║.

In order to find the optimal approximation solution to the matrix X̄, we form the transformed data as in Section 3.

By applying Algorithm 2.1 to the new matrix equations system:

with symmetric constraint, and choosing the initial matrix , we obtain its unique least norm symmetric solution as follows:

with

Then we have

and

**5 Conclusions**

In this paper, an iterative algorithm is constructed to solve a kind of constrained matrix equations system, i.e., the matrix equations system (1.1) with the constraint (1.2), and its associated optimal approximation problem. By this algorithm, for any initial matrix *X*_{0} satisfying the constraint (1.2), a solution *X*^{*} can be obtained in finitely many iteration steps in the absence of roundoff errors, and the least norm solution can be obtained by choosing a special kind of initial matrix.

In addition, using this iterative method, the optimal approximation solution to a given matrix can be derived by first finding the least norm solution of a new corresponding constrained matrix equations system. The given numerical examples show that the proposed iterative algorithm is quite efficient.

**Acknowledgement.** Research supported by Shanghai Science and Technology Committee (No. 062112065) and Natural Science Foundation of Zhejiang Province (No. Y607136).

**REFERENCES**

[1] A. Ben-Israel and T.N.E. Greville, *Generalized Inverses: Theory and Applications,* Springer, New York (2003).

[2] X.W. Chang and J.S. Wang, *The symmetric solution of the matrix equations AX + YA = C, AXA^{T} + BYB^{T} = C, and (A^{T}XA, B^{T}XB) = (C, D),* Linear Algebra Appl., **179** (1993), 171-189.

[3] J.L. Chen and X.H. Chen, *Special Matrices,* Tsinghua University Press, Beijing (2001).

[4] K.E. Chu, *Symmetric solutions of linear matrix equations by matrix decompositions,* Linear Algebra Appl., **119** (1989), 35-50.

[5] H. Dai, *On the symmetric solutions of linear matrix equations,* Linear Algebra Appl., **131** (1990), 1-7.

[6] M. Dehghan and M. Hajarian, *An iterative algorithm for solving a pair of matrix equations AYB = E, CYD = F over generalized centro-symmetric matrices,* Comput. Math. Appl., **56** (2008), 3246-3260.

[7] G.H. Golub, S. Nash and C. Van Loan, *A Hessenberg-Schur method for the problem AX + XB = C,* IEEE Trans. Autom. Control, **24** (1979), 909-913.

[8] R.A. Horn and C.R. Johnson, *Topics in Matrix Analysis,* Cambridge University Press, New York (1991).

[9] G.X. Huang, F. Yin and K. Guo, *An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C,* J. Comput. Appl. Math., **212** (2008), 231-244.

[10] F.L. Li, X.Y. Hu and L. Zhang, *The generalized anti-reflexive solutions for a class of matrix equations BX = C, XD = E,* Comput. Appl. Math., **27** (2008), 31-46.

[11] S.K. Mitra, *Common solutions to a pair of linear matrix equations A_{1}XB_{1} = C_{1}, A_{2}XB_{2} = C_{2},* Proc. Cambridge Philos. Soc., **74** (1973), 213-216.

[12] X.Y. Peng, X.Y. Hu and L. Zhang, *The orthogonal-symmetric or orthogonal-anti-symmetric least-squares solutions of the matrix equation,* Chinese J. Eng. Math., **23**(6) (2006), 1048-1052.

[13] Y.X. Peng, X.Y. Hu and L. Zhang, *An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A_{1}XB_{1} = C_{1}, A_{2}XB_{2} = C_{2},* Appl. Math. Comput., **183** (2006), 1127-1137.

[14] Y.X. Peng, X.Y. Hu and L. Zhang, *The reflexive and anti-reflexive solutions of the matrix equation A^{H}XB = C,* J. Comput. Appl. Math., **200** (2007), 749-760.

[15] F.X. Piao, Q.L. Zhang and Z.F. Wang, *The solution to matrix equation AX + X^{T}C = B,* J. Franklin Inst., **344** (2007), 1056-1062.

[16] Y.Y. Qiu, Z.Y. Zhang and J.F. Lu, *The matrix equations AX = B, XC = D with PX = sXP constraint,* Appl. Math. Comput., **189** (2007), 1428-1434.

[17] X.P. Sheng and G.L. Chen, *A finite iterative method for solving a pair of linear matrix equations (AXB, CXD) = (E, F),* Appl. Math. Comput., **189** (2007), 1350-1358.

[18] M.H. Wang, X.H. Cheng and M.S. Wei, *Iterative algorithms for solving the matrix equation AXB + CX^{T}D = E,* Appl. Math. Comput., **187** (2007), 622-629.

[19] Y.X. Yuan and H. Dai, *Generalized reflexive solutions of the matrix equation AXB = D and associated optimal approximation problem,* Comput. Math. Appl., **56** (2008), 1643-1649.

[20] J.C. Zhang, S.Z. Zhou and X.Y. Hu, *The (P, Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX = B,* Appl. Math. Comput., **209** (2009), 254-258.

Received: 13/III/09.

Accepted: 04/VI/09.

#CAM-77/09.