
Joint Approximate Diagonalization of Symmetric Real Matrices of Order 2

ABSTRACT

The problem of the joint approximate diagonalization of symmetric real matrices is addressed. It is reduced to an optimization problem under the restriction that the matrix of the similarity transformation be orthogonal. Analytical solutions are derived for the case of matrices of order 2. The concepts of off-diagonalizing vectors, matrix amplitude, which is given in terms of the eigenvalues, and partially complementary matrices are introduced. This leads to a geometrical interpretation of the joint approximate diagonalization in terms of the eigenvectors and off-diagonalizing vectors of the matrices, which should be helpful in numerical and computational procedures involving higher-order matrices.

Keywords:
joint approximate diagonalization; eigenvectors; optimization

RESUMO

This work addresses the problem of the joint approximate diagonalization of a collection of symmetric real matrices. The optimization is performed under the restriction that the similarity transformation matrix be orthogonal. The solutions are presented in analytical form for matrices of order 2. The concepts of off-diagonalizing vector, matrix amplitude, which is expressed in terms of the eigenvalues, and partially complementary matrices are introduced. This allows a geometrical interpretation of the joint approximate diagonalization in terms of the eigenvectors and the off-diagonalizing vectors of the matrices. This contribution should help improve numerical and computational procedures involving matrices of order greater than 2.

Palavras-chave:
joint approximate diagonalization; eigenvectors; optimization

1 INTRODUCTION

Linear Algebra has many applications in science and engineering [2, 6, 7]. In particular, the calculation of eigenvalues and eigenvectors of a linear operator allows one to find the principal directions of a rotating body, the normal modes of an oscillating mechanical and/or electrical system, and the stationary states of a quantum system. Such a calculation leads to a similarity transformation that produces a diagonal representation of the linear operator; the process is therefore called diagonalization.

There are cases where several linear operators are relevant in the analysis of the system under investigation. When the operators commute, they may be diagonalized by the same similarity transformation. This problem has been numerically addressed by Bunse-Gerstner et al. [5]; their algorithm is an extension of the Jacobi technique that generates a sequence of similarity transformations that are plane rotations. Moreover, De Lathauwer [11] established a link between the canonical decomposition of higher-order tensors and simultaneous matrix diagonalization.

In the case of noncommuting operators, researchers try to find a compromise solution that nearly diagonalizes the matrices representing the operators. Several methods for joint approximate diagonalization have been proposed in the literature. They differ in how the optimization problem is formulated and solved, and in the conditions for both the diagonalizing matrix and the set of matrices representing the operators. For instance, one may look for the minimum of the sum of the squared absolute values of the off-diagonal terms of all the transformed matrices.

Cardoso and Souloumiac [8] approached the simultaneous diagonalization problem by iterating plane rotations. They complemented the method of Bunse-Gerstner et al. [5] by giving a closed-form expression for the optimal Jacobi angles. Pham [18] provided an iterative algorithm to jointly and approximately diagonalize a set of Hermitian positive definite matrices, minimizing an objective function involving the determinants of the transformed matrices. Vollgraf and Obermayer [20] used a quadratic diagonalization algorithm, in which the global optimization problem is divided into a sequence of second-order problems. Joho [14] considered the joint diagonalization problem for positive definite Hermitian matrices and proposed an algorithm based on the Newton method, allowing the diagonalizing matrix to be complex, nonunitary, and even rectangular; one contribution of that work is the derivation of the Hessian in closed form for every diagonalizing matrix, and not only at the critical points. Tichavský and Yeredor [19] proposed a low-complexity approximate joint diagonalization algorithm that incorporates nontrivial block-diagonal weight matrices into a weighted least-squares criterion. Glashoff and Bronstein [12] analyzed the properties of the commutator of two Hermitian matrices and established a relation to the joint approximate diagonalization of the matrices. Congedo et al. [10] explored the connection between the estimation of the geometric mean of a set of symmetric positive definite matrices and their approximate joint diagonalization.

An important application of the joint approximate diagonalization problem is Blind Source Separation (BSS), treated by Belouchrani et al. [3], Albera et al. [1], Yeredor [21], McNeill and Zimmerman [16], Chabriel et al. [9], and Boudjellal et al. [4]. Moreover, in solid-state physics, the search for maximally localized Wannier functions may be reduced to a joint diagonalization problem [13]; in that case, one has to deal with three matrices of infinite order.

In the present work, analytical solutions for the problem of joint approximate diagonalization are given for a set of symmetric real matrices of order 2. This leads to a new and deeper geometrical interpretation of the diagonalization process that should improve numerical and computational procedures required to deal with larger matrices. Several pairs of matrices are investigated in order to clarify the role played by the amplitudes and the main directions of each operator. In this respect, the introduction of the concepts of off-diagonalizing vectors, matrix amplitude and partially complementary matrices proves to be very helpful.

The structure of the manuscript is as follows: § 2 discusses the main concepts and procedures for the case of a single matrix, § 3 sets up the optimization problem for several symmetric real matrices, § 4 presents an analytical solution for the particular case of several matrices of order 2, and § 5 focuses on a pair of 2×2 matrices and discusses the geometrical aspects of the procedure.

The main findings of the work are summarized in § 6.

2 A SINGLE MATRIX OF ORDER N: EIGENVECTORS AND OFF-DIAGONALIZING VECTORS

The diagonalization of a real symmetric square matrix 𝕄 can be viewed as an optimization problem: one should find a nonsingular real square matrix 𝕌 of the same order such that the product 𝕄' = 𝕌−1 𝕄𝕌 is a diagonal matrix [15, 17]. Let us define a function, denoted "off", that gives the sum of the squared off-diagonal entries of a square matrix; 𝕄' is diagonal when off(𝕄') = 0. Since every real symmetric square matrix 𝕄 can be diagonalized, the function f(𝕄, 𝕌) = off(𝕌−1 𝕄𝕌) has a global minimum, and its value is zero. The diagonalization of 𝕄 is then reduced to finding the minimizing matrix 𝕌.

The columns of the diagonalizing matrix 𝕌 are eigenvectors of the matrix 𝕄, and to each of those vectors corresponds an eigenvalue in the main diagonal of 𝕄' [15]. Moreover, eigenvectors corresponding to different eigenvalues are orthogonal, and in the case of a degenerate eigenvalue of multiplicity d a set of d orthogonal eigenvectors may be chosen. Therefore, one may look for a minimizing matrix 𝕌 having orthogonal columns. Additionally, the columns may be normalized while remaining eigenvectors of 𝕄. In this way, the study may be restricted to matrices 𝕌, with transpose 𝕌ᵀ, such that

$$\mathbb{U}^{T}\mathbb{U} = \mathbb{U}\,\mathbb{U}^{T} = \mathbb{I} \tag{2.1}$$

This means 𝕌 is an orthogonal matrix. Therefore, the search for the minimum may be restricted to the set O of orthogonal matrices of order n.

Taking (2.1) into account, the objective function may be written as

$$f(\mathbb{M},\mathbb{U}) = \mathrm{off}\!\left(\mathbb{U}^{T}\mathbb{M}\,\mathbb{U}\right) \tag{2.2}$$

As f is a function of the n² entries of 𝕌, it is a polynomial of fourth degree in those entries. Since O is a compact subset of ℝⁿˣⁿ, the existence of both the minimum and the maximum values of the continuous function f(𝕄, 𝕌) is guaranteed. Any column of a maximizing matrix 𝕌 will be called an off-diagonalizing vector of 𝕄. One may also say that such a matrix 𝕌 off-diagonalizes 𝕄.
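Both extremes are easy to explore numerically. The following minimal Python sketch (ours, not part of the original development; the matrix M is an arbitrary illustration) implements the off function and scans f(𝕄, 𝕌(θ)) over plane rotations:

```python
# A minimal numerical sketch: the "off" function and a brute-force scan over
# rotation angles for a single symmetric matrix of order 2.
import numpy as np

def off(M):
    """Sum of the squared off-diagonal entries of a square matrix."""
    return np.sum(M**2) - np.sum(np.diag(M)**2)

def rotation(theta):
    """Orthogonal matrix U(theta) of Eq. (2.5)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

M = np.array([[3.0, 0.5], [0.5, 1.0]])        # arbitrary symmetric example
thetas = np.linspace(0.0, np.pi, 2001)
f = np.array([off(rotation(t).T @ M @ rotation(t)) for t in thetas])

print("f_min ~", f.min())   # ~0: the minimizer's columns are eigenvectors
print("f_max ~", f.max())   # the maximizer's columns: off-diagonalizing vectors
```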

To understand this, one may consider a real symmetric matrix of order 2, given by

$$\mathbb{M} = \begin{pmatrix} a & b/2 \\ b/2 & c \end{pmatrix} \tag{2.3}$$

Since 𝕌 is an orthogonal matrix, it may be written in the form

$$\mathbb{U} = \begin{pmatrix} \cos\theta & \cos\theta' \\ \sin\theta & \sin\theta' \end{pmatrix} \tag{2.4}$$

where θ and θ' give the directions of the vectors in the first and second columns of 𝕌. Taking into account the fact that such vectors are orthogonal, one may take θ' = θ + π/2. As a result, the transformation matrix has the form (see Ref. [2])

$$\mathbb{U}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \tag{2.5}$$

and the objective function becomes

$$f(\mathbb{M},\mathbb{U}(\theta)) = \frac{1}{2}\left[\,b\cos(2\theta) - (a-c)\sin(2\theta)\,\right]^{2} \tag{2.6}$$

When a = c and b = 0, the objective function vanishes everywhere, because the matrix is then a scalar multiple of the identity matrix and commutes with every square matrix. Instead, when a ≠ c or b ≠ 0, the objective function oscillates between zero and its maximum value. The values of θ leading to such extreme values could be obtained from the derivative of f(𝕄, 𝕌) with respect to θ; however, the simplicity of this function allows the optimization to be performed algebraically. The vector (a − c, b) is the product of its norm, √((a − c)² + b²), with the unit vector (cos(2ϕ), sin(2ϕ)), where ϕ is a real number fulfilling

$$\cos(2\phi) = \frac{a-c}{\sqrt{(a-c)^{2}+b^{2}}}, \qquad \sin(2\phi) = \frac{b}{\sqrt{(a-c)^{2}+b^{2}}} \tag{2.7}$$

Therefore, the objective function may be written as

$$f(\mathbb{M},\mathbb{U}(\theta)) = C\left\{1 - \cos[4(\theta-\phi)]\right\} \tag{2.8}$$

where

$$C = \frac{(a-c)^{2}+b^{2}}{4} \tag{2.9}$$

The objective function oscillates harmonically with both the mean value and the amplitude given by C. It should be noted that C vanishes when a = c and b = 0.

When C ≠ 0, the eigenvectors (off-diagonalizing vectors) of 𝕄 lie along the directions given by

$$\theta = \phi + q\,\frac{\pi}{4} \tag{2.10}$$

where q is an even (odd) integer. The corresponding optimal values of the objective function are fmin = 0 and fmax = 2C. Moreover, each off-diagonalizing vector bisects the angle between two orthogonal eigenvectors, and conversely.
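These closed-form directions can be checked numerically. The sketch below (ours; the entries a, b, c are an arbitrary illustration) evaluates Eqs. (2.6), (2.7) and (2.9) and compares the direction ϕ with an eigenvector returned by a standard library routine:

```python
# Hedged check of Eqs. (2.7)-(2.10) under the conventions of this section.
import numpy as np

a, b, c = 3.0, 1.0, 1.0                       # M = [[a, b/2], [b/2, c]]
M = np.array([[a, b / 2], [b / 2, c]])

phi = 0.5 * np.arctan2(b, a - c)              # Eq. (2.7)
C = ((a - c)**2 + b**2) / 4                   # Eq. (2.9)

def f(theta):                                 # Eq. (2.6)
    return 0.5 * (b * np.cos(2 * theta) - (a - c) * np.sin(2 * theta))**2

print(f(phi), f(phi + np.pi / 2))             # ~0: eigenvector directions (q even)
print(f(phi + np.pi / 4), 2 * C)              # maximum value 2C (q odd)

# The direction phi carries an eigenvector of M (up to an overall sign),
# namely the one associated with the larger eigenvalue:
w, V = np.linalg.eigh(M)
print(V[:, np.argmax(w)], np.array([np.cos(phi), np.sin(phi)]))
```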

It is very interesting to note that the amplitude C may be easily expressed in terms of the trace, a + c, and the determinant, ac − b²/4, of 𝕄, namely

$$C = \frac{(a+c)^{2}}{4} - \left(ac - \frac{b^{2}}{4}\right) = \frac{[\mathrm{Tr}(\mathbb{M})]^{2}}{4} - \det(\mathbb{M}) \tag{2.11}$$

Since the trace and the determinant are invariant under the similarity transformation given by 𝕌, the matrix 𝕄 has the same amplitude as its diagonalized form

$$\mathbb{D} = \begin{pmatrix} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{pmatrix} \tag{2.12}$$

where λ1 and λ2 are the eigenvalues of 𝕄. Therefore, C equals half the maximum value of f(𝔻, 𝕌). According to Eq. (2.6), this is given by

$$f_{\max} = \frac{(\lambda_{1}-\lambda_{2})^{2}}{2} \tag{2.13}$$

Then, the amplitude of 𝕄 is also given by

$$C = \frac{(\lambda_{1}-\lambda_{2})^{2}}{4} \tag{2.14}$$

Of course, since Tr(𝕄) = λ1 + λ2 and det(𝕄) = λ1 λ2, the latter equation is equivalent to Eq. (2.11).
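The equivalence of the three expressions for the amplitude is easily verified; the following short check (ours) uses a random symmetric matrix of order 2:

```python
# Consistency check of Eqs. (2.9), (2.11) and (2.14) on a random example.
import numpy as np

rng = np.random.default_rng(0)
a, c, b = rng.normal(size=3)
M = np.array([[a, b / 2], [b / 2, c]])

C_entries = ((a - c)**2 + b**2) / 4                 # Eq. (2.9)
C_trace = np.trace(M)**2 / 4 - np.linalg.det(M)     # Eq. (2.11)
l1, l2 = np.linalg.eigvalsh(M)
C_eigen = (l1 - l2)**2 / 4                          # Eq. (2.14)
print(C_entries, C_trace, C_eigen)                  # all three agree
```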

In order to simplify the equations for matrices of order n, it is useful to recall the Frobenius norm of a real square matrix 𝕄, whose square is the sum of the squares of the entries m_ij of the matrix, that is,

$$\|\mathbb{M}\|^{2} = \sum_{i=1}^{n}\sum_{j=1}^{n} m_{ij}^{2} \tag{2.15}$$

For a symmetric matrix 𝕄, the squared norm is the trace of 𝕄2, that is,

$$\|\mathbb{M}\|^{2} = \mathrm{Tr}(\mathbb{M}^{2}) \tag{2.16}$$

Since 𝕄' = 𝕌ᵀ𝕄𝕌 is symmetric, we have

$$\|\mathbb{U}^{T}\mathbb{M}\,\mathbb{U}\|^{2} = \mathrm{Tr}\!\left[\left(\mathbb{U}^{T}\mathbb{M}\,\mathbb{U}\right)^{2}\right] = \mathrm{Tr}(\mathbb{M}^{2}) = \|\mathbb{M}\|^{2}.$$

Moreover,

$$f(\mathbb{M},\mathbb{U}) = \|\mathbb{M}\|^{2} - g(\mathbb{M},\mathbb{U}) \tag{2.17}$$

where

$$g(\mathbb{M},\mathbb{U}) = \sum_{i=1}^{n}\left[\left(\mathbb{U}^{T}\mathbb{M}\,\mathbb{U}\right)_{ii}\right]^{2} \tag{2.18}$$

Therefore, the maximum (minimum) value of f(𝕄, 𝕌) occurs when g(𝕄, 𝕌) reaches its minimum (maximum) value. Such extreme values should be found under the restriction given by (2.1).
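This trade-off between f and g under the restriction (2.1) can be illustrated numerically; the sketch below (ours) uses a random symmetric matrix of order 4 and a random orthogonal matrix:

```python
# Sketch of the splitting in Eqs. (2.15)-(2.18): an orthogonal similarity
# preserves the Frobenius norm, so off(.) and the squared diagonal trade off.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
M = (A + A.T) / 2                              # symmetric, order n = 4
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # a random orthogonal U

Mp = Q.T @ M @ Q
f = np.sum(Mp**2) - np.sum(np.diag(Mp)**2)     # off(U^T M U), Eq. (2.2)
g = np.sum(np.diag(Mp)**2)                     # Eq. (2.18)
print(f + g, np.sum(M**2))                     # both equal ||M||^2, Eq. (2.17)
```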

For the matrix of order 2 in Eq. (2.3) one may write

$$g(\mathbb{M},\mathbb{U}(\theta)) = a^{2}+c^{2}+\frac{b^{2}}{2} - C\left\{1-\cos[4(\theta-\phi)]\right\} \tag{2.19}$$

Then, the minimum value of g(𝕄, 𝕌) is reached when cos[4(θ − ϕ)] equals −1. Such a minimum is given by

$$g_{\min} = a^{2}+c^{2}+\frac{b^{2}}{2} - 2C = \frac{(a+c)^{2}}{2} \tag{2.20}$$

From this, one may draw two interesting conclusions. On the one hand, after off-diagonalization, 𝕄' = 𝕌ᵀ𝕄𝕌 has a null diagonal if and only if c = −a. This means that the off-diagonalization process is not perfect for most matrices. On the other hand, gmin = g(𝕄, 𝕀) when (a + c)²/2 = a² + c², that is, c = a. Matrices satisfying this condition are already as off-diagonal as any similarity transformation can make them.
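A brute-force scan confirms Eq. (2.20); in the sketch below (ours) the entries a, b, c are an arbitrary illustration:

```python
# Numerical illustration of Eq. (2.20): for order 2, the smallest achievable
# squared diagonal under plane rotations is (a + c)^2 / 2.
import numpy as np

a, b, c = 3.0, 1.0, 1.0
thetas = np.linspace(0.0, np.pi, 4001)

def diag_sq(theta):
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Mp = U.T @ np.array([[a, b / 2], [b / 2, c]]) @ U
    return np.sum(np.diag(Mp)**2)

g = np.array([diag_sq(t) for t in thetas])
print(g.min(), (a + c)**2 / 2)                 # agree; zero only when c = -a
```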

3 JOINT APPROXIMATE DIAGONALIZATION OF SEVERAL MATRICES

When one considers two real symmetric matrices 𝕄1 and 𝕄2, the existence of a common diagonalizing matrix 𝕌 is equivalent to the condition 𝕄1 𝕄2 = 𝕄2 𝕄1. Hence, several matrices have a common diagonalizing matrix whenever each pair of them commutes.

In the present work, we consider a set of K noncommuting real symmetric matrices 𝕄1, 𝕄2,... , 𝕄K of order n. Such matrices cannot be diagonalized by the same orthogonal matrix 𝕌. Then, one may look for 𝕌 leading to the minimum value of

$$F(\mathbb{U}) = \sum_{k=1}^{K} f(\mathbb{M}_{k},\mathbb{U}) \tag{3.1}$$

where f has the meaning of Eq. (2.2). The minimum is generally not zero, and the process is thus called a joint approximate diagonalization of the matrices under consideration. The optimization process for an arbitrary value of n requires numerical iterative procedures [18]. Therefore, the next sections focus on the case n = 2.
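Before specializing to n = 2, it is convenient to have a direct numerical version of the objective function (3.1); the short sketch below (ours) is valid for any order n:

```python
# A direct numerical version of the joint objective in Eq. (3.1).
import numpy as np

def off(M):
    """Sum of the squared off-diagonal entries, Eq. (2.2)."""
    return np.sum(M**2) - np.sum(np.diag(M)**2)

def F(U, matrices):
    """Joint off-diagonal measure of a list of symmetric matrices."""
    return sum(off(U.T @ M @ U) for M in matrices)
```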

4 JOINT APPROXIMATE DIAGONALIZATION OF SEVERAL MATRICES OF ORDER 2

Similarly to (2.3), the k-th matrix of the set is written as

$$\mathbb{M}_{k} = \begin{pmatrix} a_{k} & b_{k}/2 \\ b_{k}/2 & c_{k} \end{pmatrix} \tag{4.1}$$

Then according to (2.5) and (2.6), one has

$$f(\mathbb{M}_{k},\mathbb{U}(\theta)) = C_{k} + A_{k}\cos(4\theta) + B_{k}\sin(4\theta) \tag{4.2}$$

where

$$A_{k} = \frac{b_{k}^{2}-(a_{k}-c_{k})^{2}}{4}, \qquad B_{k} = -\frac{b_{k}\,(a_{k}-c_{k})}{2} \tag{4.3}$$

and

$$C_{k} = \frac{(a_{k}-c_{k})^{2}+b_{k}^{2}}{4} \tag{4.4}$$

Comparing with Eq. (2.9), we note that Ck is the amplitude of 𝕄k . In analogy with Eq. (2.8), one may also write

$$f(\mathbb{M}_{k},\mathbb{U}(\theta)) = C_{k}\left\{1-\cos[4(\theta-\phi_{k})]\right\} \tag{4.5}$$

where

$$\cos(2\phi_{k}) = \frac{a_{k}-c_{k}}{\sqrt{(a_{k}-c_{k})^{2}+b_{k}^{2}}}, \qquad \sin(2\phi_{k}) = \frac{b_{k}}{\sqrt{(a_{k}-c_{k})^{2}+b_{k}^{2}}} \tag{4.6}$$

The objective function (3.1) is then written as

$$F(\mathbb{U}(\theta)) = C + A\cos(4\theta) + B\sin(4\theta) \tag{4.7}$$

where A = Σk Ak, B = Σk Bk and C = Σk Ck. These parameters may be expressed in terms of the K-vectors **a** = (a1, ..., aK), **b** = (b1, ..., bK) and **c** = (c1, ..., cK), namely

$$A = \frac{\|\mathbf{b}\|^{2}-\|\mathbf{a}-\mathbf{c}\|^{2}}{4}, \qquad B = -\frac{\mathbf{b}\cdot(\mathbf{a}-\mathbf{c})}{2}, \qquad C = \frac{\|\mathbf{a}-\mathbf{c}\|^{2}+\|\mathbf{b}\|^{2}}{4} \tag{4.8}$$

The function F(𝕌) will be the constant value C when A = B = 0, that is,

$$\|\mathbf{b}\| = \|\mathbf{a}-\mathbf{c}\| \quad\text{and}\quad \mathbf{b}\cdot(\mathbf{a}-\mathbf{c}) = 0 \tag{4.9}$$

In this case, no matrix 𝕌 is able to decrease the joint off-diagonal measure of the set of matrices. In other cases, there is an angle ϕ such that

$$\cos(4\phi) = \frac{A}{\sqrt{A^{2}+B^{2}}} \tag{4.10}$$

$$\sin(4\phi) = \frac{B}{\sqrt{A^{2}+B^{2}}} \tag{4.11}$$

so that F(𝕌(θ)) = C + √(A² + B²) cos[4(θ − ϕ)]. The minimizing values of θ are then given by

$$\theta = \phi + (2q+1)\,\frac{\pi}{4} \tag{4.12}$$

where q is an integer. Moreover, the minimum value of the objective function is

$$F_{\min} = C - \sqrt{A^{2}+B^{2}} \tag{4.13}$$

It is worth noting that all the matrices will be exactly diagonalized when Fmin = 0. This occurs when C² = A² + B², that is, when (aj − cj)bk = (ak − ck)bj for every j and k. Since

$$\mathbb{M}_{j}\mathbb{M}_{k}-\mathbb{M}_{k}\mathbb{M}_{j} = \frac{(a_{j}-c_{j})\,b_{k}-(a_{k}-c_{k})\,b_{j}}{2}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \tag{4.14}$$

Fmin = 0 when the matrices commute pairwise.

It is also useful to note that, from Eqs. (3.1) and (4.5), the objective function of the joint approximate diagonalization may be written as

$$F(\mathbb{U}(\theta)) = \sum_{k=1}^{K} C_{k}\left\{1-\cos[4(\theta-\phi_{k})]\right\} \tag{4.15}$$
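The closed-form recipe of this section can be collected into a short routine. The sketch below (ours; the input matrices are arbitrary illustrations) computes A, B and C from Eq. (4.8), the minimizing angle from Eq. (4.12) with q = 0, and the minimum from Eq. (4.13), and cross-checks the result by brute force:

```python
# Analytical joint approximate diagonalization of K symmetric 2x2 matrices.
import numpy as np

def joint_diag_2x2(matrices):
    a = np.array([M[0, 0] for M in matrices])
    c = np.array([M[1, 1] for M in matrices])
    b = np.array([2 * M[0, 1] for M in matrices])
    A = (b @ b - (a - c) @ (a - c)) / 4          # Eq. (4.8)
    B = -(b @ (a - c)) / 2                       # Eq. (4.8)
    C = ((a - c) @ (a - c) + b @ b) / 4          # Eq. (4.8)
    R = np.hypot(A, B)
    if R == 0:                                   # Eq. (4.9): F is constant
        return 0.0, C
    phi = np.arctan2(B, A) / 4                   # Eqs. (4.10)-(4.11)
    return phi + np.pi / 4, C - R                # Eqs. (4.12)-(4.13), q = 0

Ms = [np.array([[3.0, 0.5], [0.5, 1.0]]), np.array([[2.0, 1.0], [1.0, 0.0]])]
theta, Fmin = joint_diag_2x2(Ms)

# brute-force cross-check of the analytical minimum
def F_num(t):
    U = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return sum(np.sum((U.T @ M @ U)**2) - np.sum(np.diag(U.T @ M @ U)**2)
               for M in Ms)

print(Fmin, min(F_num(t) for t in np.linspace(0, np.pi, 4001)))
```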

5 JOINT APPROXIMATE DIAGONALIZATION OF TWO MATRICES OF ORDER 2: FOUR CASES

For two matrices 𝕄1 and 𝕄2, the objective function in Eq. (4.15) is a constant when the amplitudes C 1 and C 2 satisfy the equations

$$C_{1}\cos(4\phi_{1}) + C_{2}\cos(4\phi_{2}) = 0, \qquad C_{1}\sin(4\phi_{1}) + C_{2}\sin(4\phi_{2}) = 0 \tag{5.1}$$

If at least one of the amplitudes is not zero, then the determinant of the system should be zero. This means sin[4(ϕ2 − ϕ1)] = 0, i.e., ϕ2 − ϕ1 = pπ/4, where p is an integer. Moreover, from Eq. (5.1), the amplitudes should fulfill the condition C1 + (−1)ᵖ C2 = 0. Since the amplitudes are non-negative numbers, one arrives at the following conditions: (i) C1 = C2 and (ii) p should be an odd integer. Condition (i) means that the matrices have equal amplitudes, while condition (ii) states that the eigenvectors of 𝕄1 are off-diagonalizing vectors of 𝕄2, and conversely. For short, it will be said that two matrices obeying condition (ii) are partially complementary. Furthermore, when both (i) and (ii) are satisfied, the matrices may be said to be fully complementary, because in such a case the objective function is a constant.

In the following subsections, the four cases, which differ in whether the matrices are partially complementary and whether they have equal amplitudes, are illustrated and discussed; a small classification sketch is given below.
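The criteria derived above can be packaged into a small helper (ours); the tolerance tol is a hypothetical numerical threshold:

```python
# Classifies a pair of symmetric 2x2 matrices into the four cases of this
# section, using the criteria derived above.
import numpy as np

def classify(M1, M2, tol=1e-12):
    a1, c1, b1 = M1[0, 0], M1[1, 1], 2 * M1[0, 1]
    a2, c2, b2 = M2[0, 0], M2[1, 1], 2 * M2[0, 1]
    # partial complementarity: phi_2 - phi_1 is an odd multiple of pi/4
    partial = abs((a1 - c1) * (a2 - c2) + b1 * b2) < tol
    C1 = ((a1 - c1)**2 + b1**2) / 4              # amplitudes, Eq. (4.4)
    C2 = ((a2 - c2)**2 + b2**2) / 4
    equal = abs(C1 - C2) < tol
    if partial and equal:
        return "fully complementary"
    if partial:
        return "partially complementary, different amplitudes"
    return ("not partially complementary, "
            + ("equal" if equal else "different") + " amplitudes")
```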

5.1 Non partially complementary matrices of different amplitudes

In this subsection we consider the matrices

(5.2)

whose amplitudes C1 and C2 differ, that of 𝕄2 being the larger. They are not partially complementary because (a1 − c1)(a2 − c2) + b1b2 = 2 ≠ 0. The main directions of these matrices are displayed as dashed and dotted lines in Figure 1.

Figure 1:
The directions of the columns of the minimizing matrix 𝕌(θ) in solid lines, and the main directions of the matrices 𝕄1 and 𝕄2, in dashed and dotted lines. The matrices, given by Eq. (5.2), are not partially complementary and have different amplitudes.

In this case, according to Eqs. (4.11) and (4.12), the minimizing values of θ are given by Eq. (4.12) with integer q. The corresponding directions are shown as solid lines in Figure 1. They are contained in the smallest angles formed by the main directions of 𝕄1 and 𝕄2. Moreover, they are closer to the directions of the matrix with the larger amplitude, namely 𝕄2. This is also apparent in Figure 2, where the objective function F(𝕌) is displayed as a function of θ. The objective functions f(𝕄1, 𝕌) and f(𝕄2, 𝕌) of the separate diagonalizations of the matrices are also shown. It is seen that the minimization procedure has lowered the objective function from its initial value F(𝕀) = 4 to its minimum value Fmin = 2.

Figure 2:
For the matrices of Figure 1, the objective function F(𝕌), in solid line, and the functions f(𝕄1, 𝕌) and f(𝕄2, 𝕌), in dashed and dotted lines.
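Since the entries of the matrices in Eq. (5.2) are not reproduced here, the sketch below (ours) uses a hypothetical pair with the same qualitative properties, not partially complementary and with different amplitudes, and locates the minimizing angle numerically:

```python
# Hypothetical pair: not partially complementary, different amplitudes.
import numpy as np

M1 = np.array([[1.0, 0.5], [0.5, 0.0]])       # amplitude C1 = 1/2
M2 = np.array([[3.0, 0.0], [0.0, 0.0]])       # amplitude C2 = 9/4 (larger)

def off2(M, t):
    U = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    Mp = U.T @ M @ U
    return np.sum(Mp**2) - np.sum(np.diag(Mp)**2)

ts = np.linspace(0, np.pi, 4001)
F = np.array([off2(M1, t) + off2(M2, t) for t in ts])
t_star = ts[np.argmin(F)]
# the minimizer lies between the main directions of M1 and M2,
# closer to those of the matrix with the larger amplitude (M2)
print(t_star, F.min())
```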

5.2 Non partially complementary matrices of equal amplitudes

Now we consider the matrices

(5.3)

whose amplitudes are equal. The matrices are not partially complementary; in fact, (a1 − c1)(a2 − c2) + b1b2 = 3 ≠ 0.

In this case, the minimizing angles θ are given by Eq. (4.12), where q is an integer. In Figure 3, the solid lines along such directions are bisectrices of the smaller angles defined by the main directions of 𝕄1 and 𝕄2. This is clearly shown in Figure 4, where the objective functions F(𝕌), f(𝕄1, 𝕌) and f(𝕄2, 𝕌) are given as functions of the angle θ.

Figure 3:
The directions of the columns of the minimizing matrix 𝕌(θ) in solid lines, and the main directions of the matrices 𝕄1 and 𝕄2, in dashed and dotted lines. The matrices, given by Eq. (5.3), are not partially complementary and have equal amplitudes.

Figure 4:
For the matrices of Figure 3, the objective function F(𝕌), in solid line, and the functions f(𝕄1, 𝕌) and f(𝕄2, 𝕌), in dashed and dotted lines.

5.3 Partially complementary matrices with different amplitudes

It is also interesting to consider the matrices

(5.4)

which have different amplitudes, that of 𝕄2 being the larger. The matrices are partially complementary, since (a1 − c1)(a2 − c2) + b1b2 = 0. The angles θ given by Eq. (4.12), where q is an integer, minimize the objective function F(𝕌).

In this case, as shown in Figure 5, the minimizing directions coincide with the main directions of the matrix having the larger amplitude, namely 𝕄2. Figure 6 displays the objective functions F(𝕌), f(𝕄1, 𝕌) and f(𝕄2, 𝕌) as functions of θ. One may note that the latter two functions oscillate completely out of phase, that is, the angles producing the maximum value for one matrix yield the minimum value for the other. Therefore, the term of larger amplitude dominates the sum F(𝕌).

Figure 5:
The directions of the columns of the minimizing matrix 𝕌(θ) in solid lines, and the main directions of the matrices 𝕄1 and 𝕄2, in dashed and dotted lines. The matrices, given by Eq. (5.4), are partially complementary and have different amplitudes.

Figure 6:
For the matrices of Figure 5, the objective function F(𝕌), in solid line, and the functions f(𝕄1, 𝕌) and f(𝕄2, 𝕌), in dashed and dotted lines.

5.4 Fully complementary matrices

Finally, we consider the matrices

(5.5)

whose amplitudes are equal. Since these matrices are partially complementary, that is, (a1 − c1)(a2 − c2) + b1b2 = 0, and have equal amplitudes, they are fully complementary.

This is the case where the objective function F(𝕌) remains constant, as displayed in Figure 7. Therefore, one is not able to decrease the joint off-diagonal measure of the pair of matrices. Since no special values of θ exist, a figure similar to Figures 1, 3 and 5 would not be meaningful in this case.

Figure 7:
The objective function F(𝕌), in solid line, and the functions f(𝕄1, 𝕌) and f(𝕄2, 𝕌), in dashed and dotted lines. The matrices, given by Eq. (5.5), are fully complementary.
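This degenerate behavior is easy to reproduce; the sketch below (ours) uses a hypothetical fully complementary pair, for which F(𝕌(θ)) is constant in θ:

```python
# Hypothetical fully complementary pair: phi_2 - phi_1 = pi/4 and C1 = C2,
# so the joint objective is constant and equal to C = C1 + C2.
import numpy as np

M1 = np.array([[1.0, 0.0], [0.0, -1.0]])      # a - c = 2, b = 0, C1 = 1
M2 = np.array([[0.0, 1.0], [1.0, 0.0]])       # a - c = 0, b = 2, C2 = 1

def off2(M, t):
    U = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    Mp = U.T @ M @ U
    return np.sum(Mp**2) - np.sum(np.diag(Mp)**2)

ts = np.linspace(0, np.pi, 7)
print([round(off2(M1, t) + off2(M2, t), 12) for t in ts])   # constant = 2
```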

6 CONCLUSIONS

We have dealt with the problem of joint approximate diagonalization of a set of symmetric real matrices. The problem has been reduced to the search for the orthogonal transformation matrix that minimizes the joint off-diagonal sums of squares of the matrices.

Analytical expressions have been given for the case of a set of matrices of order 2. For the particular case of two matrices, the discussion was carried out after introducing the concept of off-diagonalizing vectors. The latter are the columns of an orthogonal matrix that off-diagonalizes a given matrix. When the eigenvectors of one of the matrices are off-diagonalizing vectors of the other, we say that the matrices are partially complementary. Moreover, the sum of the squared off-diagonal entries of a transformed matrix oscillates harmonically as a function of the rotation angle. The amplitude of the oscillation is one fourth of the squared difference between the eigenvalues of the matrix. The results and discussions are presented for several cases, differing in whether the matrices are partially complementary and/or have equal amplitudes. The case where both conditions hold deserves special attention because the joint approximate diagonalization has no effect; in other words, the objective function is constant. We say that such matrices are fully complementary.

We note that the joint approximate diagonalization is often applied to large matrices, and the numerical and computational aspects have been the main focus of previous works. In contrast, our thorough discussion of matrices of order 2 has shed light on the geometrical meaning of the procedure. The introduction of the concepts of off-diagonalizing vectors, matrix amplitude and complementary matrices has proved very useful and should find additional applications in Linear Algebra and other branches of science. Hopefully, this work will encourage the treatment of both complex and higher-order matrices.

ACKNOWLEDGMENTS

The authors are grateful to the research group MApliC/Unesp for useful discussions.

REFERENCES

1. Laurent Albera, Anne Ferréol, Pierre Comon & Pascal Chevalier. Blind Identification of Overcomplete MixturEs of sources (BIOME). Linear Algebra and its Applications, 391 (2004), 3-30.
2. Howard Anton & Chris Rorres. Elementary Linear Algebra - Applications Version, 10th edition (John Wiley & Sons, 2010).
3. Adel Belouchrani, Karim Abed-Meraim, Jean-François Cardoso & Eric Moulines. A blind source separation technique using second-order statistics. IEEE Transactions on Signal Processing, 45(2) (1997), 434-444.
4. Abdelwaheb Boudjellal, A. Mesloub, Karim Abed-Meraim & Adel Belouchrani. Separation of dependent autoregressive sources using joint matrix diagonalization. IEEE Signal Processing Letters, 22(8) (2015), 1180-1183.
5. Angelika Bunse-Gerstner, Ralph Byers & Volker Mehrmann. Numerical methods for simultaneous diagonalization. SIAM Journal on Matrix Analysis and Applications, 14(4) (1993), 927-949.
6. Augusto V. Cardona & José V.P. de Oliveira. Solução ELT AN para o problema de transporte com fonte. Trends in Applied and Computational Mathematics, 10(2) (2009), 125-134.
7. Augusto V. Cardona, R. Vasques & M.T. Vilhena. Uma nova versão do método LT AN. Trends in Applied and Computational Mathematics, 5(1) (2004), 49-54.
8. Jean-François Cardoso & Antoine Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM Journal on Matrix Analysis and Applications, 17(1) (1996), 161-164.
9. Gilles Chabriel, Martin Kleinsteuber, Eric Moreau, Hao Shen, Petr Tichavský & Arie Yeredor. Joint matrices decompositions and blind source separation: A survey of methods, identification, and applications. IEEE Signal Processing Magazine, 31(3) (2014), 34-43.
10. Marco Congedo, Bijan Afsari, Alexandre Barachant & Maher Moakher. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices. PLoS ONE, 10(4) (2015).
11. Lieven De Lathauwer. A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization. SIAM Journal on Matrix Analysis and Applications, 28(3) (2006), 642-666.
12. Klaus Glashoff & Michael M. Bronstein. Matrix commutators: their asymptotic metric properties and relation to approximate joint diagonalization. Linear Algebra and its Applications, 439(8) (2013), 2503-2513.
13. François Gygi, Jean-Luc Fattebert & Eric Schwegler. Computation of Maximally Localized Wannier Functions using a simultaneous diagonalization algorithm. Computer Physics Communications, 155(1) (2003), 1-6.
14. Marcel Joho. Newton Method for Joint Approximate Diagonalization of Positive Definite Hermitian Matrices. SIAM Journal on Matrix Analysis and Applications, 30(3) (2008), 1205-1218.
15. Steven J. Leon. Linear Algebra with Applications, 8th edition (Pearson, 2010).
16. S.I. McNeill & D.C. Zimmerman. A framework for blind modal identification using joint approximate diagonalization. Mechanical Systems and Signal Processing, 22(7) (2008), 1526-1548.
17. Anthony J. Pettofrezzo. Matrices and Transformations (Dover Publications, Inc., 1966).
18. Dinh Tuan Pham. Joint approximate diagonalization of positive definite Hermitian matrices. SIAM Journal on Matrix Analysis and Applications, 22(4) (2001), 1136-1152.
19. Petr Tichavský & Arie Yeredor. Fast approximate joint diagonalization incorporating weight matrices. IEEE Transactions on Signal Processing, 57(3) (2009), 878-891.
20. Roland Vollgraf & Klaus Obermayer. Quadratic optimization for simultaneous matrix diagonalization. IEEE Transactions on Signal Processing, 54(9) (2006), 3270-3278.
21. Arie Yeredor. Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation. IEEE Transactions on Signal Processing, 50(7) (2002), 1545-1553.

Publication Dates

  • Publication in this collection
    Jan-Apr 2016

History

  • Received
    23 Oct 2015
  • Accepted
    15 Jan 2016