
Block triangular preconditioner for static Maxwell equations*

Shi-Liang WuI; Ting-Zhu HuangII; Liang LiII

ISchool of Mathematics and Statistics, Anyang Normal University, Anyang, Henan, 455002, PR China. E-mail: wushiliang1999@126.com

IISchool of Mathematics Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, PR China. E-mails: tzhuang@uestc.edu.cn, tingzhuhuang@126.com

ABSTRACT

In this paper, we explore the block triangular preconditioning techniques applied to the iterative solution of the saddle point linear systems arising from the discretized Maxwell equations. Theoretical analysis shows that all the eigenvalues of the preconditioned matrix are strongly clustered. Numerical experiments are given to demonstrate the efficiency of the presented preconditioner.

Mathematical subject classification: 65F10.

Keywords: Maxwell equations, preconditioner, Krylov subspace method, saddle point system.

1 Introduction

We consider the block triangular preconditioner for linear systems arising from the finite element discretization of the following static Maxwell equations: find u and p such that

  ∇ × ∇ × u + ∇p = f   in Ω,
  ∇ · u = 0            in Ω,        (1.1)
  u × n = 0,  p = 0    on ∂Ω,

where Ω ⊂ ℝ^2 is a simply connected domain with connected boundary ∂Ω, and n represents the outward unit normal vector on ∂Ω; u is the vector field, p is the Lagrange multiplier and the datum f is a given generic source.

There are a large variety of schemes for solving the Maxwell equations, such as the edge finite element method [1, 2, 6], the domain decomposition method [5, 9], the algebraic multigrid method [3] and so on.

Using finite element discretization with Nédélec elements of the first kind [4, 11, 7] for the approximation of the vector field and standard nodal elements for the multiplier, we obtain the approximate solution of (1.1) by solving the following saddle point linear system:

  [ A   B^T ] [ u ]   [ g ]
  [ B    0  ] [ p ] = [ 0 ],        (1.2)

where u ∈ ℝ^n and p ∈ ℝ^m are the coefficient vectors of the finite element approximations and g ∈ ℝ^n is the load vector associated with the datum f. The matrix A ∈ ℝ^{n×n}, corresponding to the discrete curl-curl operator, is symmetric positive semidefinite with nullity m, and B ∈ ℝ^{m×n} is a discrete divergence operator with rank(B) = m. One can see [4, 7, 11] for details.
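For readers implementing (1.2), the block system can be assembled directly from the discrete operators. A minimal SciPy sketch follows; here A_h and B_h are random sparse stand-ins for the curl-curl and divergence matrices that a finite element assembly would produce, and the sizes, densities and names are our own illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, m = 100, 30
A_h = sp.random(n, n, density=0.05, random_state=rng)
A_h = (A_h @ A_h.T).tocsr()                  # symmetric positive semidefinite stand-in
B_h = sp.random(m, n, density=0.10, random_state=rng).tocsr()  # divergence stand-in
g = np.ones(n)                               # load vector

K = sp.bmat([[A_h, B_h.T], [B_h, None]], format='csr')  # coefficient matrix of (1.2)
rhs = np.concatenate([g, np.zeros(m)])                   # right-hand side of (1.2)
```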

The form of (1.2) frequently occurs in a large number of applications, such as the (linearized) Navier-Stokes equations [21], the time-harmonic Maxwell equations [7, 8, 10], and the linear programming (LP) and quadratic programming (QP) problems [17, 20]. At present, there exist four main kinds of preconditioners for the saddle point linear system (1.2): block diagonal preconditioners [22, 23, 24, 25], block triangular preconditioners [15, 16, 26, 27, 28, 37], constraint preconditioners [29, 30, 31, 32, 33] and Hermitian and skew-Hermitian splitting (HSS) preconditioners [34]. One can see [12] for a general discussion.

Recently, Rees and Greif [17] presented the following triangular preconditioner:

  R_k = [ A + B^T W^{-1} B   kB^T ]
        [         0            W  ],

where W is a symmetric positive definite matrix and k ≠ 0. It was shown that if A is symmetric positive semidefinite with nullity q (q < m), then the preconditioned matrix has five distinct eigenvalues: 1 with algebraic multiplicity n - m, the pair

  λ± = (-k ± √(k^2 + 4)) / 2

with total algebraic multiplicity 2q, and the two roots of

  (1 + η)λ^2 + (ηk - 1)λ - η = 0

with total algebraic multiplicity 2(m - q), where η > 0 runs over the generalized eigenvalues of ηAx = B^T W^{-1} Bx. Obviously, if m = q, the preconditioned matrix has three distinct eigenvalues: 1 and (-k ± √(k^2 + 4))/2. This is favorable to Krylov subspace methods, which rely on matrix-vector products and on the number of distinct eigenvalues of the preconditioned matrix [13, 19]. It is a well-known fact that preconditioning attempts to improve the spectral properties in order to speed up the convergence of Krylov subspace methods [14].
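To make the clustering claim concrete, the following small NumPy sketch builds a random test problem with nullity(A) = m (so q = m) and checks that the preconditioned matrix has just the three predicted eigenvalues; the sizes, the choice W = I and the random data are illustrative assumptions, not the setting of [17]:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 12, 4, -1.0
C = rng.standard_normal((n, n - m))
A = C @ C.T                        # symmetric positive semidefinite, nullity m
B = rng.standard_normal((m, n))    # full row rank (generically)
W = np.eye(m)                      # illustrative SPD choice

K = np.block([[A, B.T], [B, np.zeros((m, m))]])   # saddle point matrix of (1.2)
Rk = np.block([[A + B.T @ B, k * B.T],
               [np.zeros((m, n)), W]])            # R_k with W = I

eigs = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(Rk, K))))
print(np.round(eigs, 6))
# expected clusters for k = -1: (1 - sqrt(5))/2, 1 and (1 + sqrt(5))/2
```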

In the light of this preconditioning idea, this paper is devoted to presenting new block triangular preconditioners for the linear system (1.2). It is shown that, in contrast to the block triangular preconditioner R_k, all the eigenvalues of the proposed preconditioned matrices are more strongly clustered. Numerical experiments show that the new preconditioners are slightly more efficient than the preconditioner R_k.

The remainder of this paper is organized as follows. In Section 2, the new block triangular preconditioners are presented and their algebraic properties are derived in detail. In Section 3, a single column nonzero (1,2) block preconditioner is presented. In Section 4, numerical experiments are presented. Finally, in Section 5 some conclusions are drawn.

2 Block triangular preconditioner

To study the block triangular preconditioners for solving (1.2) conveniently, we consider the following saddle point linear system:

  [ A   B^T ] [ x ]   [ f ]
  [ B    0  ] [ y ] = [ g ],        (2.1)

and denote its coefficient matrix by 𝒜. Here A ∈ ℝ^{n×n} is assumed to be symmetric positive semidefinite with high nullity and B ∈ ℝ^{m×n} (m < n) has full rank. We assume that 𝒜 is nonsingular, from which it follows that

  null(A) ∩ null(B) = {0}.        (2.2)

Now we are concerned with the following block triangular matrix as a preconditioner:

  𝒫 = [ A + B^T U^{-1} B   B^T ]
      [         0           W  ],

where U, W ∈ ℝ^{m×m} are symmetric positive definite matrices.
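Applying 𝒫 inside a Krylov method only requires a block back-substitution: one solve with W followed by one solve with the augmented (1,1) block. A minimal dense sketch of this kernel (the function name is our own; dense Cholesky stands in for whatever sparse factorization or multigrid cycle would be used in practice):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def apply_P_inverse(A, B, U, W, r1, r2):
    """Solve P z = r with P = [[A + B^T U^{-1} B, B^T], [0, W]]."""
    y = np.linalg.solve(W, r2)                 # second block row: W y = r2
    Aug = A + B.T @ np.linalg.solve(U, B)      # A + B^T U^{-1} B, SPD under (2.2)
    c = cho_factor(Aug)
    x = cho_solve(c, r1 - B.T @ y)             # first block row, back-substitution
    return x, y
```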

Proposition 2.1. Let {x_1, ..., x_{n-m}} be a basis of the null space of B. Then the vectors (x_i, 0) are n - m linearly independent eigenvectors of 𝒫^{-1}𝒜 with eigenvalue 1.

Proof. The eigenvalue problem of 𝒫^{-1}𝒜 is

  [ A   B^T ] [ x ]       [ A + B^T U^{-1} B   B^T ] [ x ]
  [ B    0  ] [ y ] = λ [         0           W  ] [ y ].

Then

  Ax + B^T y = λ(A + B^T U^{-1} B)x + λB^T y,

  Bx = λWy.

From the nonsingularity of 𝒜 it follows that λ ≠ 0 and x ≠ 0. Substituting y = λ^{-1} W^{-1} Bx into the first block row and multiplying through by λ, we get

  (λ - λ^2)Ax = λ^2 B^T U^{-1} Bx + (λ - 1)B^T W^{-1} Bx.        (2.3)

Assume that x = x_i ≠ 0 is a null vector of B. Then (2.3) simplifies into

  (λ^2 - λ)Ax_i = 0.

Since a nonzero null vector of B cannot be a null vector of A, by (2.2) and the nonsingularity of 𝒜, the following natural property is derived:

  ⟨Ax, x⟩ > 0   for all 0 ≠ x ∈ ker(B).

It follows that Ax_i ≠ 0 and λ = 1. Since Bx_i = 0, it follows that y = 0, and λ = 1 is an eigenvalue of 𝒫^{-1}𝒜 with algebraic multiplicity (at least) n - m, whose associated eigenvectors are (x_i, 0), i = 1, 2, ..., n - m.

Remark 2.1. From Proposition 2.1, it is easy to see that 𝒫^{-1}𝒜 has at least n - m eigenvalues equal to 1, regardless of U and W. Stronger clustering of the eigenvalues can be obtained by choosing the two matrices in specific ways, such as U = W.
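As a quick numerical illustration of Proposition 2.1, with our own randomly generated data and deliberately different SPD matrices U ≠ W, at least n - m eigenvalues of 𝒫^{-1}𝒜 equal 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
C = rng.standard_normal((n, n - m))
A = C @ C.T                                # PSD with nullity m
B = rng.standard_normal((m, n))
U = np.diag(rng.uniform(1.0, 2.0, m))      # two different SPD choices
W = np.diag(rng.uniform(2.0, 3.0, m))

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
P = np.block([[A + B.T @ np.linalg.solve(U, B), B.T],
              [np.zeros((m, n)), W]])

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
print(np.sum(np.isclose(eigs, 1.0)), ">=", n - m)   # count of unit eigenvalues
```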

To this end, we consider the following indefinite block triangular matrix as a preconditioner:

  H_s = [ A + sB^T W^{-1} B   (1 + s)B^T ]
        [          0              -W     ],

where W ∈ ℝ^{m×m} is a symmetric positive definite matrix and s > 0. The next lemma shows that all the eigenvalues of the preconditioned matrix are strongly clustered; its proof is similar to that of Theorem 2.4 in [36].

Lemma 2.1. Suppose that A is symmetric positive semidefinite with nullity r (r < m), B has full rank and λ is an eigenvalue of H_s^{-1}𝒜 with eigenvector (v, q). Then λ = 1 is an eigenvalue of H_s^{-1}𝒜 with multiplicity n, and λ = 1/s is an eigenvalue with multiplicity r. The remaining m - r eigenvalues are λ = µ/(1 + sµ), where µ are the nonzero generalized eigenvalues of

  B^T W^{-1} Bx = µAx.        (2.4)

Assume, in addition, that {x_1, ..., x_r} is a basis of the null space of A, {y_1, ..., y_{n-m}} is a basis of the null space of B, and {z_1, ..., z_{m-r}} is a set of linearly independent vectors that complete null(A) ∪ null(B) to a basis of ℝ^n. Then a set of linearly independent eigenvectors corresponding to λ = 1 can be found: the n - m vectors (y_i, 0), the r vectors (x_i, -W^{-1}Bx_i) and the m - r vectors (z_i, -W^{-1}Bz_i). The r vectors (x_i, -sW^{-1}Bx_i) are eigenvectors associated with λ = 1/s.

Proof. Let λ be an eigenvalue of H_s^{-1}𝒜 with eigenvector (v, q). Then

  [ A   B^T ] [ v ]       [ A + sB^T W^{-1} B   (1 + s)B^T ] [ v ]
  [ B    0  ] [ q ] = λ [          0              -W     ] [ q ],

which can be rewritten into

  Av + B^T q = λ(A + sB^T W^{-1} B)v + (1 + s)λB^T q,        (2.5)

  Bv = -λWq.        (2.6)

Since 𝒜 is nonsingular, it is not difficult to get that λ ≠ 0 and v ≠ 0. By (2.6), we get

  q = -λ^{-1} W^{-1} Bv.

Substituting it into (2.5) and multiplying through by λ yields

  λ(1 - λ)Av = (λ - 1)(sλ - 1)B^T W^{-1} Bv.        (2.7)

If λ = 1, then (2.7) is satisfied for an arbitrary nonzero vector v ∈ ℝ^n, and hence (v, -W^{-1}Bv) is an eigenvector of H_s^{-1}𝒜.

If x ∈ null(A), then from (2.7) we obtain

  (λ - 1)(sλ - 1)B^T W^{-1} Bx = 0,

from which it follows that λ = 1 and λ = 1/s are eigenvalues associated with (x, -W^{-1}Bx) and (x, -sW^{-1}Bx), respectively.

Assume that λ ≠ 1. Combining (2.4) and (2.7) yields

  λ^2 - λ = µ(-sλ^2 + (1 + s)λ - 1).

It is easy to see that the rest m - r eigenvalues are

  λ = µ / (1 + sµ).        (2.8)

A specific set of linearly independent eigenvectors for λ = 1 and λ = 1/s can be readily found. From (2.2), it is not difficult to see that (y_i, 0), (x_i, -W^{-1}Bx_i) and (z_i, -W^{-1}Bz_i) are eigenvectors associated with λ = 1. The r vectors (x_i, -sW^{-1}Bx_i) are eigenvectors associated with λ = 1/s.

Remark 2.2. (2.8) gives an explicit formula for the remaining eigenvalues in terms of the generalized eigenvalues µ of (2.4), and these eigenvalues become tightly clustered as µ → ∞. To illustrate this, we examine the case s = 1, i.e., H_1. We have λ = 1 with multiplicity n + r, and the rest m - r eigenvalues are

  λ = µ / (1 + µ).

Since λ is a strictly increasing function of µ on (0, ∞), the remaining eigenvalues satisfy λ → 1 as µ → ∞. In [17], the authors considered k = -1, i.e., R_{-1}, and obtained five distinct eigenvalues: λ = 1 (with multiplicity n - m), λ± = (1 ± √5)/2 (each with multiplicity q), and the remaining eigenvalues

  λ = (1 ± √(1 + 4η/(1 + η))) / 2,

which lie in the intervals

  ((1 - √5)/2, 0) ∪ (1, (1 + √5)/2).

Obviously, the eigenvalues of our preconditioned matrix are more clustered than those stated in [17]. That is, the preconditioner H_1 is slightly better than R_{-1} from the viewpoint of eigenvalue clustering. Note, however, that very large µ may indicate ill-conditioning of the (1,1) block of H_1. Golub et al. [18] considered minimizing the condition number of the (1,1) block of H_1. The simplest choice is W^{-1} = γI (γ > 0), for which all the eigenvalues that are not equal to 1 become

  λ = γδ / (1 + γδ),

where δ are the positive generalized eigenvalues of δAx = B^T Bx. Obviously, the parameter γ should be chosen large enough that these eigenvalues are strongly clustered near 1, but not so large that the (2,2) block of H_1 becomes nearly singular.

From (2.2), it is easy to see that the nullity of A is at most m. Lemma 2.1 shows that the higher the nullity of A, the more strongly the eigenvalues are clustered. Combining Lemma 2.1 with (1.2), the following theorem is obtained:

Theorem 2.1. Suppose that A is symmetric positive semidefinite with nullity m. Then the preconditioned matrix H_s^{-1}𝒜 has precisely two eigenvalues: λ = 1, of multiplicity n, and λ = 1/s, of multiplicity m. Moreover, if s = 1, then the preconditioned matrix has precisely one eigenvalue: λ = 1, with multiplicity n + m.

Remark 2.3. The important consequence of Theorem 2.1 is that the preconditioned matrix has a minimal polynomial of degree at most 2. Therefore, a Krylov subspace method like GMRES applied to a preconditioned linear system with this coefficient matrix converges in 2 iterations or less, in exact arithmetic [38]. By the above discussion, the optimal choice of the parameter s of the preconditioner H_s is 1. For the preconditioner R_k, by contrast, it is very difficult to determine the optimal parameter k.
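A small SciPy sketch illustrating Remark 2.3 on random data (the sizes, W = I, and the dense application of H_1^{-1} are illustrative assumptions; a practical code would apply H_1^{-1} by block back-substitution as sketched earlier, and the rtol keyword assumes a recent SciPy):

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(2)
n, m, s = 12, 4, 1.0
C = rng.standard_normal((n, n - m))
A = C @ C.T                                        # PSD with nullity m
B = rng.standard_normal((m, n))
W = np.eye(m)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
Hs = np.block([[A + s * B.T @ B, (1 + s) * B.T],
               [np.zeros((m, n)), -W]])            # H_s with W = I

M = LinearOperator(K.shape, matvec=lambda r: np.linalg.solve(Hs, r))
b = rng.standard_normal(n + m)

residuals = []
x, info = gmres(K, b, M=M, rtol=1e-8, restart=50,
                callback=lambda pr: residuals.append(pr), callback_type='pr_norm')
print(len(residuals), "iterations")                # about 2, as Remark 2.3 predicts
```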

Next, we consider the following positive definite block triangular preconditioner:

  T_h = [ A + hB^T W^{-1} B   (1 - h)B^T ]
        [          0               W     ],

where W ∈ ℝ^{m×m} is a symmetric positive definite matrix and h > 0.

Similarly, we can get the following results.

Lemma 2.2. Suppose that A is symmetric positive semidefinite with nullity r (r < m), B has full rank and λ is an eigenvalue of T_h^{-1}𝒜 with eigenvector (v, q). Then λ = 1 is an eigenvalue of T_h^{-1}𝒜 with multiplicity n, and λ = -1/h is an eigenvalue with multiplicity r. The remaining m - r eigenvalues are

  λ = -µ / (1 + hµ),

where µ are defined by (2.4). In addition, let {x_i}, {y_i} and {z_i} be defined as in Lemma 2.1. Then a set of linearly independent eigenvectors corresponding to λ = 1 can be found: the n - m vectors (y_i, 0), the r vectors (x_i, W^{-1}Bx_i) and the m - r vectors (z_i, W^{-1}Bz_i). The r vectors (x_i, -hW^{-1}Bx_i) are eigenvectors associated with λ = -1/h.

Theorem 2.2. Suppose that A is symmetric positive semidefinite with nullity m. Then the preconditioned matrix T_h^{-1}𝒜 has precisely two eigenvalues: λ = 1, of multiplicity n, and λ = -1/h, of multiplicity m. Moreover, if h = 1, the preconditioned matrix has precisely two eigenvalues: λ = 1, of multiplicity n, and λ = -1, of multiplicity m.

From Theorem 2.2, it is not difficult to find that the optimal choice of the parameter h (> 0) of the preconditioner T_h is 1.
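The following sketch contrasts the two theorems numerically on random data with nullity(A) = m (sizes and W = I are illustrative assumptions): H_1^{-1}𝒜 should show the single eigenvalue 1, while T_1^{-1}𝒜 should show the two eigenvalues 1 and -1:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 12, 4
C = rng.standard_normal((n, n - m))
A = C @ C.T                                        # PSD with nullity m
B = rng.standard_normal((m, n))
W = np.eye(m)
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

H1 = np.block([[A + B.T @ B, 2 * B.T], [np.zeros((m, n)), -W]])     # H_s, s = 1
T1 = np.block([[A + B.T @ B, np.zeros((n, m))],
               [np.zeros((m, n)), W]])                              # T_h, h = 1

for name, P in (("H1", H1), ("T1", T1)):
    ev = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(P, K))))
    print(name, np.round(ev, 6))
# expected: H1 -> all eigenvalues 1; T1 -> -1 (m times) and 1 (n times)
```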

3 A single column nonzero (1,2) block preconditioner

We consider the following single column nonzero (1,2) block preconditioner:

  T = [ A + B^T W^{-1} B   b_i e_i^T ]
      [         0              W     ],

where b_i denotes column i of B^T, e_i is the i-th column of the m×m identity matrix, and W ∈ ℝ^{m×m} is symmetric positive definite. It is not difficult to find that A + B^T W^{-1} B is nonsingular, because A is symmetric positive semidefinite, W is symmetric positive definite and null(A) ∩ null(B) = {0} by (2.2).

The spectral properties of T^{-1}𝒜 are presented in the following theorem.

Theorem 3.1. The preconditioned matrix T^{-1}𝒜 has λ = 1 with multiplicity n and λ = -1 with multiplicity m - 1. Corresponding eigenvectors can be explicitly found in terms of the null space and column space of A.

Proof. Let λ be any eigenvalue of T^{-1}𝒜, and let z = (x, y) be the corresponding eigenvector. Then 𝒜z = λTz, i.e.,

  Ax + B^T y = λ(A + B^T W^{-1} B)x + λb_i e_i^T y,        (3.1)

  Bx = λWy.

Let

  B^T = [ Y  Z ] [ R ]
                 [ 0 ]

be an orthogonal (QR) factorization of B^T, where R ∈ ℝ^{m×m} is upper triangular, Y ∈ ℝ^{n×m}, and the columns of Z ∈ ℝ^{n×(n-m)} form an orthonormal basis of the null space of B. Premultiplying (3.1) by the nonsingular and square matrix

  [ Z^T   0 ]
  [ Y^T   0 ]
  [  0    I ]

and postmultiplying by its transpose, that is, writing x = Zx_z + Yx_y, gives an equivalent eigenvalue problem in the variables (x_z, x_y, y).

By inspection, we check λ = 1, which reduces this eigenvalue problem considerably. Immediately, there exist n - m corresponding eigenvectors of the form (x_z, x_y, y) = (u, 0, 0) for n - m linearly independent vectors u. At the same time, m further linearly independent eigenvectors corresponding to λ = 1 can be written down in terms of the column space of A. That is, there exist n linearly independent eigenvectors corresponding to λ = 1.

It is not difficult to get that there exist m - 1 eigenvectors corresponding to λ = -1. Indeed, substituting λ = -1 requires finding a solution of the reduced system. Consider any x* in the null space of A. Then Ax* = 0, and we are left with finding a y that satisfies the first block row for the fixed x*. By the second block row, y = -W^{-1}Bx*, and substituting this into the first block row requires e_i^T W^{-1} Bx* = 0. In general, we can find exactly m - 1 linearly independent null vectors of A orthogonal to the fixed vector B^T W^{-1} e_i; that is, there are m - 1 eigenvectors corresponding to λ = -1.
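As a rough numerical check on random data (assuming for concreteness the form of T displayed at the beginning of this section, with W = I, i = 1 and nullity(A) = m; the sizes are illustrative), the spectrum of T^{-1}𝒜 clusters at 1 and -1:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, i = 12, 4, 0                                 # i = 0 is the first column (0-based)
C = rng.standard_normal((n, n - m))
A = C @ C.T                                        # PSD with nullity m
B = rng.standard_normal((m, n))
W = np.eye(m)
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

E12 = np.zeros((n, m))
E12[:, i] = B.T[:, i]                              # single nonzero column b_i e_i^T
T = np.block([[A + B.T @ B, E12], [np.zeros((m, n)), W]])

ev = np.sort_complex(np.linalg.eigvals(np.linalg.solve(T, K)))
print(np.round(ev, 6))                             # clusters at -1 and 1 (cf. Theorem 3.1)
```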

Remark 3.1. A related single column preconditioner was considered in [17], with W = γI (γ > 0) and with the augmentation in the (1,1) block built from B^T(I - e_i e_i^T)B. In practice, that preconditioner can be risky: if A is a symmetric positive semidefinite matrix with high nullity, then its (1,1) block may become singular, because I - e_i e_i^T is only symmetric positive semidefinite. In our numerical experiments, we find that this preconditioner leads to a deterioration of performance for solving (1.2) when i = 1; in this case, the preconditioner is singular.

4 Numerical experiments

In this section, two examples are given to demonstrate the performance of our preconditioning approach. In our numerical experiments, all the computations are done with MATLAB 7.0 on a PC with an Intel(R) Core(TM)2 CPU T7200 (2.0 GHz) and 1024M of RAM. The initial guess is taken to be

x^{(0)} = 0

and the stopping criterion is chosen as follows:

||b - 𝒜x^{(k)}||_2 < 10^{-6} ||b||_2.
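In a SciPy setting this relative-residual criterion corresponds to rtol=1e-6 with atol=0 in the iterative solvers (keyword names assume a recent SciPy). A minimal sketch using BiCGStab preconditioned with H_1 on a random stand-in problem (the test matrices and W = I are illustrative assumptions, not the finite element data of the examples below):

```python
import numpy as np
from scipy.sparse.linalg import bicgstab, LinearOperator

rng = np.random.default_rng(5)
n, m = 12, 4
C = rng.standard_normal((n, n - m))
A = C @ C.T                                        # PSD with nullity m
B = rng.standard_normal((m, n))
W = np.eye(m)
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

H1 = np.block([[A + B.T @ B, 2 * B.T], [np.zeros((m, n)), -W]])
M = LinearOperator(K.shape, matvec=lambda r: np.linalg.solve(H1, r))

x, info = bicgstab(K, b, x0=np.zeros(n + m), M=M, rtol=1e-6, atol=0.0)
print(info, np.linalg.norm(b - K @ x) / np.linalg.norm(b))   # 0 and < 1e-6 on success
```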

Example 1. We consider the two-dimensional static Maxwell equations (1.1) in an L-shaped domain ([-1,1]×[-1,1] \ [-1,0]×[0,1]). For simplicity, we take a finite element subdivision as in Figure 1. Information on the sparsity of the relevant matrices is given in Table 1. The test problem is set up so that the right-hand side function is equal to 1 throughout the domain.


Here we mainly test four preconditioners: R_{-1}, H_1, T_1 and T. Following Remark 2.2, and to ensure, on condition-number grounds, that the norm of the augmenting term is not too small in comparison with A [35], we set W^{-1} = γI. One can see [35] for details.

It is well known that the eigenvalue distribution of the preconditioned matrix gives important insight into the convergence behavior of preconditioned Krylov subspace methods. For simplicity, we investigate the eigenvalue distributions of the preconditioned matrices R_{-1}^{-1}𝒜 and H_1^{-1}𝒜. Figure 2 plots the eigenvalues of the two preconditioned matrices for a 16×16 grid, where the left plot corresponds to R_{-1}^{-1}𝒜 and the right plot to H_1^{-1}𝒜. It is easy to see from Figure 2 that the clustering of the eigenvalues of H_1^{-1}𝒜 is stronger than that of R_{-1}^{-1}𝒜.


To investigate the performance of the above four preconditioners, the Krylov subspace methods BiCGStab and GMRES(ℓ) are adopted in our numerical experiments. As is known, there is no general rule for choosing the restart parameter ℓ (ℓ ≪ n + m); this is mostly a matter of experience. To illustrate the efficiency of our methods, we take ℓ = 20. In Tables 2 and 3, we present some results to illustrate the convergence behavior of BiCGStab and GMRES(20) preconditioned by R_{-1}, H_1, T_1 and T, respectively; here the index i of T is equal to 1. Figures 3 and 4 correspond to Tables 2 and 3 and show the iteration numbers and relative residuals of preconditioned BiCGStab and GMRES(20) applied to the saddle point linear system (1.2); the left plots in Figures 3-4 correspond to BiCGStab and the right plots to GMRES(20). The purpose of these experiments is to investigate the influence of the eigenvalue distribution on the convergence behavior of the BiCGStab and GMRES(20) iterations. "IT" denotes the number of iterations, and "CPU(s)" denotes the time (in seconds) required to solve a problem.



From Tables 2-3, it is not difficult to see that the exact preconditioners R_{-1}, H_1, T_1 and T are comparable in CPU time, and that their iteration numbers are insensitive to changes in the mesh size when BiCGStab and GMRES(20) are used to solve the saddle point linear system (1.2). Although all four exact preconditioners are quite competitive in terms of convergence rate, robustness and efficiency, the preconditioner H_1 outperforms R_{-1}, T_1 and T in both iteration number and CPU time; compared with R_{-1}, T_1 and T, the preconditioner H_1 may therefore be the 'best' choice. Comparing the performance of BiCGStab to that of GMRES(20) is not within our stated goals, but having results from more than one Krylov solver allows us to confirm the consistency of the convergence behavior for most problems.

Example 2. A matrix from the UF Sparse Matrix Collection [39].

The test matrix is GHS_indef/k1_san from the UF Sparse Matrix Collection; it is an ill-conditioned matrix arising from an augmented system that models the underground of the Stráž pod Ralskem mine by mixed finite elements (MFE). The characteristics of the test matrix are listed in Table 4. The numerical results obtained by using the BiCGStab and GMRES(20) methods preconditioned by the above four preconditioners to solve the corresponding saddle point linear systems are given in Table 5. Figure 5 is in accord with Table 5, where the left plot corresponds to BiCGStab and the right plot to GMRES(20).


From Table 5, it is easy to see that the preconditioners R_{-1}, H_1, T_1 and T are really efficient when the BiCGStab and GMRES(20) methods are used to solve the saddle point systems with the coefficient matrix GHS_indef/k1_san. It is not difficult to find that the preconditioner H_1 is superior to the preconditioners R_{-1}, T_1 and T in iteration number and CPU time under certain conditions. That is, the preconditioner H_1 is quite competitive in terms of convergence rate, robustness and efficiency.

5 Conclusion

In this paper, we have proposed three types of block triangular preconditioners for iteratively solving linear systems arising from the finite element discretization of the Maxwell equations. The preconditioners have the attractive property of improving the eigenvalue clustering of the coefficient matrix. Furthermore, numerical experiments confirm the effectiveness of our preconditioners.

In fact, the methodology of Section 2 can be extended to the nonsymmetric case, that is, to saddle point systems in which the (1,2) block and the (2,1) block are not the transposes of each other.

Acknowledgments. The authors would like to express their great gratitude to the referees and J.M. Martínez for comments and constructive suggestions, which were very valuable for improving the quality of our manuscript.

Received: 14/VIII/10.

Accepted: 01/VII/11.

#CAM-245/10.

  • [1] Z. Chen, Q. Du and J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients. SIAM J. Numer. Anal., 37 (1999), 1542-1570.
  • [2] J.P. Ciarlet and J. Zou, Fully discrete finite element approaches for time-dependent Maxwell's equations. Numer. Math., 82 (1999), 193-219.
  • [3] J. Gopalakrishnan, J.E. Pasciak and L.F. Demkowicz, Analysis of a multigrid algorithm for time harmonic Maxwell equations. SIAM J. Numer. Anal., 42 (2004), 90-108.
  • [4] P. Monk, Finite Element Methods for Maxwell's Equations. Oxford University Press, New York (2003).
  • [5] J. Gopalakrishnan and J. Pasciak, Overlapping Schwarz preconditioners for indefinite time harmonic Maxwell's equations. Math. Comp., 72 (2003), 1-16.
  • [6] P. Monk, Analysis of a finite element method for Maxwell's equations. SIAM J. Numer. Anal., 29 (1992), 32-56.
  • [7] C. Greif and D. Schötzau, Preconditioners for the discretized time-harmonic Maxwell equations in mixed form. Numer. Linear Algebra Appl., 14 (2007), 281-297.
  • [8] C. Greif and D. Schötzau, Preconditioners for saddle point linear systems with highly singular (1,1) blocks. ETNA, 22 (2006), 114-121.
  • [9] A. Toselli, Overlapping Schwarz methods for Maxwell's equations in three dimensions. Numer. Math., 86 (2000), 733-752.
  • [10] Q. Hu and J. Zou, Substructuring preconditioners for saddle-point problems arising from Maxwell's equations in three dimensions. Math. Comp., 73 (2004), 35-61.
  • [11] J.C. Nédélec, Mixed finite elements in ℝ³. Numer. Math., 35 (1980), 315-341.
  • [12] M. Benzi, G.H. Golub and J. Liesen, Numerical solution of saddle point problems. Acta Numerica, 14 (2005), 1-137.
  • [13] Y. Saad, Iterative Methods for Sparse Linear Systems. Second edition, SIAM, Philadelphia, PA (2003).
  • [14] A. Greenbaum, Iterative Methods for Solving Linear Systems. Frontiers in Appl. Math., 17, SIAM, Philadelphia (1997).
  • [15] M. Benzi and J. Liu, Block preconditioning for saddle point systems with indefinite (1,1) block. Inter. J. Comput. Math., 84 (2007), 1117-1129.
  • [16] M. Benzi and M.A. Olshanskii, An augmented Lagrangian-based approach to the Oseen problem. SIAM J. Sci. Comput., 28 (2006), 2095-2113.
  • [17] T. Rees and C. Greif, A preconditioner for linear systems arising from interior point optimization methods. SIAM J. Sci. Comput., 29 (2007), 1992-2007.
  • [18] G.H. Golub, C. Greif and J.M. Varah, An algebraic analysis of a block diagonal preconditioner for saddle point systems. SIAM J. Matrix Anal. Appl., 27 (2006), 779-792.
  • [19] J.W. Demmel, Applied Numerical Linear Algebra. SIAM, Philadelphia (1997).
  • [20] S. Cafieri, M. D'Apuzzo, V. De Simone and D. Di Serafino, On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems. Comput. Optim. Appl., 38 (2007), 27-45.
  • [21] H.C. Elman, Preconditioning for the steady-state Navier-Stokes equations with low viscosity. SIAM J. Sci. Comput., 20 (1999), 1299-1316.
  • [22] T. Rusten and R. Winther, A preconditioned iterative method for saddle point problems. SIAM J. Matrix Anal. Appl., 13 (1992), 887-904.
  • [23] D. Silvester and A.J. Wathen, Fast iterative solution of stabilized Stokes systems, Part II: Using general block preconditioners. SIAM J. Numer. Anal., 31 (1994), 1352-1367.
  • [24] A.J. Wathen, B. Fischer and D. Silvester, The convergence rate of the minimal residual method for the Stokes problem. Numer. Math., 71 (1995), 121-134.
  • [25] A. Klawonn, An optimal preconditioner for a class of saddle point problems with a penalty term. SIAM J. Sci. Comput., 19 (1998), 540-552.
  • [26] V. Simoncini, Block triangular preconditioners for symmetric saddle-point problems. Appl. Numer. Math., 49 (2004), 63-80.
  • [27] A. Klawonn, Block-triangular preconditioners for saddle point problems with a penalty term. SIAM J. Sci. Comput., 19 (1998), 172-184.
  • [28] P. Krzyzanowski, On block preconditioners for nonsymmetric saddle point problems. SIAM J. Sci. Comput., 23 (2001), 157-169.
  • [29] H.S. Dollar and A.J. Wathen, Approximate factorization constraint preconditioners for saddle-point matrices. SIAM J. Sci. Comput., 27 (2006), 1555-1572.
  • [30] H.S. Dollar, Constraint-style preconditioners for regularized saddle point problems. SIAM J. Matrix Anal. Appl., 29 (2007), 672-684.
  • [31] E. De Sturler and J. Liesen, Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems, Part I: Theory. SIAM J. Sci. Comput., 26 (2005), 1598-1619.
  • [32] A. Forsgren, P.E. Gill and J.D. Griffin, Iterative solution of augmented systems arising in interior methods. SIAM J. Optim., 18 (2007), 666-690.
  • [33] C. Keller, N.I.M. Gould and A.J. Wathen, Constraint preconditioning for indefinite linear systems. SIAM J. Matrix Anal. Appl., 21 (2000), 1300-1317.
  • [34] V. Simoncini and M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems. SIAM J. Matrix Anal. Appl., 26 (2004), 377-389.
  • [35] G.H. Golub and C. Greif, On solving block-structured indefinite linear systems. SIAM J. Sci. Comput., 24 (2003), 2076-2092.
  • [36] Z.H. Cao, Augmentation block preconditioners for saddle point-type matrices with singular (1,1) blocks. Numer. Linear Algebra Appl., 15 (2008), 515-533.
  • [37] S.L. Wu, T.Z. Huang and C.X. Li, Generalized block triangular preconditioner for symmetric saddle point problems. Computing, 84 (2009), 183-208.
  • [38] I.C.F. Ipsen, A note on preconditioning nonsymmetric matrices. SIAM J. Matrix Anal. Appl., 23 (2001), 1050-1051.
  • [39] UF Sparse Matrix Collection. http://www.cise.ufl.edu/research/sparse/matrices
  • *
    This research was supported by 973 Program (2007CB311002) and NSFC Tianyuan Mathematics Youth Fund (11026040, 11026083).