
PRECONDITIONING ISSUES IN THE NUMERICAL SOLUTION OF NONLINEAR EQUATIONS AND NONLINEAR LEAST SQUARES

Abstract

Second order methods for optimization call for the solution of sequences of linear systems. In this survey we will discuss several issues related to the preconditioning of such sequences. Covered topics include both techniques for building updates of factorized preconditioners and quasi-Newton approaches. Sequences of unsymmetric linear systems arising in Newton-Krylov methods will be considered as well as symmetric positive definite sequences arising in the solution of nonlinear least-squares by Truncated Gauss-Newton methods.

preconditioning; factorization updates; quasi-Newton updates


1 INTRODUCTION

We consider sequences of linear systems of the form

Ak x = bk,  k = 1, 2, ...,  (1)

where Ak ∈ ℝn×n is large, sparse and nonsingular, and bk is the corresponding right-hand side. The matrix Ak and the vector bk change from one system to the next and are not available simultaneously. Newton-type methods for optimization problems require the solution of linear systems of the form (1), where the characteristics of the matrices Ak depend on the class of problems considered and on the specific method used. Here we focus on sequences arising in optimization methods for nonlinear systems and nonlinear least-squares problems, and we assume that a Krylov subspace method is used to solve the linear systems.

It is well-known that a clever solution of the linear algebra phase required in most nonlinear optimization methods is crucial for their practical implementation [20], and it is widely recognized that preconditioning is a critical ingredient for the efficient iterative solution of linear systems. There is a continuum of choices for preconditioning the sequence {Ak}, ranging from freezing the preconditioner computed for the first matrix of the sequence to recomputing a preconditioner from scratch for each matrix. Recomputing the preconditioner for each matrix of the sequence may be too expensive and needlessly accurate. On the other hand, freezing the preconditioner may yield slow convergence or a failure of the Krylov solver. A compromise between these two approaches is given by updating techniques, where efficient preconditioners for the current system are derived cheaply from previous systems of the sequence. This way, the expensive computation of a new preconditioner is avoided and some of the computational effort is shared among the linear systems of the sequence. The expected performance of an updated preconditioner in terms of linear iterations is in between that of the frozen and of the recomputed preconditioners, while an overall saving in terms of computational time is expected. We note that updated preconditioners may deteriorate in practice along the sequence and that, therefore, the definition of suitable strategies for adaptively refreshing the preconditioner when needed represents a decisive issue.

In this semi-survey we focus on updating procedures giving rise to implicit and/or explicit preconditioners. A preconditioner is implicit if its application, within each step of the Krylov method, requires the solution of a linear system. For example, preconditioners based on an incomplete factorization of the matrix Ak are of implicit kind. On the other hand, an explicit preconditioner provides an approximation of the inverse of Ak, and the preconditioning operation reduces to computing matrix-vector products. Approximate inverse preconditioners fall in this class [10,11]. Both quasi-Newton-type updates [12,13,34,35,37] and fully-algebraic procedures like those in [2,8,15,24,25] will be covered. Methods falling in the first class update the preconditioners by matrices of small rank, using quasi-Newton formulae; the obtained preconditioners are of explicit kind. Methods in the second class build implicit and/or explicit preconditioners starting from an incomplete factorization of a specific matrix of the sequence or of its inverse. Then, preconditioners for the matrices of the sequence are obtained by cheap updates of such a factorization. We will also consider situations where the matrix Ak is not available, but matrix-vector products with Ak can be computed or approximated by operators; in other words, matrix-free approaches are also discussed.

Other approaches are possible; we mention preconditioners based on recycled Krylov information [27,33,41] and incremental ILU preconditioners [18]. We do not cover these approaches in this semi-survey.

The paper is organized as follows. In Section 2 we describe classical optimization algorithms for nonlinear systems and nonlinear least-squares problems which give rise to the sequences we are interested in. Section 3 is devoted to the description of nearly matrix-free preconditioning strategies. In Sections 4 and 5 we focus on preconditioners based on updating a given incomplete factorization and on quasi-Newton-type updates, respectively. Finally, Sections 6 and 7 concern the analysis of the practical implementation and of the numerical behaviour of the described strategies.

Notations

Throughout the paper, the subscript k will denote an iteration counter and, for any function h, hk will be shorthand for h(xk). Unless explicitly stated, ║·║ denotes an arbitrary vector norm and the induced matrix norm. The entries of a matrix A ∈ ℝn×n are denoted either as aij or (A)ij. Given a nonnegative integer γ, [A]γ ∈ ℝn×n indicates the band matrix obtained by extracting from A the main diagonal and γ upper and lower diagonals. If γ = 0, [A]0 is the diagonal matrix formed from the elements of the diagonal of A. The off-diagonal part A − [A]0 of A is denoted by the symbol off(A). If A is lower (upper) triangular, [A]γ is obtained by extracting from A the main diagonal and γ lower (upper) diagonals; similarly, off(A) is formed by the lower (upper) extra diagonals of A. We borrow the notations used by MATLAB in linear algebra: diag, tril, triu. Given a vector v, the notation (v)i denotes the i-th component of v; when clear from the context, the brackets are dropped. Finally, Pk and Bk denote, respectively, implicit and explicit preconditioners for Ak, that is Pk ≈ Ak, while Bk ≈ Ak−1.

2 APPLICATIONS

In this section we briefly describe optimization methods for nonlinear systems and nonlinear least-squares problems. We focus on the linear algebra phase, highlighting that sequences of linear systems of the form (1) arise and describing the special structure of the matrices involved. We turn our attention to the following two classes of problems:

  • Nonlinear systems:

    Θ(x) = 0,  (2)

    where Θ : ℝn → ℝn is continuously differentiable.

  • Nonlinear least-squares problems:

    min_{x∈ℝn} ½ ║F(x)║²,  (3)

    where F : ℝn → ℝm is continuously differentiable.

Let Θ' and F' be the Jacobian matrices of Θ and F, respectively.

2.1 Sequences arising in Newton-Krylov methods

Newton-Krylov methods [6,17,21] applied to the nonlinear system (2) require the solution of the following sequence of linear systems:

Θ'k s = −Θk,  (4)

where xk is the current iterate. Such linear systems are approximately solved by a Krylov method. More precisely, an Inexact Newton step skI satisfying

║Θk + Θ'k skI║ ≤ ηk ║Θk║  (5)

is computed. In (5), ηk is the so-called forcing term; it is used to control the accuracy in the solution of the system (4) and its choice influences the convergence rate of the Newton-Krylov procedure. These methods are often embedded in suitable globalization strategies [1,5,26].

Then, a sequence of the form (1) has to be solved, with

Ak = Θ'k,  bk = −Θk.

Generally, the Jacobians Θ'k are sparse unsymmetric matrices, but there are also a number of applications, e.g. unconstrained optimization or the discretization of nonlinear PDEs with Picard linearization, where the Θ'k are symmetric.

By continuity, the matrix Θ'k varies slowly from one step k to the next whenever the iterates xk and xk+1 are sufficiently close; e.g., this is the case when the iterates are close enough to a solution. The action of the Jacobian on a vector v required by the Krylov solver is usually provided by an operator or is approximated by finite differences, i.e.

Θ'k v ≈ (Θ(xk + єv) − Θ(xk)) / є,  (6)

where є is a properly chosen positive constant. This is computed at the cost of one Θ-evaluation. We refer to [16] for details on the approximation of the Jacobian-vector product by finite differences and on the convergence analysis of variants of Newton-Krylov methods that employ such an approximation.
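As an illustration, the following Python sketch realizes formula (6); the function theta and the heuristic choice of є are assumptions made for the example, not prescriptions from the survey.

```python
import numpy as np

def jacvec_fd(theta, x, v, theta_x=None, eps=None):
    """Approximate the Jacobian-vector product Θ'(x) v by the forward
    finite difference (6); one extra Θ-evaluation when Θ(x) is cached."""
    if theta_x is None:
        theta_x = theta(x)
    if eps is None:
        # a common heuristic; the survey leaves the choice of є open
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x)) \
              / max(np.linalg.norm(v), 1e-16)
    return (theta(x + eps * v) - theta_x) / eps
```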

2.2 Truncated Gauss-Newton methods for nonlinear least-squares

Consider now the solution of the nonlinear least-squares problem (3) by a Truncated Gauss-Newton method [28,29,42]. At each iteration k, a search direction skI is computed by approximately minimizing the quadratic model

mk(s) = ½ ║Fk + F'k s║²

over ℝn, and then a step updating the current iterate xk is obtained by imposing a suitable sufficient decrease condition [28,42].

For the computation of the step skI, a CG-like method [30,40] is applied to the normal equations

F'kT F'k s = −F'kT Fk,

until a prescribed reduction of the value of ∇mk is produced, i.e.

║∇mk(skI)║ ≤ ηk ║∇mk(0)║,

where ηk ∈ (0, 1) is the forcing term.

Summarizing, in a Truncated Gauss-Newton framework a sequence of the form (1) is generated with

Ak = F'kT F'k,  bk = −F'kT Fk.

We remark that the matrices Ak are symmetric positive semidefinite for general nonlinear least-squares problems, but that they are positive definite whenever m ≥ n and F'k has full rank. Note that if CG is initialized with the null vector, CG terminates in a finite number of iterations computing the minimum norm step (the Gauss-Newton step), see [30]. We further underline that CG-like methods require the action of both F'kT and F'k. The action of F'k on a vector v can be approximated by finite differences, employing formula (6) with Θ replaced by F and Θ' by F'.
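A minimal sketch of this step computation is given below, assuming SciPy's cg (recent versions expose the relative tolerance as rtol) and user-supplied operators Jv and JTv applying F'k and F'kT; the names are chosen for the example only.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def truncated_gn_step(Jv, JTv, Fx, n, eta=0.1):
    """One Truncated Gauss-Newton step: CG on the normal equations
    (F'^T F') s = -F'^T F, stopped when the relative residual -- the
    gradient of the model m_k -- drops below the forcing term eta."""
    A = LinearOperator((n, n), matvec=lambda v: JTv(Jv(v)))
    b = -JTv(Fx)
    # starting from s = 0 yields the minimum norm step in exact arithmetic
    s, info = cg(A, b, x0=np.zeros(n), rtol=eta)
    return s
```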

3 NEARLY MATRIX-FREE PRECONDITIONING

Krylov methods applied to the linear systems (1) can be implemented in a matrix-free manner, i.e. without requiring the computation of the matrix Ak, provided that an operator performing the product of Ak times a vector is available. Then, in principle, Newton-Krylov methods for nonlinear systems, as well as Truncated Gauss-Newton methods for unconstrained minimization, can be implemented in a matrix-free manner whenever such an operator is available. As already noted, for sequences arising in the Newton-Krylov framework a possible choice is to approximate the action of the Jacobian on a vector via the finite-difference operator.

On the other hand, when an algebraic preconditioner is used, the full matrix Ak is needed and, if it is not available, it has to be computed or approximated via the operator performing the product of Ak times a vector. This task is usually expensive. For example, in the Newton-Krylov context, when the Jacobian is not explicitly available, the most relevant part of the computational time is devoted to the approximation of the Jacobian by finite differences. Therefore, in the optimization community there has been interest in updating strategies that can be implemented in a nearly matrix-free manner, that is, close to a true matrix-free setting. Specifically, nearly matrix-free preconditioning has the following properties: only a few full matrices are formed; for preconditioning most systems of the sequence, matrices that are reduced in complexity with respect to the full Ak's are required; matrix-vector product approximations by finite differences can be used; see [25,32].

Note that a preconditioning strategy that requires only the computation of selected elements of Ak can be considered nearly matrix-free whenever the function that performs the matrix-vector product is separable. More precisely, let F be the function that, evaluated at v ∈ ℝn, provides the product of Ak times v. F is said to be separable if its evaluation can be easily separated into the evaluation of its function components, i.e. computing one component of F costs about an n-th part of the full function evaluation [25]. When F is the finite-difference operator, F is separable whenever the nonlinear function itself is separable.

In the situation where F is separable and only selected entries of the current matrix Ak are required to build the preconditioner, the preconditioning strategy may be considered nearly matrix-free, as the evaluation of selected elements of Ak can be considerably less costly than forming the whole matrix. In fact, note that the entry (Ak)ij is given by the i-th component of the vector Ak ej, where ej is the j-th vector of the canonical basis. Then,

(Ak)ij = (Ak ej)i = (F(ej))i,

and the cost of evaluating a selected entry of Ak corresponds approximately to the n-th part of the cost of performing one matrix-vector product. This implies that evaluating the diagonal of Ak costs about as much as performing one matrix-vector product.
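The following sketch makes this concrete; F_component(v, i) is a hypothetical interface, assumed here to evaluate only the i-th component of F(v) = Ak v at roughly 1/n of the cost of a full product.

```python
import numpy as np

def diag_via_components(F_component, n):
    """Extract diag(A_k) from a separable matrix-vector operator at a
    total cost of about one full matrix-vector product."""
    e = np.zeros(n)
    d = np.empty(n)
    for i in range(n):
        e[i] = 1.0
        d[i] = F_component(e, i)   # (A_k)_{ii} = (A_k e_i)_i = (F(e_i))_i
        e[i] = 0.0
    return d
```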

4 UPDATING OF PRECONDITIONER'S FACTORIZATION

The approaches proposed in [2,8,15,24,25] focus on updating the factorization of a preconditioner for a specific matrix As of the sequence, with the aim of building preconditioners for the subsequent matrices. These processes give rise to factorized preconditioners that are products of matrices easy to invert. Clearly, in all the proposed procedures the updating is obtained at a low computational cost, significantly lower than the cost of computing a new preconditioner from scratch. In what follows, we will refer to As as the seed matrix, and to Ps and Bs as the seed preconditioner in implicit and explicit form, respectively. Furthermore, it is assumed that the matrix As is available and that an incomplete factorization of As or of As−1 can be carried out without breakdowns.

The preconditioner updates in [2,8,15,24,25] are inspired by the attempt to cheaply approximate the ideal update of an existing preconditioner [24]. Let

Ps = LDU  (9)

be an incomplete LU factorization of As, where D is diagonal and L and U are lower and upper triangular matrices, respectively, with unit main diagonal [44].

Then, As ≈ LDU and, using the equality Ak = As + (Ak − As), we get

Ak ≈ LDU + (Ak − As) = L(D + L−1(Ak − As)U−1)U.  (10)

Therefore, the matrix

PkI = L(D + L−1(Ak − As)U−1)U  (11)

represents the ideal updated preconditioner, in the sense that, for any matrix norm ║·║, its accuracy ║Ak − PkI║ for Ak is equal to the accuracy ║As − Ps║ of the seed preconditioner [24].

It is evident that the ideal update is not suitable for practical use. In general, the matrix L−1(Ak − As)U−1 is expensive to build; moreover, PkI is dense and its factorization is impractical. Both structured and unstructured updates have been proposed in which approximations of the factors of PkI are employed; the term structured refers to the fact that the approximations used have a special structure. Unstructured updates based on Gauss-Jordan transformations have been proposed in [24]. Here, we will deal with updating strategies which use structured approximations of the factors of PkI and differ in the way the term L−1(Ak − As)U−1 is approximated. These strategies are nearly matrix-free whenever the function F performing matrix-vector products is separable.

In particular, the inverses of the matrices L and/or U are approximated by simpler matrices obtained by sparsifying them, dropping some of their elements or retaining only their main diagonal and/or a few upper and lower diagonals. This is motivated by the existence of large classes of matrices for which L−1 and U−1 contain many entries that are small in magnitude. Classes of matrices with decaying inverses, i.e. with entries of the inverse that tend to zero away from the main diagonal, were detected and analyzed in several papers: banded symmetric positive definite and indefinite matrices [22,36]; nonsymmetric tridiagonal and band matrices [38]; matrices of the form ƒ(A) where A is symmetric and banded and ƒ is analytic [9]. Typically, the rate of decay of the entries of A−1 is fast if A is diagonally dominant. As an example, we plot in Figure 1 the sparsity pattern (on the left) and the wireframe mesh (on the right) of the 2D Laplacian matrix and of its inverse, respectively. Note that the 2D Laplacian matrix is banded (block tridiagonal) and diagonally dominant.

Figure 1
The 2D Laplacian matrix A: sparsity pattern of A (on the left) and wireframe mesh of its inverse A−1 (on the right) (n = 2500).

4.1 Updates based on triangular approximation of the current matrix

A first approach to the definition of nearly matrix-free updating procedures is due to Duintjer Tebbens and Tůma [24,25].

The practical updating techniques proposed in [24] are based on the implicit preconditioner Ps given in (9). In order to obtain a factorized preconditioner for Ak, in (11) the matrix Ak − As is replaced by a nonsingular and easily invertible approximation, and either L−1 or U−1 is approximated by the identity matrix. The approximation consists of two steps. First, the ideal update (10) is approximated by

L(DU + (Ak − As))  (12)

or

(LD + (Ak − As))U.  (13)

Note that in the first case L is assumed to be close to the identity matrix and L−1 in (10) is neglected, while in the second case U−1 is discarded, relying on the assumption that U is close to the identity matrix.

Several options have been proposed in [24] to replace DU + (Ak − As) or LD + (Ak − As) by a nonsingular and easily invertible approximation. One option is to approximate Ak − As in (12) or (13) by its upper or lower triangular part, which gives rise to

PkU = L(DU + triu(Ak − As))  (14)

and

PkL = (LD + tril(Ak − As))U,  (15)

respectively. This way the preconditioner is available in factorized form and is obtained employing information on Ak reduced in complexity with respect to the full matrix. The cost of applying this preconditioner is the solution of two triangular systems. Clearly, PkU and PkL are expected to be accurate when the factor L or U, respectively, is close to the identity matrix and triu(Ak − As) or tril(Ak − As) is a useful approximation of Ak − As, that is, when one triangular part of Ak − As contains more relevant information than the other. A typical situation of this kind arises when the matrices come from upwind/downwind discretization schemes [24] and in compressible flow problems [15].
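For illustration, a minimal sketch of building and applying PkL from (15) with SciPy sparse matrices; the seed factors L, D, U (D stored as a sparse diagonal matrix) and the matrices As, Ak are assumed available.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def make_PkL(L, D, U, A_s, A_k):
    """Lower factor of P_k^L = (LD + tril(A_k - A_s)) U, see (15)."""
    lower = (L @ D + sp.tril(A_k - A_s)).tocsr()
    return lower, sp.csr_matrix(U)

def apply_PkL(lower, U, v):
    """Solve P_k^L w = v by two triangular solves."""
    z = spsolve_triangular(lower, v, lower=True)
    return spsolve_triangular(U, z, lower=False, unit_diagonal=True)
```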

In order to use the above triangular updates, the upper or lower triangular part of Ak is needed. Interestingly, the explicit computation of such triangular parts may be avoided, and nearly matrix-free implementations of this approach are possible. Two such approaches have been proposed in [25]. The first gives rise to a partial matrix estimation problem whose solution produces an approximation of the required triangular part; this proposal has been enhanced in [43], where a two-sided bicolouring method is used. The second approach, which we now briefly describe, avoids matrix estimation and assumes separability of the function F performing matrix-vector products. Consider the update (15). The application of PkL requires the solution of the lower triangular system

(E + tril(Ak)) z = v,

where the matrix E = LD − tril(As) is known. Then, the vector z is computed as follows:

zi = (vi − Σj<i (E)ij zj − Σj<i (Ak)ij zj) / ((E)ii + (Ak)ii),  i = 1,..., n.

The term Σj<i (Ak)ij zj required in the above computation is the i-th component of the vector Ak ẑ, with ẑ = (z1, z2,..., zi−1, 0,..., 0)T, and therefore

Σj<i (Ak)ij zj = (Ak ẑ)i = (F(ẑ))i.  (16)

Then, the application of the preconditioner in a matrix-free manner can be performed as follows. First, before solving the linear system, the diagonal of Ak is computed. Then, each application of the preconditioner within the Krylov solver is performed exploiting (16). This calls for the evaluation of n components of the function F and, if F is separable, it is accomplished at the cost of one F-evaluation. Therefore, this updating procedure can be performed in a nearly matrix-free manner, requiring one F-evaluation for computing the diagonal of Ak and one F-evaluation at each iteration of the Krylov solver in order to apply the preconditioner. In the solution of nonlinear systems (4), when F is the finite-difference operator and Θ is separable, this cost is approximately that of one Θ-evaluation for each iteration of the Krylov solver.
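The forward substitution just described can be sketched as follows; for simplicity E is taken as a dense array, F(z) returns Ak z, and diagAk holds the precomputed diagonal (all assumptions of the example).

```python
import numpy as np
from scipy.sparse.linalg import spsolve_triangular

def apply_PkL_matrix_free(E, diagAk, U, F, v):
    """Apply P_k^L without forming tril(A_k), using (16). If F is separable,
    only the i-th component of each product F(zhat) is actually required."""
    n = len(v)
    z = np.zeros(n)
    for i in range(n):
        zhat = np.concatenate([z[:i], np.zeros(n - i)])  # (z_1,..,z_{i-1},0,..,0)
        s = E[i, :i] @ z[:i] + F(zhat)[i]   # sum_{j<i} ((E)_{ij} + (A_k)_{ij}) z_j
        z[i] = (v[i] - s) / (E[i, i] + diagAk[i])
    return spsolve_triangular(U, z, lower=False, unit_diagonal=True)
```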

4.2 Updates based on banded approximation of the current matrix

In this subsection we review the updating strategies proposed in [2,8], which differ from those discussed in the previous subsection. In fact, the strategies in [24,25] are based on the observation that PkI requires the knowledge of L−1 and U−1, and they avoid the computation of such inverses by approximating one of them with the identity matrix. The idea in [2,8], instead, is to retain more information on the inverses of L and U and to exploit information from both the lower and upper triangular parts of the current matrix, instead of discarding either the lower or the upper part as in the approach previously described.

The techniques proposed in [8] attempt to approximate PkI by extracting banded parts of the matrices appearing in it. In particular, the matrix Ak − As in PkI is replaced by the band matrix [Ak − As]δ. Let us consider, first, the case δ > 0. In this case, band approximate inverses of L and U are built in order to replace L−1 and U−1 in (10). An algorithm for computing exactly the matrices [L−1]γ and [U−1]γ without the complete inversion of L and U has been proposed in [8], and we refer to [8] for its description. Here, we underline that each row of [U−1]γ (column of [L−1]γ) can be computed independently from the others, so the computation can be carried out in parallel. Once [L−1]γ and [U−1]γ have been calculated, the seed preconditioner LDU is updated as follows:

PkB = L(D + [[L−1]γ [Ak − As]δ [U−1]γ]β)U,  (17)

giving rise to an implicit preconditioner for Ak. In case the value δ = 0 is used, i.e. only the diagonal of the difference Ak − As is retained, the ideal preconditioner (10) simplifies to LDU + Σk, where

Σk = [Ak − As]0,

and the updating procedure is carried out without involving L−1 and U−1.
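For illustration, the defining computation of [L−1]γ (not the efficient algorithm of [8]) can be sketched column by column, each column being an independent triangular solve.

```python
import numpy as np
from scipy.sparse.linalg import spsolve_triangular

def banded_inverse_lower(L, gamma):
    """Compute [L^{-1}]_gamma: column j of L^{-1} solves L x = e_j, and only
    the main diagonal plus gamma subdiagonals are kept. The columns are
    mutually independent, so this loop could run in parallel."""
    n = L.shape[0]
    B = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        x = spsolve_triangular(L, e, lower=True, unit_diagonal=True)
        hi = min(j + gamma + 1, n)
        B[j:hi, j] = x[j:hi]
    return B
```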

In this situation, in order to obtain a factorized preconditioner and make LDU + Σk of practical use, the following strategy is proposed in [8] to compute a cheap approximate factorization of the form

PkD = Lk Dk Uk ≈ LDU + Σk.  (18)

Given the seed preconditioner Ps = LDU, the factors Lk, Dk, Uk in (18) are obtained by updating the seed factorization through a diagonal scaling matrix Sk; we refer to [8] for the explicit formulas.

The previous definition is motivated by the desire to have preconditioners that can be applied at the same cost as the seed preconditioner and that mimic, in some sense, the behavior of the corresponding matrices LDU + Σk. In fact, by the form of the diagonal scaling matrix Sk, the off-diagonal parts of Lk and Uk decrease in absolute value as the entries of Σk increase, i.e. when the diagonal of LDU + Σk tends to dominate the remaining entries. On the other hand, when the entries of Σk are small, then LDU + Σk is close to LDU, Sk is close to the identity matrix, and the factors Lk and Uk are close to L and U, respectively. Moreover, the sparsity pattern of the factors of Ps is preserved and the cost of forming PkD is low, since the computation of Dk is negligible while the computation of Lk and Uk consists in scaling the nonzero entries of L and U. Finally, by construction, (Sk)ii ∈ (0, 1], i = 1,..., n, and this ensures that the conditioning of the matrices Lk and Uk is at least as good as the conditioning of L and U, respectively [4].

We underline that the above described updating strategy represents a generalization to the unsymmetric case of the approaches proposed in [3,4], where updating techniques for diagonally modified symmetric positive definite systems have been developed.

A further strategy is presented in [2], where sparse approximations of the matrices L−1 and U−1 are employed. Such approximations are built by computing an incomplete factorization of As−1 of the form

As−1 ≈ W D−1 ZT,  (22)

where D is a diagonal matrix, and W and Z are sparse unit upper triangular matrices. Note that W and ZT are sparse approximations to U−1 and L−1, respectively, and

Bs = W D−1 ZT  (23)

is an explicit preconditioner for As. A possibility to construct this sparse approximate inverse preconditioner is to use the incomplete generalized Gram-Schmidt orthogonalization process with respect to the bilinear form associated to As given in [10]. Alternatively, one can first compute an ILU factorization of As and then approximately invert L and U [11].

Exploiting factorization (22) and the relation As ≈ Z−T D W−1, the equality Ak = As + (Ak − As) yields

Ak ≈ Z−T D W−1 + (Ak − As) = Z−T (D + ZT(Ak − As)W) W−1.

Then,

W (D + ZT(Ak − As)W)−1 ZT

represents the explicit ideal preconditioner.

Since the mid-term D + ZT(Ak − As)W may be full, the above ideal preconditioner cannot be used in practice. Then, in [2] cheap approximations of D + ZT(Ak − As)W have been considered. In particular, Ak − As is replaced by the band matrix [Ak − As]δ, and ZT[Ak − As]δW by the band matrix [ZT[Ak − As]δW]β, for some nonnegative β and δ. Therefore, the updated explicit preconditioner for Ak is

BkE = W (D + [ZT[Ak − As]δW]β)−1 ZT.  (24)
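A compact sketch of forming and applying (24) with SciPy, assuming W and Z are sparse unit upper triangular, d is the diagonal of D as a vector, and delta and beta are small band parameters; the helper band(M, p) extracts [M]p as in the Notations.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def band(M, p):
    """Extract the band [M]_p: main diagonal plus p upper/lower diagonals."""
    return sp.tril(sp.triu(M, -p), p)

def make_BkE(W, Z, d, A_k, A_s, delta, beta):
    """Middle factor D + [Z^T [A_k - A_s]_delta W]_beta of (24), factorized once."""
    mid = sp.diags(d) + band(Z.T @ band(A_k - A_s, delta) @ W, beta)
    return splu(sp.csc_matrix(mid))

def apply_BkE(W, mid_lu, Z, v):
    """B_k^E v = W (D + [...])^{-1} Z^T v: sparse products plus one banded solve."""
    return W @ mid_lu.solve(Z.T @ v)
```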

Similar ideas have been employed to build updating strategies for preconditioning sequences of linear systems arising in deblurring and denoising image restoration [7].

We further observe that in (24), as well as in (17) when β = 0, the middle factor of PkB and BkE is diagonal and the application of the preconditioner is straightforward; on the other hand, if β > 0, the application of BkE and of PkB requires the solution of one banded linear system. In terms of computational cost, only small values of β and δ are viable and, if β is nonzero, direct methods for banded systems are convenient. Finally, in both approaches the main diagonal and δ lower and upper diagonals of Ak are needed for building the preconditioner. In a matrix-free setting, one possibility is to compute the nonzero entries of the matrix [Ak]δ exploiting the separability of the matrix-vector operator, as described in Section 3. A less expensive approach is based on the observation that in both cases [Ak]δ is needed only to compute the product [Ak]δ R, where R is the sparse triangular matrix W in (24) or the triangular band matrix [U−1]γ in (17). Then, letting Rj be the j-th column of R and

φj = [Ak]δ Rj,  (25)

we can observe that φj has at most δ + j nonzero entries in case R = W, and at most δ + γ nonzero entries in case R = [U−1]γ. These nonzero entries can be computed as follows. Note that

(φj)i = Σl∈L ([Ak]δ)il (Rj)l = Σl∈L (Ak)il (Rj)l,

where L = {l : l ∈ [lm, lM], lm = max{i − δ, 1}, lM = min{i + δ, n}}. Then

(φj)i = (F(R̂))i,

where R̂ is the vector that coincides with Rj on the index set L and is zero elsewhere. Therefore, the computation of each nonzero entry of φj requires the evaluation of one component of the function F.

4.3 Accuracy of the updated preconditioners

Intuitively, the effectiveness of the updated preconditioners described in the previous subsections depends on the accuracy of the seed preconditioner, on the magnitude of the quantities discarded in the approximation of Ak − As, and on the quality of the approximations to L−1 and/or U−1.

The following theorem summarizes the above considerations and the theoretical results given in [2,8,24] regarding the preconditioners PkL, PkU, PkB and BkE; similar considerations on the preconditioner PkD will follow. Concerning the accuracy of PkL and PkU, in the following analysis we limit ourselves to the update PkL in (15); obviously, analogous results apply to PkU in (14). Let us introduce the function

oγ(A) = A − [A]γ

for any matrix A and γ ≥ 0, and let o(Ak − As) denote the part of Ak − As discarded by each update, namely the strictly upper triangular part for PkL and the off-band part oδ(Ak − As) for PkB and BkE. Thus, o(Ak − As) represents the discarded quantity in the approximation of Ak − As.

Theorem 4.1. Let Pk be either PkL or PkB or (BkE)−1. Then,

║Ak − Pk║ ≤ c║As − Ps║ + ║o(Ak − As)║ + ek,  (26)

where c and ek take the following form. If Pk = PkL, then

if Pk = PkB, then c = 1 and

if Pk = (BkE)−1, then

Proof. Note that

Ak − Pk = (As − Ps) + (Ak − As) + (Ps − Pk).  (27)

Let us consider the quantity Ps - Pk for each preconditioner. Focusing on (15) we have

and the thesis trivially follows from (27). Concerning preconditioner (17),

Then, the thesis follows from (27) and the definition of ek.

Finally, let us consider the case Pk = (BkE)−1. Since Ps = Z−T D W−1, from (24) it follows:

and (27) yields the thesis.

The previous theorem highlights that ║Ak − Pk║ depends on the accuracy of the seed preconditioner, on the discarded quantity o(Ak − As) and on the magnitude of ek. Note that, considering the preconditioner PkL, ek is small whenever U is close to the identity matrix. On the other hand, in the approach employing PkB, the magnitude of ek depends on the magnitude of the quantities discarded in the approximation of L−1 and U−1, as well as in the approximation of the matrix [L−1]γ[Ak − As]δ[U−1]γ. Finally, when the preconditioner BkE is considered, the more accurate the approximation [ZT[Ak − As]δW]β to ZT[Ak − As]δW, the smaller ek.

With this result at hand, it is possible to show that the updated preconditioners have the potential to be more effective than reusing Ps (the frozen preconditioner). This is shown in the next lemma, whose proof follows the lines of Lemma 2.1 in [24].

Lemma 4.2. Let Ps, Pk, c and ek be given as in Theorem 4.1. Assume that Ps satisfies

for some positive є. Then

Proof. Condition (28) provides the following bound

This fact along with (26) yields

which completes the proof.

Note that the coefficient of ║Ak - Ps║ in (29) can be smaller than 1 provided that the discarded quantity ║o(Ak - As)║ and ek are small.

Let us now consider the preconditioner Pk = PkD. It is possible to prove that also in this case (26) holds with c = 1 and

ek = ĉ║Σk║,  (30)

with ĉ = 2║off(L)║║off(U)║ + 3(║off(L)║ + ║off(U)║). In fact, we have o(Ak − As) = off(Ak − As) and

By [8, Theorem 3.1] it follows that

║LDU + Σk − Pk║≤ ek,

with ek given in (30). Then, since Ak − As − Σk = off(Ak − As), we get (26).

We conclude this section by observing that if ║·║ denotes the 1-norm or the infinity norm, inequality (26) yields

║Ak − Pk║ ≤ c║As − Ps║ + C║Ak − As║,  (31)

where C is a strictly positive constant. This is due to the fact that, by the properties of the 1-norm and the infinity norm, ║[A]γ║ ≤ ║A║ for any matrix A and nonnegative γ, and ║o(Ak − As)║ ≤ ║Ak − As║. Then, ║Ak − Pk║ is small whenever Ps is an accurate preconditioner for As and Ak is close to As. In other words, the updating procedures are expected to be effective in the case of slowly varying sequences. This is likely to happen in the sequences arising in the last iterations of a Newton-Krylov solver, as observed in Subsection 2.1. On the other hand, this observation suggests that the performance of an updated preconditioner may deteriorate when Ak is not close to As; it is therefore advisable to combine the updating procedures with adaptive techniques for refreshing the preconditioner when needed, i.e. building a new reference matrix As and a new seed preconditioner (see Section 6).

5 QUASI-NEWTON UPDATES

The main idea of the works [12,13,34,35,37] is to build a sequence of preconditioners by imposing the secant condition, as is typical of the implementation of quasi-Newton methods.

For sequences of the form (4) arising in the solution of nonlinear systems by a Newton-like method, we describe the Broyden-type rank-one updates proposed in [12] and their variant for the special case where the Jacobian is SPD [13]. Regarding sequences where the matrices Ak are SPD, such as those arising in the Truncated Gauss-Newton method for nonlinear least-squares, we review a different approach based on L-BFGS updating, described in [37].

5.1 Low rank updates for sequences arising in Newton-Krylov methods

The use of structured quasi-Newton formulae as preconditioners for sequences of the form (4) arising in Newton-Krylov solvers was first proposed in some pioneering papers by Martínez, see e.g. [34,35].

In particular, Martínez introduced a new class of preconditioners obtained by combining a classical incomplete factorization preconditioner for Θ'k with the "structured least-change secant update" (LCSU) framework. In this approach, a preconditioner PkLCSU for the k-th system of the sequence (4) has the form

PkLCSU = Ck + Ek,  (32)

with Ck including partial information on Θ'k and Ek updated using LCSU techniques [19]. In order to use PkLCSU as a preconditioner, its inversion must be inexpensive; one may consider Ck to be an incomplete factorization of Θ'k and Ek a low-rank matrix such that PkLCSU solves a secant equation.

Following this approach, the computation of PkLCSU is performed assuming that a preconditioner of choice Ck is built at every Newton iteration and enriched with the matrix Ek, which is obtained by a low-rank update of Ek−1. The main focus of [34,35] is the convergence analysis of a Newton-Krylov method incorporating the preconditioner explicitly in the definition of the trial point. In other words, a modification of the classical Newton-Krylov method is proposed in which the trial step solving PkLCSU s = −Θk is attempted and the convergence properties of the resulting method are studied, while the accuracy of the preconditioner is not analyzed.

On the other hand, Bergamaschi et al. proposed in [12] quasi-Newton updating techniques that belong to the class (32), focusing on the accuracy of the preconditioner at each nonlinear iteration and on numerical implementation issues.

Given a seed matrix Θ's and either an explicit or an implicit seed preconditioner, the strategy proposed in [12] consists in updating this preconditioner using rank-one updates. Such an update is obtained by imposing a secant equation that takes into account both the Newton step computed at the previous nonlinear iteration and the difference between the last two consecutive function values.

Let Pk be a preconditioner for Θ'k, let sk be the k-th Newton step and let yk = Θk+1 − Θk. Then the new updated preconditioner Pk+1 ≈ Θ'k+1 is obtained by imposing the secant condition

Pk+1 sk = yk.

In the general unsymmetric case, Pk+1 is uniquely determined as the closest matrix to the current Pk satisfying the secant condition, i.e.

Pk+1 = argmin{ ║P − Pk║F : P sk = yk },

where ║·║F is the Frobenius norm. This yields the classical Broyden formula:

Pk+1 = Pk + ((yk − Pk sk) skT) / (skT sk),  (33)

and therefore Pk+1 is obtained by updating the previous preconditioner via Broyden-type rank-one updates.

Note that in this approach the computation of an incomplete factorization of the Jacobian at each Newton-Krylov iteration is not needed. In fact, the preconditioner Pk belongs to the class (32) with the choice Ck = 0 and Ek = Pk, except for the seed iteration k = s, where Cs = Ps and Es = 0.

Using the Sherman-Morrison formula, the inverse Bk+1 of Pk+1 can be obtained as

Bk+1 = Bk + ((sk − Bk yk) skT Bk) / (skT Bk yk).  (34)
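A minimal matrix-free sketch of maintaining and applying these rank-one updates (the recursive application discussed below); the seed operator Bs_matvec and the storage scheme are assumptions of the example.

```python
import numpy as np

class BroydenPreconditioner:
    """Apply B_k from (34) without forming it: store (s_j, u_j) with
    u_j = B_j y_j, so that B_{j+1} v = B_j v + (s_j - u_j)(s_j^T B_j v)/(s_j^T u_j).
    Memory and cost grow with the number of updates, hence the restarts
    discussed in Section 6."""
    def __init__(self, Bs_matvec):
        self.Bs = Bs_matvec
        self.pairs = []

    def apply(self, v):
        w = self.Bs(v)
        for s, u in self.pairs:          # oldest to newest
            w = w + (s - u) * ((s @ w) / (s @ u))
        return w

    def update(self, s_k, y_k):
        u = self.apply(y_k)              # u = B_k y_k with the current operator
        self.pairs.append((s_k, u))
```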

In [12] the accuracy of the preconditioner in terms of the distance

║I − Pk−1 Θ'k║

is analyzed. The derivation of this bound differs from the classical analysis of Broyden methods, as the trial step here is computed by solving the Newton equation with the true Jacobian, while in the Broyden method the trial step solves Pk s = −Θk. The analysis is carried out under the classical hypotheses on Θ for the convergence of Newton or Newton-like approaches, and it is performed assuming that the seed Jacobian is Θ'0. Then, if the initial guess x0 is sufficiently close to the solution x* of (2) and the initial seed preconditioner P0 is sufficiently close both to Θ'0 and to Θ'(x*), then ║I − Pk−1 Θ'k║ can be tuned to any fixed accuracy. In particular, the following result is proved, see [12, Theorem 3.6].

Theorem 5.1. Assume that (2) has a solution x*, that Θ'(x) is Lipschitz continuous and that Θ'(x*) is nonsingular. Let α = ║Θ'(x*)−1║. Then, fixed δ1 ∈ (0, 1/α) and δ2 > 0, there exist δ, δP strictly positive such that if ║x0 − x*║ ≤ δ, ║P0 − Θ'0║ ≤ δP and ║P0 − Θ'(x*)║ ≤ δP, then

In [13] the previous approach is specialized to sequences arising in nonlinear systems where the Jacobians are SPD matrices. In this case, the updating strategy is performed using BFGS rank-two updates, so that the symmetry and positive definiteness of the preconditioners is preserved. Then, given an SPD seed preconditioner Ps, Pk+1 and its inverse Bk+1 are computed as follows:

Pk+1 = Pk + (yk ykT)/(ykT sk) − (Pk sk skT Pk)/(skT Pk sk),

and

Bk+1 = (I − (sk ykT)/(ykT sk)) Bk (I − (yk skT)/(ykT sk)) + (sk skT)/(ykT sk).  (35)

Note that if P0 is SPD, then Pk is SPD under the condition skT yk > 0 [31]. A result on the accuracy of the BFGS preconditioner (35), analogous to Theorem 5.1, was proved in [13, Theorem 3.6] under the further assumption that P0 is SPD.

Interestingly, these ideas have also been used for preconditioning the sequences of linear systems arising in Newton methods for large symmetric eigenproblems [14].

The implementation of the above procedures can be made in a completely matrix-free manner, without computing the preconditioner explicitly. In fact, assume that the seed Jacobian Θ's and the corresponding seed preconditioner Bs are available; then the application of the preconditioner Bk in (34) and (35) can be performed using a recursive formula which requires the computation of one matrix-vector product with Bs and a number of scalar products and daxpy operations that depends on k − s.

Therefore, if Bs is in the form of an approximate inverse preconditioner, its application is straightforward and Bk results in a "pure explicit preconditioner". If, instead, Bs is given as the inverse of an incomplete factorization, then Bk ≈ Θ'k−1 and its application requires the solution of two sparse triangular linear systems.

It should be underlined that quasi-Newton-like preconditioners suffer from two main drawbacks, namely the increasing memory cost for saving yk and sk and the increasing CPU time needed to apply the preconditioner. Moreover, the accuracy result given in Theorem 5.1 holds provided that x0 is sufficiently close to x*. Hence, in practice, restart procedures must be used, especially if the number of nonlinear iterations is high. We postpone the discussion of this topic to Section 6.

As a final comment we note that these techniques are closely related to the Newton-Krylov setting, while the factorization updating approaches described in Section 4 can be applied to sequences arising in a more general context.

5.2 Limited-memory quasi-Newton updates for SPD sequences

In this section we consider the solution of sequences of systems (1) where the matrices Ak are SPD, and we assume that the PCG method is used to approximately solve the linear systems. We describe a class of preconditioners, given in [37], built by exploiting information collected by the CG method applied to the previous system of the sequence. This approach results in a limited-memory BFGS technique that can be applied to a general sequence of SPD linear systems, such as those arising in the solution of nonlinear least-squares. In [37] special attention is devoted to the solution of large-scale unconstrained minimization problems, where the matrices Ak correspond to the Hessian of the objective function evaluated at the current point.

The basic idea is to observe that solving the SPD system Ak s = bk is equivalent to minimizing the convex quadratic function

qk(s) = ½ sT Ak s − bkT s,

whose gradient is given by the residual r(s) = Ak s − bk of the linear system.

Then, if a BFGS formula is computed exploiting the information collected by a minimization process for qk, an SPD approximation of the inverse of the Hessian of qk (that is, of Ak) is obtained. This approximation can be used as an explicit preconditioner for the solution of the subsequent system Ak+1 s = bk+1.

Let {si} and {ri} denote the sequences of iterates and residuals generated by the CG method applied to the solution of the linear system Ak s = bk, and assume that the following m vector-pairs (wi, yi) have been computed and stored:

(wi, yi) = (si+1 − si, ri+1 − ri),  i = 1,..., m.  (36)

Then, the limited memory BFGS approximation to the inverse of the Hessian Ak of qk(s), built using the m vector-pairs (36), is given by [39]

H(i) = ViT H(i−1) Vi + ρi wi wiT,  i = 1,..., m,  (37)

with

ρi = 1/(yiT wi),  Vi = I − ρi yi wiT,

and H(0) = H, where

H = ((wlT yl)/(ylT yl)) I,

and l denotes the last correction pair generated in the previous CG cycle. Then Bk = H(m) is used as an approximate inverse preconditioner for the next system of the sequence. The same scheme can be repeated for the subsequent systems of the sequence, always using the CG information from the most recently solved system. In this strategy the first system of the sequence is solved by the unpreconditioned CG method, while the m vector-pairs (36) are computed and stored.
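For illustration, Bk v = H(m) v can be applied with the standard L-BFGS two-loop recursion [39]; the sketch below assumes pairs holds the m stored pairs (36) as NumPy vectors, oldest first.

```python
import numpy as np

def lbfgs_prec_apply(pairs, v):
    """Apply the limited memory BFGS preconditioner H^(m) of (37) to v."""
    q = v.copy()
    alphas = []
    for w, y in reversed(pairs):            # first loop: newest to oldest
        rho = 1.0 / (y @ w)
        a = rho * (w @ q)
        q -= a * y
        alphas.append((rho, a))
    w_l, y_l = pairs[-1]                    # initial scaling from the last pair
    r = ((w_l @ y_l) / (y_l @ y_l)) * q
    for (w, y), (rho, a) in zip(pairs, reversed(alphas)):  # second loop
        b = rho * (y @ r)
        r += (a - b) * w
    return r                                # r = H^(m) v
```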

The parameter m determines the amount of memory used by the preconditioner and is normally chosen to be much smaller than the number of variables, so that the cost of applying the preconditioner is not too large. There exist different strategies for choosing the m correction pairs to be used at each iteration: one simply considers the last m pairs, another saves vectors that are approximately uniformly distributed throughout the CG run.

The effectiveness of the automatic preconditioner (37) has been extensively investigated in [37], including experiments within a Hessian-free Newton method for solving unconstrained optimization problems and tests of different strategies for choosing the m pairs (36). The results seem to indicate that the limited memory BFGS preconditioner may be useful when the coefficient matrices Ak are not very sparse, or when Ak is not explicitly available and products of Ak times vectors are expensive to compute.

6 REFRESH AND RESTART STRATEGIES

Generally, the updating procedures presented in this survey require the use of restarting techniques to refresh the seed preconditioner.

In particular, the effectiveness of the preconditioners presented in Section 4 depends on the difference between Ak and As (see (31)); when this difference becomes large, the quality of the updated preconditioner Pk can deteriorate and the convergence of the linear solver can slow down. To handle this issue, in the proposals [2, 8] the quality of the preconditioner is monitored: if the linear solver fails to compute an inexact Newton step with the prescribed accuracy, a new reference matrix and a new preconditioner are initialized. In [15] periodic re-factorizations of the seed preconditioner are used. Moreover, it is noted there that in CFD applications some parts of the sequence of linear systems show large entries in the difference matrices, while in other parts the system matrices are very close to each other. In the latter case the updating procedure seems to be unproductive, as the frozen preconditioner is already powerful. For this reason, an adaptive technique to avoid unnecessary updating is adopted in [15].

On the other hand, algorithms based on quasi-Newton formulas typically suffer from the increasing memory cost of storing the work vectors yk and sk. Moreover, if the number of linear systems to be solved is high (e.g. more than ten nonlinear iterations are performed), the application of the preconditioner may become too expensive to be counterbalanced by the reduction in the number of iterations of the iterative linear solver. In [37] the memory cost is implicitly kept low by the limited-memory implementation. The option followed in [12, 13] consists in recomputing the seed preconditioner after a fixed number kmax of iterations, see Section 7. The fine tuning of kmax is not a trivial task and has been addressed in [13]. The numerical experiments performed there suggest that choosing kmax in the interval [1, 5] might be a good compromise between obtaining a sufficiently accurate preconditioner and keeping the overall overhead of the algorithm low. A schematic driver combining these refresh criteria is sketched below.
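The following Python sketch combines the two refresh criteria just discussed in a single driver; all names (krylov_solve, build_seed, cheap_update) are placeholders of ours, not interfaces from the cited papers.

```python
def solve_sequence(matrices, rhs_list, krylov_solve, build_seed, cheap_update, kmax=5):
    # Schematic refresh/restart driver for a sequence of linear systems.
    # krylov_solve(A, b, P) -> (x, converged); build_seed(A) computes a
    # preconditioner from scratch; cheap_update(P_seed, A_seed, A) adapts
    # the seed preconditioner to the current matrix. Illustrative only.
    A_seed = matrices[0]
    P_seed = build_seed(A_seed)
    age = 0
    solutions = []
    for A, b in zip(matrices, rhs_list):
        if age == kmax:                          # periodic refresh, cf. [12, 13, 15]
            A_seed, P_seed, age = A, build_seed(A), 0
        P = cheap_update(P_seed, A_seed, A)      # cheap update of the seed
        x, converged = krylov_solve(A, b, P)
        if not converged:                        # failure-triggered refresh, cf. [2, 8]
            A_seed, P_seed, age = A, build_seed(A), 0
            x, converged = krylov_solve(A, b, P_seed)
        solutions.append(x)
        age += 1
    return solutions
```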

7 NUMERICAL ILLUSTRATION

In this section we give a numerical illustration of the typical behaviour of the updated preconditioners presented in this survey. We focus on sequences arising in Newton-Krylov methods for nonlinear systems of the form (4) and we selected two preconditioning strategies: the strategy based on the updating of the factorized approximate inverse seed preconditioner given in (24) (FINVUP strategy), and the Broyden-type updating given in (34) (BroyUP strategy). The reported results are illustrative of the behaviour of the updating strategies falling in the two classes reviewed in this survey. We compare the behaviour of these updating procedures with that of the frozen preconditioner (Freeze strategy) and of the recomputed preconditioner (Recomp strategy). In the Freeze strategy the preconditioner is computed only for the first reference matrix of the sequence and reused in all the subsequent nonlinear iterations; in the Recomp strategy a preconditioner is recomputed from scratch for each linear system of the sequence.

BiCGSTAB is used as the Krylov solver and the linear inner iterations are stopped when condition (5) is met. We refer the reader to [2, 12] for details on the implementation of the Newton-Krylov method.

Concerning the FINVUP preconditioner, we summarize the numerical experience reported in [2]. The seed Jacobian is the first reference matrix of the sequence, and the approximate sparse inverse preconditioner Bs is constructed by an incomplete LDU factorization of the seed matrix with a drop tolerance, followed by the approximate inversion of the factors L and U; a drop tolerance is also used in the approximate inversion of the triangular factors. A dense toy sketch of this construction is given below.
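The sketch below uses exact factorization and exact triangular inversion followed by sparsification, whereas the actual procedure in [2] employs incomplete factorizations and approximate inversions throughout, so the tolerances and names here are purely illustrative.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

def toy_inverse_factor_preconditioner(A, drop_tol=1e-2):
    # Toy version of the seed construction: factorize A, invert the
    # triangular factors, drop small entries, and return the action of the
    # resulting approximate inverse. Real codes (cf. [2]) use *incomplete*
    # factorizations and *approximate* inversions instead.
    P, L, U = lu(A)                                   # A = P @ L @ U
    n = A.shape[0]
    Zl = solve_triangular(L, np.eye(n), lower=True)   # ~ L^{-1}
    Zu = solve_triangular(U, np.eye(n), lower=False)  # ~ U^{-1}
    for Z in (Zl, Zu):                                # sparsification by dropping
        Z[np.abs(Z) < drop_tol * np.abs(Z).max()] = 0.0
    # Bs v ~ A^{-1} v = U^{-1} (L^{-1} (P^T v))
    return lambda v: Zu @ (Zl @ (P.T @ v))
```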

Four problems widely used in the literature were considered: the Nonlinear Convection-Diffusion problem (NCD), the Flow in a Porous Medium problem (FPM), the Counter-Current Reactor problem (CCR) and the 2D Driven Cavity problem (2DC). Three of them, namely NCD, FPM and 2DC, arise from the discretization of PDE problems. In all four problems, the dimension n of the nonlinear systems was varied, considering values ranging from 6400 to 62500. NCD was solved setting the Reynolds number equal to 250, 500 and 1000, while 2DC was solved with Reynolds numbers 200, 250, 300 and 350. All these settings gave rise to a testing set of 22 problems.

The update of the preconditioner was performed allowing for a diagonal approximation (δ = 0 and β = 0) in (24) when solving the sequences arising in problems FPM and CCR, while a tridiagonal approximation (δ = 1 and β = 1) was used for the sequences arising in problems NCD and 2DC.

To compare the overall computational effort of the three preconditioning strategies (Freeze, Recomp, FINVUP), the performance profiles proposed by Dolan and Moré [23] were used. For each problem P in the testing set and each algorithm A, let LIP,A denote the total number of linear iterations required to solve problem P using algorithm A, and let LIP be the number of linear iterations required by the best algorithm on problem P, i.e. the algorithm which uses the fewest linear iterations. Then, the linear iterations performance profile of algorithm A is defined as

πA(τ) = ( number of problems P such that LIP,A ≤ τ LIP ) / ( number of problems in the testing set ),   τ ≥ 1.
Analogously, we define the CPU time performance profile φA(τ), τ ≥ 1, measuring the efficiency of algorithm A in terms of the employed CPU time. A sketch of the computation of these profiles is given below.
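For concreteness, a minimal Python sketch computing such profiles follows; the function name and the data layout are our own assumptions.

```python
import numpy as np

def performance_profile(costs, taus):
    # Dolan-More performance profiles [23]. costs[p, a] is the cost
    # (linear iterations or CPU time) of algorithm a on problem p;
    # np.inf marks a failure. Returns profile[a, t] = fraction of
    # problems solved by algorithm a within a factor taus[t] of the best.
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)      # best cost on each problem
    ratios = costs / best                        # performance ratios >= 1
    return np.array([[np.mean(ratios[:, a] <= t) for t in taus]
                     for a in range(costs.shape[1])])
```

For Figure 2, costs would collect the BiCGSTAB iteration counts (or CPU times) of the Freeze, Recomp and FINVUP strategies on the 22 tests.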

Here we use two different quantities to measure the computational effort of each strategy: the number of BiCGSTAB iterations performed and the execution time needed to solve each test (see Fig. 2).

Figure 2
Performance profile in terms of linear iterations (top) and execution time (bottom).

The performance profiles show the typical behaviour of a factorization updating strategy: the Recomp strategy is the most effective in terms of linear iterations, while FINVUP outperforms the Freeze approach. The situation is quite different when execution time is considered. The repeated computation of the factorization of the preconditioner is relatively expensive, and therefore the recomputation strategy is in general less time efficient than updating. In fact, FINVUP is the most efficient in the solution of 88% of the tests, and in 91% of the tests it requires a computational effort within a factor of 3 of the best solver. Note also that FINVUP is the most reliable strategy, as it solved all the tests.

In Figure 3 we focus on the sequence arising in the solution of NCD with n = 22500 and Re = 500. We plot, for each nonlinear iteration, the number of linear iterations performed by BiCGSTAB coupled with the Freeze strategy and with the FINVUP strategy. The figure clearly shows that the performance of the frozen preconditioner deteriorates early in the solution of the sequence and is rather unpredictable, while the updating strategy deteriorates only in the last two iterations. Note that the seed preconditioner was never recomputed.

Figure 3
Problem NCD with n = 22500 and Re = 500: comparison, in terms of linear iterations between the Frozen and the FINVUP strategy.

Next we focus on the Broyden-type update (34) and we show some of the results obtained in [12]. Specifically, we consider the sequence arising in the solution of the classical Bratu problem. In [12] the problem is discretized by 2D Mixed Finite Elements, yielding a sequence of systems with n = 28600. The employed restart strategy is ruled by the choice of the parameter kmax, which represents the maximum number of rank-one corrections allowed. Every kmax nonlinear iterations, the last kmax computed vectors sj, yj, j = k − kmax + 1, ..., k, needed to compute the current preconditioner, are replaced with the last computed sk, yk, and a new seed preconditioner Bs is formed. A sketch of how such rank-one corrections can be applied is given after this paragraph.
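To illustrate how rank-one corrections can be applied without ever forming a matrix, here is a generic Python sketch correcting a seed approximate inverse by Broyden-type updates through the Sherman-Morrison formula; it is written under our own assumptions and is not necessarily the exact formula (34) implemented in [12].

```python
import numpy as np

def broyden_inverse_preconditioner(apply_seed, pairs):
    # Correct a seed approximate inverse by rank-one Broyden-type updates
    # applied through the Sherman-Morrison formula (generic sketch, not
    # necessarily the exact formula (34) of [12]):
    #   H_{j+1} v = H_j v + c_j (s_j . H_j v) / g_j,
    #   with c_j = s_j - H_j y_j and g_j = s_j . (H_j y_j).
    # apply_seed(v) applies Bs^{-1}; pairs = [(s_1, y_1), ..., (s_m, y_m)].
    cs, gs, ss = [], [], []

    def apply_H(v, depth):
        # Seed application followed by the first `depth` corrections.
        w = apply_seed(v)
        for c, g, s in zip(cs[:depth], gs[:depth], ss[:depth]):
            w = w + c * (np.dot(s, w) / g)
        return w

    # Precompute the correction vectors c_j and scalars g_j incrementally.
    for s, y in pairs:
        Hy = apply_H(y, len(cs))
        cs.append(s - Hy)
        gs.append(np.dot(s, Hy))
        ss.append(s)

    return lambda v: apply_H(v, len(cs))
```

Each application then costs one seed application plus O(kmax) inner products and vector updates, which is why keeping kmax small matters.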

In Table 1 we show the value of kmax, the number of nonlinear iterations (NLI), the number of BiCGSTAB iterations (LI) and the total CPU time in seconds. The first columns of the table refer to tests where the ILU(0) factorization is used to compute the seed preconditioner Bs, while the last columns refer to the case where the AINV factorization with drop tolerance 0.1 is used to compute Bs−1. Small values of the allowed number of quasi-Newton corrections kmax are reported and compared with the choice kmax = ∞, which corresponds to not employing the restarting procedure, and with the Freeze and Recomp strategies.

Table 1
Results of BroyUP preconditioner on the Bratu problem.

With both seed preconditioners (ILU(0) or AINV(0.1)) the use of BroyUP produces an acceleration in terms of both number of iterations and CPU time. Moreover, the restarting strategy enhances the performance of the overall algorithm, since the choice kmax = ∞ yields the least efficient implementation. This is not surprising, since the results of Theorem 5.1 only hold when x0 is near the solution. With the restarting strategy, the seed preconditioner is recomputed every kmax iterations, thus taking advantage of the nonlinear convergence. We note that these updating strategies outperform the Recomp strategy also in terms of linear iterations. This seems to be due to the fact that, as opposed to the preconditioner updating factorizations, the preconditioner is periodically recomputed and enriched by rank-one Broyden information on the current Jacobian.

8 CONCLUSION

The numerical solution of the linear algebra phase is crucial for the effectiveness of optimization methods and often dominates the computational time, especially when large-scale problems are considered. Thus, the effectiveness of optimization algorithms is highly dependent on reliable and efficient tools for numerical linear algebra. In this paper we have attempted to highlight some of the developments that have taken place in recent years in preconditioning techniques for sequences of linear systems arising in optimization methods. In particular, we focused on preconditioner updating techniques for sequences arising in the solution of nonlinear systems and nonlinear least-squares problems by Newton-Krylov methods. We described two main classes of updating preconditioners, namely the class based on updating available factorizations and the class based on quasi-Newton formulas. We provided a uniform analysis of their accuracy, showing their advantages and drawbacks. Finally, the implementation of both classes has been discussed in a matrix-free setting and an overview of the practical behaviour of these strategies has been reported.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous referee. This work was supported by National Group of Computing Science (GNCS-INDAM): GNCS 2013 Projects "Metodi numerici e software per l'ottimizzazione su larga scala con applicazioni all'Image Processing" and "Strategie risolutive per sistemi lineari tipo KKT con uso di informazioni strutturali", GNCS 2013-2014 Project "Identificazione automatica dei parametri algoritmici ottimali".

REFERENCES

  • [1]
    BELLAVIA S & BERRONE S. 2007. Globalization strategies for Newton-Krylov methods for stabilized FEM discretization of Navier-Stokes equations. Journal of Computational Physics, 226:2317-2340.
  • [2]
    BELLAVIA S, BERTACCINI D & MORINI B. 2011. Nonsymmetric preconditioner updates in Newton- Krylov methods for nonlinear systems. SIAM J. Sci. Comput., 33:2595-2619.
  • [3]
BELLAVIA S, DE SIMONE V, DI SERAFINO D & MORINI B. 2011. Efficient preconditioner updates for shifted linear systems. SIAM J. Sci. Comput., 33:1785-1809.
  • [4]
    BELLAVIA S, DE SIMONE V, DI SERAFINO D & MORINI B. 2012. A preconditioning framework for sequences of diagonally modified linear systems arising in optimization. SIAM J. Numer. Anal., 50:3280-3302.
  • [5]
    BELLAVIA S & MORINI B. 2001. A globally convergent Newton-GMRES subspace method for systems of nonlinear equations. SIAM J. Sci. Comput., 23:940-960.
  • [6]
BELLAVIA S & MORINI B. 2006. Subspace trust-region methods for large bound-constrained nonlinear equations. SIAM Journal on Numerical Analysis, 44:1535-1555.
  • [7]
    BERTACCINI D & SGALLARI F. 2003. Updating preconditioners for nonlinear deblurring and denoising image restoration. Applied Numerical Mathematics, 60:994-1006.
  • [8]
    BELLAVIA S, MORINI B & PORCELLI M. 2014. New updates of incomplete LU factorizations and applications to large nonlinear systems. Optimization Methods and Software, 29(2):321-340.
  • [9]
    BENZI M & GOLUB GH. 1999. Bounds of the entries of matrix functions with applications to preconditioning. BIT, 39:417-438.
  • [10]
    BENZI M & TŮMA M. 1998. A sparse approximate inverse preconditioner for nonsymmetric linear systems. SIAM J. Sci. Comput., 19:968-994.
  • [11]
BENZI M & TŮMA M. 1999. A Comparative Study of Sparse Approximate Inverse Preconditioners. Appl. Numer. Math., 30:305-340.
  • [12]
    BERGAMASCHI L, BRU R, MARTÍNEZ A & PUTTI M. 2006. Quasi-Newton preconditioners for the inexact Newton method. Electronic Trans. Num. Anal., 23:76-87.
  • [13]
    BERGAMASCHI L, BRU R & MARTÍNEZ A. 2011. Low-rank update of preconditioners for the inexact Newton method with SPD Jacobian. Mathematical and Computer Modelling, 54:1863-1873.
  • [14]
BERGAMASCHI L & MARTÍNEZ A. 2013. Parallel RFSAI-BFGS preconditioners for large symmetric eigenproblems. Journal of Applied Mathematics, 2013 (10 pages).
  • [15]
BIRKEN P, DUINTJER TEBBENS J, MEISTER A & TŮMA M. 2008. Preconditioner updates applied to CFD model problems. Appl. Numer. Math., 58:1628-1641.
  • [16]
    BROWN PN. 1987. A local convergence theory for combined inexact-Newton/finite-difference projection methods. SIAM J. Numer. Anal., 24:407-434.
  • [17]
BROWN PN & SAAD Y. 1994. Convergence theory of nonlinear Newton-Krylov algorithms. SIAM J. Optim., 4:297-330.
  • [18]
CALGARO C, CHEHAB JP & SAAD Y. 2010. Incremental incomplete ILU factorizations with applications. Numerical Linear Algebra with Applications, 17:811-837.
  • [19]
DENNIS JE & SCHNABEL RB. 1983. Unconstrained Optimization and Nonlinear Equations. SIAM.
  • [20]
    D'APUZZO M, DE SIMONE V & DI SERAFINO D. 2010. On mutual impact of numerical linear algebra and large-scale optimization with focus on interior point methods. Computational Optimization and Applications, 45:283-310.
  • [21]
    DEMBO RS, EISENSTAT SC & STEIHAUG T. 1982. Inexact Newton methods. SIAM Journal on Numerical Analysis, 19:400-408.
  • [22]
    DEMKO S, MOSS WF & SMITH PW. 1984. Decay rates for inverses of band matrices. Mathematics of Computation, 43:491-499.
  • [23]
    DOLAN ED & MORÉ JJ. 2002. Benchmarking optimization software with performance profiles. Mathematical Programming, 91:201-213.
  • [24]
    DUINTJER TEBBENS J & TŮMA M. 2007. Efficient Preconditioning of Sequences of Nonsymmetric Linear Systems. SIAM J. Sci. Comput., 29:1918-1941.
  • [25]
DUINTJER TEBBENS J & TŮMA M. 2010. Preconditioner Updates for Solving Sequences of Linear Systems in Matrix-free Environment. Numer. Linear Algebra Appl., 17:997-1019.
  • [26]
EISENSTAT SC & WALKER HF. 1994. Globally convergent inexact Newton methods. SIAM J. Optim., 4:393-422.
  • [27]
FASANO G & ROMA M. 2013. Preconditioning Newton-Krylov Methods in Non-Convex Large Scale Optimization. Computational Optimization and Applications, 56:253-290.
  • [28]
FASANO G, LAMPARIELLO F & SCIANDRONE M. 2006. A Truncated Nonmonotone Gauss-Newton Method for Large-Scale Nonlinear Least-Squares Problems. Computational Optimization and Applications, 34:343-358.
  • [29]
GRATTON S, LAWLESS AS & NICHOLS NK. 2007. Approximate Gauss-Newton Methods for Nonlinear Least Squares Problems. SIAM J. Optim., 18:106-132.
  • [30]
    HESTENES MR. 1975. Pseudoinverses and conjugate gradients. Communications of the ACM, 18:40-43.
  • [31]
KELLEY CT. 1995. Iterative Methods for Optimization. SIAM.
  • [32]
    KNOLL DA & KEYES DE. 2004. Jacobian-free Newton-Krylov methods, a survey of approaches and applications. J. Comput. Phys., 193:357-397.
  • [33]
    LOGHIN D, RUIZ D & TOUHAMI A. 2006. Adaptive preconditioners for nonlinear systems of equations. Journal of Computational and Applied Mathematics, 189:362-374.
  • [34]
MARTÍNEZ JM. 1993. A theory of secant preconditioners. Math. Comp., 60:681-698.
  • [35]
MARTÍNEZ JM. 1995. An extension of the theory of secant preconditioners, in: Linear/Nonlinear Iterative Methods and Verification of Solution, Matsuyama, 1993. J. Comput. Appl. Math., 60:115-125.
  • [36]
    MEURANT G. 1992. A review on the inverse of symmetric tridiagonal and block tridiagonal matrices. SIAM J. Matrix. Anal. Appl., 13:707-728.
  • [37]
    MORALES JL & NOCEDAL J. 2000. Automatic preconditioning by limited memory quasi-Newton updating. SIAM J. Optim. 10:1079-1096.
  • [38]
    NABBEN R. 1999. Decay rates of the inverse of nonsymmetric tridiagonal and band matrices. SIAM J. Matrix Anal. Appl., 20:820-837.
  • [39]
NOCEDAL J & WRIGHT SJ. 1999. Numerical Optimization. Springer Ser. Oper. Res., Springer-Verlag, New York.
  • [40]
    PAIGE CC & SAUNDERS MA. 1982. LSQR: An algorithm for sparse linear equations and sparse least squares. TOMS, 8:43-71.
  • [41]
PARKS ML, DE STURLER E, MACKEY G, JOHNSON DD & MAITI S. 2006. Recycling Krylov subspaces for sequences of linear systems. SIAM J. Sci. Comput., 28:1651-1674.
  • [42]
    PORCELLI M. 2013. On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds. Optimization Letters, 7(3):447-465.
  • [43]
XU W & COLEMAN TF. 2014. Solving Nonlinear Equations with the Newton-Krylov Method Based on Automatic Differentiation. Optimization Methods and Software, 29(1):88-101.
  • [44]
SAAD Y. 2003. Iterative Methods for Sparse Linear Systems, 2nd edition. SIAM.

Publication Dates

  • Publication in this collection
    Sep-Dec 2014

History

  • Received
    10 Oct 2013
  • Accepted
    04 Jan 2014