Brazilian Journal of Physics

Print version ISSN 0103-9733

Braz. J. Phys. vol.33 no.1 São Paulo Mar. 2003

http://dx.doi.org/10.1590/S0103-97332003000100003 

Introduction to quantum Monte Carlo simulations for fermionic systems

 

 

Raimundo R. dos Santos

Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, 21945-970, Rio de Janeiro, RJ, Brazil


 

 


ABSTRACT

We present a tutorial review of the determinantal Quantum Monte Carlo method for fermionic systems, using the Hubbard model as a case study. Starting with the basic ingredients of Monte Carlo simulations for classical systems, we introduce aspects such as importance sampling, sources of errors, and finite-size scaling analyses. We then set up the preliminary steps to prepare for the simulations, showing that they are actually carried out by sampling discrete Hubbard-Stratonovich auxiliary fields. In this process the Green's function emerges as a fundamental tool, since it is used in the updating process and, at the same time, is directly related to the quantities probing magnetic, charge, metallic, and superconducting behaviours. We also discuss the as yet unresolved 'minus-sign problem', and two ways to stabilize the algorithm at low temperatures.


 

 

1  Introduction

In dealing with systems of many interacting fermions, one is generally interested in their collective properties, which are suitably described within the framework of Statistical Mechanics. Unlike in insulating magnets, where the spin degrees of freedom are singled out, here the interplay between charge and spin is responsible for a wealth of interesting phenomena (orbital degrees of freedom may also be included, but they add enormously to the complexity, and shall not be considered here). Typical questions one asks about a system are related to its magnetic state (Is it magnetic? If so, what is the arrangement?), to its charge distribution, and to whether it is insulating, metallic, or superconducting.

A deeper understanding of the interplay between spin and charge degrees of freedom can be achieved through models, which, while capturing the basic physical mechanisms responsible for the observed behaviour, should be simple enough to allow calculations of quantities comparable with experiments. The simplest model describing interacting fermions on a lattice is the single band Hubbard model [1], defined by the grand-canonical Hamiltonian

where t is the hopping integral (which sets the energy scale, so we take t = 1 throughout this paper), U is the on-site Coulomb repulsion, μ is the chemical potential controlling the fermion density, and i runs over the sites of a d-dimensional lattice; for the time being, we consider hopping between nearest neighbours only, as denoted by ⟨i,j⟩. The operators c†_{iσ} and c_{iσ} respectively create and annihilate a fermion with spin σ on the (single) orbital centred at i, while n_{iσ} ≡ c†_{iσ}c_{iσ}. The Hubbard model describes the competition between the opposing tendencies of itinerancy (driven by the hopping term) and localization (driven by the on-site repulsion). For a half-filled band (one fermion per site), it can be shown [2] that in the limit of strong repulsion the Hubbard Hamiltonian reduces to that of an isotropic antiferromagnetic Heisenberg model with an exchange coupling J = 4t²/U.

If the coupling U is allowed to assume negative values, one has the attractive Hubbard model. Physically, this local attraction can have its origin in the coupling of fermions to extended phonons (through polaron formation) or to local phonons (such as vibrational modes of chemical complexes) [3]. Given that its weak-coupling limit describes the usual BCS theory, and that real-space pairing is more amenable to numerical calculations, this model has also been useful in elucidating several properties of both conventional and high-temperature superconductors [3].

The Hubbard model [even in its simplest form, Eq. (1)] is only exactly soluble in one dimension, through the Bethe ansatz; correlation functions, however, are not directly available. In higher dimensions one has to resort to approximation schemes, and numerical techniques such as Quantum Monte Carlo (QMC) simulations have proven to be crucial in extracting information about strongly correlated fermions.

Since the first Monte Carlo method for classical systems was devised in the early 1950s [4,5,6], several QMC algorithms have been proposed. Their classification is varied, and depends on which aspect one wishes to single out. For instance, they can be classified according to whether the degrees of freedom lie in the continuum or on a lattice; whether it is a ground-state or a finite-temperature framework; whether it is variational or projective; or even according to some detailed aspect of the implementation, such as whether an auxiliary field is introduced, or whether a Green's function is constructed by the power method. Excellent broad overviews of these algorithms are available in the literature, such as Refs. [7,8], so here we concentrate on the actual details of the grand-canonical formulation with auxiliary fields leading to fermionic determinants [9]. We will pay special attention to the implementations and improvements achieved over the years [10-14]. We will also have in mind primarily the Hubbard model, but will not discuss at length the results obtained; instead, illustrative references will be given to guide the reader to more detailed analyses, and we apologize for the omission of many relevant papers, which was dictated by the need to keep the discussion focused on this particular QMC implementation, and not on the Hubbard (or any other) model.

In line with the tutorial purpose of this review, we introduce in Sec. 2 the basic ingredients of Monte Carlo simulations, illustrated for 'classical' spins. In this way, we have the opportunity to draw the attention of the inexperienced reader to the importance of thorough data analyses, common to both classical and quantum systems, before embarking on the more elaborate quantum formalism. Preliminary manipulations, the approximations involved, and the natural appearance of Green's functions in the framework are then discussed in Sec. 3. In Sec. 4, we describe the updating process, as well as the wide range of average quantities available to probe different physical properties of the systems. We then address two of the main difficulties present in the simple algorithm presented so far: the still unresolved 'minus-sign problem' (Sec. 5), and the instabilities at low temperatures (Sec. 6), for which two successful solutions are discussed. Conclusions and some perspectives are then presented in Sec. 7.

 

2  Monte Carlo simulations for 'classical' spins

Our aim is to calculate quantities such as the partition function (or the grand-partition function) and various averages, including correlation functions; these quantities are obtained by summing over all configurations of the system. For definiteness, let us consider the case of the Ising model,

where J is the exchange coupling and s_i = ±1.

For a lattice with N_s sites there are 2^{N_s} possible configurations in phase space. Clearly, not all of these configurations are equally important: recall that the probability of occurrence of a given configuration C, with energy E(C), is ∝ exp[−E(C)/k_B T]. Therefore, from the numerical point of view, it is not efficient to waste time generating all configurations. Accordingly, the basic Monte Carlo strategy consists of an importance sampling of configurations [5,6]. This, in turn, can be implemented with the aid of the Metropolis algorithm [4], which generates, in succession, a chain of the most likely configurations (plus fluctuations; see below). Starting from a random spin configuration C ≡ |s_1, s_2, …, s_{N_s}⟩, one can imagine a walker visiting every lattice site and attempting to flip the spin on that site. To see how this is done, let us suppose the walker is currently on site i, and call C′ the configuration obtained from C by flipping s_i. These configurations differ in energy by ΔE = E(C′) − E(C) = 2J s_i Σ′_j s_j, where the prime in the sum restricts j to the nearest-neighbour sites of i. The ratio between the corresponding Boltzmann factors is then

Thus, if ΔE < 0, C′ should be accepted as a new configuration in the chain. On the other hand, if ΔE > 0 the new configuration is less likely, but can still be accepted with probability equal to the ratio of Boltzmann factors, r = e^{−ΔE/k_B T}; this possibility simulates the effect of fluctuations. Alternatively [6], one may adopt the heat-bath algorithm, in which the configuration C′ is accepted with probability

In both cases, the local character of the updating of the configuration should be noted; that is, the acceptance of the flip does not influence the state of all other spins. As we will see, this is not true for quantum systems.

The walker then moves on, and attempts to flip the spin on the next site through one of the processes just described. After sweeping through the whole lattice, the walker goes back to the first site, and starts attempting to flip spins on all sites again. One should make sure that the walker sweeps through the lattice many times so that the system thermalizes at the given temperature; this warming-up stage takes typically between a few hundred and a few thousand sweeps, but it can be very slow in some cases.
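The single-spin-flip procedure just described can be sketched in a few lines of code. The following minimal NumPy example (illustrative only; lattice size, temperature, and random seed are our own arbitrary choices) implements one Metropolis sweep over a two-dimensional Ising lattice with periodic boundary conditions:

```python
import numpy as np

def metropolis_sweep(spins, beta, J=1.0, rng=None):
    """One Metropolis sweep: attempt to flip every spin once.

    spins : 2D array of +1/-1 Ising variables (periodic boundaries)
    beta  : inverse temperature 1/(k_B T)
    """
    rng = np.random.default_rng() if rng is None else rng
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            # sum over the four nearest neighbours of site (i, j)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * J * spins[i, j] * nn   # energy cost of the flip
            # accept if dE < 0; otherwise accept with prob exp(-beta*dE)
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
    return spins
```

Deep in the ordered phase (large β) an initially aligned configuration stays essentially frozen, while at high temperature the sweep randomizes it; this is the 'fluctuation' role of the exp(−βΔE) acceptance.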

Once the system has warmed up, we can start 'measuring' average values. Suppose that at the end of the z-th sweep we have stored a value A_z for the quantity A. It would therefore seem natural that after N_a such sweeps, an estimate for the thermodynamic average ⟨A⟩ should be given by

However, the A_z's are not independent random variables, since, by construction, the configurations in the chain are somewhat correlated. One can decrease the correlations between measurements by forming groups of measurements, with group averages Ā_1, Ā_2, …, Ā_G, leading to a final average

Ideally, one should alternate Nn sweeps without taking measurements with Na sweeps in which measurements are taken.

Once the Ā_g's can be considered as independent random variables, the central limit theorem [15] applies, and the statistical errors in the average of A are estimated as

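As a concrete illustration of this grouping procedure, the sketch below (function name and bin count are our own choices) bins a correlated time series into G group averages and applies the central-limit-theorem error estimate to the bin averages:

```python
import numpy as np

def binned_average(samples, n_bins=20):
    """Group a Monte Carlo time series into n_bins bin averages (the
    group averages of the text) and return (mean, error), the error
    being the standard error of the mean computed from the
    (approximately independent) bin averages."""
    samples = np.asarray(samples, dtype=float)
    bin_means = np.array([b.mean() for b in np.array_split(samples, n_bins)])
    mean = bin_means.mean()
    error = bin_means.std(ddof=1) / np.sqrt(n_bins)
    return mean, error
```

In practice one increases the bin size until the error estimate saturates, which signals that the bins have become effectively uncorrelated.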
In addition, systematic errors must be taken into consideration, the most notable of which are finite-size effects. Indeed, one should keep in mind that there are usually two important length scales in these calculations: the linear size, L, and the correlation length of the otherwise infinite system, ξ ∼ |T − T_c|^{−ν}. A generic thermodynamic quantity, X_L(T), then scales as [16]

where x is a critical exponent to be defined below, and f(z) is a scaling function with very specific behaviours in the limits z << 1 and z >> 1. In the former limit it should reflect the fact that the range of correlations is limited by the finite system size, and one must have 

so that an explicit L-dependence appears. On the other hand, when correlations do not detect the finiteness of the system, the scaling function must restore the usual size-independent form,

this defines the critical exponent x. This finite-size scaling (FSS) theory can be used in the analysis of data, thus setting the extrapolation towards the thermodynamic limit on firmer grounds.

As a final remark, we should mention for completeness that the actual implementation of the Monte Carlo method for classical spins can be optimized in several aspects, such as using bit-strings to represent states [17], and broad histograms to collect data [18].

 

3  Determinantal Quantum Monte Carlo: Preliminaries

The above discussion on the Ising model was tremendously simplified due to the fact that the eigenstates of the Hamiltonian are given as products over single particle states. Quantum effects manifest themselves in the fact that different terms in the Hamiltonian do not commute. In the case of the Hubbard model, the interaction and hopping terms do not commute with each other, and, in addition, hopping terms involving the same site also do not commute with each other. Consequently, the particles lose their individuality since they are correlated in a fundamental way.

One way to overcome these non-commuting aspects [9] is to notice that the Hamiltonian contains terms which are bilinear in fermion operators (namely, the hopping and chemical potential terms) and a term with four operators (the interaction term). Terms in the former category can be trivially diagonalized, but not those in the latter. Therefore, when calculating the grand partition function,

where, as usual, Tr stands for a sum over all numbers of particles and over all site occupations, one must cast the quartic term into a bilinear form.

To this end, we first separate the exponentials with the aid of the Suzuki-Trotter decomposition scheme [19], which is based on the fact that

for A and B generic non-commuting operators. Splitting the Hubbard Hamiltonian into its bilinear and quartic terms, we introduce a small parameter Δτ through β = M Δτ, and apply the Suzuki-Trotter formula as

The analogy with the path-integral formulation of Quantum Mechanics suggests that the above procedure amounts to discretizing the imaginary-time interval (0, β) into M slices separated by the interval Δτ.

In actual calculations at a fixed inverse temperature β, we typically fix a small value of Δτ and choose M = β/Δτ. As is evident from Eq. (13), the finiteness of Δτ is also a source of systematic errors; these errors can be reduced by obtaining estimates for successively smaller values of Δτ (thus increasing the number, M, of time slices) and extrapolating the results to Δτ → 0. Given that this complete procedure can be very time consuming, one should at least check that the results are not too sensitive to Δτ by performing calculations for two values of Δτ and comparing the outcomes.
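The Δτ → 0 extrapolation can be illustrated on a toy problem: for two non-commuting symmetric matrices A and B (standing in for the kinetic and interaction parts), the Suzuki-Trotter product (e^{Δτ A} e^{Δτ B})^M deviates from e^{β(A+B)} at fixed β by an amount that shrinks roughly linearly with Δτ. A minimal NumPy sketch (matrix sizes and values are arbitrary):

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return (v * np.exp(w)) @ v.T

def trotter_error(A, B, beta, M):
    """Norm of exp[beta(A+B)] minus its M-slice Suzuki-Trotter
    approximation (e^{dt A} e^{dt B})^M, with dt = beta/M."""
    dt = beta / M
    one_slice = expm_sym(dt * A) @ expm_sym(dt * B)
    approx = np.linalg.matrix_power(one_slice, M)
    return np.linalg.norm(approx - expm_sym(beta * (A + B)))
```

Doubling M (halving Δτ) roughly halves the error, which is the behaviour one exploits when extrapolating results to Δτ → 0.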

Having separated the exponentials, we can now work out the quartic term. We first recall a well-known trick to change the exponential of the square of an operator into an exponential of the operator itself, known as the Hubbard-Stratonovich (HS) transformation,

at the price of introducing an auxiliary degree of freedom (field) x, which couples linearly to the original operator A; this result is immediately obtained by completing the squares in the integrand. However, before a transformation of this kind can be applied to the quartic term of the Hubbard Hamiltonian, squares of operators must appear in the arguments of the exponentials. Since for fermions one has n²_σ = n_σ = 0, 1 (here we omit lattice indices to simplify the notation), the following identities – in terms of the local magnetization, m ≡ n_↑ − n_↓, and of the local charge, n ≡ n_↑ + n_↓ – will suit our purposes:

The following points are worth making about Eqs. (15), in relation to the HS transformation: (i) an auxiliary field will be introduced for each squared operator appearing on the RHS's above; (ii) the auxiliary field will couple to the local magnetization and to the local charge when, respectively, Eqs. (15a) and (15b) are used; (iii) Eqs. (15a) and (15b) are respectively used for the repulsive and for the attractive models.

Instead of the continuous auxiliary field of Eq. (14), in simulations it is more convenient to work with discrete Ising variables, s = ±1 [10]. Inspired by Eq. (14), using Eq. (15a), and taking into account that s² ≡ 1, it is straightforward to see that

where the grouping in the last equality factorizes the contributions from the two fermion spin channels, σ = +, − (respectively corresponding to spin ↑, ↓), and

In the attractive case, the coupling to the charge [Eq. (15b)] is used in order to avoid a complex HS transformation; we then get

with λ still being given by Eq. (17).

The HS transformations thus replace the on-site interaction by a fluctuating field s coupled to the magnetization or to the charge, in the repulsive or attractive cases, respectively. As a consequence, the argument of the exponential depends explicitly on the fermion spin σ in the former case, but not in the latter. As we will see below, this very important difference is responsible for the absence of the 'minus-sign problem' in the attractive case.

We now replace the on-site interaction on every site of the space-time lattice by Eqs. (16) or (18), leading to the sought form in which only bilinear terms appear in the exponential. For the repulsive case we get

where the traces are over Ising fields and over fermion occupancies on every site, and the product from ℓ = M to 1 simply reflects the fact that earlier 'times' appear to the right. The time-slice index ℓ appears through the HS field s_i(ℓ) in

which are the elements of the N_s × N_s diagonal matrix V^σ(ℓ). One also needs the N_s × N_s hopping matrix K, with elements

For instance, in one dimension, and with periodic boundary conditions, one has the L × L matrix,

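For concreteness, the hopping matrix K can be built explicitly. This sketch (our own helper, with t = 1, for chains of at least three sites) constructs the one-dimensional nearest-neighbour matrix with periodic boundary conditions, whose eigenvalues form the usual tight-binding band 2 cos(2πk/L):

```python
import numpy as np

def hopping_matrix(L):
    """L x L nearest-neighbour hopping matrix K in one dimension,
    with periodic boundary conditions and t = 1 (assumes L >= 3)."""
    K = np.zeros((L, L))
    for i in range(L):
        K[i, (i + 1) % L] = 1.0   # bond i -> i+1 (wraps around)
        K[(i + 1) % L, i] = 1.0   # Hermitian partner
    return K
```

Every row then contains exactly two nonzero entries, one for each neighbour of the site.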
With bilinear forms in the exponential, the fermions can be traced out of Eq. (19) according to the development of Appendices A and B. From Eq. (124), with the spin indices reintroduced, and with the identification e^{Δτ h^σ(ℓ)} ≡ e^{Δτ K} e^{Δτ V^σ(ℓ)}, we have

where we have defined

in which the dependence on the auxiliary Ising spins has not been written explicitly, but should be understood, since they come in through the matrix V^σ(ℓ). Introducing

we arrive, finally, at

where the last equality defines an effective 'density matrix', ρ({s}).

We have then expressed the grand partition function as a sum over Ising spins of a product of determinants. If the quantity under the Tr were positive definite, it could be used as a Boltzmann weight to perform importance sampling over the Ising configurations. However, the product of determinants can indeed be negative for some configurations, giving rise to the 'minus-sign problem'; see Sec. 5.

In order to implement the above framework, the need for another approximation is evident from Eq. (24): one needs to evaluate the exponential of the hopping matrix, which, in the general case, is neither an analytically simple operation, nor efficiently implemented numerically. By considering again a one-dimensional system, we see that different powers of K, as given by Eq. (22), generate many different matrices. However, we can introduce a 'checkerboard break-up' by writing 

such that K^(a) involves hoppings between sites 1 and 2, 3 and 4, …, while K^(b) involves hoppings between sites 2 and 3, 4 and 5, …. We now invoke the Suzuki-Trotter decomposition scheme to write

which leads to systematic errors of the same order as before. With this choice of break-up, even powers of K^(a) and K^(b) become multiples of the identity matrix, and we end up with a simple expression,

which is very convenient for numerical calculations. The breakup of Eq. (28) can be generalized to three dimensions as follows

where the separation along each cartesian direction is similar to that for the one-dimensional case.
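The checkerboard idea is easy to verify numerically. In the sketch below (helper names are our own, for a chain with an even number of sites), each factor exp(Δτ K^(a,b)) is assembled in closed form from its decoupled 2×2 bond blocks, with cosh(Δτ) on the diagonal and sinh(Δτ) off it, and the product is compared with the exact exponential; the single-slice error scales as Δτ²:

```python
import numpy as np

def expm_sym(M):
    """Exact matrix exponential of a real symmetric matrix."""
    w, v = np.linalg.eigh(M)
    return (v * np.exp(w)) @ v.T

def checkerboard_factors(L, dt):
    """exp(dt*K^(a)) and exp(dt*K^(b)) for the 1D PBC hopping matrix,
    L even.  Within each factor the bonds are disjoint, so each 2x2
    bond block exponentiates to [[cosh, sinh], [sinh, cosh]]."""
    ch, sh = np.cosh(dt), np.sinh(dt)
    Ea, Eb = np.eye(L), np.eye(L)
    for E, start in ((Ea, 0), (Eb, 1)):   # bonds (0,1),(2,3),... and (1,2),(3,4),...
        for i in range(start, L, 2):
            j = (i + 1) % L
            E[i, i] = E[j, j] = ch
            E[i, j] = E[j, i] = sh
    return Ea, Eb
```

Halving Δτ reduces the single-slice error by roughly a factor of four, consistent with the quoted order of the break-up.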

And, finally, we must discuss the calculation of average values. For two operators A and B, their equal-'time' correlation function is 

If we now define the fermion average - or Green's function - for a given configuration of the HS fields as 

the correlation function becomes 

At this point, the important role played by the Green's functions in the simulations should be stressed. Firstly, according to Eq. (33), the average value of an operator is straightforwardly obtained by sampling the corresponding Green's function over the HS configurations weighted by ρ({s}). Secondly, as will become apparent in Sec. 4, the single-particle Green's function, ⟨c_{iσ} c†_{jσ}⟩_{{s}}, plays a central role in the updating process itself. In Appendix C we obtain this quantity as the element ij of an N_s × N_s matrix [11,8]:

which is, again, in a form suitable for numerical calculations. And, thirdly, within the present approach the fermions only interact with the auxiliary fields, so that Wick's theorem [20] holds for a fixed HS configuration [11,8,14]; the two-particle Green's functions are then readily given in terms of the single-particle ones as

where the indices include spin, but since the ↑ and ↓ spin channels are factorized [cf. Eq. (32)], these fermion averages are zero if the spins are different. All average values of interest are therefore calculated in terms of single-particle Green's functions.

As will be seen, unequal-time correlation functions are also important. We define the operator a in the 'Heisenberg picture' as

so that the initial time is set to τ = Δτ with this discretization, and a†(τ) ≠ [a(τ)]†. In Appendix D, we show that the unequal-time Green's function, for τ_1 > τ_2, is given by [11]

in which the Green's function matrix at the ℓ-th time slice is defined as

with 

The reader should notice the order in which the products of B's are taken in Eqs. (34), (37), and (39); in Eq. (37), in particular, the product runs from ℓ_2 + 1 to ℓ_1, and not cyclically as in Eq. (39). Also, for a given configuration {s} of the HS spins, the equal-time Green's functions do display a time-slice dependence, as expressed by Eq. (38); they only become (approximately) equal after averaging over a large number of configurations.

For future purposes, we also define

 

4  Determinantal quantum Monte Carlo: The simulations

The simulations follow the lines of those for classical systems, apart from both the unusual Boltzmann weight and the fact that one sweeps through a space-time lattice. With the parameters of the Hamiltonian, U and μ, as well as the temperature, fixed from the outset, we begin by generating, say, a random configuration {s} for the HS fields. Since the walker starts on the first time slice, we use the definition, Eq. (34), to calculate the Green's function at ℓ = 1. As the walker sweeps the spatial lattice, it attempts to flip the HS spin at every one of the N_s points.

At this point, it is convenient to picture the walker attempting to flip the HS (Ising) spin on a generic site i of a generic time slice ℓ. If the spin is flipped, the matrices B^↑(ℓ) and B^↓(ℓ) change, due to the element ii of the matrices V^↑(ℓ) and V^↓(ℓ), respectively, being affected; see Eqs. (20) and (24). The expression for the change in the matrix element, as s_i(ℓ) → −s_i(ℓ), is

which allows us to write the change in B^σ(ℓ) as a matrix product,

where the elements of this matrix are

Let us now call {s}′ and {s} the HS configurations in which all Ising spins are the same, except for the one on site (i, ℓ), which is opposite. We can then write the ratio of 'Boltzmann weights' as

where we have defined the ratio of fermion determinants as

It is important to notice that one does not actually need to calculate determinants, since R_σ is given in terms of the Green's function:

The last equality follows from the fact that 

is a matrix whose elements are all zero, except for the i-th position on the diagonal, which is γ^σ(i) ≡ e^{−2λσ s_i(ℓ)} − 1. With this simple form for R_σ, the heat-bath algorithm is then easily implemented, with the probability of acceptance of the new configuration being given by Eq. (4), with Eqs. (44) and (46).
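In code, the determinant ratio and the heat-bath acceptance step reduce to a couple of scalar operations. The sketch below (function names are ours, with the sign conventions of the expressions quoted above) needs only the diagonal element g_ii of the current Green's function:

```python
import numpy as np

def det_ratio(g_ii, lam, s, sigma):
    """R_sigma = 1 + (1 - g_ii) * gamma, with
    gamma = exp(-2*lam*sigma*s) - 1 the change produced by flipping
    the HS spin s; sigma = +1/-1 labels the fermion spin channel."""
    gamma = np.exp(-2.0 * lam * sigma * s) - 1.0
    return 1.0 + (1.0 - g_ii) * gamma

def heat_bath_prob(R_up, R_dn):
    """Heat-bath acceptance probability built from both spin channels."""
    R = R_up * R_dn
    return R / (1.0 + R)
```

Note that no determinant is ever evaluated: the full N_s × N_s problem collapses to arithmetic on a single matrix element per spin channel.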

If the new configuration is accepted, the whole Green's function for the current time slice must be updated, not just its element ii; this is the non-local aspect of QMC simulations we referred to earlier. There are two ways of updating the Green's function. One can either compute the 'new' one from scratch, through Eq. (38), or iterate the 'old' Green's function, by following along the lines that led to Eq. (46), which yields

An explicit form for g^σ(ℓ) is obtained, e.g., by first using an ansatz to calculate the inverse of the matrix in square brackets above,

where x is to be determined from the condition that the product of the two matrices in Eq. (49) is 1. Using the sparseness of the matrices involved, one arrives at

with R_σ being given by Eq. (46). The element jk of the Green's function is then updated according to

Alternatively, one could arrive at the same result by solving a Dyson's equation for g^σ(ℓ) [8].
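The iterative update is nothing but a rank-1 (Sherman-Morrison) correction, and can be checked against a direct matrix inversion. The sketch below is a numerical consistency check with arbitrary input, not production dQMC code: updating g after an accepted flip on site i, with γ the scalar defined above, reproduces the direct evaluation of [1 + (1 − g)Δ]^{−1} g, where Δ has the single nonzero entry γ at position (i, i):

```python
import numpy as np

def update_green(g, i, gamma):
    """Rank-1 update of the equal-time Green's function after an
    accepted flip on site i:
        g'_{jk} = g_{jk} - (delta_{ji} - g_{ji}) * gamma * g_{ik} / R,
    with R = 1 + gamma * (1 - g_{ii})  (Sherman-Morrison identity)."""
    R = 1.0 + gamma * (1.0 - g[i, i])
    col = -g[:, i].copy()
    col[i] += 1.0                     # column (delta_{ji} - g_{ji})
    return g - gamma / R * np.outer(col, g[i, :])
```

The O(N_s²) cost of this outer product, against O(N_s³) for a recomputation from scratch, is what makes the iterative updating attractive.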

After the walker tries to flip the spin on the last site of the ℓ-th time slice, it moves on to the first site of the (ℓ+1)-th time slice. We therefore need the Green's function for the (ℓ+1)-th time slice, which, as before, can be calculated either from scratch or iteratively from the Green's function for the ℓ-th time slice. Indeed, by comparing [g^σ(ℓ+1)]^{−1} with [g^σ(ℓ)]^{−1}, as given by Eq. (38), it is easy to see that

which can be used to compute the Green's function in the subsequent time slice.

While the computation from scratch of the new Green's function takes ~N_s³ operations, the iterations [Eqs. (51) and (52)] take only ~N_s² operations. Hence, at first sight the latter form of updating should always be used. However, rounding errors are amplified after many iterations, thus deteriorating the Green's function calculated in this way; see Sec. 6. A compromise solution is to iterate the Green's function while the walker sweeps all spatial sites of a number ℓ₀ (~10) of time slices, and then to compute a new one from scratch when the (ℓ₀+1)-th time slice is reached. This 'refreshed' Green's function is then used to start the iterations again. Clearly, one should check g for accuracy, by comparing the updated and refreshed ones at the (ℓ₀+1)-th time slice; if the accuracy is poor, ℓ₀ must be decreased.

For completeness, we now return to the situation described in the beginning of this Section, in which the walker attempts to flip the HS spins on every spatial site of the ℓ = 1 time slice. Whenever a flip is accepted, the Green's function is updated according to Eq. (51). After sweeping all spatial sites, and before the walker moves on to the first site of the ℓ = 2 time slice, we use Eq. (52) to calculate g^σ(2). The walker then sweeps the spatial sites of this time slice, attempting to flip every spin; if a flip is accepted, the Green's function is updated according to Eq. (51). As mentioned above, this procedure is repeated for the next time slices. After sweeping the last one of these, a new Green's function is calculated from the definition, Eq. (38). For the subsequent time slices the iterative computation of g is used, and so forth.

Similarly to the classical case, one sweeps the space-time lattice several times (warm-up sweeps) before calculating average values. After warming up, one starts the 'measuring' stage, in which another advantage of the present approach manifests itself in the fact, already noted, that all average quantities are expressed in terms of the Green's functions. This brings us to the discussion of the main quantities used to probe the physical properties of the system, and how they relate to the Green's functions.

We start with the calculation of the occupation, n. It is given by 

where the first equality illustrates that the ensemble average (i.e., over both fermionic and HS fields) ⟨n_{i↑}(ℓ) + n_{i↓}(ℓ)⟩ is itself averaged over all M N_s sites of the space-time lattice. In the second equality the fermion degrees of freedom have been integrated out (hence the Green's functions), and the average over HS fields, represented by double brackets, is to be performed by importance sampling over N_a HS configurations [cf. Eq. (33)]. We then have

It should be recalled that, since these are grand-canonical simulations, the chemical potential must be chosen a priori in order to yield the desired occupation. The chemical potential leading to half filling in the Hubbard model (with nearest-neighbour hopping only) is obtained by imposing particle-hole symmetry [2,21]: one gets n = 1 for μ = U/2, for all T and N_s. Away from half filling, one has to determine μ by trial and error, which must be done repeatedly, since the dependence of n on μ (for given N_s and U) itself varies with temperature. Overall, the dependence of n on μ indicates that the Hubbard model is an insulator at half filling in any dimension: indeed, the compressibility

where V and P are the volume and pressure, respectively, vanishes at μ = U/2, so that no particles can be added to the system; see, e.g., Ref. [22] for explicit data for n(μ) in two dimensions.

The magnetic properties are probed in several ways. Since the components of the magnetization operator are

Wick's theorem [Eq. (35)] allows one to obtain the equal-time spin-density correlation functions as 

and a similar expression for the yy correlations. It is understood that the g's are to be evaluated at the same time slice as the m's on the LHS.

If spin rotational symmetry is preserved, as in the case of a singlet ground state, Eqs. (57a) and (57b) should yield the same result. This is indeed the case for average values but, in practice, the transverse (i.e., xx and yy) correlations are much less noisy than the longitudinal ones [23]; this can be traced back to the discrete HS transformation, which singles out the z component by coupling the auxiliary fields to the m^z's. It should also be pointed out that a possible ferromagnetic state (either saturated or just a non-singlet one) would break this symmetry.

Setting i = j in Eqs. (57) leads to the local moment,

which measures the degree of itinerancy of the fermions. For instance, considering again the Hubbard model at half filling, ⟨(m^ν)²⟩, ν = x, y, z, decreases from 1, in the frozen-charge state (i.e., when U → ∞), to 1/2, in the metallic state (U = 0).

One can collect the contributions to the spin-spin correlation functions from different sites, and calculate the equal-time magnetic structure factor as

where k_F = πn/2, since the lattice spacing is taken as unity. S(q) shows peaks at values of q corresponding to the dominant magnetic arrangements. For the Hubbard model in one dimension, S(q) has peaks at q = 2k_F = πn, corresponding to quasi-long-range order (i.e., algebraic spatial decay of correlations) in the ground state; one refers to this as a state with a spin-density wave [24]. In two dimensions, and at half filling, the peak located at q = (π, π) [25] signals Néel order in the ground state; away from half filling, the system is a paramagnet [11,25] with strong AFM correlations at incommensurate q's [22].
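As an illustration of how S(q) flags a magnetic arrangement, the sketch below (toy correlation data, not simulation output) Fourier transforms a perfectly staggered correlation function C(r) = (−1)^r on a chain, producing the expected peak at q = π:

```python
import numpy as np

def structure_factor(corr):
    """Equal-time structure factor S(q) = sum_r exp(i q r) C(r) on an
    L-site chain, evaluated at the allowed momenta q = 2*pi*m/L."""
    L = len(corr)
    r = np.arange(L)
    qs = 2.0 * np.pi * r / L
    S = np.array([np.sum(np.exp(1j * q * r) * corr).real for q in qs])
    return qs, S
```

For Néel-like correlations all terms add coherently at q = π, giving S(π) = L, while S(0) vanishes.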

The non-commutation aspects imply that the susceptibility is given by [20]

In the discrete-time formulation, the integral becomes a sum over different time intervals, which is carried out with the aid of the unequal-time Green's functions. Focusing on one of the components of the scalar product above, say z, we have

Similarly to the magnetic properties, one can also investigate whether or not there is the formation of a charge-density wave [26,27]. With n_i = n_{i↑} + n_{i↓}, the equal-time correlation function is given by

in terms of which the charge structure factor becomes

It is also useful to define a charge susceptibility as

where, in the discrete time formulation, the above average values are given in terms of the unequal-time Green's functions as

it should be noted that the uniform charge susceptibility is proportional to the compressibility, defined by Eq. (55).

For the one-dimensional Hubbard model, the presence of a charge-density wave (CDW) is signalled by a cusp in the charge structure factor and by a divergence (as T → 0) in the charge susceptibility. While the insulating character at half filling prevents a CDW from setting in, away from half filling there has been some debate on whether the position of the cusp is at q = 2k_F, as predicted by Luttinger liquid theory [28], or at q = 4k_F, as evidenced by QMC data and Lanczos diagonalizations [29]. In two dimensions, and away from half filling, the situation is even less clear.

We now discuss superconducting correlations. The imaginary-time singlet pair-field operator is a generalization of the BCS gap function [30], 

where the subscript z in the form factor, f_z(k), labels the symmetry of the pairing state, following closely the notation for hydrogenic orbitals; for instance, in two dimensions one has

and several other possibilities, including triplet pairing, as well as linear combinations of the above. Ignoring for the time being any τ-dependence, and introducing the Fourier transform of the annihilation operators, Eq. (66) can be written as

with the site-dependent pair-field operator being given by

where the sum runs over lattice sites neighbouring i, with range and relative phase determined by

For the dx²−y²-wave, for instance,

where x̂ and ŷ are unit vectors along the Cartesian directions.

Similarly to the magnetization, simple averages of pair-field operators vanish identically in a finite-sized system, so one again turns to correlation functions. We define the equal-time pairing correlation function as 

whose spatial decay is sometimes analysed as a probe for the superconducting state [31]. Its uniform (i.e., q = 0) Fourier transform,

has also been used, together with finite-size scaling arguments, to probe superconducting pairing [32,21,33]. As before, it is a straightforward exercise to express the above averages in terms of HS averages of Green's functions.

Restoring the τ-dependence, another useful probe is the uniform and zero-frequency (ω = 0) pairing susceptibility,

A considerable enhancement of Pz over its non-interacting counterpart may be taken as indicative of a superconducting state; see, e.g., Refs. [34] and [35]. In fairness, there has been no real consensus on which is the best numerical probe of a superconducting state. It may be argued, for instance, that Pz should be compared with the corresponding non-vertex correlation function; see, e.g., Refs. [13,36].

While the above probes for superconductivity assume a given symmetry for the pairing state, there is an alternative procedure which is free from such assumption [37]. The Kubo linear response to a vector potential Ax(q,w) is given by the x-component of the current density,

where

is the kinetic energy associated with the x-oriented links, and

is the imaginary-time and space Fourier transform (ωm = 2πm/β is the Matsubara frequency [20]) of the current density correlation function,

with

As an exercise, the reader should express the current density correlation function in terms of HS averages of Green's functions.

The superconducting properties of the system are associated with its long-wavelength static response (i.e., q → 0, ω = 0), but with a careful distinction in the way the limits qx → 0 (longitudinal) and qy → 0 (transverse) are taken:

and

As a result of gauge invariance [37], one should always have

which can be used to check the numerics. On the other hand, the superfluid weight, Ds (which is proportional to the superfluid density, ρs), turns out to be [37]

so that a Meissner state is associated with ΛL ≠ ΛT. In practice, ⟨Kx⟩ is calculated independently of ΛT, and one checks whether the latter quantity approaches the former as qy → 0. Since the calculations are performed on finite-sized systems, one should examine the trend of the data as the number of sites is increased; see Ref. [37].

If the system is not a superconductor, the current-density correlation function still allows us to distinguish between a metallic and an insulating state. Indeed, the limit q = 0, ω → 0 (note the order of the limits, which is opposite to that used in relation to the superfluid density!) of the conductivity determines whether or not the ground state of the system has zero resistance [37]. The real part of the zero-temperature conductivity can be written in the form σxx(ω) = D δ(ω) + σreg(ω), where σreg(ω) is a regular function of ω, and the Drude weight, D, describes the DC response. The latter can be estimated at low temperatures by [37]

which amounts to extrapolating discrete non-zero ωm data towards ωm = 0. This can lead to a subtle interplay between the energy scales involved; see Ref. [37] for examples.
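As an illustration of such an extrapolation, one may fit the discrete Matsubara data by a low-order polynomial in ωm and evaluate it at zero. The relation D ≈ π[⟨−Kx⟩ − Λxx(q = 0, iωm → 0)] assumed below follows the spirit of Ref. [37]; the function name and all numbers are synthetic:

```python
import numpy as np

def drude_weight(kx_avg, lam, beta, n_fit=4, deg=2):
    """Estimate the Drude weight by extrapolating Lambda_xx(q=0, i w_m)
    towards w_m = 0 (illustrative scheme, assuming
    D ~ pi * [<-K_x> - Lambda_xx(q=0, i w_m -> 0)])."""
    m = np.arange(1, n_fit + 1)
    wm = 2.0 * np.pi * m / beta          # bosonic Matsubara frequencies
    coeff = np.polyfit(wm, lam[:n_fit], deg)
    lam0 = np.polyval(coeff, 0.0)        # extrapolated value at w_m = 0
    return np.pi * (kx_avg - lam0)

# Synthetic data: Lambda(i w_m) = 0.6 + 0.1 w_m^2 with <-K_x> = 1.0,
# so the exact answer is pi * (1.0 - 0.6)
beta = 10.0
wm = 2.0 * np.pi * np.arange(1, 5) / beta
lam = 0.6 + 0.1 * wm ** 2
D = drude_weight(1.0, lam, beta)
```

In real data the choice of fitting window and polynomial degree matters, precisely because of the interplay of energy scales mentioned above.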

The above-mentioned criteria to determine whether the system is metallic, insulating, or superconducting are summarized in Table 1. The reader is urged to consult References [37,38,39] for illustrative discussions on the many aspects involved in the data analyses.


At this point one should already be convinced that a very wide range of probes is available to scrutinize most of the possible physical properties of a model system. However, two technical problems must be dealt with, which will be analysed in turn: the minus-sign of the fermionic determinant, and the instabilities appearing at low temperatures.

 

5  The minus-sign problem

As mentioned before, the product of fermionic determinants is not positive definite, apart from in a few cases. The best known instance is the repulsive Hubbard model at half filling. To see why this is so, we employ a particle-hole transformation,

such that

and consider the fermionic determinant at the symmetric point, μ = U/2. One then has

where the tildes denote hole variables. Therefore,

For the attractive Hubbard model, the lack of s-dependence in the discrete HS transformation [see the comments below Eq. (18)] leads to O↑({s}) ≡ O↓({s}), so that the product of determinants is positive for all fillings. Similar arguments apply to show that the fermionic determinant is always positive for the Holstein model for electron-phonon interactions [40,41].

In other cases, the fermionic determinant becomes negative for some configurations. In order to circumvent this problem, recall that the partition function can be written as a sum over configurations, c ≡ {s}, of the 'Boltzmann weight', p(c) ≡ det O↑({s}) det O↓({s}). If we write p(c) = s(c)|p(c)|, where s(c) = ±1 keeps track of the sign of p(c), the average of a quantity A can be replaced by an average weighted by |p(c)| as follows

where p′(c) ≡ |p(c)|. Therefore, if the absolute value of p(c) is used as the Boltzmann weight, one pays the price of having to divide the averages by the average sign of the product of fermionic determinants, ⟨sign⟩ ≡ ⟨s⟩p′. Whenever this quantity is small, much longer runs (in comparison to the cases in which ⟨sign⟩ ≈ 1) are necessary to compensate for the strong fluctuations in ⟨A⟩p. Indeed, from Eq. (7) we can estimate that the runs need to be stretched by a factor of order ⟨sign⟩⁻² in order to obtain the same quality of data as for ⟨sign⟩ ≈ 1.
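The reweighting by |p(c)| can be exercised on a toy ensemble; the synthetic weights and observable below are purely illustrative of the bookkeeping, not of an actual simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: signed weights p(c) and a measured quantity A(c)
p = rng.normal(loc=0.5, scale=1.0, size=100_000)   # some p(c) < 0
A = 2.0 + rng.normal(size=p.size)                  # observable values

w = np.abs(p)                 # sample with |p(c)| as the Boltzmann weight
s = np.sign(p)                # keep track of the sign separately

avg_sign = np.average(s, weights=w)                # <sign> = <s>_{p'}
avg_A = np.average(A * s, weights=w) / avg_sign    # <A> = <A s>_{p'} / <sign>
```

Because the estimate divides by ⟨sign⟩, its variance blows up as ⟨sign⟩ → 0, which is the quantitative content of the ⟨sign⟩⁻² factor above.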

In Fig. 1(a), we show the behaviour of ⟨sign⟩ as a function of band filling, for the Hubbard model on a 4×4 square lattice with U = 4, and for three different temperatures. One sees that, away from n = 1, ⟨sign⟩ is only well behaved at certain fillings, corresponding to 'closed-shell' configurations; i.e., those for which the ground state of the otherwise free system is non-degenerate [13]. For the case at hand, the special fillings are 2 and 10 fermions on 4×4 sites, leading to n = 0.125 and 0.625, respectively. At any given non-special filling, ⟨sign⟩ deteriorates steadily as the temperature decreases, rendering the simulations impractical in some cases. Further, Fig. 1(b) shows that, for a given temperature, the dip in ⟨sign⟩ gets deeper as the system size increases. One should note, however, that the position of the global minimum is roughly independent of the system size, which, for fillings away from the dip, allows one to safely monitor size effects on the calculated properties. In this respect, the three-dimensional model is much less friendly, as shown in Fig. 1(c): for 0.3 ≲ n < 1, ⟨sign⟩ is never larger than 0.5 at the same filling for both system sizes.


It is also instructive to discuss the dependence of ⟨sign⟩ on temperature, keeping both the system size and the band filling fixed. In Fig. 2 we show ln⟨sign⟩ vs. β for the 4×4 lattice at the closed-shell filling n = 0.625. (Actually, these data have been obtained in Ref. [40] by means of a ground-state algorithm, but they follow a trend similar to those obtained from the determinantal algorithm discussed here.) As U increases, the sign deteriorates even for this special filling. For other fillings the average sign also decreases with U, and confirms the general expectation [40] that

⟨sign⟩ ~ e^{−γβNs},

where γ depends on n and U. While for a given n the dependence of γ on U is monotonic, for a given U, γ is smaller at the special fillings than elsewhere.

The fundamental question then is how to prevent, or at least to minimize, this minus-sign problem. While one could be tempted to attribute the presence of negative-weight configurations to the special choice of Hubbard-Stratonovich transformations (HST's) used, it has been argued [42] that even the most general forms of HST's are unable to remove the sign problem. It therefore seems that the problem is of a more fundamental nature.

In order to pinpoint the origin of the problem, let us change the notation slightly and write the partition function as

where a generic HS configuration of the ℓ-th time slice is now denoted by Sℓ ≡ {s1(ℓ), s2(ℓ), …, sNs(ℓ)}; the set S ≡ {S1, S2, …, SM} defines a complete path in the space of HS fields. Instead of applying the HST to all time slices at once, one can apply it to each time slice in succession, and collect the contribution from a given HS configuration of the first ℓ time slices into the partial path Pℓ [43],

where there are M − ℓ factors of B ≡ e^{−ΔτH} which have not undergone a HST. Clearly, P0 ≡ Z, since it corresponds to the situation in which no HS fields have been introduced yet; it is therefore positive. When HS fields are introduced in the first time slice, a ''shower'' of 2^Ns different values of P1 emerges from P0; see Fig. 3. Each subsequent introduction of HS fields leads to showers of 2^Ns values of Pℓ emerging from each Pℓ−1. In Fig. 3, we only follow two representative partial paths: the top one is always positive, while the bottom one hits the ℓ-axis at some ℓ0. In the latter case, subsequent showers lead to both negative and positive values of the partial paths. According to the framework discussed in Sec. 4, the simulations are carried out after performing the HST on all sites of all time slices. In the present context, this amounts to sampling solely the intersection of all possible paths with the vertical line at ℓ = M; see Fig. 3. If one were able to sum over all HS configurations, one would find that the number of positive PM's exceeds that of negative PM's by an amount which, at low temperatures, is exponentially small. Since in practice only a limited set of HS configurations is sampled, it is hardly surprising that we find instances in which configurations leading to negative weights outnumber those leading to positive weights. This perspective also helps us to understand why simply discarding the negative-weighted configurations is not correct: the overall contribution of the positive-weighted configurations would be overestimated in the ensemble averages.


The analysis of partial paths is at the heart of a recent proposal [43] to solve the minus-sign problem. It is based on the fact that when a partial path touches the P = 0 axis, it leads to showers which, when summed over all subsequent HS fields, give a vanishing contribution; see Fig. 3. In other words, replacing the remaining B factors in Eq. (92) by their fully HS-transformed sums does not change the fact that the partial path remains zero. Therefore, if one is able to follow the 'time' evolution of the partial paths, and discard those that vanish at ℓ < M, only positive-weight configurations end up contributing at ℓ = M. However, though very simple in principle, this programme is actually quite hard to implement, due to the need to handle the B factors without using a HS transformation. Zhang [43] suggested the use of trial B's, and preliminary results seem encouraging. Clearly, much more work is needed along these lines in order to fully assess the efficiency and robustness of the method.

Likewise, other recent and interesting proposals to deal with the minus-sign problem need to be thoroughly scrutinized. In the meron-cluster approach [44], HS fields are introduced on all sites as usual, but during the sampling process (1) the configurations are decomposed into clusters which can be flipped independently, and (2) a matching between positive- and negative-weighted configurations is sought; see Ref. [44] for details, and Ref. [45] for another grouping strategy. Another approach [46], so far devised for the ground-state projector algorithm, consists of introducing a decision-making step to guide walkers to avoid configurations leading to zero weight; it would be worth examining whether the ideas behind this adaptive sampling approach could also be applied to the finite-temperature algorithm. In this respect it should be noted that Zhang's approach is closely related to the Constrained-Path Quantum Monte Carlo [], in which the ground-state energy and correlation functions are obtained by eliminating configurations giving rise to negative overlaps with a trial state.

In summary, at the time of this writing, QMC simulations are still plagued by the negative-sign problem. Many ideas to solve this problem have been tested over the years, and they require either some bias (through resorting to trial states) or quite intricate algorithms (which render simulations of moderately large systems at low temperatures prohibitive in terms of CPU time), or both. We hope these recent proposals can be implemented in an optimized way.

 

6  Instabilities at low temperatures

When the framework discussed so far is implemented in actual simulations, one faces yet another problem, namely, that the calculation of Green's functions becomes unstable at low temperatures. As mentioned in the paragraph below Eq. (52), the Green's functions can be iterated for about ~10 time slices, after which they have to be calculated from scratch. However, as the temperature is lowered, i.e., for β ≳ 4, this number must be decreased, due to large errors in the iterated Green's function (as compared with the one calculated from scratch). One soon reaches the situation in which the Green's function has to be calculated from scratch at every time slice. It should be noted that this corruption only occurs as one iterates from one time slice to another, and not in the updating stage within a given time slice [13]. Worse still is the fact that, as the temperature is lowered further, the Green's function cannot even be calculated from scratch, since Os = 1 + As(ℓ) becomes so ill-conditioned that it cannot be inverted by simple methods. For instance, in two dimensions and for U = 0, the eigenvalues of Os range between ~1 and ~e^{4β}; for U ≠ 0 we therefore expect the ratio between the largest and smallest eigenvalues of Os to grow exponentially with M, thus becoming singular at low temperatures.
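This ill-conditioning is easy to reproduce numerically. The sketch below uses a free (U = 0) tight-binding chain, for which the spectrum of 1 + B^M spreads as e^{2β} (the two-dimensional figure quoted above would be e^{4β}); all parameters are illustrative:

```python
import numpy as np

Ns, dtau = 16, 0.125

# Free 1D tight-binding chain with periodic boundaries (t = 1)
K = np.zeros((Ns, Ns))
for i in range(Ns):
    K[i, (i + 1) % Ns] = K[(i + 1) % Ns, i] = -1.0

lam, V = np.linalg.eigh(K)                    # K = V diag(lam) V^T
B = V @ np.diag(np.exp(-dtau * lam)) @ V.T    # B = exp(-dtau * K)

conds = []
for M in (8, 32, 128):                        # beta = M * dtau
    A = np.linalg.matrix_power(B, M)          # at U = 0, A_s = B^M
    conds.append(np.linalg.cond(np.eye(Ns) + A))
# conds grows roughly like e^{2*beta}: naive inversion soon fails
```

By β = 16 the condition number exceeds 10¹⁰, so a naive inversion in double precision loses most of its significant digits.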

Having located the problem, two solutions have been proposed, which we discuss in turn.

A. The Space-time Formulation  

The approach used so far can be termed a space formulation, since the basic ingredient, the Green's function [or, equivalently, the matrices As and Os, cf. Eqs. (26), (38), and (39)], is an Ns×Ns matrix. However, within a field-theoretic framework, if space is discretized before integrating out the fermionic degrees of freedom [9], the matrices Os are blown up to

which is an (NsM)×(NsM) matrix (since each of the M×M block entries is itself an Ns×Ns matrix); one still has [9]

as in Eq. (23), and 

Taking the inverse of Ôs yields immediately the space-time Green's function matrix, 

This space-time formulation has the advantage of shrinking the range of eigenvalues of Ôs: the ratio between its largest and smallest eigenvalues now grows only linearly with M, so the inversion is numerically stable [12]. Though this approach has been extremely useful in the study of magnetic impurities [47], it does slow down the simulations when applied to the usual Hubbard model [34]. Indeed, dealing with (NsM)×(NsM) Green's functions would require (NsM)² operations per update, hence Ns³M² operations per time slice, and, finally, (NsM)³ operations per sweep of the space-time lattice. Sweeping through the space-time lattice is then a factor of M² slower than sweeping with the Ns×Ns Green's functions gs.
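The structure of Ôs, and its equivalence with the space formulation, can be checked numerically: with identity blocks on the diagonal, −Bℓ blocks on the subdiagonal, and +BM in the top-right corner (antiperiodic boundary conditions in imaginary time), one has det Ôs = det(1 + BM⋯B1). The sketch below verifies this with random stand-in matrices (the block layout is the standard one; sign conventions may differ from the text's Eq. (93)):

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, M = 4, 5
Bs = [rng.normal(size=(Ns, Ns)) for _ in range(M)]   # stand-ins for B_l

# Assemble the (Ns*M) x (Ns*M) space-time matrix: identity blocks on the
# diagonal, -B_l on the subdiagonal, +B_M in the top-right corner
O = np.eye(Ns * M)
for l in range(1, M):
    O[l*Ns:(l+1)*Ns, (l-1)*Ns:l*Ns] = -Bs[l - 1]
O[0:Ns, (M-1)*Ns:M*Ns] = Bs[M - 1]

prod = np.eye(Ns)
for Bl in reversed(Bs):          # B_M B_{M-1} ... B_1
    prod = prod @ Bl

lhs = np.linalg.det(O)                           # det of space-time matrix
rhs = np.linalg.det(np.eye(Ns) + prod)           # det(1 + B_M ... B_1)
```

Block elimination of the subdiagonal reduces Ôs to the Schur complement 1 + BM⋯B1 on the first block, which is where the identity comes from.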

A compromise between these two formulations was proposed by Hirsch [12]. Instead of using one time slice as a new entry in Ôs [Eq. (93)], one collapses M0 < M time slices into a new entry. That is, by taking M0 ≡ M/p, with p an integer, one now deals with (Ns p)×(Ns p) matrices of the form

in terms of which the partition function is calculated as in Eq. (95). The label 1 indicates that the product of B's starts at the first time slice of each of the p groupings. As a consequence, the time-dependent Green's function sub-matrices, Gs(ℓ1; ℓ2), connecting the first time slice of each grouping with either itself or with the first time slice of subsequent groupings, are readily given by [12]

where the spin indices in G have been omitted for simplicity. The Green's functions connecting the ℓ-th time slices of the p groupings are similarly obtained from the inversion of ÔM0(ℓ), which, in turn, is obtained by increasing all time indices of the B's in Eq. (97) by ℓ−1.

In the course of simulations, one starts with the time-expanded Green's function calculated from scratch through Eq. (98), and sweeps all sites in the first time slice of each of the p groupings. Each time a move is accepted, the Green's function at that time slice is updated according to Eq. (51). After sweeping over these sets of lattice sites, one iterates the Green's functions to obtain the elements of GM0(2) through [12]

We then sweep through all sites of the second time slice of each of the p groupings, and so forth, until all time slices are swept. Note that since Eq. (99) is only used M0 times (as opposed to M times) it does not lead to instabilities for M0 small enough.

Each Green's function update with Eq. (51) requires ~(Nsp)² operations; sweeping through all spatial sites of one time slice in each of the p groupings therefore requires ~(Nsp)³ operations. Since there are M0 slices in each grouping, one has, finally, a total of ~Ns³Mp² operations per sweep, which sets the scale of computer time; this should be compared with ~Ns³M for the original implementation, and ~(NsM)³ for the impurity algorithm. The strategy is then to keep M0 ~ 20 and let p increase as the temperature is lowered. With this algorithm, values of β ~ 20–30 have been achieved in many studies of the Hubbard model; see, e.g., Refs. [25,35,37,21,33,48,49,50,29]. It should also be mentioned that, since the unequal-time Green's function is calculated at each step, this space-time formulation is especially convenient when one needs frequency-dependent quantities [37].

B. Matrix-decomposition stabilization

Let us assume that M0 of the B matrices can be multiplied without deteriorating the accuracy. One then defines [13,14]

which, by Gram-Schmidt orthogonalization, can be decomposed into

where U is a well-conditioned orthogonal matrix, D is a diagonal matrix with a wide spectrum (in the sense discussed at the beginning of the Section), and R is a right triangular matrix which turns out to be well-conditioned [13].

Using the fact that U is well-conditioned, we can multiply it by another group of M0 matrices,

without compromising the accuracy. We then form the product

which amounts to a simple rescaling of the columns, without jeopardizing the stability for the subsequent decomposition as 

where the matrices U, D, and R̄ satisfy the same conditions as in the first step, Eq. (101). Absorbing R̄ into the accumulated right-triangular matrix R, we conclude the second step with

This process is then repeated p = M/M0 times, to finally recover Eq.  (39), in the form [13] 

The equal-time Green's function is therefore calculated through Eq.  (38), as before, but care must be taken when adding the identity matrix to A, since the latter involves the wide-spectrum matrix Dp. One therefore writes

so that the inverse of Eq. (38) becomes

with 

We now apply another decomposition to P, the result of which is multiplied to the left by U, and to the right by R, in order to express the Green's function in the form [13]

In the course of simulations, one proceeds exactly as in Sec. 4, by updating the Green's function through iterations, which takes up ~Ns³M operations per sweep. Again, the iteration from one time slice to another is limited to about M0 time slices, and it turns out that a significant fraction of the computer time is spent in calculating the Green's function from scratch. Indeed, a fresh g is calculated M/M0 times, each of which involves p decompositions, each of which takes about Ns³ operations. Therefore, the dominant scale of computer time for the scratch calculations is ~Ns³p², which is about a factor of M faster than the space-time algorithm, though without the bonus of the unequal-time Green's functions. This matrix-decomposition algorithm has also been efficiently applied to several studies of the Hubbard model, and values of β ~ 20–30 have been easily achieved; see, e.g., Refs. [13,22,40,32,36,37,38,39,51].
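The repeated decomposition can be sketched as follows, with QR standing in for the modified Gram-Schmidt step (an illustrative udr_product of our own, not the full algorithm of Ref. [13]):

```python
import numpy as np

def udr_product(B_list, M0=8):
    """Accumulate B_M ... B_1 as U @ diag(d) @ R, re-decomposing every
    M0 factors; a sketch of the stabilization strategy, not the full
    stabilized-inversion algorithm."""
    Ns = B_list[0].shape[0]
    U, d, R = np.eye(Ns), np.ones(Ns), np.eye(Ns)
    for start in range(0, len(B_list), M0):
        group = np.eye(Ns)
        for B in B_list[start:start + M0]:
            group = B @ group              # safe: only M0 naive products
        T = (group @ U) * d                # rescale columns by current d
        U, r = np.linalg.qr(T)
        d = np.diag(r).copy()              # wide-spectrum diagonal (signed)
        R = (r / d[:, None]) @ R           # keep R well-conditioned
    return U, d, R

# Quick consistency check on well-conditioned, near-identity factors
rng = np.random.default_rng(2)
Bs = [np.eye(3) + 0.1 * rng.normal(size=(3, 3)) for _ in range(20)]
U, d, R = udr_product(Bs, M0=4)
naive = np.eye(3)
for B in Bs:
    naive = B @ naive                      # B_20 ... B_1
```

At genuinely low temperatures one would then form g = (1 + U D R)⁻¹ through the further decompositions described above, rather than by direct inversion.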

 

7  Conclusions

We have reviewed the Determinantal Quantum Monte Carlo technique for fermionic problems. Since the seminal proposal by Blankenbecler, Scalapino, and Sugar [9], over twenty years ago, this method has evolved tremendously. Stabilization techniques allowed the calculation of a variety of quantities at very low temperatures, but the minus-sign problem still plagues the simulations, restricting a complete analysis over a wide range of band fillings and coupling constants. In this respect, it should be mentioned that other implementations of QMC, such as ground-state algorithms (see Ref. [8]), also suffer from this minus-sign problem. An efficient solution of this problem would be a major contribution to the field.

Most of the discussion was centred on the simple Hubbard model, but advances have been made in the QMC study of other models, such as the extended Hubbard model [52], the Anderson impurity model [47], and the Kondo lattice model [53]. The first applications of QMC simulations to disordered systems have appeared only recently; see, e.g., Refs. [38,39]. With the ever-increasing power of personal computers and workstations, one can foresee that many properties of these and of more elaborate and realistic models will soon be elucidated.

 

Acknowledgments

The author is grateful to F. C. Alcaraz and S. R. A. Salinas for the invitation to lecture at the 2002 Brazilian School on Statistical Mechanics, and for the constant encouragement to write these notes. Financial support from the Brazilian agencies CNPq, FAPERJ, and Ministério de Ciência e Tecnologia, through the Millenium Institute for Nanosciences, is also gratefully acknowledged.

 

A  Grouping products of exponentials

One frequently needs to group products of exponentials into a single exponential. Here we establish a crucial result for our current purposes: the product of exponentials of bilinear forms can be grouped into a single exponential of a bilinear form.

To show this result [8], let the generic Hamiltonian be expressed as (we omit spin indices) 

where h is the matrix representation of the Hamiltonian in the single-particle basis. The 'time' evolution of Heisenberg operators [cf. Eq. (36)] satisfies a differential equation, whose solution can be found to be

If we now take the linear combination 

we see that

where the last equality serves as the definition of the matrix on the right-hand side.

Now we introduce the product over time slices, through

so that the h's now acquire ℓ-labels, and we have

where, again, the last equality defines the resulting matrix.

so that the product indeed behaves as a single exponential of an effective bilinear Hamiltonian, whose matrix elements are defined by

B  Tracing out exponentials of fermion operators

We now use the results of App. A to show that the trace over the fermionic degrees of freedom can be expressed in the form of a determinant. Given that the effective Hamiltonian is bilinear in fermion operators, it can be straightforwardly diagonalized [9,11,8,14]:

where, for completeness, the relations between 'old' and 'new' fermion operators are

and

the inverse relations are 

and

Given the diagonal form, the fermion trace is then immediately given by
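The content of this trace formula — for a single bilinear exponential, Tr e^{−c†hc} = det(1 + e^{−h}) in matrix notation — can be checked numerically on a small Fock space. The Jordan-Wigner construction below is our own scaffolding for the check, not part of the formalism above:

```python
import numpy as np
from functools import reduce

Ns = 3
rng = np.random.default_rng(3)
h = rng.normal(size=(Ns, Ns))
h = 0.5 * (h + h.T)                    # real symmetric single-particle matrix

# Fermion operators on the 2^Ns-dimensional Fock space via Jordan-Wigner
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])               # (-1)^n on one site
a = np.array([[0.0, 1.0], [0.0, 0.0]]) # annihilator in the {|0>, |1>} basis

def c_op(i):
    ops = [Z] * i + [a] + [I2] * (Ns - i - 1)
    return reduce(np.kron, ops)

cs = [c_op(i) for i in range(Ns)]
H = sum(h[i, j] * cs[i].T @ cs[j] for i in range(Ns) for j in range(Ns))

lhs = np.sum(np.exp(-np.linalg.eigvalsh(H)))   # Tr e^{-c^+ h c} over Fock space
eps = np.linalg.eigvalsh(h)
rhs = np.prod(1.0 + np.exp(-eps))              # det(1 + e^{-h})
```

Both sides agree to machine precision, since each single-particle level contributes an independent factor (1 + e^{−ε}) to the grand-canonical trace.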

 

C  Equal-time Green's functions

The equal-time Green's function, defined as the fermion average for a given configuration of the HS fields, is (we omit the fermion spin label) 

where one assumes, for the time being, that ci and cj† carry no time label.

As in the calculation of the partition function, Eq. (19), we write exp(−βH) as a product of M factors exp(−ΔτH). We then introduce the matrix representation,

to write 

In order to evaluate the trace, we first change to the diagonal representation through Eqs. (122) and (123), to write

When we take the trace in the diagonal basis, cα′ and cα″† act on the bra to the left, so a nonzero contribution only occurs for α′ = α″. Then, all states but |α′⟩ contribute to the product with (1 + exp(−Δτ εα)) [see Eq. (124)], which cancels with the same term in the denominator. We are then left with

We can generalize the above result and write the Green's function at the ℓ-th time slice as

The reader should convince himself/herself that the average of g(ℓ) over all HS configurations should be independent of ℓ.
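The U = 0 case makes this easy to check numerically: there all Bℓ's coincide, g = (1 + B^M)⁻¹ for every ℓ, and the density ⟨ni⟩ = 1 − gii must reproduce the average of the Fermi function over the single-particle spectrum. A minimal sketch (lattice size, Δτ, and β are illustrative):

```python
import numpy as np

Ns, dtau, M = 12, 0.125, 40            # beta = M * dtau = 5, U = 0
K = np.zeros((Ns, Ns))
for i in range(Ns):
    K[i, (i + 1) % Ns] = K[(i + 1) % Ns, i] = -1.0   # tight-binding chain

lam, V = np.linalg.eigh(K)
B = V @ np.diag(np.exp(-dtau * lam)) @ V.T   # B = exp(-dtau * K)

A = np.linalg.matrix_power(B, M)             # at U = 0, A = B_M ... B_1
g = np.linalg.inv(np.eye(Ns) + A)            # g_ij = <c_i c_j^+>

beta = M * dtau
density = 1.0 - np.mean(np.diag(g))          # <n_i> = 1 - g_ii
fermi = np.mean(1.0 / (np.exp(beta * lam) + 1.0))
```

For this particle-hole-symmetric chain both quantities equal 1/2, i.e., half filling at μ = 0.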

 

D  Unequal-time Green's functions

The unequal-time Green's functions are defined as (we omit the spin indices, for simplicity)

As in the calculation of the partition function, we write the first exponential to the right of the trace as a product of M factors e^{−ΔτH}, so that there are, in effect, M − ℓ1 of these factors before ci; by the same token, there are ℓ1 − ℓ2 such factors between ci and cj†, and ℓ2 factors to the right of cj†. We then use the cyclic property of the trace to bring these latter factors to the front, and introduce ℓ1 − ℓ2 factors of e^{ΔτH} followed by the same number of e^{−ΔτH} factors before ci. The Suzuki-Trotter decomposition is then used to break up the kinetic-energy and interaction terms, and the discrete HS transformation is applied to the exponentials involving the interaction; this adds a time-slice label to the resulting group of exponentials, and we can write

where D is given by Eq. (126). The product of D's can be expressed in its diagonal representation,

which can also be used to express ci, as in Eq. (122). Since 

we have 

Thus, since 1 − nα = 0, 1, we can write

which leads to 

where the sum is over site indices.

With (137), Eq. (132) becomes 

which can, finally, be expressed in terms of the equal-time Green's functions, Eq. (130), as

The reader should check that an analogous calculation, still for ℓ1 > ℓ2, leads to

In the main text, the Green's functions given in Eqs. (139) and (140) acquire a spin index.

 

References

[1] J. Hubbard, Proc. R. Soc. (London) A 276, 238 (1963).

[2] V. J. Emery, Phys. Rev. B 14, 2989 (1976).

[3] R. Micnas, J. Ranninger, and S. Robaszkiewicz, Rev. Mod. Phys. 62, 113 (1990).

[4] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).

[5] S. E. Koonin, Computational Physics (Addison-Wesley, NY, 1986).

[6] K. Binder and D. W. Heermann, Monte Carlo Simulation in Statistical Physics, Solid State Sciences, Vol. 80, 3rd Edition (Springer, Berlin, 1997).

[7] Quantum Monte Carlo Methods, Solid State Sciences, Vol. 74, edited by M. Suzuki (Springer, Berlin, 1986).

[8] W. von der Linden, Phys. Rep. 220, 53 (1992).

[9] R. Blankenbecler, D. J. Scalapino, and R. L. Sugar, Phys. Rev. D 24, 2278 (1981).

[10] J. E. Hirsch, Phys. Rev. B 28, 4059 (1983).

[11] J. E. Hirsch, Phys. Rev. B 31, 4403 (1985).

[12] J. E. Hirsch, Phys. Rev. B 38, 12023 (1988).

[13] S. R. White, D. J. Scalapino, R. L. Sugar, N. E. Bickers, and R. T. Scalettar, Phys. Rev. B 39, 839 (1989).

[14] E. Y. Loh Jr. and J. E. Gubernatis, in Electronic Phase Transitions, edited by W. Hanke and Yu. V. Kopaev (Elsevier, Amsterdam, 1992).

[15] F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill, NY, 1965).

[16] M. E. Fisher, in Critical Phenomena, Proceedings of the International School of Physics ''Enrico Fermi'', Course LI, Varenna, 1970, edited by M. S. Green (Academic, New York, 1971); M. N. Barber, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz (Academic, London, 1983), Vol. 8.

[17] P. M. C. de Oliveira, Computing Boolean Statistical Models (World Scientific, Singapore, 1991).

[18] P. M. C. de Oliveira, T. J. P. Penna, and H. J. Herrmann, Eur. Phys. J. B 1, 205 (1998).

[19] M. Suzuki, in Ref. [7].

[20] A. L. Fetter and J. D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971).

[21] R. R. dos Santos, Phys. Rev. B 48, 3976 (1993).

[22] A. Moreo, D. J. Scalapino, R. L. Sugar, S. R. White, and N. E. Bickers, Phys. Rev. B 41, 2313 (1990).

[23] J. E. Hirsch, Phys. Rev. B 35, 1851 (1987).

[24] G. Grüner, Rev. Mod. Phys. 60, 1129 (1988).

[25] J. E. Hirsch and S. Tang, Phys. Rev. Lett. 62, 591 (1989).

[26] G. Grüner, Rev. Mod. Phys. 66, 1 (1994).

[27] S. Brown and G. Grüner, Sci. Am. 270, 28 (1994).

[28] H. J. Schulz, Phys. Rev. Lett. 64, 2831 (1990).

[29] T. Paiva and R. R. dos Santos, Phys. Rev. B 61, 13480 (2000).

[30] For a self-contained introduction to superconductivity see, e.g., D. R. Tilley and J. Tilley, Superfluidity and Superconductivity, 2nd Edition (IOP Publishing, Bristol, 1994).

[31] K. Kuroki and H. Aoki, Phys. Rev. B 56, 14287 (1997); K. Kuroki, R. Arita, and H. Aoki, J. Phys. Soc. Japan 66, 3371 (1997); K. Kuroki and H. Aoki, J. Phys. Soc. Japan 67, 1533 (1998).

[32] A. Moreo and D. J. Scalapino, Phys. Rev. Lett. 66, 946 (1991).

[33] R. R. dos Santos, Phys. Rev. B 50, 635 (1994).

[34] J. E. Hirsch and H.-Q. Lin, Phys. Rev. B 37, 5070 (1988).

[35] R. R. dos Santos, Phys. Rev. B 39, 7259 (1989).

[36] A. Moreo and D. J. Scalapino, Phys. Rev. B 43, 8211 (1991).

[37] D. J. Scalapino, S. R. White, and S. C. Zhang, Phys. Rev. Lett. 68, 2830 (1992); Phys. Rev. B 47, 7995 (1993).

[38] N. Trivedi, R. T. Scalettar, and M. Randeria, Phys. Rev. B 54, R3756 (1996).

[39] R. T. Scalettar, N. Trivedi, and C. Huscroft, Phys. Rev. B 59, 4364 (1999).

[40] E. Y. Loh Jr., J. E. Gubernatis, R. T. Scalettar, S. R. White, D. J. Scalapino, and R. L. Sugar, Phys. Rev. B 41, 9301 (1990).

[41] R. M. Noack, D. J. Scalapino, and R. T. Scalettar, Phys. Rev. Lett. 66, 778 (1991).

[42] G. G. Batrouni and P. de Forcrand, Phys. Rev. B 48, 589 (1993).

[43] S. Zhang, Phys. Rev. Lett. 83, 2777 (1999); Comp. Phys. Comm. 127, 150 (2000); in Quantum Monte Carlo Methods in Physics and Chemistry, edited by M. P. Nightingale and C. J. Umrigar (Kluwer Academic Publishers, 1999), and cond-mat/9909090.

[44] S. Chandrasekharan and U.-J. Wiese, Phys. Rev. Lett. 83, 3116 (1999).

[45] C. H. Mak, R. Egger, and H. Weber-Gottschick, Phys. Rev. Lett. 81, 4533 (1998).

[46] Y. Asai, Phys. Rev. B 62, 10674 (2000).

[47] S. Zhang, J. Carlson, and J. E. Gubernatis, Phys. Rev. Lett. 74, 3652 (1995); Phys. Rev. B 55, 7464 (1997); J. Carlson, J. E. Gubernatis, G. Ortiz, and S. Zhang, Phys. Rev. B 59, 12788 (1999).

[48] J. E. Hirsch and R. M. Fye, Phys. Rev. Lett. 56, 2521 (1986); R. M. Fye and J. E. Hirsch, Phys. Rev. B 38, 433 (1988).

[49] R. R. dos Santos, Phys. Rev. B 46, 5496 (1992).

[50] R. R. dos Santos, Phys. Rev. B 51, 15540 (1995).

[51] H. Ghosh and R. R. dos Santos, J. Phys.: Condens. Matter 11, 4499 (1999).

[52] T. Paiva, R. T. Scalettar, C. Huscroft, and A. K. McMahan, Phys. Rev. B 63, 125116 (2001).

[53] Y. Zhang and J. Callaway, Phys. Rev. B 39, 9397 (1989).

[54] S. Capponi and F. F. Assaad, Phys. Rev. B 63, 155114 (2001).

 

 

Address to correspondence
Raimundo R. dos Santos
Instituto de Física, Universidade Federal do Rio de Janeiro
Caixa Postal 68528
21945-970, Rio de Janeiro, RJ, Brazil

E-mail: rrds@if.ufrj.br

Received on 27 August, 2002