
World records in the size of simulated Ising models

Abstract

Monte Carlo simulations with up to 176⁵ spins confirm the Chen-Dohm theory for the five-dimensional Ising model. Also world record sizes of 1000192², 9984³, 880⁴, 48⁶ and 21⁷ spins were simulated in the literature, and we describe the needed multi-spin coding algorithm.



Dietrich Stauffer

Institute for Theoretical Physics,

Cologne University, D-50923 Köln, Euroland

e-mail: stauffer@thp.uni-koeln.de

Received on 15 July, 2000


Brazil does good Ising research, but Ising was not born there. In contrast, this paper reports on some work done in the town where Ernst Ising was born a century ago: Monte Carlo simulations of large Ising models in d dimensions, i.e. the nearest-neighbour spin-1/2 model without any further quantum effects, with periodic or helical boundaries on hypercubic lattices of Lᵈ spins.

It is a waste of memory to store every spin of an Ising model in a full computer word of, say, 32 bits. Friedberg and Cameron [1] seem to have started putting spins into single bits, and Claudio Rebbi [2] made the technique popular as multi-spin coding; in biology, such bit-strings were already known from Eigen's quasispecies model of the same era [3]. Since in three dimensions we have to sum over six neighbours, giving from zero to six antiparallel spins, the energy sum requires three bits. In more than three dimensions, more bits are needed. Here we use four bits per spin, allowing simulations in up to seven dimensions and fitting nicely into modern 32- or 64-bit words. (Rebbi used 3 bits on 60-bit words.) I do not deal here with the more efficient but less versatile method of using one bit per spin and doing the sum over the nearest neighbours through purely logical operations. Instead, the program allows the dimensionality d to be varied by changing just one parameter. Its principles were already explained long ago [4, 5].
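The word-level arithmetic behind this can be sketched in C (the paper's own code is Fortran); the function name and the 16-spins-per-64-bit-word layout here are illustrative assumptions, not the actual program. Each spin (0 or 1) occupies the lowest bit of its own 4-bit field, so an XOR marks antiparallel pairs and ordinary addition accumulates up to 2d ≤ 14 counts per field without carrying into the next spin:

```c
#include <stdint.h>

/* MK(0)-style mask: the lowest bit of every 4-bit field set. */
#define NIBBLE_MASK 0x1111111111111111ULL

/* For each of the 16 spins packed into `word`, count how many of the
   `twod` neighbour words hold an antiparallel spin. XOR gives a 1 bit
   where the spins differ; since each nibble can hold up to 15 and
   twod <= 14 in up to seven dimensions, the per-spin sums never
   overflow into the neighbouring field. */
uint64_t antiparallel_count(uint64_t word, const uint64_t *neigh, int twod)
{
    uint64_t sum = 0;
    for (int k = 0; k < twod; ++k)
        sum += (word ^ neigh[k]) & NIBBLE_MASK;
    return sum;  /* nibble i = antiparallel-neighbour count of spin i */
}
```

All 16 spins of a word are thus updated with a handful of word-wide operations instead of 16 separate neighbour sums.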

Let us assume we already have a working three-dimensional Ising program with multi-spin coding using 4 bits per spin, on a supercomputer like the Cray-T3E with 64 bits per word and thus 16 spins per word. The spin words are called IS(ii, i), with ii going from 1 to LL = L/16 and i = 1 … Lᵈ⁻¹ in d dimensions. To add a fourth or fifth dimension we just have to add two or four neighbours. If we do this with an innermost loop over the 6 (or 2d) neighbours, then vector computing is slowed down. If we use traditional if-commands, then both vector and scalar computers are slowed down. The trick is to use if-commands depending on a constant dimension idim defined at the beginning through a Fortran PARAMETER statement (or analogous constructs in lesser languages). Then a statement like if(idim.eq.3) goto 8 is evaluated already at compile time, the compiler might even warn you that the following line is never reached, and after compilation the execution is no longer slowed down by the numerous if-statements.
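A C analogue of this trick (the paper uses a Fortran PARAMETER constant; the names and helical strides below are illustrative assumptions) makes the dimension a compile-time constant, so the compiler deletes the untaken branches and the inner loop carries no run-time if-overhead:

```c
/* The dimensionality is fixed at compile time, like a Fortran PARAMETER. */
enum { IDIM = 3 };

/* Sum the 2*IDIM neighbour spins of site i on a helically indexed lattice.
   Because IDIM is a compile-time constant, `if (IDIM == 3) return sum;`
   is resolved by the compiler: for IDIM = 3 the lines below it are dead
   code (the compiler may even warn they are unreachable), exactly as
   described for the Fortran version. */
int neighbour_sum(const int *spin, long i, const long *stride)
{
    int sum = 0;
    sum += spin[i - stride[0]] + spin[i + stride[0]];
    sum += spin[i - stride[1]] + spin[i + stride[1]];
    sum += spin[i - stride[2]] + spin[i + stride[2]];
    if (IDIM == 3) return sum;
    sum += spin[i - stride[3]] + spin[i + stride[3]];
    if (IDIM == 4) return sum;
    sum += spin[i - stride[4]] + spin[i + stride[4]];
    return sum;
}
```

Changing the dimension then means editing the single constant IDIM and recompiling.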

Of course, it is still a waste of memory to use four bits per spin when one suffices. A way out is compression: only the current hyperplane and the two neighbouring hyperplanes are stored as IS with four bits per spin, while for all other planes four words are compressed into one after shifting them by zero, one, two or three bit positions. They can be re-read from there by logical-AND statements with a mask of repeated 0001 sequences, called MK(0), …, MK(3) here. Normally this would reduce the main spin array ISM(ii, i) to ii = 1 … L/64, i = 1 … Lᵈ⁻¹.

However, in this case the minimum linear dimension would be L = 128, since at least two words are needed for each lattice line in multi-spin coding. This would restrict the applications of the program very much. Therefore, this 4:1 compression was instead performed for the second index i, thus requiring a main spin array ISM(ii, i) with ii = 1 … L/16, i = 1 … Lᵈ⁻¹/4. Unfortunately, this makes the programming complicated, involving the variables I0, IM, IP, INDEX, INDX after the loop DO 4 in my program. And I cannot use it for two dimensions anymore, where compression was made in the ii-direction instead. (In case of need, ask stauffer@thp.uni-koeln.de for a simpler program ispardd9.f in two to seven dimensions without compression; the complicated one is ispardd0.f.) More than twenty spins were updated every microsecond on every processor of a fast Cray-T3E in Jülich, for five dimensions.
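The 4:1 compression with the MK masks can be sketched as follows in C; compress4 and decompress are hypothetical names, and the 64-bit layout is an assumption rather than the actual ispardd0.f code:

```c
#include <stdint.h>

/* MK(0) of the text: a 0001 pattern in every 4-bit field. */
#define MK0 0x1111111111111111ULL

/* Pack four spin words (each using only the lowest bit of every nibble)
   into one word by shifting them by 0, 1, 2 and 3 bit positions. */
uint64_t compress4(const uint64_t a[4])
{
    return  (a[0] & MK0)
         | ((a[1] & MK0) << 1)   /* occupies the MK(1) positions */
         | ((a[2] & MK0) << 2)   /* MK(2) positions */
         | ((a[3] & MK0) << 3);  /* MK(3) positions */
}

/* Re-read word j (0..3) by shifting back and masking with MK(0). */
uint64_t decompress(uint64_t packed, int j)
{
    return (packed >> j) & MK0;
}
```

Only the planes currently being updated need to be expanded back to four bits per spin; the rest of the lattice stays packed at one bit per spin.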

[Technical remarks: This program was written for parallel computing with NPROC processors by dividing the hypercube into NPROC layers of thickness L/NPROC ≥ 2. The message-passing command shmem_get(a, b, c, d) gets c words from processor d, starting with word b there, and stores them in the memory of the current processor NODE = shmem_my_pe() beginning at word a. The line INFO = BARRIER() synchronizes the processors: computation proceeds only after all nodes have reached this line. POPCNT counts the number of set bits in a word, IRTC counts machine cycles, and the final loop do iadd = 1, nproc-1 gives a primitive global summation over the magnetization in every node. We use the heat bath algorithm.]

What can we do with this program? As the title promises, it allows us to simulate world record sizes: L = 10⁶ in two dimensions, matching Linke et al. [6], L = 10⁴ in three dimensions, equalling the percolation record [7], and 880⁴, 176⁵ and 48⁶ spins in higher dimensions. These world records were established at the same time as Ceperley's prediction [8] that 10¹² sites could be simulated was published. (Since we need at least two hyperplanes per processor, the 512 nodes of the Cray-T3E cannot all be used for higher dimensions. Fortunately, in physics there is little need for more than five dimensions.) Some of the results in lower dimensions were given before [9] and are compatible with those from smaller systems [10]. Now come some new five-dimensional simulations.

According to Chen and Dohm [11], the behavior in five dimensions, above the upper critical dimension of four, is far from trivial: some finite-size field theories are invalid, and finite-size scaling is not of the usual one-parameter type. This work was criticized [12] as contradicting Monte Carlo simulations, and that criticism in turn was recently refuted [13]. A comparison of field theory, φ⁴ lattice theory, and the true Ising model requires fitting several parameters, since e.g. the critical temperature is not the same. Ref. [12] set one of these parameters to unity, while Ref. [13] allowed it to differ from one. This latter choice removed [13] some of the discrepancy between the theoretical prediction [11] and Monte Carlo simulation [12]. As a result, several bulk amplitudes differ depending on whether we use the fit of Mainz [12] or Aachen [13], and we now perform new simulations to find out which of these two predictions, [12] or [13], fits our new data better. (The parameters appearing in the theory of [11, 13] have been identified as bulk parameters related to the amplitudes of the bulk correlation length, the bulk susceptibility and the bulk magnetization. Thus the values of their parameters can be tested by bulk simulations.)

First, we determined the correlation length ξ, defined through the exponential decay of the correlation function, by observing the magnetization profile between two hyperplanes of up spins and fitting an exponential on an intermediate region somewhat removed from the surface but where the magnetization still differs significantly from its bulk value, Fig. 1. We used one-word-per-spin coding for typically 35⁵ spins above Tc in zero magnetic field, and multi-spin coding (4 bits per spin) for 48⁵, 80⁵ and 112⁵ spins at T = Tc in a magnetic field h = mB/kBT, where m is the magnetic dipole moment. Typically, 2000 Monte Carlo steps per spin were simulated. Fig. 2 shows the data for T > Tc in zero field, Fig. 3 the case T = Tc in a positive field. In spite of the larger lattices, the results in a magnetic field are less accurate since then, as opposed to paramagnets, the bulk magnetization is nonzero and has to be subtracted from the magnetization profile. We thus estimate ξ·h^(1/3) = 0.27 ± 0.03 and ξ·[(T − Tc)/Tc]^(1/2) = 0.36 ± 0.02, in better agreement with the Aachen predictions (0.273 and 0.396) than the Mainz estimates (0.390 and 0.549).

The magnetization M can be determined more accurately since no fit to an exponential profile is involved. Fig. 4 shows the amplitude M/h^(1/3) at T = Tc, and Fig. 5 shows M/(1 − T/Tc)^(1/2) below Tc in zero field. Again the lower horizontal line is the Aachen prediction, which agrees better with the data than the upper horizontal line from Mainz. Moreover, Fig. 5, using up to 176⁵ spins, shows, as predicted by analytic theory [13], a scaling correction 2.25 − 1.1·(1 − T/Tc)^(1/2), which fits better than the correction linear in T − Tc suggested by Cheon et al. [11].


Figure 2.
Correlation length amplitudes above Tc in zero field.

Figure 4.
Magnetization amplitudes at Tc in positive field.

Figure 5. Magnetization amplitudes below Tc in zero field with t = T/Tc − 1; the straight line with negative slope suggests scaling corrections ∝ |t|^(1/2).

Thus in all cases the Aachen prediction fits better than the Mainz alternative, and only Fig. 2 hints that the Aachen value is merely approximate.

[Two side remarks: (i) In six and seven dimensions, Figs. 6 and 7 confirm, less accurately, the series estimates J/kBTc = 0.092295(3) and 0.077706(2) [16]: for the correct temperature, the intercept of the extrapolation gives the asymptotic kinetic exponent z = 2. The 21⁷ spins were simulated without multi-spin coding. For Swendsen-Wang cluster flips, Fig. 8 shows nicely the exponential decay of the magnetization with time in two dimensions: z = 0. Three-dimensional Q2R cellular automata still show an unexplained dynamics, z = 3.37, Fig. 9. Fig. 10 shows results for Metropolis kinetics with Ito's one-bit-per-spin coding and parallelization in stripes, reaching 10¹⁰ updates per second. These system sizes can be compared with the present site percolation records [7] of L = 4 million, 10000, 611, 127, 49, and 26 for two to seven dimensions.





(ii) At the last conference in Belo Horizonte in which I took part, the Brazilian computational physics meeting, I learned [14] that simulations of dipole forces are easy. As a result, Michelsen [17] investigated the stability of arrays of parallel strips of Ising spins interacting also through long-range dipole forces, as for iron monolayers on stepped surfaces [15]. He found the Curie point to be stable for large systems, and not just a gradual freezing-in as in a glass.]

I thank X.S. Chen and V. Dohm for stimulating discussions and for sending the various 5D amplitude predictions, N. Ito for checking a parallel algorithm, P.M.C. de Oliveira for a critical reading of the manuscript, SFB 341 for partial support, and the German Supercomputer Center Jülich for time on their Cray-T3E.

References

[1] R. Friedberg and J.E. Cameron, J. Chem. Phys. 52, 6049 (1972)

[2] M. Creutz, L. Jacobs and C. Rebbi, Phys. Rev. Lett. 42, 1390 (1979)

[3] M. Eigen, J. McCaskill and P. Schuster, Adv. Chem. Phys. 75, 149 (1990). See also H.J. Muller, Mutat. Res. 1, 2 (1964).

[4] D. Stauffer, F.W. Hehl, N. Ito, V. Winkelmann and J.G. Zabolitzky, Computer Simulation and Computer Algebra, Springer, Heidelberg-Berlin 1993

[5] D. Stauffer, page 5 in: Computer simulation studies in condensed matter physics VI, D. P. Landau, K. K. Mon, H. B. Schüttler, eds., Springer, Heidelberg 1993

[6] A. Linke, D.W. Heermann, P. Altevogt, M. Siegert, Physica A 222, 205 (1995)

[7] N. Jan and D. Stauffer, Int. J. Mod. Phys. C 9, 341 (1998); D. Tiggemann and A.B. MacIsaac, priv. comm.

[8] L. Ceperley, Rev. Mod. Phys. 71, S 438 (1999)

[9] D. Stauffer, Physica A 244, 344 (1997); Int. J. Mod. Phys. C 10, 807, 809, 931 (1999)

[10] N. Ito, K. Hukushima, K. Ogawa and Y. Ozeki, J. Phys. Soc. Jpn. 69, 1931 (2000)

[11] X. S. Chen and V. Dohm, Int. J. Mod. Phys. C 9, 1007 and 1073 (1998); see also M. Cheon, I. Chang and D. Stauffer, Int. J. Mod. Phys. C 10, 131 (1999) and Phys. Rev. E, in press (Jan. 2001).

[12] E. Luijten, K. Binder and H. W. J. Blöte, Eur. Phys. J. B 9, 289 (1999); K. Binder et al., Physica A 281, 112 (2000).

[13] X. S. Chen and V. Dohm, e-print cond-mat/0005022 and priv. comm.

[14] T.G. Rappoport, T.S. de Menezes, L.C. Sampaio, M.P. Albuquerque and F. Mello, Int. J. Mod. Phys. C 9, 821 (1998)

[15] P. Sen, D. Stauffer and U. Gradmann, Physica A 245, 361 (1997)

[16] M. Gofman, J. Adler, A. Aharony, A.B. Harris and D. Stauffer, J. Stat. Phys. 71, 1221 (1993)

[17] A. Michelsen, Masters Thesis, Hamburg and Cologne, Feb. 2000

[18] J.S. Wang and R.H. Swendsen, Physica A 167, 565 (1990)

[19] R. Hackl, H.G. Matuttis, J.M. Singer, T. Husslein, and I. Morgenstern, Int. J. Mod. Phys. C 4, 1117 (1993)


Publication Dates

  • Publication in this collection
    06 Feb 2002
  • Date of issue
    Dec 2000

Sociedade Brasileira de Física, Caixa Postal 66328, 05315-970 São Paulo SP, Brazil
E-mail: sbfisica@sbfisica.org.br