
Event-by-event simulation of quantum phenomena*

* V Brazilian Meeting on Simulational Physics, Ouro Preto, 2007

H. De Raedt†

† Electronic address: h.a.de.raedt@rug.nl

Department of Applied Physics, Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, NL-9747 AG Groningen, The Netherlands

ABSTRACT

In this talk, I discuss recent progress in the development of simulation algorithms that do not rely on any concept of quantum theory but are nevertheless capable of reproducing the averages computed from quantum theory through an event-by-event simulation. The simulation approach is illustrated by applications to single-photon Mach-Zehnder interferometer experiments and Einstein-Podolsky-Rosen-Bohm experiments with photons.

Keywords: Quantum Theory; Computational Techniques

I. INTRODUCTION

Computer simulation is widely regarded as complementary to theory and experiment [1]. The standard procedure is to start from one or more basic equations of physics and to apply existing or invent new algorithms to solve these equations. This approach has been highly successful for a wide variety of problems in science and engineering. However, there are a number of physics problems, very fundamental ones, for which this approach fails, simply because there are no basic equations to start from.

Indeed, as is well-known from the early days in the development of quantum theory, quantum theory has nothing to say about individual events [2–4]. Reconciling the mathematical formalism that does not describe individual events with the experimental fact that each observation yields a definite outcome is referred to as the quantum measurement paradox and is the most fundamental problem in the foundation of quantum theory [3].

If computer simulation is indeed a third methodology, it should be possible to simulate quantum phenomena on an event-by-event basis. For instance, it should be possible to simulate what we can see, with our own eyes, in a two-slit experiment with single electrons: an interference pattern appears after a considerable number of individual events have been recorded by the detector [5].

In view of the quantum measurement paradox, it is unlikely that we can find such a simulation algorithm by limiting our thinking to the framework of quantum theory. Of course, we could simply use pseudo-random numbers to generate events according to the probability distribution that is obtained by solving the time-independent Schrödinger equation. However, that is not what we mean when we say that within the framework of quantum theory, there is little hope to find an algorithm that simulates the individual events and reproduces the expectation values obtained from quantum theory. The challenge is to find algorithms that simulate, event-by-event, the experimental observations that, for instance, interference patterns appear only after a considerable number of individual events have been recorded by the detector [5, 6], without first solving the Schrödinger equation.

In a number of recent papers [7–15], we have demonstrated that locally-connected networks of processing units can simulate event-by-event, the single-photon beam splitter and Mach-Zehnder interferometer experiments of Grangier et al. [6]. Furthermore, we have shown that this approach can be generalized to simulate universal quantum computation by an event-by-event process [8, 10, 12], and that it can be used to simulate real Einstein-Podolsky-Rosen-Bohm (EPRB) experiments [13–15]. Therefore, at least in principle, our approach can be used to simulate all wave interference phenomena and many-body quantum systems using particle-like processes only. Our work suggests that we may have discovered a procedure to simulate quantum phenomena using event-based processes that satisfy Einstein's criterion of local causality.

This talk is not about interpretations or extensions of quantum theory. The fact that there exist simulation algorithms that reproduce the results of quantum theory has no direct implications for the foundations of quantum theory: These algorithms describe the process of generating events on a level of detail about which quantum theory has nothing to say [3, 4]. The average properties of the data may be in perfect agreement with quantum theory but the algorithms that generate such data are outside of the scope of what quantum theory can describe. This may sound a little strange but it may not be that strange if one recognizes that probability theory neither contains nor provides an algorithm to generate the values of its random variables, which, in a sense, is at the heart of the quantum measurement paradox [15].

II. SINGLE-PHOTON MACH-ZEHNDER INTERFEROMETER

Figure 1 shows the schematic diagram of a Mach-Zehnder interferometer [17]. From Maxwell's theory of classical electrodynamics it follows that the intensity of light recorded by detectors N2 and N3 is proportional to cos²(φ/2) and sin²(φ/2), respectively [17]. Here φ = φ1 − φ2 is the phase difference that expresses the fact that, depending on which path the light takes to travel from the first to the second beam splitter, the optical path length may be different [17].


It is an experimental fact that when the Mach-Zehnder interferometer experiment is carried out with one photon at a time, the numbers of individual photons recorded by detectors N2 and N3 are proportional to cos²(φ/2) and sin²(φ/2) [6], in agreement with classical electrodynamics. In quantum physics [18], single-photon experiments with one beam splitter provide direct evidence for the particle-like behavior of photons. The wave mechanical character appears when one performs interference experiments with individual particles [3, 6]. Quantum physics "solves" this logical contradiction by introducing the concept of particle-wave duality [3].

In this section, we describe a system that does not build on any concept of quantum theory yet displays the same interference patterns as those observed in single-photon Mach-Zehnder interferometer experiments [6]. The basic idea is to describe (quantum) processes in terms of events, messages, and units that process these events and messages. In the experiments of Grangier et al. [6], the photon carries the message (a phase), an event is the arrival of a photon at one of the input ports of a beam splitter, and the beam splitters are the processing units. In experiments with single photons, there is no way, other than through magic, for a photon to communicate directly with another photon. Thus, it is not difficult to imagine that if we want a system to exhibit some kind of interference, the communication among successive photons should take place in the beam splitters.

In this talk, we consider the simplest processing unit that is adequate for our purpose, namely a standard linear adaptive filter [9]. The processing unit receives a message through one of its input ports, processes the message according to some rule (see later), and sends a message (carried by the messenger, that is a photon) through an output port that it selects using a pseudo-random number, drawn from a distribution that is determined by the current state of the processing unit. Other, more complicated processing units that operate in a fully deterministic manner are described elsewhere [7, 10]. Although the sequence of events that the different types of processing units produce can be very different, the quantities that are described by quantum theory, namely the averages, are the same. The essential feature of all these processing units is their ability to learn from the events they process. Processing units that operate according to this principle will be referred to as deterministic learning machines (DLMs) [7, 10].

By connecting an output channel to the input channel of another DLM, we can build networks of DLMs. As the input of a network receives an event, the corresponding message is routed through the network while it is being processed and eventually a message appears at one of the outputs. At any given time during the processing, there is only one output-to-input connection in the network that is actually carrying a message. The DLMs process the messages in a sequential manner and communicate with each other by message passing. There is no other form of communication between different DLMs. The parts of the processing units and network map one-to-one onto the physical parts of the experimental setup and only simple geometry is used to construct the simulation algorithm. Although networks of DLMs can be viewed as networks that are capable of unsupervised learning, they have little in common with neural networks. It is obvious that this simulation approach satisfies Einstein's criteria of realism and local causality [3].

A. Beam splitter

Figure 2 shows the schematic diagram of a DLM that simulates a beam splitter, event-by-event. We label events by a subscript n > 0. At the (n + 1)th event, the DLM receives a message on either input channel 0 or 1, never on both channels simultaneously. Every message consists of a two-dimensional unit vector yn+1 = (y1,n+1, y2,n+1). This vector represents the phase of the event that occurs on channel 0 (1). Although it would be sufficient to use the phase itself as the message, in practice it is more convenient to work with the cosine (y1,n+1) and sine (y2,n+1) of the phase.

The first stage of the DLM (see Fig. 2) stores the message yn+1 in its internal register Yk. Here, k = 0 (1) if the event occurred on channel 0 (1). The first stage also has an internal two-dimensional vector x = (x0,x1) with the additional constraints that xi > 0 for i = 0,1 and that x0 + x1 = 1. As we only have two input channels, the latter constraint can be used to recover x1 from the value of x0. We prefer to work with internal vectors that have as many elements as there are input channels. After receiving the (n + 1)-th event on input channel k = 0,1 the internal vector is updated according to the rule


where 0 < α < 1 is a parameter. By construction xi,n+1 > 0 for i = 0,1 and x0,n+1 + x1,n+1 = 1. Hence the update rule Eq. (1) preserves the constraints on the internal vector. Obviously, these constraints are necessary if we want to interpret xk,n as (an estimate of) the probability for the occurrence of an event of type k.

The second stage of the DLM takes as input the values stored in the registers Y0, Y1, x and transforms this data according to the rule

where we have omitted the event label (n + 1) to simplify the notation. Note that the second subscript of the Y-register refers to the type of input event.

The third stage of the DLM in Fig. 2 responds to the input event by sending a message wn+1 = (Y0,0 − Y1,1, Y0,1 + Y1,0)/√2 through output channel 0 if ||wn+1||² > r, where 0 < r < 1 is a uniform pseudo-random number. Otherwise the back-end sends the message zn+1 = (Y0,1 − Y1,0, Y0,0 + Y1,1)/√2 through output channel 1. Finally, for reasons of internal consistency of the simulation method, it is necessary to replace wn+1 by wn+1/||wn+1|| or zn+1 by zn+1/||zn+1|| such that the output message is represented by a unit vector.
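To make the three stages concrete, the following minimal Python sketch implements one possible reading of this DLM. The specific forms assumed here for the update rule of Eq. (1) (xi,n+1 = αxi,n + (1 − α)δi,k) and for the transformation stage of Eq. (2) (scaling the stored messages by √xk) are consistent with the stated constraints and with the identification ak = Y0,k + iY1,k discussed below, but the published algorithm [9] may differ in detail.

import numpy as np

class BeamSplitterDLM:
    # Sketch of the beam-splitter DLM of Fig. 2. Assumed forms: Eq. (1) is taken as
    # x_{i,n+1} = alpha*x_{i,n} + (1 - alpha)*delta_{i,k}, and Eq. (2) as the scaling
    # of the stored messages by sqrt(x_k).

    def __init__(self, alpha=0.99, rng=None):
        self.alpha = alpha
        self.x = np.array([0.5, 0.5])            # internal vector, x0 + x1 = 1
        self.Y = np.zeros((2, 2))                # Y[:, k] = (cos, sin) message stored for channel k
        self.rng = rng if rng is not None else np.random.default_rng()

    def process(self, k, message):
        # k = input channel (0 or 1); message = (cos phi, sin phi), a unit vector.
        # Stage 1: store the message and update the internal vector (assumed Eq. (1)).
        self.Y[:, k] = message
        event = np.array([1.0 - k, float(k)])
        self.x = self.alpha * self.x + (1.0 - self.alpha) * event
        # Stage 2: scale the stored messages by sqrt(x_k) (assumed Eq. (2)).
        T = self.Y * np.sqrt(self.x)
        # Stage 3: candidate output messages, cf. the expressions for w and z above.
        w = np.array([T[0, 0] - T[1, 1], T[0, 1] + T[1, 0]]) / np.sqrt(2.0)
        z = np.array([T[0, 1] - T[1, 0], T[0, 0] + T[1, 1]]) / np.sqrt(2.0)
        if self.rng.random() < np.dot(w, w):     # channel 0 is selected with probability ||w||^2
            return 0, w / np.linalg.norm(w)
        return 1, z / np.linalg.norm(z)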

It is almost trivial to perform a computer simulation of the DLM model of the beam splitter and convince oneself that it reproduces all the results of quantum theory for this device [9]. With only a little more effort, it can be shown that the input-output behavior of the DLM is, on average, the same as that of the (ideal) beam splitter.

According to quantum theory, the probability amplitudes (b0,b1) of the photons in the output modes 0 and 1 of a beam splitter (see Fig. 2) are given by [6, 19, 20]

where the presence of photons in the input modes 0 or 1 is represented by the probability amplitudes (a0,a1) [6, 19, 20]. From Eq. (3), it follows that the intensities recorded by detectors N0 and N1 are given by

On the other hand, the formal solution of Eq. (1) reads

where xn = (x0,n, x1,n) and x0 denotes the initial value of the internal vector. The input events are represented by the vectors vn+1 = (1,0)T or vn+1 = (0,1)T if the (n + 1)-th event occurred on channel 0 or 1, respectively. Let p0 (1 − p0) be the probability for an input event of type 0 (1). Taking the average of Eq. (5) over many events and using 0 < α < 1, we find that for large n, xn ≈ (p0, 1 − p0)T. Therefore the first stage of the DLM "learns" the probabilities for events 0 and 1 by processing these events in a sequential manner. The parameter 0 < α < 1 controls the learning process.
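As an illustration of this learning behaviour, the short snippet below (reusing the BeamSplitterDLM sketch given above) feeds the machine a random sequence of type-0 and type-1 events with p0 = 0.3 and prints the internal vector, which settles near (0.3, 0.7) up to fluctuations of order 1 − α.

import numpy as np

rng = np.random.default_rng(1)
dlm = BeamSplitterDLM(alpha=0.99, rng=rng)       # the sketch given above
p0 = 0.3
for n in range(10_000):
    k = 0 if rng.random() < p0 else 1            # event of type 0 with probability p0
    phi = rng.uniform(0.0, 2 * np.pi)
    dlm.process(k, np.array([np.cos(phi), np.sin(phi)]))
print(dlm.x)                                     # close to (0.3, 0.7), up to O(1 - alpha) fluctuations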

Using two complex numbers instead of the four real numbers that enter Eq. (2), the identification of a0 with Y0,0 + iY1,0 and of a1 with Y0,1 + iY1,1 shows that the transformation stage plays the role of the matrix-vector multiplication in Eq. (3). By construction, the output stage receives as input the four real numbers that correspond to b0 and b1. Thus, after the DLM has reached the stationary state, it will distribute events over its output channels according to Eq. (4). Of course, this reasoning is firmly supported by extensive simulations [7, 9].

One may wonder what learning machines have to do with the (wave) mechanical models that we are accustomed to in physics. First, one should keep in mind that the approach that I describe in this talk is capable of giving a rational, logically consistent description of event-based phenomena that cannot be incorporated in a wave mechanical theory without adding logically incompatible concepts such as the wave function collapse [3]. Second, the fact that a mechanical system has some kind of memory is not strange at all. For instance, a pulse of light that impinges on a beam splitter induces a polarization in the active part (usually a thin layer of metal) of the beam splitter [17]. Assuming a linear response (as is usually done in classical electrodynamics), we have P(r,t) = χ(r,t) * E(r,t), where "*" is shorthand for the convolution. If the susceptibility χ(r,t) has a nontrivial time dependence (as in the Lorentz model [17], for instance), the polarization will exhibit "memory" effects and will "learn" from subsequent pulses. DLMs mimic this behavior in the simplest manner (see the convolution in Eq. (5)), on an event-by-event basis.

B. Mach-Zehnder interferometer

Using the DLM of Fig. 2 as a module that simulates a beam splitter, we build the Mach-Zehnder interferometer by connecting two DLMs, as shown in Fig. 1. The length of each path from the first to the second beam splitter is made variable, as indicated by the controls on the horizontal lines. The thin, 45º-tilted lines act as perfect mirrors.

In quantum theory, the presence of photons in the input modes 0 or 1 of the interferometer is represented by the probability amplitudes (a0,a1) [20]. The amplitudes to observe a photon in the output modes 0 and 1 of the Mach-Zehnder interferometer (see Fig. 1) are given by

where b0 and b1 are given by Eq. (3). In Eq. (6), the phase factors exp(iφj) for j = 0,1 implement the phase shifts that result from the time delays on the corresponding path (including the phase shifts due to the presence of the perfect mirrors).
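A sketch of the corresponding event-by-event simulation, built from two copies of the BeamSplitterDLM given in Sec. II A, is shown below. The phase shifters are modeled by rotating the two-component message over the path-dependent angle; which output channel corresponds to which detector (N2 or N3) is a labeling convention, so the sketch simply compares both output frequencies with cos²(φ/2) and sin²(φ/2).

import numpy as np

def rotate(msg, phase):
    # Phase shifter: rotate the (cos, sin) message over the path phase.
    c, s = np.cos(phase), np.sin(phase)
    return np.array([c * msg[0] - s * msg[1], s * msg[0] + c * msg[1]])

def mach_zehnder(phi0, phi1, n_events=100_000, alpha=0.99, seed=7):
    # Two beam-splitter DLMs (the sketch of Sec. II A) in series; every photon
    # enters on channel 0 of the first beam splitter with message (1, 0).
    rng = np.random.default_rng(seed)
    bs1, bs2 = BeamSplitterDLM(alpha, rng), BeamSplitterDLM(alpha, rng)
    counts = np.zeros(2, dtype=int)
    for _ in range(n_events):
        path, msg = bs1.process(0, np.array([1.0, 0.0]))
        msg = rotate(msg, phi0 if path == 0 else phi1)   # delay on the path actually taken
        out, _ = bs2.process(path, msg)                  # path j feeds input channel j of BS2
        counts[out] += 1
    return counts / n_events

phi = np.pi / 3
print(mach_zehnder(phi, 0.0), np.sin(phi / 2) ** 2, np.cos(phi / 2) ** 2)
# With this sketch's labeling, the first frequency approaches sin^2(phi/2) and the
# second cos^2(phi/2) once the DLMs have passed their initial learning transient.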

C. Simulation results

The snapshot in Fig. 1 is taken after N = 3030 particles have been generated by the source. The numbers in the various corresponding fields clearly show that even after a modest number of events, this event-by-event simulation reproduces the quantum mechanical probabilities. Of course, this single snapshot is not a proof that the event-by-event simulation also works for other choices of the time delays. More extensive simulations, an example of a set of results being shown in Fig. 3, demonstrate that DLM-networks accurately reproduce the probabilities of quantum theory for these single-photon experiments [7–12].


III. EPRB EXPERIMENTS

In Fig. 4, we show a schematic diagram of an EPRB experiment with photons (see also Fig. 2 in [21]). The source emits pairs of photons. Each photon of a pair propagates to an observation station in which it is manipulated and detected. The two stations are separated spatially and temporally [21]. This arrangement prevents the observation at station 1 (2) from having a causal effect on the data registered at station 2 (1) [21]. As the photon arrives at station i = 1,2, it passes through an electro-optic modulator that rotates the polarization of the photon by an angle depending on the voltage applied to the modulator. These voltages are controlled by two independent binary random number generators. As the photon leaves the polarizer, it generates a signal in one of the two detectors. The station's clock assigns a time-tag to each generated signal. Effectively, this procedure discretizes time in intervals of a width that is determined by the time-tag resolution τ [21]. In the experiment, the firing of a detector is regarded as an event.


As we wish to demonstrate that it is possible to reproduce the results of quantum theory (which implicitly assumes idealized conditions) for the EPRB gedanken experiment by an event-based simulation algorithm, it would be logically inconsistent to "recover" the results of the former by simulating nonideal experiments. Therefore, we consider ideal experiments only, meaning that we assume that detectors operate with 100% efficiency, clocks remain synchronized forever, the "fair sampling" assumption is satisfied [22], and so on. We assume that the two stations are separated spatially and temporally such that the manipulation and observation at station 1 (2) cannot have a causal effect on the data registered at station 2 (1). Furthermore, to realize the EPRB gedanken experiment on the computer, we assume that the orientation of each electro-optic modulator can be changed at will, at any time. Although these conditions are very difficult to satisfy in real experiments, they are trivially realized in computer experiments.

In general, on logical grounds (without counterfactual reasoning), it is impossible to make a statement about the directions of the polarization of particles emitted by the source unless we have performed an experiment to determine these directions. However, in a computer experiment we have perfect control and we can select any direction that we like. Conceptually, there are two extreme cases. In the first case, we assume that we know nothing about the direction of the polarization. We mimic this situation by using pseudo-random numbers to select the initial polarization. This is the case that is typical for a real EPRB experiment. In the second case, we assume that we know that the polarizations are fixed (but are not necessarily the same), mimicking a source that emits polarized photons. A simulation algorithm that aims to reproduce all the results of quantum theory should be able to reproduce all these results for both cases without any change to the simulation algorithm except for the part that simulates the source [13–15].

In the experiment, the firing of a detector is regarded as an event. At the nth event, the data recorded on a hard disk at station i = 1,2 consists of xn,i = ±1, specifying which of the two detectors fired, the time tag tn,i indicating the time at which a detector fired, and the two-dimensional unit vector an,i that represents the rotation of the polarization by the electro-optic modulator. Hence, the set of data collected at station i = 1,2 during a run of N events may be written as

In the (computer) experiment, the data {U1, U2} may be analyzed long after the data has been collected [21]. Coincidences are identified by comparing the time differences {tn,1 − tn,2 | n = 1,…,N} with a time window W [21]. Introducing the symbol Σ′ to indicate that the sum has to be taken over all events that satisfy ai = an,i for i = 1,2, for each pair of directions a1 and a2 of the electro-optic modulators, the number of coincidences Cxy ≡ Cxy(a1,a2) between detectors Dx,1 (x = ±1) at station 1 and detectors Dy,2 (y = ±1) at station 2 is given by

where Θ(t) is the Heaviside step function. We emphasize that we count all events that, according to the same criterion as the one employed in experiment, correspond to the detection of pairs. The average single-particle counts are defined by

where the denominator is the sum of all coincidences.

According to standard terminology, the correlation between x = ±1 and y = ±1 events is defined by [23]

The correlation r(a1,a2) is +1 (-1) in the case that x = y (x = -y) with certainty. If the values of x and y are independent, the correlation r(a1,a2) is zero. Note that in general, the converse is not necessarily true but in the special case of dichotomic variables x and y, the converse is true [24].

In the case of dichotomic variables x and y, the correlation r(a1,a2) is entirely determined by the average single-particle counts Eq. (9) and the two-particle average

For later use, it is expedient to introduce the function

and its maximum

In general, the values of the average single-particle counts E1(a1,a2) and E2(a1,a2), the coincidences Cxy(a1,a2), the two-particle averages E(a1,a2), S(a,b,c,d), and Smax not only depend on the directions a1 and a2 but also on the time-tag resolution τ and the time window W used to identify the coincidences.
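These definitions translate directly into a short analysis routine. The sketch below assumes that the settings are stored as labels (indices or angles), that the events of the two data sets are already matched pairwise (as is the case for the simulation model of Sec. III D; for the real data the matching of Sec. III A comes first), and that the function of Eq. (12) is the standard CHSH-type combination S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′); these choices are assumptions of the sketch.

import numpy as np

def correlations(x1, t1, s1, x2, t2, s2, a1, a2, W):
    # Coincidence counts Cxy(a1,a2) in the spirit of Eq. (8): a pair of events
    # contributes only if the settings equal (a1, a2) and |t_{n,1} - t_{n,2}| < W.
    sel = (s1 == a1) & (s2 == a2) & (np.abs(t1 - t2) < W)
    C = np.zeros((2, 2))                                   # row/column 0 <-> +1, 1 <-> -1
    for x in (1, -1):
        for y in (1, -1):
            C[(1 - x) // 2, (1 - y) // 2] = np.sum(sel & (x1 == x) & (x2 == y))
    tot = C.sum()
    E1 = (C[0, 0] + C[0, 1] - C[1, 0] - C[1, 1]) / tot     # average single-particle counts, Eq. (9)
    E2 = (C[0, 0] - C[0, 1] + C[1, 0] - C[1, 1]) / tot
    E = (C[0, 0] - C[0, 1] - C[1, 0] + C[1, 1]) / tot      # two-particle average
    return C, E1, E2, E

def chsh(E_ab, E_abp, E_apb, E_apbp):
    # Assumed CHSH-type combination for Eqs. (12)-(13); Smax is its maximum over the settings.
    return E_ab - E_abp + E_apb + E_apbp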

A. Analysis of real experimental data

We illustrate the procedure of data analysis and the importance of the choice of the time window W by analyzing a data set (the archives Alice.zip and Bob.zip) of an EPRB experiment with photons that is publicly available [25].

In the real experiment, the number of events detected at station 1 is unlikely to be the same as the number of events detected at station 2. In fact, the data sets of Ref. [25] show that station 1 (Alice.zip) recorded 388455 events while station 2 (Bob.zip) recorded 302271 events. Furthermore, in the real EPRB experiment, there may be an unknown shift Δ (assumed to be constant during the experiment) between the times tn,1 gathered at station 1 and the times tn,2 recorded at station 2. Therefore, there is some extra ambiguity in matching the data of station 1 to the data of station 2.

A simple data processing procedure that resolves this ambiguity consists of two steps [27]. First, we make a histogram of the time differences tn,1 − tm,2 with a small but reasonable resolution (we used 0.5 ns). Then, we fix the value of the time shift Δ by searching for the time difference at which the histogram reaches its maximum, that is, we maximize the number of coincidences by a suitable choice of Δ. For the case at hand, we find Δ = 4 ns. Finally, we compute the coincidences, the two-particle average, and Smax using the expressions given earlier. The average time between two detection events is 2.5 ms and 3.3 ms for Alice and Bob, respectively. The number of coincidences (with double counts removed) is 13975 and 2899 for (Δ = 4 ns, W = 2 ns) and (Δ = 0, W = 3 ns), respectively.
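A compact sketch of the first two steps is given below; the 0.5 ns bin width follows the text, while the search range and the assumption that both time series are sorted and given in seconds are choices of this sketch.

import numpy as np

def find_time_shift(t1, t2, resolution=0.5e-9, max_shift=100e-9):
    # Histogram the differences t1[n] - t2[m] for nearby events and return the
    # shift Delta at which the histogram peaks.
    diffs = []
    j = 0
    for t in t1:
        while j < len(t2) and t2[j] < t - max_shift:
            j += 1                                   # advance the station-2 pointer once
        k = j
        while k < len(t2) and t2[k] <= t + max_shift:
            diffs.append(t - t2[k])
            k += 1
    bins = np.arange(-max_shift, max_shift + resolution, resolution)
    hist, edges = np.histogram(diffs, bins=bins)
    return edges[np.argmax(hist)] + resolution / 2   # centre of the most populated bin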

In Figs. 5 and 6 we present the results for Smax as a function of the time window W. First, it is clear that Smax decreases significantly as W increases, but it is also clear that as W → 0, Smax is not very sensitive to the choice of W [27]. Second, the procedure of maximizing the coincidence count by varying Δ reduces the maximum value of Smax from a value 2.89 that considerably exceeds the maximum 2√2 ≈ 2.83 for the quantum system (see Section III C) to a value 2.73 that violates the Bell inequality (Smax ≤ 2, see Ref. [26]) and is less than the maximum for the quantum system.



The fact that the "uncorrected" data (D = 0) violate the rigorous bound for the quantum system should not been taken as evidence that quantum theory is "wrong": It merely indicates that the way in which the data of the two stations has been grouped in two-particle events is not optimal. There is no reason why a correlation between similar but otherwise unrelated data should be described by quantum theory.

Finally, we use the experimental data to show that the time delays depend on the orientation of the polarizer. To this end, we select all coincidences between D+,1 and D+,2 (see Fig. 4) and make a histogram of the coincidence counts as a function of the time-tag difference, for fixed orientation θ1 = 0 and the two orientations θ2 = π/8, 3π/8 (other combinations give similar results). The results of this analysis are shown in Fig. 7. The maximum of the distribution shifts by approximately 1 ns as the polarizer at station 2 is rotated by π/4, a demonstration that the time-tag data is sensitive to the orientation of the polarizer at station 2. A similar distribution of time delays (of about the same width) was also observed in a much older experimental realization of the EPRB experiment [28].


According to Maxwell's equations, the birefringent properties of the optically anisotropic materials that are used to fabricate the optical elements (polarizers and electro-optic modulators) cause plane waves with different polarization to propagate with different phase velocities [17], suggesting a possible mechanism for the time delays observed in experiments. As light is supposed to consist of non-interacting photons, this suggests, but does not prove, that individual photons experience a time delay as they pass through the electro-optic modulators or polarizers. Of course, strictly speaking, we cannot derive the time delay from classical electrodynamics: The concept of a photon has no place in Maxwell's theory. A more detailed understanding of the time-delay mechanism first requires dedicated, single-photon retardation measurements for these specific optical elements.

B. Role of the coincidence window W

The crucial point is that in any real EPR-type experiment, it is necessary to have an operational procedure to decide if the two detection events correspond to the observation of one two-particle system or to the observation of two single-particle systems. In standard "hidden variable" treatments of the EPR gedanken experiment [26], the operational definition of "observation of a single two-particle system" is missing. In EPRB-type experiments, this decision is taken on the basis of coincidence in time [21, 28, 29].

Our analysis of the experimental data shows beyond doubt that a model which aims to describe real EPRB experiments should include the time window W and that the interesting regime is W → 0, not W → ∞ as is assumed in all textbook treatments of the EPRB experiment. Indeed, in quantum mechanics textbooks it is standard to assume that an EPRB experiment measures the correlation [26]

which we obtain from Eq. (8) by taking the limit W → ∞. Although this limit defines a valid theoretical model, there is no reason why this model should have any bearing on the real experiments, in particular because experiments pay considerable attention to the choice of W. A rational argument that might justify taking this limit is the hypothesis that for ideal experiments, the value of W should not matter. However, in experiments a lot of effort is made to reduce (not increase) W [21, 27].

As we will see later, using our model it is relatively easy to reproduce the experimental facts and the results of quantum theory if we consider the limit W → 0. Furthermore, keeping W arbitrary does not render the mathematics more complicated, so there really is no point in studying the simplified model defined by Eq. (14): We may always consider the limiting case W → ∞ afterwards.

C. Quantum theory

According to the axioms of quantum theory [4], repeated measurements on the two-spin system described by the density matrix ρ yield statistical estimates for the single-spin expectation values

and the two-spin expectation value

where σi = (σi^x, σi^y, σi^z) are the Pauli spin-1/2 matrices describing the spin of particle i = 1,2 [4], and a and b are unit vectors. We have introduced the tilde to distinguish the quantum theoretical results from the results obtained from the data sets {U1,U2}. The state of a quantum system of two S = 1/2 objects is completely determined if we know the expectation values Ẽ1(a), Ẽ2(b), and Ẽ(a,b).

It can be shown that |S̃(a,b,c,d)| ≤ 2√2 [30], independent of the choice of ρ. If the density matrix ρ = ρ1 ⊗ ρ2 factorizes (here ρi is the 2 × 2 density matrix of spin i), then it is easy to prove that |S̃(a,b,c,d)| ≤ 2. In other words, if maxa,b,c,d S̃(a,b,c,d) > 2, then ρ ≠ ρ1 ⊗ ρ2 and the quantum system is in an entangled state. Specializing to the case of the photon polarization, the unit vectors a, b, c, and d lie in the same plane and we may use the angles a, a′, b, and b′ to specify their directions.

The quantum theoretical description of the EPRB experiment assumes that the system is represented by the singlet state |Ψ⟩ = (|H⟩1|V⟩2 − |V⟩1|H⟩2)/√2 of two spin-1/2 particles, where H and V denote the horizontal and vertical polarization and the subscripts refer to photons 1 and 2, respectively. For the singlet state ρ = |Ψ⟩⟨Ψ|,

for which maxa,a′,b,b′ S̃(a,a′,b,b′) = 2√2, confirming that the singlet is a quantum state with maximal entanglement.

Analysis of the experimental data according to the procedure sketched earlier [21, 31–36] yields results that are in good agreement with Ẽ1(a) = Ẽ2(b) = 0 and Ẽ(a,b) = −cos 2(a − b), leading to the conclusion that in a quantum theoretical description, the density matrix does not factorize, in spite of the fact that the photons are spatially and temporally separated and do not interact.
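For reference, the value 2√2 quoted above is easily checked numerically from Ẽ(a,b) = −cos 2(a − b); the CHSH-type combination and the conventional angle choices a, a′ = 0, π/4 and b, b′ = π/8, 3π/8 are assumed here.

import numpy as np

def E_singlet(a, b):
    # Quantum prediction for the photon singlet: E~(a, b) = -cos 2(a - b).
    return -np.cos(2 * (a - b))

a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = E_singlet(a, b) - E_singlet(a, bp) + E_singlet(ap, b) + E_singlet(ap, bp)
print(abs(S), 2 * np.sqrt(2))     # both give 2.828..., the bound quoted above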

D. Classical simulation model

A concrete simulation model of the EPRB experiment sketched in Fig. 4 requires a specification of the information carried by the particles, of the algorithm that simulates the source and the observation stations, and of the procedure to analyze the data. In the following, we describe a slightly modified version of the algorithm proposed in Ref. [13], tailored to the case of photon polarization.

Source and particles: The source emits particles that carry a vector Sn,i = (cos(ξn + (i − 1)π/2), sin(ξn + (i − 1)π/2)), representing the polarization of the photons that travel to station i = 1 and station i = 2, respectively. Note that Sn,1 · Sn,2 = 0, indicating that the two particles have orthogonal polarizations. The "polarization state" of a particle is completely characterized by ξn, which is distributed uniformly over the whole interval [0,2π[. For this purpose, to mimic the apparent unpredictability of the experimental data, we use uniform random numbers. However, from the description of the algorithm, it will be clear that the use of random numbers is not essential. Simple counters that sample the interval [0,2π[ in a systematic, but uniform, manner might be employed as well.

Observation station: The electro-optic modulator in station i rotates Sn,i by an angle γn,i, that is, an,i = (cos γn,i, sin γn,i). The number M of different rotation angles is chosen prior to the data collection (in the experiment of Weihs et al., M = 2 [21]). We use 2M random numbers to fill the arrays (a1,...,aM) and (b1,...,bM). During the measurement process we use two uniform random numbers 1 ≤ m, m′ ≤ M to select the rotation angles γn,1 = am and γn,2 = bm′. The electro-optic modulator then rotates Sn,i = (cos(ξn + (i − 1)π/2), sin(ξn + (i − 1)π/2)) by γn,i, yielding Sn,i = (cos(ξn − γn,i + (i − 1)π/2), sin(ξn − γn,i + (i − 1)π/2)).

The polarizer at station i projects the rotated vector onto its x-axis: Sn,i · x̂i = cos(ξn − γn,i + (i − 1)π/2), where x̂i denotes the unit vector along the x-axis of the polarizer. For the polarizing beam splitter, we consider a simple model: If cos²(ξn − γn,i + (i − 1)π/2) > 1/2 the particle causes D+1,i to fire, otherwise D−1,i fires. Thus, the detection of the particles generates the data xn,i = sign(cos 2(ξn − γn,i + (i − 1)π/2)).

Time-tag model: To assign a time-tag to each event, we assume that as a particle passes through the detection system, it may experience a time delay. In our model, the time delay tn,i for a particle is assumed to be distributed uniformly over the interval [t0, t0 + T], an assumption that is not in conflict with available data [27]. In practice, we use uniform random numbers to generate tn,i. As in the case of the angles ξn, the random choice of tn,i is merely convenient, not essential. From Eq. (8), it follows that only differences of time delays matter. Hence, we may put t0 = 0. The time-tag for the event n is then tn,i ∈ [0,T].

There are not many options to make a reasonable choice for T. Assuming that the particle "knows" its own direction and that of the polarizer only, we can construct one number that depends on the relative angle: Sn,i · x̂i. Thus, T = T(ξn − γn,i) depends on ξn − γn,i only. Furthermore, consistency with classical electrodynamics requires that functions that depend on the polarization have period π [17]. Thus, we must have T(ξn − γn,i + (i − 1)π/2) = F((Sn,i · x̂i)²). We already used cos²(ξn − γn,i + (i − 1)π/2) to determine whether the particle generates a +1 or −1 signal. By trial and error, we found that T = T0 F(|sin 2(ξn − γn,i)|) = T0 |sin 2(ξn − γn,i)|^d yields useful results [13–15, 24, 37]. Here, T0 = maxθ T(θ) is the maximum time delay and defines the unit of time used in the simulation, and d is a free parameter of the model. In our numerical work, we set T0 = 1.

Data analysis: For fixed N and M, the algorithm generates the data sets Ui just as the experiment does [21]. In order to count the coincidences, we choose a time-tag resolution 0 < τ < T0 and a coincidence window W ≥ τ. We set the coincidence counts Cxy(am, bm′) to zero for all x, y = ±1 and m, m′ = 1,...,M. We compute the discretized time tags kn,i = ⌈tn,i/τ⌉ for all events in both data sets. Here ⌈x⌉ denotes the smallest integer that is larger than or equal to x, that is, ⌈x⌉ − 1 < x ≤ ⌈x⌉. According to the procedure adopted in the experiment [21], an entangled photon pair is observed if and only if |kn,1 − kn,2| < k = ⌈W/τ⌉. Thus, if |kn,1 − kn,2| < k, we increment the count Cxy(am, bm′) with x = xn,1 and y = xn,2.
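Putting the model together, the following sketch generates the data of Eq. (7) event-by-event and analyses one pair of settings with the correlations routine sketched earlier; the parameter values follow the text (d = 2, W = τ = 0.00025 T0), while the vectorized bookkeeping and the variable names are merely one possible implementation.

import numpy as np

def simulate_eprb(N, angles1, angles2, d=2, T0=1.0, seed=42):
    # Event-by-event sketch of the model of Sec. III D: source, electro-optic
    # modulators, polarizing beam splitters, and the time-tag model.
    rng = np.random.default_rng(seed)
    xi = rng.uniform(0.0, 2 * np.pi, N)                 # polarization angle of each emitted pair
    settings = [rng.integers(0, len(angles1), N), rng.integers(0, len(angles2), N)]
    data = []
    for i, (angles, m) in enumerate(zip((angles1, angles2), settings)):
        offset = i * np.pi / 2                          # station 2 carries the orthogonal polarization
        theta = xi - angles[m] + offset                 # angle between polarization and polarizer
        x = np.where(np.cos(2 * theta) > 0, 1, -1)      # which detector fires
        t = rng.uniform(0.0, T0, N) * np.abs(np.sin(2 * theta)) ** d   # time-tag model
        data.append((x, t, m))
    return data

# Usage sketch for one pair of settings, with the parameters quoted in Sec. III E.
angles1 = np.array([0.0, np.pi / 4])
angles2 = np.array([np.pi / 8, 3 * np.pi / 8])
(x1, t1, m1), (x2, t2, m2) = simulate_eprb(10**6, angles1, angles2)
W = 0.00025                                             # W = tau = 0.00025 T0
C, E1, E2, E = correlations(x1, t1, m1, x2, t2, m2, 0, 0, W)
print(E, -np.cos(2 * (angles1[0] - angles2[0])))        # compare with the quantum prediction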

E. Simulation results

The simulation proceeds in the same way as the experiment, that is we first collect the data sets {U1,U2}, and then compute the coincidences Eq. (8) and the correlation Eq. (11). The simulation results for the coincidences Cxy (a,b) depend on the time-tag resolution t, the time window W and the number of events N, just as in real experiments [21, 31–36, 38].

Figure 8 shows simulation data for E(a,b) as obtained for d = 2, N = 10⁶ and W = τ = 0.00025 T0. In the experiment, for each event, the random numbers An,i = 1,…,M select one out of four pairs {(ai, bj) | i,j = 1,…,M}, where the angles ai and bj are fixed before the data is recorded. The data shown has been obtained by allowing for M = 20 different angles per station. Hence, forty random numbers from the interval [0,360[ were used to fill the arrays (a1,…,aM) and (b1,…,bM). For each of the N events, two different random number generators were used to select the angles am and bm′. The statistical correlation between m and m′ was measured to be less than 10⁻⁶.


From Fig. 8, it is clear that the simulation data for E(a,b) are in excellent agreement with quantum theory. Within the statistical noise, the simulation data (not shown) for the single-spin expectation values also reproduce the results of quantum theory.

Additional simulation results (not shown) demonstrate that the kind of models described earlier are capable of reproducing all the results of quantum theory for a system of two S=1/2 particles [13–15, 24, 37]. Furthermore, to first order in W and in the limit that the number of events goes to infinity, one can prove rigorously that these simulation models give the same expressions for the single- and two-particle averages as those obtained from quantum theory [13–15, 24, 37].

F. Discussion

Starting from the factual observation that experimental realizations of the EPRB experiment produce the data {U1,U2} (see Eq. (7)) and that coincidence in time is a key ingredient for the data analysis, we have described a computer simulation model that satisfies Einstein's criterion of local causality and exactly reproduces the correlation Ẽ(a1,a2) = −a1 · a2 that is characteristic for a quantum system in the singlet state. Salient features of these models are that they generate the data set Eq. (7) event-by-event, use integer arithmetic and elementary mathematics to analyze the data, do not rely on concepts of probability and quantum theory, and provide a simple, rational and realistic picture of the mechanism that yields correlations such as Eq. (18).

We have shown that whether or not these simulation models produce quantum correlations depends on the data analysis procedure that is performed (long) after the data has been collected: In order to observe the correlations of the singlet state, the resolution t of the devices that generate the time-tags and the time window W should be made as small as possible. Disregarding the time-tag data (d = 0 or W > T0) yields results that disagree with quantum theory but agree with the models considered by Bell [26]. Our analysis of real experimental data and our simulation results show that increasing the time window changes the nature of the two-particle correlations [13–15, 24, 37].

According to the folklore about Bell's theorem, a procedure such as the one that we described should not exist. Bell's theorem states that any local, hidden variable model will produce results that are in conflict with the quantum theory of a system of two S = 1/2 particles [26]. However, it is often overlooked that this statement can be proven for a (very) restricted class of probabilistic models only. Indeed, minor modifications to the original model of Bell lead to the conclusion that there is no conflict [39–41]. In fact, Bell's theorem does not necessarily apply to the systems that we are interested in, as both simulation algorithms and actual data do not need to satisfy the (hidden) conditions under which Bell's theorem holds [42–44].

The apparent conflict between the fact that there exist event-based simulation models that satisfy Einstein's criterion of local causality and reproduce all the results of the quantum theory of a system of two S = 1/2 particles, and the folklore about Bell's theorem, stating that such models are not supposed to exist, dissolves immediately if one recognizes that Bell's extension of Einstein's concept of locality to the domain of probabilistic theories relies on the hidden, fundamental assumption that the absence of a causal influence implies logical independence [45]: in his attempt to extend Einstein's concept of a locally causal theory to probabilistic theories, Bell implicitly made this assumption. In general, this assumption prohibits the consistent application of probability theory and leads to all kinds of logical paradoxes [46, 47]. However, if we limit our thinking to the domain of quantum physics, the violation of the Bell inequalities by experimental data should be taken as a strong signal that it is the correctness of this assumption that one should question. Thus, we are left with two options:

  • One accepts the assumption that the absence of a causal influence implies logical independence and lives with the logical paradoxes that this assumption creates.

  • One recognizes that logical independence and the absence of a causal influence are different concepts [46, 48] and one searches for rational explanations of experimental facts that are logically consistent, as we did in our simulational approach.

IV. CONCLUSION

The simulation models that I described in this talk are purely ontological models of quantum phenomena. The salient features of these simulation models [7–10, 13, 14, 24, 49] are that they

1. generate, event-by-event, the same type of data as recorded in experiment,

2. analyze data according to the procedure used in experiment,

3. satisfy Einstein's criterion of local causality,

4. do not rely on any concept of quantum theory or probability theory,

5. reproduce the averages that we compute from quantum theory.

We may therefore conclude that this computational modeling approach opens new routes to ontological descriptions of microscopic phenomena.

Acknowledgments

I thank K. Michielsen for a critical reading of the manuscript.

[16] Sample Fortran and Java programs and interactive programs that perform event-based simulations of a beam splitter, one Mach-Zehnder interferometer, and two chained Mach-Zehnder interferometers can be found at http://www.compphys.net/dlm.

[18] We make a distinction between quantum theory and quantum physics. We use the term quantum theory when we refer to the mathematical formalism, i.e., the postulates of quantum mechanics (with or without the wave function collapse postulate) [4] and the rules (algorithms) to compute the wave function. The term quantum physics is used for microscopic, experimentally observable phenomena that do not find an explanation within the mathematical framework of classical mechanics.

[25] http://www.quantum.at/research/photonentangle/bellexp/data.html . The results presented here have been obtained by assuming that the data sets * _V.DAT contain IEEE-8 byte (instead of 8-bit) double-precision numbers and that the least significant bit in * _C.DAT specifies the position of the switch instead of the detector that fired.

Received on 5 November, 2007

  • [1] D. P. Landau and K. Binder, A Guide to Monte Carlo Simulation in Statistical Physics (Cambridge University Press, Cambridge, 2000).
  • [2] D. Bohm, Quantum Theory (Prentice-Hall, New York, 1951).
  • [3] D. Home, Conceptual Foundations of Quantum Physics (Plenum Press, New York, 1997).
  • [4] L. E. Ballentine, Quantum Mechanics: A Modern Development (World Scientific, Singapore, 2003).
  • [5] A. Tonomura, The Quantum World Unveiled by Electron Waves (World Scientific, Singapore, 1998).
  • [6] P. Grangier, G. Roger, and A. Aspect, Europhys. Lett. 1, 173 (1986).
  • [7] K. De Raedt, H. De Raedt, and K. Michielsen, Comp. Phys. Comm. 171, 19 (2005).
  • [8] H. De Raedt, K. De Raedt, and K. Michielsen, J. Phys. Soc. Jpn. Suppl. 76, 16 (2005).
  • [9] H. De Raedt, K. De Raedt, and K. Michielsen, Europhys. Lett. 69, 861 (2005).
  • [10] K. Michielsen, K. De Raedt, and H. De Raedt, J. Comput. Theor. Nanosci. 2, 227 (2005).
  • [11] H. De Raedt, K. De Raedt, and K. Michielsen, in Computer Simulation Studies in Condensed-Matter Physics XVIII, edited by D. P. Landau, S. P. Lewis, and H. B. Schüttler (Springer, Berlin, 2006), vol. 105.
  • [12] K. Michielsen, H. De Raedt, and K. De Raedt, in Computer Simulation Studies in Condensed-Matter Physics XVIII, edited by D. P. Landau, S. P. Lewis, and H. B. Schüttler (Springer, Berlin, 2006), vol. 105.
  • [13] K. De Raedt, K. Keimpema, H. De Raedt, K. Michielsen, and S. Miyashita, Euro. Phys. J. B 53, 139 (2006).
  • [14] H. De Raedt, K. De Raedt, K. Michielsen, K. Keimpema, and S. Miyashita, J. Phys. Soc. Jpn. 76, 104005 (2007).
  • [15] H. De Raedt, K. De Raedt, K. Michielsen, K. Keimpema, and S. Miyashita, J. Comp. Theor. Nanosci. 4, 1 (2007).
  • [17] M. Born and E. Wolf, Principles of Optics (Pergamon, Oxford, 1964).
  • [19] J. G. Rarity and P. R. Tapster, Phil. Trans. R. Soc. Lond. A 355, 2267 (1997).
  • [20] G. Baym, Lectures on Quantum Mechanics (W.A. Benjamin, Reading MA, 1974).
  • [21] G. Weihs, T. Jennewein, C. Simon, H. Weinfurther, and A. Zeilinger, Phys. Rev. Lett. 81, 5039 (1998).
  • [22] G. Adenier and A. Y. Khrennikov, J. Phys. B: At. Mol. Opt. Phys. 40, 131 (2007).
  • [23] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes (Clarendon Press, Oxford, 1995).
  • [24] K. De Raedt, H. De Raedt, and K. Michielsen, Comp. Phys. Comm. 176, 642 (2007).
  • [26] J. S. Bell, Speakable and unspeakable in quantum mechanics (Cambridge University Press, Cambridge, 1993).
  • [27] G. Weihs, Ph.D. thesis, University of Vienna (2000), http://www.quantum.univie.ac.at/publications/thesis/gwdiss.pdf .
  • [28] C. A. Kocher and E. D. Commins, Phys. Rev. Lett. 18, 575 (1967).
  • [29] J. F. Clauser and M. A. Horne, Phys. Rev. D 10, 526 (1974).
  • [30] B. S. Cirel'son, Lett. Math. Phys. 4, 93 (1980).
  • [31] S. J. Freedman and J. F. Clauser, Phys. Rev. Lett. 28, 938 (1972).
  • [32] A. Aspect, J. Dalibard, and G. Roger, Phys. Rev. Lett. 49, 1804 (1982).
  • [33] P. R. Tapster, J. G. Rarity, and P. C. M. Owens, Phys. Rev. Lett. 73, 1923 (1994).
  • [34] W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. 81, 3563 (1998).
  • [35] M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, Nature 409, 791 (2001).
  • [36] D. Fattal, E. Diamanti, K. Inoue, and Y. Yamamoto, Phys. Rev. Lett. 92, 037904 (2004).
  • [37] H. De Raedt, K. Michielsen, S. Miyashita, and K. Keimpema, Euro. Phys. J. B 58, 55 (2007).
  • [38] A. Aspect, P. Grangier, and G. Roger, Phys. Rev. Lett. 49, 91 (1982).
  • [39] J. A. Larsson and R. D. Gill, Europhys. Lett. 67, 707 (2004).
  • [40] E. Santos, Stud. Hist. Phil. Mod. Phys. 36, 544 (2005).
  • [41] M. Zukowski, Stud. Hist. Phil. Mod. Phys. 36, 566 (2005), arXiv: quant-ph/0605034.
  • [42] L. Sica, Opt. Comm. 170, 55 (1999).
  • [43] K. Hess and W. Philipp, Proc. Natl. Acad. Sci. USA 101, 1799 (2004).
  • [44] K. Hess and W. Philipp, Found. of Phys. 35, 1749 (2005).
  • [45] E. T. Jaynes, in Maximum Entropy and Bayesian Methods, edited by J. Skilling (Kluwer Academic Publishers, Dordrecht, 1989), vol. 36.
  • [46] E. T. Jaynes, Probability Theory: The Logic of Science (Cambridge University Press, Cambridge, 2003).
  • [47] M. Tribus, Rational Descriptions, Decisions and Designs (Expira Press, Stockholm, 1999).
  • [48] R. T. Cox, The Algebra of Probable Inference (Johns Hopkins University Press, Baltimore, 1961).
  • [49] S. Zhao and H. De Raedt, J. Comp. Theor. Nanosci. (in press) (2007).