
Classical and quantum stochastic thermodynamics

Abstract

Stochastic thermodynamics provides a framework for the description of systems that are out of thermodynamic equilibrium. It is based on the assumption that the elementary constituents are acted upon by random forces that generate a stochastic dynamics, which is here represented by a Fokker-Planck-Kramers equation. We emphasize the role of the irreversible probability current, the vanishing of which characterizes thermodynamic equilibrium and yields a special relation between fluctuation and dissipation. The connection to thermodynamics is obtained through the definition of the energy function and the entropy, as well as of the rate at which entropy is generated. The extension to quantum systems is provided by a quantum evolution equation obtained by a canonical quantization of the Fokker-Planck-Kramers equation. An example of an irreversible system is presented, which shows a nonequilibrium stationary state with an unceasing production of entropy. A relationship between the fluxes and the path integral is also presented.

Keywords:
Stochastic thermodynamics; Quantum thermodynamics; Entropy production

1. Introduction

Thermodynamics was conceived as a discipline based on principles and laws that refer to macroscopic quantities, such as the principles of energy conservation and of entropy increase, which are the first and second laws of thermodynamics. Although these two principles are valid for systems in equilibrium as well as for systems out of equilibrium, the initial development of the discipline led to the establishment of a theory of the thermodynamics of systems in equilibrium. This was possible because the energy of a system in thermodynamic equilibrium is related functionally to the entropy, which allows the definition of temperature.

The derivation of thermodynamics from the microscopic laws of motion was the aim of the kinetic theory. One of its consequences was the development of the equilibrium statistical mechanics advanced by Gibbs. Statistical mechanics is based on the description of a system by the probability distribution that bears the name of Gibbs, which for a system in contact with a heat reservoir is proportional to e^{-E/kT}, where E is the energy function and T, the temperature. The crucial property of the equilibrium distribution is that the probability depends on the states of the system only through the energy function. This property, along with the Gibbs expression for the entropy, leads to the relation between energy and entropy, mentioned above, which characterizes the thermodynamic equilibrium.

The entropy of an isolated system in equilibrium remains invariant. But if it is not in equilibrium, its entropy increases and the increase, in this case, is not due to the flux of entropy because the system is isolated. Entropy is being created spontaneously and in this sense it differs from the energy which is a conserved quantity. If a system is not isolated then the variation of the entropy S with time is the algebraic sum of two terms,

(1) \frac{dS}{dt} = \Pi - \Phi,

where Π is the rate at which entropy is being created, the rate of entropy production, and Φ is the flux of entropy.

The production of entropy is related to irreversible processes occurring inside the system, which are understood as processes that are more likely to occur than their time-reversed counterparts. Thermodynamic equilibrium is thus characterized as the state in which a process and its time reversal are equally probable. This characterization of equilibrium, embodied in stochastic thermodynamics, is a dynamical definition, being more comprehensive than the static definition given above in terms of the Gibbs distribution. Stochastic thermodynamics [1-32] provides an approach to equilibrium and out-of-equilibrium thermodynamics that takes into account the dynamical characterization of irreversible processes by assuming that a system evolves in time according to a microscopic stochastic dynamics.

The elementary constituents of a system are assumed to be acted by random forces in addition to the usual deterministic forces. As a consequence the trajectory followed by a particle is not in general deterministic. There are many possible trajectories that a particle may follow from a given point, each one with a certain probability of occurrence. The approach we follow here uses a representation of the dynamics in terms of the probability of the occurrence of a state at a certain instant of time, which is assumed to be governed by an evolution equation.

The main features of the approach that we follow here are: (1) a stochastic dynamics, which is here represented by a Fokker-Planck-Kramers equation [33-36]; (2) the assignment of an energy function; (3) the definition of entropy as having the same form as the Gibbs entropy; (4) a proper definition of the rate of entropy production.

The first part of this text is dedicated to the classical case. In the second part we extend the results obtained in the first part to the quantum case. In particular, we use as the evolution equation a quantum version of the Fokker-Planck-Kramers equation. In this case, the probability density is replaced by the probability density operator, usually called density matrix. In a third part we generalize the results for the case of many degrees of freedom and present an example of a system that displays a nonequilibrium stationary state with an unceasing production of entropy.

2. Evolution equation

Our object of study is a system of particles that interact among themselves and may also be subject to external forces. In addition, each particle is acted upon by random forces, the origin of which may be internal or external to the system. The system evolves in time according to Newton's equation of motion. Due to the random forces, the trajectory is not uniquely defined. There are many possible trajectories starting from a given state, each one with a certain probability.

If the system is at a given state at the initial time, one may ask for the probability that it is at a given state at a later time. An answer to this question is provided by the Fokker-Planck-Kramers (FPK) equation which gives the time evolution of the probability density ρ(x,p,t). We will focus initially on a system with just one degree of freedom in which case x is the position and p is the momentum of the particle, and both quantities constitute the state of the system. The probability that at time t the state of the system is inside dxdp around (x,p) is ρ(x,p,t)dxdp.

In contrast to the equilibrium statistical mechanics, for which the probability density is constant in time, here it depends on time. If we wish the system to reach thermal equilibrium at long times, a process usually called thermalization, then the solution of the evolution equation for long times must be a Gibbs equilibrium distribution, which is characterized by depending on (x,p) only through the energy function associated with the system.

The FPK equation is given by

(2) \frac{\partial \rho}{\partial t} = -\frac{p}{m}\frac{\partial \rho}{\partial x} - \frac{\partial (f\rho)}{\partial p} + \frac{\Gamma}{2}\frac{\partial^2 \rho}{\partial p^2},

where m is the mass of the particle, f is the ordinary force acting on the particle, and Γ is a constant associated with the stochastic forces. The force f is understood to be the sum of an internal force Fc, considered to be a conservative force, and a dissipative force D,

(3) f = F_c + D.

The property of the dissipative force D that distinguishes it from the other forces is that it is an odd function of the momentum.

A derivation of the FPK equation (2) from a Langevin equation can be found in reference [36]. The Langevin equation leading to (2) is the equation of motion for a particle of mass m moving along a straight line which, in addition to the ordinary force f, is also under the action of a stochastic force with zero mean and variance proportional to Γ.

An essential property of the FPK equation, and for that matter of any equation that governs the time evolution of a probability distribution, is that it preserves the normalization of ρ, that is,

(4) \int \rho(x,p,t)\, dx\, dp = 1,

for any instant of time, where the integration is performed on the whole space of states. If ρ is normalized at the initial time, it remains normalized forever. The demonstration of this fundamental property of the FPK equation is given in the appendix 16.
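The conservation of normalization can be illustrated numerically. The sketch below is our own illustration, not part of the original derivation: it discretizes the FPK equation written as a continuity equation (the form given below in (10)), with fluxes evaluated at cell faces and zero flux at the grid boundaries, so that the total probability is preserved to machine precision. The harmonic force f = -Kx - γp and all parameter values are arbitrary choices.

```python
# Illustrative conservative discretization of the FPK equation in continuity
# form: the change of rho in a cell is the difference of fluxes through its
# faces; boundary faces carry zero flux, so total probability is conserved.
# The force f = -K x - gamma p and all parameters are our own choices.
import math

m, K, gamma, Gamma = 1.0, 1.0, 1.0, 2.0
nx, nq, L = 40, 40, 5.0
hx = hp = 2 * L / nx
xs = [-L + (i + 0.5) * hx for i in range(nx)]
ps = [-L + (j + 0.5) * hp for j in range(nq)]

# initial Gaussian, normalized on the grid
rho = [[math.exp(-(x * x + p * p)) for p in ps] for x in xs]
norm = sum(sum(row) for row in rho) * hx * hp
rho = [[r / norm for r in row] for row in rho]

dt = 1e-3
for _ in range(50):
    Fx = [[0.0] * nq for _ in range(nx + 1)]    # fluxes through x faces
    Fp = [[0.0] * (nq + 1) for _ in range(nx)]  # fluxes through p faces
    for i in range(1, nx):
        for j in range(nq):
            Fx[i][j] = ps[j] / m * 0.5 * (rho[i - 1][j] + rho[i][j])
    for i in range(nx):
        for j in range(1, nq):
            pface = -L + j * hp
            f = -K * xs[i] - gamma * pface
            Fp[i][j] = (f * 0.5 * (rho[i][j - 1] + rho[i][j])
                        - 0.5 * Gamma * (rho[i][j] - rho[i][j - 1]) / hp)
    for i in range(nx):
        for j in range(nq):
            rho[i][j] -= dt * ((Fx[i + 1][j] - Fx[i][j]) / hx
                               + (Fp[i][j + 1] - Fp[i][j]) / hp)

total = sum(sum(row) for row in rho) * hx * hp
print(total)  # remains equal to 1 up to roundoff
```

Because the update only moves probability between neighboring cells, the sum over cells telescopes to the boundary fluxes, which vanish; this mirrors the integration-by-parts argument of the appendix.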

As Fc is a conservative force, we may write F_c = -∂H/∂x and p/m = ∂H/∂p, where

(5) H = \frac{p^2}{2m} + V(x)

is the energy function, which is the sum of the kinetic energy and the potential energy V. The first term of the FPK equation and the one involving Fc become

(6) -\frac{p}{m}\frac{\partial \rho}{\partial x} - F_c \frac{\partial \rho}{\partial p} = -\frac{\partial H}{\partial p}\frac{\partial \rho}{\partial x} + \frac{\partial H}{\partial x}\frac{\partial \rho}{\partial p}.

The right-hand side of this equation is written in the abbreviated form as

(7) \frac{\partial H}{\partial x}\frac{\partial \rho}{\partial p} - \frac{\partial H}{\partial p}\frac{\partial \rho}{\partial x} = \{H,\rho\},

which is called the Poisson brackets. Replacing this result in the equation (2), the FPK equation acquires the form

(8) \frac{\partial \rho}{\partial t} = \{H,\rho\} - \frac{\partial J}{\partial p},

where

(9) J = D\rho - \frac{\Gamma}{2}\frac{\partial \rho}{\partial p}.

The FPK equation (2) can also be written in the form

(10) \frac{\partial \rho}{\partial t} = -\frac{\partial J_x}{\partial x} - \frac{\partial J_p}{\partial p},

where J_x = pρ/m and J_p = F_cρ + J. In this form, the FPK equation is understood as a continuity equation, and J_x and J_p are understood as the components of the probability current. This is the form that allowed us to show the property (4), as presented in the appendix 16. The part J of the component J_p is the irreversible probability current, which plays a fundamental role in the present approach. We remark that without J the FPK equation reduces to the Liouville equation of classical statistical mechanics [37],

(11) \frac{\partial \rho}{\partial t} = \{H,\rho\}.

3. Thermodynamic equilibrium

The FPK equation as it stands may or may not describe a system that for long times will be in thermodynamic equilibrium. For long times the solution of the FPK equation is its stationary solution, that is, the solution obtained by setting to zero the right-hand side of the equation, in which case J may or may not vanish.

A system in thermodynamic equilibrium may be said to be the one described by a Gibbs distribution. However, this is a static definition. We need here a dynamic characterization of thermodynamic equilibrium. This is provided by characterizing the thermodynamic equilibrium as the stationary state such that the irreversible current vanishes. Therefore, the equilibrium distribution ρe obeys the condition

(12) D\rho_e - \frac{\Gamma}{2}\frac{\partial \rho_e}{\partial p} = 0,

and in addition

(13) \{H,\rho_e\} = 0.

The solution of this last condition leads to the result that ρe is a function of H, that is, ρe depends on x and p through the energy function H(x,p). This characterizes any Gibbs equilibrium distribution.

The condition (12) is understood as the relation between dissipation, described by D, and noise or fluctuations, described by Γ. That is, in equilibrium there must be a relation between dissipation and fluctuations. We wish to describe a system in contact with a heat reservoir at a temperature T, in which case the appropriate Gibbs equilibrium distribution is [37]

(14) \rho_e = \frac{1}{Z} e^{-\beta H},

where β=1/kT and k is the Boltzmann constant. Replacing (14) in equation (12), we find

(15) D = -\gamma p,

where γ is the constant that connects Γ with temperature,

(16) \Gamma = 2m\gamma kT.

Notice that D is the usual type of dissipative force, proportional to the velocity.
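As an illustration of this fluctuation-dissipation relation, the sketch below (our own construction, with arbitrary parameter values) integrates the Langevin equation for the momentum alone, dp = -γp dt + √Γ dW, by the Euler-Maruyama method. With Γ chosen according to (16), the stationary second moment should approach the equipartition value ⟨p²⟩ = mkT.

```python
# Euler-Maruyama integration of the momentum Langevin equation,
# dp = -gamma p dt + sqrt(Gamma) dW, with Gamma = 2 m gamma k T as in (16).
# The stationary second moment should approach <p^2> = m k T.
# All parameter values are arbitrary illustrative choices.
import math, random

random.seed(1)
m, gamma, kT = 1.0, 1.0, 1.0
Gamma = 2.0 * m * gamma * kT   # fluctuation-dissipation relation (16)
dt, nsteps, nsamples = 0.02, 500, 4000

p2 = 0.0
for _ in range(nsamples):
    p = 0.0
    for _ in range(nsteps):
        p += -gamma * p * dt + math.sqrt(Gamma * dt) * random.gauss(0.0, 1.0)
    p2 += p * p
p2 /= nsamples
print(p2)   # close to m k T = 1
```

Any other choice of Γ would thermalize the momentum to a variance different from mkT, which is the dynamical content of the relation (16).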

4. Energy, heat and entropy

A thermodynamic system may have its energy changing in time. The energy variation is due to the exchange of heat or work with the environment. Here we will treat the case where the external forces are absent so that the variation in energy is only due to the exchange of heat. The energy U of a thermodynamic system, sometimes called internal energy, is the average of the energy function introduced above, that is,

(17) U = \int H \rho\, dx\, dp,

and may depend on time as ρ depends on time. The variation of U with time is

(18) \frac{dU}{dt} = \int H \frac{\partial \rho}{\partial t}\, dx\, dp.

Replacing the time derivative of ρ by using the FPK equation in the form (8), we find

(19) \frac{dU}{dt} = -\int H \frac{\partial J}{\partial p}\, dx\, dp.

The term involving the Poisson brackets vanishes after an integration by parts has been performed. Here and in the following, whenever we do an integration by parts, the integrated term is assumed to vanish. As shown in the appendix 16, this is so because we are considering that at the limits of integration the probability density vanishes rapidly. Performing the integral in (19) by parts, we find

(20) \frac{dU}{dt} = \int J \frac{\partial H}{\partial p}\, dx\, dp.

The right-hand side of this equation is interpreted as the rate at which heat is introduced into the system, or the flux of heat Φq,

(21) \Phi_q = \int J \frac{\partial H}{\partial p}\, dx\, dp.

Thus we may write

(22) \frac{dU}{dt} = \Phi_q,

an equation that may be understood as the conservation of energy. Notice that the flux of heat Φq involves the irreversible part of the probability current.

From thermodynamics, we know that heat is related to entropy through the Clausius equation dQ = TdS, valid for systems in equilibrium, where dQ is the infinitesimal heat exchanged with the system and dS is the infinitesimal increase of the entropy S. The Clausius equation is equivalent to the equation \Phi_q = T(dS/dt), which could be used to define entropy. However, this equation is of no use here because it is valid only for systems in equilibrium. The appropriate way to define the entropy of a system is to use the Gibbs form

(23) S = -k \int \rho \ln \rho\, dx\, dp,

which is a generalization of the Boltzmann entropy S = k ln W, where W is the number of accessible states. Although S given by (23) is usually used for systems in equilibrium, here we are assuming that this form is also appropriate for systems out of equilibrium.

5. Entropy production

If the probability ρ is found as a function of time by solving the FPK equation, then S is obtained as a function of time. Deriving (23) with respect to time, we find

(24) \frac{dS}{dt} = -k \int \frac{\partial \rho}{\partial t} \ln \rho\, dx\, dp.

There is another part involving the time derivative of lnρ but it vanishes if we take into account that ρ is normalized at any time, a result given by equation (4).

Replacing the time derivative of ρ in (24), by using the FPK equation in the form (8), we find

(25) \frac{dS}{dt} = k \int \frac{\partial J}{\partial p} \ln \rho\, dx\, dp.

Again the part involving the Poisson brackets vanishes by an integration by parts. The entropy is not a conserved quantity like the energy, and as a consequence its variation with time is not equal to the flux of entropy. In other terms, the right-hand side of (25) cannot be identified as the flux of entropy. In addition to the flux of entropy, there is another contribution related to the creation of entropy. This contribution is the rate at which entropy is being generated or created, which is called the rate of entropy production, denoted by Π. Thus the variation of the entropy of a system with time is written as

(26) \frac{dS}{dt} = \Pi - \Phi,

where Φ is the flux of entropy from the system to the outside. The next step is to define or postulate the expression for one of the two quantities, Π or Φ. Once one of them is given, the other is obtained by observing that their difference should be equal to the right-hand side of (25).

The rate of entropy production Π is a quantity that vanishes when thermodynamic equilibrium sets in and gives a measure of the deviation from equilibrium. As we have seen above, the vanishing of the irreversible probability current J is a condition for thermodynamic equilibrium. Since the entropy production is a nonnegative quantity and vanishes when J vanishes, it should be related to J². The expression for Π that we are about to introduce meets these two conditions.

Let ρ0 be the probability distribution that makes the irreversible probability current J vanish. Writing J in the form

(27) \frac{J}{\rho} = D - \frac{\Gamma}{2}\frac{\partial \ln \rho}{\partial p},

this condition is equivalent to

(28) D = \frac{\Gamma}{2}\frac{\partial \ln \rho_0}{\partial p}.

The probability ρ0 need not be the equilibrium probability distribution ρe because we are not requiring that {H,ρ0} vanishes, as occurs with ρe. In analogy with the right-hand side of equation (25), we define the rate of entropy production by

(29) \Pi = k \int \frac{\partial J}{\partial p} (\ln \rho - \ln \rho_0)\, dx\, dp.

If we integrate by parts and use the relation

(30) \frac{J}{\rho} = \frac{\Gamma}{2}\frac{\partial}{\partial p}(\ln \rho_0 - \ln \rho),

that follows from (27) and (28), we reach the expression

(31) \Pi = \frac{2k}{\Gamma} \int \frac{J^2}{\rho}\, dx\, dp.

We see that the rate of entropy production is the integral of an expression proportional to J2, as desired. It is nonnegative and vanishes in equilibrium, when J=0, that is,

(32) \Pi \ge 0,

which is a brief statement of the second law of thermodynamics.

The flux of entropy Φ is obtained from dS/dt = Π - Φ by using the expressions (25) and (29),

(33) \Phi = -k \int \frac{\partial J}{\partial p} \ln \rho_0\, dx\, dp.

Performing an integration by parts and using (28), we find

(34) \Phi = \frac{2k}{\Gamma} \int J D\, dx\, dp.

The flux of entropy can also be written as

(35) \Phi = \frac{2k}{\Gamma} \langle D^2 \rangle + k \left\langle \frac{\partial D}{\partial p} \right\rangle,

after the replacement of J, given by (9), in (34) and an integration by parts in the second term. This is an interesting form for the flux of entropy because it can be understood as an average over the probability distribution ρ, which is not the case for the rate of entropy production.

From the expressions for the flux of entropy Φ and the rate of entropy production Π, we draw an important conclusion concerning the Liouville equation (11). Since this equation can be understood as the FPK equation without the irreversible probability current J, and since Φ and Π vanish if J = 0, it follows that the Liouville equation predicts no entropy production and no flux of entropy, and the entropy S is constant in time. If the Liouville equation is employed to describe a closed system that approaches equilibrium but is initially out of equilibrium, then this prediction of the Liouville equation is in contradiction with thermodynamics, which predicts an increase of entropy with time.

Let us consider in the following that D and Γ are related by (12), where ρe is the canonical Gibbs distribution (14), so that the FPK equation describes a system in contact with a thermal reservoir and that thermalizes. Replacing the results (15) and (16) in (9), the expression for the irreversible probability current becomes

(36) J = -\gamma \left( p\rho + mkT \frac{\partial \rho}{\partial p} \right).
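It is instructive to check numerically that the current (36) indeed vanishes on the Gibbs distribution (14). The sketch below is our own illustration: the quartic potential, the parameter values, and the sample points are arbitrary choices, and the momentum derivative is taken by a central finite difference.

```python
# Numerical check that the irreversible current (36),
# J = -gamma (p rho + m k T d(rho)/dp), vanishes on the Gibbs distribution
# rho_e ~ exp(-beta H).  The quartic potential and all values are
# arbitrary illustrative choices.
import math

m, gamma, kT = 1.0, 0.7, 1.3
beta = 1.0 / kT

def rho_e(x, p):
    # unnormalized Gibbs distribution with H = p^2/2m + x^4/4
    return math.exp(-beta * (p * p / (2 * m) + x ** 4 / 4))

def J(x, p, h=1e-5):
    # irreversible current (36) with a central-difference derivative
    drho_dp = (rho_e(x, p + h) - rho_e(x, p - h)) / (2 * h)
    return -gamma * (p * rho_e(x, p) + m * kT * drho_dp)

vals = [abs(J(x, p)) for x in (-1.0, 0.3, 2.0) for p in (-2.0, 0.5, 1.5)]
print(max(vals))   # ~ 0 up to finite-difference error
```

The cancellation is exact for any potential V(x), since ∂ρe/∂p = -(βp/m)ρe makes the two terms of (36) opposite; the finite-difference residual only reflects the discretization of the derivative.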

Replacing the expression for D given by (12) in the equation (34), we find

(37) \Phi = k \int J \frac{\partial \ln \rho_e}{\partial p}\, dx\, dp = -\frac{1}{T} \int J \frac{\partial H}{\partial p}\, dx\, dp.

Comparing with (21), we see that the entropy flux and the heat flux are related by

(38) \Phi = -\frac{1}{T} \Phi_q.

Using equations (22) and (26) we get the following relation

(39) \frac{dU}{dt} - T\frac{dS}{dt} = -T\Pi,

valid at any time. Near equilibrium, Π vanishes faster than the other two time derivatives and

(40) \frac{dU}{dt} = T\frac{dS}{dt},

or dU = TdS, which is the Clausius relation, valid at thermodynamic equilibrium. We remark that T here is the temperature of the heat reservoir. The temperature of the system would be T' = ∂U/∂S if U could be written as a function of S. Out of equilibrium, when Π ≠ 0, this is not possible, but in equilibrium, in view of the relation dU = TdS, U becomes a function of S. The relation dU = TdS is translated into T = ∂U/∂S, which implies T' = T, and T becomes also the temperature of the system.

It is worth mentioning that the variation in time of the free energy F, defined by F = U - TS, at constant T, is

(41) \frac{dF}{dt} = -T\Pi,

which follows from (39). Since Π ≥ 0, then dF/dt ≤ 0 and F decreases monotonically towards its equilibrium value. This inequality can also be viewed as the H theorem of Boltzmann. Defining the Boltzmann H by

(42) H = \int \rho \ln \frac{\rho}{\rho_e}\, dx\, dp,

we see that it is equal to βF plus a constant. Then, it follows from dF/dt ≤ 0 that dH/dt ≤ 0, which is the H theorem of Boltzmann.

6. Work

The specific systems that we have considered so far are those that exchange only heat with the environment and are described by the FPK equation (8). Now we wish to consider the case where the systems are also subject to external forces. The appropriate way to treat this case is to add to the ordinary force appearing in the FPK equation (2) an external force Fe, so that f now reads

(43) f = F_c + F_e + D.

Repeating the reasoning leading to (8) from (2), we reach the evolution equation

(44) \frac{\partial \rho}{\partial t} = \{H,\rho\} - F_e \frac{\partial \rho}{\partial p} - \frac{\partial J}{\partial p},

where J is the irreversible probability current, given by (36), which is the one appropriate for the contact with a heat reservoir at a temperature T.

Due to the presence of the external force, the variation in time of the energy has another contribution in addition to the flux of heat,

(45) \frac{dU}{dt} = \Phi_q - \Phi_w,

where Φq is the heat flux into the system and Φw is the work performed by the system per unit time against the external forces, or power.

To determine the variation of energy with time, we proceed in the same way as we did to derive (22) from the evolution equation (2), but now we use the evolution equation (44). The result is the equation (45) where Φq is the expression (21) and the power Φw is

(46) \Phi_w = -\int F_e \frac{\partial H}{\partial p} \rho\, dx\, dp.

The equation (25) for the variation of entropy with time remains unchanged by the addition of the external force. To see this we replace the expression (44) into (24). The term involving the Poisson brackets vanishes as we have already seen. The term involving the external force is

(47) k \int F_e \frac{\partial \rho}{\partial p} \ln \rho\, dx\, dp = -k \int F_e \frac{\partial \rho}{\partial p}\, dx\, dp,

where we have performed an integration by parts. But this integral also vanishes if we assume that Fe does not depend on p.

The rate of entropy production is still given by (31), and considering that the expression for dS/dt remains unchanged, as we have just seen, so does the expression (35) for the entropy flux. As these relations are not modified by the presence of the external force, the relation Φ = -Φq/T, expressed by equation (38), between the entropy flux and the heat flux, valid for a system in contact with a heat reservoir at a temperature T, also remains unchanged.

Taking into account that dS/dt = Π - Φ and that dU/dt = Φq - Φw, we reach the following relation

(48) \frac{dU}{dt} - T\frac{dS}{dt} = -\Phi_w - T\Pi.

Considering a process in which T is constant, the left-hand side is dF/dt, where F = U - TS is the free energy, that is,

(49) \frac{dF}{dt} = -\Phi_w - T\Pi.

Integrating in time, between t1 and t2, we find

(50) \Delta F = W - T \int_{t_1}^{t_2} \Pi\, dt,

where W is the work performed by the external force,

(51) W = -\int_{t_1}^{t_2} \Phi_w\, dt,

which is, up to the sign, the time integral of the power Φw. Since Π ≥ 0, it follows that

(52) \Delta F \le W,

that is, the variation of the free energy is smaller than or equal to the work done on the system. The equality holds in an equilibrium process, when Π = 0.

The following remark is in order. The fact that the system is in contact with a heat reservoir at a certain temperature T does not mean that T is the temperature of the system, as we have pointed out in the remark just below equation (40). If the rate of entropy production is nonzero, no temperature can be assigned to the system, and F = U - TS is not, strictly speaking, the free energy of the system, although U and S are the energy and entropy of the system. But we may suppose that for t ≤ t1 and t ≥ t2 the system is in equilibrium, in which case Π is nonzero only for t1 < t < t2. Within this scenario, T can be considered the temperature of the system at times t1 and t2, F at these two instants of time will be the free energy of the system, and ΔF in (50) will represent the difference in the free energies of the system.

The derivations that we have just carried out, such as that of the inequality (52), make no restriction on the type of external force. It could be a nonconservative force or a time-dependent force. The latter type of external force occurs, for instance, when the system is driven at our will.

7. Harmonic oscillator

Let us apply the results we have found so far to the case of a harmonic oscillator, for which F_c = -Kx and

(53) H = \frac{p^2}{2m} + \frac{1}{2}Kx^2.

It is in contact with a heat reservoir at a temperature T so that the FPK equation is

(54) \frac{\partial \rho}{\partial t} = -\frac{p}{m}\frac{\partial \rho}{\partial x} + Kx\frac{\partial \rho}{\partial p} + \gamma \frac{\partial (p\rho)}{\partial p} + \frac{m\gamma}{\beta}\frac{\partial^2 \rho}{\partial p^2}.

The FPK equation can be solved exactly by assuming the following Gaussian form for the probability distribution

(55) \rho = \frac{1}{Z} e^{-(ax^2 + bp^2 + 2cxp)/2},

where the parameters a, b, c depend on time and

(56) Z = \frac{2\pi}{\sqrt{ab - c^2}}.

The solution is given in the appendix 16, where we find the parameters a, b, and c as functions of time.

From the probability distribution (55), we determine the covariances,

(57) \langle x^2 \rangle = \frac{b}{ab - c^2},
(58) \langle p^2 \rangle = \frac{a}{ab - c^2},
(59) \langle xp \rangle = -\frac{c}{ab - c^2},

and other properties. The energy is

(60) U = \frac{1}{2m}\langle p^2 \rangle + \frac{K}{2}\langle x^2 \rangle,

and the entropy is found by using its definition and is

(61) S = k + k\ln 2\pi - \frac{k}{2}\ln(ab - c^2).

To determine dS/dt, Π and Φ, we need to find J, which is defined by (36). In the present case it reads

(62) J = \frac{m\gamma}{\beta}\left( cx + bp - \frac{\beta}{m}p \right)\rho.

From J we find

(63) \frac{dS}{dt} = \frac{km\gamma}{\beta} b - k\gamma,
(64) \Phi = \frac{k\beta\gamma}{m} \langle p^2 \rangle - k\gamma,
(65) \Pi = \frac{k\beta\gamma}{m} \langle p^2 \rangle + \frac{km\gamma}{\beta} b - 2k\gamma,

and it becomes clear that dS/dt = Π - Φ.
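These closed-form results can be checked against the defining integrals. The sketch below is our own illustration: it evaluates the entropy (23), the entropy production (31) and the entropy flux (34) by a midpoint sum on a grid, for an arbitrary nonequilibrium choice of the Gaussian parameters a, b, c, and compares them with the formulas (61), (65) and (64). Units with m = γ = β = k = 1 are an arbitrary choice.

```python
# Grid check of the harmonic-oscillator formulas: entropy (61), entropy flux
# (64) and entropy production (65) are compared with direct midpoint-rule
# integration of the definitions (23), (34) and (31).  Units with
# m = gamma = beta = k = 1; the parameters a, b, c are an arbitrary
# nonequilibrium choice.
import math

a, b, c = 2.0, 1.5, 0.3        # must satisfy a b - c^2 > 0
D = a * b - c * c
Z = 2 * math.pi / math.sqrt(D)
Gamma = 2.0                     # Gamma = 2 m gamma k T

n, L = 240, 6.0
h = 2 * L / n
S = Pi = Phi = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    for j in range(n):
        p = -L + (j + 0.5) * h
        rho = math.exp(-(a * x * x + b * p * p + 2 * c * x * p) / 2) / Z
        J = (c * x + (b - 1.0) * p) * rho     # current (62) in these units
        S += -rho * math.log(rho)             # integrand of (23)
        Pi += (2 / Gamma) * J * J / rho       # integrand of (31)
        Phi += (2 / Gamma) * J * (-p)         # integrand of (34), D = -p
S, Pi, Phi = S * h * h, Pi * h * h, Phi * h * h

S_f = 1 + math.log(2 * math.pi) - 0.5 * math.log(D)   # formula (61)
Phi_f = a / D - 1.0                                    # formula (64)
Pi_f = a / D + b - 2.0                                 # formula (65)
print(S - S_f, Pi - Pi_f, Phi - Phi_f)   # all ~ 0
```

For this choice of parameters the production Π is positive and the flux Φ is negative, i.e. heat flows into the system while its entropy increases at the rate Π - Φ.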

From the asymptotic values of the parameters a, b, and c, given in the appendix 16, which are a = Kβ, b = β/m, and c = 0, we find the equilibrium values of the various quantities, which are ⟨xp⟩ = 0,

(66) \frac{1}{2m}\langle p^2 \rangle = \frac{1}{2}K\langle x^2 \rangle = \frac{1}{2\beta} = \frac{1}{2}kT,
(67) U = \frac{1}{\beta} = kT,
(68) S = k + k\ln 2\pi - \frac{k}{2}\ln \frac{K\beta^2}{m},

dS/dt=0, Φ=0, and Π=0. As the parameters a, b, and c decay exponentially to their asymptotic values, so do the properties obtained above.

We remark that the probability distribution approaches the equilibrium probability distribution

(69) \rho_e = \frac{1}{Z} e^{-\beta H},

where H is the energy function (53), and the system thermalizes properly.

8. Quantum evolution equation

To extend the stochastic approach developed above to the quantum case we need to provide a quantum version of the evolution equation. One way of setting up the quantum version is to use a procedure known as canonical quantization, which amounts to replacing the Poisson brackets of classical mechanics by a quantum commutator. For the case of just one degree of freedom that we are considering here, the Poisson brackets between A and B are given by

(70) \{A,B\} = \frac{\partial A}{\partial x}\frac{\partial B}{\partial p} - \frac{\partial B}{\partial x}\frac{\partial A}{\partial p}.

The canonical quantization is obtained by performing the replacement

(71) \{A,B\} \to \frac{1}{i\hbar}[\hat{A},\hat{B}],

where ℏ is the Planck constant, the quantities \hat{A} and \hat{B} on the right are understood as quantum operators, and [\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}.

From the quantization rule above, we obtain two useful rules. The first is obtained by setting A=x in the Poisson brackets, which gives

(72) \{x,B\} = \frac{\partial B}{\partial p}.

Using the quantization rule, we obtain

(73) \frac{\partial B}{\partial p} \to \frac{1}{i\hbar}[\hat{x},\hat{B}].

In an analogous way, if we set B=p, we find

(74) \{A,p\} = \frac{\partial A}{\partial x},

and using the quantization rule, we obtain

(75) \frac{\partial A}{\partial x} \to \frac{1}{i\hbar}[\hat{A},\hat{p}].

We remark that \hat{x} and \hat{p} on the right-hand sides of (73) and (75) are quantum operators representing the position and the momentum of a particle, and we recall that according to quantum mechanics [\hat{x},\hat{p}] = i\hbar.

The last two rules are useful in the transformation of a differential equation such as the FPK equation into a quantum equation. It should be remarked however that the equation obtained by this procedure is not a mathematical derivation of the quantum equation from the classical equation. In fact, the opposite is true. From the quantum equation one reaches the classical equation by taking the classical limit. Thus, the quantization rules should be used as a guide to find a quantum equation which at the end should be introduced as a postulate.

It is usual to use the hat symbol to denote a quantum operator as we have done above. But from now on we will drop the hat symbol and denote an operator by a letter without the hat. Thus the position and momentum operator, for instance, will be denoted by x and p.

Let us consider the FPK equation in the form (8). According to the quantization rules, the quantum evolution equation is

(76) \frac{\partial \rho}{\partial t} = \frac{1}{i\hbar}[H,\rho] - \frac{1}{i\hbar}[x,J],

where now ρ, H, and J are quantum operators. Since quantum operators can be represented by matrices, most properties of quantum operators are better understood if stated in terms of matrices. For instance, the density operator ρ, which is the quantum version of the probability density distribution, has the following property: its diagonal elements are nonnegative and the sum of the diagonal elements, its trace, equals unity,

(77) \mathrm{Tr}\,\rho = 1.

The operator H is the quantum energy function, or the Hamiltonian, given by

(78) H = \frac{1}{2m} p^2 + V,

where V is a function of x. We remark that without J the equation reduces to the quantum Liouville equation of quantum statistical mechanics.

The property (77) is the analogue of the normalization of the probability density that we used before, and thus it should be preserved in time. To see that this is the case, let us take the trace of the right-hand side of equation (76). There are two terms to be considered and both vanish because each one is the trace of a commutator, and the trace of a commutator vanishes. Therefore, the left-hand side, which is the time derivative of the trace of ρ, vanishes, and the trace of ρ remains constant in time.

The trace of a commutator vanishes because Tr(AB)=Tr(BA). This is a particular case of the cyclic property of the trace Tr(ABC)=Tr(BCA). This cyclic property allows us to write the following property

(79) \mathrm{Tr}([A,B]C) = \mathrm{Tr}(A[B,C]),

that we will employ further on.
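These trace identities are easy to verify numerically. The sketch below is our own illustration: it checks that the trace of a commutator vanishes and that the identity (79) holds for random complex 4x4 matrices, using plain-Python matrix products.

```python
# Check of the trace identities used in the text: Tr[A,B] = 0 and the
# cyclic-property consequence (79), Tr([A,B]C) = Tr(A[B,C]), for random
# complex 4x4 matrices (plain-Python linear algebra, illustrative only).
import random

random.seed(0)
n = 4

def rnd():
    return [[complex(random.random(), random.random()) for _ in range(n)]
            for _ in range(n)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(n)] for i in range(n)]

def comm(X, Y):            # commutator [X, Y] = XY - YX
    return sub(mul(X, Y), mul(Y, X))

def tr(X):
    return sum(X[i][i] for i in range(n))

A, B, C = rnd(), rnd(), rnd()
t1 = tr(comm(A, B))                                   # should vanish
t2 = tr(mul(comm(A, B), C)) - tr(mul(A, comm(B, C)))  # identity (79)
print(abs(t1), abs(t2))   # both ~ 0 up to roundoff
```

Both identities follow from Tr(XY) = Tr(YX) alone, so they hold for arbitrary matrices, not only for Hermitian ones.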

9. Energy and entropy

The average U = \langle H\rangle of the energy function H with respect to the density operator ρ is given by

(80) U = \mathrm{Tr}(H\rho),

and is the quantum analog of the integral in (17). Differentiating it with respect to time,

(81) \frac{dU}{dt} = \mathrm{Tr}\left(H\,\frac{\partial\rho}{\partial t}\right),

and using the evolution equation (76), we find

(82) \frac{dU}{dt} = -\frac{1}{i\hbar}\mathrm{Tr}(H[x,J]),

where the term involving the commutator [H,ρ] vanishes owing to the property (79). Using again this same property we get

(83) \frac{dU}{dt} = \frac{1}{i\hbar}\mathrm{Tr}([x,H]J).

The right-hand side is interpreted as the heat flux into the system

(84) \Phi_q = \frac{1}{i\hbar}\mathrm{Tr}([x,H]J),

and

(85) \frac{dU}{dt} = \Phi_q.

The definition of entropy for the quantum case is that introduced by von Neumann,

(86) S = -k\,\mathrm{Tr}(\rho\ln\rho),

and corresponds to the extension of the Gibbs entropy to the quantum case. Differentiating this equation with respect to time, we find

(87) \frac{dS}{dt} = -k\,\mathrm{Tr}\left(\frac{\partial\rho}{\partial t}\ln\rho\right),

where the term involving the derivative of lnρ vanishes in view of the normalization property (77). Using the evolution equation, we find

(88) \frac{dS}{dt} = \frac{k}{i\hbar}\mathrm{Tr}([x,J]\ln\rho),

where again the term involving the commutator [H,ρ] vanishes owing to the property (79).

10. Rate of entropy production

The right-hand side of equation (88) is not equal to the flux of entropy because the entropy is not a conserved quantity. There is another source of entropy, which comes from dissipation inside the system. Thus, as before, the time variation of the entropy has two terms,

(89) \frac{dS}{dt} = \Pi - \Phi,

where Π is the rate of entropy production and Φ is the flux of entropy from the system to the outside.

Guided by the classical version, the production of entropy is defined as follows. The quantum irreversible current J has not been defined yet but it is expressed in terms of the density operator, that is, J(ρ). Let us denote by ρ0 the quantity such that J(ρ0)=0. If the commutator of ρ0 with H also vanishes then ρ0 is identified as the density at thermodynamic equilibrium. However, here we do not demand that this commutator vanishes. The rate of entropy production is defined as

(90) \Pi = \frac{k}{i\hbar}\mathrm{Tr}\left\{[x,J]\left(\ln\rho - \ln\rho_0\right)\right\},

and it becomes clear that Π vanishes whenever J vanishes. Taking into account (88) and (89), the expression for the flux of entropy Φ is

(91) \Phi = -\frac{k}{i\hbar}\mathrm{Tr}\left\{[x,J]\ln\rho_0\right\}.

Let us assume that the irreversible current J, which has not yet been specified, is so defined that the evolution equation describes a system in contact with a heat reservoir at temperature T. In thermodynamic equilibrium J vanishes and ρ0 should be identified as corresponding to the Gibbs canonical distribution which in the quantum case reads

(92) \rho_0 = \frac{1}{Z}e^{-\beta H},

where

(93) Z = \mathrm{Tr}\left(e^{-\beta H}\right).

Replacing ρ0 in the expressions for Π and Φ, we find

(94) \Pi = \frac{k}{i\hbar}\mathrm{Tr}\left\{[x,J]\left(\ln\rho + \beta H\right)\right\},
(95) \Phi = \frac{k\beta}{i\hbar}\mathrm{Tr}\left([x,J]H\right).

Now let us compare equations (95) and (84). We see that Φ and Φq are related by

(96) \Phi = -\frac{1}{T}\Phi_q,

which is the thermodynamic relation that should exist between the flux of entropy Φ and the heat flux Φq when a system is in contact with a heat reservoir at a temperature T. Other thermodynamic relations that we have obtained for the classical case, such as those given by equations (39) and (40), can also be shown to be valid in the quantum case.

11. Irreversible current

The quantum irreversible current J has not yet been specified. Here we will choose it by applying the rules of the canonical quantization to the expression (9). The form chosen for the irreversible current is

(97) J = \frac{1}{2}\left(D\rho + \rho D^{\dagger}\right) - \frac{\Gamma}{2}\frac{1}{i\hbar}[x,\rho],

where D is a quantum operator representing the dissipative force and Γ is a real constant, as in the classical case. Notice that we have written a symmetrized form for the product of the dissipative force and the density operator. In one of the products we have used the Hermitian conjugate of D, denoted D†, so that the whole expression is Hermitian.

The matrix A† that represents the Hermitian conjugate of an operator is obtained from the matrix A that represents the operator by transposing the elements of the matrix and taking the complex conjugate of each element. If A_{ij} and (A†)_{ij} denote the elements of these two matrices, then (A†)_{ij} = (A_{ji})*. A Hermitian operator is an operator which is equal to its Hermitian conjugate. An important property of such an operator is that its eigenvalues are real.

If we wish to describe a system that at long times reaches thermodynamic equilibrium, then D and Γ should be related in such a way that J vanishes when ρ is the equilibrium density operator ρe. Imposing the vanishing of J when ρ is replaced by ρe, we find the condition

(98) \frac{1}{2}\left(D\rho_e + \rho_e D^{\dagger}\right) = \frac{\Gamma}{2}\frac{1}{i\hbar}[x,\rho_e].

The solution for D is

(99) D = \frac{\Gamma}{2}\frac{1}{i\hbar}\,\rho_e^{-1}[x,\rho_e] = \frac{\Gamma}{2}\frac{1}{i\hbar}\left(\rho_e^{-1}x\rho_e - x\right),

which is understood as the relation between dissipation, described by D, and noise or fluctuations, described by Γ. In equilibrium there must be a relation between dissipation and fluctuations.

That (99) is a solution can be verified by substitution, using the property that the Hermitian conjugate of a product of two operators equals the product of the Hermitian conjugates of each operator in the reverse order. In the present case, (ρeD†)† = Dρe because ρe is Hermitian.

Next we seek a current J that could describe the contact of the system with a heat reservoir. In this case

(100) \rho_e = \frac{1}{Z}e^{-\beta H},

which replaced in (99) gives

(101) D = \frac{\Gamma}{2}\frac{1}{i\hbar}\left(e^{\beta H}x\,e^{-\beta H} - x\right).

This form of dissipation is certainly not the form of the classical dissipation found above, which is proportional to the momentum. However, at high temperatures this is the case. If we expand the terms between parentheses on the right-hand side of (101) up to terms of order β, we find D = -\gamma p, where \gamma = \Gamma\beta/2m, or

(102) \Gamma = \frac{2\gamma m}{\beta}.

For an arbitrary temperature, the dissipation, according to the present approach, is not proportional to the momentum and is given by (101), which we write, by using (102), as

(103) D = -\gamma g,

where

(104) g = \frac{m}{i\hbar\beta}\left(x - e^{\beta H}x\,e^{-\beta H}\right).

Replacing the irreversible current (97) into the evolution equation (76) we may write it in the more explicit form

(105) \frac{\partial\rho}{\partial t} = \frac{1}{i\hbar}[H,\rho] + \frac{\gamma}{2}\frac{1}{i\hbar}\left[x,\, g\rho + \rho g^{\dagger}\right] - \frac{\gamma m}{\beta\hbar^{2}}[x,[x,\rho]],

which we call the quantum FPK equation.

12. Quantum harmonic oscillator

For a quantum harmonic oscillator the energy function is

(106) H = \frac{1}{2m}p^{2} + \frac{1}{2}m\omega^{2}x^{2},

where ω is the frequency of the oscillations. To find the solution of the quantum FPK equation, from which we may determine the thermodynamic properties, it is necessary to know the explicit expression of the dissipative force D = -\gamma g; that is, we need to know g as a function of x and p. For the harmonic oscillator we show in the appendix that

(107) g = ap + ibx,

where a and b are real numbers given by

(108) a = \frac{1}{\beta\hbar\omega}\sinh\beta\hbar\omega, \qquad b = \frac{m}{\beta\hbar}\left(\cosh\beta\hbar\omega - 1\right).
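The expressions (107)–(108) can be checked numerically in a truncated oscillator basis, where H is diagonal and x is tridiagonal, so the matrix elements of e^{βH}x e^{−βH} carry no truncation error. A minimal sketch, assuming units ℏ = m = ω = 1 and the sign conventions reconstructed in (103)–(104):

```python
import numpy as np

# Truncated harmonic-oscillator basis; units with hbar = m = omega = 1.
N, beta = 30, 0.7
n = np.arange(N)
Adag = np.diag(np.sqrt(n[1:]), -1)      # creation operator
Aop = Adag.conj().T                     # annihilation operator
x = (Aop + Adag) / np.sqrt(2.0)
p = 1j * (Adag - Aop) / np.sqrt(2.0)
E = n + 0.5                             # oscillator energies

# g from the definition (104): H is diagonal here, so
# (e^{bH} x e^{-bH})_{mn} = e^{b(E_m - E_n)} x_{mn} exactly.
conj_x = np.exp(beta * E)[:, None] * x * np.exp(-beta * E)[None, :]
g_def = (1.0 / (1j * beta)) * (x - conj_x)

# g = a p + i b x with the coefficients (108); s = beta*hbar*omega.
s = beta
a = np.sinh(s) / s
b = (np.cosh(s) - 1.0) / s
g_lin = a * p + 1j * b * x
```

Both constructions agree to machine precision, supporting the reconstructed signs.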

A solution of the quantum FPK equation (105) can be obtained by a method similar to that used in the classical case, which is to assume a solution of the form

(109) \rho = \frac{1}{Z}e^{-(ax^{2} + bp^{2} + cxp + cpx)/2},

where a, b, and c are real constants that depend on time (not to be confused with the constants of equation (108)). Here we will limit ourselves to writing down the equations for the covariances and determining their asymptotic values, which are the values at thermodynamic equilibrium. The time-dependent solution can be found in reference [30]. Multiplying the quantum FPK equation successively by x², p², and xp, and taking the trace, we obtain the following equations for the covariances

(110) \frac{d}{dt}\langle p^{2}\rangle = -m\omega^{2}\langle px + xp\rangle + \hbar b\gamma - 2a\gamma\langle p^{2}\rangle + \frac{2\gamma m}{\beta},
(111) \frac{d}{dt}\langle x^{2}\rangle = \frac{1}{m}\langle px + xp\rangle,
(112) \frac{d}{dt}\langle xp\rangle = \frac{1}{m}\langle p^{2}\rangle - m\omega^{2}\langle x^{2}\rangle - \frac{a\gamma}{2}\langle px + xp\rangle.

The equation for \langle px\rangle is not needed because xp - px = i\hbar.

At the stationary state, which is a state of thermodynamic equilibrium, we find

(113) \langle xp\rangle = -\langle px\rangle = \frac{i\hbar}{2},
(114) \frac{1}{2m}\langle p^{2}\rangle = \frac{1}{2}m\omega^{2}\langle x^{2}\rangle = \frac{1}{2}\hbar\omega\left(\frac{1}{e^{\beta\hbar\omega}-1} + \frac{1}{2}\right).

From these results one reaches the expected expression for the average energy of a quantum oscillator,

(115) \langle H\rangle = \hbar\omega\left(\frac{1}{e^{\beta\hbar\omega}-1} + \frac{1}{2}\right).

We remark that the probability distribution approaches the equilibrium probability distribution

(116) \rho_e = \frac{1}{Z}e^{-\beta H},

where H is the quantum energy function (106), and the system thermalizes properly.
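A quick way to see the thermalization is to integrate the covariance equations (110)–(112) numerically and compare the stationary values with (114). A minimal Euler sketch, assuming units ℏ = m = ω = k = 1 and arbitrary illustrative values of γ and β:

```python
import numpy as np

# Units with hbar = m = omega = k = 1; gamma and beta arbitrary.
gamma, beta = 0.5, 0.8
s = beta                       # beta*hbar*omega
a = np.sinh(s) / s             # coefficients (108)
b = (np.cosh(s) - 1.0) / s

# Covariances: p2 = <p^2>, x2 = <x^2>, sym = <px + xp>.
p2, x2, sym = 2.0, 2.0, 0.0    # arbitrary initial condition
dt = 1e-3
for _ in range(200000):        # integrate up to t = 200
    dp2 = -sym + b * gamma - 2 * a * gamma * p2 + 2 * gamma / beta
    dx2 = sym
    dsym = 2 * (p2 - x2) - a * gamma * sym
    p2, x2, sym = p2 + dt * dp2, x2 + dt * dx2, sym + dt * dsym

# Expected thermal value <p^2> = m*hbar*omega*(nbar + 1/2), eq. (114).
expected = 1.0 / (np.exp(beta) - 1.0) + 0.5
```

The covariances relax to the quantum thermal values, not to the classical equipartition values.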

13. Multiple degrees of freedom

Up to this point, we have considered systems with just one degree of freedom. Here we wish to consider the case of a system with multiple degrees of freedom. The derivations of the results for the present case parallel those obtained for one degree of freedom and will not be shown in full detail. We restrict ourselves to the classical case, but the quantum case can be obtained in a way similar to the case of one degree of freedom and can be found in reference [30].

An appropriate treatment of a system with many degrees of freedom begins with the generalization of the FPK equation (44) to this case. As we have seen, this equation describes a system in contact with a heat reservoir and subject to an external force. The generalization that we give below is the one appropriate to describe a system in contact with multiple reservoirs at distinct temperatures and subject to several forces. We denote by xi a Cartesian component of the position and by pi the respective component of the momentum related to a certain degree of freedom. The energy function is a sum of a kinetic energy and a potential energy,

(117) H = \sum_i \frac{p_i^{2}}{2m} + V.

The FPK equation, which governs the time evolution of the probability density ρ, now reads

(118) \frac{\partial\rho}{\partial t} = \{H,\rho\} - \sum_i F_i^{e}\frac{\partial\rho}{\partial p_i} - \sum_i\frac{\partial J_i}{\partial p_i},

where Fie are the Cartesian components of the external force, which may be nonconservative and time dependent, and

(119) J_i = D_i\rho - \frac{\Gamma_i}{2}\frac{\partial\rho}{\partial p_i}

are the components of the dissipative probability current. We choose the dissipative force Di to be of the usual form D_i = -\gamma p_i and \Gamma_i = 2\gamma mkT_i, so that we may interpret the FPK equation as describing a system in contact with several heat reservoirs at temperatures Ti. Therefore,

(120) J_i = -\gamma\left(p_i\rho + mkT_i\frac{\partial\rho}{\partial p_i}\right).

In the absence of external forces and if all heat baths have the same temperature Ti=T, then the stationary state is a state of thermodynamic equilibrium because in this case Ji vanishes for all i. Indeed, if we replace the Gibbs probability distribution

(121) \rho_e = \frac{1}{Z}e^{-\beta H},

where β=1/kT, in the expression for Ji we see that it vanishes. We remark that the Poisson brackets also vanish.

The time variation of the energy U = \langle H\rangle has the same form as equation (45),

(122) \frac{dU}{dt} = \Phi_q - \Phi_w,

but now Φq is a sum of the heat fluxes coming from each reservoir, that is,

(123) \Phi_q = \sum_i \Phi_{q_i},

where each heat flux has the form (21), with J replaced by Ji,

(124) \Phi_{q_i} = \int J_i\,\frac{\partial H}{\partial p_i}\,dx\,dp.

Using the relation \partial H/\partial p_i = p_i/m and performing an integration by parts, the heat flux can be written as

(125) \Phi_{q_i} = -\gamma\left(\frac{1}{m}\langle p_i^{2}\rangle - kT_i\right).

It becomes clear that if kT_i/2 is larger than the average kinetic energy \langle p_i^{2}\rangle/2m, then \Phi_{q_i} > 0 and heat flows into the system; otherwise, heat flows from the system to the heat reservoir.

The expression for the power Φw is similar to that of equation (46) but now there is a sum over all components of the force

(126) \Phi_w = -\sum_i\int F_i^{e}\,\frac{\partial H}{\partial p_i}\,\rho\,dx\,dp.

Using again the relation \partial H/\partial p_i = p_i/m, the power can be written as

(127) \Phi_w = -\frac{1}{m}\sum_i \langle F_i^{e}p_i\rangle.

It is useful to keep in mind that the forces Fie are exerted by an external agent, which is a power device. The role played by the power device in relation to the transfer of mechanical work is analogous to the role played by the heat reservoir in relation to the transfer of heat. We see from (127) that the power has the usual form of a force multiplied by a velocity. If Fie and pi have the same sign, work is performed by the power device on the system; otherwise, the system performs work on the power device.

The heat flux and the power in (122) might be understood as functions of the time so that we may write Φq=dQ/dt and Φw=dW/dt in which case equation (122) reduces to the form

(128) dU = dQ - dW,

which is the usual way of writing the conservation of energy. However, it should be remarked that dQ and dW are not in general exact differentials, although dU is. The concepts of exact and inexact differentials are discussed in the appendix.

The variation of entropy with time is

(129) \frac{dS}{dt} = \Pi - \Phi,

where the rate of entropy production Π and the entropy flux Φ are generalizations of equations (31) and (35) to the present case,

(130) \Pi = \frac{1}{m\gamma}\sum_i\frac{1}{T_i}\int\frac{J_i^{2}}{\rho}\,d\xi,

where the integration extends over the space of states, and

(131) \Phi = \sum_i\frac{\gamma}{T_i}\left(\frac{1}{m}\langle p_i^{2}\rangle - kT_i\right).

In view of equation (125), we see that the flux of entropy is related to the heat fluxes by

(132) \Phi = -\sum_i\frac{\Phi_{q_i}}{T_i}.

Using this relation, we may derive again all the results involving the free energy obtained in section 6, including the inequality (52).

14. Nonequilibrium steady state

Here we wish to show by an example that the present approach is indeed capable of describing systems displaying nonequilibrium steady states. That is, for long times the system reaches a stationary state in which the irreversible currents are nonzero and entropy is permanently being created. One way of setting a system in a nonequilibrium steady state is to place it in contact with heat reservoirs at different temperatures. Another possibility is to place the system under the action of a power device. The two possibilities are embodied in the development made in the previous section.

We examine a system with two degrees of freedom. The potential energy is harmonic and the energy function is

(133) H = \frac{1}{2m}\left(p_1^{2} + p_2^{2}\right) + \frac{1}{2}K\left(x_1^{2} + x_2^{2}\right) - Lx_1x_2,

so that the conservative forces are linear, F_1 = -Kx_1 + Lx_2 and F_2 = -Kx_2 + Lx_1. In addition to the conservative forces, the system is under the action of a nonconservative external force given by

(134) F_1^{e} = cx_2, \qquad F_2^{e} = -cx_1.

According to equation (127),

(135) \Phi_w = -\frac{c}{m}\langle x_2p_1\rangle + \frac{c}{m}\langle x_1p_2\rangle,

which is minus the power performed by the power device. Each degree of freedom is understood as being coupled to one of the two heat reservoirs, at the temperatures T1 and T2. The fluxes of heat associated with each reservoir are given by (125) and are

(136) \Phi_{q_1} = -\frac{\gamma}{m}\left(\langle p_1^{2}\rangle - mkT_1\right),
(137) \Phi_{q_2} = -\frac{\gamma}{m}\left(\langle p_2^{2}\rangle - mkT_2\right).

From the two heat fluxes we determine the entropy flux by means of the relation (132)

(138) \Phi = -\frac{\Phi_{q_1}}{T_1} - \frac{\Phi_{q_2}}{T_2}.

From now on we wish to determine the above quantities in the steady state. In the steady state \Pi = \Phi and, in view of the relation (138), it suffices to determine the heat fluxes \Phi_{q_i} and the power \Phi_w. To find these quantities we need the covariances \langle x_ip_j\rangle and \langle p_i^{2}\rangle at the steady state. Therefore, we should solve the FPK equation (118) with the energy function H given by (133) and the external forces given by (134).

In view of the fact that the forces, conservative and nonconservative, are linear, it is possible to solve the FPK equation exactly. The solution is carried out in the appendix, where the covariances at the stationary state are determined. Replacing the covariances in the expressions (135), (136), and (137), we find

(139) \Phi_w = \frac{2c}{m}C,
(140) \Phi_{q_1} = \frac{c - L}{m}C, \qquad \Phi_{q_2} = \frac{c + L}{m}C,

where

(141) C = -C_0\left(L\Delta T + 2cT\right),

T = (T_1 + T_2)/2, \Delta T = T_1 - T_2, and

(142) C_0 = \frac{m\gamma k}{2\left(m\gamma^{2}K + L^{2} - c^{2}\right)}.

The range of values of c is such that the denominator is positive.

In the steady state the production of entropy equals the entropy flux which, from (138) is given by

(143) \Pi = \Phi = \frac{C_0}{mT_1T_2}\left(2cT + L\Delta T\right)^{2},

which is clearly nonnegative.

Let us analyze the results we have just found for L>0. The various possibilities for the heat fluxes and power are shown in table 1. In all cases where c≠0, except one, energy is being dissipated; that is, work is performed on the system (Φw<0), which in turn releases it in the form of heat to one or to both heat reservoirs. The exception is the case in which the system performs work (Φw>0), in which case heat flows from the hotter reservoir to the colder reservoir through the system, and the whole system functions as a heat engine.

Table 1
Heat fluxes and power for the system defined by the energy function (133) and by the external force (134), where \Delta T = T_1 - T_2. The conventions for the fluxes are: if \Phi_{q_1} is positive, heat flows from reservoir 1 to the system; if \Phi_{q_2} is positive, heat flows from reservoir 2 to the system; if \Phi_w is positive, the system performs work on the external device. Notice that \Phi_w = \Phi_{q_1} + \Phi_{q_2}. We are considering here L > 0, |c| < L, and T_1 \ge T_2.
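The steady-state covariances of this linear model can also be obtained numerically from the stationary Lyapunov equation of the corresponding Langevin dynamics, giving an independent check of the energy balance Φw = Φq1 + Φq2 and of the nonnegativity of Π. A sketch with NumPy/SciPy; the parameter values are arbitrary choices for illustration, with kB = m = 1:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative parameters (k_B = m = 1).
K, L, c, gamma = 2.0, 0.5, 0.3, 1.0
T1, T2 = 1.5, 0.5

# Langevin drift for z = (x1, x2, p1, p2): dx_i = p_i dt,
# dp1 = (-K x1 + L x2 + c x2 - gamma p1) dt + noise of strength 2*gamma*T1,
# dp2 = (-K x2 + L x1 - c x1 - gamma p2) dt + noise of strength 2*gamma*T2.
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-K, L + c, -gamma, 0.0],
              [L - c, -K, 0.0, -gamma]])
D = np.diag([0.0, 0.0, 2 * gamma * T1, 2 * gamma * T2])

# Stationary covariance matrix S solves A S + S A^T + D = 0.
S = solve_continuous_lyapunov(A, -D)

# Heat fluxes (136)-(137) and power (135) at the steady state.
phi_q1 = -gamma * (S[2, 2] - T1)
phi_q2 = -gamma * (S[3, 3] - T2)
phi_w = c * (S[0, 3] - S[1, 2])              # (c/m)(<x1 p2> - <x2 p1>)
entropy_prod = -(phi_q1 / T1 + phi_q2 / T2)  # Pi = Phi in the steady state
```

With ΔT ≠ 0 and c ≠ 0 the entropy production is strictly positive, while the first law fixes Φw = Φq1 + Φq2 exactly.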

15. Path integral

The approach to the stochastic thermodynamics that we have developed here is based on the FPK equation which is understood as an equation that governs the time evolution of the probability distribution ρ. The solution of the evolution equation gives ρ at any instant of time. For convenience here we denote a state by ξ which is understood as the collection of positions and momenta of the particles of the system.

If we solve the FPK equation considering that at the initial time t0 it was in a certain state ξ0, then ρ(ξ,t) is understood as the conditional probability of finding the system in state ξ at time t, given that it was in the state ξ0 at time t0, and we denote it by P(ξ,t|ξ0,t0).

Let us consider now a discretized trajectory, that is, a trajectory for which the system is at state ξ0 at an initial time t0, in ξ1 at time t1, in ξ2 at time t2, …, and in ξn at the final time tn. The probability of the occurrence of this discretized trajectory is a successive product of

(144) P(\xi_\ell, t_\ell\,|\,\xi_{\ell-1}, t_{\ell-1}),

from \ell = 1 until \ell = n, multiplied by the probability P(\xi_0, t_0). Omitting the reference to the instants of time, the trajectory probability is

(145) P(\xi_n|\xi_{n-1})\cdots P(\xi_2|\xi_1)P(\xi_1|\xi_0)P(\xi_0).

From now on we consider that the time intervals between two successive instants of time are the same and equal to τ. It is understood that τ is small enough so that the trajectory approaches a continuous trajectory.

Generally speaking the probability of a trajectory is a joint probability, which we denote by

(146) P ( ξ n , ξ n 1 , , ξ 2 , ξ 1 , ξ 0 ) .

The identification of (146) with (145) defines a type of stochastic dynamics associated with the name of Markov, and the approach we are using here, with the FPK equation as the evolution equation, is thus a Markovian stochastic dynamics.

The probability distribution (146) is a joint probability distribution. If we integrate over all variables except one of them, we find the probability of that variable. For instance, if we integrate over \xi_0, \xi_1, \ldots, \xi_{n-1}, we find the probability of \xi_n = \xi at time t_n = t, which is \rho(\xi,t), that is,

(147) \rho(\xi) = \int P(\xi, \xi_{n-1}, \ldots, \xi_0)\,d\xi_{n-1}\cdots d\xi_0.
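The identification of the marginal of the path probability with ρ, equation (147), can be illustrated with a two-state Markov chain, summing the product of transition probabilities (145) over all earlier states:

```python
import numpy as np

# Two-state Markov chain: P[j, i] = P(next = j | current = i).
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p0 = np.array([0.5, 0.5])     # initial distribution P(xi_0)

n = 3  # number of steps
# Joint path probability (145): P(x3|x2) P(x2|x1) P(x1|x0) P(x0),
# summed over all states except the last one, as in (147).
rho_n = np.zeros(2)
for x0 in range(2):
    for x1 in range(2):
        for x2 in range(2):
            for x3 in range(2):
                rho_n[x3] += P[x3, x2] * P[x2, x1] * P[x1, x0] * p0[x0]

# Marginalizing the path probability must agree with evolving p0 directly.
direct = np.linalg.matrix_power(P, n) @ p0
```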

In which circumstance should we use the path probability (146)? If we wish to find the average U of the energy function H(ξ) at time t, for instance, it suffices to use the probability distribution ρ(ξ,t). However, if we wish to find the average of the mechanical work, we should use the path probability because the work is a path integral.

Let L be the work along a certain trajectory of the force with components fi,

(148) L = \sum_i\int_{\mathrm{path}} f_i\,dx_i,

where the index 'path' serves to remind us that the integral is taken along a certain trajectory. In a discretized form, the path integral is written as the sum

(149) \int_{\mathrm{path}} f_i\,dx_i = f_{i0}a_{i0} + f_{i1}a_{i1} + \cdots + f_{in}a_{in},

where f_{i\ell} is the value of f_i at the \ell-th step, and a_{i\ell} is the increment in x_i at the \ell-th step. In this discretized form we see clearly that the path integral depends on \xi_0, \xi_1, \ldots, \xi_n, and to find its average we should use the path probability (146). This amounts to multiplying the right-hand side of (149) by the right-hand side of (146) and integrating over all variables \xi_0, \xi_1, \ldots, \xi_n. The result of this procedure is indicated by an index 'path' in the signs of the average. Therefore, the average of L, which we call W, is written as

(150) W = \langle L\rangle_{\mathrm{path}}.

Although we call work both L and W, it should be understood that L is the actual work and W its average.

The path integral (148) can be written as a time integral of the power, as is well known. A trajectory may be defined parametrically by giving \xi, and thus x_i and p_i, as functions of a parameter, which we take to be the time t. In terms of this parameter, the integral (148) becomes a time integral,

(151) L = \sum_i\int_{t_1}^{t_2} f_i\frac{p_i}{m}\,dt,

where we have replaced dxi/dt by the velocity pi/m. This expression is written as

(152) L = \int_{t_1}^{t_2}\phi\,dt,

where ϕ is the power at time t, and given by

(153) \phi = \sum_i f_i\frac{p_i}{m}.

Taking the average of the expression (152) we find

(154) \langle L\rangle_{\mathrm{path}} = \int_{t_1}^{t_2}\langle\phi\rangle\,dt,

where in the right-hand side the average is the usual average taken by the use of the probability density ρ(ξ,t) at time t because ϕ depends only on ξ at time t. Thus the average over a path integral is transformed into an average over the ordinary probability density. The integrand on the right-hand side of (154) is understood as the average power,

(155) \Phi_w = \langle\phi\rangle = \sum_i\int f_i\frac{p_i}{m}\,\rho\,d\xi,

which coincides with the power of the external force given by the equivalent forms (127) or (126), if we recall that f_i = -F_i^{e}. Since W is the average of L, we may write

(156) W = t 1 t 2 Φ w d t .

The result (156) or its equivalent form (154) where L and ϕ are given by (148) and (153), respectively, can immediately be generalized by replacing fi(ξ) by any other function of ξ. Let us suppose that it is replaced by Ji/ρ where Ji is the irreversible current given by (119)

(157) J_i = D_i\rho - \frac{\Gamma_i}{2}\frac{\partial\rho}{\partial p_i}.

The path integral of this quantity is

(158) \Psi = \sum_i\int_{\mathrm{path}}\frac{J_i}{\rho}\,dx_i,

and the associated flux according to the result above is

(159) \Phi_q = \frac{1}{m}\sum_i\left\langle\frac{1}{\rho}J_ip_i\right\rangle = \frac{1}{m}\sum_i\int J_ip_i\,d\xi,

which is the heat flux as given by (124). The quantity

(160) Q = \int_{t_1}^{t_2}\Phi_q\,dt

is thus the heat exchanged between the two instants of time, and according to the results above may be written as the path average

(161) Q = \langle\Psi\rangle_{\mathrm{path}}.

Let us consider that a system in contact with a heat bath at a temperature T is acted on by external forces during a certain interval of time. The following equality, relating the free energy to the work performed by the system during this interval of time, has been shown to be valid [6],

(162) e^{-\beta\Delta F} = \left\langle e^{+\beta L}\right\rangle_{\mathrm{path}}.

Therefore, if the right-hand side of equation (162) is measured, we may obtain ΔF. Using the inequality \langle e^{\eta}\rangle \ge e^{\langle\eta\rangle}, valid for any random variable η, we find

(163) \Delta F \le -\langle L\rangle_{\mathrm{path}} = -W,

which should be compared with the equation (52).
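The equality (162) can be verified exactly for a sudden quench, for which the work reduces to the energy difference at fixed state and the path average becomes an average over the initial Gibbs distribution. A sketch for a two-level system with arbitrary illustrative energies; here W_on = E1 − E0 is the work done on the system, so the text's L corresponds to −W_on:

```python
import numpy as np

# Two-level system, sudden quench of the energies; exact Jarzynski check.
beta = 1.3
E0 = np.array([0.0, 1.0])    # energies before the quench
E1 = np.array([0.5, 2.0])    # energies after the quench

Z0 = np.exp(-beta * E0).sum()
Z1 = np.exp(-beta * E1).sum()
dF = -(np.log(Z1) - np.log(Z0)) / beta   # free energy difference

# For an instantaneous quench the work done ON the system in state x is
# E1[x] - E0[x]; with the text's convention, L = -(E1 - E0) is the work
# done BY the system. Path average over the initial Gibbs ensemble:
p_init = np.exp(-beta * E0) / Z0
avg_exp_betaL = (p_init * np.exp(-beta * (E1 - E0))).sum()

# Average work done on the system, for checking the inequality (163).
mean_work_on = (p_init * (E1 - E0)).sum()
```

The average of e^{βL} reproduces e^{−βΔF} exactly, and Jensen's inequality gives ΔF ≤ ⟨W_on⟩.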

Before we end this section it is appropriate to discuss the experimental measurement of the several quantities presented in the theory. The quantities that are measurable are those that we call state functions, that is, quantities that are functions of the random variables, in the present case the positions and momenta, and are themselves random variables. An experimental result obtained for a state function E, be it the value of one trial or the arithmetic average of several trials, should be compared with the average predicted by the theory, which is written as

(164) \langle E\rangle = \int E(\xi)\rho(\xi)\,d\xi,

such as the energy, or as a path average as is the case of the mechanical work.

Let us consider the case of the entropy which is

(165) S = -k\int\rho(\xi)\ln\rho(\xi)\,d\xi,

which sometimes is written as

(166) S = -k\langle\ln\rho\rangle.

Although one may write it in this form, this expression is merely an abbreviation of the right-hand side of (165) and cannot be understood as the average of a state function, simply because -k\ln\rho is not a state function. Although -k\ln\rho is sometimes called an instantaneous entropy, it is not a measurable quantity. This point can be better understood if we try to calculate the entropy from a Monte Carlo simulation. One immediately realizes that it is impossible to determine \ln\rho along a Monte Carlo run, and an alternative should be used.

The observation made above with respect to the average (166) extends to the average in (161), because J_i/\rho is not a state function and is not a measurable quantity. This seems paradoxical because no one denies that the heat Q is measurable. However, a moment of reflection reveals that heat is measured through the work dissipated, and not as the quantity \Psi above.

16. Discussion and conclusion

We have developed an approach to the stochastic thermodynamics based on the use of the FPK equation, which governs the time evolution of the probability distribution. The main feature of the approach in addition to the evolution equation is the assignment of an energy function, the definition of entropy and the introduction of an expression for the rate of production of entropy. According to the approach, these quantities are well defined quantities in equilibrium or out of equilibrium. This is in contrast to other quantities of thermodynamics such as the temperature, which is defined only when the system is in thermodynamic equilibrium.

The evolution equation contains the mechanisms of dissipation and stochastic fluctuation, or noise, which lead the system toward equilibrium if an appropriate relation exists between dissipation and noise. The mechanism is included in the irreversible current through the term containing the dissipative force and the term containing the quantity Γ, which is a measure of the noise. If thermodynamic equilibrium sets in, the irreversible current vanishes. Out of equilibrium, the irreversible current is nonzero and the production of entropy, which is related to the square of the irreversible currents, is greater than zero. The rate of entropy production is thus a measure of the deviation of a system from thermodynamic equilibrium and of its irreversibility.

We have considered systems with a continuous space of states, in which case the appropriate evolution equation is the FPK equation. However, the present approach can be extended to a discrete space of states, in which case the evolution equation is called a master equation [29, 31]. It can be applied to systems of interacting particles of different species, including reactions among them [31]. We did not treat the case where the parameters appearing in the evolution equation depend on time, but the present approach can also be applied, for instance, to the case where the temperature oscillates periodically in time [39, 40]. In this case, for long times the system may not properly reach a stationary state, in the sense of being independent of time, but may reach a state with a probability that oscillates in time.

Stochastic thermodynamics is sometimes described as a discipline whose quantities are defined at the level of single trajectories. This denomination emphasizes the fluctuation aspect of the theory, which is a relevant feature in systems with few degrees of freedom, the main application of the theory. In this respect the theory resembles statistical mechanics, which incorporates fluctuations and may also be applied to small systems. Thus an alternative name for the discipline would be stochastic mechanics, avoiding the term thermodynamics, which is usually associated with macroscopic systems.

The emphasis on trajectories and path integrals is a distinguishing feature, used for instance in the Jarzynski equality (162). In the present approach, on the other hand, the emphasis rests on the fluxes and currents of various types, but a relationship between fluxes and path integrals exists, as shown in section 15, revealing the equivalence between the two approaches. The present approach also emphasizes the connection with the laws of thermodynamics, particularly the second law, expressed by the nonnegativity of the rate of entropy production.

The present approach to quantum stochastic thermodynamics is based on the quantum evolution equation, which is a canonical quantization of the FPK equation. It differs from other approaches, such as those based on the Lindblad operators [41]. However, the present quantum FPK equation has similarities with the quantum master equations derived by Dekker [42] and by Caldeira and Leggett [43, 44]. The main feature of the quantum FPK equation that distinguishes it from other approaches is that it is centered on the irreversible density current operator, the analog of the classical irreversible probability current, which plays a fundamental role in defining the fluxes of various types as well as the rate of entropy production. The similarity of the quantum evolution equation to its classical counterpart is useful because the generalization of the concepts of classical stochastic thermodynamics, such as those associated with the probability current, to the quantum case becomes easier. A final distinguishing feature is that a system described by the quantum FPK equation thermalizes properly. That is, for long times the system approaches the equilibrium state if, of course, the relationship (99) between dissipation and fluctuation is obeyed.

Appendix A

The FPK equation (2) can be written in the form

(167) \frac{\partial\rho}{\partial t} = -\frac{\partial J_x}{\partial x} - \frac{\partial J_p}{\partial p},

where J_x and J_p are the components of the probability current, given by

(168) J_x = \frac{p}{m}\rho, \qquad J_p = F\rho - \frac{\Gamma}{2}\frac{\partial\rho}{\partial p}.

Let us integrate both sides of the equation (167) in a region R of the space (x,p) delimited by a boundary line,

(169) \frac{d}{dt}\int_R\rho\,dx\,dp = -\int_R\frac{\partial J_x}{\partial x}\,dx\,dp - \int_R\frac{\partial J_p}{\partial p}\,dx\,dp.

The first integral can be integrated in x,

(170) \int_R\frac{\partial J_x}{\partial x}\,dx\,dp = \int\left[J_x(x_2,p) - J_x(x_1,p)\right]dp.

For simplicity we are considering that R is a convex region so that there are two values of x at the boundary of R for a given p, which we are denoting by x1(p) and x2(p). In an analogous way we write the second integral as

(171) \int_R\frac{\partial J_p}{\partial p}\,dx\,dp = \int\left[J_p(x,p_2) - J_p(x,p_1)\right]dx.

If Jx and Jp vanish at the boundary then both integrals vanish and

(172) \frac{d}{dt}\int_R\rho\,dx\,dp = 0,

from which follows that the integral is a constant that we set equal to unity,

(173) \int_R\rho\,dx\,dp = 1.

This result is extended to the case where the region R is the whole space of states, in which case we demand that Jx and Jp vanish at infinity, a requirement that is provided by demanding that ρ vanishes rapidly at infinity.

Let us consider now the case of an integral of the type

(174) \int A\,\frac{\partial B}{\partial x}\,dx\,dp,

where the integral is over the whole space of states. If we perform an integration by parts, the result is

(175) \int\frac{\partial(AB)}{\partial x}\,dx\,dp - \int\frac{\partial A}{\partial x}B\,dx\,dp.

Assuming that AB vanishes rapidly at the limits of integration as we did above, the first integral vanishes and we are left with the result

(176) \int A\,\frac{\partial B}{\partial x}\,dx\,dp = -\int\frac{\partial A}{\partial x}B\,dx\,dp.
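Relation (176) is easily checked by numerical quadrature for functions that vanish at the integration limits, for instance Gaussians (the example below is an illustration, not part of the original text):

```python
import numpy as np

# A and B vanish rapidly at the limits, so the boundary term drops out.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
A = np.exp(-0.5 * x**2)
B = x * np.exp(-0.5 * (x - 1.0)**2)

dA = np.gradient(A, dx)   # centered finite-difference derivatives
dB = np.gradient(B, dx)

# Both sides of (176), restricted here to one variable.
lhs = (A * dB).sum() * dx
rhs = -(dA * B).sum() * dx
```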

Appendix B

Here we solve the FPK equation for the case of a harmonic force F_c = -Kx. The equation is

(177) \frac{\partial\rho}{\partial t} = -\frac{p}{m}\frac{\partial\rho}{\partial x} + Kx\frac{\partial\rho}{\partial p} + \gamma\frac{\partial(p\rho)}{\partial p} + \frac{m\gamma}{\beta}\frac{\partial^{2}\rho}{\partial p^{2}},

and it can be solved exactly by assuming the following Gaussian form for the probability distribution

(178) \rho = \frac{1}{Z}e^{-(ax^{2} + bp^{2} + 2cxp)/2},

where the parameters a, b, c and Z depend on time. Replacing this form in the equation (177), we see that the left and right-hand sides will only have terms of the types x2, p2 and xp. Equating the respective coefficients of these terms we find equations for the parameters a, b, and c. There is no need to seek an equation for Z because this quantity can be obtained from the three parameters a, b, and c. This follows from the normalization of (178), which gives

(179) Z = \int e^{-(a x^2 + b p^2 + 2 c x p)/2}\, dx\, dp = \frac{2\pi}{\sqrt{ab - c^2}}.

Replacing the Gaussian distribution (178) in the FPK equation we may find the equations for the three parameters. However, the equations are too complicated and we will instead seek equations that determine the covariances \langle x^2 \rangle, \langle p^2 \rangle, and \langle xp \rangle. Before that we should write down the relations between the covariances and the three parameters, which are obtained from the probability distribution (178), and are

(180) \langle x^2 \rangle = \frac{b}{ab - c^2},
(181) \langle p^2 \rangle = \frac{a}{ab - c^2},
(182) \langle xp \rangle = -\frac{c}{ab - c^2}.

Inverting these relations we find

(183) a = \frac{\langle p^2 \rangle}{\langle x^2 \rangle \langle p^2 \rangle - \langle xp \rangle^2},
(184) b = \frac{\langle x^2 \rangle}{\langle x^2 \rangle \langle p^2 \rangle - \langle xp \rangle^2},
(185) c = -\frac{\langle xp \rangle}{\langle x^2 \rangle \langle p^2 \rangle - \langle xp \rangle^2}.
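Relations (179)–(182) can be verified numerically. The sketch below (the parameter values are arbitrary, subject to a, b > 0 and ab > c², so that the distribution is normalizable) checks the normalization by a direct sum over a grid, and the covariances by inverting the matrix of coefficients:

```python
import numpy as np

# Arbitrary parameters with a, b > 0 and ab > c^2
a, b, c = 2.0, 3.0, 1.0
det = a * b - c**2

# Normalization (179): brute-force Riemann sum over a large grid
x = np.linspace(-8, 8, 801)
X, P = np.meshgrid(x, x)
w = (x[1] - x[0])**2
Z = np.sum(np.exp(-(a*X**2 + b*P**2 + 2*c*X*P) / 2)) * w
print(abs(Z - 2*np.pi/np.sqrt(det)) < 1e-6)

# Relations (180)-(182): the covariance matrix of the Gaussian (178)
# is the inverse of the coefficient matrix [[a, c], [c, b]]
Sigma = np.linalg.inv(np.array([[a, c], [c, b]]))
print(np.isclose(Sigma[0, 0], b/det),
      np.isclose(Sigma[1, 1], a/det),
      np.isclose(Sigma[0, 1], -c/det))
```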

It remains now to determine the covariances as functions of time. To find the equations for the covariances we proceed as follows. We multiply both sides of the FPK equation successively by x2, p2 and xp, and integrate in x and p. Performing appropriate integration by parts, we find

(186) \frac{d}{dt}\langle x^2 \rangle = \frac{2}{m} \langle xp \rangle,
(187) \frac{d}{dt}\langle p^2 \rangle = -2K \langle xp \rangle - 2\gamma \langle p^2 \rangle + \frac{2m\gamma}{\beta},
(188) \frac{d}{dt}\langle xp \rangle = \frac{1}{m} \langle p^2 \rangle - K \langle x^2 \rangle - \gamma \langle xp \rangle.

The stationary solution of these equations is \langle x^2 \rangle = 1/K\beta, \langle p^2 \rangle = m/\beta, and \langle xp \rangle = 0. Taking these results into account, we define variables that are the deviations of the covariances from their stationary values: A = \langle x^2 \rangle - 1/K\beta, B = \langle p^2 \rangle - m/\beta, and C = \langle xp \rangle. These variables obey the set of linear equations

(189) \frac{dA}{dt} = \frac{2}{m} C,
(190) \frac{dB}{dt} = -2K C - 2\gamma B,
(191) \frac{dC}{dt} = \frac{1}{m} B - K A - \gamma C,

which we write in matrix form

(192) \frac{d}{dt} \begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} 0 & 0 & 2/m \\ 0 & -2\gamma & -2K \\ -K & 1/m & -\gamma \end{pmatrix} \begin{pmatrix} A \\ B \\ C \end{pmatrix}.

The solution for each variable is of the type e^{\lambda t}, where \lambda is an eigenvalue of the square matrix above. The eigenvalues are

(193) \lambda_1 = -\gamma + \sqrt{\gamma^2 - 4K/m},
(194) \lambda_2 = -\gamma,
(195) \lambda_3 = -\gamma - \sqrt{\gamma^2 - 4K/m},

and all have negative real parts; they are real when \gamma^2 \geq 4K/m. The general solution is

(196) A = A 1 e λ 1 t + A 2 e λ 2 t + A 3 e λ 3 t ,
(197) B = B 1 e λ 1 t + B 2 e λ 2 t + B 3 e λ 3 t ,
(198) C = C 1 e λ 1 t + C 2 e λ 2 t + C 3 e λ 3 t ,

and the coefficients are not all independent, but are related by

(199) \lambda_i A_i = \frac{2}{m} C_i,
(200) \lambda_i B_i = -2K C_i - 2\gamma B_i,
(201) \lambda_i C_i = \frac{1}{m} B_i - K A_i - \gamma C_i.

Thus only three, say A1, A2, and A3 can be chosen to be independent, and they are determined by the initial conditions.
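The eigenvalues (193)–(195) can be checked by diagonalizing the matrix of (192) numerically. A minimal sketch (the parameter values are arbitrary, chosen with γ² > 4K/m so that the eigenvalues are real):

```python
import numpy as np

# Illustrative parameters with gamma^2 > 4K/m
m, K, gamma = 1.0, 1.0, 3.0

# Matrix of equation (192) governing the deviations (A, B, C)
M = np.array([[0.0, 0.0, 2.0 / m],
              [0.0, -2.0 * gamma, -2.0 * K],
              [-K, 1.0 / m, -gamma]])

numeric = np.sort(np.linalg.eigvals(M).real)

# Closed forms (193)-(195)
s = np.sqrt(gamma**2 - 4.0 * K / m)
analytic = np.sort([-gamma + s, -gamma, -gamma - s])

print(np.allclose(numeric, analytic))
```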

It is worth determining the solution for long times. In this case the solution is dominated by the largest eigenvalue, which is λ1. The covariances are

(202) \langle x^2 \rangle = \frac{1}{K\beta} + A_1 e^{\lambda_1 t},
(203) \langle p^2 \rangle = \frac{m}{\beta} + B_1 e^{\lambda_1 t},
(204) \langle xp \rangle = C_1 e^{\lambda_1 t},

from which we find the three parameters

(205) a = K\beta - a_1 e^{\lambda_1 t},
(206) b = \frac{\beta}{m} - b_1 e^{\lambda_1 t},
(207) c = -c_1 e^{\lambda_1 t},

where a_1 = K^2 \beta^2 A_1, b_1 = \beta^2 B_1 / m^2, and c_1 = K \beta^2 C_1 / m.

Appendix C

We wish to determine here in an explicit form the operator g that enters the dissipative force D = -\gamma g for the quantum harmonic oscillator, where

(208) g = \frac{im}{\hbar\beta} \left( e^{\beta H} x\, e^{-\beta H} - x \right),

and

(209) H = \frac{1}{2m} p^2 + \frac{1}{2} m \omega^2 x^2.

To this end we start with the following identity [38]

(210) e^{\beta H} x\, e^{-\beta H} = x + \beta [H,x] + \frac{\beta^2}{2!} [H,[H,x]] + \frac{\beta^3}{3!} [H,[H,[H,x]]] + \cdots

Using the notation

(211) A_n = [H,[H,\ldots,[H,x]\ldots]],

where the number of commutators is equal to n, the identity above is written as

(212) e^{\beta H} x\, e^{-\beta H} = \sum_{n=0}^{\infty} \frac{\beta^n}{n!} A_n,

where A_0 = x. The quantities A_n obey the recursive relation

(213) A_{n+1} = [H, A_n].

The determination of A_n is easier if we use the relations

(214) [H, x] = -\frac{i\hbar}{m} p,
(215) [H, p] = i\hbar m \omega^2 x,

which are obtained by using the commutation relation [x, p] = i\hbar. From these relations we get the two useful rules,

(216) [H,[H,x]] = \hbar^2 \omega^2 x,
(217) [H,[H,p]] = \hbar^2 \omega^2 p.

The first two coefficients of the expansion are

(218) A_0 = x, \qquad A_1 = -\frac{i\hbar}{m} p.

Next, with the two rules above in mind, we observe that A2 will be proportional to x and A3 will be proportional to p, and, in general, An will be proportional to x if n is even, and it will be proportional to p if n is odd.

Let us consider the case n even and write An=anx. Then using the two rules above,

(219) A_{n+2} = [H,[H,A_n]] = \hbar^2 \omega^2 a_n x,

so that

(220) a_{n+2} = \hbar^2 \omega^2 a_n,

from which we find

(221) a_n = (\hbar\omega)^n,

because a0=1. The part of the expansion (212) corresponding to n even is

(222) \sum_{n\,\mathrm{even}} \frac{\beta^n}{n!} A_n = x \sum_{n\,\mathrm{even}} \frac{(\beta\hbar\omega)^n}{n!} = x \cosh \beta\hbar\omega.

Now we consider the case n odd and write An=bnp. Then using the two rules above

(223) A_{n+2} = [H,[H,A_n]] = \hbar^2 \omega^2 b_n p,

so that

(224) b_{n+2} = \hbar^2 \omega^2 b_n,

from which we find

(225) b_n = -\frac{i}{m\omega} (\hbar\omega)^n,

because b_1 = -i\hbar/m. The part of the expansion (212) corresponding to n odd is

(226) \sum_{n\,\mathrm{odd}} \frac{\beta^n}{n!} A_n = -\frac{i\,p}{m\omega} \sum_{n\,\mathrm{odd}} \frac{(\beta\hbar\omega)^n}{n!} = -\frac{i\,p}{m\omega} \sinh \beta\hbar\omega.

Collecting the results above we find

(227) e^{\beta H} x\, e^{-\beta H} = -\frac{i\,p}{m\omega} \sinh \beta\hbar\omega + x \cosh \beta\hbar\omega,

and the quantity (208) becomes

(228) g = p\, \frac{\sinh \beta\hbar\omega}{\beta\hbar\omega} + \frac{im}{\hbar\beta}\, x \left( \cosh \beta\hbar\omega - 1 \right).
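Since H is diagonal in the energy basis, the conjugation in (227) acts elementwise on the matrix of x, so the identity can be checked with truncated harmonic-oscillator matrices. A minimal sketch (the truncation size, the value of β, and the units ℏ = m = ω = 1 are arbitrary choices):

```python
import numpy as np

N, beta = 60, 0.5
n = np.arange(N)
adag = np.diag(np.sqrt(n[1:]), -1)   # creation operator a^dagger
a = adag.T                           # annihilation operator a
x = (a + adag) / np.sqrt(2.0)        # position, with hbar = m = omega = 1
p = 1j * (adag - a) / np.sqrt(2.0)   # momentum
E = n + 0.5                          # eigenvalues of H

# Left-hand side of (227): e^{beta H} x e^{-beta H}, computed elementwise
M = x * np.exp(beta * (E[:, None] - E[None, :]))

# Right-hand side of (227)
rhs = -1j * p * np.sinh(beta) + x * np.cosh(beta)

print(np.allclose(M, rhs))
```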

Appendix D

Let us suppose that f is a function of several variables, which we denote by x_i, and that these variables depend on time. The derivative of f with respect to time is

(229) \frac{df}{dt} = \sum_i f_i \frac{dx_i}{dt},

where fi are functions of xi given by

(230) f_i = \frac{\partial f}{\partial x_i}.

Equation (229) can be written in simplified form

(231) df = \sum_i f_i\, dx_i,

where dx_i are the differentials of the variables x_i and df is the differential of f. Since f is a function of the x_i, the following relation is valid

(232) \frac{\partial f_i}{\partial x_j} = \frac{\partial f_j}{\partial x_i}.

Now we raise the following question. Let g be a function of t, let x_i depend on time as before, and let us assume that

(233) \frac{dg}{dt} = \sum_i g_i \frac{dx_i}{dt},

where g_i are given functions of the variables x_j. The question now arises whether g could depend on time only through the variables x_i, that is, whether

(234) g(t) = g(x_1(t), x_2(t), \ldots).

If that is possible then according to our reasoning above the given functions gi of the variables xj must fulfill the condition

(235) \frac{\partial g_i}{\partial x_j} = \frac{\partial g_j}{\partial x_i},

for all pairs i, j. If this condition is not satisfied, it is not possible to write g as a function of the x_i. In this case, if we write (233) in the simplified form

(236) dg = \sum_i g_i\, dx_i,

we say that dg is not an exact differential.
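The criterion (235) is easily tested symbolically. In the sketch below (the two differential forms are arbitrary textbook examples, not taken from the text), y dx + x dy passes the test and is the exact differential of f = xy, while y dx − x dy fails it:

```python
import sympy as sp

x, y = sp.symbols('x y')

# df = y dx + x dy: condition (235) holds, so df is exact (f = x*y)
g1, g2 = y, x
print(sp.diff(g1, y) - sp.diff(g2, x))   # difference vanishes: exact

# dg = y dx - x dy: condition (235) fails, dg is not an exact differential
h1, h2 = y, -x
print(sp.diff(h1, y) - sp.diff(h2, x))   # difference is nonzero: not exact
```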

Appendix E

We determine here the covariances \langle x_i x_j \rangle, \langle x_i p_j \rangle, and \langle p_i p_j \rangle for the system described by the FPK equation (118) with F_1^c = -Kx_1 - Lx_2, F_2^c = -Kx_2 - Lx_1, F_1^e = -cx_2, and F_2^e = cx_1, which we reproduce here in the following form

(237) \frac{\partial \rho}{\partial t} = -\frac{p_1}{m} \frac{\partial \rho}{\partial x_1} - \frac{p_2}{m} \frac{\partial \rho}{\partial x_2} + \frac{\partial}{\partial p_1} (Kx_1 + bx_2 + \gamma p_1) \rho + \frac{\partial}{\partial p_2} (Kx_2 + ax_1 + \gamma p_2) \rho + \gamma m k T_1 \frac{\partial^2 \rho}{\partial p_1^2} + \gamma m k T_2 \frac{\partial^2 \rho}{\partial p_2^2},

where b = L + c and a = L - c.

Multiplying successively equation (237) by xixj, xipj, and pipj, and performing the integration we find the following equations, after appropriate integration by parts,

(238) \frac{d}{dt}\langle x_1^2 \rangle = \frac{2}{m} \langle x_1 p_1 \rangle,
(239) \frac{d}{dt}\langle x_2^2 \rangle = \frac{2}{m} \langle x_2 p_2 \rangle,
(240) \frac{d}{dt}\langle x_1 x_2 \rangle = \frac{1}{m} \langle x_1 p_2 \rangle + \frac{1}{m} \langle x_2 p_1 \rangle,
(241) \frac{d}{dt}\langle x_1 p_2 \rangle = \frac{1}{m} \langle p_1 p_2 \rangle - K \langle x_1 x_2 \rangle - a \langle x_1^2 \rangle - \gamma \langle x_1 p_2 \rangle,
(242) \frac{d}{dt}\langle x_2 p_1 \rangle = \frac{1}{m} \langle p_1 p_2 \rangle - K \langle x_1 x_2 \rangle - b \langle x_2^2 \rangle - \gamma \langle x_2 p_1 \rangle,
(243) \frac{d}{dt}\langle x_1 p_1 \rangle = \frac{1}{m} \langle p_1^2 \rangle - K \langle x_1^2 \rangle - b \langle x_1 x_2 \rangle - \gamma \langle x_1 p_1 \rangle,
(244) \frac{d}{dt}\langle x_2 p_2 \rangle = \frac{1}{m} \langle p_2^2 \rangle - K \langle x_2^2 \rangle - a \langle x_1 x_2 \rangle - \gamma \langle x_2 p_2 \rangle,
(245) \frac{d}{dt}\langle p_1^2 \rangle = -2K \langle x_1 p_1 \rangle - 2b \langle x_2 p_1 \rangle - 2\gamma \langle p_1^2 \rangle + 2\gamma m k T_1,
(246) \frac{d}{dt}\langle p_2^2 \rangle = -2K \langle x_2 p_2 \rangle - 2a \langle x_1 p_2 \rangle - 2\gamma \langle p_2^2 \rangle + 2\gamma m k T_2,
(247) \frac{d}{dt}\langle p_1 p_2 \rangle = -K \langle x_1 p_2 \rangle - K \langle x_2 p_1 \rangle - b \langle x_2 p_2 \rangle - a \langle x_1 p_1 \rangle - 2\gamma \langle p_1 p_2 \rangle.

Now we look for the stationary solution. Setting the above equations to zero, we find that the following covariances vanish: \langle x_1 p_1 \rangle = 0, \langle x_2 p_2 \rangle = 0, \langle p_1 p_2 \rangle = 0. The other covariances are the solution of the set of linear equations

(248) \langle x_1 p_2 \rangle + \langle x_2 p_1 \rangle = 0,
(249) K \langle x_1 x_2 \rangle + a \langle x_1^2 \rangle + \gamma \langle x_1 p_2 \rangle = 0,
(250) K \langle x_1 x_2 \rangle + b \langle x_2^2 \rangle + \gamma \langle x_2 p_1 \rangle = 0,
(251) \langle p_1^2 \rangle - mK \langle x_1^2 \rangle - bm \langle x_1 x_2 \rangle = 0,
(252) \langle p_2^2 \rangle - mK \langle x_2^2 \rangle - am \langle x_1 x_2 \rangle = 0,
(253) b \langle x_2 p_1 \rangle + \gamma \langle p_1^2 \rangle = \gamma m k T_1,
(254) a \langle x_1 p_2 \rangle + \gamma \langle p_2^2 \rangle = \gamma m k T_2.

A straightforward calculation leads us to the result

(255) \langle x_1 p_2 \rangle = -\langle x_2 p_1 \rangle = \frac{m \gamma k (b T_2 - a T_1)}{2 (m \gamma^2 K + a b)},
(256) \langle x_1 x_2 \rangle = -\frac{k (a T_1 + b T_2)}{2 (K^2 - a b)},
(257) \langle p_1^2 \rangle = m k T_1 + \frac{b}{\gamma} \langle x_1 p_2 \rangle,
(258) \langle p_2^2 \rangle = m k T_2 - \frac{a}{\gamma} \langle x_1 p_2 \rangle,
(259) \langle x_1^2 \rangle = -\frac{\gamma}{a} \langle x_1 p_2 \rangle - \frac{K}{a} \langle x_1 x_2 \rangle,
(260) \langle x_2^2 \rangle = \frac{\gamma}{b} \langle x_1 p_2 \rangle - \frac{K}{b} \langle x_1 x_2 \rangle.

We remark that, as \langle x_1^2 \rangle, \langle x_2^2 \rangle, \langle p_1^2 \rangle, and \langle p_2^2 \rangle must be nonnegative, the following conditions should be fulfilled

(261) m \gamma^2 K + a b \geq 0, \qquad K^2 - a b \geq 0.
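The stationary covariances (255) and (256) can be checked numerically. Writing the dynamics behind (237) as a linear Langevin system for z = (x1, x2, p1, p2), the stationary covariance matrix C obeys the Lyapunov equation AC + CAᵀ + D = 0, with A the drift matrix and D the noise intensity matrix. The sketch below (the translation into a Lyapunov equation and the use of scipy are our own illustrative choices; the parameter values are arbitrary and satisfy conditions (261)) solves this equation and compares with the closed forms:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative parameter values (k is Boltzmann's constant)
m, K, gamma, k = 1.0, 2.0, 1.0, 1.0
L, c = 1.0, 0.3
T1, T2 = 2.0, 1.0
b, a = L + c, L - c

# Drift matrix for z = (x1, x2, p1, p2), with total forces
# F1 = -K x1 - b x2 and F2 = -K x2 - a x1
A = np.array([[0, 0, 1/m, 0],
              [0, 0, 0, 1/m],
              [-K, -b, -gamma, 0],
              [-a, -K, 0, -gamma]])

# Noise intensity matrix; the stationary covariance C obeys
# A C + C A^T + D = 0
D = np.diag([0, 0, 2*gamma*m*k*T1, 2*gamma*m*k*T2])
C = solve_continuous_lyapunov(A, -D)

# Closed forms (255) and (256)
x1p2 = m*gamma*k*(b*T2 - a*T1) / (2*(m*gamma**2*K + a*b))
x1x2 = -k*(a*T1 + b*T2) / (2*(K**2 - a*b))

print(np.isclose(C[0, 3], x1p2), np.isclose(C[0, 1], x1x2))
```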

It is worth mentioning that the probability density can also be determined. On account of the linearity of the FPK equation in relation to the variable xi and pi, the solution is a multivariate Gaussian distribution, which we write as

(262) \rho = \frac{1}{Z} \exp \left\{ -\frac{1}{2} \sum_{i,j=1}^{4} L_{ij} \xi_i \xi_j \right\},

where we are using the abbreviations \xi_1 = x_1, \xi_2 = x_2, \xi_3 = p_1, and \xi_4 = p_2. The matrix L whose elements are L_{ij} is the inverse of the covariance matrix C, whose elements we have just determined: C_{11} = \langle x_1^2 \rangle, C_{22} = \langle x_2^2 \rangle, C_{12} = \langle x_1 x_2 \rangle, C_{33} = \langle p_1^2 \rangle, C_{44} = \langle p_2^2 \rangle, C_{34} = \langle p_1 p_2 \rangle, C_{13} = \langle x_1 p_1 \rangle, C_{14} = \langle x_1 p_2 \rangle, C_{23} = \langle x_2 p_1 \rangle, and C_{24} = \langle x_2 p_2 \rangle. The expression given by equation (262) is the probability distribution describing the nonequilibrium stationary state of the present problem.

References

  • [1] J. Schnakenberg, Rev. Mod. Phys. 48, 571 (1976).
  • [2] L. Jiu-Li, C. Van den Broeck and G. Nicolis, Z. Phys. B 56, 165 (1984).
  • [3] C.Y. Mou, J.L. Luo and G. Nicolis, J. Chem. Phys. 84, 7011 (1986).
  • [4] A. Pérez-Madrid, J.R. Rubí and P. Mazur, Physica A 212, 231 (1994).
  • [5] T. Tomé and M.J. de Oliveira, Braz. J. Phys. 27, 525 (1997).
  • [6] C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997).
  • [7] K. Sekimoto, Prog. Theor. Phys. Suppl. 130, 17 (1998).
  • [8] J.L. Lebowitz and H. Spohn, J. Stat. Phys. 95, 333 (1999).
  • [9] P. Mazur, Physica A 274, 491 (1999).
  • [10] C. Maes and K. Netočný, J. Stat. Phys. 110, 269 (2003).
  • [11] L. Crochik and T. Tomé, Phys. Rev. E 72, 057103 (2005).
  • [12] T. Tomé, Braz. J. Phys. 36, 1285 (2006).
  • [13] R.K.P. Zia and B. Schmittmann, J. Phys. A: Math. Gen. 39, L407 (2006).
  • [14] D. Andrieux and P. Gaspard, Phys. Rev. E 74, 011906 (2006).
  • [15] T. Schmiedl and U. Seifert, J. Chem. Phys. 126, 044101 (2007).
  • [16] R.J. Harris and G.M. Schütz, J. Stat. Mech. 2007, P07020 (2007).
  • [17] U. Seifert, Eur. Phys. J. B 64, 423 (2008).
  • [18] R.A. Blythe, Phys. Rev. Lett. 100, 010601 (2008).
  • [19] M. Esposito, K. Lindenberg and C. Van den Broeck, Phys. Rev. Lett. 102, 130602 (2009).
  • [20] M. Esposito, U. Harbola and S. Mukamel, Rev. Mod. Phys. 81, 1665 (2009).
  • [21] T. Tomé and M.J. de Oliveira, Phys. Rev. E 82, 021120 (2010).
  • [22] C. Van den Broeck and M. Esposito, Phys. Rev. E 82, 011144 (2010).
  • [23] C. Jarzynski, Annual Review of Condensed Matter Physics 2, 329 (2011).
  • [24] T. Tomé and M.J. de Oliveira, Phys. Rev. Lett. 108, 020601 (2012).
  • [25] R.E. Spinney and I.J. Ford, Phys. Rev. E 85, 051113 (2012).
  • [26] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
  • [27] M. Santillan and H. Qian, Physica A 392, 123 (2013).
  • [28] D. Luposchainsky and H. Hinrichsen, J. Stat. Phys. 153, 828 (2013).
  • [29] T. Tomé and M.J. de Oliveira, Phys. Rev. E 91, 042140 (2015).
  • [30] M.J. de Oliveira, Phys. Rev. E 94, 012128 (2016).
  • [31] T. Tomé and M.J. de Oliveira, J. Chem. Phys. 148, 224104 (2018).
  • [32] M.J. de Oliveira, Phys. Rev. E 99, 052138 (2019).
  • [33] N.G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1981).
  • [34] C.W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and Natural Sciences (Springer, Berlin, 1983).
  • [35] H. Risken, The Fokker-Planck Equation, Methods of Solution and Applications (Springer, Berlin, 1984).
  • [36] T. Tomé and M.J. de Oliveira, Stochastic Dynamics and Irreversibility (Springer, Heidelberg, 2015).
  • [37] S.R.A. Salinas, Introduction to Statistical Physics (Springer, New York, 2001).
  • [38] E. Merzbacher, Quantum Mechanics (Wiley, New York, 1970), 2nd ed.
  • [39] M.J. de Oliveira, J. Stat. Mech. 073204 (2019).
  • [40] C.E. Fiore and M.J. de Oliveira, Phys. Rev. E 99, 052131 (2019).
  • [41] G. Lindblad, Comm. Math. Phys. 48, 19 (1976).
  • [42] H. Dekker, Phys. Rev. A 16, 2116 (1977).
  • [43] A.O. Caldeira and A. Leggett, Physica 121, 587 (1983).
  • [44] A.O. Caldeira, Introduction to Macroscopic Quantum Phenomena and Quantum Dissipation (Cambridge University Press, Cambridge, 2014).

Publication Dates

  • Publication in this collection
    14 Sept 2020
  • Date of issue
    2020

History

  • Received
    21 May 2020
  • Revised
    11 June 2020
  • Accepted
    18 June 2020
Sociedade Brasileira de Física, Caixa Postal 66328, 05389-970, São Paulo, SP, Brazil
E-mail: marcio@sbfisica.org.br