Abstract
Stochastic thermodynamics provides a framework for the description of systems that are out of thermodynamic equilibrium. It is based on the assumption that the elementary constituents are acted on by random forces that generate a stochastic dynamics, which is here represented by a Fokker-Planck-Kramers equation. We emphasize the role of the irreversible probability current, the vanishing of which characterizes thermodynamic equilibrium and yields a special relation between fluctuation and dissipation. The connection to thermodynamics is obtained by the definition of the energy function and the entropy, as well as the rate at which entropy is generated. The extension to quantum systems is provided by a quantum evolution equation which is a canonical quantization of the Fokker-Planck-Kramers equation. An example of an irreversible system is presented, which displays a nonequilibrium stationary state with an unceasing production of entropy. A relationship between the fluxes and the path integral is also presented.
Keywords:
Stochastic thermodynamics; Quantum thermodynamics; Entropy production
Introduction
Thermodynamics was conceived as a discipline based on principles and laws that refer to macroscopic quantities, such as the principles of energy conservation and of entropy increase, which are the first and second laws of thermodynamics. Although these two principles are valid for systems in equilibrium as well as for systems out of equilibrium, the initial development of the discipline led to the establishment of a theory of the thermodynamics of systems in equilibrium. This was possible because the energy of a system in thermodynamic equilibrium is related functionally to the entropy, which allows the definition of temperature.
The derivation of thermodynamics from the microscopic laws of motion was the aim of the kinetic theory. One of its consequences was the development of equilibrium statistical mechanics, advanced by Gibbs. Statistical mechanics is based on the description of a system by the probability distribution that bears the name of Gibbs, which for a system in contact with a heat reservoir is proportional to e^{-E/kT}, where E is the energy function, T the temperature, and k the Boltzmann constant. The crucial property of the equilibrium distribution is that the probability depends on the states of the system only through the energy function. This property, along with the Gibbs expression for the entropy, leads to the relation between energy and entropy, mentioned above, which characterizes the thermodynamic equilibrium.
The entropy of an isolated system in equilibrium remains invariant. But if it is not in equilibrium, its entropy increases, and the increase in this case is not due to a flux of entropy, because the system is isolated. Entropy is being created spontaneously, and in this sense it differs from energy, which is a conserved quantity. If a system is not isolated, then the variation of the entropy S with time is the algebraic sum of two terms,

dS/dt = Π − Φ,

where Π is the rate at which entropy is being created, the rate of entropy production, and Φ is the flux of entropy.
The production of entropy is related to irreversible processes occurring inside the system, which are understood as processes that are more likely to occur than their time-reversed counterparts. Thermodynamic equilibrium is thus characterized as the state where a process and its time reversal are equally probable. This characterization of equilibrium, embodied in stochastic thermodynamics, is a dynamical definition, being more comprehensible than the static definition given above in terms of the Gibbs distribution. Stochastic thermodynamics [1–32] provides an approach to equilibrium and nonequilibrium thermodynamics that takes into account the dynamical characterization of irreversible processes by assuming that a system evolves in time according to a microscopic stochastic dynamics.
The elementary constituents of a system are assumed to be acted on by random forces in addition to the usual deterministic forces. As a consequence, the trajectory followed by a particle is not in general deterministic. There are many possible trajectories that a particle may follow from a given point, each one with a certain probability of occurrence. The approach we follow here uses a representation of the dynamics in terms of the probability of the occurrence of a state at a certain instant of time, which is assumed to be governed by an evolution equation.
The main features of the approach that we follow here are: (1) a stochastic dynamics, here represented by a Fokker-Planck-Kramers equation [33–36]; (2) the assignment of an energy function; (3) the definition of entropy as having the same form as the Gibbs entropy; (4) a proper definition of the rate of entropy production.
The first part of this text is dedicated to the classical case. In the second part we extend the results obtained in the first part to the quantum case. In particular, we use as the evolution equation a quantum version of the Fokker-Planck-Kramers equation. In this case, the probability density is replaced by the probability density operator, usually called density matrix. In a third part we generalize the results for the case of many degrees of freedom and present an example of a system that displays a nonequilibrium stationary state with an unceasing production of entropy.
2. Evolution equation
Our object of study is a system of particles that interact among themselves and may also be subject to external forces. In addition, each particle is acted on by random forces, the origin of which may be internal or external to the system. The system evolves in time according to the Newton equation of motion. Due to the random forces, the trajectory is not uniquely defined. There are many possible trajectories starting from a given state, each one with a certain probability.
If the system is at a given state at the initial time, one may ask for the probability that it is at a given state at a later time. An answer to this question is provided by the Fokker-Planck-Kramers (FPK) equation, which gives the time evolution of the probability density ρ(x,p,t). We will focus initially on a system with just one degree of freedom, in which case x is the position and p is the momentum of the particle, and both quantities constitute the state of the system. The probability that at time t the state of the system is inside dx dp around the point (x,p) is ρ(x,p,t) dx dp.
In contrast to equilibrium statistical mechanics, for which the probability density is constant in time, here it depends on time. If we wish the system to reach thermal equilibrium at long times, a process usually called thermalization, then the solution of the evolution equation for long times must be a Gibbs equilibrium distribution, which is characterized by depending on x and p only through the energy function associated with the system.
The FPK equation is given by
where m is the mass of the particle, f is the ordinary force acting on the particle, and Γ is a constant associated with the stochastic forces. The force f is understood to be the sum of an internal force, considered to be a conservative force, and a dissipative force D,
The property of the dissipative force D that distinguishes it from the other forces is that it is an odd function of the momentum.
A derivation of the FPK equation (2) from a Langevin equation can be found in reference [36]. The Langevin equation leading to (2) is the equation of motion for a particle of mass m moving along a straight line which, in addition to the ordinary force f, is also under the action of a stochastic force with zero mean and variance proportional to Γ.
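The Langevin dynamics just described can be sketched numerically. The following is a minimal illustration, not the text's own computation: it assumes the standard underdamped form dx/dt = p/m, dp/dt = f − (γ/m)p + ζ(t), with ζ a Gaussian white noise of strength 2γkT, and an assumed harmonic internal force with spring constant K; it checks that the long-time average of p² approaches mkT, as expected from thermalization.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the Langevin dynamics behind the FPK
# equation (assumed standard form; K, the spring constant, is an assumption):
#   dx/dt = p/m,   dp/dt = -K*x - (gamma/m)*p + zeta(t),
# with zeta a Gaussian white noise of strength 2*gamma*kT, so that the
# fluctuation-dissipation relation of Sec. 3 holds. Units with k = 1.
rng = np.random.default_rng(0)
m, gamma, kT, K = 1.0, 0.5, 1.0, 1.0
dt, nsteps = 1e-3, 1_000_000
noise = np.sqrt(2 * gamma * kT * dt) * rng.standard_normal(nsteps)
x, p, sum_p2 = 0.0, 0.0, 0.0
for i in range(nsteps):
    p += (-K * x - (gamma / m) * p) * dt + noise[i]
    x += (p / m) * dt
    sum_p2 += p * p
est = sum_p2 / nsteps
print(est)  # thermalization: the long-time average of p^2 is close to m*kT = 1
```

Decreasing dt and increasing the number of steps sharpens the agreement with equipartition.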
An essential property of the FPK equation, and for that matter of any equation that governs the time evolution of a probability distribution, is that it preserves the normalization of , that is,
for any instant of time, where the integration is performed over the whole space of states. If ρ is normalized at the initial time, it remains normalized forever. The demonstration of this fundamental property of the FPK equation is given in the appendix.
As the internal force is conservative, we may write it as minus the derivative of a potential V(x), in which case the energy function is H = p²/2m + V(x), the sum of the kinetic energy p²/2m and the potential energy V. The first term of the FPK equation and the one involving the internal force become
The right-hand side of this equation is written in the abbreviated form as
which is called the Poisson brackets. Replacing this result in equation (2), the FPK equation acquires the form
where
The FPK equation (2) can also be written in the form
where the first component is associated with the position and the second with the momentum. In this form, the FPK equation is understood as a continuity equation, the two quantities being the components of the probability current. This is the form that allowed us to show the property (4), as presented in the appendix. The part J of the momentum component is the irreversible probability current, which plays a fundamental role in the present approach. We remark that without J the FPK equation reduces to the Liouville equation of classical statistical mechanics [37],
3. Thermodynamic equilibrium
The FPK equation as it stands may or may not describe a system that for long times will be in thermodynamic equilibrium. For long times the solution of the FPK equation is its stationary solution, that is, the solution obtained by setting to zero the right-hand side of the equation, in which case J may or may not vanish.
A system in thermodynamic equilibrium may be said to be the one described by a Gibbs distribution. However, this is a static definition. We need here a dynamic characterization of thermodynamic equilibrium. This is provided by characterizing the thermodynamic equilibrium as the stationary state such that the irreversible current vanishes. Therefore, the equilibrium distribution obeys the condition
and in addition
The solution of this last condition leads to the result that ρ is a function of H, that is, ρ depends on x and p only through the energy function H. This characterizes any Gibbs equilibrium distribution.
The condition (12) is understood as the relation between dissipation, described by D, and noise or fluctuations, described by Γ. That is, in equilibrium there must be a relation between dissipation and fluctuations. We wish to describe a system in contact with a heat reservoir at a temperature T, in which case the appropriate Gibbs equilibrium distribution is [37]
where β = 1/kT and k is the Boltzmann constant. Replacing (14) in equation (12), we find
where γ is the constant that connects Γ with the temperature,
Notice that D is the usual type of dissipation force proportional to the velocity.
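The vanishing of the irreversible current for the Gibbs distribution can be checked symbolically. The sketch below assumes the standard forms D = −(γ/m)p for the dissipative force and J = Dρ − Γ ∂ρ/∂p for the irreversible current (the display equations are not reproduced in this text, so these forms are assumptions), and verifies that J vanishes identically when Γ = γkT.

```python
import sympy as sp

# Symbolic check of the fluctuation-dissipation relation. Assumed forms:
#   irreversible current  J = D*rho - Gamma * d(rho)/dp,  D = -(gamma/m)*p,
#   Gibbs distribution    rho ~ exp(-H/kT),  H = p^2/(2m) + V(x).
# With Gamma = gamma*kT the current J must vanish identically.
x, p, m, kT, gamma = sp.symbols('x p m kT gamma', positive=True)
V = sp.Function('V')(x)
H = p**2 / (2 * m) + V
rho = sp.exp(-H / kT)          # unnormalized Gibbs distribution
Gamma = gamma * kT             # fluctuation-dissipation relation
J = -(gamma / m) * p * rho - Gamma * sp.diff(rho, p)
print(sp.simplify(J))          # -> 0
```

The cancellation holds for any potential V(x), which is why the check is done with an unspecified function.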
4. Energy, heat and entropy
A thermodynamic system may have its energy changing in time. The energy variation is due to the exchange of heat or work with the environment. Here we will treat the case where the external forces are absent so that the variation in energy is only due to the exchange of heat. The energy U of a thermodynamic system, sometimes called internal energy, is the average of the energy function introduced above, that is,
and may depend on time as ρ depends on time. The variation of U with time is
Replacing the time derivative of ρ by using the FPK equation in the form (8), we find
The term involving the Poisson brackets vanishes after an integration by parts has been performed. Here and in the following, whenever we do an integration by parts, the integrated term is assumed to vanish. As shown in the appendix, this is so because we are considering that at the limits of integration the probability density vanishes rapidly. Performing the integral in (19) by parts, we find
The right-hand side of this equation is interpreted as the rate at which heat is introduced into the system, that is, the flux of heat,
Thus we may write
an equation that may be understood as the conservation of energy. Notice that the flux of heat involves the irreversible part of the probability current.
From thermodynamics, we know that heat is related to entropy through the Clausius equation dQ = T dS, valid for systems in equilibrium, where dQ is the infinitesimal heat exchanged with the system and dS is the infinitesimal increase of the entropy S. The Clausius equation is equivalent to dS = dQ/T, which could be used to define entropy. However, this equation is of no use here because it is valid only for systems in equilibrium. The appropriate way to define the entropy of a system is to use the Gibbs form
which is a generalization of the Boltzmann entropy S = k ln W, where W is the number of accessible states. Although S given by (23) is usually used for systems in equilibrium, here we are assuming that this form is also appropriate for systems out of equilibrium.
5. Entropy production
If the probability distribution is found as a function of time by solving the FPK equation, then S is obtained as a function of time. Differentiating (23) with respect to time, we find
There is another term, involving the time derivative of ρ alone, but it vanishes if we take into account that ρ is normalized at any time, a result given by equation (4).
Replacing the time derivative of ρ in (24) by using the FPK equation in the form (8), we find
Again the part involving the Poisson brackets vanishes by an integration by parts. The entropy is not a conserved quantity like the energy, and as a consequence its variation with time is not equal to the flux of entropy. In other terms, the right-hand side of (25) cannot be identified as the flux of entropy. In addition to the flux of entropy, there is another contribution, related to the creation of entropy. This contribution is the rate at which entropy is being generated or created, which is called the rate of entropy production, denoted by Π. Thus the variation of the entropy of a system with time is written as
where Φ is the flux of entropy from the system to the outside. The next step is to define or postulate the expression for one of the two quantities, Π or Φ. Once one of them is given, the other is obtained by observing that their difference should be equal to the right-hand side of (25).
The rate of entropy production is a quantity that vanishes when thermodynamic equilibrium sets in, and it gives a measure of the deviation from equilibrium. As we have seen above, the vanishing of the irreversible probability current J is a condition for thermodynamic equilibrium. Since the entropy production is a nonnegative quantity that vanishes when J vanishes, it should be related to J. The expression for the rate of entropy production that we are about to introduce meets these two conditions.
Consider the probability distribution that makes the irreversible probability current J vanish. Writing J in the form
this condition is equivalent to
This probability distribution does not need to be the equilibrium probability distribution, because we are not requiring that the Poisson brackets term vanish, as occurs in equilibrium. In analogy with the right-hand side of equation (25), we define the rate of entropy production by
If we integrate by parts and use the relation
that follows from (27) and (28), we reach the expression
We see that the rate of entropy production is the integral of an expression proportional to the square of J, as desired. It is nonnegative and vanishes in equilibrium, when J = 0; that is, the rate of entropy production is never negative,
which is a brief statement of the second law of thermodynamics.
The flux of entropy is obtained from the rate of entropy production by using the expressions (25) and (29),
Performing an integration by parts and using (28), we find
The flux of entropy can also be written as
after the replacement of J, given by (8), in (34) and performing an integration by parts in the second term. This is an interesting form for the flux of entropy because it can be understood as an average over the probability distribution ρ, which is not the case for the rate of entropy production.
From the expressions for the flux of entropy and the rate of entropy production, we draw an important conclusion concerning the Liouville equation (11). Since this equation can be understood as the FPK equation without the irreversible probability current J, and since the entropy production and the entropy flux vanish if J = 0, it follows that the Liouville equation predicts neither entropy production nor flux of entropy, and the entropy S is constant in time. If the Liouville equation is employed to describe a closed system that approaches equilibrium but is initially out of equilibrium, then this prediction of the Liouville equation is in contradiction with thermodynamics, which predicts an increase of entropy with time.
Let us consider in the following that D and Γ are related by (12), where ρ is the canonical Gibbs distribution (14), so that the FPK equation describes a system in contact with a heat reservoir that thermalizes. Replacing the results (15) and (16) in (9), the expression for the irreversible probability current becomes
Replacing the expression for D given by (12) in the equation (34), we find
Comparing with (21), we see that the entropy flux and the heat flux are related by
Using equations (22) and (26) we get the following relation
valid at any time. Near equilibrium, the rate of entropy production vanishes faster than the other two time derivatives, and
or dS = dQ/T, which is the Clausius relation, valid at thermodynamic equilibrium. We remark that T here is the temperature of the heat reservoir. The temperature of the system is dU/dS, if U can be written as a function of S. Out of equilibrium, when entropy is being produced, this is not possible, but in equilibrium, in view of the relation dU/dt = T dS/dt, U becomes a function of S. This relation is translated into dU = T dS, which implies dU/dS = T, and T becomes also the temperature of the system.
It is worth mentioning that the variation in time of the free energy F, defined by F = U − TS, at constant T, is
which follows from (39). Since the rate of entropy production is nonnegative, dF/dt ≤ 0 and F decreases monotonically towards its equilibrium value. This inequality can also be viewed as the H theorem of Boltzmann. Defining the Boltzmann H by
we see that it is equal to F/kT plus a constant. Then it follows from dF/dt ≤ 0 that dH/dt ≤ 0, which is the H theorem of Boltzmann.
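The monotonic decrease of F can be illustrated numerically for a concrete case, a harmonic oscillator in contact with the reservoir. The sketch below assumes the moment equations implied by the standard Kramers form of the FPK equation (an assumption, since the display equations are not reproduced in this text) and uses the closed Gaussian expressions for U and S.

```python
import numpy as np

# Numerical illustration of dF/dt <= 0 for a harmonic oscillator with a
# Gaussian distribution. The moment equations below assume the standard
# Kramers form of the FPK equation. For a Gaussian, S = (k/2) ln[(2 pi e)^2
# (<x^2><p^2> - <xp>^2)] and U = <p^2>/(2m) + K<x^2>/2 (units with k = 1).
m, K, gamma, kT = 1.0, 1.0, 0.5, 1.0

def free_energy(x2, xp, p2):
    S = 0.5 * np.log((2 * np.pi * np.e) ** 2 * (x2 * p2 - xp * xp))
    U = p2 / (2 * m) + K * x2 / 2
    return U - kT * S

x2, xp, p2 = 4.0, 0.0, 0.25     # a Gaussian state far from equilibrium
dt = 1e-3
F_prev, monotone = free_energy(x2, xp, p2), True
for _ in range(100_000):        # integrate the moment equations to t = 100
    dx2 = 2 * xp / m
    dxp = p2 / m - K * x2 - (gamma / m) * xp
    dp2 = -2 * K * xp - 2 * (gamma / m) * p2 + 2 * gamma * kT
    x2 += dx2 * dt; xp += dxp * dt; p2 += dp2 * dt
    F = free_energy(x2, xp, p2)
    monotone = monotone and F <= F_prev + 1e-9
    F_prev = F
print(monotone, F_prev)  # F never increases and ends at its equilibrium value
```

The final value agrees with the equilibrium free energy of the Gaussian, 1 − ln(2πe) in these units.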
6. Work
The specific systems that we have considered so far are those that exchange only heat with the environment and are described by the FPK equation (8). Now we wish to consider the case where the system is also subject to external forces. The appropriate way to treat this case is to add to the ordinary force appearing in the FPK equation (2) an external force, so that f now reads
Repeating the reasoning leading to (8) from (2), we reach the evolution equation
where J is the irreversible probability current, given by (36), which is the one appropriate for the contact with a heat reservoir at a temperature T.
Due to the presence of the external force, the variation in time of the energy has another contribution in addition to the flux of heat,
where the first term is the heat flux into the system and the second is the work performed by the system per unit time against the external forces, or power.
To determine the variation of the energy with time, we proceed in the same way as we did to derive (22) from the evolution equation (2), but now we use the evolution equation (44). The result is the equation (45), where the heat flux is given by the expression (21) and the power is
The equation (25) for the variation of entropy with time remains unchanged by the addition of the external force. To see this we replace the expression (44) into (24). The term involving the Poisson brackets vanishes as we have already seen. The term involving the external force is
where we have performed an integration by parts. But this integral also vanishes if we assume that the external force does not depend on p.
The rate of entropy production is defined by (31), and considering that the expression for the entropy variation remains unchanged, as we have just seen, so does the expression (35) for the entropy flux. As these relations are not modified by the presence of the external force, the relation between the entropy flux and the heat flux expressed by equation (38), valid for a system in contact with a heat reservoir at a temperature T, also remains unchanged.
Taking these results into account, we reach the following relation
Considering a process in which T is constant, the left-hand side is dF/dt, where F = U − TS is the free energy, that is,
Integrating in time between the initial and final instants, we find
where W is the work performed by the external force,
which is the time integral of the power. Since the rate of entropy production is nonnegative, it follows that
that is, the variation of the free energy is smaller than or equal to the work done on the system. The equality holds in an equilibrium process, when the entropy production vanishes.
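A simple numerical illustration of this inequality: a particle in a harmonic trap whose center is dragged at constant speed. The free energy does not depend on the trap center, so the difference in free energy between the initial and final equilibrium states vanishes, while the average work done on the system is positive. The Langevin form and all parameter values below are assumptions made for the sake of the sketch.

```python
import numpy as np

# Sketch of W >= change in F: a harmonic trap dragged at constant speed v.
# The free energy does not depend on the trap center, so the free-energy
# difference is zero, while the mean work done on the system is positive.
rng = np.random.default_rng(3)
m, K, gamma, kT, v = 1.0, 1.0, 1.0, 1.0, 1.0
dt, nsteps, ntraj = 1e-3, 5000, 100       # drag for a total time of 5
works = []
for _ in range(ntraj):
    x, p, w = 0.0, 0.0, 0.0
    noise = np.sqrt(2 * gamma * kT * dt) * rng.standard_normal(nsteps)
    for i in range(nsteps):
        lam = v * i * dt                  # trap center at this instant
        p += (-K * (x - lam) - (gamma / m) * p) * dt + noise[i]
        x += (p / m) * dt
        w += K * (lam - x) * v * dt       # dW = (partial H / partial t) dt
    works.append(w)
mean_w = float(np.mean(works))
print(mean_w)  # positive: on average the driving does work on the system
```

The particle lags behind the moving trap center, so the spring continuously does positive work on average, all of which is dissipated into the reservoir.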
The following remark is in order. When the system is in contact with a heat reservoir at a certain temperature T, this does not mean that T is the temperature of the system, as we have pointed out in the remark just below equation (40). If the rate of entropy production is nonzero, no temperature can be assigned to the system, and F is not, strictly speaking, the free energy of the system, although U and S are the energy and entropy of the system. But we may suppose that before the initial instant and after the final instant the system is in equilibrium, in which case the entropy production is nonzero only between these two instants. Within this scenario, T can be considered the temperature of the system at these two instants, F at these two instants will be the free energy of the system, and the relation (50) will represent the difference in the free energies of the system.
The derivations that we have just carried out, such as that of the inequality (52), made no restriction on the type of external force. It could be a nonconservative force or a time-dependent force. The latter type of external force occurs, for instance, when the system is driven at our will.
7. Harmonic oscillator
Let us apply the results we have found so far to the case of a harmonic oscillator, for which the internal force is linear in the displacement and the energy function, equation (53), is the sum of the kinetic energy and a quadratic potential energy.
It is in contact with a heat reservoir at a temperature T so that the FPK equation is
The FPK equation can be solved exactly by assuming the following Gaussian form for the probability distribution
where the parameters a, b, and c depend on time and the remaining factor ensures the normalization of the distribution.
The solution is given in the appendix, where we find the parameters a, b, and c as functions of time.
From the probability distribution (55), we determine the covariances,
and other properties. The energy is
and the entropy is found by using its definition and is
To determine the rate of entropy production and the fluxes of entropy and heat, we need to find J, which is defined by (36). In the present case it reads
From J we find
and it becomes clear that the rate of entropy production is nonnegative.
From the asymptotic values of the parameters a, b, and c, given in the appendix, we find the equilibrium values of the various quantities. As the parameters a, b, and c decay exponentially to their asymptotic values, so do the properties obtained above.
We remark that the probability distribution approaches the equilibrium probability distribution, proportional to e^{−H/kT}, where H is the energy function (53), and the system thermalizes properly.
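The relaxation of the covariances toward their equilibrium values can be illustrated by integrating the equations for the second moments implied by the FPK equation. The moment equations below assume the standard Kramers form (an assumption, since the display equations of the text are not reproduced here); the covariances relax to the equipartition values kT/K and mkT.

```python
import numpy as np

# Relaxation of the covariances of the Gaussian solution. The moment
# equations below assume the standard Kramers form of the FPK equation.
m, K, gamma, kT = 1.0, 1.0, 0.5, 1.0
x2, xp, p2 = 0.0, 0.0, 0.0       # start from a point mass at rest
dt = 1e-3
for _ in range(200_000):         # integrate to t = 200, many relaxation times
    dx2 = 2 * xp / m
    dxp = p2 / m - K * x2 - (gamma / m) * xp
    dp2 = -2 * K * xp - 2 * (gamma / m) * p2 + 2 * gamma * kT
    x2 += dx2 * dt; xp += dxp * dt; p2 += dp2 * dt
print(x2, xp, p2)  # -> close to kT/K = 1, 0, m*kT = 1 (equipartition)
```

As stated in the text, the approach to these asymptotic values is exponential in time.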
8. Quantum evolution equation
To extend the stochastic approach developed above to the quantum case, we need to provide a quantum version of the evolution equation. One way of setting up the quantum version is to use a procedure known as canonical quantization, which amounts to replacing the Poisson brackets of classical mechanics by a quantum commutator. For the case of just one degree of freedom that we are considering here, the Poisson brackets between A and B are given by
The canonical quantization is obtained by performing the replacement
where ħ is the Planck constant, the quantities A and B on the right are understood as quantum operators, and [A,B] = AB − BA.
From the quantization rule above, we obtain two useful rules. The first is obtained by setting B = p in the Poisson brackets, which gives {A, p} = ∂A/∂x. Using the quantization rule, we obtain the correspondence ∂A/∂x → (1/iħ)[A, p]. In an analogous way, if we set B = x, we find {A, x} = −∂A/∂p, and using the quantization rule, we obtain ∂A/∂p → (1/iħ)[x, A].
We remark that x and p on the right-hand sides of these rules are quantum operators representing the position and the momentum of a particle, and we recall that according to quantum mechanics [x, p] = iħ.
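The canonical commutator can be illustrated with finite matrices. The sketch below builds x and p from harmonic-oscillator ladder operators (in units where ħ = m = ω = 1, an assumption made for the illustration) and checks that [x, p] = iħ holds inside the truncated block.

```python
import numpy as np

# Finite-matrix illustration of [x, p] = i*hbar, with x and p built from
# harmonic-oscillator ladder operators (units where hbar = m = omega = 1).
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.T) / np.sqrt(2)                   # position operator
p = (a - a.T) / (1j * np.sqrt(2))            # momentum operator
comm = x @ p - p @ x
# Truncation to N levels spoils only the last diagonal entry; inside the
# block the commutator equals i times the identity.
print(np.allclose(comm[:N-1, :N-1], 1j * np.eye(N-1)))  # -> True
```

No finite-dimensional pair of matrices can satisfy the commutation relation exactly, since the trace of a commutator vanishes while the trace of iħ times the identity does not; the truncation error is confined to the last diagonal entry.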
The last two rules are useful in the transformation of a differential equation such as the FPK equation into a quantum equation. It should be remarked, however, that the equation obtained by this procedure is not a mathematical derivation of the quantum equation from the classical equation. In fact, the opposite is true: from the quantum equation one reaches the classical equation by taking the classical limit. Thus, the quantization rules should be used as a guide to find a quantum equation, which at the end should be introduced as a postulate.
It is usual to use the hat symbol to denote a quantum operator as we have done above. But from now on we will drop the hat symbol and denote an operator by a letter without the hat. Thus the position and momentum operator, for instance, will be denoted by x and p.
Let us consider the FPK equation in the form (8). According to the quantization rules, the quantum evolution equation is
where now, , H, and J are quantum operators. Since the quantum operators can be represented by matrices most properties of quantum operators are better understood if stated in terms of matrices. For instance the density operator , which is the quantum version of the probability density distribution, holds the following property: its diagonal elements are nonnegative and the sum of the diagonal elements, its trace, equals the unity,
The operator H is the quantum energy function, or the Hamiltonian, given by H = p²/2m + V(x),
where V is a function of x. We remark that without J the equation reduces to the quantum Liouville equation of quantum statistical mechanics.
The property (77) is the analogue of the normalization of the probability density that we used before, and thus it should be preserved in time. To see that this is the case, let us take the trace of the right-hand side of equation (76). There are two terms to be considered, and both vanish because each one is the trace of a commutator, and the trace of a commutator vanishes. Therefore, the left-hand side, which is the time derivative of the trace of ρ, vanishes, and the trace should be constant in time.
The trace of a commutator vanishes because Tr(AB) = Tr(BA). This is a particular case of the cyclic property of the trace, Tr(ABC) = Tr(CAB). This cyclic property allows us to write the following property
that we will employ further on.
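Both trace identities are easy to check numerically for random complex matrices:

```python
import numpy as np

# Numerical check of the trace identities: the trace of a commutator
# vanishes, Tr(AB) = Tr(BA), and the trace is cyclic, Tr(ABC) = Tr(CAB).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
           for _ in range(3))
print(abs(np.trace(A @ B - B @ A)) < 1e-10)                   # -> True
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))   # -> True
```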
9. Energy and entropy
The average of the energy function H with respect to the density operator is given by
and is the quantum analog of the integral in (17). Differentiating it with respect to time,
and using the evolution equation (76), we find
where the term involving the commutator vanishes owing to the property (79). Using again this same property we get
The right-hand side is interpreted as the heat flux into the system
and
The definition of entropy for the quantum case is that introduced by von Neumann,
and corresponds to the extension of the Gibbs entropy to the quantum case. Differentiating this equation with respect to time, we find
where the term involving the time derivative of ρ alone vanishes in view of the normalization property (77). Using the evolution equation, we find
where again the term involving the commutator vanishes owing to the property (79).
10. Rate of entropy production
The right-hand side of the equation (88) is not equal to the flux of entropy, because the entropy is not a conserved quantity. There is another source of entropy, which comes from dissipation inside the system. Thus, as before, the time variation of the entropy has two terms,
where the first is the rate of entropy production and the second is the flux of entropy from the system to the outside.
Guided by the classical version, the production of entropy is defined as follows. The quantum irreversible current J has not been defined yet, but it is expressed in terms of the density operator. Consider the density operator that makes J vanish. If its commutator with H also vanishes, then it is identified as the density operator at thermodynamic equilibrium. However, here we do not demand that this commutator vanishes. The rate of entropy production is defined as
and it becomes clear that the rate of entropy production vanishes whenever J vanishes. Taking into account (88) and (89), the expression for the flux of entropy is
Let us assume that the irreversible current J, which has not yet been specified, is so defined that the evolution equation describes a system in contact with a heat reservoir at temperature T. In thermodynamic equilibrium, J vanishes, and the density operator should be identified with the Gibbs canonical density operator, proportional to e^{−H/kT}, where the proportionality constant is fixed by the normalization (77).
Replacing this density operator in the expressions for the rate of entropy production and the flux of entropy, we find
Now let us compare equations (95) and (84). We see that the flux of entropy and the flux of heat are related by
which is the thermodynamic relation that should exist between the flux of entropy and the heat flux when a system is in contact with a heat reservoir at a temperature T. Other thermodynamic relations that we have obtained for the classical case, such as those given by equations (39) and (40), can also be shown to be valid in the quantum case.
11. Irreversible current
The quantum irreversible current J has not yet been specified. Here we will choose it by applying the rules of the canonical quantization to the expression (9). The form chosen for the irreversible current is
where D is a quantum operator representing the dissipative force and Γ is a real constant, as in the classical case. Notice that we have written a symmetrized form for the product of the dissipative force and the density operator. In one of the products, we have used the Hermitian conjugate of D, denoted D†, so that the whole expression is Hermitian.
The matrix that represents the Hermitian conjugate A† of an operator is obtained from the matrix A that represents the operator by transposing the matrix and taking the complex conjugate of each element; in terms of matrix elements, (A†)ij = A*ji. A Hermitian operator is an operator that is equal to its Hermitian conjugate. An important property of such an operator is that its eigenvalues are real.
If we wish to describe a system that at long times reaches thermodynamic equilibrium, then D should be so related to the equilibrium density operator that J vanishes when ρ is replaced by it. Imposing the vanishing of J in this case, we find the condition
The solution for D is
which is understood as the relation between dissipation, described by D, and noise or fluctuations. In equilibrium there must be a relation between dissipation and fluctuations.
That (99) is a solution can be verified by substitution, using the property that the Hermitian conjugate of a product of two operators equals the product of the Hermitian conjugates of each operator in the inverse order. In the present case this property applies directly because the equilibrium density operator is Hermitian.
Next we seek a form of J that describes the contact of the system with a heat reservoir. In this case
which, replaced in (99), gives
This form of dissipation is certainly not the classical form found above, which is proportional to the momentum. However, at high temperatures it reduces to that form. If we expand the terms between parentheses on the right-hand side of (101) up to first order, we find a dissipation proportional to the momentum, or
For an arbitrary temperature, the dissipation, according to the present approach, is not proportional to the momentum and is given by (101), which we write, by using (102), as
where
Replacing the irreversible current (97) into the evolution equation (76) we may write it in the more explicit form
which we call the quantum FPK equation.
12. Quantum harmonic oscillator
For a quantum harmonic oscillator the energy function is
where ω is the frequency of the oscillations. To find the solution of the quantum FPK equation from which we may determine the thermodynamic properties, it is necessary to know the explicit expression of the dissipative force, that is, we need to know g as a function of x and p. For the harmonic oscillator we show in Appendix C that
where a and b are real numbers given by
A solution of the quantum FPK equation (105) can be obtained by a method similar to that used in the classical case, which is to assume a solution of the form
where a, b, and c are real constants that depend on time. Here we will limit ourselves to writing down the equations for the covariances and determining their asymptotic values, which are the values at thermodynamic equilibrium. The time-dependent solution can be found in reference [30]. Multiplying the quantum FPK equation successively by x², (xp + px)/2, and p², and taking the trace, we obtain the following equations for the covariances
The equation for ⟨px⟩ is not needed because px = xp − iħ.
At the stationary state, which is a state of thermodynamic equilibrium, we find
From these results one reaches the expected expression for the average energy of a quantum oscillator,
We remark that the density operator approaches the equilibrium density operator
where H is the quantum energy function (106), and the system thermalizes properly.
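The expected average energy of the quantum oscillator, U = (ħω/2) coth(ħω/2k_BT), can be checked against a direct sum over the oscillator levels. A short numerical sketch (Python, units with ħ = k_B = 1; parameter values are illustrative):

```python
import numpy as np

hbar, omega, T = 1.0, 1.0, 2.0   # units with hbar = k_B = 1; values illustrative

# average energy from a direct sum over the levels E_n = hbar*omega*(n + 1/2)
n = np.arange(2000)
E = hbar * omega * (n + 0.5)
p = np.exp(-(E - E[0]) / T)
p /= p.sum()
U_sum = np.sum(p * E)

# closed form U = (hbar*omega/2) / tanh(hbar*omega/(2*T))
U = 0.5 * hbar * omega / np.tanh(0.5 * hbar * omega / T)
print(U_sum, U)  # the two values agree

# at high temperature U approaches the classical equipartition value k_B*T
print(0.5 * hbar * omega / np.tanh(0.5 * hbar * omega / 1000.0))  # close to 1000
```

The high-temperature limit reproduces the classical result of the FPK equation, consistent with the claim that the quantum equation thermalizes properly.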
13. Multiple degrees of freedom
Up to this point we have considered systems with just one degree of freedom. Here we wish to consider the case of a system with multiple degrees of freedom. The derivations of the results for the present case parallel those for one degree of freedom and will not be shown in full detail. We restrict ourselves to the classical case; the quantum case can be treated in a way similar to that of one degree of freedom and can be found in reference [30].
An appropriate treatment of a system with many degrees of freedom begins with the generalization of the FPK equation (44). As we have seen, this equation describes a system in contact with a heat reservoir and subject to an external force. The generalization that we give below is the one appropriate to describe a system in contact with multiple reservoirs at distinct temperatures and subject to several forces. We denote by x_i a Cartesian component of the position and by p_i the corresponding component of the momentum related to a certain degree of freedom. The energy function is a sum of the kinetic energy and a potential energy,
The FPK equation, which governs the time evolution of the probability density, now reads
where F_i are the Cartesian components of the external force, which may be nonconservative and time dependent, and
are the components of the dissipative probability current. We choose the dissipative force to be of the usual form, proportional to the momentum, so that we may interpret the FPK equation as describing a system in contact with several heat reservoirs at temperatures T_i. Therefore,
In the absence of external forces, and if all heat baths have the same temperature T, then the stationary state is a state of thermodynamic equilibrium, because in this case the dissipative current vanishes for all i. Indeed, if we replace the Gibbs probability distribution
in the expression for the dissipative current, we see that it vanishes. We remark that the Poisson brackets also vanish.
The time variation of the energy has the same form as equation (45),
but now the heat flux is a sum of the fluxes coming from each reservoir, that is,
where each heat flux has the form (21), with J replaced by the corresponding dissipative current,
Using this expression and performing an integration by parts, the heat flux can be written as
It becomes clear that if the reservoir temperature is larger than the average kinetic energy of the corresponding degree of freedom, the flux is positive and heat flows into the system; otherwise, heat flows from the system to the heat reservoir.
The expression for the power is similar to that of equation (46) but now there is a sum over all components of the force
Performing again an integration by parts, the power can be written as
It is useful to think of the forces as being exerted by an external agent, which is a power device. The role played by the power device in the transfer of mechanical work is analogous to the role played by the heat reservoir in the transfer of heat. We see from (127) that the power has the usual form of a force multiplied by a velocity. If the force and the velocity have the same sign, work is performed by the power device on the system; otherwise, the system performs work on the power device.
The heat flux and the power in (122) may be understood as functions of time, in which case equation (122) reduces to the form
which is the usual way of writing the conservation of energy. However, it should be remarked that neither the heat nor the work differential is in general exact, although the energy differential is. The concepts of exact and inexact differentials are discussed in Appendix D.
The variation of entropy with time is
where the rate of entropy production and the entropy flux are generalizations of equations (31) and (35),
where the integration extends over the space of states, and
In view of the equation (125), we see that the flux of entropy is related to the heat fluxes by
Using this relation, we may derive again all the results involving the free energy obtained in section 6, including the inequality (52).
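The sign convention for the heat fluxes can be illustrated numerically. The sketch below (Python; an Euler-Maruyama simulation of the Langevin dynamics equivalent to the FPK equation of this section, for two harmonically coupled degrees of freedom with m = k_B = 1 — the model and parameter values are our own illustrative choices) estimates each flux as γ(T_i − ⟨p_i²⟩) and shows heat entering from the hotter reservoir and leaving through the colder one:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (m = k_B = 1): two oscillators coupled by kc,
# each attached to its own heat reservoir through friction gamma
k, kc, gamma = 1.0, 0.5, 1.0
T1, T2 = 2.0, 1.0
dt, nsteps, ntraj = 0.01, 20000, 4000

x1 = np.zeros(ntraj); p1 = np.zeros(ntraj)
x2 = np.zeros(ntraj); p2 = np.zeros(ntraj)
s1 = np.sqrt(2 * gamma * T1 * dt)
s2 = np.sqrt(2 * gamma * T2 * dt)

acc1 = acc2 = 0.0
count = 0
for step in range(nsteps):
    f1 = -k * x1 - kc * (x1 - x2)          # conservative forces
    f2 = -k * x2 + kc * (x1 - x2)
    p1 = p1 + (f1 - gamma * p1) * dt + s1 * rng.standard_normal(ntraj)
    p2 = p2 + (f2 - gamma * p2) * dt + s2 * rng.standard_normal(ntraj)
    x1 = x1 + p1 * dt
    x2 = x2 + p2 * dt
    if step >= nsteps // 2:                # discard the transient
        acc1 += np.mean(p1**2)
        acc2 += np.mean(p2**2)
        count += 1

phi1 = gamma * (T1 - acc1 / count)  # heat flux from reservoir 1 into the system
phi2 = gamma * (T2 - acc2 / count)  # heat flux from reservoir 2 into the system
print(phi1, phi2)  # phi1 > 0, phi2 < 0, and phi1 + phi2 is close to zero
```

In the stationary state the two fluxes balance, phi1 ≈ −phi2, since no external power is injected; their common magnitude is the heat conducted from the hot to the cold reservoir through the system.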
14. Nonequilibrium steady state
Here we wish to show by an example that the present approach is indeed capable of describing systems displaying nonequilibrium steady states. That is, at long times the system reaches a stationary state in which the irreversible currents are nonzero and entropy is permanently being created. One way of setting a system in a nonequilibrium steady state is to place it in contact with heat reservoirs at different temperatures. Another possibility is to place the system under the action of a power device. The two possibilities are embodied in the development of the previous section.
We examine a system with two degrees of freedom. The potential energy is harmonic and the energy function is
so that the conservative forces are linear. In addition to the conservative forces, the system is under the action of a nonconservative external force given by
According to equation (127),
which is minus the power performed by the power device. Each degree of freedom is understood as being coupled to one of the two heat reservoirs, at temperatures T1 and T2. The fluxes of heat associated with each reservoir are given by (125) and are
From the two heat fluxes we determine the entropy flux by means of the relation (132)
From now on we wish to determine the above quantities in the steady state. In view of the relation (138), it suffices to determine the heat fluxes and the power. To find these quantities we need the covariances at the steady state. Therefore, we should solve the FPK equation (118) with the energy function H given by (133) and the external forces given by (134).
In view of the fact that the forces, conservative and nonconservative, are linear, it is possible to solve the FPK equation exactly. The solution is carried out in Appendix E, where the covariances at the stationary state are determined. Replacing the covariances in the expressions (135), (136), and (137), we find
where
, , and
The range of values of c is such that the denominator is positive.
In the steady state the production of entropy equals the entropy flux, which, from (138), is given by
which is clearly nonnegative.
Let us analyze the results we have just found. The various possibilities for the heat fluxes and power are shown in table 1. In all cases except one, energy is being dissipated, that is, work is performed on the system, which in turn releases it in the form of heat to one or both heat reservoirs. The exception is the case in which the system performs work; in this case heat flows from the hotter reservoir to the colder reservoir through the system, and the whole system functions as a heat engine.
Heat fluxes and power for the system defined by the energy function (133) and by the external force (134). The conventions for the fluxes are: if the flux from reservoir 1 is positive, heat flows from reservoir 1 to the system; if the flux from reservoir 2 is positive, heat flows from reservoir 2 to the system; if the power is positive, the system performs work on the external device. We are considering reservoir 1 to be at the higher temperature.
15. Path integral
The approach to stochastic thermodynamics that we have developed here is based on the FPK equation, which is understood as an equation that governs the time evolution of the probability distribution. The solution of the evolution equation gives the probability distribution at any instant of time. For convenience, here we denote a state by a single variable, understood as the collection of positions and momenta of the particles of the system.
If we solve the FPK equation considering that at the initial time the system was in a certain state, then the solution is understood as the conditional probability of finding the system in a given state at time t, given the initial state at the initial time.
Let us consider now a discretized trajectory, that is, a trajectory in which the system is at a certain state at the initial time, at intermediate states at successive instants of time, and at a final state at the final time. The probability of the occurrence of this discretized trajectory is a product of successive conditional probabilities of the type
from the initial to the final instant of time, multiplied by the probability of the initial state. Omitting the reference to the instants of time, the trajectory probability is
From now on we consider that the time intervals between two successive instants of time are all the same. It is understood that this interval is small enough that the trajectory approaches a continuous trajectory.
Generally speaking the probability of a trajectory is a joint probability, which we denote by
The identification of (146) with (145) defines the type of stochastic dynamics associated with the name of Markov, and the approach we are using here, with the FPK equation as the evolution equation, is thus a Markovian stochastic dynamics.
The probability distribution (146) is a joint probability distribution. If we integrate over all variables except one of them, we find the probability distribution of that variable. For instance, if we integrate over all but the final state, we find the probability of the final state at the final time, that is,
In which circumstance should we use the path probability (146)? If we wish to find the average U of the energy function at time t, for instance, it suffices to use the probability distribution . However, if we wish to find the average of the mechanical work, we should use the path probability because the work is a path integral.
Let L be the work performed along a certain trajectory by the force with components F_i,
where the index ‘path’ serves to remind us that the integral is taken along a certain trajectory. In a discretized form, the path integral is written as the sum
where the force is evaluated at each step and multiplied by the corresponding increment of position. In this discretized form we see clearly that the path integral depends on the whole trajectory, and to find its average we should use the path probability (146). This amounts to multiplying the right-hand side of (149) by the right-hand side of (146) and integrating over all variables. The result of this procedure is indicated by an index ‘path’ in the average sign. Therefore, the average of L, which we call W, is written as
Although we call work both L and W, it should be understood that L is the actual work and W its average.
The path integral (148) can be written as a time integral of the power, as is well known. A trajectory may be defined parametrically by giving the positions, and thus the velocities, as functions of a parameter, which we take to be the time t. In terms of this parameter, the integral (148) becomes a time integral,
where we have replaced the increment of position by the velocity. This expression is written as
where the integrand is the power at time t, given by
Taking the average of the expression (152) we find
where on the right-hand side the average is the usual average taken with the probability density at time t, because the power depends only on the state at time t. Thus the average of a path integral is transformed into an average over the ordinary probability density. The integrand on the right-hand side of (154) is understood as the average power,
which coincides with the power of the external force given by the equivalent forms (127) or (126). Since W is the average of L, we may write
The result (156), or its equivalent form (154), where L and the power are given by (148) and (153), respectively, can immediately be generalized by replacing the force by any other function of the state. Let us suppose that it is replaced by a quantity involving the irreversible current given by (119).
The path integral of this quantity is
and the associated flux according to the result above is
which is the heat flux as given by (124). The quantity
is thus the heat exchanged between the two instants of time, and according to the results above may be written as the path average
Let us consider a system in contact with a heat bath at a temperature T that is acted on by external forces during a certain interval of time. The following equality, relating the free energy to the work performed during this interval of time, has been shown to be valid [6],
Therefore, if the right-hand side of equation (162) is measured, we may obtain the free-energy difference. Using the inequality ⟨e^(−x)⟩ ≥ e^(−⟨x⟩), valid for any random variable x, we find
which should be compared with the equation (52).
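The equality (162) and the resulting inequality can be probed numerically. The sketch below (Python) is our own illustration, using an overdamped Langevin particle in a harmonic trap whose stiffness is ramped in time — a simpler dynamics than the full FPK equation used in the text, but one for which the free-energy difference is known exactly, ΔF = (T/2) ln(k_f/k_i):

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdamped Langevin particle in a harmonic trap whose stiffness is ramped
# from ki to kf; illustrative parameters, with k_B = T = gamma = 1.
T, gamma = 1.0, 1.0
ki, kf = 1.0, 2.0
tau, nsteps, ntraj = 2.0, 2000, 20000
dt = tau / nsteps
dk = (kf - ki) / nsteps

x = rng.normal(0.0, np.sqrt(T / ki), ntraj)  # start in equilibrium at k = ki
W = np.zeros(ntraj)                          # work accumulated along each path
k = ki
for _ in range(nsteps):
    W += 0.5 * x**2 * dk                     # work increment (dH/dk) dk
    k += dk
    x += -(k * x / gamma) * dt + np.sqrt(2 * T * dt / gamma) * rng.standard_normal(ntraj)

dF = 0.5 * T * np.log(kf / ki)               # exact free-energy difference
dF_jarz = -T * np.log(np.mean(np.exp(-W / T)))
print(dF, dF_jarz, W.mean())                 # dF_jarz ≈ dF, and W.mean() > dF
```

The exponential average reproduces ΔF within statistical error, while the mean work exceeds it, in agreement with the inequality above.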
Before we end this section it is appropriate to discuss the experimental measurement of the several quantities presented in the theory. The quantities that are measurable are those that we call state functions, that is, quantities that are functions of the random variables, in the present case the positions and momenta, and are themselves random variables. An experimental result obtained for a state function E, be it the value of one trial or the arithmetic average of several trials, should be compared with the average predicted by the theory, which is written as
such as the energy, or as a path average as is the case of the mechanical work.
Let us consider the case of the entropy which is
which sometimes is written as
Although one may write the entropy in this form, this expression is merely an abbreviation of the right-hand side of (165) and cannot be understood as the average of a state function, simply because the logarithm of the probability distribution is not a state function. Although it is sometimes called an instantaneous entropy, it is not a measurable quantity. This point can be better understood if we try to calculate the entropy from a Monte Carlo simulation. One immediately realizes that it is impossible to determine it along a Monte Carlo run, and an alternative method should be used.
The observation made above with respect to the average (166) extends to the average in (161), because the integrand is not a state function and is not a measurable quantity. This seems paradoxical, because no one denies that the heat Q is measurable. However, a moment of reflection reveals that heat is measured through the work dissipated and not as the quantity above.
16. Discussion and conclusion
We have developed an approach to stochastic thermodynamics based on the use of the FPK equation, which governs the time evolution of the probability distribution. The main features of the approach, in addition to the evolution equation, are the assignment of an energy function, the definition of the entropy, and the introduction of an expression for the rate of entropy production. According to the approach, these quantities are well defined in equilibrium as well as out of equilibrium. This is in contrast to other thermodynamic quantities, such as the temperature, which is defined only when the system is in thermodynamic equilibrium.
The evolution equation contains the mechanisms of dissipation and of stochastic fluctuation, or noise, which lead the system toward equilibrium if an appropriate relation exists between dissipation and noise. The mechanism is included in the irreversible current through the term containing the dissipative force and the term containing the quantity that measures the noise. If thermodynamic equilibrium sets in, the irreversible current vanishes. Out of equilibrium, the irreversible current is nonzero and the production of entropy, which is related to the square of the irreversible currents, is greater than zero. The rate of entropy production is thus a measure of the deviation of a system from thermodynamic equilibrium and of its irreversibility.
We have considered systems with a continuous space of states, in which case the appropriate evolution equation is the FPK equation. However, the present approach can be extended to a discrete space of states, in which case the evolution equation is called a master equation [29, 31]. It can be applied to systems of interacting particles with different species, including reactions among them [31]. We did not treat the case where the parameters appearing in the evolution equation depend on time, but the present approach can also be applied, for instance, to the case where the temperature oscillates periodically in time [39, 40]. In this case, at long times the system may not properly reach a stationary state, in the sense of being independent of time, but may reach a state with a probability distribution that oscillates in time.
Stochastic thermodynamics is sometimes described as a discipline whose quantities are defined at the level of single trajectories. This denomination emphasizes the fluctuation aspect of the theory, which is a relevant feature in systems with few degrees of freedom, the main application of the theory. In this respect the theory resembles statistical mechanics, which incorporates fluctuations and may also be applied to small systems. Thus an alternative name for the discipline would be stochastic mechanics, avoiding the term thermodynamics, which is usually associated with macroscopic systems.
The emphasis on trajectories and path integrals is a distinguishing feature of some treatments, as used for instance in the Jarzynski equality (162). In the present approach, on the other hand, the emphasis rests on the fluxes and currents of various types, but a relationship between fluxes and path integrals exists, as shown in section 15, revealing the equivalence between the two approaches. The present approach also emphasizes the connection with the laws of thermodynamics, particularly the second law, expressed by the nonnegativity of the rate of entropy production.
The present approach to quantum stochastic thermodynamics is based on a quantum evolution equation that is a canonical quantization of the FPK equation. It differs from other approaches, such as those based on the Lindblad operators [41]. However, the present quantum FPK equation bears a similarity to the quantum master equations derived by Dekker [42] and by Caldeira and Leggett [43, 44]. The main feature of the quantum FPK equation that distinguishes it from other approaches is that it is centered on the irreversible density current operator, the analog of the classical irreversible probability current, which plays a fundamental role in defining the fluxes of various types as well as the rate of entropy production. The similarity of the quantum evolution equation to its classical counterpart is useful because the generalization of the concepts of classical stochastic thermodynamics, such as those associated with the probability current, to the quantum case becomes easier. A final distinguishing feature is that a system described by the quantum FPK equation thermalizes properly. That is, at long times the system approaches the equilibrium state, provided, of course, that the fluctuation-dissipation relation (99) is obeyed.
Appendix A
The FPK equation (2) can be written in the form
where the two components of the probability current are given by
Let us integrate both sides of the equation (167) in a region R of the space delimited by a boundary line,
The first integral can be integrated in x,
For simplicity we are considering that R is a convex region, so that for a given p there are two values of x at the boundary of R. In an analogous way we write the second integral as
If the components of the current vanish at the boundary then both integrals vanish and
from which follows that the integral is a constant that we set equal to unity,
This result extends to the case where the region R is the whole space of states, in which case we demand that the components of the current vanish at infinity, a requirement that is met if the probability distribution vanishes rapidly at infinity.
Let us consider now the case of an integral of the type
where the integral is over the whole space of states. If we perform an integration by parts, the result is
Assuming that the probability distribution vanishes rapidly at the limits of integration, as we did above, the first integral vanishes and we are left with the result
Appendix B
Here we solve the FPK equation for the case of a harmonic force. The equation is
and it can be solved exactly by assuming the following Gaussian form for the probability distribution
where the parameters a, b, c, and Z depend on time. Replacing this form in equation (177), we see that the left- and right-hand sides will only have terms of the types x², xp, and p². Equating the respective coefficients of these terms, we find equations for the parameters a, b, and c. There is no need to seek an equation for Z because this quantity can be obtained from the three parameters a, b, and c. This follows from the normalization of (178), which gives
Replacing the Gaussian distribution (178) in the FPK equation we may find the equations for the three parameters. However, the equations are too complicated and we will instead seek equations that determine the covariances ⟨x²⟩, ⟨xp⟩, and ⟨p²⟩. Before that, we should write down the relations between the covariances and the three parameters, which are obtained from the probability distribution (178), and are
Inverting these relations we find
It remains now to determine the covariances as functions of time. To find the equations for the covariances we proceed as follows. We multiply both sides of the FPK equation successively by x², xp, and p², and integrate in x and p. Performing appropriate integrations by parts, we find
The stationary solution of these equations gives the equilibrium values of the covariances. Taking these values into account, we define variables that are the deviations of the covariances from their stationary values. These variables obey the set of linear equations
which we write in matrix form
The solution for each variable is a linear combination of exponentials of the type e^(λt), where λ is an eigenvalue of the square matrix above. The eigenvalues are
and are all negative. The general solution is
and the coefficients are not all independent, but are related by
Thus only three of the coefficients can be chosen to be independent, and they are determined by the initial conditions.
It is worth determining the solution at long times. In this case the solution is dominated by the largest eigenvalue. The covariances are
from which we find the three parameters
where , , .
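The relaxation of the covariances toward equilibrium can be visualized by integrating the moment equations directly. The sketch below (Python) uses the standard moment equations of the FPK equation for the harmonic oscillator, assuming a dissipative force −γp and units with k_B = 1; parameter values are illustrative. The covariances relax to the equilibrium values ⟨x²⟩ = T/k, ⟨xp⟩ = 0, ⟨p²⟩ = mT, in agreement with equipartition:

```python
import numpy as np

# Moment equations of the FPK equation for the harmonic oscillator,
# assuming a dissipative force -gamma*p and units with k_B = 1.
# Parameter values are illustrative.
m, k, gamma, T = 1.0, 2.0, 0.5, 1.5

def deriv(c):
    xx, xp, pp = c          # <x^2>, <xp>, <p^2>
    return np.array([
        2 * xp / m,
        pp / m - k * xx - gamma * xp,
        -2 * k * xp - 2 * gamma * pp + 2 * gamma * m * T,
    ])

c = np.array([0.0, 0.0, 0.0])   # start far from equilibrium
dt = 0.001
for _ in range(40000):          # Euler integration up to t = 40
    c = c + dt * deriv(c)

print(c)  # approaches <x^2> = T/k, <xp> = 0, <p^2> = m*T
```

The stationary values are independent of γ, which only sets the relaxation rate: the noise-dissipation relation guarantees the same equilibrium state for any friction strength.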
Appendix C
We wish to determine here in an explicit form the dissipative force for the quantum harmonic oscillator, where
and
To this end we start with the following identity [38]
Using the notation
where the number of nested commutators is equal to n, the identity above is written as
The quantities obey the recursive relations
The determination is easier if we use the relations
which are obtained by using the commutation relation . From these relations we get the two useful rules,
The first two coefficients of the expansion are
Next, with the two rules above in mind, we observe that, in general, the terms will be proportional to x if n is even and proportional to p if n is odd.
Let us consider the case n even and write . Then using the two rules above,
so that
from which we find
because . The part of the expansion (212) corresponding to n even is
Now we consider the case n odd and write . Then using the two rules above
so that
from which we find
because . The part of the expansion (212) corresponding to n odd is
Collecting the results above we find
and the quantity (208) becomes
Appendix D
Let us suppose that f is a function of several variables, which we denote by x_1, …, x_n, and that these variables depend on time. The derivative of f with respect to time is
where the coefficients are functions of the variables, given by
Equation (229) can be written in simplified form
where dx_i are the differentials of the variables and df is the differential of f. Since f is a function of the variables x_i, the following relation is valid
Now we raise the following question. Let g be a function of t and let depend on time as before and let us assume that
where the g_i are given functions of the variables x_i. The question now arises whether g could depend on time only through the variables x_i, that is, whether
If that is possible then, according to our reasoning above, the given functions g_i must fulfill the condition
for all pairs i and j. If this condition is not satisfied, it is not possible to write g as a function of the x_i. In this case, if we write (233) in the simplified form
we say that the differential of g is not exact.
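The symmetry condition can be tested symbolically. A minimal sketch (Python with sympy; the two differential forms are our own examples) checks the condition ∂g_i/∂x_j = ∂g_j/∂x_i for an exact and an inexact form in two variables:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def is_exact(g):
    """Check dg1/dx2 = dg2/dx1 for the form g1*dx1 + g2*dx2."""
    g1, g2 = g
    return sp.simplify(sp.diff(g1, x2) - sp.diff(g2, x1)) == 0

print(is_exact((x2, x1)))             # x2*dx1 + x1*dx2 = d(x1*x2): exact, True
print(is_exact((x2, sp.Integer(0))))  # x2*dx1 alone: inexact, False
```

The first form is the differential of x1·x2, a function of the state only; the second has no such function, which is the situation of heat and work in the text.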
Appendix E
We determine here the stationary covariances for the system with two degrees of freedom described by the FPK equation (118), with the energy function (133) and the external force (134), which we reproduce here in the following form
where and .
Multiplying equation (237) successively by the products of pairs of the variables and integrating, we find the following equations, after appropriate integrations by parts,
Now we look for the stationary solution. Setting the above equations to zero, we find that some of the covariances vanish; the others are the solution of the set of linear equations
A straightforward calculation leads us to the result
We remark that, as the variances must be nonnegative, the following conditions should be fulfilled
It is worth mentioning that the probability density can also be determined. On account of the linearity of the FPK equation in relation to the variables, the solution is a multivariate Gaussian distribution, which we write as
The matrix L is the inverse of the covariance matrix C, whose elements we have just determined. The expression given by equation (262) is the probability distribution describing the nonequilibrium stationary state of the present problem.
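The stationary covariance matrix of a linear FPK equation can also be obtained numerically as the solution of a Lyapunov equation, A C + C Aᵀ + D = 0, where A is the drift matrix and D the diffusion matrix. The sketch below (Python, with m = k_B = 1) is our own illustration for the two coupled oscillators of section 14 without the nonconservative force; it solves the Lyapunov equation by vectorization and extracts the stationary heat fluxes:

```python
import numpy as np

def stationary_covariance(A, D):
    """Solve the Lyapunov equation A C + C A^T + D = 0 by vectorization."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(M, -D.reshape(-1)).reshape(n, n)

# Drift matrix of two coupled oscillators without the nonconservative force
# (m = k_B = 1), with state ordering z = (x1, x2, p1, p2); values illustrative.
k, kc, gamma = 1.0, 0.5, 1.0
T1, T2 = 2.0, 1.0
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-(k + kc), kc, -gamma, 0.0],
              [kc, -(k + kc), 0.0, -gamma]])
D = np.diag([0.0, 0.0, 2 * gamma * T1, 2 * gamma * T2])  # diffusion matrix

C = stationary_covariance(A, D)
phi1 = gamma * (T1 - C[2, 2])  # stationary heat flux from reservoir 1
phi2 = gamma * (T2 - C[3, 3])  # stationary heat flux from reservoir 2
print(phi1, phi2)  # phi1 = -phi2 > 0: heat flows from the hot to the cold bath
```

For this model the flux can also be found in closed form, γ k_c²(T₁ − T₂)/(2[(k + k_c)γ² + k_c²]), and the numerical solution reproduces it; at equal temperatures the fluxes vanish and C reduces to the Gibbs covariances.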
References
- [1] J. Schnakenberg, Rev. Mod. Phys. 48, 571 (1976).
- [2] L. Jiu-Li, C. Van den Broeck and G. Nicolis, Z. Phys. B 56, 165 (1984).
- [3] C.Y. Mou, J.L. Luo and G. Nicolis, J. Chem. Phys. 84, 7011 (1986).
- [4] A. Pérez-Madrid, J.M. Rubí and P. Mazur, Physica A 212, 231 (1994).
- [5] T. Tomé and M.J. de Oliveira, Braz. J. Phys. 27, 525 (1997).
- [6] C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997).
- [7] K. Sekimoto, Prog. Theor. Phys. Suppl. 130, 17 (1998).
- [8] J.L. Lebowitz and H. Spohn, J. Stat. Phys. 95, 333 (1999).
- [9] P. Mazur, Physica A 274, 491 (1999).
- [10] C. Maes and K. Netočný, J. Stat. Phys. 110, 269 (2003).
- [11] L. Crochik and T. Tomé, Phys. Rev. E 72, 057103 (2005).
- [12] T. Tomé, Braz. J. Phys. 36, 1285 (2006).
- [13] R.K.P. Zia and B. Schmittmann, J. Phys. A: Math. Gen. 39, L407 (2006).
- [14] D. Andrieux and P. Gaspard, Phys. Rev. E 74, 011906 (2006).
- [15] T. Schmiedl and U. Seifert, J. Chem. Phys. 126, 044101 (2007).
- [16] R.J. Harris and G.M. Schütz, J. Stat. Mech. 2007, P07020 (2007).
- [17] U. Seifert, Eur. Phys. J. B 64, 423 (2008).
- [18] R.A. Blythe, Phys. Rev. Lett. 100, 1010060 (2008).
- [19] M. Esposito, K. Lindenberg and C. Van den Broeck, Phys. Rev. Lett. 102, 130602 (2009).
- [20] M. Esposito, U. Harbola and S. Mukamel, Rev. Mod. Phys. 81, 1665 (2009).
- [21] T. Tomé and M.J. de Oliveira, Phys. Rev. E 82, 021120 (2010).
- [22] C. Van den Broeck and M. Esposito, Phys. Rev. E 82, 011144 (2010).
- [23] C. Jarzynski, Annual Review of Condensed Matter Physics 2, 329 (2011).
- [24] T. Tomé and M.J. de Oliveira, Phys. Rev. Lett. 108, 020601 (2012).
- [25] R.E. Spinney and I.J. Ford, Phys. Rev. E 85, 051113 (2012).
- [26] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
- [27] M. Santillan and H. Qian, Physica A 392, 123 (2013).
- [28] D. Luposchainsky and H. Hinrichsen, J. Stat. Phys. 153, 828 (2013).
- [29] T. Tomé and M.J. de Oliveira, Phys. Rev. E 91, 042140 (2015).
- [30] M.J. de Oliveira, Phys. Rev. E 94, 012128 (2016).
- [31] T. Tomé and M.J. de Oliveira, J. Chem. Phys. 148, 224104 (2018).
- [32] M.J. de Oliveira, Phys. Rev. E 99, 052138 (2019).
- [33] N.G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1981).
- [34] C.W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and Natural Sciences (Springer, Berlin, 1983).
- [35] H. Risken, The Fokker-Planck Equation, Methods of Solution and Applications (Springer, Berlin, 1984).
- [36] T. Tomé and M.J. de Oliveira, Stochastic Dynamics and Irreversibility (Springer, Heidelberg, 2015).
- [37] S.R.A. Salinas, Introduction to Statistical Physics (Springer, New York, 2001).
- [38] E. Merzbacher, Quantum Mechanics (Wiley, New York, 1970), 2nd ed.
- [39] M.J. de Oliveira, J. Stat. Mech. 073204 (2019).
- [40] C.E. Fiore, M.J. de Oliveira, Phys. Rev. E 99, 052131 (2019).
- [41] G. Lindblad, Comm. Math. Phys. 48, 19 (1976).
- [42] H. Dekker, Phys. Rev. A 16, 2116 (1977).
- [43] A.O. Caldeira and A. Leggett, Physica A 121, 587 (1983).
- [44] A.O. Caldeira, Introduction to Macroscopic Quantum Phenomena and Quantum Dissipation (Cambridge University Press, Cambridge, 2014).
Publication Dates
- Publication in this collection: 14 Sept 2020
- Date of issue: 2020
History
- Received: 21 May 2020
- Revised: 11 June 2020
- Accepted: 18 June 2020