Classical and quantum stochastic thermodynamics

Stochastic thermodynamics provides a framework for the description of systems that are out of thermodynamic equilibrium. It is based on the assumption that the elementary constituents are acted on by random forces that generate a stochastic dynamics, here represented by a Fokker-Planck-Kramers equation. We emphasize the role of the irreversible probability current, the vanishing of which characterizes thermodynamic equilibrium and yields a special relation between fluctuation and dissipation. The connection to thermodynamics is obtained through the definition of the energy function and the entropy, as well as the rate at which entropy is generated. The extension to quantum systems is provided by a quantum evolution equation obtained by a canonical quantization of the Fokker-Planck-Kramers equation. An example of an irreversible system is presented, which displays a nonequilibrium stationary state with an unceasing production of entropy. A relationship between the fluxes and the path integral is also presented.


I. INTRODUCTION
Thermodynamics was conceived as a discipline based on principles and laws that refer to macroscopic quantities, such as the principles of energy conservation and of entropy increase, which are the first and second laws of thermodynamics. Although these two principles are valid for systems in equilibrium as well as for systems out of equilibrium, the initial development of the discipline led to the establishment of a theory of the thermodynamics of systems in equilibrium. This was possible because the energy of a system in thermodynamic equilibrium is related functionally to the entropy, which allows the definition of temperature.
The derivation of thermodynamics from the microscopic laws of motion was the aim of the kinetic theory. One of its consequences was the development of equilibrium statistical mechanics, advanced by Gibbs. Statistical mechanics is based on the description of a system by the probability distribution that bears the name of Gibbs, which for a system in contact with a heat reservoir is proportional to e^{−E/kT}, where E is the energy function and T the temperature. The crucial property of the equilibrium distribution is that the probability depends on the states of the system only through the energy function. This property, along with the Gibbs expression for the entropy, leads to the relation between energy and entropy mentioned above, which characterizes thermodynamic equilibrium.
The entropy of an isolated system in equilibrium remains invariant. But if the system is not in equilibrium, its entropy increases, and the increase in this case is not due to a flux of entropy, because the system is isolated. Entropy is being created spontaneously, and in this sense it differs from the energy, which is a conserved quantity. If a system is not isolated, the variation of its entropy S with time is the algebraic sum of two terms,

dS/dt = Π − Φ,

where Π is the rate at which entropy is being created, the rate of entropy production, and Φ is the flux of entropy from the system to the environment.
The production of entropy is related to irreversible processes occurring inside the system, which are understood as processes that are more likely to occur than their time-reversed counterparts. Thermodynamic equilibrium is thus characterized as the state in which a process and its time reversal are equally probable. This characterization of equilibrium, embodied in stochastic thermodynamics, is a dynamical definition, and is more comprehensive than the static definition given above in terms of the Gibbs distribution. Stochastic thermodynamics provides an approach to equilibrium and nonequilibrium thermodynamics that takes into account the dynamical characterization of irreversible processes by assuming that a system evolves in time according to a microscopic stochastic dynamics.
The elementary constituents of a system are assumed to be acted on by random forces in addition to the usual deterministic forces. As a consequence, the trajectory followed by a particle is not in general deterministic. There are many possible trajectories that a particle may follow from a given point, each one with a certain probability of occurrence. The approach we follow here uses a representation of the dynamics in terms of the probability of occurrence of a state at a certain instant of time, which is assumed to be governed by an evolution equation.
The main features of the approach that we follow here are: (1) a stochastic dynamics which is here represented by a Fokker-Planck-Kramers equation [33][34][35][36]; (2) the assignment of an energy function; (3) the definition of entropy as having the same form as the Gibbs entropy; (4) a proper definition of the rate of entropy production.
The first part of this text is dedicated to the classical case. In the second part we extend the results obtained in the first part to the quantum case. In particular, we use as the evolution equation a quantum version of the Fokker-Planck-Kramers equation. In this case, the probability density is replaced by the probability density operator, usually called density matrix. In a third part we generalize the results for the case of many degrees of freedom and present an example of a system that displays a nonequilibrium stationary state with an unceasing production of entropy.

II. EVOLUTION EQUATION
Our object of study is a system of particles that interact among themselves and may also be subject to external forces. In addition, each particle is acted on by random forces, the origin of which may be internal or external to the system. The system evolves in time according to the Newton equation of motion. Due to the random forces, the trajectory is not uniquely defined. There are many possible trajectories starting from a given state, each one with a certain probability.
If the system is at a given state at the initial time, one may ask for the probability that it is at a given state at a later time. An answer to this question is provided by the Fokker-Planck-Kramers (FPK) equation which gives the time evolution of the probability density ρ(x, p, t). We will focus initially on a system with just one degree of freedom in which case x is the position and p is the momentum of the particle, and both quantities constitute the state of the system. The probability that at time t the state of the system is inside dxdp around (x, p) is ρ(x, p, t)dxdp.
In contrast to equilibrium statistical mechanics, for which the probability density is constant in time, here it depends on time. If we wish the system to reach thermal equilibrium at long times, usually called thermalization, then the solution of the evolution equation for long times must be a Gibbs equilibrium distribution, which is characterized by depending on (x, p) only through the energy function associated with the system.
The FPK equation is given by

∂ρ/∂t = −(p/m) ∂ρ/∂x − ∂(fρ)/∂p + (Γ/2) ∂²ρ/∂p²,    (2)

where m is the mass of the particle, f is the ordinary force acting on the particle, and Γ is a constant associated with the stochastic forces. The force f is understood to be the sum of an internal force F_c, considered to be a conservative force, and a dissipative force D,

f = F_c + D.    (3)

The property of the dissipative force D that distinguishes it from the other forces is that it is an odd function of the momentum. A derivation of the FPK equation (2) from a Langevin equation can be found in reference [36]. The Langevin equation leading to (2) is the equation of motion for a particle of mass m moving along a straight line which, in addition to the ordinary force f, is under the action of a stochastic force with zero mean and variance proportional to Γ.
An essential property of the FPK equation, and of any equation that governs the time evolution of a probability distribution, is that it preserves the normalization of ρ, that is, for any instant of time,

∫ ρ dx dp = 1,    (4)

where the integration is performed over the whole space of states. If ρ is normalized at the initial time, it remains normalized forever. The demonstration of this fundamental property of the FPK equation is given in appendix A. As F_c is a conservative force, we may write F_c = −(∂H/∂x), p/m = (∂H/∂p), where

H = p²/2m + V(x)    (5)

is the energy function, which is the sum of the kinetic energy and the potential energy V. The first term of the FPK equation and the one involving F_c become

−(p/m) ∂ρ/∂x − F_c ∂ρ/∂p = (∂H/∂x)(∂ρ/∂p) − (∂H/∂p)(∂ρ/∂x).    (6)

The right-hand side of this equation is written in the abbreviated form

{H, ρ} = (∂H/∂x)(∂ρ/∂p) − (∂H/∂p)(∂ρ/∂x),    (7)

which is called the Poisson brackets. Replacing this result in equation (2), the FPK equation acquires the form

∂ρ/∂t = {H, ρ} − ∂J/∂p,    (8)

where

J = Dρ − (Γ/2) ∂ρ/∂p.    (9)

The FPK equation (2) can also be written in the form

∂ρ/∂t = −∂J_x/∂x − ∂J_p/∂p,    (10)

where J_x = (p/m)ρ and J_p = F_c ρ + J. In this form, the FPK equation is understood as a continuity equation, and J_x and J_p are understood as the components of the probability current. This is the form that allows us to show the property (4), as presented in appendix A. The part J of the component J_p is the irreversible probability current, which plays a fundamental role in the present approach. We remark that without J the FPK equation reduces to the Liouville equation of classical statistical mechanics [37],

∂ρ/∂t = {H, ρ}.    (11)

III. THERMODYNAMIC EQUILIBRIUM
The FPK equation as it stands may or may not describe a system that for long times will be in thermodynamic equilibrium. For long times the solution of the FPK equation is its stationary solution, that is, the solution obtained by setting to zero the right-hand side of the equation, in which case J may or may not vanish.
A system in thermodynamic equilibrium may be said to be one described by a Gibbs distribution. However, this is a static definition. We need here a dynamic characterization of thermodynamic equilibrium. This is provided by characterizing thermodynamic equilibrium as the stationary state in which the irreversible current vanishes. Therefore, the equilibrium distribution ρ_e obeys the condition

Dρ_e − (Γ/2) ∂ρ_e/∂p = 0,    (12)

and in addition

{H, ρ_e} = 0.    (13)

The solution of this last condition leads to the result that ρ_e is a function of H, that is, ρ_e depends on x and p through the energy function H(x, p). This characterizes any Gibbs equilibrium distribution. The condition (12) is understood as the relation between dissipation, described by D, and noise or fluctuations, described by Γ. That is, in equilibrium there must be a relation between dissipation and fluctuations. We wish to describe a system in contact with a heat reservoir at a temperature T, in which case the appropriate Gibbs equilibrium distribution is [37]

ρ_e = (1/Z) e^{−βH},    (14)

where β = 1/kT and k is the Boltzmann constant. Replacing (14) in equation (12), we find

D = −γp,    (15)

where γ is the constant that connects Γ with the temperature,

γ = Γβ/2m.    (16)

Notice that D is the usual type of dissipative force, proportional to the velocity.
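As a consistency check of this fluctuation-dissipation relation, one can integrate the corresponding Langevin equation numerically and verify that the kinetic energy thermalizes to the equipartition value ⟨p²⟩/m = kT. The following sketch is our construction, not part of the original derivation: it assumes a harmonic conservative force F_c = −Kx, the dissipative force D = −γp found above, a noise strength fixed by Γ = 2mγkT, and unit values of m, K, and kT.

```python
import math
import random

random.seed(1)
m, K, kT, gamma = 1.0, 1.0, 1.0, 0.5
dt, nsteps, burn = 0.01, 400_000, 50_000

# Fluctuation-dissipation relation: noise strength Gamma = 2*m*gamma*kT,
# so the momentum kick per step has standard deviation sqrt(Gamma*dt).
sigma = math.sqrt(2.0 * m * gamma * kT * dt)

x, p = 0.0, 0.0
acc_p2, count = 0.0, 0
for step in range(nsteps):
    f = -K * x - gamma * p            # ordinary force: conservative plus dissipative
    x += (p / m) * dt
    p += f * dt + sigma * random.gauss(0.0, 1.0)
    if step >= burn:                  # discard the transient before averaging
        acc_p2 += p * p
        count += 1

avg_p2 = acc_p2 / count               # time average of p^2 along the trajectory
```

With these parameters the time average of p² comes out close to mkT, in agreement with the relation Γ = 2mγkT; the tolerance is loose because the average is taken over a finite stochastic trajectory.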

IV. ENERGY, HEAT AND ENTROPY
A thermodynamic system may have its energy changing in time. The energy variation is due to the exchange of heat or work with the environment. Here we treat the case where external forces are absent, so that the variation in energy is only due to the exchange of heat. The energy U of a thermodynamic system, sometimes called internal energy, is the average of the energy function introduced above, that is,

U = ∫ H ρ dx dp,    (17)

and may depend on time as ρ depends on time. The variation of U with time is

dU/dt = ∫ H (∂ρ/∂t) dx dp.    (18)

Replacing the time derivative of ρ by using the FPK equation in the form (8), we find

dU/dt = ∫ H {H, ρ} dx dp − ∫ H (∂J/∂p) dx dp.    (19)

The term involving the Poisson brackets vanishes after an integration by parts has been performed. Here and in the following, whenever we do an integration by parts, the integrated term is assumed to vanish. As shown in appendix A, this is so because we are considering that at the limits of integration the probability density vanishes rapidly. Performing the integral in (19) by parts, we find

dU/dt = ∫ (p/m) J dx dp.    (20)

The right-hand side of this equation is interpreted as the rate at which heat is introduced into the system, or the flux of heat Φ_q,

Φ_q = ∫ (p/m) J dx dp.    (21)

Thus we may write

dU/dt = Φ_q,    (22)

an equation that may be understood as the conservation of energy. Notice that the flux of heat Φ_q involves the irreversible part of the probability current. From thermodynamics, we know that heat is related to entropy through the Clausius equation dQ = T dS, valid for systems in equilibrium, where dQ is the infinitesimal heat exchanged with the system and dS is the infinitesimal increase of the entropy S. The Clausius equation is equivalent to Φ_q = T (dS/dt), which could be used to define entropy. However, this equation is of no use here because it is valid only for systems in equilibrium. The appropriate way to define the entropy of a system is to use the Gibbs form

S = −k ∫ ρ ln ρ dx dp,    (23)

which is a generalization of the Boltzmann entropy S = k ln W, where W is the number of accessible states.
Although S given by (23) is usually employed for systems in equilibrium, here we assume that this form is also appropriate for systems out of equilibrium.

V. ENTROPY PRODUCTION
If the probability distribution ρ is found as a function of time by solving the FPK equation, then S is obtained as a function of time. Deriving (23) with respect to time, we find

dS/dt = −k ∫ (∂ρ/∂t) ln ρ dx dp.    (24)

There is another part involving the time derivative of ln ρ, but it vanishes once we take into account that ρ is normalized at any time, a result given by equation (4).
Replacing the time derivative of ρ in (24) by using the FPK equation in the form (8), we find

dS/dt = −k ∫ J (∂/∂p) ln ρ dx dp.    (25)

Again the part involving the Poisson brackets vanishes by an integration by parts. The entropy is not a conserved quantity like the energy and, as a consequence, its variation with time is not equal to the flux of entropy. In other terms, the right-hand side of (25) cannot be identified as the flux of entropy. In addition to the flux of entropy, there is another contribution related to the creation of entropy. This contribution is the rate at which entropy is being generated or created, which is called the rate of entropy production, denoted by Π. Thus the variation of the entropy of a system with time is written as

dS/dt = Π − Φ,    (26)

where Φ is the flux of entropy from the system to the outside. The next step is to define or postulate the expression for one of the two quantities, Π or Φ. Once one of them is given, the other is obtained by observing that their difference should be equal to the right-hand side of (25). The rate of entropy production Π is a quantity that vanishes when thermodynamic equilibrium sets in and gives a measure of the deviation from equilibrium. As we have seen above, the vanishing of the irreversible probability current J is a condition for thermodynamic equilibrium. Since the entropy production is a nonnegative quantity that vanishes when J vanishes, it should be related to J². The expression for Π that we are about to introduce meets these two conditions. Let ρ0 be the probability distribution that makes the irreversible probability current J vanish. Writing J in the form

J = (Γ/2) ρ ( 2D/Γ − ∂ ln ρ/∂p ),    (27)

this condition is equivalent to

D = (Γ/2) (∂/∂p) ln ρ0.    (28)

The probability ρ0 need not be the equilibrium probability distribution ρ_e because we are not requiring that {H, ρ0} vanishes, as occurs with ρ_e.
In analogy with the right-hand side of equation (25), we define the rate of entropy production by

Π = −k ∫ J (∂/∂p) ln(ρ/ρ0) dx dp.    (29)

If we use the relation J = (Γ/2) ρ (∂/∂p) ln(ρ0/ρ), which follows from (27) and (28), we reach the expression

Π = (2k/Γ) ∫ (J²/ρ) dx dp.    (30)

We see that the rate of entropy production is the integral of an expression proportional to J², as desired. It is nonnegative and vanishes in equilibrium, when J = 0, that is,

Π ≥ 0,

which is a brief statement of the second law of thermodynamics.
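These expressions can be checked numerically. The sketch below is our construction, not from the text: we take a free particle whose momentum distribution is a Gaussian of variance σ out of equilibrium, build the irreversible current J = −γ(pρ + mkT ∂ρ/∂p) appropriate for contact with a heat reservoir, and evaluate Π, Φ, and dS/dt by quadrature on a grid, confirming that Π ≥ 0 and that dS/dt = Π − Φ.

```python
import math

m = kB = T = gamma = 1.0           # unit constants (kB is Boltzmann's k)
Gamma = 2.0 * m * gamma * kB * T   # fluctuation-dissipation relation
sig = 2.0                          # variance of p; the equilibrium value would be m*kB*T

def rho(p):                        # Gaussian momentum distribution of variance sig
    return math.exp(-p * p / (2 * sig)) / math.sqrt(2 * math.pi * sig)

def drho(p):                       # derivative d rho / dp
    return -p / sig * rho(p)

def J(p):                          # irreversible current for contact with a reservoir
    return -gamma * (p * rho(p) + m * kB * T * drho(p))

# quadrature grid wide enough that all integrands are negligible at the ends
N, L = 4001, 25.0
h = 2 * L / (N - 1)
ps = [-L + i * h for i in range(N)]

Pi   = sum((2 * kB / Gamma) * J(p) ** 2 / rho(p) for p in ps) * h    # entropy production
Phi  = sum((2 * kB / Gamma) * J(p) * (-gamma * p) for p in ps) * h   # entropy flux, D = -gamma*p
dSdt = sum(-kB * J(p) * (drho(p) / rho(p)) for p in ps) * h          # time derivative of entropy
```

For σ = 2 the closed forms give Π = γ(σ − mkT)²/(mTσ) and Φ = γ(σ − mkT)/(mT), and the quadrature reproduces them; the balance dS/dt = Π − Φ holds at the level of the integrands, so it is satisfied to machine precision.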
The flux of entropy Φ is obtained from dS/dt = Π − Φ by using the expressions (25) and (29),

Φ = k ∫ J (∂/∂p) ln ρ0 dx dp.    (33)

Using (28), we find

Φ = (2k/Γ) ∫ J D dx dp.    (34)

The flux of entropy can also be written as

Φ = ∫ ρ [ (2k/Γ) D² + k (∂D/∂p) ] dx dp,    (35)

after the replacement of J, given by (9), in (34) and an integration by parts in the second term. This is an interesting form for the flux of entropy because it can be understood as an average over the probability distribution ρ, which is not the case for the rate of entropy production. From the expressions for the flux of entropy Φ and the rate of entropy production Π, we draw an important conclusion concerning the Liouville equation (11). Since this equation can be understood as the FPK equation without the irreversible probability current J, and since Φ and Π vanish if J = 0, it follows that the Liouville equation predicts neither entropy production nor entropy flux, and the entropy S is constant in time. If the Liouville equation is employed to describe a closed system that approaches equilibrium but is initially out of equilibrium, then this prediction of the Liouville equation is in contradiction with thermodynamics, which predicts an increase of entropy with time.
Let us consider in the following that D and Γ are related by (12), where ρ_e is the canonical Gibbs distribution (14), so that the FPK equation describes a system in contact with a heat reservoir at temperature T. Replacing the results (15) and (16) in (9), the expression for the irreversible probability current becomes

J = −γ ( pρ + (m/β) ∂ρ/∂p ).    (36)

Replacing the expression for D given by (15) in the equation (34), we find

Φ = −(1/T) ∫ (p/m) J dx dp.    (37)

Comparing with (21), we see that the entropy flux and the heat flux are related by

Φ = −Φ_q/T.    (38)

Using equations (22) and (26), we get the relation

dU/dt = T (dS/dt) − TΠ,    (39)

valid at any time. Near equilibrium, Π vanishes faster than the other two time derivatives and

dU = T dS,    (40)

which is the Clausius relation, valid at thermodynamic equilibrium. We remark that T here is the temperature of the heat reservoir. The temperature of the system is (∂U/∂S) = T*, if U can be written as a function of S. Out of equilibrium, when Π ≠ 0, this is not possible, but in equilibrium, in view of the relation dU = T dS, U becomes a function of S. The relation dU = T dS is translated into T = (∂U/∂S), which implies T = T*, and T becomes also the temperature of the system. It is worth mentioning that the variation in time of the free energy F, defined by F = U − TS, at T constant, is

dF/dt = −TΠ,    (41)

which follows from (39). Since Π ≥ 0, then dF/dt ≤ 0 and F decreases monotonically towards its equilibrium value. This inequality can also be viewed as the H theorem of Boltzmann. Defining the Boltzmann H by

H_B = −∫ ρ ln(ρ/ρ_e) dx dp,    (42)

we see that it is equal to −βF plus a constant. Then, it follows from dF/dt ≤ 0 that dH_B/dt ≥ 0, which is the H theorem of Boltzmann.

VI. WORK
The specific systems that we have considered so far are those that exchange only heat with the environment and are described by the FPK equation (8). Now we wish to consider the case where the systems are also subject to external forces. The appropriate way to treat this case is to add to the ordinary force appearing in the FPK equation (2) an external force F_e, so that f now reads

f = F_c + D + F_e.    (43)

Repeating the reasoning leading to (8) from (2), we reach the evolution equation

∂ρ/∂t = {H, ρ} − F_e (∂ρ/∂p) − ∂J/∂p,    (44)

where J is the irreversible probability current, given by (36), which is the one appropriate for contact with a heat reservoir at a temperature T. Due to the presence of the external force, the variation in time of the energy has another contribution in addition to the flux of heat,

dU/dt = Φ_q − Φ_w,    (45)

where Φ_q is the heat flux into the system and Φ_w is the work performed by the system per unit time against the external forces, or power.
To determine the variation of energy with time, we proceed in the same way as we did to derive (22) from the evolution equation (2), but now we use the evolution equation (44). The result is the equation (45), where Φ_q is the expression (21) and the power Φ_w is

Φ_w = −(1/m) ∫ p F_e ρ dx dp.    (46)

The equation (25) for the variation of entropy with time remains unchanged by the addition of the external force. To see this, we replace the expression (44) into (24). The term involving the Poisson brackets vanishes, as we have already seen. The term involving the external force is

−k ∫ (∂F_e/∂p)(ρ ln ρ − ρ) dx dp,    (47)

where we have performed an integration by parts. But this integral also vanishes if we assume that F_e does not depend on p.
The rate of entropy production is defined by (29), and considering that the expression for dS/dt remains unchanged, as we have just seen, so does the expression (35) for the entropy flux. As these relations are not modified by the presence of the external force, the relation Φ = −Φ_q/T, expressed by equation (38), between the entropy flux and the heat flux, valid for a system in contact with a heat reservoir at a temperature T, also remains unchanged.
Taking into account that dS/dt = Π − Φ and that dU/dt = Φ_q − Φ_w, we reach the following relation,

dU/dt − T (dS/dt) = −Φ_w − TΠ.    (48)

Considering a process in which T is constant, the left-hand side is dF/dt, where F = U − TS is the free energy, that is,

dF/dt = −Φ_w − TΠ.    (49)

Integrating in time between t_1 and t_2, we find

∆F = −W − T ∫ Π dt,    (50)
where W is the work performed by the system against the external force, which is the time integral of the power,

W = ∫_{t1}^{t2} Φ_w dt.    (51)

Since Π ≥ 0, it follows that

∆F ≤ −W,    (52)

that is, the variation of the free energy is smaller than or equal to the work done on the system, −W. The equality holds in an equilibrium process, when Π = 0.
The following remark is in order. When the system is in contact with a heat reservoir at a certain temperature T, this does not mean that T is the temperature of the system, as we have pointed out in the remark just below equation (40). If the rate of entropy production is nonzero, no temperature can be assigned to the system, and F = U − TS is not, strictly speaking, the free energy of the system, although U and S are the energy and entropy of the system. But we may suppose that for t ≤ t_1 and t ≥ t_2 the system is in equilibrium, in which case Π is nonzero only for t_1 < t < t_2. Within this scenario, T can be considered the temperature of the system at time t_1 and at time t_2, F at these two instants of time will be the free energy of the system, and ∆F in (50) will represent the difference in the free energies of the system.
The derivations that we have just carried out, such as that of the inequality (52), made no restriction on the type of external force. It could be a nonconservative force or a time-dependent force. The latter type of external force occurs, for instance, when the system is driven at will.

VII. HARMONIC OSCILLATOR
Let us apply the results we have found so far to the case of a harmonic oscillator, for which F_c = −Kx and

H = p²/2m + Kx²/2.    (53)

It is in contact with a heat reservoir at a temperature T, so that the FPK equation is

∂ρ/∂t = {H, ρ} − ∂J/∂p,  with J = −γ ( pρ + (m/β) ∂ρ/∂p ).    (54)

The FPK equation can be solved exactly by assuming the following Gaussian form for the probability distribution,

ρ = (1/ζ) exp( −a x²/2 − b p²/2 − c x p ),    (55)

where the parameters a, b, c depend on time and

ζ = 2π/(ab − c²)^{1/2}    (56)

is the normalization factor. The solution is given in appendix B, where we find the parameters a, b, and c as functions of time.
From the probability distribution (55), we determine the covariances,

⟨x²⟩ = b/(ab − c²),  ⟨p²⟩ = a/(ab − c²),  ⟨xp⟩ = −c/(ab − c²),

and other properties. The energy is

U = ⟨p²⟩/2m + K⟨x²⟩/2,

and the entropy is found by using its definition (23) and is

S = k(1 + ln ζ).

To determine dS/dt, Π, and Φ, we need to find J, which is given by (36). In the present case it reads

J = −γ ρ [ (1 − mb/β) p − (mc/β) x ].

From J we find the expressions for dS/dt, Π, and Φ, and it becomes clear that dS/dt = Π − Φ. From the asymptotic values of the parameters a, b, and c, given in appendix B, which are a = Kβ, b = β/m, and c = 0, we find the equilibrium values of the various quantities, which are ⟨x²⟩ = kT/K, ⟨p²⟩ = mkT, ⟨xp⟩ = 0, dS/dt = 0, Φ = 0, and Π = 0. As the parameters a, b, and c decay exponentially to their asymptotic values, so do the properties obtained above. We remark that the probability distribution approaches the equilibrium probability distribution

ρ_e = (1/Z) e^{−βH},

where H is the energy function (53), and the system thermalizes properly.
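The time dependence of the covariances can also be obtained directly from the FPK equation, by multiplying it by x², xp, and p² and integrating. With the dissipative current J = −γ(pρ + mkT ∂ρ/∂p), this gives the closed set d⟨x²⟩/dt = 2⟨xp⟩/m, d⟨xp⟩/dt = ⟨p²⟩/m − K⟨x²⟩ − γ⟨xp⟩, d⟨p²⟩/dt = −2K⟨xp⟩ − 2γ⟨p²⟩ + 2γmkT. The sketch below is ours: it integrates these equations with an explicit Euler step and confirms the asymptotic equilibrium values ⟨x²⟩ = kT/K, ⟨p²⟩ = mkT, and ⟨xp⟩ = 0.

```python
# Euler integration of the covariance equations for the damped harmonic oscillator
m = K = kT = 1.0
gamma = 0.8
dt, tmax = 1e-3, 60.0

x2, xp, p2 = 0.0, 0.0, 0.0    # start far from equilibrium (a delta at the origin)
t = 0.0
while t < tmax:
    dx2 = 2.0 * xp / m
    dxp = p2 / m - K * x2 - gamma * xp
    dp2 = -2.0 * K * xp - 2.0 * gamma * p2 + 2.0 * gamma * m * kT
    x2 += dx2 * dt
    xp += dxp * dt
    p2 += dp2 * dt
    t += dt
```

The stationary point of these equations is an exact fixed point of the Euler map as well, so the integration converges to the equilibrium covariances regardless of the (stable) step size.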

VIII. QUANTUM EVOLUTION EQUATION
To extend the stochastic approach developed above to the quantum case, we need to provide a quantum version of the evolution equation. One way of setting up the quantum version is to use a procedure known as canonical quantization, which amounts to replacing the Poisson brackets of classical mechanics by a quantum commutator. For the case of just one degree of freedom that we are considering here, the Poisson brackets between A and B are given by

{A, B} = (∂A/∂x)(∂B/∂p) − (∂A/∂p)(∂B/∂x).    (70)

The canonical quantization is obtained by performing the replacement

{A, B} → (1/iħ)[Â, B̂],    (71)

where ħ is the Planck constant, the quantities Â and B̂ on the right are understood as quantum operators, and [Â, B̂] = ÂB̂ − B̂Â.
From the quantization rule above, we obtain two useful rules. The first is obtained by setting A = x in the Poisson brackets, which gives

{x, B} = ∂B/∂p.    (72)

Using the quantization rule, we obtain

∂B/∂p → (1/iħ)[x̂, B̂].    (73)

In an analogous way, if we set B = p, we find

{A, p} = ∂A/∂x,    (74)

and using the quantization rule, we obtain

∂A/∂x → (1/iħ)[Â, p̂].    (75)

We remark that x̂ and p̂ on the right-hand sides of (73) and (75) are quantum operators representing the position and the momentum of a particle, and we recall that, according to quantum mechanics, [x̂, p̂] = iħ.
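These rules can be illustrated numerically. The sketch below is ours: we build truncated matrices for x and p in the harmonic-oscillator basis, with ħ = m = ω = 1, and verify that [x, p] = iħ and that (1/iħ)[x, p²] reproduces ∂p²/∂p = 2p, away from the truncation edge, where the finite basis necessarily spoils the identities.

```python
import math

N = 8            # basis size; identities hold away from the truncation edge
hbar = 1.0

def mat(fn):
    return [[fn(i, j) for j in range(N)] for i in range(N)]

# ladder operator a|n> = sqrt(n)|n-1>, so a[i][j] = sqrt(j) delta_{i,j-1}
a  = mat(lambda i, j: complex(math.sqrt(j)) if j == i + 1 else 0j)
ad = mat(lambda i, j: complex(math.sqrt(i)) if i == j + 1 else 0j)

def add(A, B):  return [[A[i][j] + B[i][j] for j in range(N)] for i in range(N)]
def scal(c, A): return [[c * A[i][j] for j in range(N)] for i in range(N)]
def mul(A, B):  return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]
def comm(A, B): return add(mul(A, B), scal(-1, mul(B, A)))

x = scal(math.sqrt(hbar / 2), add(a, ad))                 # x = sqrt(hbar/2mw)(a + a†)
p = scal(1j * math.sqrt(hbar / 2), add(ad, scal(-1, a)))  # p = i sqrt(hbar mw/2)(a† - a)

C1 = comm(x, p)          # equals i*hbar on the diagonal, except the last entry
C2 = comm(x, mul(p, p))  # (1/i hbar)[x, p^2] equals 2p away from the edge
```

The last diagonal element of C1 deviates from iħ, a well-known artifact of truncating the infinite-dimensional operators to an N×N block.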
The last two rules are useful in the transformation of a differential equation such as the FPK equation into a quantum equation. It should be remarked however that the equation obtained by this procedure is not a mathematical derivation of the quantum equation from the classical equation. In fact, the opposite is true. From the quantum equation one reaches the classical equation by taking the classical limit. Thus, the quantization rules should be used as a guide to find a quantum equation which at the end should be introduced as a postulate.
It is usual to use the hat symbol to denote a quantum operator as we have done above. But from now on we will drop the hat symbol and denote an operator by a letter without the hat. Thus the position and momentum operator, for instance, will be denoted by x and p.
Let us consider the FPK equation in the form (8). According to the quantization rules, the quantum evolution equation is

∂ρ/∂t = (1/iħ)[H, ρ] − (1/iħ)[x, J],    (76)

where now ρ, H, and J are quantum operators. Since quantum operators can be represented by matrices, most of their properties are better understood if stated in terms of matrices. For instance, the density operator ρ, which is the quantum version of the probability density distribution, has the following property: its diagonal elements are nonnegative and the sum of the diagonal elements, its trace, equals unity,

Tr ρ = 1.    (77)

The operator H is the quantum energy function, or the Hamiltonian, given by

H = p²/2m + V(x),    (78)

where V is a function of x. We remark that without J the equation reduces to the quantum Liouville equation of quantum statistical mechanics. The property (77) is the analogue of the normalization of the probability density used before and thus should be preserved in time. To see that this is the case, let us take the trace of the right-hand side of equation (76). There are two terms to be considered, and both vanish because each one is the trace of a commutator, and the trace of a commutator vanishes. Therefore, the left-hand side, which is the time derivative of the trace of ρ, vanishes, and the trace of ρ remains constant in time.
The trace of a commutator vanishes because Tr(AB) = Tr(BA). This is a particular case of the cyclic property of the trace, Tr(ABC) = Tr(BCA). This cyclic property allows us to write the following property, which we will employ further on:

Tr(A[B, C]) = Tr([A, B]C).    (79)
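As a minimal illustration (ours), the vanishing of the trace of a commutator, the cyclic property, and the identity (79) can all be checked with small fixed complex matrices:

```python
# 2x2 complex matrices; any choice works, since the properties are identities
A = [[1 + 2j, 3j], [0.5, -1 + 1j]]
B = [[2, -1j], [1 + 1j, 4]]
C = [[0, 1], [1j, 2 - 3j]]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def sub(X, Y):
    n = len(X)
    return [[X[i][j] - Y[i][j] for j in range(n)] for i in range(n)]

def comm(X, Y):
    return sub(mul(X, Y), mul(Y, X))

def tr(X):
    return sum(X[i][i] for i in range(len(X)))

t_comm   = tr(comm(A, B))                                  # Tr[A,B] = 0
t_cyclic = tr(mul(mul(A, B), C)) - tr(mul(mul(B, C), A))   # Tr(ABC) = Tr(BCA)
t_79     = tr(mul(A, comm(B, C))) - tr(mul(comm(A, B), C)) # Tr(A[B,C]) = Tr([A,B]C)
```

All three quantities vanish up to floating-point roundoff.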

IX. ENERGY AND ENTROPY
The average U = ⟨H⟩ of the energy function H with respect to the density operator ρ is given by

U = Tr(Hρ),

and is the quantum analog of the integral in (17). Deriving it with respect to time and using the evolution equation (76), we find

dU/dt = −(1/iħ) Tr(H[x, J]),

where the term involving the commutator [H, ρ] vanishes owing to the property (79). Using again this same property, we get

dU/dt = −(1/iħ) Tr([H, x]J) = (1/m) Tr(pJ),

since [H, x] = −iħp/m. The right-hand side is interpreted as the heat flux into the system, and

Φ_q = (1/m) Tr(pJ).    (84)

The definition of entropy for the quantum case is that introduced by von Neumann, and corresponds to the extension of the Gibbs entropy to the quantum case,

S = −k Tr(ρ ln ρ).

Deriving this equation with respect to time, we find

dS/dt = −k Tr[(∂ρ/∂t) ln ρ],

where the term involving the derivative of ln ρ vanishes in view of the normalization property (77). Using the evolution equation, we find

dS/dt = −(k/iħ) Tr( J [x, ln ρ] ),    (88)

where again the term involving the commutator [H, ρ] vanishes owing to the property (79).

X. RATE OF ENTROPY PRODUCTION
The right-hand side of equation (88) is not equal to the flux of entropy because the entropy is not a conserved quantity. There is another source of entropy, which comes from dissipation inside the system. Thus, as before, the time variation of the entropy has two terms,

dS/dt = Π − Φ,    (89)

where Π is the rate of entropy production and Φ is the flux of entropy from the system to the outside. Guided by the classical version, the production of entropy is defined as follows. The quantum irreversible current J has not been defined yet, but it is expressed in terms of the density operator, that is, J(ρ). Let us denote by ρ0 the quantity such that J(ρ0) = 0. If the commutator of ρ0 with H also vanishes, then ρ0 is identified as the density operator at thermodynamic equilibrium. However, here we do not demand that this commutator vanishes. The rate of entropy production is defined as

Π = −(k/iħ) Tr( J [x, ln ρ − ln ρ0] ),

and it becomes clear that Π vanishes whenever J vanishes. Taking into account (88) and (89), the expression for the flux of entropy Φ is

Φ = (k/iħ) Tr( J [x, ln ρ0] ).

Let us assume that the irreversible current J, which has not yet been specified, is so defined that the evolution equation describes a system in contact with a heat reservoir at temperature T. In thermodynamic equilibrium J vanishes, and ρ0 should be identified with the Gibbs canonical distribution, which in the quantum case reads

ρ0 = (1/Z) e^{−βH},

where Z = Tr(e^{−βH}).
Replacing ρ0 in the expressions for Π and Φ, we find

Π = −(k/iħ) Tr( J [x, ln ρ + βH] ),

Φ = −(βk/iħ) Tr( J [x, H] ) = −(1/mT) Tr(pJ),    (95)

where we used [x, H] = iħp/m and βk = 1/T. Now let us compare equations (95) and (84). We see that Φ and Φ_q are related by

Φ = −Φ_q/T,

which is the thermodynamic relation that should exist between the flux of entropy Φ and the heat flux Φ_q when a system is in contact with a heat reservoir at a temperature T. Other thermodynamic relations obtained in the classical case, such as those given by equations (39) and (40), can also be shown to be valid in the quantum case.

XI. IRREVERSIBLE CURRENT
The quantum irreversible current J has not yet been specified. Here we choose it by applying the rules of canonical quantization to the expression (9). The form chosen for the irreversible current is

J = (1/2)(Dρ + ρD†) − (Γ/2iħ)[x, ρ],    (97)

where D is a quantum operator representing the dissipative force and Γ is a real constant, as in the classical case. Notice that we have written a symmetrized form for the product of the dissipative force and the density operator. In one of the products, we have used the Hermitian conjugate of D, denoted D†, so that the whole expression is Hermitian.
The matrix A † that represents the Hermitian conjugate of an operator is obtained from the matrix A that represent the operator by transposing the elements of the matrix and taking the complex conjugate of each element. If A ij and (A † ) ij represent an element of these two matrices then (A † ) ij = (A ji ) * . A Hermitian operator is the operator which is equal to its Hermitian conjugate. An important property of such an operator is that its eigenvalue are real.
If we wish to describe a system that at long times reaches thermodynamic equilibrium, then D and Γ should be related in such a way that J vanishes when ρ is the equilibrium density operator ρ_e. Imposing the vanishing of J when ρ is replaced by ρ_e, we find the condition

(1/2)(Dρ_e + ρ_e D†) = (Γ/2iħ)[x, ρ_e].    (98)

The solution for D is

D = (Γ/2iħ)[x, ρ_e] ρ_e^{−1},    (99)

which is understood as the relation between dissipation, described by D, and noise or fluctuations, described by Γ. In equilibrium there must be a relation between dissipation and fluctuations. That (99) is a solution can be verified by substitution, using the property that the Hermitian conjugate of a product of two operators equals the product of the conjugates of each operator in the inverse order. In the present case, (ρ_e D)† = D† ρ_e, because ρ_e is Hermitian.
Next we seek a J that describes the contact of the system with a heat reservoir. In this case

ρ_e = (1/Z) e^{−βH},    (100)

which, replaced in (99), gives

D = (Γ/2iħ)( x − e^{−βH} x e^{βH} ).    (101)

This form of dissipation is certainly not the form of the classical dissipation found above, which is proportional to the momentum. At high temperatures, however, it is. If we expand the term between parentheses on the right-hand side of (101) up to terms of order β, using e^{−βH} x e^{βH} = x − β[H, x] + O(β²) and [H, x] = −iħp/m, we find D = −γp, where

γ = Γβ/2m.    (102)

For an arbitrary temperature, the dissipation, according to the present approach, is not proportional to the momentum and is given by (101), which we write, by using (102), as

D = −γ g,    (103)

where

g = (m/iħβ)( e^{−βH} x e^{βH} − x ).    (104)

Replacing the irreversible current (97) into the evolution equation (76), we may write it in the more explicit form

∂ρ/∂t = (1/iħ)[H, ρ] − (1/2iħ)[x, Dρ + ρD†] − (Γ/2ħ²)[x, [x, ρ]],    (105)

which we call the quantum FPK equation.

For a quantum harmonic oscillator the energy function is
where ω is the frequency of oscillation. To find the solution of the quantum FPK equation, from which we may determine the thermodynamic properties, it is necessary to know the explicit expression of the dissipative force D = −γg; that is, we need to know g as a function of x and p. For the harmonic oscillator we show in appendix C that where a and b are real numbers given by A solution of the quantum FPK equation (105) can be obtained by a method similar to that used in the classical case, which is to assume a solution of the form where a, b, and c are real constants that depend on time.
Here we limit ourselves to writing down the equations for the covariances and determining their asymptotic values, which are the values at thermodynamic equilibrium. The time-dependent solution can be found in reference [30]. Multiplying the quantum FPK equation successively by x², p², and xp, and taking the trace, we obtain the following equations for the covariances The equation for ⟨px⟩ is not needed because ⟨xp⟩ − ⟨px⟩ = iℏ.
At the stationary state, which is a state of thermodynamic equilibrium we find From these results one reaches the expected expression for the average energy of a quantum oscillator, We remark that the probability distribution approaches the equilibrium probability distribution where H is the quantum energy function (106), and the system thermalizes properly.
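The expected expression for the average energy of a quantum oscillator is the standard result U = (ℏω/2) coth(ℏω/2kT). A small numerical check of its limits (an illustrative sketch, working in units where ℏ = k = 1):

```python
import numpy as np

hbar = 1.0     # units with ħ = 1 (illustrative choice)
k = 1.0        # Boltzmann constant set to 1
omega = 1.0    # oscillator frequency

def avg_energy(T):
    """Equilibrium average energy of a quantum harmonic oscillator,
    U = (ħω/2) coth(ħω / 2kT)."""
    x = hbar * omega / (2 * k * T)
    return (hbar * omega / 2) / np.tanh(x)

# Low-temperature limit: U -> ħω/2 (zero-point energy)
assert np.isclose(avg_energy(1e-3), hbar * omega / 2, rtol=1e-6)

# High-temperature limit: U -> kT (classical equipartition)
T = 100.0
assert np.isclose(avg_energy(T), k * T, rtol=1e-3)
```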

XIII. MULTIPLE DEGREES OF FREEDOM
Up to this point, we have considered systems with just one degree of freedom. Here we wish to consider the case of a system with multiple degrees of freedom. The derivations of the results for the present case parallel those obtained for one degree of freedom and will not be shown in full detail. We restrict ourselves to the classical case; the quantum case can be obtained in a way similar to that of one degree of freedom and can be found in reference [30].
An appropriate treatment of a system with many degrees of freedom begins with the generalization of the FPK equation (44) to this case. As we have seen, this equation describes a system in contact with a heat reservoir and subject to an external force. The generalization that we give below is the one appropriate to describe a system in contact with multiple reservoirs at distinct temperatures and subject to several forces. We denote by x_i a Cartesian component of the position and by p_i the respective component of the momentum associated with a certain degree of freedom. The energy function is a sum of a kinetic energy and a potential energy, The FPK equation, which governs the time evolution of the probability density ρ, now reads where F_i^e are the Cartesian components of the external force, which may be nonconservative and time dependent, and are the components of the dissipative probability current. We choose the dissipative force D_i to be of the usual form D_i = −γp_i and Γ_i = 2mkT_i, so that we may interpret the FPK equation as describing a system in contact with several heat reservoirs at temperatures T_i. Therefore, In the absence of external forces, and if all heat baths have the same temperature T_i = T, the stationary state is a state of thermodynamic equilibrium because in this case J_i vanishes for all i. Indeed, if we replace the Gibbs probability distribution where β = 1/kT, in the expression for J_i, we see that it vanishes. We remark that the Poisson brackets also vanish.
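The FPK equation above corresponds to a set of underdamped Langevin equations, one per degree of freedom. As an illustrative sketch (not from the original text; the harmonic potential and all parameter values are arbitrary choices), the following simulation checks that two uncoupled degrees of freedom, each attached to its own reservoir, equilibrate at their own temperatures, ⟨p_i²⟩/m = kT_i:

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameters (units with m = k_B = 1); two uncoupled degrees of freedom,
# each attached to its own reservoir at temperature T_i.
m, gamma, K = 1.0, 1.0, 1.0
T = np.array([1.0, 2.0])                      # reservoir temperatures T_1, T_2
dt, nsteps, nburn = 0.01, 500_000, 50_000

x = np.zeros(2)
p = np.zeros(2)
noise_amp = np.sqrt(2 * m * gamma * T * dt)   # noise strength from Γ_i = 2mγkT_i

p2_sum = np.zeros(2)
count = 0
for step in range(nsteps):
    # Euler-Maruyama step of the Langevin equations underlying the FPK equation
    x += (p / m) * dt
    p += (-K * x - gamma * p) * dt + noise_amp * rng.normal(size=2)
    if step >= nburn:
        p2_sum += p**2
        count += 1

p2_avg = p2_sum / count
# Each degree of freedom thermalizes at its own temperature: <p_i^2>/m = kT_i
assert np.allclose(p2_avg / m, T, rtol=0.1)
```

The generous tolerance accounts for the statistical error of the time average and the O(dt) bias of the Euler-Maruyama discretization.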
The time variation of the energy U = H has the same form as equation (45), but now Φq is a sum of the heat fluxes coming from each reservoir, that is, where each heat flux has the form (21), with J replaced by J_i, Using the relation ∂H/∂p_i = p_i/m and performing an integration by parts, the heat flux can be written as It becomes clear that if kT_i/2 is larger than the average kinetic energy ⟨p_i²⟩/2m, then Φqi > 0 and heat flows into the system; otherwise, heat flows from the system to the heat reservoir.
The expression for the power Φw is similar to that of equation (46), but now there is a sum over all components of the force Using again the relation ∂H/∂p_i = p_i/m and performing an integration by parts, the power can be written as It is useful to think of the forces F_i^e as being exerted by an external agent, which is a power device. The role played by the power device in the transfer of mechanical work is analogous to the role played by the heat reservoir in the transfer of heat. We see from (127) that the power has the usual form of a force multiplied by a velocity. If F_i^e and p_i have the same sign, work is performed by the power device on the system; otherwise, the system performs work on the power device.
The heat flux and the power in (122) may be understood as functions of time, so that we may write Φq = dQ/dt and Φw = dW/dt, in which case equation (122) reduces to the form which is the usual way of writing the conservation of energy. However, it should be remarked that dQ and dW are not in general exact differentials, although dU is. The concepts of exact and inexact differentials are discussed in appendix D. The variation of entropy with time is where the rate of entropy production Π and the entropy flux Φ are generalizations of equations (31) and (35) to the present case, where the integration extends over the space of states, and In view of equation (125), we see that the flux of entropy is related to the heat fluxes by Using this relation, we may again derive all the results involving the free energy obtained in section VI, including the inequality (52).

XIV. NONEQUILIBRIUM STEADY STATE
Here we wish to show by an example that the present approach is indeed capable of describing systems displaying nonequilibrium steady states. That is, at long times the system reaches a stationary state in which the irreversible currents are nonzero and entropy is permanently being created. One way of setting a system in a nonequilibrium steady state is to place it in contact with heat reservoirs at different temperatures. Another possibility is to place it under the action of a power device. The two possibilities are embodied in the development of the previous section. We examine a system with two degrees of freedom. The potential energy is harmonic and the energy function is so that the conservative forces are linear, F1 = −Kx1 + Lx2 and F2 = −Kx2 + Lx1. In addition to the conservative forces, the system is under the action of a nonconservative external force given by According to equation (127), which is minus the power performed by the power device. Each degree of freedom is understood as coupled to one of the two heat reservoirs, at temperatures T1 and T2. The heat fluxes associated with each reservoir are given by (125) and are From the two heat fluxes we determine the entropy flux by means of relation (132) From now on we wish to determine these quantities in the steady state. In the steady state Π = Φ and, in view of relation (138), it suffices to determine the heat fluxes Φqi and the power Φw. To find these quantities we need the covariances ⟨x_i p_j⟩ and ⟨p_i²⟩ at the steady state. Therefore, we should solve the FPK equation (118) with the energy function H given by (133) and the external force given by (134).
In view of the fact that the forces, conservative and nonconservative, are linear, it is possible to solve the FPK equation exactly. The solution is carried out in the appendix E where the covariances at the stationary state are determined. Replacing the covariances in the expressions (135), (136), and (137), we find where .
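Since the stationary state of a linear Langevin system is Gaussian, the steady-state covariances can also be obtained numerically by solving a Lyapunov equation. The sketch below assumes a concrete antisymmetric form for the nonconservative force, F1e = c x2 and F2e = −c x1, which is a hypothetical choice made for illustration (the explicit force (134) is not reproduced in the text above); it then checks the steady-state energy balance Φw = Φq1 + Φq2 and the nonnegativity of the entropy production:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Parameters (units with m = k_B = 1); the antisymmetric nonconservative
# force F1e = c*x2, F2e = -c*x1 is an assumed concrete choice.
m, gamma, K, L, c = 1.0, 1.0, 1.0, 0.5, 0.2
T1, T2 = 2.0, 1.0

# Drift matrix of the linear Langevin system, state z = (x1, x2, p1, p2)
A = np.array([
    [0.0,    0.0,    1/m,     0.0],
    [0.0,    0.0,    0.0,     1/m],
    [-K,     L + c,  -gamma,  0.0],
    [L - c,  -K,     0.0,     -gamma],
])
# Noise matrix: Γ_i = 2mγkT_i acts on the momenta only
B = np.diag([0.0, 0.0, 2*m*gamma*T1, 2*m*gamma*T2])

# Stationary covariances C solve the Lyapunov equation A C + C A^T + B = 0
C = solve_continuous_lyapunov(A, -B)

# Heat fluxes, Φ_qi = γ(kT_i - <p_i^2>/m), and power delivered by the device
phi_q1 = gamma * (T1 - C[2, 2] / m)
phi_q2 = gamma * (T2 - C[3, 3] / m)
phi_in = c * (C[1, 2] - C[0, 3]) / m      # <F1e p1 + F2e p2>/m

# Energy balance at the steady state: Φ_w = -Φ_in = Φ_q1 + Φ_q2
phi_w = -phi_in
assert np.isclose(phi_w, phi_q1 + phi_q2)

# Entropy production equals the entropy flux and is nonnegative
Pi = -(phi_q1 / T1 + phi_q2 / T2)
assert Pi >= -1e-12
```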
The allowed values of c are those for which the denominator is positive.
In the steady state the production of entropy equals the entropy flux, which, from (138), is given by and is clearly nonnegative. Let us analyze the results just found for L > 0. The various possibilities for the heat fluxes and the power are shown in table I. In all cases with c ≠ 0, except one, energy is being dissipated; that is, work is performed on the system (Φw < 0), which in turn releases it in the form of heat to one or both heat reservoirs. The exception is the case in which the system performs work (Φw > 0), in which case heat flows from the hotter reservoir to the colder reservoir through the system, and the whole system functions as a heat engine.

XV. PATH INTEGRAL
The approach to stochastic thermodynamics that we have developed here is based on the FPK equation, understood as an equation that governs the time evolution of the probability distribution ρ. The solution of the evolution equation gives ρ at any instant of time. For convenience we denote a state by ξ, understood as the collection of positions and momenta of the particles of the system.

TABLE I: Heat fluxes and power for the system defined by the energy function (133) and by the external force (134), where ∆T = T1 − T2. The convention for the fluxes is: if Φq1 is positive, heat flows from reservoir 1 to the system; if Φq2 is positive, heat flows from reservoir 2 to the system; if Φw is positive, the system performs work on the external device. Notice that Φw = Φq1 + Φq2. We consider here L > 0, |c| < L, and T1 ≥ T2.
Let us consider now a discretized trajectory, that is, a trajectory for which the system is at state ξ0 at an initial time t0, at ξ1 at time t1, at ξ2 at time t2, ..., and at ξn at the final time tn. The probability of the occurrence of this discretized trajectory is the product of the conditional probabilities P(ξℓ|ξℓ−1) from ℓ = 1 to ℓ = n, multiplied by the probability P(ξ0, t0). Omitting the reference to the instants of time, the trajectory probability is P(ξn|ξn−1) . . . P(ξ2|ξ1)P(ξ1|ξ0)P(ξ0).
From now on we consider that the time intervals between two successive instants of time are the same and equal to τ . It is understood that τ is small enough so that the trajectory approaches a continuous trajectory. Generally speaking the probability of a trajectory is a joint probability, which we denote by P (ξ n , ξ n−1 , . . . , ξ 2 , ξ 1 , ξ 0 ).
The identification of (146) with (145) defines the type of stochastic dynamics associated with the name of Markov, and the approach we are using here, with the FPK equation as the evolution equation, is thus a Markovian stochastic dynamics. The probability distribution (146) is a joint probability distribution. If we integrate over all variables except one of them, we find the probability of that variable. For instance, if we integrate over ξ0, ξ1, . . . , ξn−1, we find the probability of ξn = ξ at time tn = t, which is ρ(ξ, t), that is, ρ(ξ, t) = ∫···∫ P(ξ, ξn−1, ..., ξ0) dξn−1 ... dξ0.
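The chain structure of the trajectory probability and the marginalization leading to the one-time distribution can be illustrated with a discrete-state Markov chain (a toy stand-in for the continuous dynamics, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small discrete Markov chain as a stand-in for the discretized trajectory:
# P[b, a] is the conditional probability P(ξ_ℓ = b | ξ_{ℓ-1} = a).
n_states, n_steps = 3, 4
P = rng.random((n_states, n_states))
P /= P.sum(axis=0)                    # each column sums to 1
p0 = np.ones(n_states) / n_states     # initial distribution P(ξ_0)

# Probability of one particular trajectory:
# P(ξ_n|ξ_{n-1}) ... P(ξ_1|ξ_0) P(ξ_0)
traj = [0, 2, 1, 1, 0]
prob = p0[traj[0]]
for prev, nxt in zip(traj[:-1], traj[1:]):
    prob *= P[nxt, prev]
assert 0.0 <= prob <= 1.0

# Marginalizing the joint distribution over ξ_0,...,ξ_{n-1}, one step at a
# time, recovers the one-time distribution ρ(ξ_n) = P^n p0.
rho = p0.copy()
for _ in range(n_steps):
    rho = P @ rho

joint = p0.copy()
for _ in range(n_steps):
    weighted = P * joint              # weighted[b, a] = P[b,a] * joint[a]
    joint = weighted.sum(axis=1)      # integrate out the previous state

assert np.allclose(joint, rho)
assert np.isclose(rho.sum(), 1.0)     # probability is conserved
```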
In which circumstance should we use the path probability (146)? If we wish to find the average U of the energy function H(ξ) at time t, for instance, it suffices to use the probability distribution ρ(ξ, t). However, if we wish to find the average of the mechanical work, we should use the path probability because the work is a path integral.
Let L be the work, along a certain trajectory, of the force with components f_i, where the index 'path' serves as a reminder that the integral is taken along a certain trajectory. In discretized form, the path integral is written as the sum where f_i^ℓ is the value of f_i at the ℓ-th step and ∆x_i^ℓ is the increment in x_i at the ℓ-th step. In this discretized form we see clearly that the path integral depends on ξ0, ξ1, . . . , ξn, and to find its average we should use the path probability (146). This amounts to multiplying the right-hand side of (149) by the right-hand side of (146) and integrating over all variables ξ0, ξ1, . . . , ξn. The result of this procedure is indicated by an index 'path' on the average signs. Therefore, the average of L, which we call W, is written as Although we call both L and W work, it should be understood that L is the actual work and W its average. The path integral (148) can be written as a time integral of the power, as is well known. A trajectory may be defined parametrically by giving ξ_i, and thus x_i and p_i, as functions of a parameter, which we take to be the time t. In terms of this parameter, the integral (148) becomes a time integral, where we have replaced dx_i/dt by the velocity p_i/m. This expression is written as where φ is the power at time t, given by Taking the average of expression (152) we find where on the right-hand side the average is the usual average taken with the probability density ρ(ξ, t) at time t, because φ depends only on ξ at time t. Thus the average of a path integral is transformed into an average over the ordinary probability density.
The integrand on the right-hand side of (154) is understood as the average power, which coincides with the power of the external force given by the equivalent forms (127) or (126), if we recall that Since W is the average of L, we may write The result (156), or its equivalent form (154), where L and φ are given by (148) and (153), respectively, can immediately be generalized by replacing f_i(ξ) by any other function of ξ. Let us suppose that it is replaced by J_i/ρ, where J_i is the irreversible current given by (119). The path integral of this quantity is and the associated flux, according to the result above, is which is the heat flux as given by (124). The quantity is thus the heat exchanged between the two instants of time, and according to the results above may be written as the path average Let us consider a system in contact with a heat bath at temperature T that is acted on by external forces during a certain interval of time. The following equality relating the free energy to the work performed during this interval has been shown to be valid [6], Therefore, if the right-hand side of equation (162) is measured, we may obtain ∆F. Using the inequality ⟨e^η⟩ ≥ e^⟨η⟩ (Jensen's inequality), valid for any random variable η, we find which should be compared with equation (52). Before we end this section it is appropriate to discuss the experimental measurement of the several quantities presented in the theory. The quantities that are measurable are those that we call state functions, that is, quantities that are functions of the random variables (in the present case the positions and momenta) and are themselves random variables. An experimental result obtained for a state function E, be it the value of one trial or the arithmetic average of several trials, should be compared with the average predicted by the theory, which is written as such as the energy, or as a path average, as is the case of the mechanical work.
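The inequality ⟨e^η⟩ ≥ e^⟨η⟩ applied to the equality (162) can be illustrated numerically. The sketch below uses hypothetical Gaussian work samples with β = 1 and arbitrarily chosen mean and variance; for Gaussian work, the Jarzynski equality gives ∆F = ⟨W⟩ − βσ²/2 exactly, so the estimate can be checked against that closed form:

```python
import numpy as np

rng = np.random.default_rng(7)
beta = 1.0

# Hypothetical Gaussian work samples (mean and spread chosen arbitrarily)
W = rng.normal(loc=2.0, scale=1.0, size=200_000)

# Jarzynski equality: ΔF = -(1/β) ln <exp(-βW)>
dF = -np.log(np.mean(np.exp(-beta * W))) / beta

# Jensen's inequality <e^η> >= e^<η> gives the second-law bound <W> >= ΔF
assert W.mean() >= dF

# For Gaussian work, ΔF = <W> - βσ²/2 exactly; the estimate should agree
assert np.isclose(dF, W.mean() - beta * W.var() / 2, atol=0.05)
```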
Let us consider the case of the entropy, which is and which is sometimes written as Although one may write it in this form, this expression is merely an abbreviation of the expression on the right-hand side of (165) and cannot be understood as the average of a state function, simply because −k ln ρ is not a state function. Although −k ln ρ is sometimes called an instantaneous entropy, it is not a measurable quantity. This point can be better understood if we try to calculate the entropy from a Monte Carlo simulation. One immediately realizes that it is impossible to determine ln ρ along a Monte Carlo run, and an alternative must be used. The observation made above with respect to the average (166) extends to the average in (161), because J_i/ρ is not a state function and is not a measurable quantity. This seems paradoxical because no one denies that the heat Q is measurable. However, a moment of reflection reveals that heat is measured through the work dissipated, and not as the quantity Ψ above.

XVI. DISCUSSION AND CONCLUSION
We have developed an approach to stochastic thermodynamics based on the FPK equation, which governs the time evolution of the probability distribution. The main features of the approach, in addition to the evolution equation, are the assignment of an energy function, the definition of entropy, and the introduction of an expression for the rate of entropy production. According to the approach, these quantities are well defined in equilibrium as well as out of equilibrium. This is in contrast to other thermodynamic quantities, such as the temperature, which is defined only when the system is in thermodynamic equilibrium.
The evolution equation contains the mechanisms of dissipation and of stochastic fluctuation, or noise, which lead the system toward equilibrium if an appropriate relation exists between dissipation and noise. The mechanism is included in the irreversible current through the term containing the dissipative force and the term containing the quantity Γ, which is a measure of the noise. If thermodynamic equilibrium sets in, the irreversible current vanishes. Out of equilibrium, the irreversible current is nonzero and the production of entropy, which is related to the square of the irreversible currents, is greater than zero. The rate of entropy production is thus a measure of the deviation of a system from thermodynamic equilibrium and of its irreversibility.
We have considered systems with a continuous space of states, in which case the appropriate evolution equation is the FPK equation. However, the present approach can be extended to a discrete space of states, in which case the evolution equation is called a master equation [29,31]. It can be applied to systems of interacting particles of different species, including reactions among them [31]. We did not treat the case where the parameters appearing in the evolution equation depend on time, but the present approach can also be applied, for instance, to the case where the temperature oscillates periodically in time [39,40]. In this case, at long times the system may not properly reach a stationary state, in the sense of being independent of time, but may reach a state with a probability distribution that oscillates in time.
Stochastic thermodynamics is sometimes described as a discipline whose quantities are defined at the level of single trajectories. This denomination emphasizes the fluctuation aspect of the theory, which is a relevant feature in systems with few degrees of freedom, the main application of the theory. In this respect the theory resembles statistical mechanics, which incorporates fluctuations and may also be applied to small systems. Thus an alternative name for the discipline would be stochastic mechanics, avoiding the term thermodynamics, which is usually associated with macroscopic systems.
The emphasis on trajectories and path integrals is a distinguishing feature of some treatments, as used for instance in the Jarzynski equality (162). In the present approach, on the other hand, the emphasis rests on the fluxes and currents of various types, but a relationship between fluxes and path integrals exists, as shown in section XV, revealing the equivalence between the two approaches. The present approach also emphasizes the connection with the laws of thermodynamics, particularly the second law, expressed by the nonnegativity of the rate of entropy production.
The present approach to quantum stochastic thermodynamics is based on a quantum evolution equation which is a canonical quantization of the FPK equation. It differs from other approaches, such as those based on the Lindblad operators [41]. However, the present quantum FPK equation bears similarity to the quantum master equation derived by Dekker [42] and by Caldeira and Leggett [43,44]. The main feature of the quantum FPK equation that distinguishes it from other approaches is that it is centered on the irreversible density current operator, the analog of the classical irreversible probability current, which plays a fundamental role in defining the fluxes of various types as well as the rate of entropy production. The similarity of the quantum evolution equation to its classical counterpart is useful because the generalization of the concepts of classical stochastic thermodynamics, such as those associated with the probability current, to the quantum case becomes easier. A final distinguishing feature is that a system described by the quantum FPK equation thermalizes properly. That is, at long times the system approaches the equilibrium state, provided, of course, that the relation (99) between dissipation and noise is obeyed.
⟨xp⟩ = −c/(ab − c²). (B6) Inverting these relations, we find It remains now to determine the covariances as functions of time. To find the equations for the covariances we proceed as follows. We multiply both sides of the FPK equation successively by x², p², and xp, and integrate in x and p. Performing appropriate integrations by parts, we find The stationary solution of these equations is ⟨x²⟩ = 1/Kβ, ⟨p²⟩ = m/β, and ⟨xp⟩ = 0. Taking these results into account, we define variables that are the deviations of the covariances from their stationary values, as follows: A = ⟨x²⟩ − 1/Kβ, B = ⟨p²⟩ − m/β, and C = ⟨xp⟩. These variables obey a set of linear equations, which we write in matrix form The solution for each variable is of the type e^{λt}, where λ is an eigenvalue of the square matrix above. The eigenvalues are and are all negative. The general solution is and the coefficients are not all independent but are related by Thus only three of them, say A1, A2, and A3, can be chosen as independent, and they are determined by the initial conditions. It is worth determining the solution for long times. In this case the solution is dominated by the largest eigenvalue, which is λ1. The covariances are from which we find the three parameters where a1 = K²β²A1, b1 = β²B1/m², and c1 = Kβ²C1/m.

Equation (D1) can be written in simplified form
where dx_i are the differentials of the variables x_i and df is the differential of f. Since f is a function of the x_i, the following relation is valid Now we raise the following question. Let g be a function of t, let the x_i depend on time as before, and let us assume that where the g_i are given functions of the variables x_j. The question is whether g could depend on time only through the variables x_i, that is, whether If that is possible, then according to the reasoning above the given functions g_i of the variables x_j must fulfill the condition for all pairs i, j. If this condition is not satisfied, it is not possible to write g as a function of the x_i. In this case, if we write (D5) in the simplified form we say that dg is not an exact differential.

b⟨x2 p1⟩ + γ⟨p1²⟩ = γmkT1, (E17)

a⟨x1 p2⟩ + γ⟨p2²⟩ = γmkT2. (E18)

A straightforward calculation leads us to the result

⟨x1 p2⟩ = −⟨x2 p1⟩ = mγk(bT2 − aT1)/[2(mγ²K + ab)], (E19)

⟨x1 x2⟩ = −k(aT1 + bT2)/[2(K² − ab)]. (E20)

We remark that, as ⟨x1²⟩, ⟨x2²⟩, ⟨p1²⟩, and ⟨p2²⟩ must be nonnegative, the following conditions should be fulfilled It is worth mentioning that the probability density can also be determined. On account of the linearity of the FPK equation in the variables x_i and p_i, the solution is a multivariate Gaussian distribution, which we write as where we use the abbreviations ξ1 = x1, ξ2 = x2, ξ3 = p1, and ξ4 = p2. The matrix L, whose elements are L_ij, is the inverse of the covariance matrix C, whose elements we have just determined: C11 = ⟨x1²⟩, C22 = ⟨x2²⟩, C12 = ⟨x1 x2⟩, C33 = ⟨p1²⟩, C44 = ⟨p2²⟩, C34 = ⟨p1 p2⟩, C13 = ⟨x1 p1⟩, C14 = ⟨x1 p2⟩, C23 = ⟨x2 p1⟩, and C24 = ⟨x2 p2⟩. The expression given by equation (E26) is the probability distribution describing the nonequilibrium stationary state of the present problem.
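The exactness condition of appendix D, ∂g_i/∂x_j = ∂g_j/∂x_i for all pairs i, j, can be checked symbolically. The sketch below (using SymPy; the example functions are illustrative choices, not taken from the text) verifies one exact and one inexact case in two variables:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Exact case: g1 = 2xy and g2 = x^2 satisfy ∂g1/∂y = ∂g2/∂x,
# since both derive from g = x^2 * y
g1, g2 = 2*x*y, x**2
assert sp.simplify(sp.diff(g1, y) - sp.diff(g2, x)) == 0

# Recover g by integrating g1 in x, then check ∂g/∂y = g2
g = sp.integrate(g1, x)
assert sp.simplify(sp.diff(g, y) - g2) == 0

# Inexact case: g1 = -y, g2 = x violate the condition,
# so "dg = g1 dx + g2 dy" is not an exact differential
h1, h2 = -y, x
assert sp.simplify(sp.diff(h1, y) - sp.diff(h2, x)) != 0
```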