Pedagogical introduction to equilibrium Green's functions: condensed matter examples with numerical implementations

The Green's function method has applications in several fields of physics, from classical differential equations to quantum many-body problems. In the quantum context, Green's functions are correlation functions from which it is possible to extract information about the system under study, such as the density of states, relaxation times and response functions. Despite its power and versatility, the method is known as laborious and sometimes cumbersome. Here we introduce the equilibrium Green's functions and the equation-of-motion technique, exemplifying the method in discrete lattices of non-interacting electrons. We start with simple models, such as the two-site molecule, the infinite and semi-infinite one-dimensional chains, and the two-dimensional ladder. Numerical implementations are developed via the recursive Green's function, implemented in Julia, an open-source, efficient and easy-to-learn scientific language. We also present a new variation of the surface recursive Green's function method, which can be of interest when simulating the properties of surface and bulk simultaneously.


I. INTRODUCTION
The Green's function method is a powerful mathematical tool to solve linear differential equations. These functions were named after the English miller, physicist and mathematician George Green (1793-1841) [1][2][3]. His seminal work "An essay on the application of mathematical analysis to the theories of electricity and magnetism" (1828) [4] developed a theory of partial differential equations with general boundary conditions, introducing the so-called Green's theorem (also known today as Green's second identity¹) and the Green's functions [5][6][7]. This essay was self-published by Green for private distribution among family and friends, and was later rediscovered by Lord Kelvin, being examined by Sturm, Liouville, Dirichlet, Riemann, Neumann, Maxwell, and others [8]. The Green's functions were born as auxiliary functions for solving boundary-value problems. The latter are differential equations with constraining boundary conditions, which specify the values that the solution or its normal derivative take on the boundary of the domain. Boundary-value problems arise in several problems in physics, for instance, heat conduction in solid bodies, described by the diffusion or heat conduction equation:

∂u/∂t = α ∇²u, (1)

Footnote 1: Today, Green's identities are a set of three vector equations relating the bulk with the boundary of a region on which differential operators act, closely related to Gauss' divergence and Stokes' curl theorems. Green's second identity allows the conversion of a triple integral of Laplacians within a volume into a double integral over its bounding surface,

∫_Ω (u ∇²v − v ∇²u) dV = ∮_S (u ∂v/∂n − v ∂u/∂n) dS,

where Ω is a volume bounded by the closed surface S, ∂u/∂n = n · ∇u, and n is the outward normal to the boundary S with unit length. This formula holds for regular functions u and v defined in Ω.
arXiv:1604.02499v2 [cond-mat.mes-hall] 29 Jul 2016

or the vibration of strings and membranes of special geometries, described by:

∂²ϕ/∂x² − (1/v²) ∂²ϕ/∂t² = 0,

the wave equation [9][10][11]. From the Green's functions, a whole theory of partial differential equations arose, paving the way for the development of functional analysis, the branch of mathematics dedicated to infinite-dimensional vector spaces and the operators acting on them. By the end of the XIX century many boundary-value problems had been approached in acoustics, hydrodynamics, thermodynamics and electromagnetism. Before we examine the development of the Green's functions in quantum mechanics, we shall review some of the general properties of a Green's function.

A. Classical Green's functions
Formally, a Green's function is a solution of a linear differential equation with a Dirac delta inhomogeneous source (sometimes referred to as a delta or unit pulse) and homogeneous boundary conditions. Let us clarify the emphasized concepts. A differential equation is said to be linear if the function f(x) and all its derivatives f⁽ⁿ⁾(x), n = 1, 2, ..., appear linearly: there is no product of the function and its derivatives, such as f(x)·f′(x), and no powers of the function or of its derivatives beyond the first power. For example, a linear ordinary differential equation reads

a₁(x) f(x) + a₂(x) f′(x) + · · · + aₙ(x) f⁽ⁿ⁾(x) = g(x), (2)

where the coefficients aₙ(x) are arbitrary differentiable functions. Linearity of the operator is essential for the validity of the superposition principle, which allows the linear combination of solutions.
If a differential equation has a term on the right-hand side (r.h.s.) that does not depend on the function f(x), we classify it as an inhomogeneous differential equation. For example, in Eq. (2), a linear homogeneous differential equation would have g(x) = 0, while an inhomogeneous one has a non-zero function g(x) on the r.h.s., or a non-zero constant c.
The differential equation we will be concerned with has a special inhomogeneity function, the Dirac delta δ(x − x′). Put simply, this object is defined to be zero when x ≠ x′, and infinite at x = x′:

δ(x − x′) = 0 for x ≠ x′,   δ(x − x′) → ∞ for x = x′. (3)

Rigorously, the Dirac delta is not a function, since a function would require a definite value for each point in its domain; it is instead classified as a distribution. Its most important property is the sifting property,

∫ f(x) δ(x − x′) dx = f(x′), (4)

where f(x) is any continuous function of x. For other interesting properties of the Dirac delta, please check Refs. [9,10,12].
Lastly, one often needs to impose boundary conditions on the solutions, meaning conditions on the function or on its derivative at the boundary of the domain. If their values are zero, we call them homogeneous boundary conditions. For example, for a function f(x) with boundary at x = L, homogeneous boundary conditions would correspond to f(x = L) = 0 or f′(x = L) = 0. Let us now return to the classical Green's functions. To put the mathematical problem in perspective, imagine one would like to solve a linear inhomogeneous partial differential equation, say

D f(x) = g(x), (5)

where D is a linear differential operator, f(x) is the desired solution, and g(x) the inhomogeneity source.
The particular solution f(x) can be formally found with the aid of a function G(x, x′):

f(x) = ∫ G(x, x′) g(x′) dx′, (6)

where the Green's function G(x, x′) is defined as the solution of a differential equation with a delta inhomogeneity:

D G(x, x′) = δ(x − x′). (7)

To verify this, act with D on both sides of Eq. (6) and make use of the Dirac delta fundamental property, Eq. (4). Note that D acts on the x coordinate, keeping x′ fixed.
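Since the numerical implementations in this paper use Julia, the construction above can be illustrated with a short sketch of our own (not part of the paper's code): we discretize D = −d²/dx² on a grid with homogeneous Dirichlet boundaries, build G column by column from discrete delta sources, and verify that Eq. (6) reproduces the solution of D f = g. The grid size and the source g = sin(πx) are illustrative choices.

```julia
using LinearAlgebra

# Discretize D = -d^2/dx^2 on (0,1) with homogeneous boundaries f(0) = f(1) = 0.
N  = 200                    # number of interior grid points (illustrative choice)
dx = 1.0 / (N + 1)
x  = collect(1:N) .* dx
D  = SymTridiagonal(fill(2.0 / dx^2, N), fill(-1.0 / dx^2, N - 1))

# Column j of G solves D G[:, j] = (discrete delta peaked at x_j): G = D^{-1} / dx.
G = inv(Matrix(D)) ./ dx

# Eq. (6) in discrete form: f(x_i) = sum_j G[i, j] g(x_j) dx.
g = sin.(pi .* x)           # a smooth test source
f = G * g .* dx             # should solve -f'' = g, i.e. f = sin(pi*x)/pi^2

err = maximum(abs.(f .- sin.(pi .* x) ./ pi^2))
```

The residual `err` is of the order of the O(dx²) discretization error, confirming that the delta-response columns of G assemble the particular solution by superposition.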
One can interpret Eq. (6) by considering the Green's function as a "building block" for finding the particular solution f(x), since it is the solution of the delta-impulse equation. In signal-processing fields, the Green's function is often referred to as a response function, connecting a perturbation or "input signal" g(x) to the "output" f(x).
Before turning to applications, we should remark that if one wishes to find the complete general solution f(x), the solution h(x) of the homogeneous equation D h(x) = 0 must be added to the particular solution, Eq. (6). The homogeneous solution is fixed by imposing the (in general inhomogeneous) boundary conditions [10].
It might be interesting to have an example of how Eq. (6) can work in practice. The equation relating the electric potential ϕ(r) to a given charge density distribution ρ(r) is Poisson's equation:

∇²ϕ(r) = −ρ(r)/ε₀. (8)

The most common boundary condition is requiring that ϕ(r) goes to zero at infinity. From Eq. (6), the potential ϕ(r) can be obtained with the help of a Green's function,

ϕ(r) = −(1/ε₀) ∫ G(r, r′) ρ(r′) d³r′, (9)

where G(r, r′) satisfies the inhomogeneous equation

∇²G(r, r′) = δ(r − r′). (10)

The solution to Eq. (10) can be identified physically. By associating the Dirac delta δ(r − r′) with a point charge at r′, we can find a corresponding potential. Considering a point charge q = 1 at r′, the electric potential is simply

ϕ(r) = 1/(4πε₀|r − r′|). (11)

Removing from (11) the prefactor −1/ε₀ of Eq. (9), the appropriate Green's function for the localized charge problem of Eq. (10) is

G(r, r′) = −1/(4π|r − r′|), (12)

which satisfies homogeneous boundary conditions, since G goes to zero in the limit |r − r′| → ∞. Substituting (12) into Eq. (9), we have that for an arbitrary charge density distribution, the solution of Poisson's equation is given by the following integral over space:

ϕ(r) = (1/4πε₀) ∫ ρ(r′)/|r − r′| d³r′, (13)

a well-known result in electrostatics [11]. Although this is a quite simple example, the Green's function technique as presented can be applied to other physical problems described by linear differential equations. During the late half of the XIX century, it became a central tool for solving boundary-value problems. Further examples in vibrations and diffusion phenomena, as well as other ways of constructing Green's functions, can be found in the references [7,9,10,13].
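The superposition integral of Eq. (13) can also be evaluated numerically. The Julia sketch below (our own illustration, not from the paper) discretizes a uniformly charged unit cube into small cells, sums the point-charge potentials of Eq. (11), and checks that far away the result approaches the potential of a single point charge Q. We set ε₀ = 1, an illustrative unit choice.

```julia
using LinearAlgebra

# Discretize a uniformly charged unit cube (density rho = 1) into small cells
# and evaluate Eq. (13) as a sum: phi(r) = (1/4*pi*eps0) * sum_j rho*dV / |r - r_j|.
eps0 = 1.0
h    = 0.1                                    # cell size (illustrative)
grid = -0.5 + h/2 : h : 0.5 - h/2
cells = [[x, y, z] for x in grid for y in grid for z in grid]
rho  = 1.0
Q    = rho * h^3 * length(cells)              # total charge of the cube

phi(r) = sum(rho * h^3 / (4pi * eps0 * norm(r - c)) for c in cells)

# Far from the cube, the distribution looks like a point charge at the origin.
r_far   = [10.0, 0.0, 0.0]
phi_num = phi(r_far)
phi_pt  = Q / (4pi * eps0 * norm(r_far))
```

The agreement between `phi_num` and `phi_pt` at large distance reflects the multipole expansion: the monopole (point-charge) term of Eq. (13) dominates.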

B. Quantum Green's functions
By the beginning of the 20th century, Green's functions were generalized to the theory of linear operators, in particular, they were applied to the class of Sturm-Liouville operators [14]. These are second-order linear differential equations that depend linearly on a parameter λ as an eigenvalue problem: Lϕ = λϕ. The study of the existence of eigenvalues λ, and of the complete set of eigenfunctions ϕ became known as Sturm-Liouville theory. From this set, the Green's functions could now be built as a Fourier-like, or spectral expansion. As a generalized technique, the Green's functions allowed conversion of a differential problem into integral operator problems [8].
With the emergence of quantum mechanics, functional analysis and the theory of linear operators gained new significance.
They are present at the very foundations of quantum mechanics, from Hilbert's vector space to Heisenberg's matrix formulation, and in Schrödinger's continuous wave mechanics (the one-dimensional Schrödinger equation is one example of a Sturm-Liouville problem).
Schrödinger's equation is a celebrated piece of the quantum puzzle that tormented early twentieth-century physicists. One should remark that the Schrödinger equation cannot be rigorously derived from any physical principle; it was postulated from Hamilton-Jacobi analogues [15] describing the propagation of a scalar field, the wave function, using a diffusion equation. As a historical note, Schrödinger even referenced the Green's function in a footnote of one of his 1926 papers [15], citing Cornelius Lanczos' work. Lanczos had tried to develop an integral representation of Born and Jordan's matrix equations [16], finding the Green's function along his formulation.
Shortly after Schrödinger's first papers, Max Born proposed a wave-mechanical model of atomic collisions [17,18], developing the probabilistic interpretation of this wave function. His study was based on the free-particle wave function, a plane wave. In order to find the new scattered wave function, Born built a perturbation expansion in the first power of the potential, starting from the free solution. This first order is known today as the first Born approximation, generalized in the Lippmann-Schwinger equation for scattering [19], presented in Quantum Mechanics courses [20]. In its time-dependent form, Schrödinger's equation reads

iℏ ∂Ψ(r, t)/∂t = H Ψ(r, t). (14)

Eq. (14) is the quantum nonrelativistic equivalent of Newton's second law dp/dt = F, governing instead the time evolution of the wave function Ψ(r, t). The right-hand side of Eq. (14) has the Hamiltonian operator, H = −(ℏ²/2m)∇² + V(r, t), whose expectation value is the total energy. The first term is the kinetic energy, rewritten using the momentum operator p = −iℏ∇, and the second, the external potential. In this formulation, the eigenstates of the Hamiltonian play an important role, since their time evolution is simple to calculate (i.e., they are stationary).
The time-dependent Schrödinger equation is a linear partial differential equation. It is also of first order in time, so an initial condition must be specified. Although it is a homogeneous equation, we can rearrange the terms as

(iℏ ∂/∂t + (ℏ²/2m)∇²) Ψ(r, t) = V(r, t) Ψ(r, t), (15)

in order to treat the potential term as a source of inhomogeneity. But note that it is not a true inhomogeneity, since the right-hand side also depends on the function Ψ(r, t). Ultimately, we will need a recursive or iterative procedure to find Ψ(r, t). This kind of self-consistent solution is achieved by the Lippmann-Schwinger equation for the wave function, or by a Dyson equation [19][20][21][22] for the Green's function.
The basic idea is to use the free solutions, those in the absence of an external potential, to solve the more general problem with an external potential. This is where Green's functions come into play in Eq. (15). Instead of solving Schrödinger's equation for wave functions, one can equivalently look for the Green's function that solves the inhomogeneous problem

(iℏ ∂/∂t + (ℏ²/2m)∇²) G(r, t; r′, t′) = δ(r − r′) δ(t − t′). (16)

From the theory of Green's functions we already know that an inhomogeneous solution, similar to Eq. (6), may be written as

Ψ(r, t) = ∫ G(r, t; r′, t′) Ψ(r′, t′) d³r′. (17)

Note the difference with respect to Eq. (6): here the solution itself enters the integral. The equation above describes the time evolution of the wave function from a given time and position (r′, t′), evolving it to another time and position (r, t). This is why the Green's function is known as the propagator.
In order to give a broader picture of the propagating character of the Green's function, let us rewrite the wave function in terms of the time evolution operator² U(t, t′) = e^(−(i/ℏ)H(t−t′)). For simplicity, we represent the wave functions as state vectors in the position representation³, Ψ(r, t) = ⟨r|Ψ(t)⟩. Writing Ψ(t) as the evolution from Ψ(t′), and using the closure relation ∫ |r′⟩⟨r′| d³r′ = 1,

Ψ(r, t) = ∫ ⟨r|U(t, t′)|r′⟩ ⟨r′|Ψ(t′)⟩ d³r′, (18)

which reproduces Eq. (17) if we define

G(r, t; r′, t′) = ⟨r|U(t, t′)|r′⟩ = ⟨r, t|r′, t′⟩ (19)

(this is not yet our final definition; we develop it here only for pedagogical purposes), where ⟨r, t| = ⟨r|e^(−(i/ℏ)Ht) and |r′, t′⟩ = e^((i/ℏ)Ht′)|r′⟩. Thus, we have associated the Green's function with the probability amplitude of finding the particle in the state ⟨r, t| given that it started at |r′, t′⟩. It is interesting to note that Paul Dirac [23], while attempting to develop a Lagrangian or path-integral formulation of quantum mechanics in the 1930's, found the propagator as the overlap of two wave functions at different positions and times. The eigenstates of the Hamiltonian form a complete set, which we cast as |n⟩ in vector notation. Inserting again a completeness relation, Σₙ |n⟩⟨n| = 1, into Eq. (19),

G(r, t; r′, t′) = Σₙ ⟨r|n⟩⟨n|r′⟩ e^(−(i/ℏ)εₙ(t−t′)) = Σₙ ϕₙ(r) ϕₙ*(r′) e^(−(i/ℏ)εₙ(t−t′)),

since H acts on its eigenstate |n⟩ as H|n⟩ = εₙ|n⟩, and the projection ⟨r|n⟩ is the eigenfunction ϕₙ(r); in the energy domain this propagator has poles at the eigenenergies. Please note that so far we have inspected the quantum Green's functions as propagators, but we have not constrained the particle to propagate in a certain direction of time, which will be performed shortly⁴. In the 1950's and 60's the quantum Green's functions were introduced as propagators in quantum field theory by Feynman and Schwinger. Feynman [24,25] transformed Dirac's observations on the quantum propagators into a more rigorous formalism. He developed the path-integral formalism, interpreting Eq. (17) as the sum of the probability amplitudes for the particle to take different individual paths. In addition, Feynman invented a graphical form of representing the terms of a perturbation expansion of a scattering formalism, the Feynman diagrams.

Footnote 2: For a time-independent Hamiltonian and in the Schrödinger picture, the solution to Eq. (14) is Ψ(r, t) = e^(−(i/ℏ)H(t−t₀)) Ψ(r, t₀). Here we see the time-evolution operator U(t, t′) = e^(−(i/ℏ)H(t−t′)) that evolves the wave function Ψ(r, t′) to Ψ(r, t). It has important properties such as unitarity, U†U = 1, which preserves the norm of the wave function.

Footnote 3: In the position representation (and Dirac notation), the bra ⟨r| is associated with the spatial function basis.
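The equality between the propagator ⟨r|U(t, t′)|r′⟩ and its eigenstate expansion can be checked numerically. The Julia sketch below (our own illustration, with ℏ = 1 and a small, arbitrary Hermitian matrix standing in for H on a discrete "position" basis) builds U by matrix exponentiation and compares it with the sum over eigenstates.

```julia
using LinearAlgebra

# Propagator G = <r|e^{-i H (t-t')}|r'> versus its eigenstate expansion
# sum_n phi_n(r) phi_n*(r') e^{-i eps_n (t-t')}, with hbar = 1.
H = Hermitian([1.0 0.3 0.0;
               0.3 0.5 0.2;
               0.0 0.2 -0.4])          # illustrative 3x3 "Hamiltonian"
t = 1.7
U = exp(-im * t * Matrix(H))           # time-evolution operator U(t, 0)

eps, phi = eigen(H)                    # eigenenergies and eigenvectors (columns)
Uspec = sum(exp(-im * eps[n] * t) * phi[:, n] * phi[:, n]' for n in 1:length(eps))

dev  = maximum(abs.(U .- Uspec))       # the two constructions agree
unit = maximum(abs.(U' * U - I))       # unitarity check: U'U = 1
```

The matrix elements U[i, j] play the role of G(rᵢ, t; rⱼ, 0): each is an overlap amplitude built from the stationary states, exactly as in the expansion above.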
At this point we need to switch from so-called first quantization, Schrödinger's wave mechanics, to quantum fields, using the technique of "second quantization". Stated very briefly, the Schrödinger equation describes the undulatory behavior of matter, such as electrons, by means of wave functions [26]. But other wave phenomena were also shown to behave as particles, e.g., phonons (quantized lattice vibrations) or photons (excitations of the electromagnetic field). The second-quantization language treats particles and waves as a quantum field. It has several advantages over first quantization, being more adequate for many-particle physics.
To clarify the definition of a field propagator, let us consider a thought experiment. Imagine that a particle is created in the ground state of an interacting system⁵. That particle probes the system, which has its own complex interactions, possibly even causing excitations, but at the end it is annihilated and the system returns to the ground state. In quantum field theory one does not deal with wave functions, but instead with one special state, the vacuum |0⟩, and the creation (c†ᵢ) and annihilation (cᵢ) operators, where we already assume fermionic fields. The so-called occupation-number representation specifies the number of identical particles in each quantum state.
Feynman introduced a new quantum field propagator. He accounted for the propagation of virtual particles and antiparticles, which propagate forward and backward in space-time, inserting Wick's time-ordering symbol T, which guarantees causal time orderings (we will detail the properties of this operator in the following section). The Feynman propagator definition then reads [26]:

iG(r, t; r′, t′) = ⟨T ψ(r, t) ψ†(r′, t′)⟩, (23)

where the expectation value is evaluated over the interacting ground state of the system (later we will generalize to a quantum ensemble of states). The propagator consists of two parts. In the first, a particle is created by ψ†(r′, t′) at position r′ and time t′ < t, and later it is destroyed at position r and time t. In the second part, an antiparticle is created at r at time t < t′ and propagates to the position r′, where it is annihilated at time t′. At this point, we have almost arrived at the many-particle Green's function definition that we will adopt. The differences are that expectation values can be evaluated in the ground state or in an ensemble, and that we insert a −i factor in our Green's function,

G(r, t; r′, t′) = −i⟨T ψ(r, t) ψ†(r′, t′)⟩, (24)

in order to avoid the imaginary factor that appears in Eq. (23). Julian Schwinger realized the power of Green's functions in quantum field theory. In his very interesting lecture "The Greening of Quantum Field Theory - George and I" [27], Schwinger reviewed Green's idea and how it came to the post-war developments of quantum field theory, finally reaching condensed-matter physicists. One can find several seminal works on Green's functions in the condensed-matter literature. To give some examples, Martin and Schwinger applied quantum field theory to many-particle physics [28], introducing the "lesser" Green's functions to evaluate particle currents and spectral amplitudes, and exploiting the equation-of-motion technique with approximate two-particle Green's functions. Kadanoff and Baym developed the thermodynamic many-particle Green's function using a grand-canonical ensemble average, with periodic boundary conditions along an imaginary time axis [29,30], presenting conserving approximations and their diagrammatics. Within perturbation theory, the Green's functions can be expanded in series and acquire a recursive form, known as Dyson's equations [21,22]. Due to its versatility, the Green's function method is quite popular in many-particle physics. It has also been generalized to particle scattering, far-from-equilibrium physics, finite temperatures, statistical mechanics, and other fields. These propagators are naturally correlation functions, connecting different positions and times, e.g. G(r₁, t₁; r₂, t₂).

Footnote 5: Theoreticians often make this distinction between interacting and noninteracting systems. This means that the potential V in the Hamiltonian is either present (e.g. due to particle scattering) or not, in which case we return to the simple free-particle system (V = 0). In practice, the solvable system will be a building block for the more complex ones.
Nevertheless, due to the arid formalism presented in most textbooks, the method still scares young students. In view of this, here we aim to provide a pedagogical introduction to the Green's functions with practical examples. We will focus on an introductory level of noninteracting condensed-matter models, i.e., without electron-electron Coulomb interaction. We will apply the Green's functions to quantum equilibrium properties of atomic lattices, described by Hamiltonians in a localized ("tight-binding") basis or in an occupation (Fock) basis, as usually formulated in many-particle physics. The fundamentals and definitions can be found, for instance, in Refs. [21], [31][32][33]. For fermions, the operator ordering is of utmost importance and their algebra should be revised; here we will only add some remarks throughout the text.

C. Electron Green's function
We will start with formal definitions of the electron Green's function, our object of study. The single-particle electron Green's function is defined as the statistical expectation value of a product of fermion operators at different positions i and j and different times t and t′. For instance, the so-called "causal" Green's function reads

G^c_ij(t, t′) = −i⟨T c_i(t) c†_j(t′)⟩, (25)

where c†_j creates an electron at the j-th site at time t′ and c_i annihilates an electron at the i-th site at time t. We have already introduced this causal Green's function in Eq. (24). The differences are the imaginary factor −i, which Mattuck describes as "decorative" [34], and the fermionic creation and annihilation operators, expressed in a discrete basis. In this paper we consider atomic units in which we set ℏ = e = m = 1, such that the usual prefactor −i/ℏ is simplified. In Eq. (25) we have the time-ordering operator, which guarantees causal orderings,

T c_i(t) c†_j(t′) = θ(t − t′) c_i(t) c†_j(t′) − θ(t′ − t) c†_j(t′) c_i(t), (26)

due to the properties of the Heaviside function θ(x)⁶. Please verify that in each term of Eq. (26), the fermionic operator that appears on the left always acts at a time later than the one on the right. This rule is known as "later to the left".
Since we are dealing with electron operators, we should recall that they satisfy the anticommutation relations

{c_i, c†_j} = δ_ij,   {c_i, c_j} = {c†_i, c†_j} = 0, (27)

where the anticommutator is defined as {A, B} = AB + BA, and the Kronecker delta δ_ij assumes the values 1 for i = j and 0 for i ≠ j. Besides the causal Green's function defined above, we introduce two other Green's functions from which many important physical quantities are more easily extracted. For times t > t′ and t < t′, respectively, the retarded and advanced Green's functions are defined as

G^r_ij(t, t′) = −iθ(t − t′)⟨{c_i(t), c†_j(t′)}⟩, (28)

G^a_ij(t, t′) = +iθ(t′ − t)⟨{c_i(t), c†_j(t′)}⟩, (29)

where G^r is non-zero only for t > t′, such that we can calculate the response of the system after it has been perturbed. This is why it is called the retarded Green's function.
The advanced Green's function is defined as the adjoint of the retarded Green's function, [G^r]† = G^a. This means that, having determined one of them, we can immediately calculate the other.
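The adjoint relation also holds in the energy domain (introduced below), where G^(r/a)(ω) = [(ω ± iη)1 − H]⁻¹ for a noninteracting Hamiltonian H. The following Julia sketch (our own check, with an illustrative 3-site Hamiltonian and arbitrary ω and η) verifies G^a = [G^r]† numerically.

```julia
using LinearAlgebra

# Retarded and advanced Green's functions of a small hermitian Hamiltonian,
# G^{r/a}(w) = [(w ± i*eta) I - H]^{-1}, illustrating G^a = (G^r)†.
H   = [0.0 1.0 0.0;
       1.0 0.0 1.0;
       0.0 1.0 0.0]            # 3-site toy chain (illustrative)
w   = 0.4
eta = 1e-3

Gr = inv((w + im * eta) * I - H)
Ga = inv((w - im * eta) * I - H)

dev = maximum(abs.(Ga .- Gr'))  # vanishes for hermitian H and real w
```

The identity follows from taking the adjoint of (ω + iη)1 − H, which flips the sign of iη when H is hermitian and ω real, so computing one of the two functions indeed suffices.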
It is important to note that the Green's functions carry information about the system's excitations, since their time evolution is ruled by the Hamiltonian of the system. In the Heisenberg picture, operators evolve in time via the Heisenberg equation, in which the Hamiltonian is present. For an arbitrary operator Ô, it reads

dÔ(t)/dt = i[H, Ô](t) + ∂Ô(t)/∂t, (30)

where the last term accounts for a possible explicit time dependence of the operator.

D. Spectral representation
So far we have presented the Green's function in the time domain. But very often it is convenient to represent it in the energy domain, for example when our system is at equilibrium or when the Hamiltonian is time-independent⁷. For such cases the Green's function will depend only on the time difference t − t′ and we can perform a Fourier transform. To illustrate this, let us first consider the spectral representation in the special case of a free-particle Hamiltonian, which can be written as

H = Σ_m ε_m c†_m c_m, (31)

where c†_m (c_m) creates (annihilates) an electron in the m-th single-particle eigenstate of the system with energy ε_m.

In the Heisenberg picture, using Eq. (30), the equations of motion of our operators are

dc_n(t)/dt = i[H, c_n](t) = −iε_n c_n(t), (32)

dc†_n(t)/dt = i[H, c†_n](t) = +iε_n c†_n(t). (33)

Therefore, the creation and the annihilation operators evolve as c_n(t) = e^(−iε_n t) c_n and c†_n(t) = e^(iε_n t) c†_n. From these expressions, the retarded and advanced Green's functions of Eqs. (28) and (29) for the free-particle case are simple functions of the time difference t − t′:

G^r_nn′(t − t′) = −iθ(t − t′) e^(−iε_n(t−t′)) δ_nn′, (34)

G^a_nn′(t − t′) = +iθ(t′ − t) e^(−iε_n(t−t′)) δ_nn′, (35)

where we have used {c_n, c†_n′} = δ_nn′. Note that the Green's function is diagonal in the energy basis, which does not happen in the general interacting case, where the time evolution of the single-particle operator involves different states. Here we assumed that the particle is in an eigenstate of a noninteracting Hamiltonian.

To write the spectral representations of (34) and (35), let us consider the integral representation of the Heaviside step function:

θ(t − t′) = −(1/2πi) ∫_{−∞}^{+∞} dω e^(−iω(t−t′))/(ω + iη), (36)

where η → 0⁺ is a positive infinitesimal real number. Inserting this expression in (34), we obtain

G^r_nn(t − t′) = (1/2π) ∫_{−∞}^{+∞} dω e^(−i(ω+ε_n)(t−t′))/(ω + iη). (37)

By performing the change of variables ω + ε_n → ω, we have

G^r_nn(t − t′) = (1/2π) ∫_{−∞}^{+∞} dω e^(−iω(t−t′))/(ω − ε_n + iη). (38)

Footnote 6: The Heaviside step function θ(x) is defined as θ(x) = 1 for x > 0 and θ(x) = 0 for x < 0. It has a jump discontinuity at x = 0, for which the value usually taken is 1/2. The derivative of θ(x) is the Dirac delta δ(x).

Footnote 7: If there is time-translational symmetry, it is possible to describe the system via the time difference t − t′ and perform a Fourier transform to represent the Green's function in the energy domain. Similarly, in the presence of spatial translational symmetry, the representation in momentum space is also convenient.
Since G^r_nn(t − t′) is the Fourier transform⁸ of G^r_nn(ω), we can identify the latter in the integrand of Eq. (38),

G^r_nn(ω) = 1/(ω − ε_n + iη). (39)

Analogously, we obtain for the noninteracting advanced Green's function

G^a_nn(ω) = 1/(ω − ε_n − iη). (40)

The Fourier transforms of the retarded and advanced Green's functions have different analyticity properties. This is a consequence of causality, expressed in the step functions of Eqs. (28) and (29). The retarded (advanced) Green's function is analytic in the upper (lower) half of the complex ω plane and has poles in the lower (upper) half plane, corresponding to the eigenenergies in this simplified example, and to single-particle excitations in the more general case.
Converting to a site basis, G_ij = Σ_n ⟨i|n⟩⟨n|j⟩ G_nn, we obtain

G^(r/a)_ij(ω) = Σ_n ⟨i|n⟩⟨n|j⟩/(ω − ε_n ± iη).

There are many physical properties hidden in the Green's function. At this point we can extract at least two important properties of the retarded and advanced Green's functions:

1. For the noninteracting Hamiltonian, the poles of the Green's function correspond exactly to the eigenenergies. This can be immediately noticed since ε_n was assumed to be an eigenenergy of the free-particle system, governing the time evolution of the creation and annihilation operators. This property refers only to the simplified case of a noninteracting Hamiltonian.
2. The imaginary part⁹ of the diagonal (j = i) retarded or advanced Green's function provides the local density of states of the system:

ρ_i(ω) = ∓(1/π) Im G^(r/a)_ii(ω) = Σ_n |⟨i|n⟩|² δ(ω − ε_n). (44)

Here we used the Cauchy relation¹⁰.

To generalize property 1, let us consider the expansion of the operators in the complete basis of a generic Hamiltonian. It is possible to show that the poles of the retarded/advanced Green's function contain information about the spectrum of the single-particle excitations (i.e., a single electron excitation) of the system. To show this, let H be the Hamiltonian of the interacting many-body system. The Schrödinger equation is H|n⟩ = ε_n|n⟩, where |n⟩ and ε_n are the many-body eigenstates and eigenenergies, respectively. Note that the set {|n⟩} forms a complete basis, with closure relation Σ_n |n⟩⟨n| = 1.
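Property 2 is easy to verify numerically. The Julia sketch below (our own illustration, with a 4-site uniform chain and illustrative broadening η) computes ρ_i(ω) = −(1/π) Im G^r_ii(ω) from a direct matrix inversion; with finite η, each delta peak becomes a Lorentzian of width η, and each local density of states still integrates to one orbital per site.

```julia
using LinearAlgebra

# Local density of states rho_i(w) = -(1/pi) Im G^r_ii(w) for a 4-site chain
# (site energies 0, nearest-neighbor hopping t). Finite eta broadens the
# delta peaks into Lorentzians of width eta.
N   = 4
t   = 1.0
H   = Matrix(SymTridiagonal(zeros(N), fill(t, N - 1)))
eta = 0.05

Gr(w)     = inv((w + im * eta) * I - H)
rho(i, w) = -imag(Gr(w)[i, i]) / pi

# One orbital per site: rho_i integrates to one over all energies.
dw  = 0.01
ws  = -6:dw:6
I1  = sum(rho(1, w) for w in ws) * dw    # ≈ 1, up to Lorentzian tails
```

Plotting `rho(1, w)` over `ws` would show peaks at the chain eigenenergies, which for this model lie inside the band |ω| < 2|t|.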
Within the Heisenberg picture, a given operator A evolves from t′ to t as

A(t) = e^(iH(t−t′)) A e^(−iH(t−t′)). (45)

If H is time-independent, the evolution depends only on the difference t − t′, and the Green's function (28) becomes a function of t − t′ alone. In the brackets of Eq. (28) we now perform the quantum statistical average

⟨· · ·⟩ = (1/Z) Σ_n e^(−βε_n) ⟨n| · · · |n⟩,

where Z = Σ_n e^(−βε_n) is the partition function and β is proportional to the inverse of the temperature. For the diagonal Green's function, j = i, we obtain

G^r_ii(t − t′) = −iθ(t − t′) (1/Z) Σ_n e^(−βε_n) (⟨n|c_i(t) c†_i(t′)|n⟩ + ⟨n|c†_i(t′) c_i(t)|n⟩).

We can now set t′ = 0 and take the Fourier transform, as we did for the noninteracting case:

G^r_ii(ω) = (1/Z) Σ_{n,m} (e^(−βε_n) + e^(−βε_m)) |⟨m|c†_i|n⟩|² / (ω + ε_n − ε_m + iη). (48)

This expression is known as the Lehmann or spectral representation of the Green's functions [21]. Following property number 2 of the retarded/advanced Green's functions shown above, from the diagonal Green's function we can calculate the local density of states:

ρ_i(ω) = (1/Z) Σ_{n,m} (e^(−βε_n) + e^(−βε_m)) |⟨m|c†_i|n⟩|² δ(ω + ε_n − ε_m). (50)

It is possible to show that Eq. (44) is recovered when considering a noninteracting Hamiltonian. In this case the Hamiltonian is separable, and the many-particle eigenstates are an antisymmetrized product of single-particle states. The expectation value in (50) will connect states m that have one additional electron at site i compared to state n, thus E_m = E_n + ε_i, where ε_i is the energy of an additional bare electron at site i. Careful manipulation of (50) and of the partition function Z results in a local density of states independent of the temperature, with poles at the single-particle energies ε_i.

Footnote 8: Here we define the Fourier transform of the retarded Green's function as G^r(t − t′) = (1/2π) ∫ dω e^(−iω(t−t′)) G^r(ω).

Footnote 9: One should keep in mind that the imaginary part of a matrix A is Im(A) = (A − A†)/(2i). We thank K. Pototzky for this remark.

Footnote 10: Limits of the improper integrals of G^(r/a), e.g. Eq. (38), with poles in different half-planes, can be obtained via the Cauchy principal-value relation, lim_{η→0⁺} 1/(x ± iη) = P(1/x) ∓ iπδ(x).
Among the many interesting properties of the interacting Green's function (48), we can also emphasize that: 1. The poles of the interacting Green's function are located exactly at the many-body excitation energies ε_m − ε_n of the system; 2. In contrast with the noninteracting case, both the Green's function (48) and the local density of states depend on the temperature. This is characteristic of interacting systems.
Although we have presented a more robust formalism, in the examples treated in this article, we will deal only with noninteracting Hamiltonians, neglecting Coulomb interactions, and our local density of states will map the spectra of each Hamiltonian.

II. THE EQUATION OF MOTION TECHNIQUE
One way of obtaining the Green's function is to determine its time evolution via the equation-of-motion (EOM) technique. Using the Heaviside function θ(t − t′) and the Heisenberg equation of motion for the operator c_i(t), we differentiate the retarded Green's function (28) with respect to time:

i (d/dt) G^r_ij(t, t′) = δ(t − t′) δ_ij − iθ(t − t′)⟨{[c_i, H](t), c†_j(t′)}⟩. (51)

On the right-hand side (r.h.s.) of Eq. (51), there is one propagator that still needs to be determined, which depends on the commutator of the operator c_i with the Hamiltonian. We first note that this result is not restricted to G^r but is general: the equation of motion couples the original Green's function to a new one. In addition, its dependence on the Hamiltonian will determine the dynamics.
From now on, we shall use the spectral representation of the Green's functions more frequently. Therefore, we present a simplified notation for the retarded Green's function in the energy domain, adapted from Zubarev [35],

G^r_ij(ω) = ⟨⟨c_i; c†_j⟩⟩_ω. (52)

Performing the Fourier transform defined in Eq. (40) on Eq. (51), we obtain a factor of ω on the left coming from the time derivative. Since the Fourier transform of the δ-function is unity,¹¹ the spectral representation of the equation of motion (51) acquires the form

ω⟨⟨c_i; c†_j⟩⟩_ω = δ_ij + ⟨⟨[c_i, H]; c†_j⟩⟩_ω. (54)

We stress that the presence of the commutator on the r.h.s. of Eqs. (51) and (54) tells us that the dynamics of the Green's function is fully determined by the Hamiltonian of the system.

A. Simple example: the non-interacting linear chain
Let us consider a linear chain described by the noninteracting Hamiltonian containing a single orbital (energy) per site and a kinetic term that connects all nearest-neighbor sites via a hopping parameter t,

H = Σ_l ε_l c†_l c_l + Σ_l t_{l+1,l} c†_l c_{l+1} + Σ_l t_{l,l+1} c†_{l+1} c_l. (55)

The first sum in Eq. (55) corresponds to a local external potential that is diagonal in the basis of sites. The second term corresponds to the kinetic energy, describing the destruction of a particle at site l + 1 and the creation of another particle at site l with probability amplitude t_{l+1,l}. The third term describes the reverse process. The Hamiltonian is hermitian, as it represents an observable, namely the total energy of the system. To assure hermiticity, t*_{l,l+1} = t_{l+1,l}. To calculate the commutator [c_i, H] we simply use the anticommutation rules¹² listed in Sec. I D, from which we obtain

[c_i, H] = ε_i c_i + t_{i+1,i} c_{i+1} + t_{i−1,i} c_{i−1}. (56)

We now introduce this commutator into the equations of motion (EOMs) (51) or (54). In the energy domain¹³, see Eq. (54), we have

(ω − ε_i) G^r_ij(ω) = δ_ij + t_{i+1,i} G^r_{i+1,j}(ω) + t_{i−1,i} G^r_{i−1,j}(ω), (57)

where the propagator G^r_ij(ω) couples to other propagators through first-neighbor hopping. In this work we will consider only Hamiltonians that couple nearest neighbors, in different geometries. As the reader becomes familiar with the technique, its operation and usage will become clearer.
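The nearest-neighbor EOM can be verified directly against a brute-force matrix inversion. The Julia sketch below (our own check, with illustrative parameters: a uniform open chain with real hopping t and local energy ε₀) computes G^r(ω) = [(ω + iη)1 − H]⁻¹ and tests the relation (ω − ε_i) G^r_ij = δ_ij + t G^r_{i+1,j} + t G^r_{i−1,j} for an interior site.

```julia
using LinearAlgebra

# Numerical check of the chain equation of motion for a uniform open chain:
# (w - eps_i) G^r_ij = delta_ij + t G^r_{i+1,j} + t G^r_{i-1,j}.
N, eps0, t, eta = 6, 0.2, 1.0, 1e-6      # illustrative parameters
H = Matrix(SymTridiagonal(fill(eps0, N), fill(t, N - 1)))
w = 0.7
G = inv((w + im * eta) * I - H)          # retarded Green's function matrix

i, j = 3, 2                              # interior site i, arbitrary j
lhs = (w + im * eta - eps0) * G[i, j]
rhs = (i == j ? 1 : 0) + t * G[i + 1, j] + t * G[i - 1, j]
dev = abs(lhs - rhs)
```

The identity is just row i of ((ω + iη)1 − H) G = 1; at the chain ends the missing-neighbor terms simply drop out, which is how semi-infinite and surface problems are handled later.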
It is important to emphasize that the local potential and the kinetic energy are single-particle operators and do not produce many-particle Green's functions. In a more general case, where the Hamiltonian has two-particle operators, i.e., products of four operators, it will generate multi-particle Green's functions. The resulting system of coupled Green's functions is, a priori, infinite, but for practical purposes it is truncated at some level. Despite their importance in condensed matter physics, many-particle Hamiltonians are outside the scope of this work, but can be found elsewhere, e.g. Refs. [21] and [36] and references therein. In the examples treated here the Hamiltonians are non-interacting, and we can find exact solutions (at least numerically) for the Green's functions.

B. Two-site molecule

The simplest finite lattice has only two sites, see Fig. 1(a). Before deriving an exact expression for the Green's functions of this system, let us review its relevance in quantum chemistry as a prototype of the molecular bond between two hydrogen nuclei. In this model, each atom has an s-type orbital localized around its H nucleus with energy ε0, shown in Fig. 1(b). The proximity of the two atoms allows the hybridization of their individual orbitals, with overlap matrix element (hopping) t. This coupled system has two solutions: two molecular orbitals with even and odd symmetry with respect to spatial inversion, 14 known as bonding and anti-bonding states. They have energies ε0 ∓ |t|, illustrated in the energy diagram of Fig. 1(c). 14 We note that we fully neglect spin-orbit contributions in the Hamiltonian. Thus, in this problem, spatial degrees of freedom are decoupled from spin, since neither the kinetic energy nor the local potential couples to the spin of the particles. For the present case, with N = 2, the Hamiltonian (55) reads as Eq. (60), where we define the local energy ε0, the number operator n_i = c†_i c_i, and the hopping matrix element t21 = t.
In this problem we can distinguish the Hamiltonian of the two isolated sites, h_0, from a perturbation (inter-site coupling) V. This perturbative perspective allows us to write a Dyson equation for the Green's function of the system, as we develop below. In the basis of local orbitals {1, 2}, the Hamiltonian (60) is represented by a 2×2 matrix with ε0 on the diagonal and the hoppings t and t* off the diagonal, Eq. (61). The energies of the molecular orbitals, ε0 ∓ |t|, are easily obtained by diagonalizing this matrix.
Returning to the explicit calculation of the Green's functions, we write the EOM for the local Green's function of the first site, G^r_11, first in the time domain and then in the energy domain. From these equations we see that it is useful to introduce the undressed local Green's function of the isolated sites (obtained by setting t = 0 in the equations above), g^r(ω) = (ω − ε0 + iη)^{−1}, where the lowercase g refers to the Green's function of an isolated site. This function, which we name the undressed Green's function, is diagonal in the isolated-site basis, as is the unperturbed Hamiltonian. For the hydrogen molecule [Fig. 1(a)], the dressed Green's function exhibits non-diagonal terms due to the couplings. In matrix form, the undressed and dressed Green's functions are given by Eqs. (66) and (67), where, by inversion symmetry around the center of mass of the molecule, we can write G^r_22(ω) = G^r_11(ω). In terms of the undressed Green's function (66), we obtain the coupled system of equations (68) and (69). These linear equations can be rewritten more compactly in matrix notation, i.e., in terms of Eq. (67), where the coupling potential V was defined in Eq. (61). In this form, the dressed Green's function G^r is obtained by matrix inversion. To find the explicit expression for the local site Green's function we eliminate the non-diagonal propagator by replacing Eq. (69) into Eq. (68) or, equivalently, (65) into (64), leading to Eq. (72). In the last term of (72), g^r(ω) can contribute a real and an imaginary part to the denominator. This means there can be a shift of the position of the resonance energy ε0 and a broadening of the corresponding peak. Since g^r(ω) is the function of an isolated site, its imaginary part is just a δ-like function, resulting in no effective broadening. In Fig. 2 we plot the density of states, which is proportional to Im[G^r_11] via Eq. (44). The broadening of the peaks was artificially increased, with η = 0.01, for visualization.
Thus, the final effect on site 1 of the tunneling between the two sites is a change of the local energy ε0 to ε0 ± |t|. More generally, the coupling of a site to another structure causes a shift of the resonance to a new energy ε̃0 and a broadening Γ, i.e., G11(ω) = (ω − ε̃0 + iΓ)^{−1}. In addition, Eq. (72) can be rewritten as a sum of partial fractions, in which we identify the two eigenvalues of the molecule, shown in Fig. 1(c). As discussed in Sec. I D, the poles of the non-interacting Green's function correspond exactly to the eigenenergies, and the imaginary part leads to the density of states, shown in Fig. 2. It is important to mention that, within the perturbative approach, the Green's function of the system can be obtained from a recursive relation called the Dyson equation, Eq. (74), where G and g are the dressed and undressed (or bare) Green's functions. In writing (74) we assumed that our problem allows a perturbative approach and that we can encapsulate the irreducible diagrams due to many-particle interactions in an operator called the self-energy Σ(ω). The self-energy is an energy-dependent operator that accounts for the effects of self-consistent interactions: the dynamic, i.e., energy-dependent, renormalization of the single-particle states. This renormalization changes the position of the level and its width. The broadening is frequently related to the inverse of the lifetime of the dressed particle, the quasiparticle. For interacting problems and more complex structures, the determination of a consistent self-energy is a challenging problem [31,38]. In our example, see Eq. (70), V has a simple structure and the coupling t is a constant, so interactions and additional complications in the Hamiltonian are not yet present.
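The algebra above is short enough to check numerically. The sketch below (Python, with illustrative values ε0 = 0, t = 1, η = 10⁻³; the paper's own codes are in Julia) builds the dressed G11 of Eq. (72) on an energy grid and locates the two resonances at ε0 ∓ |t|:

```python
import numpy as np

# Two-site (H2-like) molecule: dressed G11 via the undressed g of
# Eq. (66), as in Eq. (72). Parameter values are illustrative.
eps0, t, eta = 0.0, 1.0, 1e-3
omega = np.linspace(-3.0, 3.0, 6001)
z = omega - eps0 + 1j * eta

g = 1.0 / z                         # undressed Green's function, Eq. (66)
G11 = 1.0 / (z - abs(t) ** 2 * g)   # dressed G11: level splits to eps0 -+ |t|

rho = -G11.imag / np.pi             # local density of states
peaks = np.sort(omega[np.argsort(rho)[-2:]])   # the two resonances
```

The same G11 follows from directly inverting the 2×2 resolvent (ω + iη − H)⁻¹, which is a useful sanity check for all the closed forms in this section.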
In the next examples we will practice the equations of motion analytically and later numerically, for extended linear lattices.

C. Semi-infinite linear chain
An interesting example that admits a closed analytical solution of the equations of motion is the semi-infinite linear chain, shown in Fig. 3. This extended lattice can be considered a simple model of a crystalline solid or of a semi-infinite electrode in a junction. Note that the infinite number of sites prohibits direct diagonalization of the Hamiltonian or of the resolvent operator, and the application of Eq. (59) leads to an infinite hierarchy of propagators, with an infinite continued-fraction structure. Already in the early days of computational physics, recursive techniques in tight-binding lattices were recognized as an efficient tool for the study of solids [39]. For instance, the workhorse of quantum transport, the "surface Green's function" method approached in Sec. III A, plays an essential role in the simulation of dynamic properties of materials.
The decimation technique is a very useful tool for the recursive procedure. Basically, it is a strategy to approximate the solution of an infinite system starting from a finite one. The technique relies on finding a change of variables that brings the coupled equations of motion into the same form as a known result. For instance, suppose we could add many sites to the hydrogen molecule, always renormalizing the Green's functions so as to recover an effective site 2. One would then have an effective hydrogen-like molecule, as illustrated in Fig. 4 (note that the isolated sites are not identical). Here we assume that a large number of sites has already been encapsulated into this effective site 2. In the asymptotic limit, this effective site gives the same answer as a semi-infinite lattice.
Let us then consider the effective two-site model, where one undressed surface site is coupled to an effective one. We have already developed the equations of motion of the two-site system, Eqs. (68) and (69). For simplicity we will drop the frequency dependence and the retarded index in our notation. Figure 4. Effective hydrogen molecule used to evaluate the Green's function of the semi-infinite chain. The equations of the effective two-site chain read as Eqs. (75) and (76), where g1 and G̃2 are the undressed and the dressed effective Green's functions.
In the limit of an infinite number of sites in the effective site 2, the effective propagator G̃2 itself describes the semi-infinite chain, i.e., G̃2 = G11. With this observation, we solve the system of Eqs. (75) and (76), finding a second-order equation for G11, Eq. (77). The two solutions of Eq. (77) are given by Eq. (78) or, replacing the undressed function, Eq. (66), by Eq. (79). We can determine the physical solution by examining the analyticity properties of the Green's function [39]. In the asymptotic limit |ω| → ∞ we must have a vanishing solution, therefore we choose the solution with the minus sign in front of the square root. One can verify that G11 decays as 1/ω in the asymptotic limit. Since the real and imaginary parts of the Green's function are related by a Hilbert transform 15, 15 The Hilbert transform is an improper integral, defined by the principal value. For a function analytic in the upper half-plane, the Hilbert transform describes the relationship between the real and imaginary parts of its boundary values: these functions are conjugate pairs. Given a real-valued function f(x), the Hilbert transform finds the imaginary part, a companion function g(x), such that F = f(x) + ig(x) can be analytically extended to the upper half of the complex plane.
this decay assures a bounded density of states [37]. Note that, by factoring −1 out of the square root in (79), we obtain the imaginary contribution, which is non-zero only in the region |ω − ε0| < 2|t|, i.e., within the bandwidth. This gives the density of states of the edge, or "surface", site, ρ11(ω) = √(4|t|² − (ω − ε0)²)/(2π|t|²), which forms a semicircle, as illustrated in Fig. 5. In this graph we plot −|t| Im[G^r_11] to scale with the real part.
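A quick numerical check of the closed form (a sketch; variable names and parameter values are ours, and the paper's own codes are in Julia). Writing √(z² − 4|t|²) = √(z − 2|t|)·√(z + 2|t|) with Im z = η > 0 picks the physical (retarded) branch automatically, since both principal square roots then live in the upper half-plane:

```python
import numpy as np

# Retarded surface Green's function of the semi-infinite chain,
# Eq. (79) with the decaying branch. Illustrative parameters.
eps0, t, eta = 0.0, 1.0, 1e-6
omega = np.linspace(-3.0, 3.0, 1201)
z = omega - eps0 + 1j * eta

s = np.sqrt(z - 2 * abs(t)) * np.sqrt(z + 2 * abs(t))
G11 = (z - s) / (2 * abs(t) ** 2)

rho = -G11.imag / np.pi   # semicircular density of states
```

Outside the band G11 is essentially real and decays as 1/ω; inside the band −Im G11/π reproduces the semicircle of Fig. 5.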

D. Infinite linear chain
Another interesting model that allows an analytical solution is the infinite linear chain. The band structure and density of states can be easily obtained in the tight-binding framework by considering Bloch eigenfunctions [40]. Here we show how to obtain the DOS from the equations of motion.
The infinite chain can be viewed as the coupling between two semi-infinite chains, as shown in Fig. 6(a). This corresponds to two effective sites in a two-site model, as in Fig. 6(b), with the solution of Eq. (83), where G11 is the diagonal dressed Green's function of the infinite lattice, while the effective propagator G̃1 = G̃2 is the previous semi-infinite answer, Eq. (79). One might wonder whether this solution is unique. Other couplings are possible: for example, in Fig. 7(a) we couple one undressed site to two semi-infinite lattices. In this case the equations of motion go not only forward but also backward. The dressed Green's function of the central site now reads as in Eq. (84), where G̃1 is given by Eq. (79). It can be shown that the expressions (83) and (84) are identical, as long as G̃1 obeys Eq. (77) (with g1 = g0), which is indeed the case here. Replacing expression (66) for g0 into Eq. (84), one obtains Eq. (85). In Eq. (85) we can see that the resulting Green's function of the infinite chain has square-root singularities at |ω − ε0| = 2|t|. The infinitesimal η contributes a softening around the singularities. For |ω − ε0| < 2|t|, the Green's function is essentially purely imaginary, with roughly the profile of the inverse of the semicircle we have seen in Fig. 5, however with the presence of singularities at the band edges |ω − ε0| = 2|t|. These asymmetric spikes are a hallmark of low-dimensional systems (known as van Hove singularities) and indicate the presence of a flat dispersion curve with a large accumulation of states. These singularities affect the structural, electrical and optical properties of solids and nanostructured materials, such as carbon nanotubes. The density of states of the inner site, obtained from the imaginary part of the Green's function, Eq. (83) or Eq. (84), is plotted in Fig. 8. In source code 1 (see Appendix), we illustrate how to obtain the graph of Fig. 8 using the Julia programming language. For an introductory course in Julia, please see Ref. [41].
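The equivalence of the two constructions is easy to verify numerically. Below, a sketch (Python, illustrative parameters; the paper's source code 1 is in Julia) assembles the bulk Green's function from the surface function of the semi-infinite chain, as in Eq. (84), and compares it with the closed form 1/√(z² − 4|t|²):

```python
import numpy as np

# Bulk Green's function of the infinite chain, built from the surface
# function of two semi-infinite chains (Eq. (84)) and compared with
# the closed form. Branch chosen as for the semi-infinite chain.
eps0, t, eta = 0.0, 1.0, 1e-4
omega = np.linspace(-3.0, 3.0, 1201)
z = omega - eps0 + 1j * eta

s = np.sqrt(z - 2 * abs(t)) * np.sqrt(z + 2 * abs(t))
Gsurf = (z - s) / (2 * abs(t) ** 2)            # Eq. (79)
Gbulk = 1.0 / (z - 2 * abs(t) ** 2 * Gsurf)    # Eq. (84)
Gref = 1.0 / s                                 # closed form

rho = -Gbulk.imag / np.pi   # diverges at the band edges |omega| = 2|t|
```

The density of states is flat-bottomed at the band center and develops the van Hove spikes at |ω − ε0| = 2|t|, as in Fig. 8.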

E. Three-site chain: a recipe for recursion
Let us now apply the equation-of-motion technique to a linear chain composed of three sites, shown in Fig. 9. Although it may appear to be just another application of Eq. (59), these equations will set the paradigm for the surface-bulk recursive Green's function method presented in Sec. III B. For the widely used surface Green's function, this 3-site model is revisited only briefly; special attention, however, will be required for the surface-bulk method.
Let us assume that our three-site chain is described by the three-site version of the non-interacting Hamiltonian (55). Figure 9. Three-site chain with nearest-neighbor hoppings t and t*. Visualizing the sites helps in writing the equations of motion, making the procedure easier and more mechanical. The equations of motion of this system will play an important role in the recursive methods presented later on.
From the local potential term of the Hamiltonian above, we see that the undressed Green's functions (66) can be written as g^r_i = (ω − ε0 + iη)^{−1}. We will now write the EOMs for the dressed Green's functions G^r_i and for the non-diagonal propagators G^r_{ij} that connect sites i and j. We will omit the energy dependence (ω) and the index r for simplicity.
a. Green's function of site 1: G11 — Let us calculate the Green's function of the first site of the three-site system, according to Eq. (59). Schematically, we see in Fig. 9 that site 1 couples to site 2 via the non-diagonal propagator G21 (where the subindex describes the propagator "from site 2 to site 1"), Eq. (87). One way of visualizing how this works is first to identify the first neighbors of the site in question (see Fig. 9), the direction of the hopping, and the corresponding propagator G_{kj}, keeping in mind that the last index j of the non-diagonal propagator has to be the same as that of the Green's function G_{ij} under consideration.
The non-diagonal propagators that point to the first site are given by Eqs. (88) and (89). Inserting Eq. (89) into (88), we obtain the closed relation of Eq. (91).
Using the result of Eq. (91) in (87), we can eliminate G21 and obtain the dressed Green's function of the first site, Eq. (92).

b. Green's function of site 2: G22 — Applying the practical scheme discussed above, we can write an expression for the central Green's function, Eq. (93). Since there are only three sites, the propagators pointing to site 2 can be written in closed form and inserted into Eq. (93), yielding the local dressed Green's function of site 2.

c. Green's function of site 3: G33 — The equation of motion for the local dressed Green's function of site 3 is Eq. (97). To obtain a closed expression for G33 we can either work out the EOM for G23 or simply make the replacements 1 → 3, 3 → 1 and t → t* in Eq. (92), with the resulting expression given in Eq. (98). So far, these examples not only provided the opportunity to exercise the method, but also introduced the boxed expressions (87), (93) and (97), fundamental to the technique developed in Sec. III B for infinite chains.
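For equal site energies, the boxed results reduce to compact closed forms, which can be verified against a direct inversion of the resolvent. The closed forms below are our specialization of Eqs. (92), (95) and (98) with ε_i = ε0 (a sketch with illustrative parameters; t is taken complex to exercise the t / t* structure):

```python
import numpy as np

# Three-site chain: closed-form local Green's functions versus a
# direct inversion of (z - H)^(-1). By symmetry G33 = G11 here.
eps0, t, eta = 0.2, 0.8 + 0.3j, 1e-3
H = np.array([[eps0, t, 0], [np.conj(t), eps0, t], [0, np.conj(t), eps0]])

z = 0.7 + 1j * eta                       # a sample energy
G = np.linalg.inv(z * np.eye(3) - H)     # exact dressed Green's function

g = 1.0 / (z - eps0)                     # undressed g_i, Eq. (66)
x = abs(t) ** 2 * g ** 2
G11 = g * (1 - x) / (1 - 2 * x)          # closed form for the edge site
G22 = g / (1 - 2 * x)                    # closed form for the central site
```

The check works at any complex energy away from the eigenvalues, and the same strategy (closed EOM result versus brute-force inversion) is a good habit for the larger lattices below.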

A. Surface Green's functions decimation
In the early 1980s, the investigation of surface and bulk properties of metals, transition metals and semiconductors motivated the development of effective Hamiltonians and iterative techniques for obtaining the density of states [42]. Recursive Green's function (RGF) approaches used computationally efficient decimation techniques from the numerical renormalization group, simulating materials via effective layers [43].
The success of recursive Green's functions was boosted by the simulation of transport in materials, in particular two-terminal ballistic transport. The retarded and advanced Green's functions of the central device in a junction contain the information needed to calculate transport properties such as the stationary current and the conductivity, or transmission matrix. In essence, the idea of dividing the material into layers, modelling it as a chain, is the spirit of the recursive Green's function method. We will illustrate this procedure using a linear chain of single-site orbitals and two forms of decimation: the most widely used, the surface technique, and an alternative version that stores information from the central sites.
Let us consider a three-site chain, as shown in Fig. 10(a). We basically follow Refs. [43,44], except for the fact that in our notation the first site is labelled 1 instead of 0; therefore every index is shifted by one with respect to those of [43,44]. Again, for the first site we have the equations of motion (99) and (100). By replacing (100) in (99), we eliminate the non-diagonal propagator G21: (1 − g1 t g2 t*) G11 = g1 + g1 t g2 t G31.
As a general rule, the non-diagonal propagator Gn1 relates first neighbors, Eq. (102). Writing the analogous expressions of (102) for G_{n−1,1} and G_{n+1,1}, and replacing them back into Eq. (102), we obtain a recursive expression that eliminates the non-diagonal first-neighbor propagators, leaving only non-diagonal second-nearest-neighbor functions, Eq. (103). Rewriting Eq. (103) in terms of renormalized variables, where all undressed functions g_i = g are given by (66), we arrive at a shorter recursion relation, Eq. (108). Starting from G11, Eq. (108) generates a recursion relation involving only non-diagonal second-nearest-neighbor functions of odd sites.
Starting from Eq. (112), we can now repeat the arguments described above, from Eq. (104) to (112), x times. At each repetition we obtain a larger effective system with not twice but 2^x the lattice constant. This process is known as decimation: one encapsulates the numerous sites into a three-point recursion relation with renormalized parameters. This procedure ultimately provides information about the infinite lattice. After x iterations, Eqs. (109) to (112) hold for n ≥ 1 with renormalized parameters. The renormalized hoppings are smaller than the original t, since they are multiplied by the undressed g, as in Eqs. (104) and (105), with g = (ω − ε_{x−1} + iη)^{−1}. After x iterations, site 1 is coupled to a chain of 2^x sites with a much smaller effective hopping. The decimation stops when ||α_x|| and ||β_x|| are sufficiently small. At this point we have an approximation to the local Green's function of the surface site 1, at the edge of the chain. To picture the decimation procedure, the iteration steps are illustrated in Fig. 10. Note that it is the reverse of the mechanism that encapsulates the infinite lattice into a finite chain, shown in Figs. 6 and 7. We start with the three-site chain, shown in Fig. 10(a), and eliminate G21, represented in the figure by site 2 in a lighter color. In the first iteration, we add two interstitial sites, growing the lattice to 5 sites, as shown in Fig. 10(b). Next, we eliminate the even non-diagonal functions, storing the information of the new sites in the parameters α, β, ε̃ and ε. With these renormalized parameters one can simulate a chain that grows exponentially fast while keeping the three-point structure of Eq. (102). The surface RGF is widely used in transport simulations, with several applications [44][45][46] and sophistications [47].
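The decimation loop itself is only a few lines. The sketch below is a scalar (single-orbital, real t) implementation in the standard three-point style; the variable names, the fixed iteration count, and the parameter values are our choices, and the paper's reference implementation is in Julia:

```python
import numpy as np

def surface_gf(omega, eps0=0.0, t=1.0, eta=1e-3, iters=50):
    """Surface Green's function of a semi-infinite chain by decimation.

    Each pass halves the chain: eps_s collects the surface self-energy,
    eps the bulk one, while the effective hoppings alpha, beta shrink.
    A sketch for real t (so alpha = beta throughout).
    """
    z = omega + 1j * eta
    eps_s = eps = eps0
    alpha = beta = t
    for _ in range(iters):
        g = 1.0 / (z - eps)
        eps_s, eps = (eps_s + alpha * g * beta,
                      eps + alpha * g * beta + beta * g * alpha)
        alpha, beta = alpha * g * alpha, beta * g * beta
    return 1.0 / (z - eps_s)
```

After convergence the result matches the analytic Eq. (79): real outside the band, semicircular imaginary part inside it.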
In the next section we present an alternative version, capable of accessing the Green's functions of the edge and of the bulk at once, which may be of use in the context of topological insulators. 16

B. Surface-bulk Recursive Green's function decimation
Another form of RGF, which we first present here, is based on the 3-site local Green's functions already introduced in Sec. II E. The decimation is similar to the surface procedure: we insert interstitial sites at each iteration. The difference lies in which functions are eliminated in the hierarchy of equations of motion and in the recursive model.
Although the equation-of-motion (EOM) procedure is quite mechanical, we exemplify how the decimation develops in the first iteration of the surface-bulk RGF. By now the reader could probably jump directly to the effective equations; we elaborate them for the sake of clarity.
Let us add two sites, a and b, to the 3-site chain, as shown in Fig. 11. Figure 11. Illustration of the first decimation step, where we insert interstitial sites a and b in the three-site chain.
For 5 sites, the equations are more numerous and the surface solution will be more intricate. We will examine three sites: the edges and the central site.
For the first site of Fig. 11 we have Eqs. (119) and (120). By replacing (120) in (119), we eliminate the non-diagonal function G_{a1}, Eq. (121). Eq. (121) can be rewritten in the form of Eq. (87) using renormalized quantities. Note that the edge propagator G11 then corresponds to Eq. (92), with the undressed effective functions g̃2 and g̃3, which we derive below for completeness. The Green's functions of the central sites of Fig. 11 have the EOMs (125)-(127). Eliminating the Green's functions (126) and (127), we recover Eq. (93), where we used the renormalized Green's function g̃2.
In Eq. (129), t̃* = t* g_a t* and t̃ = t g_b t, considering undressed propagators g_a = g_b.
Finally, the Green's function of the last site of Fig. 11 obeys Eqs. (130)-(132). Comparing these expressions with (97), we consider t̃* = t* g_b t* in the renormalization of g3.
In this five-site example we made explicit the first step of the decimation recursion based on the three-site system. This procedure differs from the surface Green's function approach, since we kept the three local propagators, eliminating the non-diagonal ones. Figure 12 illustrates the renormalization of the interactions and the mapping of the five-site chain onto the effective three-site one.
In Fig. 13 we plot the imaginary part of the retarded Green's function of the surface site 1, associated with its density of states ρ11(ω). As the decimation procedure is carried out, the number of peaks grows with the number of sites. The corresponding source code is presented in the Appendix.

Semi-infinite lattice
The surface-bulk RGF decimation technique detailed in Sec. III B is an alternative to the widespread surface method that automatically delivers information about the central site. In both methods the number of simulated sites grows exponentially with the number of iterations, and both are easily extended to two dimensions via a matrix representation.
Here we choose to elaborate on how the proposed surface-bulk decimation works in practice.
We implemented the surface-bulk RGF algorithm in Julia. Source code 2 (see Appendix) uses the recursive method to evaluate the surface density of states of a semi-infinite linear chain. The results of the first few steps are plotted in Figs. 13 and 14.
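For completeness, here is a compact sketch of the surface-and-bulk idea in Python (the paper's source code 2 is in Julia): the renormalized bulk energy accumulates self-energy corrections from both sides, the surface energy from one side only, so a single sweep returns both local functions. The update rules below are the standard three-point decimation written in our notation; they are not necessarily identical, line by line, to the method of Sec. III B.

```python
import numpy as np

def surface_and_bulk_gf(omega, eps0=0.0, t=1.0, eta=1e-3, iters=50):
    """Decimation returning edge and bulk local GFs in one sweep.

    eps_s is dressed from one side (surface), eps from both sides
    (bulk). A scalar sketch for real t; illustrative parameters.
    """
    z = omega + 1j * eta
    eps_s = eps = eps0
    alpha = beta = t
    for _ in range(iters):
        g = 1.0 / (z - eps)
        eps_s, eps = (eps_s + alpha * g * beta,
                      eps + alpha * g * beta + beta * g * alpha)
        alpha, beta = alpha * g * alpha, beta * g * beta
    return 1.0 / (z - eps_s), 1.0 / (z - eps)
```

Both outputs can be checked against the analytic chain results: the surface function against Eq. (79), the bulk one against 1/√(z² − 4t²).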

The ladder
In order to approach two-dimensional materials, a generalization of the RGF decimation technique is usually performed by slicing a region (central device or lead) into layers, from which the surface algorithm follows [42]. Figure 14. In Fig. 13 we showed the first steps; here we plot the 11th to 14th iterations, which exhibit several peaks. For η = 10−4 the numerical RGF recovers the analytical expression at around 16 steps, ≈ 66000 sites. In
two dimensions it is convenient to adopt a matrix representation of our Green's functions and hoppings. We will approach this generalization in the simplest 2D example of a ladder, where we couple two 3-site chains vertically, as shown in Fig. 15. We take as a convention a hopping t to the right and upwards, and t* to the left and downwards. Each site is indexed by its column (layer) i and row j, and we need to obtain the propagators G_{ij,i′j′}. Let us now consider displacements in the horizontal as well as in the vertical direction. For example, an electron on site 11 can visit its two first neighbors, 21 and 12 (see Fig. 15). The equation of motion for G11,11 will then exhibit a self contribution from 11 and two non-diagonal propagators, G21,11 and G12,11. The EOMs of this first column, i = 1, are
G11,11 = g11 + g11 t G21,11 + g11 t* G12,11 (133)
G12,12 = g12 + g12 t G11,12 + g12 t G22,12 (134)
G11,12 = g11 t* G12,12 + g11 t G21,12 (135)
G12,11 = g12 t G11,11 + g12 t G22,11. (136)
Arranging these equations in matrix form (rows separated by semicolons), we obtain
[G11,11 G11,12; G12,11 G12,12] = [g11 0; 0 g12] + [g11 0; 0 g12][0 t*; t 0][G11,11 G11,12; G12,11 G12,12] + [g11 0; 0 g12][t 0; 0 t][G21,11 G21,12; G22,11 G22,12]. (137)
Notice that Eq. (137) corresponds only to the first slice (column 1). Casting the left-hand side (l.h.s.) as G1 and the undressed function as g1, we can identify two hopping matrices: one between same-column sites, V, and one between columns, W. By isolating G1 we can write Eq. (139), where we have defined the Green's function of a single slice, Eq. (140).
From Eq. (139), we can identify that the same 3-site structure of Eq. (87) is now recovered in matrix form. This is very convenient, since we will be able to implement decimation in two dimensions.
For the second slice (column i = 2), we have Eq. (141). Therefore we can also rewrite Eq. (141) in the same form as Eq. (93), from the three-site formulas, Eq. (142). From the two identifications above we can perform a mapping onto three effective sites corresponding to these slices, shown in Fig. 16. The decimation method then applies, allowing the simulation of, e.g., a stripe.
The program in Julia to generate the results of the ladder is shown in the Appendix.
To go beyond the ladder, we can generalize V and W to bigger slices. These matrices will be larger but retain a simple form; let us construct them.
First note that, within a given slice, the electron can hop up or down a row. By our definitions (see Fig. 15), the downward hopping is t*, i.e., the hopping between (i, j) and (i, j + 1), such as between 11 and 12. Ordering the basis according to the row j, for the first column i = 1 we have {11, 12, 13, · · · } (the first index is i = 1 and the second is j = 1, 2, 3, · · · ). The possible hoppings V within the first slice lead to a tridiagonal matrix with null diagonal, Eq. (143), reflecting the fact that the hopping V takes the electron of the slice to a different row, the upper (i, j + 1) or the lower (i, j − 1) one. For the W matrix, the hopping takes place between sites of different columns. Presently we deal with three effective sites but, as the decimation proceeds, the lattice grows horizontally, forming a stripe. In this process, notice that, independently of the column i, all rows j of the slice are automatically connected, since the slices touch each other. For a given column, i = 1 for instance, with basis order {11, 12, 13, · · · }, where the second index is the row j = 1, 2, 3, · · · , every row is connected to itself, meaning that W is a diagonal matrix, Eq. (144). Therefore one can generalize the algorithm of the ladder to a stripe geometry, using the matrices (143) and (144) 17 . In Fig. 18 we plot the density of states of the bulk Green's function G2 at the middle of the stripe, for different widths: L = 2 (the ladder), L = 6, and L = 128.
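Promoting the decimation to matrix-valued parameters is mechanical. Below, a sketch (Python; illustrative parameters, real t so that t* = t, and the paper's source code 3 is in Julia) builds V and W in the spirit of Eqs. (143) and (144) and iterates the bulk slice function; for Ly = 1 it must reduce to the infinite chain:

```python
import numpy as np

def slice_matrices(Ly, t=1.0):
    """Intra-slice (V) and inter-slice (W) hoppings for a stripe of
    width Ly: V tridiagonal with null diagonal, W = t times identity."""
    V = t * (np.eye(Ly, k=1) + np.eye(Ly, k=-1))
    W = t * np.eye(Ly)
    return V, W

def bulk_gf_stripe(omega, Ly, eps0=0.0, t=1.0, eta=1e-3, iters=40):
    """Matrix three-point decimation for the mid-stripe slice GF."""
    V, W = slice_matrices(Ly, t)
    z = (omega + 1j * eta) * np.eye(Ly)
    eps = eps0 * np.eye(Ly) + V          # slice Hamiltonian
    alpha, beta = W, W.conj().T
    for _ in range(iters):
        g = np.linalg.inv(z - eps)
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha, beta = alpha @ g @ alpha, beta @ g @ beta
    return np.linalg.inv(z - eps)
```

A useful check: for Ly = 2 the ladder decouples, in the rung basis, into two chains with on-site energies ε0 ± t, so the bulk diagonal element is the average of two shifted chain Green's functions.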
As we increase the width of the stripe, the behavior tends to the limit of an infinite square lattice, given by an analytic expression in terms of an elliptic integral of the first kind [48]. It exhibits a cusp at ω = 0, a logarithmic singularity characteristic of two-dimensional lattices, associated with saddle points of the two-dimensional band structure [49]. Figure 18. Local density of states of the bulk Green's function G2 evaluated inside a stripe of width L. We plot the matrix element (L/2, L/2), using η = 10−3. The analytical result for the infinite square lattice [48] is shown as a reference for the asymptotic limit.
This last example illustrates the power of the technique in simulating finite lattices, which can go beyond 17 To generalize source code 3 (Appendix) to a stripe, one should define a variable for the stripe width Ly, which in the case of the ladder is Ly=2. The matrices V and W should be defined according to this size, V = diagm(tv*ones(Ly-1),-1) + diagm(zeros(Ly)) + diagm(tv*ones(Ly-1),1) and W = tw*eye(Ly), where the command eye in Julia defines an identity matrix and diagm a diagonal matrix. the present regular chains to real nano- or mesoscopic systems, such as electrodes, cavities, quantum dots and molecular junctions.

IV. CONCLUSIONS
To conclude, we have presented a pedagogical introduction to the Green's functions in the many-body formalism. Starting with a general view of Green's functions, from their classical mathematical origin, passing through the many-body definitions, we finally reached a practical application with the recursive Green's function technique. For a young researcher, it is not easy to grasp the whole power and, at the same time, the tiny details of the numerical methods available. We have therefore prepared this introduction based on simple condensed-matter models, with additional implementations in Julia, an open-source high-level language for scientific computing.
The surface-bulk recursive Green's function is, to the best of our knowledge, a new proposal in the field, which brings an advantage to the investigation of topological materials, where one is interested in both edge and bulk properties. Like the surface approach, our surface-bulk recursive Green's function can be generalized to other systems and geometries [44,47]. We believe this material will also be useful for researchers unfamiliar with the Green's function method who are interested in the new challenges of nanoscience and their implementations.