A didactically motivated reexamination of a particle’s quantum mechanics with square-well potentials

We address two questions regarding square-well potentials from a didactic perspective. The first question concerns whether the standard a priori omission of the potential's vertical segments in the analysis of the eigenvalue problem is justified. The detour we follow to find the answer starts from a trapezoidal potential, includes the analytical and numerical solution of the corresponding eigenvalue problem, and then analyzes the behavior of that solution in the limit in which the slope of the trapezoidal potential's ramps becomes vertical. The second question, obviously linked to the first, pertains to whether the continuity of the eigenfunction and of its first derivative at the potential's jump points is justified as an a priori assumption with which to kick off the solution process, as is standardly accepted in textbook approaches to the potential's eigenvalue problem. We show that, by following the indicated detour, the irrelevance of the potential's vertical segments and the continuity of the eigenfunctions and their first derivatives at the potential's jump points turn out to be proven results instead of initial assumptions.

Many quantum-mechanics textbooks [1-16]¹ consider, discuss, and solve the eigenvalue problem related to the one-dimensional symmetrical/unsymmetrical finite and/or infinite square-well potential. The subject has seemingly been analyzed in a variety of substantially similar manners, which we group together and label as the standard textbook approaches for future reference, and the outcome of those analyses is looked upon as an established body of knowledge to be taught routinely. So, why would one wish to go through a reexamination? The inspiration came from a student's interesting and subtle remark: we are taught about square-well potentials, such as, say, the one shown in Fig. 1a, as useful idealizations of practical cases;² however, when we deal with the eigenvalue problem, we utilize for all intents and purposes the discontinuous potential shown in Fig. 1b, which is a somewhat different representation of the original because the Heaviside functions ignore the presence of the vertical segments. How do we know beforehand that the omission of the vertical segments, which, after all, are legitimate portions of the potential required by the idealization, is irrelevant for the solution of the eigenvalue problem? A teacher's very probable reaction, naturally banking on the mature body of knowledge offered by the standard textbook approaches, would be to reassure the student that even if a way could be thought of to absorb the potential's vertical segments into the analysis then, in the end, the same results would be obtained and nothing new would be found; a typical student would presumably be convinced by such a reassurance because it conveniently minimizes the learning process. On the other hand, there exists a fraction, maybe small, of curious students on whom that reassurance would prove less effective. In the back of their minds, the wisdom delivered in the last paragraph of his incisive article [17] about science's meaning by that witty master of physics that Feynman was, It is
necessary to teach both to accept and to reject the past with a kind of balance that takes considerable skill. Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generations. would keep bouncing back and forth together with other tempting reflections such as, "How do I know beforehand that I am not going to find anything new? And even if that turned out to be the case, how do I know whether or not I will at least learn something new by following other paths if I do not explore them?" So, imagining such a state of mind, we gathered encouragement and thrust from Feynman's advice concluding his cited article, "So carry on. Thank you.", and went on with the reexamination described in the sequel. Our effort is dedicated particularly to the students in the second camp.

¹ Complete literature surveys are unattainable asymptotic ideals. The list we provide contains only the textbooks we consulted, but we trust they constitute a sufficiently representative sample.
² A typical example can be found at page 246 of Bohm's textbook [8], where the Ramsauer effect is described.
Our approach confides in and complies with natural philosophy's famous principle Natura non facit saltus (Nature does not make jumps)³ that so much inspired several scientific eminences of the past in different departments of science [18][19][20][21]. Indeed, we relinquish the standard textbook approaches, take as starting point the trapezoidal-well potential sketched in Fig. 1c, solve the corresponding eigenvalue problem analytically and numerically, and investigate the solution's behavior when the slope of the potential's oblique segments becomes vertical, that is, in the limit of vanishing ramp width. It seemed to us a rather straightforward conceptual pathway to follow in order to avoid the omission of the square-well potential's vertical segments. We were delighted to discover, although only after having carried out our study almost completely, that the same idea had been proposed and probed by Branson [22]⁴ in 1979. We hold Branson's article in great regard because it drew our attention towards another important issue connected with the standard textbook approaches: the presumed continuity of the eigenfunctions and their derivatives at the potential's jump points (Fig. 1b);⁵ we will give our point of view about this matter in Sec. IV. We were also pleased to discover in Fig. 6-1 at page 237 of Tipler and Llewellyn's textbook [16] that those authors used the trapezoidal-well potential of Fig. 1c with equal side levels to characterize the quantum dynamics of an electron between two electrodes in a vacuum tube, a fine schematization not so far from real-life applications.

II. QUANTUM-MECHANICS PROBLEM WITH THE TRAPEZOIDAL-WELL POTENTIAL

A. Formulation and preliminary considerations regarding boundary conditions
We consider a particle on the x axis subjected to the trapezoidal-well potential shown in Fig. 1c. The particle's hamiltonian is simply H = p²/2m + V(x) [Eq. (2)] and its quantum mechanics is governed by the Schrödinger equation iℏ ∂Ψ/∂t = HΨ [Eq. (3)]. The praxis in quantum-mechanics textbooks is to introduce at this point the standard variable-separation technique and to launch onto the analysis of the eigenvalue problem governed by the time-independent Schrödinger equation; as a representative example, we mention Griffiths' didactically remarkable textbook [14]. We believe that such a way of proceeding is somehow incomplete because it gives the student only a partial view, inasmuch as it puts in evidence exclusively the suitableness of the mathematical operators intervening in the Schrödinger equation [Eq. (3)] for variable-separation techniques and totally disregards the equally important role of the boundary conditions, which, we are convinced, deserve attention already at this stage of the problem formulation. From a mathematical point of view, Eq. (3) is a second-order partial differential equation whose integration requires an initial condition [Eq. (4)] and appropriate boundary conditions. In the one-dimensional case we are considering, there are two boundaries (x = ±∞) and, therefore, we need two conditions involving the wavefunction and its first derivative; we may write them formally as Eqs. (5), in which the coefficients are given constants. Of course, Eqs. (5) must encode in mathematical terms information about what is physically going on at the boundaries. It may happen sometimes that an explicit and crystal-clear grasp of the boundary conditions is not in our possession, but that occurrence neither entitles us to ignore nor exempts us from keeping in mind their conceptual necessity, at least formally. Now, within a merely mathematical context, there is really nothing particularly special about the above differential-equation problem [Eq. (3), Eq. (4), Eqs.
(5)]: if the initial and boundary conditions are explicitly specified, then the set of the mentioned equations is a ready intake to feed numerical-solution machineries. In this regard, an analogy comes quickly to mind: heat-transfer engineers routinely solve a similar set either numerically or via variable separation when possible. Their unknown is the temperature and, obviously, the terms in their Eq. (3) have different physical meanings; the imaginary unit does not appear, but its appearance in our case is an almost irrelevant computational preoccupation because modern⁶ programming languages handle complex numbers smoothly. Within a physical context, quantum mechanics casts a peculiar nuance on the differential-equation problem we are considering. From a quantum-mechanical point of view, the acceptable solutions to Eq. (3) must conform to a very strict requirement: the wavefunction must be normalizable [Eq. (9)], otherwise the energy operator E = iℏ ∂/∂t is not hermitean and the macroscopic observable energy does not turn out to be real [23], ⟨E⟩ ≠ ⟨E⟩*. Non-compliant solutions have, therefore, no physical significance. Incisive statements emphasizing this aspect were expressed, for example, by Bohm [8, p. 178], If this requirement [our Eq. (9)] is not satisfied, then we cannot even normalize the probability, so that it is impossible to give the wave function a meaning in terms of physically observable averages.
and Griffiths [14, p. 13 (his emphasis)], ... non-normalizable solutions cannot represent particles, and must be rejected. Physically realizable states correspond to the square-integrable solutions to Schrödinger's equation.
The normalization condition [Eq. (9)] has twofold repercussions on the boundary conditions. If the energy operator is hermitean then so must be the hamiltonian [Eq. (12)]. The submission of our hamiltonian [Eq. (2)] to the hermiticity test represented by Eq. (12) yields a constraint on the boundary conditions [Eq. (13)]. Of the explicit examples listed after Eqs. (5), the periodicity conditions [Eqs. (7)] are the only ones that comply with Eq. (13) unconditionally; the Sturm-Liouville conditions [Eqs. (8)] do so only if the coefficients' ratios are real. Nothing can be said a priori about the wavefunction-prescription conditions [Eqs. (6)] for arbitrary functions Θ. A more severe constraint is levied by the boundaries being situated at x = ±∞. These locations are a bit hostile in view of normalization operations; they restrict the boundary conditions even more than Eq. (13) because they require the vanishing of the wavefunction [8,14,15]⁷ [Eqs. (15)]. Equations (15) are a particular case of the wavefunction-prescription conditions [Eqs. (6) with vanishing prescribed functions]; they comply with Eq. (13) and, in so doing, safeguard the hermiticity [Eq. (12)] of the hamiltonian [Eq. (2)]; de facto, they also imply the vanishing of the wavefunction's corresponding first derivatives.
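To make the hermiticity test concrete, here is a sketch of the standard computation, assuming the hamiltonian H = p²/2m + V(x) with real potential (a hedged reconstruction; the paper's exact Eqs. (12)-(13) may be written differently). Integrating the kinetic term by parts twice leaves only a boundary term,

```latex
\int_{-\infty}^{+\infty}\!\Psi^{*}\,(H\Psi)\,dx-\int_{-\infty}^{+\infty}\!(H\Psi)^{*}\,\Psi\,dx
=-\frac{\hbar^{2}}{2m}\left[\Psi^{*}\frac{\partial\Psi}{\partial x}
-\frac{\partial\Psi^{*}}{\partial x}\,\Psi\right]_{x=-\infty}^{x=+\infty}
```

so the constraint on the boundary conditions is that the bracket vanish. Periodicity conditions make the contributions of the two boundaries cancel each other, while a vanishing wavefunction kills each contribution separately.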

B. Boundary conditions with variable separation
We rejoin now the beaten path of the literature by applying the standard variable-separation technique, which splits the Schrödinger equation [Eq. (3)] into two separate and independent differential-equation problems [Eqs. (17.1) and (17.2)]. The temporal one [Eq. (17.1)] is easily integrated, but we put its integral [Eq. (18)] on hold for the time being because the exploitation of the initial condition [Eq. (4)] is premature at this moment. The integration of the time-independent Schrödinger equation [Eq. (17.2)] involves more elaboration. It definitely requires two boundary conditions that, obviously, we should derive from the general ones [Eqs. (5)] by substituting into them the variable-separated wavefunction [Eq. (16)]. This move calls for due attention because it leads us to face a crucial conceptual filter that reveals the importance of giving the boundary conditions the deserved attention: if Eqs. (5) transform into conditions on the eigenfunction alone [Eqs. (19)], then we have a green light to proceed with separated variables; otherwise this would be the end of the road because the boundary conditions do not permit the existence of variable-separated solutions, the receptive mathematical structure of the Schrödinger equation [Eq. (3)] towards variable separation [Eq. (16)] notwithstanding. We are on safe ground with the wavefunction vanishing at (±) infinity because, after the substitution of Eq. (16), Eqs. (15) go smoothly into the eigenfunctions' vanishing [Eqs. (20)]. Although these considerations may appear a bit formal, they deliver, we believe, an important didactical message that was best expressed in a generalized manner by Tanner [24] in 1991: Although the Schrödinger equation might be separable in some coordinates, the boundary conditions can recouple the variables.
The applicability extent of such a statement is really wide. It is true, for example, in the case of spatially confined molecules whose time-independent Schrödinger equation cannot be separated in terms of center-of-mass and internal coordinates due to the variable recoupling imposed by the confinement boundary conditions. Tanner also complained that: A representative sample of relevant sections (on the hydrogen atom, center of mass, etc.) of introductory textbooks on quantum mechanics revealed no discussion of this difficulty.
We tend to side with him. Textbooks invariably focus on the separation of the mathematical operators appearing in the Schrödinger equation, be it either time-dependent or time-independent. Exceptions paying due attention to boundary conditions are rare; among them, Persico's great textbook [1,2] shines through.⁸ The time-independent Schrödinger equation [Eq. (17.2)] with the trapezoidal-well potential [Eq. (1)] and the eigenfunction-vanishing boundary conditions [Eqs. (20)] constitute the eigenvalue problem we wish to solve. Before launching onto the solution process, however, we wish to spend a few more words to emphasize further the importance of the boundary conditions. In this regard, we ask: how do we know whether an eigenfunction corresponding to a determined eigenvalue is the unique solution⁹ to Eq. (17.2)? Let us suppose that two eigenfunctions ψ₁, ψ₂ exist for the same eigenvalue; if they are linearly independent, then their Wronskian [25] never vanishes. Both eigenfunctions must verify the differential equation [Eq. (17.2)] and the boundary conditions [Eqs. (20)]

⁸ The textbook in Italian [1] received a very positive review in Nature 139, 394 (1937). The not better identified reviewer, who enigmatically signed as H. T. H. P., valued Persico's efforts as "We owe a deep debt of gratitude to Dr. Persico for undertaking the useful task of presenting, in a single volume of reasonable size, a unified account of all aspects of the subjects." and concluded with "... the only serious defect of the book is that it is in Italian. Will some publisher consider the possibility of an English translation?" His exhortation was fulfilled 13 years later by Prentice-Hall, which published the English translation [2] by G. Temmer; the English translation was then reviewed, again positively, by M. Lax in American Journal of Physics 19, 478 (1951).
⁹ Griffiths dedicated problem 2.45 at page 87 of his textbook [14] to this matter, but his emphasis was more on the absence of degenerate states.
simultaneously by definition. The potential in Eqs. (22) can be any and need not necessarily be the trapezoidal-well potential of Eq. (1). We can multiply Eq. (22.1) by ψ₂, Eq. (22.2) by ψ₁, and subtract to obtain a vanishing expression that proves the Wronskian's invariance; thus, if the Wronskian is continuous in (−∞, +∞), and we plant here a flag to which we will need to return during the discussion of Sec. IV, then it is constant and we can conveniently evaluate it at the boundaries. We understand at once from Eq. (22.7) how the eigenfunctions' uniqueness is crucially hanging on the knowledge of the boundary conditions. We can confide in those [Eqs. (22.3) and (22.4)] we have adopted in our eigenvalue problem because they reassuringly make the Wronskian vanish (W = 0), imply the linear dependence of ψ₁, ψ₂ and, in so doing, compel unambiguously the uniqueness. So, the eigenstates are not degenerate: for a specified eigenvalue there is one and only one eigenfunction. This conclusion goes hand in hand with two other important properties whose proofs are disseminated throughout the majority of the textbooks cited in the beginning of Sec. I: the eigenvalues are real and the eigenfunctions corresponding to two distinct eigenvalues are orthogonal. An interesting consequence of the eigenfunction-uniqueness proof is that we are given the freedom to choose the eigenfunctions to be either real or purely imaginary. Indeed, if we break down the eigenfunction explicitly into its real and imaginary parts and substitute into the differential equation [Eq. (17.2)] and the boundary conditions [Eqs. (20)], then we reach again the same structure of Eqs. (22.1)-(22.4) with ψ₁, ψ₂ replaced by the real and imaginary parts, and a vanishing Wronskian. Thus, the real and imaginary parts are linearly dependent and the eigenfunction is proportional to either of them through an inessential proportionality constant that we can choose either real or imaginary as it pleases us.
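The Wronskian's invariance claimed above is easy to probe numerically. The sketch below, with an arbitrarily chosen smooth well and trial eigenvalue (both purely illustrative assumptions of ours), integrates an equation of the form of Eq. (17.2) from two independent initial conditions and checks that W = ψ₁ψ₂′ − ψ₁′ψ₂ stays constant along the axis:

```python
import numpy as np
from scipy.integrate import solve_ivp

def v(x):
    # illustrative smooth well (our arbitrary choice; any continuous potential works)
    return -1.0 / np.cosh(x) ** 2

eps = -0.3  # trial value; it need not be an eigenvalue for the Wronskian argument

def rhs(x, y):
    # y = [psi, psi']; nondimensional form psi'' = (v(x) - eps) psi
    return [y[1], (v(x) - eps) * y[0]]

xs = np.linspace(-5.0, 5.0, 11)
s1 = solve_ivp(rhs, (-5.0, 5.0), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12)
s2 = solve_ivp(rhs, (-5.0, 5.0), [0.0, 1.0], t_eval=xs, rtol=1e-10, atol=1e-12)

# W = psi1 psi2' - psi1' psi2; it starts at 1 and must remain 1 because
# the differential equation contains no first-derivative term
W = s1.y[0] * s2.y[1] - s1.y[1] * s2.y[0]
print(np.max(np.abs(W - 1.0)))  # ~ 0 up to integration tolerance
```

The constancy of W is exactly the property that lets the text evaluate the Wronskian at the boundaries.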
We imagine the reader to be sufficiently sensitized about the importance of the boundary conditions and, therefore, move on with the analysis of the eigenvalue problem.

Nondimensional formulation
We begin by formulating the eigenvalue problem in nondimensional form. We scale the coordinate with the semiextension of the potential well (Fig. 1c). The nondimensional eigenvalue appearing in Eq. (27.1) is defined accordingly, while the nondimensional potential descends from Eq. (1) and includes three solution-controlling characteristic numbers: the two side levels and the ramp width [Eq. (27.4)]. We assume the second level not to exceed the first for a mere reason of convenience; obviously, the limitation does not restrict the results in any way.
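For concreteness, a plausible reconstruction of the scaling, writing ℓ for the well's semiextension and u₁, u₂, a as our assumed symbols for the two side levels and the ramp width (the paper's original notation may differ):

```latex
\xi=\frac{x}{\ell},\qquad
\varepsilon=\frac{2m\ell^{2}}{\hbar^{2}}\,E,\qquad
u_{1,2}=\frac{2m\ell^{2}}{\hbar^{2}}\,V_{1,2},\qquad
-\phi''(\xi)+u(\xi)\,\phi(\xi)=\varepsilon\,\phi(\xi)
```

with u(ξ) equal to u₁ for ξ ≤ −(1+a), decreasing linearly to zero on the left ramp −(1+a) ≤ ξ ≤ −1, vanishing in the central zone |ξ| ≤ 1, increasing linearly to u₂ on the right ramp 1 ≤ ξ ≤ 1+a, and equal to u₂ beyond.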
We have graphically illustrated the nondimensional potential [Eq. (27.4)] in Fig. 2 in view of the forthcoming analysis. The potential subdivides the coordinate axis into five zones, in each of which the nondimensional time-independent Schrödinger equation [Eq. (27.1)] must be integrated separately. The zonal solutions can be joined by imposing the continuity of the eigenfunction and of its first derivative at the junction points, that is, the points at which the potential's slope is discontinuous; the unquestionable legitimacy of the claimed continuity conditions is guaranteed by the potential's continuity. In turn, the continuity of the eigenfunction's second derivative at the junction points is guaranteed by Eq. (27.1). The analytical integration is described in the following sections. In parallel, we have also carried out the integration numerically by a method based on high-order finite differences [26][27][28] implemented in the code HOFiD MSP, which can solve multiparameter spectral BV-ODE problems. In our numerical calculations, we transform the integration interval (−∞, ∞) into a finite interval by means of a simple variable change and we utilize 6th-order formulae on a grid whose resolution consists of 2505 points, distributed in groups of 501 equispaced points in each zone.
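A much simpler sketch of the same numerical idea, using plain second-order central differences and domain truncation instead of the paper's 6th-order HOFiD machinery and variable change (the symbols u1, u2, a and the values 50, 30, 0.5 are our arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# nondimensional trapezoidal well: level u1 on the far left, u2 on the far
# right, zero in the central zone, linear ramps of width a in between
u1, u2, a = 50.0, 30.0, 0.5

def u(x):
    x = np.asarray(x, float)
    out = np.zeros_like(x)
    out[x <= -(1 + a)] = u1
    out[x >= 1 + a] = u2
    left = (-(1 + a) < x) & (x < -1)
    out[left] = u1 * (-x[left] - 1) / a      # ramps down from u1 to 0
    right = (1 < x) & (x < 1 + a)
    out[right] = u2 * (x[right] - 1) / a     # ramps up from 0 to u2
    return out

L, n = 8.0, 4000                  # truncate (-inf, inf) to [-L, L], Dirichlet ends
x = np.linspace(-L, L, n)
h = x[1] - x[0]
diag = 2.0 / h**2 + u(x)          # discretized -phi'' + u phi = eps phi
off = -np.ones(n - 1) / h**2
eps, _ = eigh_tridiagonal(diag, off, select='v', select_range=(-1.0, u2))
print(eps)                        # bound states: 0 < eps < u2
```

The truncation is legitimate because the bound eigenfunctions decay exponentially in the outer zones, so the artificial Dirichlet walls at ±L are invisible to within discretization error.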

Analytical integration in the zones with constant potential
If we introduce a dummy parameter for the central level and set convenient shorthands in the zones where the potential is constant [Eq. (27.4)], then, from Eq. (27.1), we obtain the general integral [Eq. (30)]. The imposition of the boundary conditions [Eqs. (27.2)] in the left- and right-most zones yields Eqs. (31). Physically meaningful solutions can be extracted from Eqs. (31) only if the arguments of the trigonometric functions are complex; that, in turn, implies the negativity of the corresponding parameters. This occurrence produces the limitation of Eq. (28) and bounds the eigenvalues to lie below the lowest potential level, the second one in our case. In accordance with Eq. (32), we rearrange Eq. (28) as Eq. (33) and transform the trigonometric functions of Eqs. (31) into exponential functions [Eqs. (34)]. The crossed terms in Eqs. (34) vanish in the limit; thus, boundary-condition compliance requires the coefficients of the growing exponentials to vanish, consistently with Eq. (32). Obviously, the general integral [Eq. (30)] stands valid also for eigenvalues above the lowest potential level, in which case it becomes real, but the imposition of the boundary conditions [Eqs. (31)] remains idle because the limits at ±∞ of the trigonometric functions are indeterminate. Thus, rewinding to Eqs. (9) and (12) through the sequence Eq. (26.2), Eqs. (20), Eq. (16), Eq. (15), Eq. (13), we reach an unavoidable impasse: the hamiltonian's hermiticity test fails and the wavefunction cannot be coerced into normalization. There is nothing else left to do than to enforce Griffiths' verdict quoted just before Eq. (11): non-normalizable solutions must be rejected because they correspond to physically unrealizable states. Does this mean that we should throw those solutions away? No, they can still be of service as mathematical ingredients with which to compose physically acceptable solutions, but this angle of the subject is somewhat tangential to our main theme focused on the bound states and, therefore, we refer the interested reader to the lucid explanations provided by Griffiths in Sec. 2.4 at page 59 of his textbook [14].
In the central zone, the potential vanishes [Eq. (27.4), central line], and the general integral [Eq. (30)] becomes Eq. (38). With regard to the argument of the trigonometric functions in Eq. (38), it is worth noticing that, as far as the imposed boundary conditions [Eqs. (31)] are concerned, there is really nothing in them preventing the existence of negative eigenvalues. The latter occurrence should not be ruled out simply on the basis of the presence of the eigenvalue's square root in Eq. (38). Indeed, assuming hypothetically a negative eigenvalue, we could apply the switch of Eq. (39) and transform the trigonometric functions with complex argument into exponential functions with real argument [Eq. (40)]. The exponential functions in Eq. (40) would be harmless and well behaved because the continuity conditions for the determination of the coefficients have to be imposed at the boundaries of the central zone, located at the inner junction points. Yet, we will soon discover the reason for eigenvalue positivity; for now, we just have to wait patiently a little bit longer.
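A hedged reconstruction of the constant-zone integrals just discussed, in our assumed notation (ε for the nondimensional eigenvalue, u₁, u₂ for the side levels; the paper's symbols may differ): in a zone of constant level u, Eq. (27.1) reads −φ″ = (ε − u)φ, so for a bound state (ε < u)

```latex
\phi(\xi)=C\,e^{\kappa\xi}+D\,e^{-\kappa\xi},\qquad
\kappa=\sqrt{u-\varepsilon}>0,
```

and the boundary conditions at ∓∞ retain only the decaying exponential in the two outermost zones, φ ∝ e^{+√(u₁−ε) ξ} on the far left and φ ∝ e^{−√(u₂−ε) ξ} on the far right, while in the central zone (u = 0)

```latex
\phi_{0}(\xi)=A\sin\!\left(\sqrt{\varepsilon}\,\xi\right)
+B\cos\!\left(\sqrt{\varepsilon}\,\xi\right).
```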

Analytical integration in the zones with linear potential
The integration of the nondimensional differential equation [Eq. (27.1)] in the zones with linear potential requires familiarity with the Airy differential equation and functions [29,30], but it is very straightforward. Let us begin with the left zone. The potential decreases linearly [Eq. (27.4), 2nd line from top] from level 1 down to the bottom of the well, and the differential equation [Eq. (27.1)] becomes Eq. (41). An independent-variable linear transformation [Eq. (42)] converts Eq. (41) into the Airy differential equation, whose general integral is a linear combination of the Airy functions [Eq. (44)]. Things are pretty much similar in the right zone. The potential increases linearly [Eq. (27.4), 2nd line from bottom] from the bottom of the well up to level 2, and the differential equation [Eq. (27.1)] becomes Eq. (45). Another independent-variable linear transformation [Eq. (46)] converts Eq. (45) into another Airy differential equation with general integral Eq. (48). With the obtainment of Eqs. (44) and (48), our task is quickly completed. However, it is useful to introduce here, for future reference, some characteristics and consequences of the independent-variable transformations [Eqs. (42) and (46)] we took advantage of to carry out the integration, in view of their recurrent use in the forthcoming sections. The differentials help to derive the transformations between derivatives with respect to old and new variables. The inverse transformations are also useful because they allow one to obtain the characteristic values of the new variables at the junction points: 1-1' and 1'-0, located respectively at the outer and inner ends of the left ramp, and 0-2' and 2'-2, located respectively at the inner and outer ends of the right ramp; we find Eqs. (52) in the left zone and Eqs. (53) in the right zone. The overlined values are always positive; the circumflexed values' sign depends on that of the eigenvalue. They conform to the following, easily demonstrable, limitations and, expectedly, they fix the ranges of the new variables.
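The Airy pair used above is available numerically in scipy. The short check below verifies the standard Wronskian identity Ai(x)Bi′(x) − Ai′(x)Bi(x) = 1/π, the kind of Airy-function/derivative interplay that the matching identities of the next section exploit:

```python
import numpy as np
from scipy.special import airy

x = np.linspace(-10.0, 5.0, 301)
ai, aip, bi, bip = airy(x)   # Ai, Ai', Bi, Bi' evaluated on the whole grid

# Wronskian of the Airy pair: Ai(x) Bi'(x) - Ai'(x) Bi(x) = 1/pi for all x,
# confirming that Ai and Bi are everywhere linearly independent
w = ai * bip - aip * bi
print(np.max(np.abs(w - 1.0 / np.pi)))  # ~ machine precision
```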

Eigenfunction's and its first derivative's continuity at junction points
The eigenfunction's components [Eqs. (36)-(38), Eqs. (44) and (48)] we obtained by analytical integration involve the presence, and require the determination, of eight coefficients (one in each outermost zone and two in each of the remaining zones) and of the eigenvalue: nine unknowns in total. They can be found by imposing the continuity of the eigenfunction and of its first derivative, two conditions therefore, at the four zone-junction points; accordingly, this imposition permits the formulation of eight equations. The additional equation needed to balance the number of unknowns descends from the reformulation of the wavefunction's normalization condition [Eq. (9)] in terms of the eigenfunctions; as is well known, the most convenient choice is the normalization of the eigenfunctions, which, according to the adopted variable scaling [Eqs. (26)], goes into the nondimensional form of Eq. (57). Let us begin with the junction point 1-1', located at the left ramp's outer end, in correspondence of which the transformed variable is positive [Eq. (52.1)]. The eigenfunction's components [Eqs. (36) and (44)] can be soldered mathematically with the continuity joint of Eqs. (58). The right-hand side of Eq. (58.2) descends from the derivative transformation indicated in Eq. (50.1). After the derivatives are done and all necessary substitutions are in place, Eqs. (58) evolve into an algebraic system [Eqs. (59)]. On the right-hand side of Eq. (59.2), we have complied with the standard notation [29,30] reserved for the first derivatives of the Airy functions. Equations (59) fix two coefficients in terms of a third one; to that aim, they can be subtracted and rearranged accordingly. The multiplicative factor in Eq. (61) is conveniently set so as to simplify the notation; it is always real because the corresponding transformed variable is positive. The remaining coefficient can then be obtained from Eqs. (59) in two different but, obviously, equivalent ways, from which we also extract, as a collateral result, a useful identity [Eq. (64)] that permits interchanging the Airy functions with their first derivatives and vice versa; we can also deduce Eq. (64) from an appropriate rearrangement of Eq. (62); after proper generalization, it will prove useful in Sec. III B.
Basically, we must apply repeatedly, at the other junction points, the procedure followed for the junction point 1-1'. Let us see where it leads for the junction point 1'-0, located at the left ramp's inner end. The continuity requirement generates an algebraic system [Eqs. (66)] in which only one of the left ramp's coefficients appears, because we exclude the other with the aid of Eq. (61). Member-to-member division of Eqs. (66) eliminates the former coefficient [Eq. (67)]. The multiplicative factor in Eq. (67) is again set so as to simplify the notation; it is either real or purely imaginary according to the sign of the eigenvalue [Eq. (52.2)]. We hold on to Eq. (67) as it stands instead of proceeding to solve for one of the two coefficients appearing in it; the reason behind this decision will surface in Sec. II C 7. The remaining ramp coefficient follows from Eqs. (66) in either of two equivalent forms [Eq. (69)]. We trust the continuity-implementation recipe to be sufficiently clear by now. Its application to the junction points 0-2' and 2'-2 is nothing else than the conceptual mirroring of what we have done so far with the junction points 1-1' and 1'-0. Therefore, we believe we can confidently skip the details and list only the final output. The continuity requirement at the junction point 2'-2, located at the right ramp's outer end, leads to Eqs. (71) and (73). The continuity requirement at the junction point 0-2', located at the right ramp's inner end [Eq. (74.1)], produces a second equation involving the central-zone coefficients [Eq. (75)] and fixes another coefficient [Eq. (77)]. The multiplicative factor appearing in the 2'-2 conditions is always real [Eq. (53.2)]; the one appearing in the 0-2' conditions is either real or purely imaginary according to the sign of the eigenvalue [Eq. (53.1)].
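The shape of a generic continuity joint between a constant-level zone and a ramp can be sketched as follows (a hedged reconstruction; α, β are our hypothetical names for the ramp's Airy coefficients, and η(ξ) denotes the ramp's transformed variable):

```latex
\phi_{\mathrm{zone}}(\xi_{j})=\alpha\,\mathrm{Ai}\!\left(\eta_{j}\right)+\beta\,\mathrm{Bi}\!\left(\eta_{j}\right),
\qquad
\phi_{\mathrm{zone}}'(\xi_{j})=\frac{d\eta}{d\xi}\,
\Bigl[\alpha\,\mathrm{Ai}'\!\left(\eta_{j}\right)+\beta\,\mathrm{Bi}'\!\left(\eta_{j}\right)\Bigr]
```

where ξ_j is the junction point, η_j = η(ξ_j), and the chain-rule factor dη/dξ is the constant of the linear transformations [Eqs. (42) and (46)]. Each junction thus contributes one equation in the Airy functions and one in their first derivatives, which is why the notation Ai′, Bi′ permeates Eqs. (59)-(77).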
As anticipated in the beginning of this section, we have obtained eight equations [Eq. (61), Eq. (63), Eq. (67), Eq. (69), Eq. (71), Eq. (73), Eq. (75), Eq. (77)] to determine the eight coefficients and the eigenvalue; we still have Eq. (57) in reserve but, right now, its exploitation is not yet required. Equations (67) and (75) are those of utmost importance and deserve particular attention because they generate the eigenvalues. We take up their study in the next section.
We wish to conclude with a reassurance to the reader concerned by the listed equations' seeming mathematical cumbersomeness, perhaps particularly perceived from the presence of the Airy functions and their first derivatives. We did the coding in a programming language within which the Airy functions and their derivatives are built-in intrinsic functions, and the calculations went smoothly and flawlessly.

Eigenvalues
Let us rewrite Eqs. (67) and (75) in a slightly rearranged but more convenient form and let us look at the result as a homogeneous algebraic system for the two central-zone coefficients. The vanishing of its determinant leads to the transcendental equation [Eq. (79)] that generates the eigenvalues. Basically, all the eigenvalue-generating equations encountered in the literature we consulted, textbooks [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16] as well as specialized papers [31][32][33][34][35][36][37][38][39][40][41][42][43][44], are particular cases embedded in Eq. (79), all with vanishing ramp width obviously, most of them with a symmetrical potential (equal side levels) and just a few [3,4,7] with an unsymmetrical potential. Very ingenious analytical as well as graphical ways have been proposed and exploited to extract the roots of those transcendental equations; however, the exploitability of these options, although sometimes still rather elaborate mathematically, is possible only for relatively simplified situations, such as, for example, the one involving symmetrical potentials. The mathematical transcendence of Eq. (79) with respect to the eigenvalue is extreme in our case with nonvanishing ramp width because it concatenates the complexity of Eqs. (52) and (53), Eqs. (62) and (72), Eqs. (68) and (76). Therefore, we had no other option than to follow a numerical approach based on the Newton-Raphson method, a fruitful idea proposed by Memory [45] already in 1977. Of course, Barsan's warning [43, bottom of page 3023], ... the eigenvalue equations ... are transcendental equations, whose analytical solutions are difficult to obtain. Of course, they can be calculated numerically, with high precision, but their dependence on the physical parameters of the problem is totally lost. did not escape our attention, but we believe that the fear of the loss mentioned in his last sentence is unfounded if one works with nondimensional variables.
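As a minimal sketch of the Newton-Raphson strategy, consider the simplest particular case embedded in Eq. (79): the symmetrical square well of half-width 1 with vanishing ramp width, whose even-parity eigenvalue condition is the textbook relation √ε tan√ε = √(u₀ − ε). Here ε denotes the nondimensional eigenvalue, u₀ = 50 is an arbitrary illustrative choice for the common side level, and the function names are ours:

```python
import numpy as np

u0 = 50.0  # common nondimensional side level (arbitrary illustrative choice)

def f_even(e):
    # even-parity condition for the symmetric square well of half-width 1:
    # sqrt(e) tan(sqrt(e)) - sqrt(u0 - e) = 0, with 0 < e < u0
    k = np.sqrt(e)
    return k * np.tan(k) - np.sqrt(u0 - e)

def newton(f, x, tol=1e-12, itmax=60):
    # plain Newton-Raphson with a centered-difference derivative
    for _ in range(itmax):
        h = 1e-7 * max(1.0, abs(x))
        step = f(x) / ((f(x + h) - f(x - h)) / (2.0 * h))
        x -= step
        if abs(step) < tol:
            break
    return x

# initial guess read off a plot of f_even, as in the text's strategy
e0 = newton(f_even, 1.8)
print(e0)  # ground state, just below the infinite-well value (pi/2)^2
```

Convergence from a visually extracted initial guess takes only a handful of iterations, mirroring the behavior reported in the text for the full trapezoidal determinant.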
The first question we may wish to settle regarding Eq. (79) concerns whether or not it can produce negative eigenvalues.Usually, approaches in the literature [4,5,11,14,16,31,39,43,44] deduce the answer a posteriori within the search of the eigenvalues with graphical methods; but we feel more comfortable with an approach helped by analytical support.The path to follow consists in assuming hypothetically < 0, applying the switch of Eq. ( 39) and working out the consequences on the function ( ).The expression found at the end of the mathematical manipulations turns out to be a pure imaginary non-linear combination of hyperbolic functions The quantity in square brackets is real and, therefore, represents Im [ ( )] because the factors 1 ′ , 2 ′ are pure imag-  68) and ( 76)].The hyperbolic functions are always positive; therefore, the responsibility for the sign of Im [ ( )] falls on their coefficients, which, let us not forget, also depend on .Given the mathematical cumbersomeness of the coefficients, what we need to do is to draw their graphs versus to understand their behavior.Figure 3 provides two examples: a with 1 = 1, 2 = 0.5, = 1 in Fig. 3a and a virtual10 with 1 = 1, 2 = 0.5, = 10 −9 in Fig. 3b.They indicate that the coefficients of the hyperbolic functions are monotonic and positive and the function Im [ ( )] never vanishes on the left of = 0. We have tested several combinations of the characteristic numbers 1 , 2 , and found out that the curves expectedly shift a bit but their monotonicity is never compromised and the general picture remains similar to those shown in Fig. 3. So, we can rest assured that negative eigenvalues do not exist and conclude that the eigenvalues are also bounded from below; then, Eq. ( 32) upgrades to the final form or even better Thus, all eigenvalues reside within the potential well; accordingly, we can forget Eq. ( 40) and retain exclusively Eq. 
(38) as the eigenfunction's component in the central zone. We return now to the original transcendental equation [Eq. (79)] and concentrate on the determination of its roots. The strategy consists in plotting the function ( ) versus / 2, detecting visually the intersections with the horizontal axis in order to extract initial-guess values for / 2, and then launching the numerical algorithm based on the Newton-Raphson method. The latter involves the derivative / , whose determination requires careful attention to mathematical details; in general, however, it works very well, with convergence residuals of the order of 10−10 achieved in just a few iterations. We have discovered that the coefficients of the trigonometric functions in Eq. (79) and, by reflection, the function ( ) may present vertical asymptotes for some specific triplets of 1, 2, , as shown in the example of Fig. 4; fortunately, such occurrences do not hamper the convergence of the method's iteration procedure if the initial value of / 2 is appropriately chosen. Nevertheless, we have probed Eq. (79) from different angles in order to obtain alternative forms freed from unaesthetic infinities inside the interval [0,1]. An effective cure, we found out, consists in introducing the reciprocal factors whose substitution in Eq. (79) leads to another transcendental equation, on the basis of which the graphs of Fig. 4 evolve into those of Fig.
5, which give evidence of how the curves acquire a beneficent monotonicity and are better behaved; true, there is still a vertical asymptote at = 0 but it is innocuous because its position is frozen with respect to the values of the triplet 1, 2, . Further improvement is possible and can be achieved by following the guidelines of the smart idea proposed by Sprung and coauthors [35] in 1992. Let us write for brevity and define the normalization factor. Then the ratios / and / are bound within the interval [−1, +1] and permit the introduction of the angle defined by cos = − (85.1). The minus-sign choice in Eqs. (85) counteracts the negativity of the ratio /, which tends to −1 when approaches zero, and compels the convenient initial condition ( → 0) = 0.11 In the sequel, we will imply the dependence of on , 1, 2, and will make it explicit only if and as required by the context. The division of Eq. (83) by the normalization factor produces an ulterior version of the transcendental equation. We can obviously go one step further and pull out from Eq. (86) the solution in terms of angles, or after a slightly more convenient rearrangement. Typical graphs of the functions *( ) and ( ) are shown in Fig. 6; we believe that they are definitely more visually representative and elegant than those of the functions ( ) and •( ) illustrated in Figs. 4b and 5b, respectively; nevertheless, in our experience, the Newton-Raphson method works well with each one of the mentioned functions. We believe it appropriate to remark on two important aspects. First, we have to keep in mind that Eqs. (86) and (87.2) are still transcendental equations, although they look apparently simpler than Eqs. (79) and (83); the complexity is hidden behind the angle and involves all the formulae, encountered previously, which we need to navigate through to obtain it. Second, Eq.
(87.1) tells us that the angle can be interpreted as a quantitative measure, in terms of the well's finiteness, asymmetry and trapezoidal shape, of the conceptual difference between the eigenvalue spectrum of our (Fig. 2) and that of the infinite , for which we are led to anticipate that → 0 from visual inspection of Eq. (87.1).
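The recovery of the angle from the two normalized ratios of Eqs. (85) can be made robust against quadrant ambiguities with atan2. The snippet below is only a schematic illustration: the arguments a and b are placeholders for the two quantities entering Eqs. (84) and (85), whose actual expressions are given in the text.

```python
import math

def angle_from_ratios(a, b):
    """Recover the angle defined by cos(theta) = -a/N and sin(theta) = b/N,
    where N = sqrt(a**2 + b**2) plays the role of the normalization factor."""
    N = math.hypot(a, b)                 # normalization factor
    return math.atan2(b / N, -a / N)     # atan2 resolves the quadrant

# The minus-sign choice compels theta -> 0 when the first ratio tends to -1:
theta0 = angle_from_ratios(-1.0, 0.0)    # a/N -> -1, b/N -> 0, so theta -> 0
```

This reproduces the convenient initial condition discussed above: the angle starts from zero and grows monotonically as the ratio moves away from −1.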
We selected two test cases to validate both the Newton-Raphson algorithm that finds the roots of the described transcendental equations and the finite-difference numerical method that solves Eqs. (27) and produces the eigenvalues collaterally. Reed [34] considered an electron ( = 9.1093837015 ⋅ 10−31 kg) in a finite symmetrical of depth 0 = 1 = 2 = 100 eV and semi-width = 1 Å; the four eigenvalues he found by utilizing a bisection method are reported in [34].

FIG. 7: Eigenvalue-detection graphs for the selected test cases: (a) Reed [34]; (b) de Alcantara and Griffiths [39].

The dashed lines emphasize graphically how the zeros of *( ) correspond systematically to integer values of ( ) in compliance with Eq. (87.2). Our results are tabulated in the upper section of Table I; columns 4 and 5 from the left contain those obtained, respectively, with the Newton-Raphson algorithm via Eq. (86) and with the finite-difference numerical method. Columns 6 and 7 contain the data generated from those of columns 3 and 4 post-processed to match Reed's format. De Alcantara and Griffiths [39] also considered a generic particle in a finite symmetrical and specified directly the characteristic numbers 15 ( 0 in their notation); the ten eigenvalues they found are tabulated in Table I at page 44 (bottom of left column) of their article. The eigenvalue-detection graph shown in Fig. 7b confirms the existence of ten eigenvalues; the values we found are listed in the lower section of Table I, again in columns 4 and 5, while the data of columns 3 and 6 correspond to de Alcantara and Griffiths' format. For both test cases, the Newton-Raphson algorithm and the finite-difference numerical method are in full agreement, and our eigenvalues match all the significant digits of the eigenvalues reported in the original articles.
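Reed's test case can be cross-checked quickly with the standard well-strength parameter of the symmetric finite square well. The sketch below, a back-of-the-envelope check of ours rather than Reed's procedure, computes z0 = sqrt(2 m V0 a²)/ħ from the data quoted above and applies the textbook counting rule N = 1 + floor(2 z0/π) for the number of bound states.

```python
import math

# Electron in a finite symmetric well: data quoted from Reed's test case.
m    = 9.1093837015e-31      # electron mass, kg
eV   = 1.602176634e-19       # joules per electronvolt
V0   = 100.0 * eV            # well depth, J
a    = 1.0e-10               # semi-width, m (1 angstrom)
hbar = 1.054571817e-34       # reduced Planck constant, J s

# Well-strength parameter z0 = sqrt(2 m V0 a^2) / hbar.
z0 = math.sqrt(2.0 * m * V0) * a / hbar

# Textbook counting rule for the bound states of a symmetric finite
# square well: N = 1 + floor(2 z0 / pi).
n_states = 1 + math.floor(2.0 * z0 / math.pi)
```

Here z0 ≈ 5.12, so the rule predicts four bound states, consistent with the four eigenvalues found by Reed.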

Existence of eigenvalues
It is well evidenced in the literature [3,7] that eigenvalues may not exist for unsymmetrical s with a sufficiently deep gap ( 1 − 2 ). The transcendental equations involving the angle [Eqs. (86) and (87.2)] suggest that such an occurrence takes place if the inequalities collected in Eqs. (88) hold

TABLE I: Test cases for the validation of the algorithm based on the Newton-Raphson method [Eq. (86)] and of the finite-difference numerical method. The main header contains our notation; the sub-headers contain (in parentheses) the original notation adopted in the indicated references. All our calculations were carried out with = 10−9.
when / 2 ranges in the interval [0,1]; if these conditions are fulfilled then, again, eigenvalues do not exist. The inequalities indicated in Eqs. (88) are exemplified in Fig. 8 for the triplet 1 = 1, 2 = 0.15, = 1; an increase of either the well's gap ( 1 − 2 ), by lowering 2 from 0.2 to 0.15 (Fig. 8a), or the well's steepness, by reducing from 1.5 to 1.0 (Fig. 8b), expels the intersection points outside the interval [0,1] and makes the eigenvalue disappear. Which physical interpretation should we attach to the absence of eigenvalues? A simple and straightforward one: that, notwithstanding both the receptive mathematical structure of the Schrödinger equation [Eq. (3)] towards variable separation [Eq. (16)] and the benevolent imprimatur of the boundary conditions [Eq. (19)], separated-variable solutions are still not allowed by the potential. The condition for the absence of eigenvalues can be formulated by noticing from Figs. 6 and 7 that the function ( ) is monotonically increasing with respect to the ratio / 2 in [0,1]; then the inequality indicated in Eq. (88.2) is verified a fortiori. Equation (90) is the condition whose fulfillment entails the absence of eigenvalues. Unfortunately, its exploitation is not feasible in any analytical fashion in the general case. We deduce right away from Eq. (91) that the angle [ ( , , )] = we are looking for is not going to depend on and separately but on the product appearing on the right-hand side of Eq.
(91), a product that formally defines the new characteristic number; its usefulness will become apparent in a few lines. Further coincidence takes place for the factors 1′, 2′ [Eqs. (62) and (72)], for the factors 1′, − 2′ [Eqs. (68) and (76)] and, consequently, for their reciprocals [Eq. (82)]. With these simplifications, the algorithm based on Eqs. (84) and (85) becomes somewhat lighter computationally and furnishes the angle [ ( , , )] = required in the eigenvalue-absence condition [Eq. (90)] adapted to the present case. The nice feature of the angle [ ( , , )] = being dependent only on the lately defined characteristic number [Eq. (92)] suggests the clever move to extract √ from Eq. (92), substitute it into Eq. (96) and rearrange the condition into the separated form. The first term on the left-hand side of Eq. (98) depends only on the potential's steepness and is unconditionally positive. The responsibility for positivity or negativity falls on the second term; this term, however, turns out to be a universal function of the characteristic number whose graph, illustrated in Fig. 9a, reveals itself to be also positive and monotonic. These valuable features of the function ( ) ensure the falseness of the inequality in Eq. (98) and sanction the conclusion that eigenvalues certainly exist for symmetrical s. More graphical evidence supporting this conclusion is illustrated in Fig. 9b. The rightmost curve ( = 10−9) corresponds essentially to a symmetrical and, therefore, always has at least one intersection with the level ( ) = 1, no matter how shallow the well's depth is. If the well's steepness decreases then the potential becomes a symmetrical ; the curve shifts leftward, so does the intersection, and, again, we can conclude a fortiori that a symmetrical also possesses at least one eigenvalue.

FIG. 9: Graphical evidence that at least one eigenvalue always exists for symmetrical s.

Eigenfunction's coefficients
The next step, after the calculation of the eigenvalues, consists in the determination of the eigenfunction's coefficients. On account of the determinant's vanishing [Eq. (79)], the algebraic system composed of Eqs. (67)1 and (75)1 coalesces into one single equation connecting the coefficients 0, 0. Accordingly, an instinctive manner to proceed could comprise the following sequence of operations: (a) decide which of the two coefficients should be assumed independent and solve either Eq. (67)1 or Eq. (75)1 for the dependent one; (b) determine the other coefficients in terms of the independent one from the group of equations listed in the beginning of the paragraph following Eq. (77), after setting aside Eqs. (67) and (75), of course; (c) obtain the independent coefficient from the exploitation of the eigenfunction's normalization condition [Eq. (57)]. And, indeed, this sequence would work smoothly and swiftly for unsymmetrical wells; yet, we found out that failure is lurking behind operation (a) if the potential well is symmetrical. This is the particular case in which the eigenfunction's parity, even or odd, must be explicitly contemplated; it is thoroughly discussed in the literature for symmetrical s but it turns up also for symmetrical s. Let us see the details. The well symmetry implies the following simplifications. The overlined and the circumflexed values [Eqs. (52) and (53)] come, respectively, to coincide ̄ = ̄ (100.1), and so do the factors 1′ and 2′ [Eqs. (62) and (72)]. The factors 1′ and 2′ [Eqs. (68) and (76)] become mathematically opposite. The algebraic system in Eq. (78) simplifies to the form. Equations (104.1) and (105.1) tell us how unwise the operation (a) mentioned in the beginning of this section would be without knowing beforehand which parity situation we are dealing with.14 The coefficient-vanishing possibility is the reason, mentioned just below Eq.
(68), behind our decision to keep Eq. (67) in that form instead of solving it for one of the two coefficients. This turn of events is particularly critical when a numerical method, such as the Newton-Raphson method we used, is adopted to calculate the eigenvalues, because the necessity of parity distinction is basically invisible to the numerical algorithm that operates on the transcendental equation, be it Eq. (79), Eq. (86) or Eq. (87.2). The three characteristic numbers 1, 2, constitute all that the numerical algorithm needs to know to grind out the eigenvalue, and the circumstance of a symmetrical well is handled as mechanically as that of an unsymmetrical well.

13 Equation (103.2) is obviously the simplified form to which Eq. (79) reduces, with the help of the trigonometric formulae, if the identity indicated in Eq. (102) is enforced.

14 This is precisely the trap that one of us (DG) walked head-on into. An unexpected and mystifying sign change of the eigenfunction at the junction point 0-2' for the second excited eigenstate shown in Fig. 11c was the unequivocal omen that something had gone wrong with the calculation of the eigenfunction's coefficients; it triggered the debugging investigation that led to the understanding of the details explained in the text. A good lesson learned from a mistake.
There is no automatic mechanism built into the algorithm that, in the former circumstance, raises a parity-distinction flag to be remembered and taken into account at the moment of calculating the eigenfunction coefficients. The if-then-else situation created by the necessity of parity distinction for symmetrical wells must be programmed into the algorithm. It is doable and is not a serious preoccupation, of course, but it is a perhaps rather tedious inconvenience. Luckily, there is a simple stratagem to circumvent it.15 Let us introduce two new coefficients defined as in Eq. (106). These coefficients never vanish, even if the well is symmetric; in that case, they are either opposite ( 0 = − 0 ) or equal ( 0 = 0 ) according to whether the parity is even ( 0 = 0) or odd ( 0 = 0). Therefore, one of them can always be expressed in terms of the other one without fear of disrupting the coefficient-calculation procedure. Now, we can invert Eq. (106) and substitute Eq. (107) into the algebraic system in Eq. (78) to derive an analogous system in terms of the new coefficients. It is a straightforward consequence of matrix algebra, and an easy exercise to verify, that the determinant of this new algebraic system is proportional16 to that of the old one [Eq. (78)] and, therefore, they share the same transcendental equation [Eq. (79)]. If we select the coefficient 0 as independent then Eq. (108) gives the dependent one. In the case of symmetrical wells, the simplifications in Eq. (102), Eq. (104.2) and Eq. (105.2) apply and the fractions in Eq. (109) reduce to −1 for even parity or +1 for odd parity. The coefficients 0, 0 follow from Eq.
(107), harmlessly in the case of symmetrical wells, and then the formulae discussed in Sec. II C 4 become operative to determine the remaining required coefficients; we summarize them here for convenience. Two remarks are in order with a view to carrying out calculations with these equations. First, the exponentials in Eqs. (63)1 and (73)1 call for attention; they are latent numerical troublemakers because they can definitely overflow calculations when the characteristic numbers 1, 2 are sufficiently great [Eq. (33)]. A good cure to make the exponentials harmless is to merge them with the exponentials of the corresponding eigenfunction's components [Eqs. (36) and (37)]; as preparatory work, we define the auxiliary coefficients and rewrite Eqs. (63)1 and (73)1 accordingly. Second, the double expressions in most of the equations between Eq. (109) and Eq. (111) are obviously analytically equivalent; yet, numerical operations are always burdened with round-off errors and, expectedly, the numerical outputs from corresponding double expressions differ slightly. In order to contain somewhat the impact of round-off errors and to make both expressions count, we calculated the corresponding coefficient as the arithmetic average of the numerical outputs from the double expressions; for example, the coefficient ̃1 defined in Eq. (110) was actually calculated as the arithmetic average of its two expressions, and likewise for the other concerned coefficients.

15 This is a bright example of how knowledge in one department of science, tensor algebra in this case, can help to inspire ideas in another one. A second-order tensor can always be separated into a symmetric part = ( + )/2 and an antisymmetric part = ( − )/2; then, addition of the parts returns the tensor = + while subtraction returns the tensor's transpose = − . Alright, it is not exactly the same situation we are dealing with, because the coefficients 0, 0 are independent, but it is the spark that inspired Eq. (106).

16 The determinant of the product of matrices is the product of the determinants of the matrices.
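Both numerical remarks can be demonstrated in a couple of lines. The sketch below uses illustrative values only: it shows the overflow cure, i.e. merging exponentials as exp(a − b) instead of evaluating exp(a) and exp(−b) separately, and the averaging of two analytically equivalent expressions so that round-off is spread between them.

```python
import math

# First remark: exp(800) alone overflows a double-precision float, but
# merging the exponents, as done with the eigenfunction's components,
# is harmless.
a, b = 800.0, 799.0                   # illustrative exponents
merged = math.exp(a - b)              # fine: exp(1)
# naive = math.exp(a) * math.exp(-b)  # would raise OverflowError

# Second remark: two analytically equivalent expressions for the same
# quantity; their numerical outputs differ by round-off, so we take the
# arithmetic average, mirroring the treatment of the double expressions.
x = 0.3
expr1 = (1.0 - math.cos(x)) / x**2
expr2 = 2.0 * math.sin(0.5 * x)**2 / x**2
coeff = 0.5 * (expr1 + expr2)         # arithmetic average of the two outputs
```

The pair expr1/expr2 is of course a generic placeholder; in the actual computation the averaged pairs are the double expressions between Eq. (109) and Eq. (111).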
The linear dependence on 0 originated in Eq. (109) propagates to all the other coefficients; the completion of the task of this section, therefore, requires the determination of this last coefficient. In order to achieve that, we must assemble the global eigenfunction from the zonal components [respectively: Eq. (36) with Eq. (63)2; Eq. (44) with Eq. (61)1; Eq. (38); Eq. (48) with Eq. (71)1; Eq. (37) with Eq. (73)] and pass its square through the integral of the eigenfunction's normalization condition [Eq. (57)]. The integral splits into five contributions, one for each zone. The contributions of the zones with constant potential can be easily obtained analytically; instead, the contributions of the zones with linear potential are refractory to analytical handling and require recourse to numerical integration, a minor formality with modern programming languages. The integration-operation algebra calls for moderate skills and particular attention to the differentials' transformations [Eqs. (49)] in the zones = 1′, 2′, but it is rather straightforward; so, we skip the details and jump directly to the final result, in which are the integrals that require numerical evaluation. Equation (113) balances the number of equations with the number of coefficients and, in so doing, fixes the coefficient 0.
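Computationally, fixing the last coefficient by normalization amounts to integrating the squared eigenfunction zone by zone and rescaling. The sketch below does this for a made-up piecewise function (decaying exponentials outside, a cosine inside; not the actual eigenfunction of Eq. (112)), treating every piece with a plain composite Simpson rule to stand in for the numerical quadrature of the linear-potential zones.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

# Made-up piecewise "eigenfunction" mimicking the zonal structure:
# exponential tails outside [-1, 1], oscillatory component inside.
k, kap = 1.0, 2.0
def psi(x):
    if x < -1.0:
        return math.cos(k) * math.exp(kap * (x + 1.0))
    if x > 1.0:
        return math.cos(k) * math.exp(-kap * (x - 1.0))
    return math.cos(k * x)

# Zone-by-zone contributions to the normalization integral of |psi|^2
# (the tails are truncated where they are numerically negligible).
total = (simpson(lambda x: psi(x)**2, -6.0, -1.0)
         + simpson(lambda x: psi(x)**2, -1.0, 1.0)
         + simpson(lambda x: psi(x)**2, 1.0, 6.0))

A0 = 1.0 / math.sqrt(total)        # overall coefficient fixed by normalization
psi_n = lambda x: A0 * psi(x)      # normalized eigenfunction

check = (simpson(lambda x: psi_n(x)**2, -6.0, -1.0)
         + simpson(lambda x: psi_n(x)**2, -1.0, 1.0)
         + simpson(lambda x: psi_n(x)**2, 1.0, 6.0))
```

The final check integral returns unity, which is exactly the role Eq. (113) plays in the text.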

Eigenfunctions
With the coefficients in hand, the analytical eigenfunctions can be calculated straightforwardly from Eq. (112); the numerical eigenfunctions are provided by the finite-difference method briefly described at the end of Sec. II C 1. We show two validation examples. The first one, in Fig. 10, illustrates the eigenfunction of the single eigenstate belonging to the unsymmetrical well 1 = 1, 2 = 0.5, = 1, whose eigenvalue-detection graph is displayed in Fig. 6. Numerical results (solid circles) superpose on analytical results (lines) very satisfactorily. We adopted a thinner line for the analytical eigenfunction's curves in the zones with linear potential in order to emphasize the smooth transition among the zones with constant potential and to appreciate graphically the continuity of the eigenfunction and of its first derivative at the junction points; we have systematically used the data-representation style adopted in Fig. 10 in all forthcoming figures related to eigenfunctions. The analytical and numerical approaches concur also about the eigenvalue: they both give = 0.31447. The second example is relative to the symmetrical well 1 = 2 = 10, = 0.5 and is illustrated in Fig. 11. The eigenvalue-detection graph (Fig. 11a) reveals the existence of three eigenstates, whose eigenfunctions are shown in Figs. 11b-d.

FIG. 10: Eigenfunction of the single eigenstate belonging to the unsymmetrical well 1 = 1, 2 = 0.5, = 1; the eigenvalue-detection graph is displayed in Fig. 6.

The expected continuity of the eigenfunction's second derivative at the junction points, an aspect which will become fully relevant in the upcoming Sec. III D, is hardly verifiable visually from the graphs, but this graphical limitation is of little concern because it can be surmounted analytically by repeatedly differentiating Eq. (112) to obtain the first and second derivatives and by evaluating Eq. (116) at the junction points. In Eqs.
(115) and (116), for the zones 1' and 2', we have used the derivative transformations indicated in Eqs. (50) and replaced the terms / ( = 1, 2) by inverting Eqs. (52.1) and (53.2); additionally, in Eq. (116), we have expressed the second derivatives of the Airy functions according to the corresponding differential equations [Eqs. (43) and (47)]. Let us see now what happens, for example, at the junction point 1-1', whereat = −(1 + ) and = ̄ : Eq. (116) (zones 1 and 1') gives a result which, with due account of Eq. (110) (left equality), confirms the continuity of the eigenfunction's second derivative at the junction point under consideration. Similar processes apply to, and the same confirmations are reached for, the other junction points.
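As a cross-check on grid-based eigensolvers like the finite-difference scheme mentioned above, the textbook shooting method is easy to reproduce. The sketch below is not the paper's algorithm: it integrates −ψ'' = εψ across an infinite well on [0, 1] with a simple RK4 stepper and bisects on the boundary value ψ(1); the exact eigenvalues ε_n = (nπ)² provide the verification.

```python
import math

def shoot(eps, n_steps=2000):
    """Integrate -psi'' = eps * psi on [0, 1] with psi(0) = 0, psi'(0) = 1
    (RK4) and return psi(1); a zero of this function marks an eigenvalue."""
    h = 1.0 / n_steps
    y, dy = 0.0, 1.0
    f = lambda y: -eps * y                     # psi'' = -eps * psi
    for _ in range(n_steps):
        k1y, k1d = dy, f(y)
        k2y, k2d = dy + 0.5 * h * k1d, f(y + 0.5 * h * k1y)
        k3y, k3d = dy + 0.5 * h * k2d, f(y + 0.5 * h * k2y)
        k4y, k4d = dy + h * k3d,       f(y + h * k3y)
        y  += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        dy += h * (k1d + 2 * k2d + 2 * k3d + k4d) / 6.0
    return y

def eigenvalue(lo, hi, tol=1e-10):
    """Bisection on shoot(eps) = 0, given a bracketing interval [lo, hi]."""
    flo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * shoot(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, shoot(mid)
    return 0.5 * (lo + hi)

# Ground state of the infinite well on [0, 1]: exact value is pi**2.
eps1 = eigenvalue(8.0, 12.0)
```

Bisection on the boundary value is essentially the approach Reed [34] used for his eigenvalues; a finite-difference discretization reaches the same numbers through a matrix eigenproblem instead.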

D. Wavefunction's general solution
The determination of the eigenfunctions completes the study of the eigenvalue problem and we can concentrate again on the time-dependent problem [Eqs. (3) and (4)]. The standard paradigm requires assembling a specific solution for each eigenstate according to the wavefunction's variable separation [Eq. (16)], based on the integral [Eq. (18)] of the temporal problem and the eigenfunction ( ), and then building up the wavefunction's general solution as a linear combination of the eigenstates' contributions. In Eq. (119), represents the total number of eigenstates permitted by the potential. We consider it appropriate to recall here the discussion centered around Eqs. (88) and the conclusion drawn from it: the absence ( = 0) of eigenstates is a possibility (Fig. 8) and, correspondingly, the quantum-mechanical problem does not entail separated-variable solutions; thus, the significance of Eqs. (118) and (119) fades away. The existence of eigenstates ( > 0) grants the applicability of Eqs. (118) and (119), and the determination of the coefficients , which have absorbed the constants Φ (0), constitutes our next task. For that purpose, we have at our disposal the initial-wavefunction condition [Eq. (4)] and the moment has come to exploit it. In principle, the initial wavefunction ( ) should be looked at as arbitrary to some extent although, in spite of its presumed arbitrariness, it cannot escape two important constraints attached to the initial time ( = 0): it has to be consistent with both the normalization condition [Eq. (9)] and the boundary conditions [Eqs. (5)] which, more specifically for our problem [Eq. (15)], reduce to Eq. (122). The substitution of the general solution [Eq. (119)] into the initial condition [Eq. (4)] gives Eq. (123). It is then seemingly rather straightforward from a mathematical point of view to take advantage of the eigenfunctions' orthonormality [Eqs. (23) and (56)] to invert Eq. (123) and obtain the coefficients. And that is fine, of course. However, we wish to look at Eq.
(123) from a slightly different angle with respect to the standard one of the literature and point out an aspect that, we believe, is hardly emphasized in quantum-mechanics textbooks, at least in those we have consulted.17 If the number of eigenstates is finite then Eq. (123) must be read from right to left: the initial wavefunction cannot be arbitrary but must conform to the mathematical structure of a linear combination of eigenfunctions, in compliance with Eq. (120), as a necessary condition for the existence of separated-variable solutions [Eq. (119)]. These considerations are brought forth with dramatic evidence by the potential of Fig. 10, which produces only = 1 eigenstate (Fig. 6). In that case, 1 = 1 = 1; if the particle initially occupies the unique eigenstate shown in Fig. 10 [ ( ) = 1 ( )] then its wavefunction is simply that of the eigenstate, and the particle will continue to occupy that unique eigenstate forever. Otherwise [ ( ) ≠ 1 ( )], there are no other separated-variable solutions and the differential-equation problem [Eq. (3), Eq. (4), Eq. (15)] requires numerical integration. In general, separated-variable solutions to the Schrödinger equation with finite-well potentials do not exist for arbitrary initial wavefunctions; they exist only for properly structured initial wavefunctions [Eq. (125)]. We believe it is even more instructive didactically to press the argument into graphical evidence by considering the triangular-shaped function of Eq. (130).

17 For example, Griffiths [14] dealt with the method that he colorfully called "Fourier's trick" to obtain the coefficients in Sec. 2.2, at page 30 of his textbook, dedicated to the infinite , a potential with an infinite number of eigenstates; but there is no mention of the "Fourier's trick" in Sec. 2.6 at page 78, where the finite is considered, a potential that gives rise to a finite number of eigenstates (Fig.
7). A similar situation can also be found in Bransden and Joachain's textbook [11]. This is a perfectly legitimate initial wavefunction because it complies with both the normalization [Eq. (120)] and the boundary conditions [Eq. (122)]. It generates the coefficients from Eq. (124) with due account of the adopted variable scaling [Eqs. (26)] and of the eigenfunction's analytical expression [Eq. (112)]. We have carried out calculations of the initial condition [Eq. (123)] for the of Fig. 11, which includes = 3 eigenstates, and for the considered by de Alcantara and Griffiths [39], which includes = 10 eigenstates (Fig. 7b). Figure 12a refers to the former potential and illustrates how poorly the left-hand side of Eq. (123) approximates the triangular-shaped initial wavefunction; in particular, the sum of the quantum-state probabilities = 2, tabulated in the figure, differs appreciably from unity. The situation corresponding to the latter potential is shown in Fig. 12b and reveals a noticeable improvement in accuracy due to the existence of more eigenstates, but the match is not rigorously exact. The inversion operation from Eq. (123) to Eq. (124) to obtain the coefficients when the initial wavefunction ( ) is arbitrary acquires physical significance and works exactly only if the number of eigenstates is infinite ( → ∞),18 and that happens only for infinite-well potentials. Then the separated-variable wavefunction is truly a general solution built as a series expansion based on infinitely many eigenfunctions that constitute a complete set in the sense explained by Griffiths [14].

FIG. 12: Numerical test of accuracy of Eq. (123) with a triangular-shaped initial wavefunction for two potentials with a finite number of eigenstates.
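The incomplete reconstruction seen in Fig. 12 can be mimicked with a fully analytic stand-in: projecting the same triangular shape onto the first few sine eigenfunctions of an infinite well on [0, 1], our illustrative substitute for the finite-well eigenfunctions of Eq. (112), the "Fourier-trick" coefficients of Eq. (124) give a probability sum noticeably below unity when only a handful of eigenstates is retained.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature on [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

# Normalized triangular initial wavefunction on [0, 1].
tri = lambda x: math.sqrt(3.0) * (2.0 * x if x < 0.5 else 2.0 * (1.0 - x))

# Sine eigenfunctions of the infinite well (illustrative substitute for
# the finite-well eigenfunctions of the text).
phi = lambda n, x: math.sqrt(2.0) * math.sin(n * math.pi * x)

# "Fourier-trick" overlap coefficients c_n and state probabilities
# |c_n|^2 for a finite number N = 3 of retained eigenstates.
N = 3
c = [simpson(lambda x, n=n: tri(x) * phi(n, x), 0.0, 1.0)
     for n in range(1, N + 1)]
prob_sum = sum(cn**2 for cn in c)   # < 1: the truncated expansion is incomplete
```

With three retained states the probability sum falls short of unity, just as the tabulated sums in Fig. 12a do; letting N grow drives it toward 1, mirroring the improvement seen with the ten-eigenstate well of Fig. 12b.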

A. Introductory remarks
With the completion of the study of the , we have acquired all the elements necessary to deal with the implications of a vanishing and we are ready to explore the circumstances under which the (Fig. 2) turns into a (Fig. 13). If → 0, geometrically the potential's ramps become vertical and the zones with linear potential shrink to points; analytically, the overlined and circumflexed values [Eqs. (52) and (53)] coincide and vanish ( ̂ = ̄ = 0; ̂ = ̄ = 0), the variables and freeze at = = 0 [Eq. (55)], the original variable gets nailed down at the fixed values = ∓1 [Eqs. (42) and (46)], and the potential's functional definitions [Eq. (27.4), second and fourth lines from the top] go into mathematical indeterminate forms of the kind 0/0, which is another way of saying that the potential turns into a multi-valued function spanning all values comprised in [0, 1] at = −1 and in [0, 2] at = +1. The task ahead of us consists mainly in finding out how the collapse of the zones = 1′, 2′ affects the eigenvalue spectrum and the eigenfunctions obtained for the (Sec. II C), particularly its repercussions on the solutions in the collapsed zones (Sec. II C 3). The main questions whose answers are of particular interest to us regard whether or not the collapsed eigenvalue spectrum checks with the one ensuing from the (Sec. III C), and what happens to the continuity property of the eigenfunction and its derivatives if the potential's junction points become jump points (Sec. III D).

FIG. 13: Nondimensional .
Before embarking on the accomplishment of the described task, it is convenient to forge briefly a few preparatory tools meant to facilitate the forthcoming mathematical operations.

B. Mathematical tools
The factors 1′ and 2′ [Eqs. (62) and (72)] can both be collected into the generic function. The ratio of the dummy variables is not affected by , as is easily verified by member-to-member division of Eqs. (52) and (53), respectively; so, with the shrinking → 0, the dummy variables are forced to vanish ( , → 0), because of what they represent, but their ratio stays finite. The function ( ) goes into the numerical constant that we have already met in Eq. (93). The left-hand side of Eq. (134) attains the numerical constant and so must the apparently indeterminate form on the right-hand side. A corroborating check of the truth of Eq. (138), perhaps more convincing and certainly more elegant from a mathematical point of view, consists in processing the limit according to de L'Hôpital's theorem, an exercise that we did for the sake of completeness19 and whose outcome we were pleased to see fall in line with Eq. (138). Other recurrent limits are similar to Eqs. (137) and (138) but the dummy variables are mixed, as in the numerator and denominator of the function ( , ), for example. The limit of the numerator of Eq. (135) is easy.

C. Eigenvalues
Our first check consists in verifying that we retrieve the same spectrum produced by the . If → 0, according to the limit indicated in Eq. (141), the factors 1′ and 2′ become constants and the transcendental equation [Eq. (79)] that produces the eigenvalues goes into a slightly simpler form. As an example to verify that Eq. (143) is indeed in line with the transcendental equations proposed in the literature, we take the considered by Reed [34]; in his case, 1 = 2 = ; 1 = 2 = = − and the simplified transcendental equation [Eq. (143)] reduces even further. By taking into account the notation conversions based on Reed's definitions and collected in Table II, it is straightforward to prove that Eq. (144) coincides exactly with Reed's Eq. (15), which we reproduce here for the reader's convenience.
Further verification can be achieved with regard to the determination of the angle needed in Eqs. (87). We start again from Eqs. (142); then, in cascade, we evaluate the reciprocal factors [Eq. (33), Eq. (82)]. The square-root terms / and 1 − / ( = 1, 2) are both contained in [0, 1] and the sum of their squares adds up to unity; therefore, they uniquely identify an angle in [0, /2] [3,4,7]; see footnote 20. The angle then becomes = arcsin, which matches exactly those proposed by Messiah [3], ter Haar [4] and Landau and Lifchitz [7];20 verification is straightforward via the notation conversions collected in Table III for the reader's convenience.
Another verification that deserves mentioning concerns the eigenvalue-absence condition [Eq. (90)], which becomes formally simpler. The angle appearing in Eq. (154) descends from Eq. (152). The substitution of Eq. (155) into the eigenvalue-absence condition [Eq. (154)] leads to the final form, in full agreement with Landau and Lifchitz's condition21 given in their Eq. (2) at page 66 of [7]. Moreover, by taking into account the angular equivalence, we obtain a form which reconfirms the unconditional existence of eigenvalues because it is never verified.

D. Eigenfunctions and derivatives
The successful verifications we have carried out in Sec. III C on the transcendental equations imply reassurance regarding the eigenvalue spectrum: we retrieve exactly the same spectrum as the . With the comfortable sensation of being on the right track, we turn to the next investigation, which involves the eigenfunctions and their derivatives.
The simplification of the factors 1′ and 2′ [Eqs. (142)] has a modest impact on the formulae for the calculation of the coefficients 0, 0, 0 [Eqs. (107), Eq. (109)] but affects the other coefficients more markedly. The most important are listed first; after them, the coefficients ̃1 and ̃2 follow from Eqs. (110) and (111). Finally, the equation meant to fix the coefficient 0 [Eq. (113)], generated by the eigenfunction's normalization condition [Eq. (57)], simplifies because the terms involving the integrals [Eqs. (114)] corresponding to the zones with linear potential vanish and do not contribute. The mathematically coherent step to deduce the eigenfunction and its first and second derivatives for the consists in passing those of the to the limit for → 0. The passage to the limit is smooth and unambiguous for the eigenfunction [Eq. (112)] and the first derivative [Eq. (115)]. Their continuity is preserved through the shrunk zones at ∓1 with the endorsement of Eqs. (162) and (163). The passage to the limit for the second derivative [Eq. (116)] is still smooth in the zones 1, 0, 2 but becomes indeterminate in the zones 1' and 2' due to the presence of the ratios / ̄ and / ̄ ; different limits may be reached according to whether the variables and approach either the overlined or the circumflexed values in the limit. There is a simple way to circumvent this ambiguity. Let us begin with the left zone. From Eq. (116), we evaluate the second derivative first at the junction point 1-1', where = −(1 + ) and = ̄ , and then at the junction point 1'-0, where = −1 and = ̂ . If → 0 then the junction point 1-1' shifts rightward and goes to superpose on the junction point 1'-0 at = −1; both ̄ , ̂ vanish, so that Eq. (167.1) gives one value but Eq. (167.2) yields a different one if the potential jump is finite. Branson warns: "... In Sec. II we describe why most textbook explanations of conditions (b) are, in our view, unsatisfactory, and in the remaining sections we present arguments which are, we hope, more acceptable."
We definitely recommend that the reader become familiar with the mathematical arguments expounded by Branson in Sec. II of his paper; one of the "more acceptable arguments", proposed in his Sec. V, is indeed the idea to consider the limit of a continuous potential such as our trapezoidal-well potential. Confronted with such an unsettled situation, we take a pragmatic stance: we listen to Branson's warning and assume the eigenfunction's and its first derivative's differences formally prescribed at the jump points, without necessarily committing to Bohm's opinion of a priori continuity. Then, we proceed to the determination of the four coefficients by exploiting Eqs. (172), with the hope of encountering down the road a compelling physical reason to enforce mathematical continuity in order to save physical consistency. Let us see what happens. The first logical step consists in solving the system composed of Eqs. (172) for the four coefficients. For the sake of notational simplicity, we first conveniently predefine auxiliary coefficients and subsequently proceed to solve the system. Two of the coefficients are easily extracted in terms of the other two hidden inside the auxiliary coefficients; in turn, the latter have to be determined from the algebraic system, and here we already encounter the first surprise: Eq. (175.3) indicates that the eigenvalue spectrum is continuous [compare with Eq.
(78), with due account of Eqs. (142)] because the algebraic system is not homogeneous owing to the presence of the discontinuities on the right-hand side. We concede that the expectation of a discrete eigenvalue spectrum qualifies as a sufficiently physical motivation pushing in the direction of Eqs. (173). However, the rejoicing in the continuity camp is short-lived because the push is not strong enough: the physical necessity of a discrete eigenvalue spectrum only requires the vanishing of the global terms on the right-hand side; once they vanish, the algebraic system [Eq. (175.3)] becomes homogeneous, its determinant coincides with the one we found for the trapezoidal-well potential in the limit of vanishing ramp width and, obviously, its vanishing [Eq. (143)] generates the same discrete eigenvalue spectrum. As a side note, we wish to point out that this occurrence clearly implies that the eigenvalues are real, but we are forbidden to use this information within the perspective of this section in order to respect self-inclusiveness; we can certainly keep this expectation in mind, however, as a later check of whether or not we are on the right track. Thus, to continue, the requirement of a discrete eigenvalue spectrum does not rule out discontinuous eigenfunctions. Nevertheless, it leads at least to a first improvement by reducing the number of independent differences [Eqs. (172)]. As a numerical example, we have chosen the ground state of the symmetrical square-well potential considered by Reed [34] (state 1 in Fig. 7a; Table I). A comparison between the continuous eigenfunction (hollow squares) and the discontinuous eigenfunction (solid line) corresponding to Δψ(+1) = −Δψ(−1) = 0.5 is shown in Fig. 14a; the squared eigenfunctions are shown in Fig. 14b to illustrate the conservation of the geometrical area in compliance with the eigenfunction-normalization condition [Eq. (57)].
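The linear-algebra mechanism behind the first surprise above can be made concrete with a toy version of the matching problem, again in our own illustrative notation and with assumed parameter values. Matching an interior even solution A cos(kx) to an exterior decaying tail B e^(−κx) at x = a with prescribed jumps of ψ and ψ′ gives a 2×2 system whose matrix is generically invertible, so nonzero jumps are matched at essentially any energy; only for zero jumps does a nontrivial (A, B) force the determinant to vanish, which reproduces the usual discrete condition k tan(ka) = κ.

```python
# Toy matching problem (our notation, assumed parameters): interior
# A*cos(k*x) joined to exterior B*exp(-kappa*x) at x = a with prescribed
# jumps of psi and psi'. Nonzero jumps -> inhomogeneous system, solvable
# at almost every energy (continuous spectrum); zero jumps -> nontrivial
# (A, B) only where det = 0, i.e. k*tan(k*a) = kappa (discrete spectrum).
import numpy as np

a, u0 = 1.0, 6.0  # half-width and well strength (made-up values)

def matching_matrix(u):
    k = u / a
    kappa = np.sqrt(u0**2 - u**2) / a
    return np.array([[np.cos(k * a), -np.exp(-kappa * a)],
                     [-k * np.sin(k * a), kappa * np.exp(-kappa * a)]])

# generic, non-eigenvalue energy: arbitrary jumps are matched with ease
M = matching_matrix(2.0)
A, B = np.linalg.solve(M, [0.5, -0.5])  # jumps d_psi = 0.5, d_dpsi = -0.5

# zero jumps: a nontrivial (A, B) requires det = 0; the sign change below
# brackets the first even-parity eigenvalue
det = lambda u: np.linalg.det(matching_matrix(u))
print(A, B, det(1.0) * det(1.5))
```

The inhomogeneous solve succeeds at the arbitrarily chosen u = 2, while the homogeneous condition det = 0 singles out discrete values of u, mirroring the text's distinction between the two cases.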
Figure 14a seemingly leaves no doubt that, at least from a mathematical perspective, the discontinuous eigenfunction is as acceptable as the continuous one because they both satisfy the same differential equation and boundary conditions. In the same figure, we also see portrayed the flagrant groundlessness of Bohm's statement "d²ψ/dx² can be finite, however, only if dψ/dx is continuous" and the veracity of Branson's concern about "most textbook explanations" [compare with Eqs. (169) and (170)]. Well, there is not much to argue: the continuous eigenfunction comes accompanied by a ballast of infinitely many discontinuous eigenfunctions, each one of which possesses a status of mathematical solution as legitimate as that of the continuous eigenfunction, and we should be prepared to consider the wavefunction's general solution accordingly. An index enumerates the infinite eigenfunctions that belong to a given eigenvalue, or its nondimensional counterpart; we reserve the first place for the continuous one. In our opinion, selecting the latter and disregarding the others on the basis of unsatisfactory mathematical arguments, merely for the purpose of shortcutting the teaching effort, is not a didactically honest pass, whether it be seen as an educated guess by an optimist who sticks to the standard textbook approaches or as a sheer stroke of luck by a pessimist who decides to go through the detour of the vanishing-ramp-width limit of a trapezoidal-well potential. Yet, the probable desperation generated by Eq. (181) in the continuity camp is once again short-lived because a more attentive look at Fig. 14a reveals the second surprise: the blatant infringement of the conclusion, "So, the eigenstates are not degenerate: for a specified eigenvalue there is one and only one eigenfunction", that we drew when elaborating the proof of the eigenfunction's uniqueness involving the Wronskian in the middle of Sec. II B, from Eq. (21) until just before Eq. (23). Indeed, in Fig.
14a we see two independent eigenfunctions corresponding to the same eigenvalue; as a matter of fact, we can produce infinitely many independent eigenfunctions for the same eigenvalue by arbitrarily varying the discontinuities Δψ(−1), Δψ(+1). Can this infinite degeneracy be reconciled with the eigenfunction-uniqueness proof? No, it cannot! A quick reexamination of the proof shows unequivocally that it breaks down with discontinuous eigenfunctions. We must remember the flag planted near Eq. (22.6), rewind the discourse to that equation, switch to the nondimensional mode and adapt the notation to the case of the potential in Fig. 13. The Wronskian's discontinuities at the jump points imply that the integration of Eq. (22.6) must now take place separately in the three zones and, consequently, the Wronskian turns out to be only piecewise constant. In zone 1, the integration yields a vanishing Wronskian, so that the two eigenfunctions are not independent there and one can be expressed in terms of the other via a constant that we are free to choose either real or complex. Expectedly by symmetry, the same situation occurs in zone 2.
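The failure mode just described can be checked numerically. For two continuous solutions of ψ″ = (U − λ)ψ the Wronskian stays constant even across a finite jump of the potential, which is the very fact the uniqueness proof leans on; a jump in one of the solutions would make it only piecewise constant. The sketch below (our own toy potential and parameters, not the paper's) verifies the continuous case with a hand-rolled RK4 integrator.

```python
# Check (toy potential, made-up parameters): the Wronskian
# W = psi1*psi2' - psi1'*psi2 of two CONTINUOUS solutions of
# psi'' = (U(x) - lam)*psi stays constant even across the potential jumps.
def U(x):
    return 0.0 if abs(x) < 1.0 else 25.0  # square-well-style discontinuity

def rk4_step(x, y, h, lam):
    # one Runge-Kutta 4 step for y = (psi, psi')
    def f(xx, yy):
        return [yy[1], (U(xx) - lam) * yy[0]]
    k1 = f(x, y)
    k2 = f(x + h / 2, [y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]])
    k3 = f(x + h / 2, [y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]])
    k4 = f(x + h, [y[0] + h * k3[0], y[1] + h * k3[1]])
    return [y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])]

lam, h, n = 3.0, 0.001, 4000       # energy, step size, number of steps
y1, y2 = [1.0, 0.0], [0.0, 1.0]    # two independent initial conditions, W = 1
ws, x = [], -2.0
for _ in range(n):
    y1 = rk4_step(x, y1, h, lam)
    y2 = rk4_step(x, y2, h, lam)
    x += h
    ws.append(y1[0] * y2[1] - y1[1] * y2[0])

print(min(ws), max(ws))  # both remain very close to the initial W = 1
```

Were one of the two solutions given a finite jump at x = ±1, W would jump there as well, which is exactly why, in the text's argument, the Wronskian of a continuous and a discontinuous eigenfunction is only piecewise constant.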
The input data of Reed's example are listed at page 504 (bottom of the left column) of his article. From Eq. (27.5), with the recommended values ℏ = 1.054571817 × 10⁻³⁴ J·s, 1 J = 6.24150907446076 × 10¹⁸ eV and 1 Å = 10⁻¹⁰ m, we obtained equal characteristic numbers ≃ 26.2468 and set the ramp-width parameter to 10⁻⁹ to simulate the potential's squareness. The eigenvalue-detection graph shown in Fig. 7a confirms the existence of four eigenvalues; the detection curve is a stretched sinusoid similar to the one provided by Reed in his Fig. 1 at page 504 of his article.
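For the unit bookkeeping behind numbers of this kind, a short script suffices. The physical constants below are the ones quoted in the text; the particle mass, well depth V0, half-width L, and the identification chi = √(2 m V0) L / ℏ are hypothetical placeholders (we do not know the inputs of Eq. (27.5)), so the printed value is not meant to reproduce 26.2468.

```python
# Dimensionless well-strength number chi = sqrt(2*m*V0)*L/hbar from
# laboratory units. The constants match those quoted in the text; the
# particle mass, V0 and L are HYPOTHETICAL placeholders, not Reed's data.
import math

hbar = 1.054571817e-34           # J*s
eV = 1.0 / 6.24150907446076e18   # J per eV (from 1 J = 6.2415...e18 eV)
angstrom = 1e-10                 # m

m = 9.1093837015e-31             # kg, electron mass (assumed particle)
V0 = 100.0 * eV                  # well depth (hypothetical)
L = 5.0 * angstrom               # well half-width (hypothetical)

chi = math.sqrt(2.0 * m * V0) * L / hbar
print(chi)  # ~25.6 for these placeholder inputs
```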

FIG. 8: Graphical evidence of the possibility that eigenvalues do not exist for some triplets of the characteristic numbers.
of attention. We must first adapt the square root by taking advantage of Eq. (136) with the limit on the right-hand side of Eq. (140.3). With the mixed-variable limits [Eqs. (139) and (140.5)] in hand, the important limit of the function follows easily.

FIG. 14: Comparison between continuous and discontinuous eigenfunctions for the ground state of the symmetrical square-well potential considered by Reed (state 1 in Fig. 7a; Table I).

TABLE III: Notation conversions relative to Messiah's, ter Haar's, and Landau and Lifchitz's transcendental equations.