
TOWARDS AN EVALUATION OF THE NORMALISATION THESIS ON IDENTITY OF PROOFS: THE CASE OF CHURCH-TURING THESIS AS TOUCHSTONE

Abstract

This article is a methodological discussion of formal approaches to the question of identity of proofs from a philosophical standpoint. First, an introduction to the question of identity of proofs itself is given, followed by a brief reconstruction of the so-called normalisation thesis, proposed by Dag Prawitz in 1971, in which some of its core mathematical and conceptual traits are presented. After that, a comparison between the normalisation thesis and the better-known Church-Turing thesis on computability is carried out in three main parts: the first dedicated to highlighting some of the analogies between them; the second, to their most remarkable differences; and the third, to the possible relations of dependence between the two. Based on these considerations, some concluding remarks concerning the potential of the normalisation thesis and similar approaches to the question of identity of proofs are made in the last section.

Keywords:
Normalisation; Identity of proofs; Church-Turing thesis; Derivation; Normal form

0. INTRODUCTION: THE QUESTION OF IDENTITY OF PROOFS [Footnote 1: The reader already familiar with the literature of general proof theory on identity of proofs may well skip straight to section 2.]

According to Prawitz, “In general proof theory, we are interested in the very notion of proof and its properties and impose no restriction on the methods that may be used in the study of this notion.” (Prawitz 1971, p. 236) - and arguably, the question of identity of proofs is at the heart of such a study, for, as e.g. Kosta Došen (2003) puts it, only by means of an adequate account of it can we hope to answer satisfactorily the driving question “what is a proof?”. Now, why would that be so? Why is it that one asks oneself about the identity of proofs (that is, if there is any reason at all to do so)? This question itself admits various interpretations, to which correspond accordingly different answers - which are in turn justified by reference to different phenomena.

Firstly, it should be observed that one can frequently prove or argue validly for the same things in a variety of significantly different ways. It is thus convenient to put the matter in the following rough yet expressive enough terms: it seems clear that the identity of a proof is not determined solely by what it establishes - e.g. some consequence relation between propositions or sets of them - but also by how it establishes it. This, however, by no means implies that every two different ways of proving something are significantly different from one another - which means that it remains rather unclear which differences between two given proofs of a certain thing are significant with respect to their identity and which are not.

Secondly, one should also notice that there is no reason to rule out without further ado the possibility of proofs of different things being analogous in such a way that their differences are not to be regarded as significant. Therefore, it also remains unclear whether two proofs must be deemed significantly different by force of the mere fact that they prove different things.

Bearing these two observations in mind, the question as to why one asks about the identity of proofs can be answered in at least two versions. A brief survey of the specialised literature (see e.g. Widebäck 2001, p. 9) shows that one does ask oneself about identity of proofs because of what the first observation points at, namely, that what a proof proves - which is something mostly taken to be unequivocal, transparent and undisputed - is evidently not enough to determine its identity. Even if tacitly, though, it is more often than not assumed that two proofs are significantly different whenever they are proofs of different things, and more precisely because of the mere fact that they are proofs of different things. The second observation points out that this move lacks justification, and therefore gives us a further reason why one could ask oneself about the identity of proofs - even though it is not a reason that has led many to actually do so -, namely, because it might well be the case that what a proof proves is not even a necessary trait for the determination of its identity. In other words: what one usually takes for granted and as plainly transparent about given proofs, namely what they are proofs of, is neither sufficient nor (necessarily) necessary to assess whether the proofs considered are significantly different or not; and these are reasons why one respectively does and could ask oneself about the identity of proofs.

Still, these answers are of little use in the task of understanding what importance identity of proofs has in the investigation of the notion of proof, since they only help us understand that the identity of a proof, whatever this means, is not something easily, clearly or even necessarily understandable in terms of things that are purportedly already clear, such as e.g. what a proof is a proof of. Yet there is still a version of the persistent question of the first paragraph - the answer to which could in fact be more useful for understanding why identity of proofs is relevant to the understanding of what a proof is - that remains unanswered, namely: why should one ask oneself about the identity of proofs?

The first points to be fixed in order to avoid getting lost in dealing with this question here are then these: firstly, since we want to understand how identity of proofs may contribute to our understanding of what a proof is - if at all -, we should not have to rely on a previously fixed conception of what a proof is in order to adequately account for the question of identity of proofs; and secondly, we shall need to formulate more clearly just what is being investigated here under the label identity of proofs. Thus: by an investigation concerning identity of proofs, we understand a study of the conditions under which the semantic or epistemic values of things which are understood to be proofs are the same viz. equivalent.

There are, of course, a great many ways of answering the question as to why one should investigate identity of proofs, which will usually vary in accordance with ideological inclinations and goals. I here merely suggest a sketch of a possible answer that, I hope, shall suffice for now. A brief explanation of it may be given by appeal to the general Quinean slogan “no entity without identity”. In a brief text on the concept of identity [Footnote 2: Sundholm, G.B. (1999) Identity: Propositional, Criterial, Absolute, in The Logica 1998 Yearbook, Filosofia Publishers, Czech Academy of Science, Prague, pp. 20-26.], Sundholm points out the following:

“Consider the two types ℤ+×ℤ+ of ordered pairs of positive integers and ℚ+ of positive rationals. Formally they have the same application criterion:

$$\dfrac{p : \mathbb{Z}^{+} \qquad q : \mathbb{Z}^{+}}{\langle p, q\rangle : \alpha}$$

⟨2,3⟩ and ⟨4,6⟩ are equal elements of the type ℚ+, but not of the type ℤ+×ℤ+. In order to individuate the types in question different criteria of identity are needed: the type ℤ+×ℤ+ is individuated by the identity criterion

$$\dfrac{p : \mathbb{Z}^{+} \quad q : \mathbb{Z}^{+} \quad r : \mathbb{Z}^{+} \quad s : \mathbb{Z}^{+} \quad p = r : \mathbb{Z}^{+} \quad q = s : \mathbb{Z}^{+}}{\langle p, q\rangle = \langle r, s\rangle : \mathbb{Z}^{+} \times \mathbb{Z}^{+}}$$

and the type ℚ+ by the criterion

$$\dfrac{p : \mathbb{Z}^{+} \quad q : \mathbb{Z}^{+} \quad r : \mathbb{Z}^{+} \quad s : \mathbb{Z}^{+} \quad p \times s = q \times r : \mathbb{Z}^{+}}{\langle p, q\rangle = \langle r, s\rangle : \mathbb{Q}^{+}}$$ ”

The same observation could be applied to the present context of discussion of proofs: we have an application criterion (say, being [represented by/expressed as/carried out by means of] a derivation), out of which a distinct, particular notion of proof (as a type) could only be made in case a specific identity criterion is associated with it, different identity criteria yielding correspondingly altogether different notions of proof. Notice that an identity criterion is necessary not only to determine the identity of the individual proofs inside the type, but also the identity of the very type itself. It is in this sense that Martin-Löf, according to Sundholm, suggests as an easy way to reconstruct the Quinean slogan “no entity without identity” the combination of the type-theoretic maxims “no entity without type” and “no type without identity”. One could understand the claims that identity of proofs is a central question for the field of general proof theory and, more generally, for the task of providing an answer to the question “what is a proof?” rather in such a spirit.
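
To illustrate the type-theoretic point in programming terms, here is a minimal sketch (in Haskell; the type and instance names are mine, not Sundholm's) of how one and the same application criterion - a pair of positive integers - yields two different types once two different identity criteria are attached to it:

```haskell
-- Minimal sketch: one application criterion (a pair of positive integers),
-- two identity criteria, hence two different types.

newtype PairPos = PairPos (Integer, Integer)  -- intended as Z+ x Z+
newtype RatPos  = RatPos  (Integer, Integer)  -- intended as Q+, read as p/q

-- Identity criterion for Z+ x Z+: componentwise equality.
instance Eq PairPos where
  PairPos (p, q) == PairPos (r, s) = p == r && q == s

-- Identity criterion for Q+: cross-multiplication.
instance Eq RatPos where
  RatPos (p, q) == RatPos (r, s) = p * s == q * r

-- PairPos (2, 3) == PairPos (4, 6)  evaluates to False
-- RatPos  (2, 3) == RatPos  (4, 6)  evaluates to True
```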

1. SOME ESSENTIAL TRAITS OF THE NORMALISATION THESIS [Footnote 3: The reader already familiar with the normalisation thesis on identity of proofs may well skip this section and start from section 2.]

The “official” formulation of the normalisation thesis (henceforth NT) is the one given to it by Prawitz in his 1971 Ideas and Results in Proof Theory (p. 257): “Two derivations represent the same proof if and only if they are equivalent” [Footnote 4: Prawitz, D. (1971) Ideas and Results in Proof Theory, in: J.E. Fenstad (ed.), Proceedings of the Second Scandinavian Logic Symposium, North-Holland, Amsterdam, pp. 235-307.]. As dull and tautological as it may sound put this way, this thesis is, in several respects, not trivial at all.

Firstly, the notion of equivalence in terms of which it is formulated is a very specific one: it concerns natural deduction derivations, and can be defined as the reflexive, transitive and symmetric closure of the relation of immediate reducibility between derivations. The reductions considered are those involved in the normalisation of derivations; thus, a derivation reduces immediately to another derivation (see Prawitz 1971, Section II.3.3) when the latter is obtained from the former by removing a maximum formula (i.e. a formula with a connective * that is the conclusion of an introduction of * and the major premiss of an elimination of *). For example, in the case of conjunction, the derivation

$$\begin{array}{c}\dfrac{\dfrac{\begin{array}{c}\Pi_1\\ A\end{array}\qquad\begin{array}{c}\Pi_2\\ B\end{array}}{A \wedge B}}{A}\\ \Pi_3\end{array}$$

immediately reduces to the derivation

$$\begin{array}{c}\Pi_1\\ A\\ \Pi_3\end{array}$$

Derivations also reduce immediately to others by immediate expansions. These are reductions that can be performed on the minimum formulas of normal derivations, thus conforming them to what Prawitz calls expanded normal form, where all the minimum formulas are atomic. For example, let the following derivation be normal and A∧B be a minimum formula in it:

$$\begin{array}{c}\Pi_1\\ A \wedge B\\ \Pi_2\end{array}$$

By immediate expansion, it reduces immediately to the derivation below [Footnote 5: One might ask oneself why the application of immediate expansions is restricted to these cases. The reason, as I hope will become clear, has nothing whatsoever to do with identity of proofs, but rather with normalisation of derivations, which is the very purpose for which these reductions were devised in the first place. It turns out that the restriction upon immediate expansions is what allows the obtention of strong normalisation in the presence of the reductions by means of which maximum formulas are removed (on this and closely related matters, see C. Jay, N. Ghani, The virtues of eta-expansion, J. Functional Programming 5 (2): 135-154, April 1995, Cambridge University Press).]:

$$\begin{array}{c}\dfrac{\dfrac{\begin{array}{c}\Pi_1\\ A \wedge B\end{array}}{A}\qquad\dfrac{\begin{array}{c}\Pi_1\\ A \wedge B\end{array}}{B}}{A \wedge B}\\ \Pi_2\end{array}$$

Let us note in passing that there is a certain analogy between the two kinds of reduction just presented, namely: both involve derivations in which one draws, from a given premiss, a conclusion which plays no role, so to speak, in the obtention of the end-formula from the top-ones - in some cases, such as that of conjunction, one even immediately returns to the said premiss after drawing the kind of conclusion in question. Thus, in the first case, the derivation from Γ to A to be reduced is characterised by the introduction and immediately subsequent elimination of a complex formula, which is removed by means of the reduction without prejudice to the derivation of A from Γ; and dually, in the second case, the reduced derivation from Γ to A is characterised by the elimination and immediately subsequent introduction of a complex formula, which comes about by the insertion of one or more simpler formulas by means of the reduction. [Footnote 6: Here one might also wonder why the second reduction, contrary to the first, inserts, instead of removing, a formula which is dispensable for the obtention of the end-formula of a derivation from its top-ones. The answer, once again, is not connected to identity of proofs, but rather to the confluence of the reduction procedure: if we took the immediate expansions in the other direction - i.e. as contractions rather than expansions -, then e.g. the uniqueness of the normal form of derivations in general would be lost in the presence of the reductions by means of which one removes maximum formulas. Again, see C. Jay, N. Ghani, The virtues of eta-expansion, J. Functional Programming 5 (2): 135-154, April 1995, Cambridge University Press.]
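
For readers who find a term-rewriting presentation helpful, the following is a minimal sketch (restricted to the conjunction fragment; the datatype and function names are mine, not Prawitz's notation) of derivations as terms, together with the detour reduction and the immediate expansion just described:

```haskell
-- Minimal sketch of the conjunction fragment: derivations as terms,
-- with one-step detour removal and immediate (eta-) expansion.

data Formula = Atom String | And Formula Formula
  deriving (Eq, Show)

data Deriv
  = Hyp Formula        -- an undischarged top-formula (open assumption)
  | AndI Deriv Deriv   -- conjunction introduction
  | AndE1 Deriv        -- conjunction elimination, left
  | AndE2 Deriv        -- conjunction elimination, right
  deriving (Eq, Show)

-- Detour reduction: an introduction immediately followed by an elimination
-- (a maximum formula) is removed.
reduce :: Deriv -> Maybe Deriv
reduce (AndE1 (AndI d _)) = Just d
reduce (AndE2 (AndI _ e)) = Just e
reduce _                  = Nothing

-- Immediate expansion: a derivation of a conjunction is replaced by its two
-- eliminations followed by a reintroduction of the same conjunction.
expand :: Deriv -> Deriv
expand d = AndI (AndE1 d) (AndE2 d)
```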

Other reductions by means of which a derivation immediately reduces to another are the so-called permutative reductions, which concern the eliminations of disjunction and of the existential quantifier. By means of them, it is possible to remove maximum segments (which are called “maximum” for a reason similar to the one for which maximum formulas are: they are sequences of repeated occurrences of one and the same formula in a row, viz. immediately below each other, such that the first one is the conclusion of an application of an introduction rule and the last one is the major premiss of an application of an elimination rule. Notice that maximum formula and maximum segment can be so defined that the former is a special, limiting case of the latter, in which only one occurrence of the given formula happens. Longer maximum segments may come about by virtue of applications of the elimination rules of disjunction and of the existential quantifier. See Prawitz 1965, p. 49, and Prawitz 1971, II.3, p. 248, 3.1.2). A schematic example of a permutative reduction is given below. There are also the reductions called immediate simplifications, which aim at removing eliminations of disjunction where no hypothesis is discharged; there are further similar immediate simplifications that concern the existential quantifier, and also so-called “redundant” applications of the classical absurdity rule.
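
Schematically (a standard rendering of the disjunction case, not a quotation from Prawitz; Σ, Π₁, Π₂ stand for arbitrary derivations and Δ for the minor derivations, if any, of the final elimination), a permutative reduction moves an elimination whose major premiss is the conclusion C of an application of ∨E up into the minor premisses of that application:

$$
\dfrac{\dfrac{\begin{array}{c}\Sigma\\ A \vee B\end{array}\qquad\begin{array}{c}[A]\\ \Pi_1\\ C\end{array}\qquad\begin{array}{c}[B]\\ \Pi_2\\ C\end{array}}{C}\qquad \Delta}{E}
\quad\text{reduces to}\quad
\dfrac{\begin{array}{c}\Sigma\\ A \vee B\end{array}\qquad\dfrac{\begin{array}{c}[A]\\ \Pi_1\\ C\end{array}\qquad \Delta}{E}\qquad\dfrac{\begin{array}{c}[B]\\ \Pi_2\\ C\end{array}\qquad \Delta}{E}}{E}
$$

In this way the occurrences of C that formed a segment ending in the major premiss of an elimination are no longer followed by that elimination, and repeated application of such permutations removes maximum segments.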

Thanks to the analogy discovered between natural deduction and typed lambda-calculus, known as the Curry-Howard correspondence [Footnote 7: The unfamiliar reader is referred to M.H. Sørensen, P. Urzyczyn, Lectures on the Curry-Howard Isomorphism, Elsevier, 2006.] - especially effective in the conjunction-implication fragment -, it is possible to verify that the equivalence relation yielded by the reduction system described here, and in terms of which NT is formulated, corresponds to βη-equivalence, and to β-equivalence in case the immediate expansions are left out of the system. NT can thus be formally regarded as the identity clause of a definition of “proof” as a type, the central idea of which is that β- and η-conversions - or, correspondingly, the conversions respectively associated with the reductions that eliminate maximum formulas and with the immediate expansions - are identity preserving.
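
In terms of the correspondence, the two kinds of reduction read as the standard conversions on terms in the conjunction-implication fragment (the projection/pairing notation below is mine): removal of a maximum formula is β-conversion, and immediate expansion is η-expansion:

$$
\pi_1\langle M, N\rangle \;\rhd_{\beta}\; M, \qquad \pi_2\langle M, N\rangle \;\rhd_{\beta}\; N, \qquad t \;\rhd_{\eta}\; \langle \pi_1 t, \pi_2 t\rangle \quad (t : A \wedge B);
$$
$$
(\lambda x.M)\,N \;\rhd_{\beta}\; M[N/x], \qquad t \;\rhd_{\eta}\; \lambda x.\,t\,x \quad (t : A \to B,\ x \text{ not free in } t).
$$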

It should also be observed that the thesis puts forward two separate claims: one to the soundness, another to the completeness of the equivalence relation defined in terms of the reductions with respect to the informal relation holding among derivations representing the same proof. The soundness or if part says that equivalence suffices for two derivations to represent the same proof, i.e. that all derivations equivalent to one another represent the same proof, or, in other words, that the formal relation of equivalence between derivations is sound with respect to the informal relation holding among derivations representing the same proof. The completeness or only if part of the thesis, in turn, says that equivalence is necessary for two derivations to represent the same proof, i.e. that there are no derivations non-equivalent to one another that represent the same proof, or, in other words, that the formal relation of equivalence between derivations is complete with respect to the informal relation holding among derivations representing the same proof.

Rather than being an arbitrarily conceived criterion, this idea is inspired to a significant extent by simple and in some senses appealing philosophical conceptions and formal results, as we shall try to make clear in what follows.

1.1. Informal idea and relevant (arguably supporting) formal results

In accordance with the initial assumptions made regarding how one manipulates the notions of proof, derivation and the relation between these in a semantical framework, NT seems to make sense as an answer to the question regarding identity of proofs viz. synonymy of derivations if understood as a formal account of a very simple and reasonable idea:

  • (α) that any two proofs the difference between which resolves into irrelevant features are indeed not significantly different;

  • (β) that any two proofs that differ with respect to any other feature are significantly different.

To this, advocates of the thesis usually add that

  • (γ) a specific proof can always be given with no irrelevant features.

In short, the idea is that any two proofs are significantly different if and only if they are different even when given without any irrelevant features.

Besides, the following three formal results seem crucial, both from the historical and the conceptual viewpoint, to the proposal now under scrutiny, namely: the normal form theorem, the (strong) normalisation theorem and the uniqueness of normal form.

The normal form theorem states that every formula A that can be derived from a set of formulas Γ can be derived from Γ normally (that is, without the occurrence of maximum formulas viz. segments in the derivation); i.e., every valid consequence relation between some Γ and some A is provable without resort to any non-normal derivation. This result can be regarded as a kind of completeness theorem that seems essential to NT, and could be stated in the following fashion: every provable result can be proved normally; or: normal derivations alone can prove all provable results. Furthermore, the normalisation theorem yields a mechanical procedure by means of which derivations can be reduced to normal ones, showing that the (syntactical) difference between them resolves into certain deductive patterns that “play no role” in the deduction of the end-formula from the undischarged top-formulas - no matter in which particular way the deduction was performed -, and that can thus be properly removed viz. inserted without consequences for the fulfillment of this task - namely, the patterns involved in the reductions described above. The soundness part of NT seems to amount to the claim that the removal viz. insertion of these patterns from viz. into a derivation is innocuous [Footnote 8: Prawitz acknowledges that this may not be the case for the “redexes” of the permutative reductions associated with the eliminations of disjunction and the existential quantifier.], in some sense, to its semantical value, and hence supposedly irrelevant to the determination of which proof is represented by the derivation itself in which it is eventually performed. The chief reasons given for such a claim are discussed in de Castro Alves 2018, section III.2.a. Uniqueness of normal form, in turn, adds to the normalisation procedure the flavour of an evaluation: one in which derivations are unambiguously judged with respect to which proof they represent, normal derivations playing the role of identity values - or, more in tune with the terms of the official formulation of the thesis, of unique canonical representatives of proofs, which are the actual identity values. The situation can be regarded as analogous to what happens with e.g. numerical expressions in general and canonical numerals as representatives of natural numbers: the very fact that “3+2”, “4+1”, “1+2+2”, etc. all ultimately reduce to “5” and to “5” alone can be regarded as suggestive of the fact that these numerical expressions have the same value, namely 5 - i.e. the “direct”, disquotational value of “5”. This particular kind of understanding of the normalisation procedure is presumably one of the main motivations behind one of the most popular reformulations of NT, used by e.g. Troelstra (1975) in his Non-extensional equality, namely: two proofs corresponding to deductions π and π’ are the same iff π and π’ reduce to the same normal form.
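
The numeral analogy invoked above can be made concrete with a small sketch (in Haskell; illustrative only - the analogy, not this implementation, is what the text appeals to): arbitrary numerical expressions play the role of derivations, canonical numerals that of normal forms, and the Troelstra-style criterion identifies two expressions just in case they share a normal form.

```haskell
-- Toy version of the numeral analogy: expressions as "derivations",
-- canonical numerals as normal forms, identity as sameness of normal form.

data Expr = Lit Integer | Plus Expr Expr
  deriving (Eq, Show)

-- Evaluate an expression to its numerical value.
eval :: Expr -> Integer
eval (Lit n)    = n
eval (Plus a b) = eval a + eval b

-- Reduce an expression to its unique canonical representative.
normalize :: Expr -> Expr
normalize = Lit . eval

-- Troelstra-style criterion: same value iff same normal form.
sameValue :: Expr -> Expr -> Bool
sameValue e1 e2 = normalize e1 == normalize e2

-- sameValue (Plus (Lit 3) (Lit 2)) (Plus (Lit 4) (Lit 1))  ==> True
```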

2. TOWARDS AN EVALUATION OF THE NORMALISATION THESIS: THE CASE OF CHURCH-TURING THESIS AS TOUCHSTONE

The core reason why it is ultimately impossible that the comments to be made in this section represent any sort of proper evaluation of the success of NT itself is this: the purpose or purposes for which NT has been put forward or according to which it should be evaluated have not been stated explicitly or clearly enough, and are not really well understood or agreed upon as yet.

This is hardly the first occasion on which this sort of observation is made regarding NT. Troelstra, for instance, in a passage of his Non-extensional equality (p. 318) [Footnote 9: “If we think of the objects of theory as ‘direct proofs’ whose canonical descriptions are normal deductions (cf. numbers and numerals in the case of arithmetic), then the inference rules →I, →E correspond to certain operations on direct proofs; the normalisation theorem then establishes that these operations are always defined, and permits us to regard any deduction as a description of a direct proof. The conjecture [i.e. NT] in the direction <= [soundness] then becomes trivially true. But if the deductions are seen as descriptions of a more general concept of proof (...) (in the same manner as closed terms are descriptions of computations in the case of arithmetic) it may be argued that intensional equality should correspond to (literal) equality of deductions; and then the conjecture is false. So, without further information about the intended concept of proof, the conjecture in the direction <= [soundness] is meaningless because, as just explained, ambiguous.”], addresses the very same issue: there, he calls the thesis ultimately “meaningless”, because “ambiguous” with respect to the notion of proof intended - i.e. since it is not possible to determine which notion of proof the thesis aims to describe, its merit could not be evaluated. The formulation of his diagnosis does itself involve some confusion, though; for ambiguity comes about not where meaning is lacking but, on the contrary, where there is a kind of superabundance of it. It is also not quite clear whether it is due to ambiguity, vagueness or some other phenomenon concerning the terms and concepts involved that we lack reasons to properly judge how well NT fares overall.

Unlike Troelstra, however, we shall not simply rest content here with stating the impossibility of making a precise evaluation of the merits of NT due to our difficulty in determining its actual purposes. There is, after all, quite a number of relevant and clear enough philosophical projects at which one could think such a thesis might be directed, or which it might not unreasonably be considered a candidate to adequately perform - and thus we can, supposition after supposition, come to a better understanding of, at the very least, the extent to which some of the philosophical projects NT is at least in principle capable of performing are worth pursuing. In what follows, a comparison will be made between NT and the Church-Turing thesis concerning some of their key aspects. Due to their significant conceptual similarities and connections, as well as convenient differences, and to the possibility of a more solid historical assessment of the latter, this will be useful in giving us a measure for the reasonableness of our eventual judgements and expectations concerning the former. This should provide a gain in clarity concerning how NT fares as a characterisation of its intended notions, namely, proofs and their identity, and why, as well as concerning its potential for serving other, related philosophical purposes. One can also expect from such a comparison a better understanding of the contentual, so to speak, connection between the two theses - to be explained in sections 2.1.5 and 2.3 below -, which not only contributes to the previous purpose, but is also worth pursuing for its own sake.

2.1. The Church-Turing thesis and the normalisation thesis compared

The famous epithet Church-Turing thesis (henceforth CTT) is employed in the literature to refer to a variety of related theses that typically characterise some notion of computation or effectiveness by means of the formal notions of Turing machines or λ-definability. Here, we will use it to primarily refer to the following formulation: All and only effective computations can be carried out by a Turing machine viz. are λ-definable.

The idea of using CTT to highlight and explain aspects of NT by means of comparison can already be found in the proof-theoretical literature on the subject of identity of proofs. Kosta Došen (2003, p. 4), for instance, presents NT as analogous to CTT with respect to how it addresses its object:

“The Normalization Conjecture is an assertion of the same kind as Church’s Thesis: we should not expect a formal proof of it. (...) The Normalization Conjecture attempts to give a formal reconstruction of an intuitive notion. (Like Church’s Thesis, the Normalization Conjecture might be taken as a kind of definition. It is, however, better to distinguish this particular kind of definition by a special name. The Normalization Conjecture (...) might be taken as a case of analysis (...))”

The initiative of analysing the two theses comparatively in greater detail is thus the unfolding of a movement already started by the relevant literature on identity of proofs and, more specifically, on NT.

2.1.1 Other theses more closely analogous to Church-Turing thesis

To be more accurate, the analogy with CTT seems to be stricter for other theses put forward within the literature of general proof theory, closely related to NT yet distinct from it. One, for instance, is claimed by Prawitz 1971, and could be stated as follows: every proof in first order logic can be carried out by means of some Gentzen-style natural deduction derivation, and by means of every such derivation some such proof can be carried out. Indeed, one can see that Prawitz underlines still a further point of analogy between the specific formulation of CTT in terms of Turing machines and the thesis he claims in his text in terms of Gentzen-style natural deduction derivations, namely: both these formalisms would yield “completely analysed” versions of what they render formally. Such an analogy is explicitly stressed by Prawitz in his text (Prawitz 1971, 2.1.3, p. 246):

“(...) Gentzen’s systems of natural deduction are not arbitrary formalizations of first order logic but constitutes a significant analysis of the proofs in this logic. The situation may be compared to the attempts to characterize the notion of computation where e.g. the formalism of μ-recursive functions or even the general recursive functions may be regarded as an extensional characterization of this notion while Turing’s analysis is such that one may reasonably assert the thesis that every computation when sufficiently analysed can be broken down in the operations described by Turing.”

Dummett’s so-called “fundamental assumption” is also more strictly analogous to CTT than NT is. In Dummett’s own words, the fundamental assumption states that ‘‘if we have a valid argument for a complex sentence, we can construct a valid argument for it which finishes with an application of one of the introduction rules governing its principal operator’’ (Dummett 1991, p. 254) - i.e. a valid argument in canonical form. Turing (1954) puts forward a thesis, closely related to CTT and concerning the nature of puzzles, that illustrates this analogy most clearly:

“(…) the normal form for puzzles is the substitution type of puzzle.

More definitely we can say:

Given any puzzle we can find a corresponding substitution puzzle which is equivalent to it in the sense that given a solution of the one we can easily use it to find a solution of the other.”

Where Turing speaks of puzzles in this passage, one could well speak of effective computations, valid arguments or proofs; where he speaks of the substitution type of puzzle, one could talk in terms of, respectively, Turing machines or valid canonical arguments; and where he speaks of a correspondence viz. equivalence between solutions, one could speak of sameness of, respectively, inputs and outputs or assumptions and conclusion.

Notice, further, that the status of such theses depends crucially on how one approaches the notion characterised by the thesis in question. If one, for instance, takes the valid arguments referred to by the fundamental assumption to be natural deduction derivations in intuitionistic first order logic, the “assumption” becomes in fact a theorem - indeed, a corollary to the normal form theorem. A more general, or less sharply viz. formally stated, view of what one understands by a valid argument for a complex sentence, however, may lend the assumption more definitional contours. In fact, this is exactly what Turing observes about the above-stated thesis on puzzles, right after formulating it:

“This statement is still somewhat lacking in definiteness, and will remain so. I do not propose, for instance, to enter here into the question as to what I mean by the word ‘easily’. (...) In so far as we know a priori what is a puzzle and what is not, the statement is a theorem. In so far as we do not know what puzzles are, the statement is a definition which tells us something about what they are.”

We will come back to this passage (probably still a few times) later, for it also applies in important senses to NT.

2.1.2. First relevant point of analogy: characterisations of a notion in formally tractable terms which claim to be sound and complete

Notwithstanding the observations just made, the analogy observed by Došen between NT and CTT is still significant and illuminating. In fact, all of the above-mentioned theses share a very important trait with NT, namely: all of them offer a characterisation of a notion - say, an “explicandum” - in terms of a formally tractable concept, and claim that this formal characterisation is sound and complete with respect to the characterised notion. Thus, just as CTT suggests the possibility of soundly and completely characterising the informal notion of effective computability at stake in e.g. Turing’s 1936 paper [Footnote 10: Which is a quite specific one; more on this topic later.] in terms of the formal notions of Turing machines and λ-definability; just as Prawitz 1971 characterises first order logic proofs as soundly and completely [Footnote 11: This “completely” holds both in the sense that all proofs can be analysed in this way and in the sense that the analysis is complete, i.e. breaks them down into their most elementary component steps; yet the first of these senses is obviously the relevant one for the described analogy.] analysable in terms of natural deduction derivations; just as Dummett assumes that the existence of canonical valid arguments of appropriate form is necessary and sufficient to explain the validity of arguments in general; and just as Turing proposes that the specifically defined notion of substitution puzzle soundly and completely characterises the general notion of puzzle; so does NT intend to provide a sound and complete characterisation of the informal viz. semantical notion of identity between proofs viz. semantical equivalence between derivations in terms of the formal viz. syntactical notion of (βη-)equivalence based on the reductions involved in the normalisation of natural deduction derivations. By means of NT, though, another thesis that is again closer in form to those mentioned above is suggested, namely: that proofs are soundly and completely expressed by normal derivations alone (thus a strengthening of Prawitz’s aforementioned thesis, which Prawitz 1971 himself acknowledges to follow from his argumentation; see p. 258, 4.1.2.1).

2.1.3. Second relevant point of analogy: theses, not conjectures

a. Not mathematical statements, therefore not candidates to be mathematically proved or disproved

The cogency of Došen’s point that a formal proof of NT should not be expected has been advocated in detail elsewhere [Footnote 12: de Castro Alves 2018, section III.2. Actually, we were, it seems, rather more radical than Došen on this point: he seems to side with Kreisel and Barendregt in stating that the Post-completeness argument is an efficient way of justifying the completeness part of NT once its soundness is granted; a point specifically against which we argued in section III.2.b.] and will be taken here as a point of departure. CTT [Footnote 13: Most comments to be made will also apply to the other mentioned related theses.], likewise, is usually not treated as subject to formal justification [Footnote 14: Although there are exceptions: see, e.g., Kripke 2013, who claims that CTT is a corollary of Gödel’s completeness theorem for first-order predicate logic with identity. Gandy 1988, Mendelson 1990 and Sieg 2008 also figure among such cases. The reader interested in this particular topic is referred to Copeland’s 2017 SEP entry for further details and references on this kind of approach. The present effort takes the much more frequently adopted stance on the topic - namely, the contrary one - as a working point of departure for building its argument. It is nevertheless probably worth stressing that it might well be the case that no real disagreement ultimately occurs between examples of these two apparently opposite claims.]. Turing himself, for instance, in the 1936 article where the thesis as put forward by him can first be identified, expresses a similar opinion in a most positive way: “All arguments which can be given [to show the thesis] are bound to be, fundamentally, appeals to intuition, and for this reason rather unsatisfactory mathematically”. When commenting on the closely related 1954 thesis on puzzles, he puts the point in an even more eloquent way: “The statement is moreover one which one does not attempt to prove. Propaganda is more appropriate to it than proof (...)”. The reason for such a diagnosis - or, seen from another angle, the diagnosis to which this observation gives rise - can be regarded as the same in all cases: the theses in question, no matter how one employs or understands them, are of a rather informal nature - i.e. despite being formulated (at least partially) in mathematical terms, they are not mathematical statements (in the sense that they are not statements of mathematics as e.g. 1+1=2 is); therefore, they cannot possibly be expected to be decided (or shown undecidable) mathematically. This is, by the way, the first reason why NT - just as much as CTT - cannot reasonably be regarded as a mere mathematical fancy: it is not, strictly speaking, entirely or even essentially mathematical in nature. It is yet to be argued, though, why, if at all, it should not be regarded as a mathematically formulated metaphysical or ideological fancy, in mathematics or elsewhere.

The terminology “conjecture”, most frequently employed when referring to NT, has also been purposefully eschewed here for this reason (though not for this reason alone). In contexts that are mathematical enough - such as that of the present discussion of identity of proofs -, a conjecture is usually something that is, at least in principle, a candidate to be mathematically proved or disproved - or, at the very least, to be mathematically shown to be undecidable.

b. Definitions, not “scientific” conjectures viz. hypotheses: definitions can be used in many ways

Furthermore - and this is the second reason why the label “conjecture” is rejected here -, these theses need not in principle be taken as formal reconstructions of some given informal notion. Another aspect they share, pointed out by Došen, is their definitional character; and definitions may well be taken as stipulative, creative, or, more interestingly, propositive rather than descriptive in nature. Church, for instance, in the 1936 paper where he first puts the thesis forward in terms of λ-definability, refers to his proposal as a “definition of effective calculability”; and rather in the same spirit, one could regard NT as a definition of, say, semantical equivalence between natural deduction derivations. Such definitions could in turn be taken to be attempts to actively, positively build or give substance to a notion which was previously not there, or which, at best, was too fuzzy, volatile or obscure to serve certain purposes, instead of attempts to truly, faithfully or precisely capture or depict some informal notion assumed to be somehow previously viz. independently determined. Thus, inasmuch as they are taken as attempts at formally reconstructing an informal notion given beforehand, both theses are just as “true” as they are faithful renderings of the given intended informal notion into formally tractable terms. Otherwise, inasmuch as they are proposals as to how we can positively specify and deal with certain ideas not assumed to be previously or independently given or determined in any particular way, these theses should not be looked upon as having any pretension of being “true” in any sense, and are just as good as they are successful, when used as definitions, in allowing (resp. blocking) uses of the defined notion in accordance with the goals one has, whatever these are. Turing summarises the points made here in 2.1.3.a and 2.1.3.b in the already transcribed comments on the mentioned analogous 1954 thesis on puzzles [Footnote 15: Turing, A.M., 1954, “Solvable and Unsolvable Problems”, Science News, 31: 7-23; reprinted in Copeland 2004b: 582-595.]:

“The statement is moreover one which one does not attempt to prove. Propaganda is more appropriate to it than proof, for its status is something between a theorem and a definition. In so far as we know a priori what is a puzzle and what is not, the statement is a theorem. In so far as we do not know what puzzles are, the statement is a definition which tells us something about what they are.”

Where Turing speaks of puzzles in this passage, one could well speak of effective computations or identity of proofs - or even simply proofs -, and the observation would still hold. Thus, in yet another sense, it is a thesis, and not a conjecture or a hypothesis, that is the proper object of our interest when addressing NT on identity of proofs: the conjecture that can be made by suggesting this thesis as a possibly true hypothesis - of mathematical or other nature - is merely one among the possible ways one can employ the thesis, and it has no privilege in this investigation. Analogously, we do not speak of, say, the normalisation stipulation or postulate on identity of proofs - for, just the same, a, say, non-veritative way of putting the thesis has no reason to be privileged here. The terminology thesis, unsatisfactory as it may be in the discussed respect, follows that which is already sedimented by use in the case of CTT, and seems at any rate less problematic than the considered alternatives.

2.1.4. Third relevant point of analogy: initial acceptance of soundness, not of completeness

Another important aspect in which CTT and NT are similar is the fact that, while their proponents deem their soundness to be somehow obvious, they dedicate significant efforts to arguing for their completeness, which they do not take for granted.

A reconstruction of some assumptions under which the soundness part of NT could be taken for granted was given in de Castro Alves 2018, section III.2.a, and some relevant objections to it have also been explored there, in sections III.2.a.1 and III.2.a.2. Objections to - or, as a matter of fact, discussion of a satisfactory level of philosophical depth in general of - the soundness of CTT are nevertheless relatively less frequent [Footnote 16: Two that appear in the literature of which I am aware can be found respectively in Péter, R., Rekursivität und Konstruktivität, in Constructivity in Mathematics, Amsterdam, 1959, pp. 226-233; and Porte, J., Quelques pseudo-paradoxes de la “calculabilité effective”, Actes du 2me Congrès International de Cybernétique, Namur, Belgium, 1960, pp. 332-334.], the standard, overwhelmingly hegemonic attitude being its tacit acceptance.

On the other hand, the completeness of both theses is more frequently questioned, and quite understandably so, for one reason before all the - many - others: they impose clear and sharp limits on notions manipulated in a great variety of senses which are frequently not sharp at all. This is even more true in the case of proofs and their identity, given how frequently talk of proofs appears in the most variegated contexts of everyday life. In spite of the fact that the notion of (effective) computability has close relations with distinct notions such as constructibility, feasibility etc., it is usually employed in much more restricted contexts - usually philosophical or mathematical ones, being quite infrequent outside theoretical or academic contexts.

Now, it could be claimed that, as a norm, arguing that a formal notion comprises all instances of a “pre-theoretical” content is harder than just claiming that the notion is sound, and that the analogy just remarked upon is thus not very significant. To this, a twofold counter-objection should be given.

First, this line of reasoning seems to assume that a formal notion n picked for the characterisation of some intended informal notion N is, as a norm, initially selected because it seems possibly just as comprehensive as N, but possibly stricter, though clearly not looser. Even if one is willing to concede this assumption to be contingently - say, historically - true, it is clearly controversial: it seems in principle just as sensible a strategy to start this kind of enterprise by picking a notion n that seems possibly just as comprehensive as N, but possibly looser, though clearly not stricter. Indeed, if one thinks of Aristotle’s procedure of trying to characterise things by first providing a genus and then appending to it, if and as needed, a specific difference, one may be tempted to regard the second strategy as playing a role closer to that of a “norm” throughout the history of philosophy and scientific investigation than the first.

Second, regardless of whether or not such a purported “norm” holds in the more general setting of mathematics and/or elsewhere, it is important to notice that in the specific matter of identity of proofs this is not the case. Another - or maybe one should say the other - major formal proposal for characterising identity of proofs, namely, Lambek’s category-theoretical approach based on the idea of generality of a proof, clearly has a stronger claim to completeness than to soundness - the second being evidently the non-obvious claim, in clear need of justification. [Footnote 17: The unfamiliar reader is referred to Došen 2003, pp. 8-10, where this point is made explicit and the so-called “generality conjecture” on identity of proofs is explained in an adequate level of detail.]

2.1.5. Yet another connection: proofs, programs and the Curry-Howard correspondence

Now, it could also be argued that, while all these analogies obtain, the case for choosing specifically CTT as our touchstone for evaluating NT is still somehow feeble. Why not pick another mathematical characterisation of an informal notion for this role - say, e.g., the Cauchy-Dedekind definition of continuity - instead? Well, apart from our firm decision to avoid falling prey to Buridan’s ass’ most unfortunate fate - one had to pick one! -, there are good reasons for this specific choice. We mention three of them.

First, philosophical and historical discussion of CTT in the literature is significantly more frequent, extensive and detailed when compared to that on almost any other such thesis - which makes it presumably more adequate to play the role of touchstone for a philosophical evaluation such as the one pursued here.

Second, the historical development of CTT has lent it a very special epistemic role within the fields of investigation to which it is relevant - a trait to be better explained in section 2.2.a below -, which makes it very peculiar amongst other such mathematical characterisations and, as such, perfectly suitable as a measure for an evaluation of certain aspects of NT’s success and potential.

Third, compared to other such theses, CTT has a special status when it comes to its material relation, so to speak, to NT. The so-called “proofs-as-programs” analogy, enabled by the Curry-Howard correspondence, seems to suggest that NT determines a partial criterion of identity for effective computations as (a type) characterised by CTT (as the application criterion of such a type). In plain vernacular: CTT and NT can easily be taken as addressing a subject-matter in common. Indeed, this gives way to a possible relation of dependence between the two, to be explored in a relevant level of detail in section 2.3 of this text. Other mathematical characterisations of “pre-formal” concepts, such as e.g. the mentioned Cauchy-Dedekind definition of continuity, clearly do not partake in such a connection with NT.

2.2. A brief comparative evaluation of the success of the two theses

a. Church-Turing thesis’s unequivocal triumph as the fertile soil of a science, not as one of its fruits

CTT is possibly one of the most successful propositions of its kind. The diagnosis made by Turing as early as 1948 is the following:

“It is found in practice that L.C.M.s can do anything that could be described as ‘rule of thumb’ or ‘purely mechanical’ [i.e. effectively computable]. This is sufficiently well established that it is now agreed amongst logicians that ‘calculable by means of an L.C.M. [a Turing machine]’ is the correct accurate rendering of such phrases.”

Copeland (2017), for instance, recognises the virtual consensus amongst logicians on the matter as applicable even to today’s situation - which shows that the thesis is well past the possibility of being called a mere fancy. As Turing exemplifies with his opinion, the reason for such high acceptance is usually attributed to “practice”: in the first place, in tune with Turing’s own way of putting the matter, all examples of functions deemed effectively computable that have been considered so far actually happen to be computable by Turing machines. But perhaps even more importantly, all relevant concurrent attempts to give mathematically tractable substance to the notion of effective computability - in particular Church’s and Turing’s - have turned out to be extensionally equivalent in spite of the sometimes very distinct ideas and formal apparatus in terms of which they are formulated. This is actually sometimes described as indicative of a formalism-independent character of the thesis’s claim: the exact same class of functions is, after all, selected as effectively computable in all approaches.

But notice that practice only offers “arguments” - and quite indecisive ones - in support of the completeness part of the thesis; for the lack of knowledge of counterexamples - or of how they could be produced from potential examples - and the overwhelming variety of examples show, if anything, this: first, our lack of concrete material to believe that the thesis may leave out something that should be regarded as an effective computation; and second, the “non-ideological” character of the very notion of effective computability yielded by the mentioned theses - the extensional coincidence of all considered “ideologies” arguably suggests this much. But what about soundness? Which aspect of practice offers any arguments to favour the idea that everything that Turing machines can compute is in fact effectively computable or purely mechanical?

I believe that a proper look at why people have been persuaded of CTT’s soundness, in the past and nowadays, is perhaps one of the best ways to understand both the reason for and the meaning of the hegemonic acceptance of the whole thesis. It is fairly clear that the thesis was put forward - at least in the versions proposed by Church and Turing - as an attempt not so much to reconstruct, but rather to make precise a certain informal notion of effective computability or calculability, deemed in some senses “vague”. Now, such an enterprise involves establishing sharp limits for the notion to be defined; but this task obviously cannot be guided simply by rigorous observance of the limits of the intended notion itself. In fact, there were no such limits, rigorously observable, attached to the informal notion in question. In this sense, rather than a reconstruction, the Church-Turing thesis is, not only from a historical but also from a conceptual, semantical viewpoint, more likely a contribution: an actual addition to a previous notion of computability, which was in fact transformed by it. The evaluation of the soundness of the thesis, thus, cannot really be understood as mere comparison with some related previous informal notion of computability which it supposedly mimics successfully; it is simply peremptory even to assume that there was any such thing.

This suggestion goes against, it seems, Copeland’s (2017) view on the matter. He claims that “effective”, “mechanical”, “systematic” etc. are informal “terms of art” in logic, mathematics and computer science, whose usage in these contexts departs significantly from their everyday usage. He goes on to state that these terms are all synonymous when employed in these specific contexts, and that their meaning is as precisely determined as described by the following clauses, which I quote:

“A method, or procedure, M, for achieving some desired result is called ‘effective’ (or ‘systematic’ or ‘mechanical’) just in case:

1. M is set out in terms of a finite number of exact instructions (each instruction being expressed by means of a finite number of symbols);

2. M will, if carried out without error, produce the desired result in a finite number of steps;

3. M can (in practice or in principle) be carried out by a human being unaided by any machinery except paper and pencil;

4. M demands no insight, intuition, or ingenuity, on the part of the human being carrying out the method.”

Now, as much as this suggestion has a reasonable claim to hold with respect to today’s employment of this vocabulary in the mentioned fields - which, more than mere manipulation of terminology “of art”, is sometimes rather technical [Footnote 18: Maybe now is the proper time for a brief digression, in which we are to ask ourselves this: what does this mean, a term of art? Where does such a use of a term lie between a technical one, i.e. one explicitly stipulated in a fixed and possibly arbitrary fashion, and a non-theoretical, spontaneous or unreflective one? In the cases that now interest us, as in many others, it is most certainly not the case that philosophers, mathematicians and scientists just chose to use a given term to express a certain specific concept of their disciplines for no particular reason and in a way independent of its meaning in other contexts where its rules of employment are well enough sedimented by linguistic practices. Terms of art become terms of art usually due to their behaviour elsewhere in language, and not independently of it, let alone in spite of it. Thus, e.g. “effectiveness” in logic, mathematics and computer science is not some extraterrestrial appropriation of a term which is employed in most other contexts to refer to some distinct, “usual” notion of effectiveness; an appropriation alien to the sense in which one says that e.g. using a blunt razor blade is not very effective for shaving one’s beard. Of course one must recognise that e.g. while writing truth tables is a mathematically effective procedure to test whether or not any given sentence of classical propositional logic is tautological, it could hardly be regarded as effective in a practical sense for testing the tautologousness of sentences with many different variables - it may simply take too long, to the point of unfeasibility. But does this mean that we are dealing with different notions of effectiveness in each of these cases? Some fallacies, for instance, may be deemed rhetorically very effective arguments, despite their being clearly epistemically ineffective. Would it not rather be the case of effectiveness - a single, non-technical and democratically accessible notion, which means, quite obviously, the character of methods or procedures that produce or are capable of producing a certain effect - being measured by different standards, according to the different requirements and goals involved in the respective tasks or activities? This reflection applies to both cases considered here, of course. This means that proofs, as well as their identity, are also not to be looked upon in the present context as completely isolated instances of notions which are merely homonymous with some supposed everyday notions of proof and their identity; insulated notions, over the determination of the meaning of which logicians, mathematicians and computer scientists would have complete and exclusive power. I fail to see reasons why anyone should feel entitled to consider what would be such linguistic aberrations as relevant, even as mere possibilities, to the present discussion.] -, it is hard to see evidence that the mathematical community uniformly employed - let alone was aware of employing - an informal notion of effectiveness as clearly determined as the one just depicted before the publication of, more than any other, Turing’s (1936) own groundbreaking informal account of the notion. The fact that reasonable - yet thus far clearly defeated from a historical viewpoint - criticism from quite capable practitioners of mathematics arose until at least as late as 1960, as already observed here [Footnote 19: See footnote 16.], specifically against the soundness of the thesis, speaks in favour of this.

Moreover, the notion of computability by a Turing machine, besides being a remarkably accurate formalisation of Turing’s own informal understanding of computability - which is the actual principium motus of his contribution to the debate on effective computability -, allows for a mathematically very fruitful way of employing the until then somewhat elusive notion of effective computability. This last observation could hardly be overstated: we are talking of a formal development which yielded the (negative) solutions to the halting problem and to the Entscheidungsproblem, just to start with.
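
To make “computable by a Turing machine” slightly more tangible, here is a deliberately minimal sketch of a machine table and of the loop that executes it. The encoding below (Python, a dictionary-based tape, the unary-successor example) is my own illustrative choice and is not drawn from Turing’s paper; the negative results just mentioned concern what no such table can do, namely decide, for arbitrary machine-and-input pairs, whether execution ever halts.

    def run_turing_machine(table, tape, state='q0', blank='_', max_steps=10_000):
        # table maps (state, read_symbol) to (write_symbol, move, next_state),
        # where move is 'L', 'R' or 'S' (stay) and 'halt' is the terminal state.
        # max_steps is only a safeguard against non-halting toy machines.
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == 'halt':
                break
            symbol = cells.get(head, blank)
            write, move, state = table[(state, symbol)]
            cells[head] = write
            head += {'L': -1, 'R': 1, 'S': 0}[move]
        return ''.join(cells[i] for i in sorted(cells)).strip(blank)

    # A toy machine computing the successor of a number written in unary:
    # walk right over the 1s, append one more 1, halt.
    successor = {
        ('q0', '1'): ('1', 'R', 'q0'),
        ('q0', '_'): ('1', 'S', 'halt'),
    }
    print(run_turing_machine(successor, '111'))  # '1111'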

Thus, the fact that CTT is a remarkably faithful formalisation of Turing’s own proposed informal account of effective computability is hardly the most decisive factor for the initial endorsement of the soundness of this thesis - at least insofar as it is considered as formulated here, namely: “All and only effective computations can be carried out by a Turing machine viz. are λ-definable”. Rather, such endorsement seems to stem from the fact that Turing’s solid informal conception of effective computability was probably as appealing as possible when measured within the mainstream ideological framework on effectiveness and computability of his time. A remarkable illustration of this point is Gödel’s appreciation of the significance of Turing’s work for the proper support of CTT. He remained clearly unconvinced of the thesis even after it was demonstrated by Church and Kleene in 1935 that λ-definability and general recursiveness are extensionally equivalent, only to enthusiastically endorse it after Turing’s account of computability in his 1936 article. Regarding this, he said:

“Turing's work gives an analysis of the concept of "mechanical procedure" (alias "algorithm" or "computation procedure" or "finite combinatorial procedure"). This concept is shown to be equivalent with that of a "Turing machine."” (quoted from Davis 1965, p. 72. My emphasis.)

The “concept” referred to by Gödel as yielded by Turing’s analysis is not the formal one of a Turing machine - otherwise his claim that it is shown to be equivalent to that of a Turing machine would be pointless. He further claims that:

“We had not perceived the sharp concept of mechanical procedures sharply before Turing, who brought us to the right perspective.” (Quoted in Wang 1974, p. 85)

As observed above, then, the crucial aspect of Turing’s account of computability is his convincing philosophical presentation of (a) why a certain informal notion of effective computability is very appealing, and for clear reasons, and (b) how it can be very faithfully formalised in a certain remarkably simple, interesting and fruitful way (i.e. Turing machines) - which only as a bonus happens to be extensionally equivalent to other suggested formal criteria for defining effective computations. These other formal criteria were never sufficiently underpinned from a philosophical point of view; they were not shown to germinate from a prior appealing informal account of effective computability, and thus remained unjustified in their general claims both to soundness and to completeness. Gödel’s appreciation of the matter is thus, in this respect, very close to the one presented here.
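
For concreteness, one of these other criteria - λ-definability, already mentioned above - can be given a quick gloss: natural numbers are encoded as pure λ-terms (Church numerals, where the numeral for n applies a function n times), and a numerical function counts as λ-definable when some λ-term takes numerals to numerals accordingly. The sketch below renders the standard textbook encoding in Python lambdas; it is offered purely for illustration, and nothing in it is specific to Church’s own papers.

    # Church numerals: the numeral for n applies a function f to an argument x exactly n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mult = lambda m: lambda n: lambda f: n(m(f))

    def church(k):
        # Ordinary integer -> Church numeral.
        n = zero
        for _ in range(k):
            n = succ(n)
        return n

    def unchurch(n):
        # Church numeral -> ordinary integer, by applying 'add one' n times to 0.
        return n(lambda k: k + 1)(0)

    print(unchurch(add(church(2))(church(3))))   # 5
    print(unchurch(mult(church(2))(church(3))))  # 6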

However, according to the present appreciation of the matter - which in this respect departs from Gödel’s, as will become clear -, the appeal of Turing’s informal account of computability resides in two facts: (a) it is compatible with the vague notion of effective computability employed at the time, in the sense that it is neither clearly unsound nor clearly incomplete with respect to it; and (b) by extending and transforming this previously vague notion of computability into a more detailed and solid - yet still informal - one, it offers the basis for a mathematical handling and further development of this notion of unparalleled power. This is in fact why one should not take Turing’s wording as mere verbal frolic when he talks of propaganda, rather than proof, being more adequate to support the thesis: this is true in senses probably deeper than the mathematical one. As just suggested, then, the decisive factor in the acceptance of the thesis - and most especially of its soundness - has to do with the conceptual and mathematical possibilities it was capable of inaugurating; it is therefore not a matter of accepting it because it faithfully depicts a notion of effective computation which was previously there - somehow “in the air”20, whatever this means -, but rather because it outlines a notion of effective computation which is remarkably powerful from both a conceptual and a technical point of view. And it is hardly relevant here whether or not people believe they have accepted CTT due to its faithful correspondence to a previous notion of computability; the fact is that such a notion had never really been expressed before Turing, which makes it a leap of faith to believe that it was already there beforehand, viz. independently of his formulation. This sort of understanding seems to be emblematically exemplified by, again, the attitude of Gödel towards CTT, who, in the second of the passages quoted above, refers to what seems to be an assumed, previously given “sharp” concept to which Turing’s analysis corresponds. It is especially in this respect that his view departs from ours. Closer to our viewpoint is the one suggested by Church 1937. Post (Post 1936, p.105) criticised Church’s treatment of his identification of λ-definability with “effective calculability” as a definition of effective calculability, on the grounds that it masks the fact that the identification is rather a “working hypothesis” in need of “continual verification”; in reviewing Post’s article, Church then responded to the criticism by claiming something close to what is stated here, namely that there is no reason to assume a previous notion of calculability against which the “hypothesis” is to be verified:

“[Post] takes this identification as a "working hypothesis" in need of continual verification. To this the reviewer would object that effectiveness in the ordinary (not explicitly defined) sense has not been given an exact definition, and hence the working hypothesis in question has not an exact meaning. To define effectiveness as computability by an arbitrary machine subject to the restrictions of finiteness would seem to be an adequate representation of the ordinary notion, and if this is done the need for a working hypothesis disappears.”

Church thus seems to reduce Post’s view of CTT as a “working hypothesis” to quasi-nonsense: either the “hypothesis” would be meaningless due to the lack of definition of the supposedly intended object of description; or, in the presence of a satisfactory such definition, it would be pointless as a hypothesis - for in such a case one would of course already be in possession of what the hypothesis aims to clarify.

In any case, conceding as much as we can, CTT is at the very least, if you will, a case where the so-called paradox of analysis finds an instance which clearly goes far beyond a mere theoretical fuss: even if we accept that it is, as so frequently suggested, the formal version of a mere analysis of an independently and previously given notion of effective computability, it is difficult to see how such an analysis could have brought about so much transformation in the use of the analysed notion without having a meaning essentially different from that of the latter. But then how, and in which sense, could one still call it a mere analysis?

The “propaganda” in favour of the employment of CTT to guide the handling of notions such as “effective computability” and equivalents has indeed been so successful that a remarkable phenomenon can be observed at the present stage of their discussion: “computable by a Turing machine”, “λ-definable” and equivalent notions - especially the first of these, due to the very simple and familiar terms in which it is formulated - became themselves paradigmatic of what one deems effectively computable in an “intuitive”21 sense. So, in practice, the soundness of CTT is not altogether detachable from the frequently supposed “intuitive”, “pre-theoretical” departure point of the discussion of computability - at the very least not anymore. In a sense, thus, to deny the soundness of CTT in most contexts nowadays is literally close to nonsensical - which shows that this thesis has succeeded in establishing the paradigm of what it is to be effectively computable, rather than in truly describing this notion.22

b. The normalisation thesis: subtotal

Matters stand quite differently when it comes to NT. Although explicit and mathematically oriented attempts to account for identity of proofs in a systematic fashion are a rather recent business - even more so than in the case of computability -, we have argued elsewhere23 that discussions of identity of proofs - or, if you will, of the semantical equivalence of formally valid deductive arguments - of a rather formal nature can be found in classical philosophical literature since at least as early as the 18th century; and, maybe even more importantly, the core informal concept at stake in the thesis - namely, proof - has been employed as a “term of art” in philosophy, logic and mathematics for almost as long as there have been such disciplines. This means that the philosophical consideration of the concepts addressed by NT goes much farther back in history than the relatively young discussion of computability, which one could take pains to trace back to, at the earliest, say, Leibniz and other 17th century thinkers, and which in any case did not acquire some of the most essential features of its present shape before Hilbert and his foundational enterprise.

Quite contrary to the expectations that such a picture could generate, however, the notion of proof at stake in NT has far more clearly determined contours than the notion of effectiveness addressed by CTT: from the very outset, any purported proofs that cannot be expressed as natural deduction derivations are simply not contemplated, while in principle all procedures/computations are possible candidates for effectiveness. NT thus works only in an environment where a clear and specific application criterion for proofs is granted, and is either evidently false or simply ill-formulated in its absence - unlike what happens in the case of CTT, which itself provides such a clear application criterion for a notion of effective computation that need not be previously specified in any way.

Now, this point actually highlights one of the most important discrepancies between the two compared theses. Very generally stated, it is this: NT and CTT work as answers to fundamentally different kinds of questions regarding their respective objects. As stated previously, while CTT is an attempt at providing the notion of effective computation with an application criterion, NT works only under the prior assumption of a specific application criterion for proofs, and provides an identity criterion for proofs. Thus, while CTT answers the question “what is it for a computation to be effective?” - first in the intensional sense of “what are the necessary and sufficient conditions for the predicate ‘effective computation’ to apply to something?”, and consequently also in the extensional sense of “to what does the predicate ‘effective computation’ apply?” -, NT must first take for granted that the answer to the question “what is it for something to be a proof?” (asked in the same sense as in the case of effective computation) is “to be expressible as a derivation”, and only then does it present itself as an answer to the question “what is a proof?” - in the sense of “what are the necessary and sufficient conditions for something that is a proof to be the proof it is and no other, regardless of which proof it is?”. Notice that CTT has absolutely nothing at all to say about the identity or individuation criteria of that which it selects as effective computations.24
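
The point can be made concrete with a deliberately humdrum example of my own (any pair of extensionally equal programs would do). Both procedures below are, by anyone’s standards, effective computations of the factorial function, and CTT, read as supplying an application criterion, counts them both in; whether they are one and the same computation or two distinct ones is a question on which it stays silent.

    def factorial_iterative(n):
        # Accumulate the product 1 * 2 * ... * n from below.
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    def factorial_recursive(n):
        # Unfold the defining recurrence n! = n * (n - 1)!.
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    # Extensionally indistinguishable ...
    assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))
    # ... yet procedurally quite different; CTT provides no criterion for deciding
    # whether they count as the same effective computation.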

This asymmetry means that NT, together with the mentioned assumption on which it depends, does offer an account of what it is to be a proof, while CTT does not - or at the very least not necessarily - offer an account of what it is to be a computation; if it does, then only a partial one.

The fact that the implications of NT are, in the sense just described, clearly farther-reaching with respect to the depiction of the contours of its object - and by that we now mean proofs rather than identity of proofs - than those of CTT are with respect to its respective object - namely effective computations - could be regarded as a relevant point in explaining why the first is so clearly less accepted than the second, even if one only considers their respective claims to soundness. But probably the most drastic factor is really the lack of appeal and potential for publicity of the first within the current ideological status quo of the mathematical and philosophical establishment, as opposed to that of the second. While it feels, as already mentioned, almost absurd to contradict the soundness of CTT in any discussion of its subject-matter that is not radically deviant from the mainstream, it is simply too easy to think of quite unextravagant counterexamples to the soundness of NT - some notable examples of which were discussed in de Castro Alves 2018 (see e.g. section III.2.a.1). And this is, again, of course not due to one being true and the other false, but rather to the acceptance of one having served, from the very beginning, as a remarkably good springboard for the further development of what was, until its appearance, mainstream ideology on matters to which effectiveness of computations is a central issue, such as e.g. solvability of mathematical problems; while the acceptance of the other seems to allow no clearly foreseeable significant further development towards a clearly more powerful or somehow advantageous way of dealing with any interesting question which is already stated. In fact, it might be relevant in explaining this to consider the reason why identity of proofs as a problem itself, unlike that of effective computability, never really caught the eye of mainstream mathematics: it has not yet been clearly connected with any “bigger” issue in this area. Those who have given it their attention for not exclusively ludic reasons seem to see it as a question worth answering mainly for what they consider to be its own intrinsic conceptual import, intimately tied to a philosophical endeavour, namely getting to understand what a proof is. So, while CTT eventually became the main paradigm for the rather uniform and precise understanding of effective computability that came about largely due to its own effects - and so remains, at least until now -, NT offers something that is still no more than a suggestive, arguably promising way of characterising proofs and their identity - and maybe so due more to its methodological than to its properly contentual virtues. It is thus fair enough to let Kreisel’s 1971 words echo here, for they remain a quite accurate diagnosis: “at the present time, the criterion [of identity of proofs corresponding to NT] above has a pedagogic use: it corrects the common assumption that “nothing” precise (and reasonable) can be done on questions about synonymity [viz. identity] of proofs.”

2.3. On a possible interdependence between the two theses

One could be tempted to say that, in this context, NT can work as a partial answer to the more general question concerning the identity of effective computations. Mimicking, mutatis mutandis, the formulation of the question which NT proposes to answer, we could formulate this one thus: when do two Turing machines / λ-terms represent the same computation? The idea would be that the mentioned background assumption of NT can actually be looked upon as the statement of a particular case of CTT. As just stated, this assumption consists in the acceptance of a specific application criterion for proofs - i.e., roughly, a criterion that determines the necessary and sufficient conditions to be satisfied by something for the predicate “proof” to apply to it. Now, one of the theses mentioned and described previously as more closely analogous to CTT than NT itself offers precisely such a criterion - namely, the one claimed by Prawitz 1971, according to which every proof in first order logic can be carried out by means of some Gentzen-style natural deduction derivation, and by means of every such derivation some such proof can be carried out. Thus, providing an identity criterion for proofs could be taken as ultimately amounting to providing an identity criterion for some specific subclass of effective computations. This consideration involves, however, a very significant mistake. More specifically, it rests on a failure to observe that the notions of proof (the informal one, formally specified by the background assumption of NT, of course) and of effective computation (also the informal one, as characterised by CTT), despite having intrinsically related application criteria, are still at least in principle essentially distinct viz. not necessarily related - in particular, there is no reason at all to see proofs as a subclass of effective computations, since the identity criteria of the latter are left undefined and might thus be completely distinct from those of the former (as e.g. in the case of positive rational numbers and ordered pairs of positive natural numbers, mentioned in the introduction of this text). It is not difficult to understand that the background assumption of NT implies not that first order logic proofs are all expressible by effective computations of a special class, but rather that these proofs are all expressible by that which, according to CTT, also expresses certain effective computations - namely those formal expressions of effective computations (typed λ-terms, etc.) that somehow correspond (Curry-Howard) to formal expressions of proofs (natural deduction derivations). Nevertheless, it would suffice to assume that first-order logic proofs indeed are a particular subclass of effective computations in Turing’s informal sense - which would not seem extravagant at all, given that the acceptance of the background assumption of NT is far greater than that of NT itself - to restore the claims just shown to be in principle unfounded.
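
To fix ideas about the correspondence just invoked, a textbook-style example (of my own choosing, purely for illustration) may help. Under Curry-Howard, the two simply typed λ-terms below correspond to two natural deduction derivations of A → A: the term on the left to a derivation containing a detour - a maximum formula (A → A) → (A → A) introduced by →I and immediately used as the major premise of →E - and its β-reduct on the right to the normal, detour-free derivation:

    (λy : A → A. y)(λx : A. x)   ⟶β   λx : A. x

NT identifies the two corresponding derivations, since the β-reduction corresponds to the removal of the maximum formula and both derivations thus share the same normal form; and it is only via this correspondence - not via any claim about effective computations themselves - that the background assumption of NT touches the terrain of CTT.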

3. CONCLUDING REMARKS: WHAT CAN WE (NOT) ACCOMPLISH WITH THE NORMALISATION THESIS

The observations made thus far suggest that, even prior to NT itself, the very question concerning identity of proofs - even if restricted to the version which NT actually answers - has, unlike the question about effectiveness of computation answered by CTT, no clear potential, import or utility for broader mathematical and conceptual problems. Its only clear intrinsic connection seems to be to the philosophical question which motivates it, namely: what is a proof?

Answering this particular philosophical question does not seem to be within the prioritised scope of interests of significantly many significant mathematicians or philosophers - in many cases, not even of those few who occupy themselves with proof-theoretic investigations. Nevertheless, it seems that enough propaganda, both in quantity and in quality, could change this situation.

Let us suppose, then, that one could triumph in this enterprise of publicity and bring proofs and their identity to the very centre of mathematicians’ and philosophers’ considerations and concerns, rather as happened with effectiveness and computability in the first half of the 20th century. What would then be our reasons to endorse something such as NT?

It is easy to see that any reasons to do this much would not be “scientific”: no informal notion of proof that is not too fabricated and/or too partial to perform the (already chimerical enough) task of “capturing what is essential about proof” corresponds faithfully to the notion formally outlined by NT. Those that do, e.g. a proper version of BHK, are interesting enough to explain certain phenomena about proofs as semantical counterparts of derivations; but their coverage of the addressed notion is obviously not enough. As discussed rather extensively elsewhere25, the counterexamples are too many, too significant and too unextravagant.

A consequence of this is that NT could also not serve as a “scientific” springboard to the investigation of proofs even nearly as efficiently as CTT served to the investigation of effective computation. Since too much obvious and obviously interesting material cannot be handled as proofs under its acceptance, any results obtained would lack the necessary conceptual comprehensiveness to yield acceptable enough characterisations of what a proof is. The background assumption of NT, however, which is, as observed above, significantly more analogous to CTT than NT itself, has been playing the role of such a springboard - though, it seems, with much less coverage with respect to proofs (unless, arguably, we were to restrict ourselves to a realm of “effective” first-order proofs) than CTT has with respect to effective computations. This is of course hardly surprising: as already mentioned, the historical development and provenance of the notion of proof is significantly more intricate and remote than that of the notion of (effective) computation, which makes it reasonable to expect the former to be more difficult to describe coherently and cohesively, let alone to formalise.

But propaganda, it seems, could in principle change even this. In such a case, we would of course be talking of an enterprise of publicity far more prodigious than overruling the few - yet relevant and sensible - voices raised against the soundness or the completeness26 of a certain notion of effective computation. Nevertheless, the right amount of time and the wrong kind of interests could really allow for portentous possibilities. When considering related issues in the conclusion of his 2003 paper, Kosta Došen says:

“The complaint might be voiced that with the Normalization (…) [Conjecture] we are giving very limited answers to the question of identity of proofs. What about identity of proofs in the rest of mathematics, outside logic? Shouldn’t we take into account many other inference rules, and not only those based on logical constants? Perhaps not if the structure of proofs is taken to be purely logical. Perhaps conjectures like the Normalization (…) [Conjecture] are not far from the end of the road.

Faced with two concrete proofs in mathematics - for example, two proofs of the Theorem of Pythagoras, or something more involved - it could seem pretty hopeless to try to decide whether they are identical just armed with the Normalization Conjecture (...). But this hopelessness might just be the hopelessness of formalization. We are overwhelmed not by complicated principles, but by sheer quantity.”

The question would then be: would this be desirable? Do we want something like NT to take over as the touchstone of our investigations - even if only our mathematical investigations - concerning proofs? If so, why? Given all the points raised in chapters III and IV of de Castro Alves 2018 against the soundness and the completeness of NT - points that purposefully do not even resort to the issue of “limitedness” mentioned by Došen -, it seems to me that our manipulation of the notion of proof, even when considered confined to the very limited realm of formal logic, would inescapably have to be severely distorted and deformed in order to fit into the straitjacket NT offers as a royal garb.

REFERENCES

  • 1
    A. Blass, N. Dershowitz, and Y. Gurevich. When are two algorithms the same? Bulletin of Symbolic Logic, 15(02) (2009), pp.145-168.
  • 2
    T. de Castro Alves, Synonymy and Identity of Proofs - a Philosophical Essay, doctoral dissertation, Eberhard-Karls-Universität Tübingen, Tübingen (2018).
  • 3
    A. Church, Review of Turing 1936, Journal of Symbolic Logic, 2 (1937a), pp. 42-43.
  • 4
    A. Church, Review of Post 1936, Journal of Symbolic Logic, 2 (1937b), p.43.
  • 5
    B. J. Copeland, The Church-Turing Thesis, The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2017/entries/church-turing/>.
    » https://plato.stanford.edu/archives/win2017/entries/church-turing/
  • 6
    M. Davis (ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, New York: Raven (1965).
  • 7
    N. Dershowitz and Y. Gurevich, 2008, “A Natural Axiomatization of Computability and Proof of Church’s Thesis”, Bulletin of Symbolic Logic, 14: 299-350.
  • 8
K. Došen, Identity of proofs based on normalization and generality, The Bulletin of Symbolic Logic, vol. 9 (2003), pp. 477-503.
  • 9
    M. Dummett, The logical basis of metaphysics. Duckworth, London (1991).
  • 10
    R. Gandy, The Confluence of Ideas in 1936, in R. Herken (ed.), 1988, The Universal Turing Machine: A Half-Century Survey, Oxford: Oxford University Press (1988), pp.51-102.
  • 11
    G. Gentzen, Untersuchungen über das logische Schließen, Math. Z. 39 (1935), pp. 176-210, 405-431 (English translation: Investigations into logical deduction, in: The Collected Papers of Gerhard Gentzen, M.E. Szabo ed., North-Holland, Amsterdam, 1969, pp. 68-131).
  • 12
    Y. Gurevich, “What is an algorithm?” in SOFSEM: Theory and Practice of Computer Science (eds. M. Bielikova et al.), Springer LNCS 7147 (2012), pp. 31-42
  • 13
    C.B. Jay, N. Ghani, The virtues of eta-expansion. J. Functional Programming 5 (2), Cambridge University Press (1995), pp. 135-154.
  • 14
    L. Kalmár, An Argument Against the Plausibility of Church’s Thesis, in A. Heyting (ed.), Constructivity in Mathematics, Amsterdam: North-Holland, (1959) pp. 72-80.
  • 15
    G. Kreisel, A survey of proof theory II, in: J.E. Fenstad ed., Proceedings of the Second Scandinavian Logic Symposium, North-Holland, Amsterdam (1971), pp. 109-170.
  • 16
    E. Mendelson, Second thoughts about Church’s thesis and mathematical proofs. The Journal of Philosophy (1990), pp. 225-233.
  • 17
    Y.N. Moschovakis, What Is an Algorithm?, in: Engquist B., Schmid W. (eds) Mathematics Unlimited - 2001 and Beyond. Springer, Berlin, Heidelberg (2001).
  • 18
    R. Péter, Rekursivität und Konstruktivität, in A. Heyting (ed.), Constructivity in Mathematics, Amsterdam: North-Holland (1959), pp. 226-233.
  • 19
    J. Porte, Quelques pseudo-paradoxes de la “calculabilité effective”, Actes du 2me Congrès International de Cybernetique, Namur, Belgium (1960), pp. 332-334.
  • 20
E.L. Post, “Finite Combinatory Processes - Formulation 1”, Journal of Symbolic Logic, 1 (1936), pp. 103-105.
  • 21
    D. Prawitz, Natural Deduction. A Proof-Theoretical Study. Almqvist & Wiksell: Stockholm (1965).
  • 22
    D. Prawitz, Ideas and results in proof theory, in: J.E. Fenstad ed., Proceedings of the Second Scandinavian Logic Symposium, North-Holland, Amsterdam (1971), pp. 235-307.
  • 23
    L. San Mauro, Church-Turing thesis, in practice, in: Truth, Existence and Explanation, Springer (2018), pp. 225-248.
  • 24
    W. Sieg, “Calculations by man and machine: Conceptual analysis”, in: W. Sieg, R. Sommer, and C. Talcott (eds), Reflections on the Foundations of Mathematics. Essays in Honor of Solomon Feferman (Lecture Notes in Logic: Volume 15), Natick, MA: Association for Symbolic Logic (2002), pp. 396-415.
  • 25
    W. Sieg, “Church Without Dogma: Axioms for Computability”, in B. Lowe, A. Sorbi, and B. Cooper (eds), New Computational Paradigms, New York: Springer Verlag (2008), pp. 139-152.
  • 26
    G. Sundholm, Identity: Absolute. Criterial. Propositional. in: Timothy Childers (ed.), The Logica Yearbook 1998, The Institute of Philosophy, Academy of Sciences of the Czech Republic, Prague, (1999) pp. 20-26.
  • 27
    M.H. Sørensen, P. Urzyczyn, Lectures on the Curry-Howard Isomorphism, Elsevier (2006).
  • 28
    A.S. Troelstra, Non-extensional equality, Fund. Math. 82 (1975), pp. 307-322.
  • 29
A.M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society (Series 2), 42 (1936-37), pp. 230-265; reprinted in Copeland 2004, pp. 58-90.
  • 30
A.M. Turing, “Solvable and Unsolvable Problems”, Science News, 31 (1954), pp. 7-23; reprinted in Copeland 2004, pp. 582-595.
  • 31
    A.M. Turing, “Intelligent Machinery”, National Physical Laboratory Report, (1948), in: B.J. Copeland, The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, Oxford: Oxford University Press , 2004, pp. 410-432.
  • 32
    H. Wang, From Mathematics to Philosophy, New York: Humanities Press (1974).
  • 33
    F. Widebäck, Identity of Proofs, doctoral dissertation, University of Stockholm, Almqvist & Wiksell, Stockholm (2001).
  • 1
    The reader already familiar with the literature of general proof theory on identity of proofs may well skip straight into section 2.
  • 2
Sundholm, G.B. (1999) Identity: Propositional, criterial, Absolute, in The Logica 1998 Yearbook, Filosofia Publishers, Czech Academy of Science, Prague, pp. 20-26.
  • 3
    The reader already familiar with the normalisation thesis on identity of proofs may well skip this section and start from section 2.
  • 4
    Prawitz, D. (1971) Ideas and Results in Proof Theory in: J.E. Fenstad ed., Proceedings of the Second Scandinavian Logic Symposium, North-Holland, Amsterdam (1971), pp. 235-307.
  • 5
One might ask oneself why the application of immediate expansions is restricted to these cases. The reason, as I hope will become clear, has nothing whatsoever to do with identity of proofs, but rather with normalisation of derivations, which is the very purpose for which these reductions were devised in the first place. It turns out that the restriction upon immediate expansions is what allows the obtention of strong normalisation in the presence of the reductions by means of which maximum formulas are removed (on this and closely related matters, see C.B. Jay, N. Ghani, The virtues of eta-expansion, J. Functional Programming 5 (2): 135-154, April 1995, Cambridge University Press).
  • 6
Here one might also wonder why the second reduction, contrarily to the first, inserts, instead of removing, a formula which is dispensable for the obtention of the end-formula of a derivation from its top ones. The answer, once again, is not connected to identity of proofs, but rather to the confluence of the reduction procedure: if we took the immediate expansions in the other direction - i.e. as contractions rather than expansions -, then e.g. the unicity of the normal form of derivations in general would be lost in the presence of the reductions by means of which one removes maximum formulas. Again, see C.B. Jay, N. Ghani, The virtues of eta-expansion, J. Functional Programming 5 (2): 135-154, April 1995, Cambridge University Press.
  • 7
The unfamiliar reader is referred to M.H. Sørensen, P. Urzyczyn, Lectures on the Curry-Howard Isomorphism, Elsevier, 2006.
  • 8
    Prawitz acknowledges that this may not be the case for the “redexes” of permutative reductions associated to the eliminations of disjunction and existential quantifier.
  • 9
“If we think of the objects of theory as ‘direct proofs’ whose canonical descriptions are normal deductions (cf. numbers and numerals in the case of arithmetic), then the inference rules →I, →E correspond to certain operations on direct proofs; the normalisation theorem then establishes that these operations are always defined, and permits us to regard any deduction as a description of a direct proof. The conjecture [i.e. NT] in the direction <= [soundness] then becomes trivially true. But if the deductions are seen as descriptions of a more general concept of proof (...) (in the same manner as closed terms are descriptions of computations in the case of arithmetic) it may be argued that intensional equality should correspond to (literal) equality of deductions; and then the conjecture is false. So, without further information about the intended concept of proof, the conjecture in the direction <= [soundness] is meaningless because, as just explained, ambiguous.”
  • 10
    Which is a quite specific one; more on this topic later.
  • 11
This “completely” holds both in the sense that all proofs can be analysed in this way and in the sense that the analysis is complete, i.e. breaks them down into their most elementary component steps; yet the first of these senses is obviously the relevant one to the described analogy.
  • 12
de Castro Alves 2018, section III.2. Actually, we were, it seems, rather more radical than Došen on this point: he seems to side with Kreisel and Barendregt in stating that the Post-completeness argument is an efficient way of justifying the completeness part of NT once its soundness is granted; a point specifically against which we argued in section III.2.b.
  • 13
    Most comments to be made will also apply to the other mentioned related theses.
  • 14
Although there are exceptions: see, e.g., Kripke 2013, who claims that CTT is a corollary of Gödel’s completeness theorem for first-order predicate logic with identity. Gandy 1988, Mendelson 1990 and Sieg 2008 also figure among such cases. The reader interested in this particular topic is referred to Copeland 2017’s SEP entry for further details and references on this kind of approach. The present effort takes the much more frequently adopted stance on the topic - namely, the contrary one - as a working departure point for building its argument. It is nevertheless probably worth stressing that it might well be the case that no real disagreement ultimately occurs between examples of these two apparently opposite claims.
  • 15
    Turing, A.M., 1954, “Solvable and Unsolvable Problems”, Science News, 31: 7-23; reprinted in Copeland 2004b: 582-595.
  • 16
Two that appear in the literature of which I am aware can be respectively found in Péter, R., Rekursivität und Konstruktivität, in A. Heyting (ed.), Constructivity in Mathematics, Amsterdam: North-Holland, 1959, pp. 226-233; and Porte, J., Quelques pseudo-paradoxes de la “calculabilité effective”, Actes du 2me Congrès International de Cybernetique, Namur, Belgium, 1960, pp. 332-334.
  • 17
The unfamiliar reader is referred to Došen 2003, pp. 8-10, where this point is made explicit and the so-called “generality conjecture” on identity of proofs is explained in an adequate level of detail.
  • 18
Maybe now is the proper time for a brief digression, in which we are to ask ourselves this: What does this mean: a term of art? Where does such a use of a term lie between a technical one, i.e. one explicitly stipulated in a fixed and possibly arbitrary fashion, and a non-theoretical, spontaneous or unreflective one? In the cases that now interest us, as in many others, it is most certainly not the case that philosophers, mathematicians and scientists just chose to use a given term to express a certain specific concept of their disciplines for no particular reason and in a way independent of its meaning in other contexts where its rules of employment are well enough sedimented by linguistic practices. Terms of art become terms of art usually due to their behaviour elsewhere in language, and not independently of it, let alone in spite of it. Thus, e.g. “effectiveness” in logic, mathematics and computer science is not some extraterrestrial appropriation of a term which is employed in most other contexts to refer to some distinct, “usual” notion of effectiveness; an appropriation alien to the sense in which one says that e.g. using a blunt razor blade is not very effective for shaving one’s beard. Of course one must recognise that e.g. while writing truth tables is a mathematically effective procedure to test whether or not any given sentence of classical propositional logic is tautological, it could hardly be regarded as effective in a practical sense for testing the tautologousness of sentences with many different variables - it may simply take too long, to the point of unfeasibility. But does this mean that we are dealing with different notions of effectiveness in each of these cases? Some fallacies, for instance, may be deemed rhetorically very effective arguments, despite their being clearly epistemically ineffective. Would it not rather be the case of effectiveness - a single, non-technical and democratically accessible notion, which means, quite obviously, the character of methods or procedures that produce or are capable of producing a certain effect - being measured by different standards, according to the different requirements and goals involved in the respective tasks or activities? This reflection applies to both cases considered here, of course. This means that proofs, as well as their identity, are also not to be looked upon in the present context as completely isolated instances of notions which are nothing but homonyms of some supposed everyday notions of proof and their identity; insulated notions, over the determination of the meaning of which logicians, mathematicians and computer scientists would have complete and exclusive power. I fail to see why anyone should feel entitled to consider such linguistic aberrations as relevant, even as mere possibilities, to the present discussion.
  • 19
    Footnote 16.
  • 20
See e.g. p. 51 of Gandy, R., 1988, “The Confluence of Ideas in 1936”, in R. Herken (ed.), The Universal Turing Machine: A Half-Century Survey, Oxford: Oxford University Press: 51-102.
  • 21
    My experience is that the use of this notion - “intuitive” -, while hardly being of any significant utility at all in philosophical discussion, usually comes at the expensive cost of bringing about a great deal of confusion to it. Therefore it is purposefully avoided here as much as possible.
  • 22
    A view similar to the one just put forward on CTT is advocated in Shapiro 2006 and Shapiro 2013, though in far greater level of detail and with very different purposes. Shapiro argues at length for some theses very close to some we have presented here: e.g., that the informal notion of computability at stake in the early 1930s had no clear-cut borders and CTT historically contributed to sharpen it (in a way that nowadays cannot be, in a sense, undone). The views presented here were developed in complete unawareness and independence of Shapiro’s mentioned work, but I of course waive any claim to originality on the matter. A proper discussion of how Shapiro’s views compare to the ones presented here would lead us far too much astray, but a few rather inexhaustive notes are due. Providing a satisfactory defence of the view of CTT presented in this section was of course never a goal here at all - conveying the idea as a reasonable interpretive possibility is more than enough for the present purposes, which, in essence, do not but instrumentally employ this interpretive possibility as a means of analysis and interpretation of another thesis: NT. Shapiro’s mentioned efforts, on the other hand, do have CTT as their main subject-matter, and thus unsurprisingly argue and claim a lot more concerning it than the brief six-page-long instrumental interpretive characterisation offered here. Thus, for instance, Shapiro insistently subscribes to the truth of CTT - a matter concerning which no particular stance is explicitly taken here. In fact, as some passages of this effort suggest, a discussion of the eventual truth value of CTT - especially of its soundness clause - seems rather incompatible with the current angle of consideration. Besides, Shapiro seems to tie CTT’s successful establishment of a paradigmatic conception of effective computability to a loss of room for what he calls “open-texture” of this notion, especially within mathematics. Once again, no particular judgement on this matter is explicitly expressed here - although I would like to stress, bringing the remarks made in footnote 20 above as background, that the philosophical discussion of effective computability among mathematicians is rather young in historical standards, and it is nested in a long and dense tradition of thought which it frequently neglects, and whose bearing on the current and future possibilities for this debate - within mathematics and elsewhere - has not really been adequately studied as yet. It is also worth noticing that, seen from a more abstract perspective, Shapiro’s texts share with this one the employment of a sort of general strategy for approaching its main subject-matter, namely: a comparison of the thesis under investigation with other conveniently similar, historically or conceptually related mathematical characterisations of informal notions, aimed at highlighting noteworthy aspects of the first one by analogies or contrasts. More concrete and well developed notes on Shapiro’s views are left for a more convenient opportunity.
  • 23
de Castro Alves 2018, section I.
  • 24
The problem of algorithm identity has already received attention in the literature, though. Yiannis Moschovakis, for instance, is a locus classicus: he proposes a conception of algorithm identity based on the idea of recursor isomorphism. The reader is referred to Moschovakis 2001 for an explicit formulation of his thesis (p.18). For a criticism of Moschovakis’ stance and a sceptical view on the possibility of giving this question a proper answer, see the homonymous Gurevich 2012; and also Blass, Dershowitz and Gurevich 2009. San Mauro 2018 also discusses the surroundings of this problem.
  • 25
de Castro Alves 2018 (e.g. all subsections of III.2 and IV.2).
  • 26
For a case built specifically against the completeness of CTT, see L. Kalmár, 1959, “An Argument Against the Plausibility of Church’s Thesis”, in A. Heyting (ed.), Constructivity in Mathematics, Amsterdam: North-Holland: 72-80.
  • Article info CDD: 511.3

Publication Dates

  • Publication in this collection
    21 Oct 2020
  • Date of issue
    Jul-Sep 2020

History

  • Received
    18 Apr 2020
  • Reviewed
    11 Sept 2020
  • Accepted
    14 Sept 2020