

The fundamental aim of this paper is to correct a harmful way of interpreting an erroneous remark made by Gödel at the Congress of Königsberg in 1930. Although Gödel's fault is rather venial, its misreading has produced, and continues to produce, dangerous fruits: namely, applying the incompleteness Theorems to the full second-order Arithmetic and deducing the semantic incompleteness of its language from these same Theorems. The first three sections are introductory and serve to define the inherently semantic languages and their properties, to discuss the consequences of the expression order used in a language, and to examine some questions about semantic completeness. In particular, it is highlighted that a non-formal theory may be semantically complete despite using a semantically incomplete language. Finally, an alternative interpretation of Gödel's unfortunate comment is proposed.

The adjective formal is often abused, in violation of the original meaning that can rightly be called Hilbertian. Indeed, "formal system" is commonly understood as a synonym for "axiomatic system", although no one doubts the necessary existence of specifically semantic axioms in various mathematical disciplines. This superficiality, together with the wrong meaning given to certain affirmations of Gödel, contributes to producing the serious mistakes that we will show.

The best definition of a formal system is probably the one given by Lewis in 1918 [

A mathematical system is any set of strings of recognizable marks in which some of the strings are taken initially and the remainder derived from these by operations performed according to rules which are independent of any meaning assigned to the marks.

So, it is a framework that, starting from certain strings of characters with no meaning (the formal axioms), produces (deduces) other strings with no meaning (the theorems) by use of operations (deductive rules) which never make use of any meaning assigned to the marks. To get from that sort of "abacus" a real scientific discipline, we must interpret, if this is possible, the characters and strings, so that: a) the axioms are true; b) the deductive rules are sound, i.e. capable of producing only true meaningful strings when starting from true meaningful strings. Such an interpretation is called a sound model. This conceptualization, revolutionary for the times, is also present in his contemporaries Bernays and Post (see, for example, Dreben and Heijenoort's introduction to Gödel's first works in [

[...] It is surely obvious that every theory is only a scaffolding or schema of concepts together with their necessary relations to one another, and that the basic elements can be thought of in any way one likes. If in speaking of my points I think of some system of things, e.g. the system: love, law, chimney-sweep… and then assume all my axioms as relations between these things, then my propositions, e.g. Pythagoras’ theorem, are also valid for these things. In other words: any theory can always be applied to infinitely many systems of basic elements. [...] a concept can be fixed logically only by its relations to other concepts. These relations, formulated in certain statements I call axioms, thus arriving at the view that axioms… are the definitions of the concepts. I did not think up this view because I had nothing better to do, but I found myself forced into it by the requirements of strictness in logical inference and in the logical construction of a theory.

Incidentally, the fact that none of the mentioned authors rigorously uses the adjective formal is not surprising. Since for all of them the forthright belief (or hope) was that every axiomatic theory was formalizable, the formal dressing was nothing more than the correct way of presenting any axiomatic discipline. Gödel himself is one of the first to use the term carefully and accurately, owing to his attention in distinguishing meticulously between Mathematics and Metamathematics and to his awareness of the expressive limits of formality (which he would help to shape). As a matter of fact, non-formal axiomatic theories are necessary to express, and possibly decide, all that Mathematics was thought to deal with. Moreover, the condition of categoricity is feasible only in non-formal axiomatic systems^{1}.

The concept of formality requires at least one clarification: the question is simply whether the formal systems coincide with the recursively (or effectively, assuming the Church-Turing Thesis) axiomatizable systems. When the works of Church and, especially, Turing [

In consequence of later advances, in particular of the fact that due to A. M. Turing’s work a precise and unquestionably adequate definition of the general notion of formal system can now be given, a completely general version of Theorems VI and XI [the two incompleteness Theorems] is now possible. That is, it can be proved rigorously that in every consistent formal system that contains a certain amount of finitary number theory there exist undecidable arithmetic propositions and that, moreover, the consistency of any such system cannot be proved in the system.

He clarifies, in a footnote of the same text, that:

In my opinion the term “formal system” or “formalism” should never be used for anything but this notion. [...]

Gödel's proposal was therefore to make the effectively axiomatizable systems coincide, by definition, with the formal systems. Unfortunately, this suggestion was not successful. Certainly, it is just a matter of convention; but if we maintain the original definition of formal, the two concepts are different. Consider a theory that deduces only by the classical deductive rules. Since these rules are mechanizable, the only obstacle for a machine in simulating the system is the specification of the set of axioms: if it is mechanically reproducible, the system will be effectively axiomatizable. On the other hand, on the basis of the definition of formality, the strings to be called axioms undoubtedly must be exhibitable, but they can be specified in an arbitrary way. So, effective axiomatizability implies formality, but not vice versa.
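
The mechanical half of this distinction can be illustrated with a miniature sketch (the toy system, its axioms, and its single rule are all invented for the example): when both the axiom test and the deductive rules are computable, a machine can enumerate every theorem, which is exactly effective axiomatizability.

```python
# Illustrative toy "formal system" (invented for this sketch): strings of the
# mark 'I'; the axioms are the odd-length strings, and the single meaning-blind
# rule appends two marks. Both ingredients are mechanical, so a machine can
# enumerate every theorem.

def is_axiom(s: str) -> bool:
    # Decidable axiom set: non-empty strings of 'I' marks of odd length.
    return len(s) >= 1 and set(s) == {"I"} and len(s) % 2 == 1

def rule(s: str) -> str:
    # A deductive rule that never consults any meaning: append two marks.
    return s + "II"

def theorems(limit: int) -> list[str]:
    """Enumerate all theorems of length <= limit by closing the axioms under the rule."""
    found: set[str] = set()
    frontier = [("I" * n) for n in range(1, limit + 1) if is_axiom("I" * n)]
    while frontier:
        s = frontier.pop()
        if s in found or len(s) > limit:
            continue
        found.add(s)
        frontier.append(rule(s))
    return sorted(found, key=len)

print(theorems(5))  # every odd-length string of marks up to the bound
```

If, instead, the axiom set is exhibitable but not mechanically reproducible, the predicate `is_axiom` cannot be written as a program: formality survives while effective axiomatizability fails.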

An important example of this case can be built from the formal Peano Arithmetic (PA). Assuming that this theory is consistent, we can add as new axioms the class of its statements that are true in the intuitive (or standard) model. By construction, this forms a syntactically complete system (PAT) which, as a result of the first incompleteness Theorem, cannot be effectively axiomatizable. Nevertheless, this system can still be called formal: meaning is needed only to determine whether or not a sentence is an axiom. Once that is done, every statement can be reconverted into a meaningless string, since the system can deduce its theorems without making use of meaning. So, there really exists a formal system able to solve the halting problem: cold comfort.
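
How such a system would settle the halting problem can be sketched as follows. Everything here is a hypothetical stand-in: a genuine oracle for PAT-theoremhood is not computable, and `encode_halting` merely abbreviates the arithmetization of "the program halts".

```python
# Hypothetical reduction sketch: the arithmetization and the oracle below are
# stand-ins, NOT real implementations (a genuine PAT oracle is not computable).

def encode_halting(program: str) -> str:
    # Stand-in for the arithmetization of "program halts" as a PA sentence.
    return f"Halt({program})"

def decides_halting(program: str, pat_oracle) -> bool:
    """One query to a PAT-theoremhood oracle settles halting: since PAT is
    syntactically complete and sound, exactly one of H(p) and not-H(p) is a
    theorem, and it is the true one."""
    return pat_oracle(encode_halting(program))

# Toy oracle for demonstration only: pretends exactly one known program halts.
def toy_oracle(sentence: str) -> bool:
    return sentence == "Halt(prints_and_stops)"

print(decides_halting("prints_and_stops", toy_oracle))  # True
print(decides_halting("loops_forever", toy_oracle))     # False
```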

Consider an arbitrary language that, as usual, makes use of a countable^{2} number of characters. By combining these characters in certain ways, some fundamental strings are formed, which we call the terms of the language: those collected in a dictionary. When the terms are semantically interpreted, i.e. a certain meaning is assigned to them, we have their distinction into adjectives, nouns, verbs, etc. Then, a proper grammar establishes the rules of formation of sentences. While the terms are finite, the combinations of grammatically allowed terms form a countably infinite amount of possible sentences.

In a non-trivial language, the meaning associated with each term, and thus with each expression that contains it, is not always unique. The same sentence can enunciate different things, thus representing different propositions. For example, the sentence "it is plain sailing" has a different meaning depending on the circumstances: on board a ship, or in the various cases of figurative sense. How many meanings can be associated with the same term? That is: how many different propositions, in general, can we get from a single sentence? The answer, for a normal semantic language, may be amazing.

Suppose we assign to each term a finite number of well-defined meanings. We could then instruct a computer to consider all the possibilities of interpretation of each term. The computer, to simplify, may assign all the different meanings to an equal number of distinct new terms that it has previously defined. For example, it might define the term "f-sailing" for the figurative use of "sailing" (supposed unique). The machine would then be able, using the grammar rules, to generate all the countably infinite propositions. In this case we will say that, in the specific language, the meaning has been deleted^{3}. More generally, we have this case whenever the different meanings allowed for each term are effectively enumerable: even in the case of a countably infinite amount of meanings, the computer can define a countably infinite number of new terms and associate only one meaning to each term, so as to establish a biunivocal correspondence between sentences and propositions. So, the machine could list all of them by combination.
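
The disambiguation procedure just described can be sketched mechanically; the sample lexicon and the `term#i` naming convention are invented for the example.

```python
# Sketch of "deleting the meaning": when the meanings of each term are
# effectively enumerable, a machine can mint one fresh term per meaning,
# restoring a one-to-one match between terms and meanings (and hence
# between sentences and propositions). Data and naming are illustrative.

def delete_meaning(lexicon: dict) -> dict:
    """Return a new lexicon in which every term carries exactly one meaning."""
    disambiguated = {}
    for term, meanings in lexicon.items():
        if len(meanings) == 1:
            disambiguated[term] = meanings[0]
        else:
            for i, meaning in enumerate(meanings):
                disambiguated[f"{term}#{i}"] = meaning  # fresh unambiguous term
    return disambiguated

lexicon = {"sailing": ["navigating a ship", "easy progress (figurative)"],
           "plain": ["flat country", "simple"]}
print(delete_meaning(lexicon))
```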

Hence, by definition, we will say that a language is inherently semantic (i.e. having a non-eliminable meaning) if it uses at least one term with a not effectively enumerable amount of meanings; with the possibility, which we will discuss soon, that this quantity is even uncountable. From the fact that a sentence represents more than one proposition if and only if it contains at least one term interpreted in different ways, an equivalent condition for inherent semanticity follows: a language is inherently semantic if and only if the set of all possible propositions is not effectively enumerable.

A first important example of such a language was considered in the previous section. The fact that the axioms of PAT are not effectively enumerable proves that in the expression "true statement in the standard model" the term true has a not effectively enumerable (although enumerable) amount of distinct meanings. So, the phrase belongs to an inherently semantic language.

When the set of all possible propositions is enumerable but not effectively enumerable, it is still possible to define a countably infinite amount of new terms and to associate only one meaning to each term (thus re-establishing a biunivocal correspondence between sentences and propositions); but this operation cannot be performed by a machine. Going back to the example of PAT: an inherently semantic phrase is used to define all its new axioms, but then every axiom is formulated by its unique symbolic representation in PA. Every proposition of PAT "true in the standard model" is explicitly replaced with an appropriate formula of PA. So, in this case, the "definition of new terms" consists precisely in using the formulas of PA to express the axioms of PAT; however, this is a non-mechanizable task.

Now consider the case of an inherently semantic language in which the set of all possible propositions is not enumerable, i.e. in which there exists at least one term with an uncountable quantity of meanings. Due to the uncountability of the properties of the standard natural numbers (N), i.e. of the set of all subsets of N, P(N), this case is especially interesting, because only a language of this kind is capable of expressing these properties. This time it is not possible to define new terms so as to associate a unique meaning to each of them, because the number of all possible strings is only countable^{4}. Nor can we rearrange things so as to re-associate an at most countably infinite quantity of interpretations to each term, because in this way we could still get only a countable total amount of propositions. There is no way to avoid that at least one term conserves an uncountable number of interpretations.
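
The uncountability of P(N) invoked here rests on Cantor's diagonal argument, which can be sketched directly; the sample enumeration is an arbitrary illustrative choice.

```python
# Diagonal sketch: given ANY enumeration of subsets of N, presented as a
# function enumeration(i, n) -> 0/1 membership of n in the i-th set, the
# diagonal set differs from the i-th listed set at the point i, so no
# countable list can exhaust P(N).

def diagonal(enumeration):
    """Characteristic function of a set absent from the given enumeration."""
    return lambda n: 1 - enumeration(n, n)

# Sample enumeration (arbitrary choice): the i-th set collects the multiples of i+1.
def listed(i: int, n: int) -> int:
    return 1 if n % (i + 1) == 0 else 0

d = diagonal(listed)
# d disagrees with every listed set on the diagonal:
print(all(d(i) != listed(i, i) for i in range(100)))  # True
```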

Although this is just what really happens in every usual natural language, this feature, at first, might surprise or even be considered unacceptable. Undoubtedly, all the meanings that will ever be assigned to any settled word are only a countable number: indeed, finite! But these meanings cannot be specified once and for all: the fact remains that the possible interpretations of the term vary within an infinite collection. Moreover, a collection not limited by any prefixed cardinality. Some classic paradoxes can be interpreted as a confirmation of this property. Richard's paradox^{5}, for example, can be interpreted as a meta-proof that the semantic definitions are not countable, i.e. that they are conceivably able to define each element of a set with cardinality greater than the countable one (and therefore each real number). The very technique used in the diagonal reasoning reveals that natural language is able to adopt different semantic levels (or contexts), "looking from the outside" at what was defined "before", namely, at what was previously said by the same language. Identical words used in different contexts have a different meaning, and there is no limit to the number of contexts, including nested ones. On the other hand, Berry's paradox clearly shows that a finite amount of symbolic strings, differently interpreted, is able to define an infinite amount of objects. Here the key of the argument is again the use of two different contexts to interpret the verb "define".

Finally, we consider a consistent arbitrary axiomatic system (S_{A}). We wish S_{A} to be able to express, and possibly decide^{6}, all the properties of the natural numbers. In particular, it must be capable of distinguishing the properties from one another, in order to deduce, in general, different theorems starting from different properties. Admitting, as usual, that S_{A} makes use of a countable number of symbols, if we interpret the theory in a conventional model, this will associate a single meaning to each sentence. So, the interpreted sentences will be only an enumerable amount and they cannot express all the properties of the natural numbers. Therefore, the only possibility is to consider a non-conventional model of the system, able to assign more than a single meaning (indeed a quantity of at least 2^{ℵ0}) to at least one sentence, and then able to verify the remaining requirements for a normal model^{7}. That is what really happens in the so-called full second-order Arithmetic (FSOA): its standard model, which can be proved unique up to isomorphism, is of this kind. For what we have said, this theory is an inherently semantic (non-formal) axiomatic system. Definitively, no kind of interpretation can allow an axiomatic system to express and study all the properties of the natural numbers in compliance with formality.

The first-order predicate calculus of Logic (defined by Russell and Whitehead in [^{8} can only range over variable-elements of the universe (U) of the model. In 1929 Gödel proved the semantic completeness of this theory, namely, that all the valid (i.e. true in every model) sentences are theorems [

The second-order predicate calculus of Logic, or second-order classical Logic, extends the use of existential quantifiers to predicates, i.e. to properties of the elements of U. From the standpoint of Set theory, the predicate-variables vary inside the set of all subsets of U, that is, P(U). In non-trivial cases, U is infinite and consequently P(U) always has an uncountable cardinality. If we consider an arbitrary non-trivial axiomatic theory based on second-order classical Logic, we thus have three fundamental cases:

1) The axioms and/or rules of the theory nevertheless ensure formality. For this purpose it is necessary (although not sufficient) that they limit the variability of the predicates to a countable subset of P(U); normally, this is achieved by means of appropriate comprehension axioms. This case is known as general, or Henkin, Semantics.

2) There is no limit on the variability of the predicates in P(U). This full interpretation is known as standard Semantics. In this case a non-formal system is always obtained, since it must be possible to express an uncountable quantity of propositions.

3) The variability of the predicates in P(U) is restricted, but not enough to ensure formality. The propositions may be uncountable or countable but, in the latter case, by what we have observed, not effectively enumerable.
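
For case 1), the comprehension axioms can be written in the usual textbook formulation (given here as a standard illustration, not as a quotation from any of the discussed authors): for each formula φ in which the predicate variable X does not occur free, one instance of

```latex
\exists X \,\forall x \,\bigl( X(x) \leftrightarrow \varphi(x) \bigr)
```

Since the formulas φ of a countable language are only countably many, the schema yields countably many instances, consistently with restricting the predicates to a countable subset of P(U).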

In case 1) the theory is always semantically complete; what happens in the remaining cases?

FSOA is precisely an important example of case 2). In it, an induction principle valid for "any property of the natural numbers" is laid down as an axiom scheme. Definitely, this is a full interpretation of the inductive rule, which implies the uncountability of the sentences of the theory^{9}. By the way, it is well known that such a full interpretation is necessary to achieve categoricity (see e.g. [
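
In the full reading, the induction principle can be written as the single second-order statement below (standard notation; under the standard Semantics the variable X ranges over all of P(N), i.e. over 2^{ℵ0} properties):

```latex
\forall X \,\Bigl[ \bigl( X(0) \,\wedge\, \forall n \,\bigl( X(n) \rightarrow X(n+1) \bigr) \bigr) \rightarrow \forall n \, X(n) \Bigr]
```

It is this unrestricted range of the quantifier over X that is the source of the uncountability and, with it, of the non-formality of the system.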

At this point an explanation is necessary which, surprisingly, I have never found in any publication. Either from the failure of the compactness Theorem or from the L-S Theorem, what one really concludes is precisely the semantic incompleteness of the language of FSOA, i.e. of the full second-order Logic, and not of the system. Not every system expressed in the full second-order Logic (or, more generally, in any other semantically incomplete language) has necessarily to be semantically incomplete^{10}. In particular, for a categorical (non-trivial) system (which always, by the L-S Theorem, uses a semantically incomplete language), nothing prohibits a priori the possibility that it is syntactically (or semantically^{11}) complete. Indeed, such a system, which necessarily will be non-formal, could deduce all the true sentences using an appropriate kind of inherently semantic deduction (more in [

On the other hand, a source of mistakes is undoubtedly the current widespread tendency to classify the axiomatic Theories by looking at the expression order (first order, second order, etc.) without making the necessary distinctions. This goes together with the disuse of the term formal in its pure Hilbertian sense (so used also by Gödel). Regardless of whether the semantics is full (or standard), general (i.e. accomplishing formality) or intermediate (the previous case 3), the systems of second (or higher) order are normally considered as those for which semantic completeness, and the properties related to it, does not apply, as opposed to those of first order. But, firstly, a first-order classical system can have proper axioms that violate either semantic completeness or formality itself (see [^{12}. It only states that, in this case, the theory can be re-expressed in a simpler first-order language. Certainly, this property highlights the particular importance of the first-order language^{13}. But it should not be radicalized. Grouping the axiomatic theories by expression order is, in general, misleading as to their basic logical properties, unless the actual effect of the axioms is considered, because these properties are only a consequence of the premises. The essential tool for classification remains the accomplishment of the Hilbertian formality.

Starting from the FSOA, formality can be restored by limiting the induction principle to the formally expressible properties, thus re-establishing an injective correspondence between formulas and properties: in this way the system PA is obtained, which, however, is unable to express, and therefore also to decide, infinitely many (namely, again 2^{ℵ0}, since this number remains unchanged by subtraction of ℵ0) properties of the natural numbers. Now the induction principle is no longer able to reject all the models non-isomorphic to N, so categoricity cannot be achieved (see e.g. [
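
The restriction that produces PA can be made explicit: the single second-order induction axiom is replaced by the first-order schema below (standard formulation), with one instance per formula:

```latex
\bigl( \varphi(0) \,\wedge\, \forall n \,\bigl( \varphi(n) \rightarrow \varphi(n+1) \bigr) \bigr) \rightarrow \forall n \,\varphi(n)
\qquad \text{for each first-order formula } \varphi(x).
```

The formulas, and hence the expressible properties, number at most ℵ0; removing a countable family D from P(N) leaves |P(N) \ D| = 2^{ℵ0}, which is the count of properties that PA cannot even express.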

Finally, we recall that the incompleteness Theorems are valid for any effectively axiomatizable (and therefore formal) system in which the general recursive functions are definable. Therefore, they can be applied to PA but not to the non-formal FSOA. For PA, the first Theorem reveals a further limitation: if one admits its consistency, even among the properties of the natural numbers that this theory is able to express, there are infinitely many (ℵ0) that it cannot decide.

The third volume of the aforementioned Kurt Gödel's Collected Works, published in 1995, collects the unpublished writings of the great Austrian logician. According to the editors, document *1930c is, in all probability, the text presented by Gödel at the Königsberg congress on September 6, 1930 [

In the first part of the document, Gödel presents his semantic completeness Theorem, extended to the “restricted functional calculus” (certainly identifiable with the first-order classical Logic: to be convinced, just consult [

[...] If the completeness theorem could also be proved for the higher parts of logic (the extended functional calculus), then it could be shown in complete generality that syntactical completeness follows from monomorphicity [categoricity]; and since we know, for example, that the Peano axiom system is monomorphic [categorical], from that the solvability of every problem of arithmetic and analysis expressible in Principia mathematica would follow. Such an extension of the completeness theorem is, however, impossible, as I have recently proved [...]. This fact can also be expressed thus: The Peano axiom system, with the logic of Principia mathematica added as superstructure, is not syntactically complete.

In summary, Gödel affirms that it is impossible to generalize the semantic completeness Theorem to the "extended functional calculus". Indeed, were it possible, the Peano axiomatic system structured with the logic of Principia Mathematica (PM) would also be semantically complete; and since this theory is categorical, it would follow that it is also syntactically complete. But precisely this last statement is false, as he, to everyone's surprise, announces he has proved.

Now, regardless of what Gödel meant by “extended functional calculus”, this affirmation contains an error. We have in fact two cases:

a) If by "Peano axiomatic system structured with the logic of the PM" Gödel means the formal theory PA, or any other formal arithmetical system, the error is precisely to regard it as categorical.

b) If, instead, he alludes to the only categorical arithmetic, that is, FSOA, then Gödel errs in applying his first incompleteness Theorem to it.

Of the two, it is precisely the second reading that has become established, but, shockingly, without reporting the error; rather, exalting the merit of having detected for the first time the semantic incompleteness of the full second-order Logic. Surely, in the formation of this opinion an important influence has been the following sentence, contained in the second edition (1938) of Grundzüge der theoretischen Logik by Hilbert and Ackermann [

Let us remark at once that a complete axiom system for the universally valid formulas of the predicate calculus of second order does not exist. Rather, as K. Gödel has shown [K. Gödel, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter systeme, Mh. Math. Physik Vol. 38 (1931)], for any system of primitive formulas and rules of inference we can find universally valid formulas which cannot be deduced.

The echo of Gödel's unfortunate words at the Congress pushes the authors (probably Ackermann, given Hilbert's age) to attest that the first incompleteness Theorem concludes precisely the semantic incompleteness of second-order Logic! False. Furthermore, in our opinion, that (true) conclusion cannot be derived from the incompleteness Theorems.

Sadly, today the belief that the incompleteness Theorems can even be applied to FSOA and, above all, that they have as a corollary the semantic incompleteness of the full second-order Logic is almost unanimous. This can be seen both in Wikipedia and in the most specialized papers. Even in the introductory note to the aforementioned document, Goldfarb writes [

Finally, Gödel considers categoricity and syntactic completeness in the setting of higher-order logics. [...] Noting then that Peano Arithmetic is categorical―where by Peano Arithmetic he means the second-order formulation―Gödel infers that if higher-order logic is [semantically] complete, then there will be a syntactically complete axiom system for Peano Arithmetic. At this point, he announces his incompleteness theorem: “The Peano axiom system, with the logic of Principia mathematica added as superstructure, is not syntactically complete”. He uses the result to conclude that there is no (semantically) complete axiom system for higher-order logic.

Thus Goldfarb interprets, without the slightest doubt, that Gödel refers to the categorical second-order Arithmetic. But in that case the incompleteness Theorem could not be applied! Goldfarb neglects that the full second-order induction principle, the only one capable of ensuring categoricity, generates an uncountable quantity of axioms, so that the effective axiomatizability of the system cannot be accomplished. As a matter of fact, the FSOA theory could be syntactically complete.

Sometimes, the semantic incompleteness of the full second-order Logic is "concluded" by an alternative approach that avoids passing through the (alleged) syntactic incompleteness of FSOA (a Freudian stimulus?). It is supposed, by contradiction, that the valid statements of the full second-order Logic are effectively enumerable theorems, and then, by applying the incompleteness (or Tarski's) Theorem, an absurdity is obtained (emblematic examples, respectively, in [

The main reasons for this unfortunate misunderstanding are probably due to ambiguities in the terminology used, both ancient and modern.

The expression "extended predicate calculus" is used for the first time by Hilbert in the first edition (1928) of the aforementioned Grundzüge der theoretischen Logik where, without doubt, it indicates the full second-order Logic. This Logic was considered for the first time in the Principia Mathematica (PM). The belief that Gödel, in the aforementioned phrase, refers to FSOA (explanation b.) implies that by "extended functional calculus" he means the same thing. But in which work has he shown, or at least suggested, that the incompleteness Theorems can be applied to the full second-order Logic? In none.

In his proof of 1931, Gödel refers to a formal system with a language that, in addition to the first-order classical logic, allows the use of non-bound functional variables (i.e. without the possibility of quantifying over them) [

If we imagine that the system Z is successively enlarged by the introduction of variables for classes of numbers, classes of classes of numbers, and so forth, together with the corresponding comprehension axioms, we obtain a sequence (continuable into the transfinite) of formal systems that satisfy the assumptions mentioned above [...]

Here he speaks explicitly of comprehension axioms and formal systems. Finally, in the publication of 1934, which contains the last and definitive proof of the first incompleteness Theorem, Gödel, with the aim both of generalizing and of simplifying the proof, allows quantification over both the functional and the propositional variables: a declared type of second order. However, appropriate comprehension axioms limit the number of propositions to a countable infinity [

Different formal systems are determined according to how many of these types of variables are used. We shall restrict ourselves to the first two types; that is, we shall use variables of the three sorts p, q, r,... [propositional variables]; x, y, z,... [natural numbers variables]; f, g, h, ... [functional variables]. We assume that a denumerably infinite number of each are included among the undefined terms (as may be secured, for example, by the use of letters with numerical subscripts). [...] For undefined terms (hence the formulas and proofs) are countable, and hence a representation of the system by a system of positive integers can be constructed, as we shall now do.

Therefore, we are again in case 1) of the third section: far from the full second order. Nevertheless, in the introduction to the same paper, Kleene, in summarizing Gödel's work, does not avoid commenting ambiguously [

Quantified propositional variables are eliminable in favor of function quantifiers. Thus the whole system is a form of full second-order arithmetic (now frequently called the system of “analysis”).

But he can only mean that the whole system is a formal version (perhaps as large as possible) of the full second-order Arithmetic. Could it be exactly this the "extended functional calculus" to which Gödel was referring in the words examined above, pronounced at the Congress? We will discuss this in the next section.

Another source of mistakes is probably related to the use of the term metamathematics. Although Gödel intends it in the modern broad sense, which includes any kind of argument beyond the coded formal language of Mathematics (so, also the possibility of using inherently semantic inferences and/or making use of the concept of truth), in his theorems he always employs this term restricted to a formalizable (though often not yet formalized), deductive, and indeed even decidable, use: i.e. only for the sake of brevity. In the short paper that anticipates his incompleteness Theorems, for example, Gödel invokes a metamathematics able to decide whether a formula is an axiom or not [

[...] IV. Theorem I [first incompleteness Theorem] still holds for all ω-consistent extensions of the system S that are obtained by the addition of infinitely many axioms, provided the added class of axioms is decidable, that is, provided for every formula it is metamathematically decidable whether it is an axiom or not (here again we suppose that in metamathematics we have at our disposal the logical devices of PM). Theorems I, III [as the IV, but the added axioms are finite], and IV can be extended also to other formal systems, for example, to the Zermelo-Fraenkel axiom system of set theory, provided the systems in question are ω-consistent.

But in both of the subsequent rigorous proofs, he formalizes this process, which would now be called metamathematical, using the recursive functions, thus revealing that, in the words just quoted, he refers to the usual "mechanical" decidability. By the same token, even in the theorem that concludes the consistency of the axiom of choice and of the continuum hypothesis with the other axioms of the formal Set Theory, he does the same: he uses metamathematics only as a simplification, stating explicitly that all "the proofs could be formalized" and that "the general metamathematical considerations could be left out entirely" [

As noted, Gödel never put in writing that his proofs of incompleteness may be applied to the uncountable full second-order Arithmetic; furthermore, it looks absolutely unreasonable to believe that he thought so^{14}. In this section, therefore, we will examine the other possibility, namely option a. of the fourth section. It, remember, assumes that in 1930 Gödel mistakenly believed some kind of formal arithmetic to be categorical and, as a consequence of his incompleteness Theorems, believed its language to be semantically incomplete. Is this reasonable (or more reasonable than the previous case)?

Certainly not for the system considered by Gödel in his first proof of 1931: in fact, the semantic completeness Theorem applies to it, as Gödel himself remarks in note 55 of the publication [

As we observed in the third section, apart from the use of the incompleteness Theorems, the existence of non-standard models for PA can be proved by the compactness Theorem, by the upward L-S Theorem, or by a theorem proved by Skolem in 1933 [^{15}. Moreover, despite its fundamental importance for model theory, nobody, except Maltsev in 1936 and 1941, uses it before 1945^{16}. Not much more fortunate is the story of the L-S Theorem. The first proof, by Löwenheim (1915), was simplified by Skolem in 1920^{17}. In both cases, these are downward versions, able to conclude the non-categoricity of the formal theory of the real numbers and of the formal Set Theory, but not of PA. However, Skolem and Von Neumann [^{18}. In any case, the argument continued to enjoy little popularity^{19}, at least until Maltsev's generalization of 1936 [

In this context of disinterest in the topic, Gödel is no exception; moreover, his notorious Platonist inclination pushes him to distrust and/or despise any interpretation that refers to objects foreign to those that he believes exist independently of the considered theory, and which, in all plausibility, he also believes to be unique. As a matter of fact, in the introduction of his first paper on semantic completeness, he shows that he believes even the first-order formal theory of the real numbers to be categorical [

On this basis, one can surmise the following alternative for option a. When he discovers the non-categoricity of the formal arithmetical system to which his original incompleteness Theorems apply, Gödel is not pleased and immediately looks for an extension that, though formal, is able to ensure categoricity. He probably believes he has identified it in a formal version of the full second-order Arithmetic: precisely the one that will be considered in his generalized proof of the first incompleteness Theorem of 1934 [

(Assuming the consistency of classical mathematics) one can even give examples of propositions (and in fact of those of the type of Goldbach or Fermat) that, while contentually true, are unprovable in the formal system of classical mathematics. Therefore, if one adjoins the negation of such a proposition to the axioms of classical mathematics, one obtains a consistent system in which a contentually false proposition is provable.

Granting soundness, here he avoids mentioning standard models, perhaps because he is planning a proof more general than the one currently available, valid for a formal arithmetic believed to be categorical. A theory that, as announced in the main affirmation under scrutiny, uses the “extended functional calculus”: inevitably, a semantically incomplete language, due to the syntactic incompleteness together with the (alleged) categoricity.

In any case, Skolem's proof of 1933 should have amazed him: not even the syntactically and semantically complete system of the propositions true in the standard model is categorical. A first disturbing piece of evidence that non-categoricity covers all (non-trivial) formal systems, regardless of syntactic completeness or incompleteness. Gödel, in reviewing Skolem's paper, only observes laconically―finally!―that a consequence of this result, namely the non-categoricity of PA, was already derivable from his incompleteness Theorems [

Finally, Henkin's Theorem of 1949 proves that every formal system (and thus every system to which the incompleteness Theorems could be applied) has a semantically complete language; therefore, if infinite models exist, there cannot be categoricity.
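The logical chain compressed into this sentence can be unfolded explicitly (a standard reconstruction in terms of general, i.e. Henkin, semantics, added for clarity):

```latex
% Henkin (1949): relative to general (Henkin) semantics, the deductive
% calculus of second- and higher-order logic is semantically complete:
\[
  \Gamma \models_{\mathrm{Henkin}} \varphi
  \quad\Longleftrightarrow\quad
  \Gamma \vdash \varphi .
\]
% Completeness of a finitary calculus yields compactness, and with it
% the upward Lowenheim-Skolem property: a theory with an infinite model
% has models of every greater cardinality,
\[
  \mathcal{M} \models \Gamma,\ |\mathcal{M}| \geq \aleph_0
  \;\Longrightarrow\;
  \forall \kappa \geq |\mathcal{M}|\ \exists\, \mathcal{N} \models \Gamma
  \ \text{with}\ |\mathcal{N}| = \kappa .
\]
% Hence no formal system with an infinite model can be categorical.
```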

We summarize briefly the conclusions of this paper:

1) In every usual natural language, the meanings that can be assigned to the terms vary, in general, within a collection not limited by any prefixed cardinality.

2) Richard's paradox can be interpreted as a meta-proof that the semantic definitions are not countable, i.e. that they are conceivably able to define each element of a set of cardinality greater than the countable one (so, each real number). Further, Berry's paradox shows that a finite number of symbolic strings, differently interpreted, can define an infinite number of objects.

3) An axiomatic system that allows us to express and study an uncountable amount of properties cannot be formal.

4) The categorical model of the full second-order Arithmetic is an interpretation that, before satisfying the axioms, associates 2^{ℵ0} different propositions to at least one sentence. The result, thus, is a non-formal axiomatic system.

5) An axiomatic system can be semantically complete despite employing a semantically incomplete language. In particular, the full second-order Arithmetic could be syntactically complete (and therefore, being categorical, also semantically complete).

6) Grouping the axiomatic theories by expression order is, in general, misleading about their fundamental logical properties: these are only a consequence of the premises. The essential criterion of classification remains compliance with Hilbertian formality.

7) The text of Gödel's communication at the conference in Königsberg on September 6, 1930 (never published by him) contains a mistake. In the common understanding, not only does this error go unreported, but it is also wrongly deduced from it that: a) the incompleteness Theorems can also be applied to the categorical arithmetic founded on the full second-order Logic; b) the semantic incompleteness of the full second-order Logic is a consequence of the incompleteness Theorems (or of the ensuing Tarski truth-undefinability Theorem).

8) On the basis of Gödel's publications and of logic itself, the previous interpretation is untenable.

9) As an alternative interpretation of the manuscript in question, it is possible that Gödel is referring to the formal arithmetic considered in his proof of 1934, in which quantification over the functional and propositional variables is allowed. If so, in 1930 he believed this theory to be categorical and, as a consequence of its syntactic incompleteness, equipped with a semantically incomplete language. This explanation is consistent with the fact that neither of his original semantic completeness theorems can be applied to this system, due to the said quantification.

10) This alternative interpretation also explains why, as the difficulty of the categoricity condition became more and more evident over time, Gödel never repeated similar affirmations. Finally, he plausibly never corrected the phrase because he was not worried about amending an unpublished text.

Giuseppe Raguní (2015). Consequences of a Godel's Misjudgment. Open Access Library Journal, 2, 1-12. doi: 10.4236/oalib.1101820