Talk:First-order logic/Archive 2


Formula check

Many wise men have read this, but just to make sure: is the following correct?

"there is someone who can be fooled every time" can be expressed as:

(can't copy formula) where "∃x" means "there exists (an) x", "∧" means "and", and Canfool(x,y) means "(person) x can be fooled (at time) y".  —Preceding unsigned comment added by 84.120.158.33 (talk) 09:41, 2 November 2007 (UTC) 
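For what it is worth, a natural formalization of "there is someone who can be fooled every time", under the reading given above, would be <math>\exists x\, \forall y\, \operatorname{Canfool}(x,y)</math>. If the lost formula also contained a conjunction (the "and" glossed above), it presumably guarded the variables with predicates such as Person and Time, e.g. <math>\exists x\,\bigl(\operatorname{Person}(x) \wedge \forall y\,(\operatorname{Time}(y) \rightarrow \operatorname{Canfool}(x,y))\bigr)</math>. This is only a sketch of what was probably meant, not the original poster's formula.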

Can we use the word "set" in the vocabulary section?

"A set of predicate variables (or relations) each with some valence (or arity) ≥1, which are often denoted by uppercase letters P, Q, R,... ."

Like the above statement, the vocabulary section of this article keeps using the expression "a set of" in defining concepts such as predicate variable and constant. But what does a set mean in this context? What is the definition of it? Isn't it a concept of ZFC (or other equivalent 1st order theories) which is again dependent on the 1st order logic? I'm worried if it is a circular definition. --Acepectif 06:50, 5 May 2007 (UTC)

Yes, it is somewhat circular, but that is the way foundational mathematics is done, and it is not Wikipedia's task to correct that. Historically, Hilbert's program tried to find a non-circular way to formalise mathematical reasoning, but this hope was shattered by Gödel's incompleteness theorem and related discoveries in the 1930s. Today most mathematicians would probably say one may well conduct a mathematical argument without first having formalised the process of arguing mathematically, and it is only the formalisation that needs set theory. Or, a bit more provocatively, at its root it is merely an article of faith that mathematics works at all. –Henning Makholm 08:43, 5 May 2007 (UTC)
After further thought, perhaps it would be good for the article to say that first-order logic is but a mathematical model of the process of reasoning mathematically, and is not identical to mathematical reasoning itself. It is a very good model, which captures quite accurately which kinds of arguments most real mathematicians will accept as valid proofs. But mathematicians are in general capable of forming those judgements without translating the purported proof into first-order logic, and it is often possible to get at least master's degrees in mathematics without ever having seen the precise rules of first-order logic in a classroom setting. If only I knew how to say that succinctly in the article without sounding too POVish and argumentative ... –Henning Makholm 21:15, 6 May 2007 (UTC)
Henning Makholm is correct (IMHO), and another good reason is this. There are already complaints that the article is too technical. Personally, I do not agree with those complaints, but, in consideration of the people who feel that way, it would only further obfuscate the matter to point out that the phrase "a set of" is meant in the sense of naive set theory (e.g., Paul Halmos's book), not in the sense of axiomatic set theory. In other words, it would not (IMHO) help to point out that the paragraphs in the article that attempt to explain axiomatic set theory are not themselves to be interpreted as an informal expression of an underlying axiomatic meta-description of axiomatic set theory. DeaconJohnFairfax (talk) 18:43, 16 July 2008 (UTC)

Algorithm for generating axiom or checking them?

The article says:

When the set of axioms is infinite, it is required that there be an algorithm which can decide for a given well-formed formula whether or not it is an axiom. There should also be an algorithm which can decide whether a given application of an inference rule is correct or not.

I thought the correct statement would be:

When the set of axioms is infinite, it is required that there be an algorithm which can generate all axioms by repeated application, so that each axiom can be generated in a finite number of steps. The same is true for inference rules. For each inference rule there should be an algorithm which implements it, so that each application can be carried out in a finite number of steps.

Any comments? Dan Gluck 15:34, 11 July 2007 (UTC)

Actually, no algorithms are required for general first-order theories. I don't know how everyone missed that. For example the full theory of any first-order structure is a first-order theory, but in most cases there is no way to decide whether a given sentence is in there or not, and no way to enumerate the theory. I removed the two sentences from the article. — Carl (CBM · talk) 16:26, 11 July 2007 (UTC)

OK, but still you need an algorithm to generate the axioms, otherwise how do you generate them? In other words, are there any objections to my adding the paragraph I have suggested? Dan Gluck 18:25, 11 July 2007 (UTC)

If you can recognize an axiom algorithmically, you can generate them by generating all finite strings of symbols (for example, in order of increasing length) and filtering out those that are not axioms. Merely requiring that the set of axioms is recursively enumerable is too weak a condition to be of much use; if you don't have an algorithm for deciding whether a formula is an axiom, you won't be able to decide whether a purported proof is valid. –Henning Makholm 19:39, 11 July 2007 (UTC)
Thanks, this is much clearer now. Dan Gluck 06:43, 12 July 2007 (UTC)
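To make the generate-and-filter idea above concrete, here is a minimal sketch in Python. The symbol set and the is_axiom test are toy stand-ins (a single propositional axiom schema), not the article's axiom system; the only assumption is that axiomhood is decidable.

    from itertools import count, product

    ALPHABET = list("PQ→()")  # toy symbol set; a real logic would use its full signature

    def is_axiom(s):
        # Toy stand-in for a decision procedure: recognizes instances of the
        # schema A→(B→A) where A and B are the single letters P or Q.
        return (len(s) == 7 and s[1] == "→" and s[2] == "(" and s[4] == "→"
                and s[6] == ")" and s[0] in "PQ" and s[3] in "PQ" and s[5] == s[0])

    def enumerate_axioms():
        # Generate all finite strings in order of increasing length and keep
        # those that pass the (decidable) axiomhood test. Runs forever, but
        # every axiom is reached after finitely many steps.
        for length in count(1):
            for symbols in product(ALPHABET, repeat=length):
                candidate = "".join(symbols)
                if is_axiom(candidate):
                    yield candidate

With this toy test the enumeration eventually yields exactly the four instances P→(P→P), P→(Q→P), Q→(P→Q) and Q→(Q→Q). The point of the remark above is the converse direction: a mere enumeration like this, without a decidable is_axiom test, would not let one check a purported proof.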
Any set of axioms generates a first order theory, whether or not that set is computable or computably enumerable. For example the theory of true arithmetic (the set of sentences of PA true in the standard model of the natural numbers) is a first-order theory but it has no effective axiom system. I would not support adding any claim to the contrary to the article. — Carl (CBM · talk) 20:57, 11 July 2007 (UTC)
That is true. My point was that whereas restricting to decidable axiom sets is useful (i.e., one does not have to do it, but it may be useful in particular applications), there is no good reason to restrict only to recursively enumerable axiom sets. It excludes many of the benefits of assuming decidability, and does not improve the expressivity of the logic; any theory with an r.e. set of axioms has an equivalent (in terms of which formulas are provable) theory with a decidable set of axioms. Though the claims quoted above seem to have disappeared now, I think that for full clarity the article should explicitly say that one often, but not always, considers theories with the added requirement that axiomhood is decidable. –Henning Makholm 21:35, 11 July 2007 (UTC)
We might as well use the standard terminology, though. First-order theories with a decidable language and a decidable set of axioms are called effectively axiomatized in the terminology I am used to. Maybe there are other synonyms. Without an extra modifier, there is no reason to think that a first order theory need be effectively axiomatized, and in fact it won't be in the common case where the theory is a complete, undecidable theory of a particular structure. — Carl (CBM · talk) 21:42, 11 July 2007 (UTC)
I think it depends a lot on the subdiscipline one is working in whether that case is in fact "the common" one. In computer science (whence I come) a theory where the axioms are not decidable would be considered a rather strange one. But I agree that the term "first-order theory" does not in itself imply that it is effectively axiomatized. I'm merely concerned that some readers might not notice that there is no such requirement unless we explicitly note that it is something one can add. –Henning Makholm 22:22, 11 July 2007 (UTC)

This article makes a separation between logical axioms, of first-order logic, and non-logical axioms, of the first order theory one is considering. Your arguments all referred to the non-logical axioms. What about the logical axioms? There are versions with a finite number of such. Is it legitimate to have a version with logical axioms which are not decidable? Dan Gluck 06:43, 12 July 2007 (UTC)

I don't know of any presentation of ordinary first-order logics which has an undecidable set of logical axioms, so it is not clear to me that the question is relevant. I don't see any inescapable reason why one could not define a logic with an undecidable set of logical axioms if one wanted (except for the question of whether the result is useful enough to be called a "logic"). But that logic would be a different logic from the ordinary first-order logic, and therefore not directly relevant to this article. –Henning Makholm 19:54, 12 July 2007 (UTC)
The reason I'm asking is that I think the statements about semidecidability assume semidecidability of the axioms, they do not however assume their decidability. In its present form the article nowhere states anything about these assumptions. Dan Gluck 06:36, 13 July 2007 (UTC)
The article explicitly presents a decidable set of logical axioms for first-order logic, and mentions that there are other well-known axiomatizations of FOL that would also work. Those other axiomatizations also have decidable sets of logical axioms; this is fact, not assumption. I don't see what there is to worry about. Which sentence could possibly be relevant in the article that even mentions undecidable sets of logical axioms? Keep in mind that logics with such axiomatizations are not FOL. –Henning Makholm 21:23, 13 July 2007 (UTC)
No, there is nothing called "first order logic" with an undecidable set of logical axioms. But phrases such as "The axioms of first-order theories" uniformly refer to the nonlogical axioms, since the logical axioms are fixed across all first order theories, and in particular a first order theory is usually defined as (the logical consequences of) a set of (nonlogical) axioms in a fixed language. — Carl (CBM · talk) 22:54, 13 July 2007 (UTC)
OK. My point was that the article only gives an example of an axiomatization of first-order logic, and while in this one, and in any other I have run into, the axioms are decidable, I wanted to be sure that this is part of the definition of a first-order logic, i.e. that no one can give an axiomatization where the axioms are undecidable and call it "first-order logic". You have confirmed that this is indeed the situation. Thanks. Dan Gluck 09:46, 14 July 2007 (UTC)

Quantifier axioms

I can't read the quantifier axioms due to unicode problems, and while I've figured out two of them on my own and changed the problematic characters to math formulas, I didn't figure out the other two. Can anyone please change the problematic characters to math formulas? By the way, the second axiom is in fact easily proved from the first one, hence it is not really an axiom. Dan Gluck 20:50, 11 July 2007 (UTC)

Is this still a problem?--Cronholm144 10:06, 16 July 2007 (UTC)

No, thanks. Dan Gluck 11:58, 16 July 2007 (UTC)

Major edit

I've done a major cleanup and added explanations and examples. Hopefully I haven't made any mistakes. Dan Gluck 08:14, 12 July 2007 (UTC)

This Explanation Sounds Backward, Please Explain

74.229.250.5 00:55, 24 August 2007 (UTC)

==Why is first-order logic needed?==

Propositional logic is not adequate for formalizing valid arguments that rely on the internal structure of the propositions involved. For example, a translation of the valid argument

  • All men are mortal
  • Socrates is a man
  • Therefore, Socrates is mortal

into propositional logic yields

  • A
  • B
  • C

which is invalid ( means "therefore"). The rest of the article explains how FOL is able to handle these sorts of arguments.

This is at the top of the article and the symbols seem to describe the syllogism correctly. But then the last sentence says "which is invalid ( means "therefore")." I guess the reader is supposed to see something wrong here, but the sentence doesn't tell us what that might be. Why does it point out the arrow means "therefore"? Is the arrow a problem? It would seem not, because "THEREFORE, Socrates is mortal" is the correct end of the cogent syllogism. This leaves the reader wondering, "What exactly is the problem FOL solves that propositional logic can't?".
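For comparison, a first-order rendering of the same argument (an illustrative sketch, not necessarily the article's exact formulas) does expose the structure that makes it valid: <math>\forall x\,(\operatorname{Man}(x) \rightarrow \operatorname{Mortal}(x)),\ \operatorname{Man}(\mathrm{socrates})\ \vdash\ \operatorname{Mortal}(\mathrm{socrates})</math>. In the propositional translation the three statements become opaque letters, so nothing licenses the step from A and B to C; in the first-order version the shared predicate Man and the universal quantifier do.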

I tried to explain it. Is it better now? Dan Gluck 08:34, 16 July 2007 (UTC)

I think it is better. I may try to rephrase a little, but the emphasis on the structure of the formulas is much better now. — Carl (CBM · talk) 18:59, 16 July 2007 (UTC)

rjm Aug 23, 2007 I believe it would help to simply state that "propositional logic requires the content of the syllogism to be replaced by generic symbols, thus losing the logical connections between the premises. Therefore, propositional logic is unsatisfactory in many such cases." I too had to read the section multiple times to understand why you seemed to be calling the most famous valid syllogism invalid. Clarify that the invalid syllogism is the one with just letters, not the original one about Socrates. Reading the first comment here helped me, but that shouldn't be necessary. rjm Aug 23, 2007

I was taught that → is the predicate calculus dyadic operator "implies", which has its own truth table; I don't think referring to it as "therefore" emphasises this formality. -- Mickraus 11:37, 14 September 2007 (UTC)

Clarity of variable use

In the second paragraph the construction of two propositions within FOL is explained thus:

"In propositional logic these will be two unrelated propositions, denoted for example as p and q. In first-order logic we may denote both as: Man(x), where Man(x) means that x is a man. For x=Socrates we get the first proposition, and for x=Plato we get the second one."

While in mathematics 'x', used as a variable, can denote any element in the context, this is inconsistent both with how variables have been used throughout the history of logic and with how they are used today. Suggest a change to Man(x) and Man(y) for x = Socrates and y = Plato.

Given that the full propositional form of just these simple sentences is something like (Ex)Man(x)& Socrates(x) and (Ey)Man(y)& Plato(y) respectively, it is little wonder that this convention has been maintained over the years. It maintains clarity and allows for manipulation of the domain that may not be possible otherwise.

Wireless 143.252.80.100 09:32, 22 August 2007 (UTC)
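To make the contrast concrete (an illustrative sketch only): the open formula <math>\operatorname{Man}(x)</math> is not itself a proposition, but substituting a constant or binding the variable yields one, e.g. <math>\operatorname{Man}(\mathrm{socrates})</math>, <math>\operatorname{Man}(\mathrm{plato})</math>, or <math>\exists x\,\operatorname{Man}(x)</math>.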

The whole point was to explain that in Man(x) every constant can replace x. Your current suggestion is not beneficial for such an explanation. However, you are invited to suggest other options. Dan Gluck 11:51, 22 August 2007 (UTC)

If you mean that the predicate 'Man' can be applied to every non-logical constant x,y,z... then fine. But 'Man(x)' alone is not a proposition in FOL as is claimed. If you don't want to introduce quantifiers at such an early stage, suggest just using words: 'if you substitute Socrates...' and 'if you substitute Plato...'. Otherwise it is plain unclear.

Wireless 143.252.80.100 12:01, 22 August 2007 (UTC)

OK, I changed it so that it's clear that Man(x) is a relation rather than a sentence. Dan Gluck 06:29, 23 August 2007 (UTC)

I've cleaned up the language of that paragraph, hopefully making it a bit clearer, but keeping all the substance. Hope you don't mind. Wireless99 09:23, 23 August 2007 (UTC)

Wikipedia's |- non-standard

(I am not a logician)

If the standard meaning of |- is used (and this is claimed in the article) then Wikipedia's Modus Ponens is useless.

(1) If |-A->B and |-A then |-B

According to the article this means

(1) If A->B and A are theorems then B is a theorem

I believe this. I also believe it is useless. In doing a deduction one seldom knows that A->B and A are theorems. Most commonly one only knows that they can be deduced from some hypothesis set G. One wishes to conclude that G|-B. However, the Wiki (1) form does not allow this, since neither A->B nor A is, alone, a theorem.

I think what you want is

(2) Rule of Inference: If A->B and A then B.

or

(3) If G|-(A->B) and G|-A then G|-B.

[The same comment applies to the Generalization Rule of Inference]

[In the case of the Generalization Rule of Inference these are not the same. They seem to give two slightly different flavors of FOL. (2) (for Generalization) is in the spirit of the treatments of Mendelson or Detlovs/Podiniak, whereas (3) (for Generalization) is in the spirit of Enderton's text.]

If you insist on the non-standard usage you should at least explain that it is non-standard.

128.241.111.116 19:29, 30 November 2007 (UTC)A. Reader

Well, as a model theorist I am supposed to be a logician, but in modern model theory it's easy to keep away from syntax, so take my answer with a pound of salt. I am referring to the definitions in this article only, I didn't look at any others; and I am shooting from the hip here. In the paragraph before modus ponens is defined, theorem is defined as what is provable from a theory. Then apparently "|-A" is used to mean "A is a theorem", i.e. "A is a theorem in some theory that we don't bother to mention." It looks as if somehow you are right, and the article is also right because it means exactly what you say it should. I won't fix the article right now because I am not sufficiently familiar with the standard conventions (and believe in the "think first" approach), but I will make a note in my todo list.
By the way, I am not sure that anybody insists on non-standard usage. From what I have seen on Wikipedia so far it's more likely that somebody added this without thinking and so far none of the editors (including the original author) read it properly. Thanks for the help! --Hans Adler 20:49, 30 November 2007 (UTC)
I am not a logician either, but I was trained in theoretical computer science, which comes close at points. One point of confusion here is that there is not a single canonical formal proof system for first-order logic. At the very least you can have either a Hilbert-style system, natural deduction, or sequent calculus, and in each of these there are various ways of writing down explicit rules for quantifiers. None of these is more standard or non-standard than the others, and at the end of the day they all lead to the same theorems. Therefore, unless one is doing proof theory one does not need to worry about the differences. The problem with the article is that it does present a proof theory, without making it clear up front that it is just one among several possibilities. In particular, what this article describes (with some bumps and plot holes, due to there being multiple authors) is a Hilbert-style presentation. In that style, "A. Reader"'s formulation (1) is indeed the way one does deductions, and a small arsenal of logical axioms and proof rewriting schemes must be used to justify using modus ponens under an assumption. On the other hand, "A. Reader"'s (3) properly belongs in natural deduction, and there it is standard. (Rule (2) might be used as a variant notation either for (1) in a formalism where one does not bother to write down the turnstile, or for (3) in a tree-style natural deduction with "invisible" assumptions).
In all of the formalisms, one does not bother to write down the axioms of one's theory as assumptions for each theorem, and in that sense it is true that the naked assertion that a turnstile holds is always context-dependent; it has no inherent truth value except under a particular theory (or in a particular model). In other words, one does not do
In FOL, prove: |— (all axioms of ZFC) & A finite & B proper subset of A ==> not(A equipotent with B)
but
In FOL+ZFC, prove: |— A finite & B proper subset of A ==> not(A equipotent with B)
This convention may seem strange and uneconomical to an outsider (it certainly did to me back when I learned this stuff), because within the proof, the axioms of the theory and the per-theorem assumptions have no need to be distinguished. This is true when one tries to construct a proof in one's mind, or checks somebody else's proof, that is, when one views logic simply as a tool for finding truth. However, to the logician who views logics or theories as interesting objects of study in their own right, the proof is just a small part of a larger context, and in that context axioms and assumptions have rather different roles. And, of course, logicians are the ones who get to choose the notation and conventions used in the textbooks and articles they author. –Henning Makholm 01:06, 2 December 2007 (UTC)
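For readers wondering how a Hilbert-style system recovers reasoning under assumptions without building Γ into the modus ponens rule: the usual bridge is the deduction theorem, which (sketched here, with the standard side condition that generalization is not applied to variables free in A) says that if <math>\Gamma \cup \{A\} \vdash B</math> then <math>\Gamma \vdash A \rightarrow B</math>, so "A. Reader"'s form (3) becomes available as a derived rule.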

I am not a logician. What little I know is from leafing through low-level texts. I think your interpretation of the current article is correct. It means that |-A has only a context-sensitive meaning. So out of context |-A would be true for any sentence A, since A could be an axiom of a (not necessarily consistent) theory. The books I have leafed through seem to reserve |-A for wffs that follow from (their!) axioms of FOL.

128.241.111.116 02:16, 1 December 2007 (UTC)A. Reader

You're right - ⊢ φ means that, in whatever deductive system is being used, φ is deducible with no additional axioms beyond those of logic alone. Thus ⊢ φ means φ is logically valid. On the other hand Γ ⊢ φ means there is a deduction of φ which may use axioms from Γ. Thus φ is satisfied by every model of Γ but may not be satisfied in structures not satisfying Γ. I would be extremely interested to see any contemporary mathematical reference that uses ⊢ in a different way. (In compiler theory and related areas of computer science, ⊢ has a different but related meaning, because the deductive calculi are not intended to represent logical truth). If the article confuses these, it should be fixed. — Carl (CBM · talk) 03:56, 2 December 2007 (UTC)
Elliott Mendelson, Introduction to Mathematical Logic, fourth edition, beginning of section 2.4. Having defined a "first-order theory" to include non-logical axioms, he writes:
All the results in this section refer to an arbitrary first-order theory K. Instead of writing ⊢_K A, we shall sometimes simply write ⊢ A ...
In this reference, the non-logical axioms of the theory do not go in front of the turnstile, but (when they need to be stated at all) as a subscript. –Henning Makholm 12:06, 5 December 2007 (UTC)
That notation with subscripts is also common enough; leaving out the theory is very strange to my eye. The notation is used in Hinman, Fundamentals of mathematical logic, 2005, and by Troelstra among others. In the area I work in, the 'inline' version is ubiquitous. I believe this is because we often have several different systems that are considered at the same time, so the exact system in which a statement is provable must frequently be specified. — Carl (CBM · talk) 13:01, 5 December 2007 (UTC)
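Summarizing the conventions discussed in this thread (notation varies by author, so this is only a rough guide):
  • ⊢ φ : φ is provable from the logical axioms alone; by the completeness theorem this is equivalent to φ being logically valid.
  • Γ ⊢ φ : there is a deduction of φ that may additionally use the formulas in Γ as premises.
  • ⊢_K φ (or K ⊢ φ) : φ is provable in the first-order theory K, i.e. from the logical axioms together with the non-logical axioms of K.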

Can we clear up the Modus Ponens paragraph?

First of all, according to ordinary convention, the turnstile symbol is just a symbol for logical consequence in general and therefore does not always mean that the formula following it is a theorem. A formula is only a theorem if it is a logical consequence of the empty set i.e. true under every interpretation. There is no such thing as a "new" theorem.

Also, Modus Ponens (MP) is described rather unclearly. The description suggests that MP ONLY applies to theorems and not to contingent formulae also, which is (I would argue) its most important application (unless you are a mathematician). Would therefore suggest the following:

P; P → Q
--------
Q

Or the greek equivalent (couldn't transcribe it properly - apologies) Wireless99 15:43, 23 August 2007 (UTC)

This article is indeed math-oriented. There has been an old debate about that, which is located in the archive, from the time when this page was called "predicate calculus". It has been agreed to change the page's name to "first-order logic" and keep it math-oriented, and have the philosophy-oriented stuff in quantification. This seems to me reasonable, and I think (though not sure) that the term "first-order logic" is indeed in use by mathematicians more than by anybody else. If you disagree with the decision, you may start here a new debate. You may otherwise add a remark in the beginning which says something like: This article is mathematically oriented. For philosophically oriented discussion, see quantification.
  • As for your other remarks, this symbol, ⊢, is in fact in use as a sign that the formula following it is a theorem. See for example Metamath Proof Explorer Home Page.
  • You wrote "A formula is only a theorem if it is a logical consequence of the empty set i.e. true under every interpretation." This statement is inaccurate, since the empty set is not a well-formed formula. The correct statement is: "P if and only if for every statement Q, Q implies P". This is not a statement in first-order logic, but rather in second-order logic (it is in fact a theorem of some second-order logics). Another statement, which is a theorem in propositional logic, is "P implies (Q implies P)". While true, this is totally irrelevant, because this is not the way one proves theorems in first-order logic. By the way it is true that in propositional logic you can use truth tables in order to check whether a sentence is a theorem or not, but again this is not how you prove a sentence (in mathematical logic).
  • You wrote "There is no such thing as a "new" theorem.". When you are proving theorems in mathematical logic, you do it by some order starting from the axioms and moving on, one step at a time. A "new theorem" in step n is a theorem which is proven in step n; it is of course not new, in the sense that no theorem is new: all the theorems in mathematics are always true (well, actually there's a philosophical debate about that...). If you can describe it more clearly and accurately than what already appears in the article, as you have done successfully in the lead section, you are invited to do that.
  • Modus ponens, which has an article of its own, appears here only as an inference rule, in the sense of proving (or creating, depends on your point of view) theorems. As such, your suggestion for changing the way in which it is presented is not appropriate here.
Dan Gluck 20:13, 23 August 2007 (UTC)
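As an aside on the truth-table remark in the list above, here is a minimal sketch of such a brute-force tautology check in Python (the encoding of formulas as Boolean functions is just an illustrative choice):

    from itertools import product

    def is_tautology(formula, variables):
        # formula is a Python callable taking Boolean keyword arguments;
        # check it under every assignment of truth values to the variables.
        return all(formula(**dict(zip(variables, values)))
                   for values in product([True, False], repeat=len(variables)))

    # P implies (Q implies P), with "A implies B" encoded as (not A) or B
    print(is_tautology(lambda P, Q: (not P) or ((not Q) or P), ["P", "Q"]))            # True

    # Modus ponens as a semantic check: (P and (P implies Q)) implies Q
    print(is_tautology(lambda P, Q: (not (P and ((not P) or Q))) or Q, ["P", "Q"]))    # True

This checks semantic validity by exhaustion; as the list above notes, it is not how one proves theorems in a deductive system, and it does not extend to first-order logic.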

Are we talking about the same FOL? In any FOL I am familiar with, the empty set is perfectly well formed and when you write ⊢P it is merely shorthand for P following from the empty set, i.e. that P is a logical consequence of the set of no premises (and for the picky mathematicians out there - that's 'set' in the sense of naive set theory as it is used by logicians). Consult any elementary logic textbook. Also, while there may be new mathematical truths, there can be no new logical truths in FOL - these are all given at the start by definition. Do not let the fact that mathematics may be able to be expressed well in FOL confuse your thinking.

"this is not the way one proves theorems in first-order logic."

Rather, this is the way mathematicians use FOL. The language you are working in is correct and complete, and that you only use a fragment of it and call it "mathematical logic" or whatever is up to you. Wireless99 21:39, 23 August 2007 (UTC)

I'm not interested in arguing about mathematicians, but that section does need to be edited some. Really the whole article needs some work. I'll try to get to it if nobody else does, but I wouldn't argue with almost any rewrite of the modus ponens section. — Carl (CBM · talk) 00:23, 24 August 2007 (UTC)

Wireless - what you write about the empty set sounds vaguely familiar, and since I studied logic many years ago, you may be right. Now, by "new logical theorems" I meant in the sense of, for example, automated theorem generators; of course nobody really needs in practice to "prove" logical theorems, but there is a meaning in doing that: you may start with a few axioms and generate all logical theorems by the use of inference rules. Anyway, MP is also used to prove non-logical theorems, as you know. You may change the words "old" and "new" to something more well-suited if you wish. What I DO think should be kept in mind is the use of inference rules to move from one set of theorems to another theorem. It is true that ALL inference rules - including MP - can be used for contingent formulae as well. But I'm not sure that in the context of this article this is a more important use. Dan Gluck 18:52, 28 August 2007 (UTC)

I agree with all that you say above, except that I would wish to (re)start the debate about the usefulness of focussing on the mathematical uses of FOL. While this debate may be futile given that the logic portal is starting up, this article will most likely become superfluous once the logic portal has gotten going. I suggest either changing the title of the article to "mathematical and computer science uses of FOL" (or some equivalent) or widening it out to talk of all of FOL proper so it may somehow be usefully included in the portal once it has reached a sufficient level of independence from mathematics/philosophy etc. Wireless99 09:20, 29 August 2007 (UTC)
I don't care if you insert a philosophy-related perspective, but it seems to me that some separation between this and, say, the automated-theorem-generator perspective (which is more or less the opposite perspective?) should be respected. Given this, both can live in the same article. What do you say? Dan Gluck 14:24, 29 August 2007 (UTC)
I agree. However, I will have to take some time to come up with a "philosophy perspective" and do some research. I was taught FOL within philosophical logic, and so will need some time to separate the two, to be able to write something suitable for the article. Question: does this constitute original research? Wireless99 14:50, 31 August 2007 (UTC)
I don't know, I think it depends on how you write it. Though I studied both mathematical logic as part of a degree in mathematics and logic as part of a degree in philosophy, I never heard about FOL in the philosophy course because it was very low-level. So I don't really know what the philosophical perspective is... maybe I'm just mistaken and there is no such thing; I just encountered an old debate here about this and assumed that there is.
Anyway, if you come up with your own idea about what the distinction is and write two different sections, one titled "mathematical viewpoint" and the other "philosophical viewpoint", this may be original research. A solution could be, for example, to write something mixed, but have different section(s) about things like Gödel's theorem and automated theorem generators, which seem to me not to occupy much of philosophers' research time (maybe I'm mistaken here again?), and other section(s) about things which philosophers emphasize. Dan Gluck 21:01, 31 August 2007 (UTC)

You are mistaken at least when it comes to Gödel. Gödel's theorems occupy a lot of philosophers' time. The incompleteness of any system with the capacity to represent arithmetic is a very significant and puzzling result over which philosophers spend a lot of time. Automated theorem generators are a little more boring for philosophers perhaps, but there are philosophical issues here such as those related to Turing machines. I'll have to have a think and see if I can justifiably write a separate section in this article entitled "philosophy". Wireless99 09:42, 1 September 2007 (UTC)

classical mathematics is not all mathematics

The article said: "The language of first-order logic has sufficient expressive power for the formalization of virtually all of mathematics. A first-order theory consists of a set of axioms (usually finite or recursively enumerable) and the statements deducible from them. The popular set theory ZFC is an example of a first-order theory, and it is generally accepted that all of classical mathematics can be formalized in ZFC (in which case, if true, one may drop the qualification 'virtually' from the previous sentence in which it occurs)." I deleted the part in the parentheses, since it says that if classical mathematics can be formalized in ZFC, a first-order theory, then first-order logic has sufficient expressive power for the formalization of ALL of mathematics, implying that classical math is all math! As an example, second-order logic surely cannot be expressed by first-order logic. For the same reason, I'll change "virtually all of mathematics" to "most of mathematics". Dan Gluck 19:47, 24 August 2007 (UTC)

Have you any idea about the relationship between first-order classical set theory and higher-order logic? We're talking about expressing things without making clear what it means to "express" something, but nothing yet has been said about "n-categorically expressing" something for some ordinal n. So what can you express (n-categorically or not, for some ordinal n) in second-order logic that you cannot express in first-order set theory? Nortexoid 07:58, 30 August 2007 (UTC)
You are invited to make it clearer in the article, in a way that even a stupid man like me may understand. Dan Gluck 15:32, 30 August 2007 (UTC)

Why is first-order logic needed? - explaining a revert

I reverted from: Clearly the propositional translation is invalid. Let both A and B be true and C false. This interpretation makes the argument invalid. (Syntactically, it is easily observed that one cannot derive C from A and B in propositional logic.) In order to adequately translate the argument, one would have to pay attention to the content of the statements over and above truth values of the simple propositional constituents.

back to:

This translated argument is invalid: logic proves theorems only according to their structure, and nothing in the structure of this translated argument (from A and B follows C) suggests that it is valid. One would have to pay attention to the content of the statements, which is beyond the scope of logic. Translated into propositional logic, this syllogism has the same structure as a translation of "It is raining; The sun shines; Therefore, 1+1=3".

If you knew anything about the subject you would know that the current revision is rubbish for a number of reasons. First, it says that the argument is *invalid* and then provides a *syntactic* reason for its being so without citing any correspondence between validity and derivability (with respect to some class of deductive systems). Nor is it clear what is meant by "nothing in the structure...suggests that it is valid". What role does 'suggest' play in that sentence? Also, there are deductive systems (generating a set of theorems which is consistent or not) which have as a rule (derived or not) "from A and B, infer C". The rest is very poorly written for a logic article. Certain people need to refrain from editing articles of a certain nature. Nortexoid 08:01, 27 August 2007 (UTC)
Does the above refer to the current version, or the one when the above paragraph was entirely contained within the article? Because there have been some changes. Also, if you feel it's that bad, improve it yourself. Wireless99 12:13, 27 August 2007 (UTC)

for the following reasons:

  • "clearly" - it is certainly not clear for those unfamiliar with the subject.
  • It should be emphasized that logic proves theorems only according to their structure, a thing which most people are unaware of. Noting that "Syntactically" one cannot derive C from A and B, doesn't mean anything to the layman, at least at this stage (beginning) of the article.
  • An example is in order, hence the example about the sun shining.

Dan Gluck 20:01, 24 August 2007 (UTC)

I've carried out quite a significant re-write of the section; hopefully it's clearer. See what you think. Feel free to revert. Although I'd prefer, unless you really hate it, that you try and improve it rather than just flat out revert it :) Wireless99 14:34, 26 August 2007 (UTC)

I really like your edit, perhaps I'll make some slight changes but overall I think it's good. Dan Gluck 21:20, 26 August 2007 (UTC)

Article needs history and etymology

Who coined the term First-order logic, when, where, how, and why? What's First-order logic been used for -- especially has it been used for purposes a general audience can easily appreciate? A long article should have this information, if at all possible. The current article disappoints; like reading a Chess article that jumps directly into chess notation, describing clever plays, without mentioning the game's origins, history, or players. --AC 03:31, 20 September 2007 (UTC)

Limitations of first-order logic

  • Section about sorts is unclear.
  • Does the fact that graph reachability cannot be expressed in FOL follow from the Löwenheim–Skolem theorem? If it does, it should be mentioned.

Dan Gluck 08:15, 24 September 2007 (UTC)

I don't think you can show it from the Löwenheim–Skolem theorem. The natural and easy proof uses the compactness theorem. However, the context in which it is mentioned in the article suggests finite model theory, that is, the interesting statement is that reachability is not FO-definable on finite graphs. This is more complicated to prove than the infinitary version, and it requires Ehrenfeucht–Fraïssé games. -- EJ 13:02, 8 October 2007 (UTC)
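For reference, the compactness argument runs roughly as follows (a sketch). Suppose some first-order formula ρ(x,y) defined reachability in all graphs. Add two constant symbols c and d, and for each n a first-order sentence σ_n saying "there is no path of length at most n from c to d". Every finite subset of {ρ(c,d)} ∪ {σ_n : n ≥ 1} has a model (a sufficiently long path graph), so by compactness the whole set has a model; in that model ρ(c,d) holds although d is not reachable from c by any finite path, a contradiction. As noted above, this says nothing about finite graphs, where the Ehrenfeucht–Fraïssé argument is needed.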

Provable Identities

  • Can anyone provide the proofs? I am dying to know! —Preceding unsigned comment added by 121.7.107.72 (talk) 12:26, 8 October 2007 (UTC)

Terence Tao blog article

this article is interesting. He claims:

it seems that one cannot express
For every x and x’, there exists a y depending only on x and a y’ depending only on x’ such that Q(x,x’,y,y’) is true
in first order logic.

He gives an example of such a statement from linear algebra. Is that correct? Did he miss something? Also, is there a simple way to state the Hilbert-Waring theorem (aka Waring's problem) in first-order logic over N? That theorem says:

for every natural number k there exists an associated positive integer s such that every natural number is the sum of at most s kth powers of natural numbers (for example, every number is the sum of at most 4 squares, or 9 cubes, or 19 fourth powers, etc.).

Thanks. —Preceding unsigned comment added by 75.62.4.229 (talk) 00:33, 16 November 2007 (UTC)

It's either confused or ambiguous to say that first order logic can or cannot express something. What is meant, I gather, is that the language L of first order logic can or cannot express something. But again, that depends on what one means by expressibility. It's true that every expression of L is a linear ordering of elements of some vocabulary (taking L to be the set of expressions of the vocabulary V given some formation rules on V), while the branching expression you consider is not. So in that sense, one cannot express branching expressions in L.
Call the language of branching (or partially ordered) expressions L*. Since the variables of L* range over only individuals, L* is sometimes called "first order" (e.g. some literature on generalized quantifiers adopts this nomenclature). From a model-theoretic perspective, L* is called second-order, since it has the expressive power (in the model-theoretic sense) of the Sigma-1-1 (i.e. existential) fragment of a full second order language. Nortexoid (talk) 23:40, 16 November 2007 (UTC)
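In the branching (Henkin) quantifier notation alluded to above, Tao's sentence would usually be displayed as something like <math>\begin{pmatrix}\forall x & \exists y\\ \forall x' & \exists y'\end{pmatrix} Q(x,x',y,y')</math>, with the intended reading that y may depend only on x and y' only on x'; its Skolemized form is <math>\exists f\,\exists g\,\forall x\,\forall x'\;Q(x,x',f(x),g(x'))</math>. (This is a sketch of the standard notation, not a quotation from the blog post.)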
I find it a bit harsh to call the observation "confused or ambiguous" simply because you know enough to restate it in fancier language.
What I think is interesting here is the following: It is commonly known that every formula in ordinary FOL can be systematically Skolemized into an equivalent second-order formula of the form
(**)    ∃f₁ ⋯ ∃fₙ ∀x₁ ⋯ ∀xₘ φ
where all of the existentials are over functions, all of the universals are over individuals, and φ contains no quantifiers. I, for example, was taught this as an undergraduate. It appears to be less known that not every formula of the shape (**) is equivalent to a FOL formula (even with the added requirement that the arguments to each fᵢ are always the same tuple of variables). I was not aware of this fact until the present discussion [which is why I find it interesting].
Terence Tao's deeper point, if I understand it correctly, is that (1) working mathematicians generally consider statements with the (**)-shape (or rather, natural-language constructs that might be translated into this structure) to be fair game when doing mathematics in the common "informal, but tacitly assumed susceptible to formalization" mode; (2) it is often claimed or taught that FOL is the implied target for such formalization; (3) these facts do not actually fit together.
I expect that many WP math editors will feel wary about (3), because it sounds like it might be the prelude to a crankish denunciation of all mainstream mathematics. But it isn't. Mathematics is alive and well, but (and this isn't news) sometimes subtler than it looks. One of the delights of a true mathematician™ is to discover that something is more subtle than he thought at first. I'm glad that I had the chance to do this today. –Henning Makholm 22:17, 17 November 2007 (UTC)
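A standard small example of the Skolemization step mentioned above, for readers who have not seen it: <math>\forall x\,\exists y\,P(x,y)</math> becomes <math>\exists f\,\forall x\,P(x,f(x))</math>, which is equivalent over all structures. Trouble can only arise when, as in the branching example, different Skolem functions are required to depend on different subsets of the universally quantified variables; in that case the (**) formula is in general not equivalent to any ordinary first-order formula.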
Actually what I said was not a restatement since it was significantly different in meaning. Aside from that, I don't know a single person (in published print) who claims that "working" mathematicians give proofs or express mathematical statements strictly in first-order languages. They don't. And the fact they express statements in branching form (which occur in natural language) is not philosophically interesting.
There is no incompatibility between (1) and (2) along the lines you are suggesting. Informal statements of mathematics that seem most naturally translated as higher order (or branching) statements are easily formalized in a first-order theory such as ZFC. If one has in mind properties like categoricity, then first-order languages will be inadequate in most cases, but as far as foundational mathematics is concerned, first-order languages are more appealing, given the intractability of the second-order consequence relation and the failure of a number of desirable model-theoretic properties (e.g. compactness). That's the whole point of formalizing mathematical theories in first-order languages--they're easy to work with from a model-theoretic point of view. Nortexoid (talk) 01:18, 18 November 2007 (UTC)
Well, it's not clear whether he missed something, because the sentence
For every x and x’, there exists a y depending only on x and a y’ depending only on x’ such that Q(x,x’,y,y’)
isn't very clear. If what he means is
There are functions f and g such that for every x and x’, Q(x,x’,f(x),g(x’))
then that is certainly expressible in first order logic with two sorts of variables, one to range over individuals and one to range over functions. If he doesn't want to consider a language that includes function variables, he needs to explain how he defines 'depending only on' without mentioning functions. Then someone can tell if it is expressible in first order logic.
P.S. the original sentence could also be expressed as:
using another reading of 'depends only on'. — Carl (CBM · talk) 00:59, 16 November 2007 (UTC)
I think it is clear that the original sentence is supposed to mean what you give as your first option. The observation is that one does need to quantify over functions in order to state such a thing. This could be a problem if one is working directly with a "small" theory, such as the common axiomatization of fields, without wishing to extend the vocabulary to speak about functions as primitive objects. Your statement that "he needs to explain how he defines 'depending only on' without mentioning functions" appears to confuse the meta-language with the object language. It makes perfect sense to use functions at the meta-level to define a class of models in which we want the formula to be true, and then ask whether there is a formula in the (function-less) object language which evaluates to true in exactly those models.
Your second formulation fails to capture the intended semantics. It is true in a model where the elements are {1,2,3} and Q is the relation
but the intended semantics is false for this model. –Henning Makholm 14:53, 16 November 2007 (UTC)
I agree the second sentence doesn't have the same meaning as the first, and that it makes perfect sense to ask whether something expressible at the meta level is expressible in a certain language (that is, whether some class of models is [basic] elementary). The classic example is "X is a finite set". I should be more careful when I write things that relate to semantics , since I ordinarily think in more syntactic terms. — Carl (CBM · talk) 15:46, 16 November 2007 (UTC)

As for the second question,

for every natural number k there exists an associated positive integer s such that every natural number is the sum of at most s kth powers of natural numbers (for example, every number is the sum of at most 4 squares, or 9 cubes, or 19 fourth powers, etc.).

is expressible in first-order logic in the language of arithmetic, but in order to express it I would use the fact that all decidable relations on the natural numbers are expressible, and that would not lead to an elegant formula. — Carl (CBM · talk) 15:53, 16 November 2007 (UTC)
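To spell out the kind of coding meant here (a sketch; Seq, lh and (w)_i stand for the usual definable sequence-coding apparatus of first-order arithmetic, e.g. via Gödel's β function, and the bounded sum and exponentiation likewise abbreviate definable formulas): <math>\forall k\,\bigl(k \ge 1 \rightarrow \exists s\,\forall n\,\exists w\,(\operatorname{Seq}(w) \wedge \operatorname{lh}(w) \le s \wedge n = \textstyle\sum_{i<\operatorname{lh}(w)}((w)_i)^k)\bigr)</math>. Unfolding all of those abbreviations into the bare language of arithmetic is exactly what makes the resulting formula correct but inelegant.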

Thanks. The explanation about function variables makes sense, though I don't know if it would help Terence Tao's linear algebra example since he mentions that the class of finite dimensional vector spaces is too big to be a set, so maybe one can't have functions on it if we're working in set theory. I don't understand the answer about Waring's problem but I'll take your word for it. The question was inspired by some earlier discussion over at the second-order arithmetic page.
The comments for the blog post contain interesting and relevant discussion about how, by using NBG instead of ZFC set theory, one is allowed to quantify over functional proper classes at least as needed for this particular example. Arguably, quantifying over functions is a better match than branched quantification for the linear-algebra example, because what the original informal statement in the blog post seems to say is actually
∃f ∀x ∀x′ Q(x, x′, f(x), f(x′)), and the requirement that the relation between the primed and unprimed x and y must be the same was lost already in the "depending only on" version that was quoted at the beginning of this discussion. –Henning Makholm 22:45, 17 November 2007 (UTC)
I see also that the article on branching quantifiers sheds more light on this.

Deleted section "Metalogical theorems of first order theories"

This section stated that Gödel's Completeness Theorem does not apply to first order theories with nonlogical axioms, which is entirely false, and mentioned the Incompleteness Theorems but in a misleading way. Generally it would serve only to confuse, since it conflates the two different definitions of "completeness" used in the Completeness and Incompleteness theorems.

Xplat (talk) 07:16, 11 December 2007 (UTC)

Thanks. I absolutely agree with your evaluation. The same editor added this misunderstanding to Gödel's completeness theorem, where Carl has already corrected it. --Hans Adler (talk) 11:56, 11 December 2007 (UTC)

All proof systems equal?

The article states: "In particular, any reasonable definition of "provable" in first order logic must be equivalent to the one above (though it is possible for the lengths of proofs to differ vastly for different definitions of provability)."

Are we claiming here that intuitionistic logic is not "reasonable", or that an intuitionistic proof system is not "in first-order logic"? If the latter, then why so? After all, an intuitionistic proof system and a classical one share the same set of wffs (viewed as symbol strings), subsets of which they declare to be provable. If intuitionistic proof systems are not "in first-order logic", then it would appear that "in first-order logic" implies by matter of definition that proof systems must be equivalent to classical FOL, but then the quoted sentence becomes entirely vacuous! –Henning Makholm 01:58, 25 January 2008 (UTC)

Problems with links

I have corrected redirect links I'm pretty sure about. Here are more issues:

The following links need disambiguation:
-conditional
-deduction
-graph

The following links are redirected:
-branched quantification
-decision procedure
-higher order logic
-inference rule (2)
-main contention (3)
-second order logic
-semidecidable (3)
-sentential logic

Randomblue (talk) 19:46, 30 January 2008 (UTC)

Mostly done. I believe everything that is left now should remain per WP:R#NOTBROKEN. --Hans Adler (talk) 21:39, 30 January 2008 (UTC)

Formal grammar

Some link to this page is required under the section on formal grammar, without regard to the article's quality, because of its relevance. If quality were the criterion for linking, nothing would be linked or improved. Perhaps some material needs to be merged or otherwise integrated, but that type of thing won't happen unless we are aware of the link. Pontiff Greg Bard (talk) 01:06, 2 February 2008 (UTC)

I think Henning's point was that a "main article" link was not the right way to achieve this. Normally it is only used in cases where a section gives a short introduction to something that is explained at length in its own article. In this case a "main article" for the section would have to be mainly about the formation rules of first-order logic, and perhaps explain them from many different angles. (Mathematically there isn't much to explain, but perhaps philosophically.) But this article doesn't even seem to mention first-order logic, and it doesn't look as if it would benefit much from that. Even if we add a complete treatment of first-order formation rules there, it can hardly be much more complete than what is already in this article, and if one of the two articles is to have a more complete treatment of this than the other, it seems to me it should be this one. Can't you just put the link into the text somewhere? --Hans Adler (talk) 01:14, 2 February 2008 (UTC)
I am fairly indifferent to the form the link takes. There is a whole world of ways to communicate this. I agree that the "main" template may not be the single best way to do this. There are many many such templates, and I admit that I didn't go through to identify the single best one. I think this boils down to the incrementalist/eventualist view, and the immediatist view. There is plenty of room for evolution. There are plenty of reasons why a person would need a link to that article, that I believe are not being considered. It is obviously the appropriate concept for that section. I am a little astonished at the judgment that we are just better off without it entirely?! Pontiff Greg Bard (talk) 02:26, 2 February 2008 (UTC)
Has anyone said that we are better off without the link? That would be quite astonishing. For me the point is, the current version with an introductory sentence makes it clear that if a reader is confused by this section for lack of prior exposure to formal grammars, then formal grammar is the way to go; after the excursion the section may be a bit more accessible. This is definitely an improvement. Advertising formal grammar as a "main article" had the same effect for this kind of reader, but it was obviously inappropriate and therefore not an improvement.
In the constructive atmosphere that we have in mathematics reverts are often just a fast and convenient way of saying: "There was a problem with your edit that needs correcting. I could make an effort to look closer into this and try to do it myself, but I can't be bothered right now. I would have to do a lot of reading just to guess what you wanted to achieve, and I might even get that wrong. So I will just revert, and I trust you will easily find a good solution since you have already familiarised yourself with the situation." This is not bullying, this is rational division of work within a team of equals. Now Paul August has done what you could have done after the revert, and it's possible that you are not entirely happy with his formulation and have to tweak it. --Hans Adler (talk) 10:43, 2 February 2008 (UTC)
Paul August's formulation is perfectly wonderful. It's the type of natural evolution I am always hoping for. I'm just embarrassed I just didn't think of it myself! However, I wasn't really analyzing it beyond the fact that some connection was appropriate. The revert may be the type of message you describe (and I will interpret that sort of thing that way in the future) However, if no one's paying attention, a revert means the connection is lost forever. So I don't really appreciate the value of reverts as a message. If you want to send a message, send a telegram and leave the content alone. Be well, Pontiff Greg Bard (talk) 11:53, 2 February 2008 (UTC)
I agree that that would be a good way of doing it, and initially it was how I expected things to be done here. But it seems that it's not how the community generally handles this, and so I just got used to it, more or less. --Hans Adler (talk) 12:32, 2 February 2008 (UTC)
Sounds like you are in a groove. That's cool. Hopefully these issues can be resolved in the future without pages on the talk side for the sake of one line on the article side. -GB
Well, when an obviously misleading addition is inserted without any edit summary to reveal what the editor was trying to do, the only thing one can do is just fix the encyclopedia by reverting - unless one has some telepathic knowledge of what the point was supposed to be. The edit summary box is there for a reason. Leave it blank and strange edits are probably going to be treated like vandalism; explain your motives and there's a much better chance that others will be able to figure out what you were trying to do, and correct instead of revert. –Henning Makholm 12:55, 2 February 2008 (UTC)
Well noted. However, are you characterizing the addition of the link to formal grammar in a section about formal grammar as mysterious? That would seem to be self evident. What exactly was "obviously misleading" about it? I think that is getting carried away. Be well, Pontiff Greg Bard (talk) 14:04, 2 February 2008 (UTC)
A "main article" link means: "We are only giving a quick overview here. See the main article for all the details." So e.g. if I wanted to know whether "Inductive clause I-III" is standard terminology, or if I wanted to see more examples wffs and non-wffs, then I would go to the "main article". And be disappointed. Doing the connection as a "main article" link was obviously misleading, because none of the stuff covered in the section was covered in the "main article". --Hans Adler (talk) 14:38, 2 February 2008 (UTC)
The formal grammar article is not the "main article" for the definitions of terms, formulas, etc. in first-order logic. The formation rules given here are not even a formal grammar, although a formal grammar could be written for them, in EBNF or some other normal form. What is presented in the article could be viewed as an English description of a formal grammar, or the translation of a formal grammar into English. It is certainly true that, when processing formulas with a computer, it is possible to use a formal grammar to define what we mean by formula, but I don't think the relationship goes much deeper than that. — Carl (CBM · talk) 15:30, 2 February 2008 (UTC)
That's a good point: it is not essential to the understanding of logic that we choose to use a grammar (or, as is the case here, a grammar unfolded into prose that looks much more complex than just a grammar would be) to define its objects. For some reason, actual formal grammars do not seem to be common in foundational mathematics. Perhaps this is due to fear that a reader would interpret an actual formal grammar as entailing all of the set theory that goes into formalizing the semantics of a formal grammar in language-theoretic texts, which would be bad for minimizing the amount of a-priori mathematics that one needs for one's metatheory. If computer scientists rather than mathematicians were the trend-setters in logic, we would probably just display a grammar, with the explicit understanding that it is used as a familiar notation for an intuitive concept rather than a formal object in itself. –Henning Makholm 23:56, 5 February 2008 (UTC)
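For illustration, the kind of grammar being talked about might look like this (a sketch; the choice of connectives and whether equality is included vary between presentations): <math>t ::= x \mid c \mid f(t_1,\ldots,t_n)</math> for terms, and <math>\varphi ::= R(t_1,\ldots,t_n) \mid t_1 = t_2 \mid \neg\varphi \mid (\varphi \wedge \varphi) \mid (\varphi \vee \varphi) \mid (\varphi \rightarrow \varphi) \mid \forall x\,\varphi \mid \exists x\,\varphi</math> for formulas, understood as notation for the usual inductive definition rather than as a formal object in its own right.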
I think the main reason for the cultural difference is that in computer science there are tools like lex and yacc that take a formal grammar as input. This makes it worthwhile to learn a specific notation. --Hans Adler (talk) 23:38, 2 May 2008 (UTC)