Talk:Relation algebra
This article is rated Start-class on Wikipedia's content assessment scale.
Older comments
Apparently, this article uses unconventional notation, and it certainly seems crazy to me. Why not use the conventional notation? -Chinju 05:31, 28 March 2007 (UTC)
- I second the motion. If we have to use parentheses to represent Boolean negation as in -x=(x), how do we use parentheses normally in an equation? I suppose that we could express the associative law of addition (union) as ((x+y))+z=x+((y+z)) on the theory that the double parentheses represent a double negation. As long as we double up on all of our parentheses, we ought to be safe. ;o) --Ramsey2006 02:38, 29 March 2007 (UTC)
- Ambiguity arising from infix notations is a bane of algebra and computer science. Polish prefix notation has not caught on, but there is another way of banishing ambiguity without using brackets. If a binary operation associates, it is not necessary to establish the order in which multiple instances of the same operation are carried out; simply concatenate the operands and enclose them in a prespecified pair of brackets. The two primitive binary operations of RA, Boolean join and composition, both associate by virtue of axioms B2 and B4. 132.181.160.42 (talk) 23:24, 24 May 2009 (UTC)
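To see the associativity point concretely, here is a minimal sketch (an illustration only, not taken from the article; relations are modeled as Python sets of ordered pairs) checking that composition and join associate on a small example, so an unbracketed R;S;T is unambiguous:

```python
# Minimal sketch: binary relations on a small set, modeled as sets of ordered pairs.
def compose(r, s):
    """Relative product r;s = {(a, c) : there is a b with (a, b) in r and (b, c) in s}."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

R = {(1, 2), (2, 3)}
S = {(2, 3), (3, 1)}
T = {(3, 1), (1, 2)}

# Composition associates (axiom B4), so R;S;T needs no brackets.
assert compose(compose(R, S), T) == compose(R, compose(S, T))
# Join (here, set union) associates as well (axiom B2).
assert (R | S) | T == R | (S | T)
```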
I third it. Relation algebra is just another variety like groups and Boolean algebras, in fact a relation algebra is almost both (it would be exactly both if converse canceled composition). It has no more need of pedantic language-metalanguage distinctions and multiple fonts than group theory or Boolean algebra. I suspect the metalanguage stuff is inherited from the Tarski-Givant book, where they use it to refound various logical formulations of set theory on RA including Zermelo-Fränkel set theory, Von Neumann-Bernays-Gödel set theory, Morse-Kelley set theory, and Vopěnka and Hájek's semisets, a wildly ambitious project well outside the scope of the present article. I propose to replace the first two sections (definition and axioms) with the definition that a relation algebra is a residuated Boolean algebra with converse (the definition given at the end of the examples section of the residuated lattice article), and use the freed-up space to motivate the subject by moving the historical remarks (currently at the end) up to the front and reorganizing them to show what the subject looked like originally and how each of De Morgan (1858), Peirce (1870s), Schröder (1895), and Tarski (1940s) added to and abstracted from the subject. The only decent section in the current article is the one on expressive power, which is very much to the point (who wrote that?). The section on "Other notations" should be reduced to a couple of lines simply listing them. The merits of splitting the RA article into RA (logic) and RA (structure) paralleling the recent BA split are pointed up by the Examples section, which would go in the logic article, while a separate (glaringly omitted) section on examples of relation algebras would go in the RA (structure) article (but such a split should not be needed as long as the reworked article remains reasonably short). Comments? I suggest waiting a week for input before actually implementing any of this. --Vaughan Pratt 22:17, 26 July 2007 (UTC)
The promised rewrite is now pretty much completed (at least I'll be taking a rest from it for a while). In the end I split the material between three articles, on residuated lattices, residuated Boolean algebras, and relation algebras, each adding a little to its predecessor. This enabled relation algebras to be defined as simply residuated Boolean algebras for which ▷I and I◁ are involutions, following Jónsson and Tsinakis. --Vaughan Pratt 06:19, 16 August 2007 (UTC)
Questionable Statements
Several statements on this page are obviously false. The converse operator isn't an interior operator, since it isn't idempotent (but rather an involution). Also, I'm pretty sure that two non-identity functions can commute with each other: e.g. f(x) = 2x and g(x) = 3x —The preceding unsigned comment was added by 131.107.0.73 (talk • contribs) 14:54, 20 July 2007.
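A quick sanity check of that last claim (an illustration only; composing in either order gives x ↦ 6x):

```python
# Two non-identity functions that commute under composition.
f = lambda x: 2 * x
g = lambda x: 3 * x

# f(g(x)) = g(f(x)) = 6*x for every input tested.
assert all(f(g(x)) == g(f(x)) for x in range(-100, 101))
```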
Removed strange links
The article seemed to assert (by citation) that the notions of injectivity and surjectivity were invented in 2002. These notions are indeed mentioned briefly in the referenced article, but they were well known for years before then. I have removed these citations. I've left a (partial) reference to that article at the end of the page. I'm not sure that it is an important general reference, but I leave it to others to decide whether to delete it.
If I have missed an important point, I'm sorry. I'm happy to discuss. Sam Staton 09:35, 27 September 2007 (UTC)
Expressive power section is poor
RA is a class of relation algebras, not the class of equations valid on that class of algebras, i.e. it is not some equational theory. The section should be talking about how the expressive power of some fragment of a first-order language (not logic!) corresponds to an equational language for RA. In particular, such first-order formulas are satisfied in a model iff the equation is "verified" in the corresponding relation algebra under the appropriate valuation. This section of the article currently makes little sense. (A concrete sketch of the intended correspondence follows this thread.) Nortexoid (talk) 19:39, 19 August 2008 (UTC)
- Agreed. That section was left over from before my rewrite. I didn't have the heart to delete it, but it certainly needs a thorough rewriting if it is to be kept. --Vaughan Pratt (talk) 00:52, 19 January 2009 (UTC)
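As a concrete illustration of the correspondence asked for above (a sketch only, not drawn from the article; relations modeled as Python sets of pairs): the RA term R;S denotes the same relation as the three-variable first-order formula ∃y(x R y ∧ y S z).

```python
# Sketch: the RA term R;S and the formula  exists y (x R y and y S z)
# define the same binary relation over a small finite universe.
U = {1, 2, 3, 4}                      # universe
R = {(1, 2), (2, 3), (4, 4)}          # arbitrary test relations on U
S = {(2, 3), (3, 1), (4, 2)}

# Algebraic side: relative product (composition).
compose = lambda r, s: {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

# Logical side: a direct transcription of  exists y (x R y and y S z).
fo = {(x, z) for x in U for z in U if any((x, y) in R and (y, z) in S for y in U)}

assert compose(R, S) == fo
```

An equation between two such terms then corresponds to the universally quantified biconditional between their translations, which is presumably the correspondence the section should spell out.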
Look at Givant's streamlined exposition
I invite all editors of this article to read the first 6 sections of Givant's 2006 Journal of Automated Reasoning article. It is addressed to computer scientists rather than mathematicians, is the clearest articulation of RA to date, and is my source for the axioms. Givant does not use the language of universal algebra. Nevertheless, he makes it evident that RA is a ⟨L, join, composition, complementation, converse, I⟩ algebra of type ⟨2,2,1,1,0⟩. This signature is a good deal simpler than the one appearing in the entry. A drawback of this article is that the adjective "residuated" appears nowhere in it. 132.181.160.42 (talk) 23:10, 24 May 2009 (UTC)
- What's "streamlined" about a signature of 2-2-1-1-0 when 2-2-0, namely NAND, ◁, and I, is enough for RA?
- "Streamlined" and "clear" are relative concepts. Streamlined automobiles are great, but not at the expense of comfort. If you've been thinking about something for several years in a certain way it will steadily become clearer to you, but if you strike up a conversation with someone else who has been thinking about the same thing in another way for as many years, each of you could easily find the other unclear.
- For some people the definition of a relation algebra as a structure (L, ∧, ∨, ¬, 0, 1, •, I, ˘) satisfying equations B1-B10 will be perfectly clear in the sense they have in mind. For others the definition of a relation algebra as a residuated Boolean algebra for which I◁x is an involution will be clear in another sense. Both definitions are self-contained, and they are completely equivalent. By what criterion are we to judge which is clearer? One could argue that the first definition is clearer because it avoids organizing its axioms hierarchically. But one could also argue that the second definition is clearer because it organizes its axioms hierarchically. Your implication that the former serves computer scientists and the latter mathematicians would suggest that computer scientists don't care about modularity, which is news to me.
- In any event how does all this bear on the article, which gives both definitions? --Vaughan Pratt (talk) 07:48, 26 May 2010 (UTC)
- A reading of Tarski & Givant (1987), and of Givant (2006), makes clear the appeal of the signature ⟨L, join, composition, complementation, converse, I⟩. I also submit that the typical reader is more likely to have an intuitive understanding of function composition and converse than of lattice residuation. This belief of mine about the sociology of mathematical understanding grounds my preferred signature. That ⟨NAND, ◁, I⟩ of type ⟨2,2,0⟩ is an RA signature is news to me. "Your implication that the former serves computer scientists and the latter mathematicians..." Rest assured that the thinking you attribute to me never entered my mind. I was only stating an incidental fact about expected readership. "...how does all this bear on the article, which gives both definitions?" The article did not include both signatures at the time when I wrote my comment. I am happy to see that it now does so. (A concrete sketch of the operations in this signature follows below.) 111.69.255.27 (talk) 18:19, 14 May 2011 (UTC)
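For readers who want something concrete, here is a minimal sketch (an illustration only, not drawn from Givant; relations over a small finite set, modeled as Python sets of pairs) of the operations in the ⟨join, composition, complementation, converse, I⟩ signature, plus one residual defined from them (using the convention x▷y for the right residual of composition; conventions vary), which is how the two definitions discussed in this thread connect:

```python
from itertools import chain, combinations

# Sketch: the algebra of all binary relations on a small universe U.
U = {1, 2, 3}
TOP = {(a, b) for a in U for b in U}   # universal relation (Boolean top)
I = {(a, a) for a in U}                # identity element

def comp(r, s):                        # composition r;s
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def conv(r):                           # converse r˘
    return {(b, a) for (a, b) in r}

def neg(r):                            # Boolean complement relative to TOP
    return TOP - r

def rres(x, y):                        # right residual, definable as x▷y = ¬(x˘;¬y)
    return neg(comp(conv(x), neg(y)))

R = {(1, 2), (2, 3)}
S = {(1, 3), (2, 3), (2, 1)}

# Residuation law: R;T ⊆ S  iff  T ⊆ R▷S, checked over every relation T on U.
subsets = chain.from_iterable(combinations(sorted(TOP), k) for k in range(len(TOP) + 1))
assert all((comp(R, set(T)) <= S) == (set(T) <= rres(R, S)) for T in subsets)
```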
Inconsistent Notation
It looks like this page uses one notation in some places and another in others. AshtonBenson (talk) 01:17, 10 May 2010 (UTC)
- Good catch (mea culpa). Hopefully I fixed them all, let me know if not. --Vaughan Pratt (talk) 08:10, 26 May 2010 (UTC)
Incorrect expression for asymmetric relations?
Our article on asymmetric relation defines it as a relation where a R b holding implies b R a does not hold. It seems evident that this is not the same thing as R ≠ R˘, which is merely the negation of the expression R = R˘ for a symmetric relation. Many relations are neither symmetric nor asymmetric, including e.g. ≤. Can someone double check this? Dcoetzee 12:20, 21 August 2013 (UTC)
Just came here to write the same thing. Asymmetry should be antisymmetry + irreflexivity. 69.86.161.244 (talk) 20:53, 18 September 2013 (UTC)
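A small check of both comments above (an illustration only; relations are modeled as sets of pairs, with the usual relation-algebraic readings: asymmetry as R ∩ R˘ = 0, antisymmetry as R ∩ R˘ ≤ I):

```python
from itertools import chain, combinations

U = {1, 2, 3}
I = {(a, a) for a in U}
conv = lambda r: {(b, a) for (a, b) in r}

symmetric     = lambda r: r == conv(r)
asymmetric    = lambda r: not (r & conv(r))    # R ∩ R˘ = 0
antisymmetric = lambda r: (r & conv(r)) <= I   # R ∩ R˘ ⊆ I
irreflexive   = lambda r: not (r & I)

LE = {(a, b) for a in U for b in U if a <= b}  # the relation ≤ on U

# ≤ is neither symmetric nor asymmetric, so "not symmetric" and "asymmetric" differ.
assert not symmetric(LE) and not asymmetric(LE) and antisymmetric(LE)

# Asymmetric = antisymmetric + irreflexive, over all 2^9 relations on U.
ALL = {(a, b) for a in U for b in U}
for T in chain.from_iterable(combinations(sorted(ALL), k) for k in range(len(ALL) + 1)):
    r = set(T)
    assert asymmetric(r) == (antisymmetric(r) and irreflexive(r))
```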
Add a four-variable counterexample, as simple as possible
Someone should add the plainest, simplest possible example in predicate logic of a four-variable principle, definition or theorem (maybe plainer than general associativity of relative product and relative sum, if there is one) that can't be done in the algebra. (Tarski & Givant) Ideally, provide it in an English sentence that Joe Blow (in Britain, Fred Bloggs) can understand. The current remark about it leaves the reader hanging. 108.73.1.253 (talk) 20:42, 11 December 2017 (UTC)
Broken Weblinks
Many links are outdated. Please refer to de:Relationsalgebra when fixing them, as all links have been reviewed there. --Ernsts (talk) 00:56, 27 January 2018 (UTC)
Quantifier nesting
We read 'quantifiers can be nested arbitrarily deeply'. But in https://en.wikipedia.org/wiki/Quantifier_%28logic%29 we read 'Relation algebra cannot represent any formula with quantifiers nested more than three deep'. That looks like a contradiction to me, but I know nuffink. 31.52.252.166 (talk) 01:42, 29 March 2019 (UTC)
- I agree that the two statements don't coincide. However, since in relation algebra it is the number of bound variables that is restricted to 3 (not the nesting depth), they aren't contrary to each other.
- Since I'm not an expert in relation algebra, all I can do beyond placing a {{citation needed}} tag is to give an example formula that would be admitted by Relation algebra, but not by Quantifier (logic): "∀x. (p(x) → ∃y. ∃z. (q(x,y,z) → ∃x. r(x,y,z)))". It has 4 nested quantifiers, but only 3 variables; the first x is bound by the outermost quantifier, while the x in r(x,y,z) is bound by the innermost one. Inside the scope of the latter, the outer x isn't available - similar to block structure in imperative programming. - Jochen Burghardt (talk) 09:39, 29 March 2019 (UTC)
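To make the scoping explicit, here is the same formula with the reused x renamed apart (a routine renaming, added only for illustration):

∀x₁. (p(x₁) → ∃y. ∃z. (q(x₁,y,z) → ∃x₂. r(x₂,y,z)))

Four quantifiers are nested, but only the three variable names x, y, z are used in the original formula, which is what the 3-variable restriction concerns.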