Talk:Knuth–Bendix completion algorithm

Too technical

I added a motivation section so hopefully the article is no longer too technical. I'd like to hear other people's thoughts before I remove the tag, though. TooMuchMath 01:45, 27 April 2006 (UTC)

Monoids

Well, I've got a Ph.D. in computer science. My first thesis (roughly B.Sc. equivalent) was on analysing and transforming unfailing completion proofs, my second one (M.Sc.) was on automatic learning of search heuristics for completion-based theorem proving, and my Ph.D. and PostDoc work was on superposition-based proving. I've taught rewriting systems (at a T.A. level) for several years. I barely recognize KB completion from the current article. Maybe it's because I come from the CS side. I consider the language and notation from BDP89 or BN98 to be nice and natural. For example, I don't see why we need a monoid (what is our associative operation?), and I don't know what "generators" are in this context.
 @InProceedings{BDP89,
   author =       {L. Bachmair and N. Dershowitz and D.A. Plaisted},
   title =        { {Completion Without Failure} },
   key =          {BDP89},
   booktitle =    {Resolution of Equations in Algebraic Structures},
   editor =       {H. Ait-Kaci and M. Nivat},
   year =         {1989},
   volume =       {2},
   publisher =    {Academic Press},
   pages =        {1--30},
 } 
 @Book{BN:REWRITING-98,
   author =       {F. Baader and T. Nipkow},
   title =        { {Term Rewriting and All That} },
   publisher =    {Cambridge University Press},
   year =         {1998},
 }
--Stephan Schulz 07:12, 27 April 2006 (UTC)

Thanks for the comments, Stephan. I encountered the subject of rewriting systems in Sims' book "Computation With Finitely Presented Groups" so it does sound like we are coming from different places. In fact my understanding is limited to rewriting systems wrt monoids and groups so I'm probably not the best person to rewrite this from the CS perspective. I would welcome any information from the CS perspective, though, and would really like to see this article develop into something that is useful for everyone.

To clarify the case of the monoid, the language A* over the alphabet A is the same thing as the free monoid over the generators A. The associative binary operation is string concatenation and the identity is the empty string. The finite presentation ⟨A | R⟩ is basically a way of assigning a partition to A* (by taking the transitive, reflexive, symmetric closure of R) so that elements of M are equivalence classes of A*. The goal (in my mind) of KB is to produce a system to find a minimum (wrt a reduction order <) representative of an equivalence class starting from any of its representatives.
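
To make that concrete, here is a minimal sketch in Python (my own illustration, not from Sims; the function name and the example are mine) of the reduction step, assuming completion has already produced a convergent rule set:

 # Reduce a word to its normal form under a string rewriting system.
 # If the rules are convergent (e.g. the output of Knuth-Bendix
 # completion), the result is the unique minimal representative of
 # the word's equivalence class.
 def normalize(word, rules):
     changed = True
     while changed:
         changed = False
         for lhs, rhs in rules:
             i = word.find(lhs)
             if i != -1:
                 word = word[:i] + rhs + word[i + len(lhs):]
                 changed = True
     return word

 # The cyclic group of order 3, presented by <x | x^3 = 1>, has the
 # convergent system {xxx -> empty word}:
 print(normalize("xxxxx", [("xxx", "")]))  # prints "xx", since x^5 = x^2

Completion itself is the hard part, of course; the point of the sketch is just that once a convergent system exists, finding the minimal representative is mechanical rewriting to normal form (by confluence, any rewriting strategy reaches the same result).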

TooMuchMath 17:20, 27 April 2006 (UTC)

Ok, now it makes more sense. It looks like your version is restricted to what we call "string rewrite systems". You can define reduction systems and completion on a more general level, with a reduction system (R, =>) given by an arbitrary set R and a binary relation =>, and transforming => into a convergent relation using completion. The concrete case then uses first-order terms as the carrier set and equations/rules over terms to describe the relation. String rewrite systems are a further specialization for the case that you only have unary function symbols. --Stephan Schulz 20:24, 27 April 2006 (UTC)
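
(To illustrate the unary-symbol specialization Stephan describes: the string rule ab -> c corresponds to the term rule a(b(X)) -> c(X), where a, b, c are unary function symbols and X is a variable. The word aab is then the term a(a(b(X))), and rewriting its subterm a(b(X)) yields a(c(X)), i.e. the word ac.)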

I'd like to be able to tackle the general case but there are still a few things I don't quite grasp. If we start with an arbitrary set R and a relation => then it makes sense to talk about the transitive closure relation =>* given by reduction sequences. However, I don't see how the notion of term and subterm fits in here. That is, we need a way to identify elements of R as compositions of other elements of R so that we may apply a rule a => b to terms other than a (i.e., applying this rule to ab we get the reduction ab => bb). Perhaps this is somehow captured in a grammar for R but we have taken R to be an arbitrary set, not a formal language.

Also, I don't understand what you mean by "transforming => into a convergent relation using completion." Is this simply the transitive closure =>*? I've never taken a course in logic, so an example of a rewriting system that isn't a string rewriting system would be very helpful. TooMuchMath 21:47, 27 April 2006 (UTC)

In the abstract case, you don't use composite elements. You assume an arbitrary binary relation over arbitrary elements. That each equation potentially defines equality for an infinite number of instances in an infinite number of contexts is irrelevant in that abstract view. Manipulating equations manipulates all the instances in one go. As for a concrete example, here is a completion of the group axioms as in the KB70 paper (straight out of my prover E):
     1 :  : [++equal(f(X1,0), X1)] : initial("GROUP.lop", at_line_9_column_1)
     2 :  : [++equal(f(X1,i(X1)), 0)] : initial("GROUP.lop", at_line_12_column_1)
     3 :  : [++equal(f(f(X1,X2),X3), f(X1,f(X2,X3)))] : initial("GROUP.lop", at_line_15_column_1)
     5 :  : [++equal(f(X1,X2), f(X1,f(0,X2)))] : pm(3,1)
     6 :  : [++equal(f(X1,f(X2,i(f(X1,X2)))), 0)] : pm(2,3)
     7 :  : [++equal(f(0,X2), f(X1,f(i(X1),X2)))] : pm(3,2)
    27 :  : [++equal(f(X1,0), f(0,i(i(X1))))] : pm(7,2)
    36 :  : [++equal(X1, f(0,i(i(X1))))] : rw(27,1)
    46 :  : [++equal(f(X1,X2), f(X1,i(i(X2))))] : pm(5,36)
    52 :  : [++equal(f(0,X1), X1)] : rw(36,46)
    60 :  : [++equal(i(0), 0)] : pm(2,52)
    63 :  : [++equal(i(i(X1)), f(0,X1))] : pm(46,52)
    64 :  : [++equal(f(X1,f(i(X1),X2)), X2)] : rw(7,52)
    67 :  : [++equal(i(i(X1)), X1)] : rw(63,52)
    74 :  : [++equal(f(i(X1),X1), 0)] : pm(2,67)
    79 :  : [++equal(f(0,X2), f(i(X1),f(X1,X2)))] : pm(3,74)
    83 :  : [++equal(X2, f(i(X1),f(X1,X2)))] : rw(79,52)
   134 :  : [++equal(f(i(X1),0), f(X2,i(f(X1,X2))))] : pm(83,6)
   151 :  : [++equal(i(X1), f(X2,i(f(X1,X2))))] : rw(134,1)
   165 :  : [++equal(f(i(X1),i(X2)), i(f(X2,X1)))] : pm(83,151)
   239 :  : [++equal(f(X1,0), X1)] : 1 : 'final'
   240 :  : [++equal(f(X1,i(X1)), 0)] : 2 : 'final'
   241 :  : [++equal(f(f(X1,X2),X3), f(X1,f(X2,X3)))] : 3 : 'final'
   242 :  : [++equal(f(0,X1), X1)] : 52 : 'final'
   243 :  : [++equal(i(0), 0)] : 60 : 'final'
   244 :  : [++equal(i(i(X1)), X1)] : 67 : 'final'
   245 :  : [++equal(f(i(X1),X1), 0)] : 74 : 'final'
   246 :  : [++equal(f(X1,f(i(X1),X2)), X2)] : 64 : 'final'
   247 :  : [++equal(f(i(X1),f(X1,X2)), X2)] : 83 : 'final'
   248 :  : [++equal(i(f(X1,X2)), f(i(X2),i(X1)))] : 165 : 'final'
I hope the format is more or less self-explanatory. Notice non-unary terms like "f(X1,0)" with variables and constants. We start with the three initial equations for the group (neutral element, inverse element, associativity). The 10 equations marked with "final" represent the resulting convergent rewrite system. "pm" is short for "paramodulation", a generalization of critical pairs to non-unit clauses (we only have units in this example, though). "rw" is rewriting. Ordering of equations is done implicitly and not recorded. --Stephan Schulz 22:00, 27 April 2006 (UTC)
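
If I read the trace correctly, equation 5 shows the critical pair computation in miniature: the subterm f(X1,X2) of the left-hand side of 3 unifies with the left-hand side of 1 (instantiating X2 to 0), giving the overlap term f(f(X1,0),X3); rewriting that term with 1 yields f(X1,X3), while rewriting it with 3 yields f(X1,f(0,X3)), and equating the two results gives exactly equation 5 (after renaming X3 to X2).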

Wrong Algorithm

The algorithm described in the article appears to me to be a special case of the Knuth-Bendix algorithm that is described in Knuth and Bendix's paper from Computational Problems in Abstract Algebra.

In that paper, the algorithm operated not on strings, but on expression trees that could contain wildcard terms. Rather than finding a pair of confluent string-rewriting rules by finding two rules whose left sides shared a prefix and suffix, it would perform an expression unification on the left-hand sides; if the rules could be unified, a new rule was added to the system.

A typical example might involve a (prefix) binary operator symbol x, a unary operator symbol i, and a nullary operator symbol e. One might have the rules:

xxABC -> xAxBC (associative law)
xDiD -> e (right inverse)

where A, B, C, and D are term variables that can stand for arbitrary terms. Unifying the left-hand sides yields the assignments A = A, B = B, C = ixAB, D = xAB, so that:

xxABixAB -> xAxBixAB (a special case of the first rule)
xxABixAB -> e (a special case of the second rule)

From which one can then derive:

xAxBixAB -> e

The original Knuth-Bendix paper used this technique, and the algorithm, to find minimal sets of axioms for group theory, among other examples. Group theory normally starts with axioms like:

xeA -> A (left identity)
xAe -> A (right identity)
xiAA -> e (left inverse)
xAiA -> e (right inverse)

(Taking e to mean the identity element, iA to mean the inverse of A, and xAB to mean the product of A and B.) But in fact any three of these imply the fourth, and the algorithm was able to find a proof of this and to eliminate one of the axioms. The example I showed above has inferred the group-theoretic identity A(B(AB)^-1) = e from the associative law and the right inverse law.
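
The unification step can also be checked mechanically. Here is a minimal Python sketch (my own illustration; the tuple encoding of terms and the function names are mine, not from the Knuth-Bendix paper, and the occurs check is omitted for brevity):

 # Syntactic unification of first-order terms. Terms are tuples whose
 # first element is the operator symbol, e.g. ('x', s, t) for xst and
 # ('i', s) for is; variables are plain strings.
 def resolve(t, subst):
     while isinstance(t, str) and t in subst:
         t = subst[t]
     return t

 def unify(s, t, subst):
     s, t = resolve(s, subst), resolve(t, subst)
     if s == t:
         return subst
     if isinstance(s, str):                # s is an unbound variable
         return {**subst, s: t}
     if isinstance(t, str):                # t is an unbound variable
         return {**subst, t: s}
     if s[0] != t[0] or len(s) != len(t):  # operator clash
         return None
     for a, b in zip(s[1:], t[1:]):
         subst = unify(a, b, subst)
         if subst is None:
             return None
     return subst

 # The two left-hand sides: xxABC and xDiD.
 lhs1 = ('x', ('x', 'A', 'B'), 'C')
 lhs2 = ('x', 'D', ('i', 'D'))
 print(unify(lhs1, lhs2, {}))
 # -> {'D': ('x', 'A', 'B'), 'C': ('i', 'D')}

Chasing C through the substitution gives C = i(x(A,B)), i.e. C = ixAB and D = xAB, matching the assignments above.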

The algorithm described in the Wikipedia article appears to be a much more limited special case.

-- Dominus 02:06, 16 August 2006 (UTC)

Addendum: The paper "An Introduction to Knuth–Bendix Completion", A.J.J. Dick, The Computer Journal, 1991; 34: 2-15, would seem to be a more readable explanation than the original Knuth-Bendix paper itself, and bears out my assertion that the Wikipedia article is not describing the Knuth-Bendix algorithm fully. -- Dominus 02:55, 16 August 2006 (UTC)

I also doubt that the current version details the algorithm in its full generality. It would be nice to describe the algorithm for a general equational specification and the resultant term rewriting system -- I have such a description (Term Rewriting Systems, Terese), but I don't quite have the time currently to provide it. Dysprosia 05:05, 16 August 2006 (UTC)
You're right, see the older discussion. It's just that nobody has had time to do anything about it. If this goes on for long enough (weeks or months), I might find some. This is not a promise ;-). --Stephan Schulz 05:49, 16 August 2006 (UTC)
The new tag about dispute of accuracy might be a little misleading. The current algorithm is quite correct as stated, and is the (unoptimized) Knuth-Bendix completion procedure for finding a reduced confluent rewriting system for a finitely presented monoid. The scope of the algorithm is fairly clearly indicated in that section. The only dispute is whether to offer the algorithm for finitely presented monoids as the algorithm. Resolving the dispute might be as simple as renaming the section. I'll do that and remove the dispute tag later this week if no one objects. JackSchmidt 00:15, 9 October 2007 (UTC)
I agree that would be preferable; thanks. Moreover, I don't think there is any dispute that the algorithm as presented is not the algorithm. In this case, at least, there is a canonical source, namely the Knuth-Bendix paper reprinted in Leech. The algorithm as presented in that paper is the algorithm. -- Dominus 01:29, 9 October 2007 (UTC)
I don't think that the presented algorithm is entirely accurate, because it is missing a critical pair case, namely when the lhs (left-hand side) of one rewriting rule is completely enclosed in the lhs of another rule. E.g., with {abc -> d, b -> e}, abc will rewrite to d or to aec, and since this overlap is not detected by the current algorithm, the system cannot be made confluent. Adding this case will make the algorithm correct. — Preceding unsigned comment added by Silvescu (talk · contribs) 16:11, 24 September 2012 (UTC) Just added the correction. I am also strongly in favor of keeping the description of the algorithm for finitely presented monoids (FPM), even if the original version of the algorithm (for term rewriting systems (TRS)) is typed in, as it is very easy to comprehend without having to know unification. FPM and TRS are equivalent in power anyway, since they are both equivalent to Turing machines!
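
For string rewriting, the enclosure check described above is easy to state in code. A minimal sketch (my own illustration; rules are pairs of strings oriented lhs -> rhs):

 # If one rule's lhs occurs inside another rule's lhs, the longer lhs
 # rewrites in two different ways; the two results form a critical
 # pair that completion must consider.
 def enclosure_pairs(rules):
     pairs = []
     for l1, r1 in rules:
         for l2, r2 in rules:
             if l1 == l2:
                 continue
             i = l1.find(l2)
             if i != -1:  # l2 occurs inside l1
                 pairs.append((r1, l1[:i] + r2 + l1[i + len(l2):]))
     return pairs

 print(enclosure_pairs([("abc", "d"), ("b", "e")]))
 # -> [('d', 'aec')]: abc rewrites both to d and to aec

(The prefix/suffix overlap case is handled separately, as described earlier on this page.)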

Comment

The focus of my academic interest over the last 20 years has been to automate abstract mathematical reasoning in order to make it usable by those who have no time to immerse themselves in mathematics. I have now changed the importance rating from low to high; see the reason for that below. In fact, the field should also be changed from plain applied mathematics to artificial intelligence and applied mathematics, but I don't know how to do that.

It has turned out over the last 15 years that the abstract idea behind Knuth-Bendix is simply essential for automatically finding and localising the calculation/deduction ideas needed in applied abstract mathematics. The Knuth-Bendix critical pairs that survive the subsequent reduction phase of the algorithm are in fact those ideas. As soon as we replace the notion of a critical pair with the notion of an interfering pair of knowledge resources, Knuth-Bendix tells us how to find key ideas in a much more general situation. Hence it is only historical surface that Knuth-Bendix is illustrated using classical structures like groups, rings, and the like.

{{technical}}

I completely agree that the article is too technical. I suggest removing almost all the technical details; we have to tell the whole Knuth-Bendix idea in a much more human way. Sorry, but in the near future I do not see enough time to do it myself. But I encourage everybody to do it, to forget about the technical details, and to focus on the later, much wider use of the algorithm. It is also very important to tell the reader that the Knuth-Bendix algorithm is in fact NOT an algorithm!

prohlep (talk) 22:32, 18 July 2008 (UTC)

I think the lead makes that clear using the term semi-algorithm in the first sentence. 86.127.138.67 (talk) 10:22, 21 April 2015 (UTC)

Bad Article and Example

The article and the example are unfortunately quite useless. Regarding the example, x^3 = 1 of course implies (xy)^3 = 1 if it is meant to be read under generalization, and the set-definition notation is equally incorrect in the first place. If x and y are meant to be constants, then the set-definition notation is even more buggy. Sigh.

"x^3=1 of course implies (xy)^3=1" - Not true, these are finitely presented monoids and there is no unification going on. You cannot substitute x->xy in the first expression to get the second. However somwhat in in the spirit of your comment I think (xy)^3 should be written explicitly as xyxyxy and the same for x^3. — Preceding unsigned comment added by Silvescu (talkcontribs) 16:40, 24 September 2012 (UTC)[reply]
Then maybe we should rename x, y to a, b. Something that looks more like constants than variables. --Fschwarzentruber (talk) 21:10, 29 May 2016 (UTC)