
Talk:Boyer–Moore–Horspool algorithm


More Typical Wiki Crap


Well, we have an implementation which does not produce correct results using two compilers. And the original poster does not perform the shift correctly. Do not use this code.

151.196.6.139 (talk) noloader

cross-platform


I'm not native to the C programming language and I don't have time to start sifting through the code in this article to develop a pseudocode representation of this algorithm.

Can someone who knows C better than I please read through and put the code into something cross-platform?

--Oddb411 13:01, 17 September 2006 (UTC)[reply]

C is actually a cross-platform language when written properly. The sample I wrote is meant to be of that kind. I'll see if I can edit it to use less of the kind of syntax that is inherent to C to make it better understood for those who don't have experience of languages of C origin. --Bisqwit 21:00, 17 September 2006 (UTC)[reply]

variable names


Many programmers (WikiWikiWeb:MeaningfulName) now recommend "Always, always, always use good, unabbreviated, correctly-spelled meaningful names." Is there some more meaningful name we could use instead of "hpos" ?

In the Boyer-Moore algorithm article, the "hpos" variable tracks the position where a copy of the needle might possibly start in the haystack, although we generally look at some later position in the haystack. (What would be a more meaningful name for this?)

In this Boyer–Moore–Horspool algorithm article, the "hpos" variable tracks the actual letter in the haystack that we look at. (What would be a more meaningful name for this?)

(The difference is precisely

   BMH_hpos = BM_hpos + npos

).

I think doing it the same way in both articles would be less confusing. Which way (the current BM way, or the current BMH way) would make clearer, easier-to-understand code?

--68.0.120.35 02:32, 25 March 2007 (UTC)[reply]
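For concreteness, a tiny illustration of the two conventions (the example values are mine): with npos the index into the needle, the Boyer-Moore article's hpos marks where the candidate match would start, while this article's hpos points at the haystack character actually being compared, so BMH_hpos = BM_hpos + npos.

haystack = "xxabcxx"
needle = "abc"
BM_hpos = 2                     # candidate match would start at haystack[2]
npos = 1                        # currently comparing needle[1], i.e. 'b'
BMH_hpos = BM_hpos + npos       # = 3
assert haystack[BMH_hpos] == needle[npos]   # both refer to the same 'b'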

Possible miscalculation of comparisons


The article says "For instance a 32 byte needle ending in "z" searching through a 255 byte haystack which doesn't have a 'z' byte in it would take 3 byte comparisons."

I think it meant 7 byte comparisons, since each one skips 32 bytes until there are fewer than 32 bytes remaining.

Can anybody confirm? —Preceding unsigned comment added by 155.210.218.53 (talk) 18:21, 18 January 2008 (UTC)[reply]


- Yes, I concur, and will make the change. (And rather than use base-2 numbers, I'll use base-10 -- it gets the same point across w/ more-obvious arithmetic.) not-just-yeti (talk) 20:00, 3 July 2019 (UTC)[reply]

- Whoops, wait, I take that back -- that'd only be true if the needle were *entirely* "z"s, which is a bit *too* optimistic to include. You still get good average-case performance (skipping half the needle's length on average, each time) for haystacks whose letter-distribution matches the needle (besides 'z') and the needle contains no duplicate letters. …But I don't have a citation, and it's a bit of a mouthful.

- I still think we should better characterize the example; "up to 244 comparisons for a string of length 255" feels a bit unfair to the algorithm's true typical-good-case performance.

not-just-yeti (talk) 20:11, 3 July 2019 (UTC)[reply]
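For anyone who wants to check the arithmetic, a quick instrumented count (throwaway Python of my own; the needle and haystack are made up to match the scenario: a 32-byte needle ending in 'z', and a 255-byte haystack containing no 'z' and none of the needle's other characters either) does come out to 7:

def count_comparisons(pattern, text):
    m, n = len(pattern), len(text)
    # bad-character table; the default shift is the full pattern length
    skip_table = {c: m for c in set(text) | set(pattern)}
    for k in range(m - 1):
        skip_table[pattern[k]] = m - 1 - k
    comparisons = 0
    skip = 0
    while n - skip >= m:
        i = m - 1
        while True:
            comparisons += 1
            if text[skip + i] != pattern[i]:
                break
            if i == 0:
                return comparisons   # pattern found (not reached in this scenario)
            i -= 1
        skip += skip_table[text[skip + m - 1]]
    return comparisons

needle = "a" * 31 + "z"    # 32 bytes ending in 'z'
haystack = "b" * 255       # 255 bytes with no 'z' (and no 'a')
print(count_comparisons(needle, haystack))   # prints 7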

Optimising the table size


I might be wrong, but it looks like space can be saved in the bad character skip table by using a hash of the character instead of its actual value. In the case of a collision, the result will just be a smaller shift.

Not a particularly useful idea when the table's only 256 chars long, but it would allow for much better storage requirements if you were using, say, a 32-bit character set. In a case like that, probably only a small fraction of the character set would be seen, and so chances of collision would be low for a reasonably sized hash. CountingPine (talk) 08:27, 17 July 2009 (UTC)[reply]
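Something along those lines could look like the sketch below (the 1024-entry table and the modulo "hash" are placeholders I invented for illustration). Because later pattern positions write smaller shift values, a collision can only leave a smaller, still-safe shift in a bucket, never a larger one, so correctness is preserved:

TABLE_SIZE = 1024   # far smaller than a 32-bit code-point range

def hashed_skip_table(pattern):
    m = len(pattern)
    table = [m] * TABLE_SIZE
    for k in range(m - 1):
        bucket = ord(pattern[k]) % TABLE_SIZE   # toy hash function
        table[bucket] = m - 1 - k               # on collision, the later (smaller) value wins
    return table

def search_hashed(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    table = hashed_skip_table(pattern)
    skip = 0
    while n - skip >= m:
        if text[skip:skip + m] == pattern:      # simple window compare
            return skip
        skip += table[ord(text[skip + m - 1]) % TABLE_SIZE]
    return -1

print(search_hashed("日本語", "これは日本語のテスト"))   # 3, despite code points far above 255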

KMP


How is this related to KMP? If anything, the other heuristic in Boyer-Moore (which is not in this algorithm) is closely related to KMP's table (e.g. the compute_prefix code in http://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm is exactly the pre-processing in KMP). Raduberinde (talk) 23:36, 17 August 2010 (UTC)[reply]
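For readers who have not seen it, the preprocessing being referred to is KMP's prefix (failure) function; full Boyer-Moore's good-suffix table is built from essentially this computation, and it is exactly the heuristic that Horspool's simplification drops. A minimal sketch:

def prefix_function(pattern):
    # pi[i] = length of the longest proper prefix of pattern[:i+1]
    #         that is also a suffix of it
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

print(prefix_function("abcabd"))   # [0, 0, 0, 1, 2, 0]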

Python implementation


From http://code.activestate.com/recipes/117223-boyer-moore-horspool-string-searching/

Python

# bmh.py
#
# An implementation of Boyer-Moore-Horspool string searching.
#
# This code is Public Domain.
#
def BoyerMooreHorspool(pattern, text):
    m = len(pattern)
    n = len(text)
    if m > n: return -1
    # Bad-character table: default shift is the full pattern length; each
    # character in the pattern (except its last position) shifts by the
    # distance from its last occurrence to the end of the pattern.
    skip = []
    for k in range(256): skip.append(m)
    for k in range(m - 1): skip[ord(pattern[k])] = m - k - 1
    skip = tuple(skip)
    # k indexes the haystack character aligned with the pattern's last character.
    k = m - 1
    while k < n:
        j = m - 1; i = k
        # Compare right to left.
        while j >= 0 and text[i] == pattern[j]:
            j -= 1; i -= 1
        if j == -1: return i + 1   # full match; i + 1 is its starting index
        k += skip[ord(text[k])]
    return -1

if __name__ == '__main__':
    text = "this is the string to search in"
    pattern = "the"
    s = BoyerMooreHorspool(pattern, text)
    print('Text:', text)
    print('Pattern:', pattern)
    if s > -1:
        print('Pattern "' + pattern + '" found at position', s)
## end of http://code.activestate.com/recipes/117223/

(code is released under the PSF license (http://www.opensource.org/licenses/Python-2.0)) <-- What? It makes absolutely no sense to apply a more restrictive license to public domain source code. We can do whatever we want with it, PSF be damned.

Real life examples


Wouldn't there be a benefit in pointing out real-life examples, e.g. https://github.com/ggreer/the_silver_searcher, to substantiate the usefulness of this algorithm?

I saw earlier on the talk page that the algorithm itself was not implemented correctly. I am too lazy to verify the correctness of this code, so I'll leave it to others, but in the above example you have an implementation that does work (though it might have bugs, as any algorithm/software can).

There you have a truly super-fast grep tool using this algorithm for substring searches. — Preceding unsigned comment added by 192.75.88.232 (talk) 19:49, 25 June 2013 (UTC)[reply]

It is amazing that BMH has reigned (since 1980) for 33 years; it's time to utilize new CPUs in a much faster way!


It took me 2+ years to optimize and explore this beautiful and simple algorithm; finally the gods of searching helped me to reveal the FASTEST function for searching one block of memory within another, the so-called MEMMEM. Given that the Windows OS lacks this important function, and seeing how the *nix world has got nothing worthy enough (except some 2007-2008 empty talks in newsgroups), I think it is time for a change; the role of the successor is played by 'Railgun_Sekireigan_Bari'.

Why did you remove my contribution?


Just saw that my BMH order 2/12 link in 'External links' is removed, what is the problem? — Preceding unsigned comment added by Sanmayce (talkcontribs) 17:45, 19 December 2013 (UTC)[reply]

Proposed merge with Raita Algorithm


Raita's is apparently an optimization of BMH. QVVERTYVS (hm?) 15:08, 15 March 2015 (UTC)[reply]

Removing stale merge proposal after 2.5 years; substantial improvements can be independently notable. Klbrain (talk) 21:10, 10 September 2017 (UTC)[reply]

Pseudocode doesn't match the original


I translated this from the Python version that was previously on this page, but it doesn't match Horspool's pseudocode very closely. It also contains some bugs that I discovered when implementing it (corner cases). QVVERTYVS (hm?) 10:26, 6 June 2015 (UTC)[reply]

No mention of Boyer-Moore-Sunday algorithm.


"A very fast substring search algorithm"; Daniel M. Sunday; Communications of the ACM; August 1990

The Sunday variant is slightly more efficient than Horspool.

Thierry Lecroq covers the three versions presented by Sunday in "Exact String Matching Algorithms". — Preceding unsigned comment added by SirWumpus (talkcontribs) 16:37, 5 November 2015 (UTC)[reply]

So be bold, and add it! QVVERTYVS (hm?) 17:26, 5 November 2015 (UTC)[reply]
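In the meantime, a rough Python sketch of Sunday's "quick search" idea, to make the difference concrete (my own reading, for illustration only): the shift is taken from the character just past the current window, which cannot belong to the current alignment, so shifts range from 1 to m + 1 instead of Horspool's 1 to m.

def quick_search(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # shift[c] = distance from the last occurrence of c to just past the pattern
    shift = {}
    for i, c in enumerate(pattern):
        shift[c] = m - i
    skip = 0
    while skip <= n - m:
        if text[skip:skip + m] == pattern:
            return skip
        if skip + m >= n:        # window is flush with the end of the text
            break
        skip += shift.get(text[skip + m], m + 1)
    return -1

print(quick_search("the", "this is the string to search in"))   # 8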

Demo code buggy?


The pseudo code in the article

function preprocess(pattern)
    T ← new table of 256 integers
    for i from 0 to 256 exclusive
        T[i] ← length(pattern)
    for i from 0 to length(pattern) - 1 exclusive
        T[pattern[i]] ← length(pattern) - 1 - i
    return T

function search(needle, haystack)
    T ← preprocess(needle)
    skip ← 0
    while length(haystack) - skip ≥ length(needle)
        i ← length(needle) - 1
        while haystack[skip + i] = needle[i]
            if i = 0
                return skip
            i ← i - 1
        skip ← skip + T[haystack[skip + length(needle) - 1]]
    return not-found

hangs at "skip = 2" when the haystack is "ADBBCCBDCDCC" and the needle is "ABC".

When you add a line like

T[pattern[length(pattern) - 1]] ← 1

, then it works fine.

— Preceding unsigned comment added by Aunkrig (talkcontribs) 08:48, 4 October 2017 (UTC)[reply]

Please note that the second for loop in preprocess is from 0 to length(pattern) - 1 exclusive.
The 'exclusive' means you stop before the 'to' value, so for needle = "ABC", length(pattern) - 1 is 2; you use values i = 0 and 1 only, so T['A'] = 2 and T['B'] = 1, but the entry T['C'] will remain 3, and it will not hang. Greg.smith.tor.ca (talk) 16:51, 9 July 2023 (UTC)[reply]
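To make the point above easy to check, here is a direct Python transliteration of the article's pseudocode (mine, purely for testing): with needle "ABC" and haystack "ADBBCCBDCDCC" it terminates and reports not-found, because T['C'] stays at 3.

def preprocess(pattern):
    T = {chr(i): len(pattern) for i in range(256)}
    for i in range(len(pattern) - 1):       # "exclusive" upper bound
        T[pattern[i]] = len(pattern) - 1 - i
    return T

def search(needle, haystack):
    T = preprocess(needle)
    skip = 0
    while len(haystack) - skip >= len(needle):
        i = len(needle) - 1
        while haystack[skip + i] == needle[i]:
            if i == 0:
                return skip
            i -= 1
        skip += T[haystack[skip + len(needle) - 1]]
    return -1   # not-found

print(search("ABC", "ADBBCCBDCDCC"))   # -1, and the loop does terminate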

"The original algorithm tries to play smart" by checking the last character for good reason


... so it seems wrong to belittle that choice in that comment.


The first operation in the while loop is to compare, somehow, the entire match candidate site with the pattern. If the comparison fails, the next operation is to get haystack[skip + length(needle) - 1] to use as table index into T; this is the rightmost character in the candidate site.

No matter which end you start at, it's quite likely that the first character compared will fail to match, so it's better to start by reading the rightmost one, since you're going to need it anyway. I.e. the first 'if' below will usually fail, and the code will already have hlast in a register for the table index of T.

Factoring the entire string compare into the 'same' function makes the algorithm a little clearer overall, but obscures this detail.

function search(needle, haystack)
    T ← preprocess(needle)
    nn1 ← len(needle) - 1
    nlast ← needle[nn1]
    skip ← 0
    while length(haystack) - skip ≥ length(needle)
        hlast ← haystack[skip + nn1]
        if hlast == nlast
            if same(haystack[skip:], needle, nn1)
                return skip  
        skip ← skip + T[hlast]
    return not-found

Greg.smith.tor.ca (talk) 16:42, 9 July 2023 (UTC)[reply]
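For concreteness, a Python rendering of the variant above, including one possible definition of the helper same (both are mine, just to make the control flow explicit):

def same(a, b, length):
    # compare the first `length` characters of a and b
    return a[:length] == b[:length]

def search_last_char_first(needle, haystack):
    # bad-character table, built as in the article's preprocess
    T = {chr(i): len(needle) for i in range(256)}
    for i in range(len(needle) - 1):
        T[needle[i]] = len(needle) - 1 - i
    nn1 = len(needle) - 1
    nlast = needle[nn1]
    skip = 0
    while len(haystack) - skip >= len(needle):
        hlast = haystack[skip + nn1]    # read the rightmost window character once
        if hlast == nlast:
            if same(haystack[skip:], needle, nn1):
                return skip
        skip += T[hlast]                # ... and reuse it for the table lookup
    return -1   # not-found

print(search_last_char_first("the", "this is the string to search in"))   # 8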