Talk:Computer chess/Archive 1


Introductory paragraphs corrected and clarified

The previous introduction erroneously gave the impression that computer chess dates back to 1769; I clarified that this was a hoax and linked to The Turk as appropriate. I then noted circa 1950 as the earliest realization of legitimate computerized chess (according to Bill Wall's timeline, a link to which I added to the external links section). Also, the commentary on the motivations and success of chess computerization was separated into a new paragraph and refactored a bit. JimD 00:42, 2004 Jul 27 (UTC)


Computers won't solve chess

Removed from main page: Theoretically, at some point in the future, a computer will be able to play all possible chess games and determine the optimal move for any given board position (using Moore's Law as a guide, it probably won't happen until 2030). For example, the fastest chess programs can "look ahead" and completely finish the last 15 moves in a game (because of "pre-calculated" endgame tables). This is possible because there is a finite number of ways the chess pieces can be arranged on the chessboard.

Finite, yes, but very, very large. As noted at the beginning, the number of possible board positions is probably greater than the number of elementary particles in the universe. Moore's misquoted law says computers double in speed every 18 months, but to search one more move ahead on a chessboard typically requires a factor of 16 increase. Computers in 2030 will (by Moore's Law) be a mere million times faster, which gets you all of 5 more moves. --Belltower
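The arithmetic in the comment above can be checked directly (a sketch only; the 18-month doubling period and the factor of 16 per ply are the figures quoted in the comment, and 2004 is taken as the starting year since that is when it was written):

```python
import math

# Back-of-the-envelope check: speed doubling every 18 months vs. a
# factor-of-16 cost per extra ply of search depth.
years = 2030 - 2004
doublings = years * 12 / 18            # one doubling per 18 months
speedup = 2 ** doublings               # total speed increase by 2030

extra_plies = math.log(speedup, 16)    # plies gained at 16x per ply
print(f"speedup ~{speedup:,.0f}x, extra plies ~{extra_plies:.1f}")
```

The result lands in the same ballpark as the comment: a speedup in the hundreds of thousands, buying only four to five extra plies of search.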
I think solving chess is possible. --Aloril 11:29, 9 Sep 2004 (UTC)

Mathematically, YES, chess is solvable but almost impossibly so. Please refer to the thorough discussion which already transpired on the following page: http://en.wikipedia.org/wiki/Talk:Chess (See section 10.) --OmegaMan

I read the archived debate, and I find the topic absolutely fascinating. This is great stuff. Here's my take: We know chess is finite and therefore theoretically solvable. We also know that theoretically there might not be enough time available (whatever this means) to conduct the actual solving. However, no one knows what computing advances are on the way -- perhaps quantum computers will be up to the task. Perhaps something else. From a purely logical standpoint, I think we have a duty to acknowledge that the game has a solution even if we don't have the capability of computing it yet. Whether that gets added to the article or not is another matter. It's a great article as is. Ikilled007 08:58, 25 February 2007 (UTC)

I am not sure it is possible to solve chess, even theoretically, because the capability of computers is limited to the number of atoms available in the universe. SyG (talk) 08:11, 31 May 2008 (UTC)

Computer chess

Isn't GNUchess vastly weaker than other widely distributed software? Fritz or Rebel could defeat most master players under tournament conditions, but I am not sure GNUchess could. Can anyone confirm or contradict? Thanks. --Karl Juhnke

Fritz, Rebel or any of the top chess programs are well above the strength of a FIDE Master (FM). Deep Fritz recently defeated the world champion. But as you say, GNU Chess is much weaker. Spiderpop 09:08, 1 March 2007 (UTC)

From the main article: "Minor variations to the rules would either make chess a trivially easy task for a computer to win, or conversely leave even elaborate computers easy pickings for amateur players." Really? What minor variations? Does anyone have a citation for this? -- The Anome

It is partly a supposition on my part, but I believe a good one. It is based on the facts that a) the different algorithms for playing various board games (checkers, chinese chess, othello, etc.) are all variations on minimax searching with pruning heuristics, and that in some of these, computers can be beaten by rank amateurs, but in others, computers are the undisputed world champions. Secondly, it is a general characteristic of tree search algorithms like this that they are *extremely* fragile—a minor change in the pruning heuristics and suddenly things go to pot. --Robert Merkel

I, too, am curious what "minor variations" you have in mind. For example, I understand that it doesn't particularly tip the balance of power between computers and humans to play Fischer Random chess, or chess at material odds. The variations at which computers are known to stink relative to humans all seem to me to involve major rule changes, e.g. bughouse.

The reason computers excel at some games and do poorly at others depends, AFAIK, mostly on the presence/absence of a quick, reliable static evaluation function. In chess you get a very fast and reasonably accurate static evaluation simply by counting up material. Similarly for pruning heuristics, the most important thing is to keep examining a position as long as there are captures, checks, or promotions. Otherwise the static evaluation is OK.
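The material-count evaluation just described can be sketched in a few lines (a toy illustration; the list-of-letters position encoding and the conventional piece values are my own assumptions, not anything stated in the discussion):

```python
# Toy static evaluation by material count. A position is a list of
# piece letters: uppercase for White, lowercase for Black.
PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

def evaluate(pieces):
    """Material balance from White's point of view, in pawns."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# White has a rook against a knight (up "the exchange"):
print(evaluate(['K', 'R', 'P', 'P', 'k', 'n', 'p', 'p']))  # 2
```

The quiescence idea mentioned above corresponds to refusing to trust this static score while captures, checks, or promotions are pending, and extending the search until the position is quiet.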

In the absence of any specific examples of how a small change in rules makes a big change in computer playing strength relative to human playing strength, isn't the contested sentence purely speculative? --Karl Juhnke

Nice edit, Axel, issue resolved. Does that mean we should delete the talk section about it? --Karl Juhnke

No, we usually junk talk entries only if the discussion refers to a completely different version than the current one. AxelBoldt


Answer to GNUchess question and more suggestions

With regards to the question about GNUchess: yes, GNUchess is weak even in comparison with freeware products like Crafty, Yace, Ruffian etc. The last, Ruffian, is as of June 2003 generally acknowledged as the strongest chess engine you can get without paying a cent. The strongest engine whose source is available is Crafty, by Dr Hyatt (author of the Cray Blitz mainframe program, a two-time winner of the World Computer Chess Championship in the 80s). Crafty is a chess engine with a long history. I'm surprised it doesn't get a mention.

GNUchess is no pushover though. Depending on hardware and time controls, perhaps only FIDE rated players about 2100 can be certain to match Gnuchess, though weaker players can win one or two games.

Does anyone think it's a good idea to include information about endgame tablebases? Maybe also a small mention of how many chess engines (200+ at last count) conform to one of the two communication protocols, Xboard or the Universal Chess Interface (UCI), which allow engines to be used in various interfaces, both commercial (Chessmaster, Fritz, etc.) and non-commercial (Winboard, Arena, Eboard, Knights, etc.).

I'm new to this, so I'm not sure what's the best way to go about adding entries.

Some external links

Endgame Tablebase FAQ

List of engines and ratings

Some info about Winboard/UCI engines

Aaron Tay


Various other things that could be added to the article

  • Early history of the chess computers (including the theoretical work of Babbage, Zermelo, Quevedo, Von Neumann, Shannon and Turing)
  • The Russian BESM
  • How chess AI work led to the development of alpha-beta searching.
  • The ITEP versus Kotok-McCarthy match
  • The development of custom chess hardware (à la Machack 6 and BELLE)
  • The Fredkin Prize
  • The International Computer Chess Association
  • The ICCA journal
  • Levy's Computer Chess Compendium
  • Non-bruteforce approaches to chess AI, for example the TDLeaf algorithm

--Imran

Well, isn't the ICCA journal now the ICGA? Interesting ideas, maybe I'll work on one of them, but on a new page maybe?

Aaron Tay

Yes, it's operated under the name of ICGA since the first issue of 2000. --Imran 14:56 24 Jun 2003 (UTC)

Is there some confusion on there being more than one David Levy? The link here goes to an astronomer whose website makes no mention of chess—this needs sorting and/or disambiguating (or at least some mention of chess being added to David Levy's profile). --/Mat 22:46, 6 Apr 2004 (UTC)

If someone wants to write up an article about the chess David Levy, here's some information I found about him:
Born: March 14, 1945
Won the 1997 Loebner Prize
Founder and Chief Organiser of the annual Mind Sports Olympiad,
Founder of Computer Olympiads
Founder of World Computer Chess Championships
Since 1999 the president of the International Computer Games Association.
--Imran 11:25, 7 Apr 2004 (UTC)

Our article here says that Leonardo Torres y Quevedo built his rook and king versus king-playing machine in 1890. This agrees with this webpage, but disagrees with this one from Chessbase and this rather detailed piece in Spanish which both claim it was 1912. I don't know which is correct myself; I just wanted to note the discrepancy and ask if anybody knew with absolute certainty which is correct. I'm tempted to believe 1912 myself, since that Spanish-language page looks rather well researched. --Camembert

A bit more on this: the Oxford Companion to Chess, in its "Automaton" entry, states this machine was first exhibited in 1914. This suggests a 1912 build date more than it does 1890, though the OCC doesn't actually offer a date for its construction. Still, I think the balance of evidence suggests 1912, and since nobody has come up with anything completely comprehensive, I'm going to change the article to 1912. --Camembert

Inclusion of Arimaa

I notice that a reference to Arimaa was added to this page and then deleted. It's true that Arimaa isn't as well known as Go, but the people who do know about Arimaa are predominantly AI researchers. That is to say, the importance of Arimaa as an area of AI research far exceeds its importance as a strategy game. Furthermore, there is a $10,000 standing prize offer for writing an Arimaa program that can beat the top humans. If a reference to Arimaa from this article isn't yet appropriate, then I predict that in a year or two it will be, as the game continues to prove itself to be interesting and intractable to computers. --Fritzlein 18:31, 11 Nov 2004 (UTC)

So is it still out of the question? lysdexia 06:34, 12 Nov 2004 (UTC)

I would prefer to NOT include Arimaa within this article. For that matter, Go (board game) as well.

This article is entitled "Computer chess" and NOT "Computer board games". This is a significant distinction. Although the broad definition of "chess variants" is nearly interchangeable with "board games", the specific definition is not. Of course, I cannot judge for everyone else which definition we should be using.

I think it is instructive to point out that at Zillions of Games, a universal board game program, their webmaster and game expert Ed van Zon has devised an index of 14 categories for placing all 1023 download entries (as of Nov. 14, 2004).

Zillions Of Games | Game Index

Please note that only the "checkmate" category (274 entries) and the "checkmate combo" category (56 entries) contain entries intelligibly related to chess, shogi or xiang-qi, which are commonly known as "chess variants".

Furthermore, please note that Arimaa is properly categorized as a "breakthru-race game" and Go (board game) is properly categorized as a "territory game". Hence, their mention in an article such as this one is questionable. BadSanta

I vote against including many other games in this category. I'd like to make an exception for Go as a really well known example, illustrating why chess as a game might be different from other board games. If we include other board games, where do we draw the line? Until Arimaa has replaced Go as the obvious counterexample, I'd say to leave things as they are. Sander123 16:33, 15 Nov 2004 (UTC)

Go is a much more obvious counter-example than Arimaa, partly because millions of people play Go whereas hundreds (or only dozens?) of people play Arimaa, and partly because the gap between the best computers and the best humans is much larger for Go than it is for Arimaa. The one thing that would qualify Arimaa for this page is its superficial similarity to chess, i.e. using the same board and pieces. But Arimaa plays so differently from chess that I agree with those who are uncomfortable calling it a chess variant. --Fritzlein 01:16, 8 Feb 2005 (UTC)

Endgame

Also, the Nalimov tablebases do not consider the fifty move rule, under which a game where fifty moves pass without a pawn move or capture is automatically drawn. This results in the tablebase returning erroneous results in some positions such as "Forced mate in 66 moves" when the position is actually a dead draw because of the fifty move rule.

"a game where fifty moves pass without a pawn move or capture is automatically drawn." I thought this had to be declared a draw; since when is this automatic?

Right - it has to be claimed by one of the players, it is not automatic. Bubba73 (talk), 00:55, 13 February 2006 (UTC)

I thought that the 50-move rule had a specific exemption for positions for which theory predicted a result in a larger number of moves? Such databases surely count as "theory". David.Monniaux 14:52, 3 May 2005 (UTC)

There used to be exceptions to the fifty move rule for material imbalances which could theoretically take more than 50 moves to win, but these have now all been scrapped (how this relates to the article, I don't know, but I thought I'd better mention it). --Camembert
Well, it certainly relates: there are positions where the computer may know how to win (due to endgame libraries), but only in > 50 moves, which leads to a drawn game if the exceptions I alluded to are not present. Of course, this is not really a problem: the endgame libraries can probably be curtailed to endings in < 50 moves. David.Monniaux 06:39, 5 May 2005 (UTC)
It actually touched off considerable debate when endgame tablebases first discovered forced wins of longer than 50 moves without capture or pawn advance. The initial reaction of the majority of chess players was that the 50-move rule should NOT impose a draw if one player has a forced win on the board that might take longer than 50 moves. The 50-move rule was never intended to cut off perfect winning play from a given position, only to cut off pointless play. On the other hand, it soon became clear that in practice it was difficult to categorize what positions deserved extended opportunity to win. As increasingly long forced wins were discovered, a consensus grew that the "purist" position was untenable in practice. Either the drawing rule would have to become a "250-move" rule in all cases, or there would have to be a ridiculously complex system for determining how long one would be allowed to play on.
An additional consideration is that most humans don't have an understanding of pawnless positions that would allow them to convert a won position in, say, 100 moves, if they can't convert it in 50. In essentially every human versus human chess game, if there have been 50 moves without capture or pawn advance, then truly no one is making progress, and extending the game would not so much allow one side to execute good technique as allow one side to play on hoping for a blunder.
The fifty move rule has once again become standard, and with good reason, but the purist mode of thinking is so intuitive and entrenched that it still requires explicit explanation as to how a "mate in 66" can also be "dead drawn". --Fritzlein 15:29, 6 May 2005 (UTC)
Note also most humans can't defend positions to their maximum number of moves, so merely truncating the endgame table base to 50 moves might be fine for analysis, but might actually draw games against humans that would have been won in practice. There is no right answer - I suggest playing the best possible chess and claiming a moral victory if the computer has to accept a draw due to the 50 move rule, with mate in 2 on the board. Sometimes it really is how you play the game, and not the result that matters.
-- Simon
The 50 move rule is undoubtedly the most unnatural and least satisfactory rule in chess. Theoretically, it could be dropped entirely, as the 3 move repetition rule will eventually come into play in any position where true progress is impossible. It would be perfectly reasonable to drop the 50 move rule for chess between computers to produce a purer contest with more decisive games. For human players, the 50 move rule avoids the fatigue involved in games of hundreds of moves. With regard to positions like the Q+RP v Q ending where it takes 71 moves to get the pawn from the sixth rank to the seventh rank, it is not very useful to relax the 50 move rule between human players, as no human without a tablebase can approach accuracy in such an ending. The result of such an ending between humans depends on whether the relative inaccuracy of the two players' moves permits a win in 50 moves. This is one reason that the relaxing of the 50 move rule for certain positions was dropped again, as it did not really add to the quality of human competition. Elroch 23:19, 12 November 2006 (UTC)
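The "mate in 66 that is dead drawn" point discussed above can be made concrete with a toy check (my own sketch; real tablebase probing is far more involved, and the function assumes no further captures or pawn moves occur in the winning line, as is typical of pawnless endings):

```python
# Why a tablebase "mate in N" can be a draw under the 50-move rule:
# if more than 50 moves pass with no capture or pawn move before mate,
# the defender can claim a draw.

def drawn_under_fifty_move_rule(dtm_moves, moves_since_last_reset):
    """dtm_moves: tablebase distance to mate, in full moves.
    moves_since_last_reset: moves already played since the last
    capture or pawn move. Assumes the winning line contains no
    further captures or pawn moves."""
    return dtm_moves + moves_since_last_reset > 50

print(drawn_under_fifty_move_rule(66, 0))   # True: "mate in 66", drawn
print(drawn_under_fifty_move_rule(30, 10))  # False: mate arrives in time
```

Truncating a tablebase at 50 moves, as suggested above, amounts to scoring any position for which this function returns True as a draw rather than a win.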

Anti-computer Chess

In the edit summary for 30 August 2005 13:48, Lenthe wrote

added link to a krabbe article about anti-computer chess. actually this article's enthousiasm about the strength of computers should be tempered a bit

It's an interesting link, but his premise seems bogus; why do a few less-than-optimal games and games played without the 50-move rule (a professional-only rule) mean that computer chess isn't chess? Seems like a meatbag presumption against digital beings. The assumption that passing the Turing Test has anything to do with playing chess seems like pure anthropocentrism. The fact that someone has a playing style that can beat computers again means nothing, unless what you really mean is that computer chess isn't human chess. As for today, Nemeth is effective against a number of PC-level programs, but a lot of playing styles have made a big hit when they first appeared and became less important when strategies were developed against them. Computers will likely be no different; new programming against Nemeth will be developed and it will just be a speed bump in the progress of better playing computers.--Prosfilaes 18:16, 30 August 2005 (UTC)

Quantum Computing?

In the "solving chess" section, I am completely uncomfortable with the paragraph focusing upon "quantum computing", a possible-only-if-the-theories-are-correct, future technology. Today, it is science fiction. Accordingly, I think it would be more responsible to remove its mention from this real technology article. --AceVentura

Quantum computing is not science fiction; it already exists, albeit on a very small scale. It's a perfectly reasonable extrapolation to predict that chess might be solved using it, and the potential impact of quantum computing is being seriously considered in other areas such as cryptography. WolfKeeper

Wikipedia- Quantum Computing

http://en.wikipedia.org/wiki/Quantum_computing

Please note an article's link featured on this page entitled, "Unsolved problems in physics: Is it possible to construct a practical computer that performs calculations on qubits (quantum bits)?"

Although quantum computing may indeed become science fact or, less inaccurately, real technology in the near or distant future, its basic possibility would not be debated by physicists if its future were assured. Upon closer inspection, this is a marginal case. I change my stand to neutral. Other editors may decide its fate. --AceVentura

Quantum computers have been shown to be very unlikely to solve NP-complete problems (generalized chess is PSPACE-complete and thus NP-hard) in polynomial time (Umesh Vazirani). Quantum algorithms have been important for cryptography because of the quantum factoring algorithm, which is a specific algorithm that is extremely unlikely to be related to chess in any way. Quantum computing is unlikely to replace conventional methods because the algorithmic areas where quantum computers are promising are fairly specialized. —Preceding unsigned comment added by 216.73.210.86 (talk) 21:42, 26 September 2008 (UTC)

Links

The link to http://www.brainsinbahrain.com/ seems to fail. The link to Computer-Chess Club leads to a password request. Please fix these links if possible, or I will erase them (or you can). I do not know how to fix them (is there a misprint, or did the websites change their URLs?). Gala.martin 19:37, 25 January 2006 (UTC)


Any idea why the link to http://freechess.50webs.com (Zarkon Fischer's Free Chess Programs) has been removed? I think it is of interest to many visitors of this page - more so than some of the other links. However, I am not going to add it again if there is a reason why it has been deleted. Thanks. ZF.

Match drawn or tied?

The article lists matches as being drawn, for instance:

  • 2002, Vladimir Kramnik draws an eight-game match against Deep Fritz.
  • 2003, Kasparov draws a six-game match against Deep Junior.
  • 2003, Kasparov draws a four-game match against X3D Fritz.

I would prefer to say that games are drawn but matches are tied. Does anyone else have an opinion? Bubba73 (talk), 01:33, 13 February 2006 (UTC)

I think it's irrelevant whether to use "tie" or "draw". As the article you linked to suggests, a tie is the same as a draw. Fetofs Hello! 21:27, 22 March 2006 (UTC)
Yes, but we don't usually speak of a chess game being "tied", we say "drawn". Bubba73 (talk), 21:44, 22 March 2006 (UTC)
I agree that games are drawn but matches are tied. A tie implies equal number of points. Stephen B Streater 07:44, 9 April 2006 (UTC)

Endgame

I've edited this a bit. Here is some justification.

Stored databases

With six pieces, the number of legal positions is

< 64^6 = 68,719,476,736

With the board taking approx 7 bits per piece, this equates to 360GB. Thus a modern 10TB RAID array, which could be made available on the web, could store around 30 such databases, even if all the impossible-to-reach positions were allowed for.
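The estimate above can be reproduced directly (a sketch of the arithmetic only; the 7-bits-per-piece figure and the 10TB array are taken from the comment):

```python
# Reproducing the storage estimate: 64^6 position slots, six pieces at
# roughly 7 bits each, compared against a 10TB RAID array.
positions = 64 ** 6                     # 68,719,476,736
bits_per_position = 6 * 7               # six pieces, ~7 bits per piece
total_bytes = positions * bits_per_position // 8

print(total_bytes / 10**9)              # ~361 GB, matching the "360GB"
print(10 * 10**12 // total_bytes)       # ~27 such databases fit on 10TB
```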

Calculation at run time

OTOH, any single position could be solved in real time with enough RAM. Alpha-beta searching with a normal forward-looking search with Win/Loss/Draw scoring would mean that only (in an ideal world) the square root of the number of positions would be searched before all relevant positions appeared in the database. With six pieces, this equates to 0.25M positions. Even with sub-optimal searching to the tune of thousands of times, with modern Macs supporting 16GB RAM, it would be a short matter to go through all positions allowed by the winning player's strategy, thus solving a given position. Stephen B Streater 07:10, 9 April 2006 (UTC)
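The square-root figure quoted above follows directly from the position count (a sketch of the arithmetic only; a real alpha-beta search with imperfect move ordering examines more than this ideal minimum):

```python
import math

# Ideal alpha-beta (perfect move ordering) examines roughly the square
# root of the positions a full minimax search would visit.
positions = 64 ** 6
ideal_nodes = math.isqrt(positions)     # 64**3 = 262,144, i.e. ~0.25M

# Even searching thousands of times more than the ideal stays tractable:
pessimistic = ideal_nodes * 1000        # ~262 million positions
print(ideal_nodes, pessimistic)
```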

Brute Force vs. Strategy

I updated this section to conclude that rather than being a conflict between two completely different approaches, that the challenge is to find the middle ground between them that works the best.

I also updated the summary of the research of Adriaan de Groot to make it clear that these were his conclusions based on interviews, and not necessarily fact. I find it somewhat ludicrous to think that a master considers only 40 to 50 positions before making a move (but I do recognize that those were his conclusions, and he was a master). Bill Alexander

When I entered the First International Computer Games Championship (it had a name something like that!) there was a big chess contingent and lots of grandmasters were there. They also thought they looked at around 40 positions. The consensus was that players at all levels looked at the same number of moves, but the good players looked at the right moves, with a branching factor of 1.3. Stephen B Streater 06:50, 14 May 2006 (UTC)
Well Bill, the problem is that a type A/brute force program simply calculates all possibilities to the limit of its time constraint and the speed of its evaluation algorithm; that's how it got the name "brute": it's an archaic, slow, inefficient approach. Adding parallel search/threads does not make the approach any smarter; it is still calculating all possibilities to the limit of its time constraint and the speed of its evaluation algorithm. For example, let's say you add a second processor/thread. That would be like sending two people into the desert to search for a lost diamond ring: yes, it will find the ring quicker than if only one person was searching for it, but it is still an inefficient approach. Type B, on the other hand, is the smart approach: it discards the ridiculous moves and only calculates the moves that show promise, just like a human does. So back to the analogy, a type B approach would be to ask the person who lost their ring where they were when they were last certain the ring was still in their possession, and then start the search from that last known position. This method is a lot smarter and more efficient because instead of combing through an entire desert, you would only have to search a small percentage of the desert in order to find the ring. As you can see, these two approaches are entirely different; there can be no middle ground. A chess program is either type A or type B; there is no type AB or BA or C. Do you understand what I'm trying to say? Dionyseus 07:53, 14 May 2006 (UTC)

I'm not sure ... Is a program that uses alpha-beta pruning a Type B program by definition? If so, then I am pretty sure that no one has ever written a strong Type A program - and we might as well drop this section from the article. I would call all the methods listed in the section Search Techniques methods that could be used by a Type A program, even though they end up limiting the search. The distinction is that they do so based on the results of previous searches.

Don't get me wrong, I think that good chess heuristics are absolutely crucial to making a good chess program. Besides pointing the computer in the direction of good moves generally, they allow it to search to find the best lines more quickly, and lead to many quick alpha-beta cutoffs, shortening the search time. I just have never seen a strong program that generates all the legal moves, uses some sort of pattern recognition to reduce this list to 10 or so, and then does a tree search on these. I certainly don't so easily give up on moves when I am playing chess.

By the way, I would be interested in references confirming that Deep Junior, Fritz and Hydra are Type B programs. I own a copy of Fritz, and use it often. Its diagnostics seem to indicate that its search is quite exhaustive (exhausting!). In general I would be interested in seeing some references that indicate that the opinion that Type B programs hold the future is held by someone besides you. Bill Alexander

Well the strength in type A programs came from their hardware. Back then it really did appear as if type A was the best approach because it was much easier to just calculate everything than the seemingly impossible task of making the engine smart. Deep Junior, Fritz, Shredder, and Hydra are all type B programs.
It's not black and white; there's a continuum between type A and type B, depending on how much chess knowledge gets built in. WolfKeeper 08:06, 15 May 2006 (UTC)
Take a look at 'Is Hydra as strong as Deep Blue?' in this Hydra FAQ, they explain that Deep Blue was the last type A/brute force searcher, and that Hydra is type B.
You're reading too much into this. Deep Blue was somewhat less sophisticated than the other programs it was contemporaneous with, because it was harder to build chess knowledge into silicon. Hydra has more flexible hardware and the team has simply been able to capitalise on this. Hydra is more sophisticated, but it's still type A. That's why it calculates 200+ million moves per second; type B wouldn't do that. WolfKeeper 08:06, 15 May 2006 (UTC)
As for Deep Junior, Fritz, and Shredder, you can see for yourself that they are type B by going into menu 'Engine,' and then choosing 'Change Main Engine,' and then 'Engine Parameters,' and you can see that you can turn off some of the pruning and selective algorithms, but note that you cannot make them play as truly brute force, at least not in the Fritz 9 interface, I seem to remember that the Rebel 10 interface had an option that could force it to play in brute force mode. Dionyseus 03:43, 15 May 2006 (UTC)
That's not really true. All programs are somewhere between type A and type B. Hydra very probably has more chess knowledge than Deep Blue, but it's just a question of degree. Most programs try to evaluate a very, very large number of moves per second; and Hydra is the king of that right now. That's *very* type A behaviour. A type B program would evaluate *much* more slowly. WolfKeeper 08:06, 15 May 2006 (UTC)
The only reason Hydra can evaluate so many positions is because of its hardware; it can calculate just as fast as Deep Blue, but the difference is that Deep Blue was type A and was only able to reach a depth of 12 on average, whereas Hydra can reach depth 18 on average thanks to its type B approach.
It's not so simple. It uses a hybrid approach.WolfKeeper 20:16, 15 May 2006 (UTC)
If you make Rybka or Shredder play on Hydra's hardware, it is still a type B program, a type B program on very powerful hardware. Dionyseus 08:54, 15 May 2006 (UTC)
Hydra is a system of hardware AND software. Even if Rybka or Shredder was portable onto Hydra's hardware you'd doubtless end up disabling the chess coprocessors. Even if it might be possible to modify Rybka to use the coprocessors, would that still be Rybka? No. It wouldn't be either Hydra or Rybka; it would be a new engine. WolfKeeper 20:16, 15 May 2006 (UTC)
I have found this discussion to be very enlightening, and I stand corrected on a number of points. I still think that the conclusions in this section of the article too strongly favor Type B. I think the jury is still out about whether pruning prior to tree search produces the best chess. Bill Alexander
My personal view is that the current version as it stands gives readers the wrong impression that chess engines moved from Type B to Type A and then back to Type B again in the 90s. From the reading of this discussion, pretty much everyone thinks this is wrong except for Dionyseus. Aarontay 20:09, 12 January 2007 (UTC)
Well can you name any type A engine/machine still in existence today? Deep Blue was really the last type A machine. Dionyseus 15:24, 15 May 2006 (UTC)
All of them. They're all type A programs, but with some features of type B. Type A and Type B are idealised things. It's quite incorrect to say that all current programs are type A or type B, they have aspects of both.WolfKeeper 20:03, 15 May 2006 (UTC)
No, they're not. You don't understand the differences between Type A and B. Here's the simple way to explain it: Type A programs evaluate all moves, all possibilities. It would be like sending one person to find someone's ring in a desert; it's going to take that person millions of years to find it. If you add a processor, it is still Type A; it is like having two people looking for the ring in the desert. Yes, it will find the ring quicker than if there were only one person, but it is still a slow and inefficient method; it would take them thousands of years to search the entire desert. If you add 63 processors instead, it would still be Type A: 64 people searching for the lost ring in the desert, and it still would take them years to search the entire desert. Type B, on the other hand, is the smart approach. A type B approach would be to ask the person who lost their ring where they were when they were last certain the ring was still in their possession, and then start the search from that last known position; instead of having to search the entire desert they only need to search a small portion of it. If you add 63 processors, it is still Type B; it would just make it a lot quicker. Dionyseus 00:30, 16 May 2006 (UTC)
Ok, a few points: a) I'm a professional software engineer. b) I've personally written several computer game searches, including chess. c) I've read books on computer chess programming, so I do know something about it, so you can't bullshit me with talk about deserts. WolfKeeper 01:17, 16 May 2006 (UTC)
Type A and Type B were concepts outlined by Claude Shannon. His 'type A' program didn't even use alpha-beta pruning. Now, after alpha-beta was invented and added to a program, is that still a type A program? I think most people would say yes. What about the 'killer heuristic'? Again, mostly yes, some no, since the killer heuristic works because of specific features of chess as a game. What about simple positional heuristics relating to king vulnerability? Debatably yes, debatably no. So we have a fuzzy boundary; it depends what you think 'type A' is and isn't. And people will violently disagree. In fact I can quite reasonably take the position that no really successful program has ever been pure type A after the invention of alpha-beta. So given the extreme fuzziness of A/Bness, not only is it a stupid argument, I fundamentally disagree with the way you have edited the article to state that Deep Blue was type A and Hydra is type B. It really, really, truly is not that simple, and you'd have to be very naive or ill-informed to argue about type A or type B in anything more than a relative sense (obviously adding more chess knowledge makes something more type B). So yes, Hydra is more type B than Deep Blue, and Rybka is probably more type B than Hydra.WolfKeeper 01:17, 16 May 2006 (UTC)
I read the stuff about Deep Blue being a pure brute force machine and believed it for a while, but don't believe the hype; I found some of the papers on Deep Blue. It's so very wrong: Deep Blue was actually quite sophisticated, and had features like quiescence search/variable depth and so forth. It's not easy to beat Garry Kasparov; a pure type A program would never have won.WolfKeeper 01:17, 16 May 2006 (UTC)
Of course it's not easy to beat Garry Kasparov, but Deep Blue was able to win the match because it was capable of calculating 200 million positions per second and was able to reach on average a depth of 12 plies; in other words, it was able to fully calculate all possibilities 6 moves ahead. That would also mean that it missed anything that was deeper than 6 moves; this is the folly of Type A programs, this is their major weakness. Hydra on the other hand can reach 18 plies on average, despite the fact that it calculates at the same speed as Deep Blue. Why is this? It's because Hydra is Type B and thus it doesn't bother calculating ridiculous moves and only considers what it thinks are good candidate moves. The Hydra team says this, and I think they know more about computer chess programming than either of us Hydra FAQ. Dionyseus 01:50, 16 May 2006 (UTC)
They nowhere claim that they are a type B program. Do you have any other cite for your claim?WolfKeeper 02:56, 16 May 2006 (UTC)
You did not read the Hydra FAQ fully. Here, I'll post the relevant portion, and I'll put the most important portions in capital letters for you: "One example is the different search mechanism: Deep Blue and Hydra have about the same raw speed. Deep Blue searched in typical chess positions 12 moves (plies) deep, Hydra 18 plies. DEEP BLUE WAS THE LAST BRUTE-FORCE SEARCHER. Every possible combination was searched to 12 plies. HYDRA USES SOPHISTICATED PRUNING TECHNIQUES. SHE DOES NOT WASTE RESOURCES FOR IRRELEVANT VARIATIONS. Pruning away variations involves the risk to overlook some combinations, but the considerable greater search depth overcompensates this risk." If that doesn't convince you that Hydra is a Type B machine, then clearly you do not understand the differences between Type A and Type B. Dionyseus 03:15, 16 May 2006 (UTC)
I would not trust the Hydra FAQ as an unbiased source. They are clearly trying to downgrade the achievements of Deep Blue. But as is well known, while Deep Blue is more "brute force" than Hydra they *do* stuff like singular extensions, so they extend some lines more than others....Aarontay 20:15, 12 January 2007 (UTC)
According to you, it's something to do with deserts isn't it?WolfKeeper 03:30, 16 May 2006 (UTC)
I repeat, it does not say that it is a type B machine. Do you have any cites that says that it is, or that any other programs are for that matter?WolfKeeper 03:30, 16 May 2006 (UTC)
Read the Computer Chess article here on Wikipedia for an explanation of what Type B is. Here's a quick explanation: "Shannon suggested that type B programs would use a "strategic AI" approach to solve this problem by only looking at a few good moves for each position. This would enable them to look further ahead ('deeper') at the most significant lines in a reasonable time." Dionyseus 05:07, 16 May 2006 (UTC)
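For readers who want the Type A / Type B distinction in concrete terms, here is a minimal sketch (my own illustration on an abstract game tree, not code from any engine; children, evaluate and top_k are invented names for the example):

```python
# Illustrative only: an abstract game tree, not a chess engine. evaluate()
# is assumed to score a node from the viewpoint of the side to move there.

def minimax_type_a(node, depth, children, evaluate):
    """Shannon Type A: visit *every* child to a fixed depth (negamax form)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    return max(-minimax_type_a(k, depth - 1, children, evaluate)
               for k in kids)

def minimax_type_b(node, depth, children, evaluate, top_k=3):
    """Shannon Type B: a 'plausible move generator' keeps only a few
    promising children at each node, trading completeness for depth."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    # The mover prefers children that look bad for the opponent to move,
    # i.e. children with *low* static scores.
    kids = sorted(kids, key=evaluate)[:top_k]
    return max(-minimax_type_b(k, depth - 1, children, evaluate, top_k)
               for k in kids)
```

The pruning in minimax_type_b is also where the risk lives: if the static sort is wrong about which moves are "plausible", the winning line is never examined at all.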


I don't think null move pruning , extensions count as "strategic AI". Aarontay 20:15, 12 January 2007 (UTC)
Deep Blue was not a trivial brute-forcer; it used some sophisticated techniques--it does not do anywhere close to "examining every possible position for a fixed number of moves using the minimax algorithm." Both of them search millions of positions, and exclude a large number of them from further searching. It's not true, as they imply in the FAQ, that Deep Blue searched all positions equally; in fact, the Deep Blue team pioneered some innovations in quiescent search, where the program looks at some variations very deeply. The difference between the two is a matter of degree, not kind.
Deep Blue pioneered innovation in quiescent search? I read it was singular extensions. That's not quite the same as quiescence search. Aarontay 03:31, 11 January 2007 (UTC)
And you're being awfully condescending here. A little politeness might go a long way.--Prosfilaes 05:11, 16 May 2006 (UTC)
Well I'm quite familiar with Wolfkeeper for I had to endure mediation with him in the past. I did not intend to offend with the capital letters, I just wanted to make sure he would see it since it seemed as if he had somehow not read it the other two times that I provided the link. Dionyseus 07:07, 16 May 2006 (UTC)

I'd like to look at this type A / type B concept from a different angle. I wrote a brute force program which about ten years ago could manage a few million nodes per second on my Risc PC. It used a simple positional evaluation on the first ply to sort the moves. For the depth search, it scored by counting only piece values, using alpha-beta windowing with zero width (most moves came out as score zero, giving almost perfect cut-offs). By using iterative deepening and remembering the best move from earlier positions to help the sort order, I could generally search 12 ply exhaustively (max 18 ply in simple positions), with a quiescent search for captures of up to another 12 ply, though this usually ended immediately. The increase in ability as I increased the search depth was fascinating to watch. It could thrash everyone except real chess players - because the brute force didn't allow for positional information. This program makes all other Type A programs look like Type B, because they are implicitly using positional AI in their positional evaluations. To me, there is Type A, and then a long line which stretches out past Deep Blue, Hydra and the rest increasingly towards Type B, which effectively is at infinity. There is no Type B point, because a better positional evaluation will always be possible. And a better positional evaluation is effectively a mini pruned search. Stephen B Streater 08:43, 16 May 2006 (UTC)
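The alpha-beta windowing described above can be sketched roughly as follows (an illustrative negamax alpha-beta, not the actual Risc PC code; children and evaluate are invented stand-ins):

```python
# Illustrative negamax alpha-beta. evaluate() is assumed to score a node
# from the viewpoint of the side to move there.

def alphabeta(node, depth, alpha, beta, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    best = float('-inf')
    for k in kids:
        score = -alphabeta(k, depth - 1, -beta, -alpha, children, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent will never allow this line
    return best
```

Calling it with a zero-width window, e.g. alphabeta(root, depth, g - 1, g, ...), answers only the yes/no question "is the true score at least g?", which is why nearly every node produces an immediate cutoff, as the comment above describes.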

I think what has really happened is that people have redefined type-B. It sounds so much better to say that my program is an intelligent strategy type-B whereas your program is just a stupid brute-force type-A...

Having actually read Shannon's paper :) he has type-A as a fixed depth search, and type-B doing 1) "evaluate only at reasonable positions, where some quasi-stability has been established", i.e. quiescence searching, and 2) "select the variations to be explored by some process so that the machine does not waste its time in totally pointless variations."

Now, just about everyone, even those calling their programs type-A, has been doing quiescence searching for ages. It's easier, for one thing.
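For reference, Shannon's "evaluate only at reasonable positions" idea is usually implemented along these lines (a textbook-style sketch, not any particular program; captures() and the stand-pat convention are invented simplifications):

```python
# A textbook-style quiescence search sketch. captures(node) is an
# invented helper returning only the capture moves from a position;
# evaluate() scores from the viewpoint of the side to move.

def quiescence(node, alpha, beta, captures, evaluate):
    stand_pat = evaluate(node)  # score if we simply stop searching here
    if stand_pat >= beta:
        return stand_pat
    alpha = max(alpha, stand_pat)
    # Only forcing (capture) moves are tried, so the recursion keeps
    # going until the position is "quiet" and quasi-stable.
    for k in captures(node):
        score = -quiescence(k, -beta, -alpha, captures, evaluate)
        if score >= beta:
            return score
        alpha = max(alpha, score)
    return alpha
```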

Back in the 60s, 70s and 80s, people doing type B did things like have complicated decision-making - 'plausible move generators' - to choose which moves to look at in any one position. Now people are just doing null move pruning (i.e. assuming that at least one move exists that won't make things worse for the player making it) and some of them are calling it type-B.
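A minimal sketch of the null move idea just described (illustrative only; R, pass_turn and the surrounding negamax are invented stand-ins, and real engines add safeguards, e.g. against zugzwang positions where "doing nothing" is actually best):

```python
R = 2  # depth reduction for the null-move search (a common choice)

def search(node, depth, alpha, beta, children, evaluate, pass_turn):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    # Null move pruning: let the opponent move twice. If we still beat
    # beta after doing nothing, a real move should do at least as well,
    # so the whole subtree is skipped.
    if depth > R:
        null_score = -search(pass_turn(node), depth - 1 - R,
                             -beta, -beta + 1, children, evaluate, pass_turn)
        if null_score >= beta:
            return null_score
    best = float('-inf')
    for k in kids:
        score = -search(k, depth - 1, -beta, -alpha,
                        children, evaluate, pass_turn)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best
```

Note the pruning test is nothing more than an 'if a is less than b' style comparison, which is the point made below about how little "intelligence" this involves.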

Indeed, if you look at the top programs whose source is available, including an earlier version of the current computer chess champion, they're typically doing fixed depth full width search plus quiescence searching plus null move pruning and... twiddling with the evaluation function, nothing more. The search has moved since Shannon's day from minimax to alpha-beta to negascout, but they're still A's in my book.

Anyone using some of the claimed type-B programs for analysis knows there's either some pruning going on or the evaluation varies depending on depth (e.g. becomes 'material only' after a certain depth) - what's not seen in a 12 ply search is sometimes seen in a 6 ply search, 6 plies down the predicted line - but I seriously doubt that the pruning is much more than null move pruning. (And they're still searching millions of nodes, rather than searching a few thousand more intelligently.)

Does a simple 'if a is less than b' comparison count as the artificial intelligence that used to be implied by "type-B"? Probably not.

Or if it does, we are all type-B now. Lovingboth 09:38, 11 July 2006 (UTC)

Fully agree with your analysis, and I say that as a chess program author and researcher. The final paragraph about the resurgence of Type-B programs is completely incorrect. Really, the type-A/B distinction is a relic of the dinosaur age of computing, back when the hard problem was rationing out your 1000 nodes per second. Exhaustive search computer chess is the clear winner, due to the exponential increases in processor speed, memory, and disk space. --IanOsgood 23:11, 10 October 2006 (UTC)

Fully agree. But if we really *want* to talk about type A/B, I would still say most modern programs are still mainly Type A with some aspects of Type B. Also, when most people say Type B they think of more intelligent, human-like selection of moves (the plausible move generator mentioned above), which most modern programs obviously don't do, though there is pruning based on domain and non-domain heuristics (e.g. the null move heuristic).Aarontay 03:31, 11 January 2007 (UTC)

On commercial chess computers, why was info deleted?

I have a hard time understanding how Wikipedia doesn't have any mention of the first commercially available chess computer (the Chess Challenger 1 by Fidelity Electronics). That is a key piece of information in the history of dedicated chess computers! And it is not well documented around the web except for a few German sites dedicated to this subject. I had added a couple of sentences with links to my site (www.chesscomputers.org) with more information on this, including details on the patent and the history of the company. I know it was a link to *my* site but I believe the information is very relevant. Those paragraphs were deleted. Any reason why? I'm not angry but just trying to understand how the collaboration and editing process works.

Thanks

--Isousa 00:26, 16 May 2006 (UTC)

Is your site a recognised reliable source? Stephen B Streater 08:44, 16 May 2006 (UTC)


I have references to information that can be verified, such as patent number, link to companies and actual pictures of documents, addresses, video interviews, all of which can be researched by anyone. For example, references to the US Chess Hall of Fame in Miami, FL. I have links to my site from the wiki Shachcomputer.info research site, etc. Information on dedicated chess computers is extremely scarce on the web and that's why I decided to share the info I researched myself. I thought that was the spirit of Wikipedia.
I respect the community and will not re-edit the article to include the info I had there before, but again, I'm just puzzled and trying to understand why this info shouldn't be available. As of now, it is only in the possession of a few chess computer collectors around the world. I think it's sad that in the Chronology of computer chess there's no reference even to the first commercial chess computer.
Isn't there anyone here who can verify the info I had? For my own education, how can one get the information on Wikipedia "certified"?
Regards
--Isousa 23:51, 17 May 2006 (UTC)
You publish original research somewhere else first, but it looks like you've done that satisfactorily already. The reason I removed your initial reference to the first chess computer is that it was not well integrated into the flow of the article, which at this point is mainly about the technical aspects of computer chess and the performance of the state-of-the-art machines against professional opponents - the development of low-cost dedicated chess machines for a broad market is largely irrelevant to that; they use fairly unsophisticated hardware and software that is nevertheless quite sufficient to thrash most amateurs. My suggestion is that you should add a section about "commercial chess machines" in which information about the development of this type of chess machine is discussed. --Robert Merkel 00:04, 18 May 2006 (UTC)
Why not create the Dedicated Computer Chess article? I'm sure once you start it up others will help with more information. Dionyseus 15:10, 18 May 2006 (UTC)

Many articles have a history section. You could add this chess computer to a short history section, perhaps just before the future section. Stephen B Streater 18:43, 18 May 2006 (UTC)

Thanks for your comments folks. I now understand it better. I usually don't separate the dedicated, commercial chess computers from software based chess computer games but I do see how a separate section makes the whole article more organized. I'll try to put together a brief paragraph there. Please let me know what you think of it when you see it. Thanks again for educating me.
Regards,
--Isousa 23:30, 20 May 2006 (UTC)

--Ferdinanvd (fernando villegas) 03:52, 15 September 2006 (UTC)

Dedicated Units

The article about computer chess lacks a very important chapter, "chess computers" or, as they are called today, "dedicated units". This BIG omission is odd, since much of the industry, and in fact the field as such, began when small commercial computers that only played chess became commercially available. Today they are no longer at the technological edge, but they were the main incarnation of computer chess in terms of social visibility AND strength of play at the very least from 1978 to the mid-90s. A definition of a dedicated unit could be: a computer whose electronic parts, the processor and the memory unit, are dedicated just to playing chess. It is, then, a closed system where a chess program is fully embedded in the components of a system entirely designed to play chess and do nothing else. In the early days of this industry, the brand "Fidelity Electronics" was essential, as it produced some of the first and most popular dedicated units, such as the Chess Challenger 7, which sold around 700 thousand units. Other companies of the period, some already vanished and some still existing, are Saitek, Novag, Mephisto, etc. The structure of these dedicated units, although it varied over time as technology progressed, consisted essentially of the following elements: a) a processor unit where the chess program was embedded, running the basic rules of the game, the search technique to list the available moves at any time, and the rules to evaluate them; b) a memory unit to store opening moves (ROM) and sometimes also to keep the value of moves already evaluated; c) some kind of device for entering moves (in the early days this was just an alphanumeric keyboard with which the user could enter the move he wanted to play); d) an output device for the computer to show its moves.

The variations of this basic structure were many, but at the same time very restricted to those elemental principles. Most units came with a flat surface as a chessboard, but others came with just a screen to show the alphanumeric identity of the moves and/or a graphic representation of the board; these were of a reduced size, intended to be used as handheld units. Later and more mature (and expensive) dedicated units came with a sensory surface that made it possible to enter moves just by pressing the pieces on the squares, and some (still more expensive) even made it possible to play just by moving a piece, as in real life, from one square to its destination without exerting pressure on the surface of the board. At their peak, in the mid-90s, the better dedicated units were capable of playing at the level of a strong expert or even a FIDE master, but soon they were overwhelmed by programs run on personal computers with 100 or 1000 times faster processors. These can run a chess program at 100 or 1000 times higher speed, so the performance of those programs was, and is, in the main far superior to even the best dedicated unit. This is so because a very important factor in explaining differences in chess strength is the number of moves analyzed in an interval of time, since this quantitative factor makes it possible to examine deeper sequences of moves. Clearly a chess device that in a given time can only examine sequences about 6 moves deep (white plays, black plays, then white plays, black plays, etc.) cannot compete with modern chess programs that on current computers can examine very long sequences, sometimes 15-20 deep.

There is also a section of chess engine where this (sourced) information would be most welcome! Also a couple of articles such as Chessmachine. --IanOsgood 17:20, 26 September 2007 (UTC)

Chess engine rating lists

I have moved this list to Chess engines where it is more directly relevant. BlueValour 01:19, 18 October 2006 (UTC)

humans

In the external link I added today, one of the programmers of Deep Blue said that it was not specifically designed to play against Kasparov (although I have heard otherwise). I expect that most programs are not designed for a particular opponent. Bubba73 (talk), 01:33, 11 January 2007 (UTC)

I agree. I would go so far as to say that this *never* occurs except in terms of cooking the opening book, which human players do all the time against selected opponents. And even that's more to ensure the program doesn't crash and burn than anything in the opening. As for specifically tuning the program to be anti-Kasparov, programmers don't know enough about chess to even attempt such a thing. To do that you would need a special algorithm to analyze and quantify playing style, strengths and weaknesses. It would involve psychological models and all that. I do think that chess programs would be tuned to play differently against human players in general (keep the queen on the board, open lines as much as possible, etc.) than against their fellow computer programs, but that's a far cry from "specifically designed to play against Player X". Aarontay 19:59, 12 January 2007 (UTC)

I see some reasons why computer versus human matches may seem unfair, but I have no reference for this:

  1. Human players are not allowed to use any materials. Computers are allowed to use a huge database of openings, endings, etc.
  2. Humans are not allowed to analyze on another board. You could argue that computer programs are doing that. Bubba73 (talk), 01:33, 11 January 2007 (UTC)

Very old debate. I don't see the point of adding such arguments, since most of this would violate WP:NOR. Aarontay 19:59, 12 January 2007 (UTC)

If it is an old debate I would think finding cites is not a problem. So there would be no risk of NOR. Also, especially for old debates it would be interesting to summarize the pro and con positions. Sander123 14:53, 16 January 2007 (UTC)
Well you would probably end up citing nobodies on forums, but if you want to do that, go ahead. Aarontay 21:09, 26 April 2007 (UTC) :)

Inaccuracies in endgame tablebase section

"All endgames with six or fewer pieces, and some seven-piece endgames, have been analyzed completely."

I thought that the Nalimov endgame tablebases do not include 5 vs 1 kinds of endgames, so the claim "all 6 or fewer" is inaccurate? :)

"This results in the tablebase returning results such as "Forced mate in sixty-six moves" in some positions which would actually be drawn because of the fifty move rule. However, a correctly programmed engine does know about the fifty move rule, and in any case if using an endgame tablebase will choose the move that leads to the quickest win (even if it would fall foul of the fifty move rule with perfect play)."

I don't quite agree that the 50 rule move problem can be solved by a correctly programmed engine. Aarontay 13:32, 16 January 2007 (UTC)

You may be right about that 5 versus 1 tablebase, since there isn't much point in doing a king and 4 pieces versus a lone king. The second thing needs to be clarified too. Bubba73 (talk), 14:33, 16 January 2007 (UTC)
This page [1] seems quite comprehensive and doesn't list 5-1 endgames, presumably because they do not exist. The article claims that some 7 piece positions have been done. Does anybody have a cite for that? Aarontay, what do you mean when you say: "I don't quite agree that the 50 rule move problem can be solved by a correctly programmed engine."? Sander123 14:52, 16 January 2007 (UTC)
Definitely 7-piece tablebases have been created (though to my knowledge not systematically, and not in Nalimov format) by Marc Bourzutschky & Yakov etc. What I mean about the second thing is that, given the limitations of say Nalimov with regard to the 50 move rule, no (practical) engine modification can compensate for that without changing the format. That is my understanding anyway. Aarontay 16:14, 16 January 2007 (UTC)
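To make the mismatch concrete, here is a toy sketch (my own illustration, not Nalimov code): a distance-to-mate (DTM) table is keyed by piece placement alone, so the halfmove clock is invisible to it. The helper below assumes the worst case of no captures or pawn moves on the mating line; real lines may reset the clock, and that reset information is exactly what a DTM table does not store (later distance-to-zeroing formats were designed around this):

```python
def practically_winnable(dtm_moves, halfmove_clock):
    """dtm_moves: forced-mate length in full moves, as a DTM tablebase
    reports it. halfmove_clock: halfmoves since the last capture or
    pawn move. Worst-case assumption: no clock resets on the mating
    line, so mate must arrive within the fifty-move rule's remaining
    budget of 100 - halfmove_clock halfmoves."""
    return 2 * dtm_moves <= 100 - halfmove_clock
```

With the article's example, a reported "mate in sixty-six moves" fails even with a fresh clock, since roughly 132 halfmoves exceed the 100-halfmove limit, and no engine-side logic can tell from the DTM table alone whether the line contains a clock-resetting capture.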


As a quick answer to some seven-piece endgames being done, see Endgame#Longest forced win. Bubba73 (talk), 15:26, 16 January 2007 (UTC)
There are probably several others working on 7-piece endgames; those came to mind. Bubba73 (talk), 15:39, 16 January 2007 (UTC)

Lower bound on the number of moves in a winning strategy

With regard to the section on solving chess: It is of course a simple task, for sufficiently small values of n, to prove (with exhaustive analysis, if nothing better) that neither player has a strategy that forces a win in less than n moves. Do we know the largest n for which this has been done, or even a ballpark figure for it? I think it would be an interesting addition to the section. -- Jao 22:43, 11 August 2007 (UTC)

Flow of computers vs Humans section

I can't see how the sentence

(Kg6 was Kasparov's only legal move, leading to h5 checkmate)

fits in with the text around it. Should it be moved or removed? I have also completed a sentence in the endgame tablebases section. Rattle 12:47, 16 September 2007 (UTC)

I removed it. It was referring to the diagram position from 1996 Game 1, but it was also mistaken: 38. h5# is impossible because the pawn is at h3. -- Jao 14:01, 16 September 2007 (UTC)

Hardware clarifications needed

As a person who knows nothing about computer chess (and chess proper), I think that this article would benefit from some clarifications about the hardware used in matches between humans and computer programs. For example: I suppose from what is written above that in the sentence "Top commercial programs like Shredder or Fritz have surpassed even world champion caliber players at blitz and short time controls" it is implied "on a normal personal computer", but making it explicit (if true) would probably be better. Moreover, it would be interesting to know whether a computer program running on a normal personal computer has ever beaten a world champion under tournament conditions and, if this hasn't happened yet, whether any prediction about this event has been made.

Then, in the sentence "In November-December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won, the match ended 2-4" it is not clear what the hardware was on which Deep Fritz ran. —Preceding unsigned comment added by 83.184.38.97 (talk) 09:09, 30 September 2007 (UTC)

"Artificial Intelligence"?

This sentence: "...Shannon suggested that type B programs would use a "strategic AI (Artificial Intelligence)" approach to solve this problem by only looking at a few good moves for each position." ...seems to imply, with its quotes, that Shannon actually used the phrase "Artificial Intelligence", or the abbreviation "AI". I'd be interested if anyone could find that in his 1950 paper, as the term is meant to have been coined by John McCarthy in 1955... Is this perhaps from something later? Or do the quotes indicate something other than a quotation? gothick 23:31, 15 November 2007 (UTC)

Shannon's 1950 paper is online and referenced from this article. Neither term appears in that paper. Merenta 16:06, 16 November 2007 (UTC)
Fixed. --IanOsgood (talk) 20:36, 18 November 2007 (UTC)

Optimised for particular opponents - Dubious?

I thought that it was well-known that Deep Blue was optimized to beat Kasparov. Is that correct? Bubba73 (talk), 04:35, 16 January 2008 (UTC)

Don't know. What I do know is that nowadays computers are not optimised for anyone in particular. The Fritz which beat Kramnik was more or less off the shelf. The para could be rewritten to say that Deep Blue may have been optimised. But the way it is written (general comment on all computers) it is not.
Besides the comments are quite dated. It is no longer unclear whether the strongest players is a computer. The 2005 Hydra-Adams and 2006 Fritz-Kramnik matches have pretty well settled that question. Peter Ballard (talk) 04:44, 16 January 2008 (UTC)
The statement is "while computers are occasionally optimized for the current opponent", and I believe that is true of Deep Blue. It may be true of others, I don't have any info though. Bubba73 (talk), 04:55, 16 January 2008 (UTC)
In the absence of proof, I would not include it as a general statement. Perhaps "Deep Blue was allegedly optimised for Kasparov", but even then I think it was limited to updating the opening book. I've certainly never heard of Hydra or Fritz being optimised for any one opponent. Nor can I imagine how it would help. It may help a computer to have a different algorithm when playing other computers compared to playing humans, but I'd have thought it'd play all humans the same way: tactically. Peter Ballard (talk) 05:05, 16 January 2008 (UTC)
I've heard it from other sources for Deep Blue and Kasparov, but I don't have a reference at hand. A computer could take all of the openings a particular person tends to play and look for weaknesses. Bubba73 (talk), 05:09, 16 January 2008 (UTC)

I've got a reference: "Deep Blue is IBM's supercomputer developed specifically with the aim of defeating Garry Kasparov in a match." (Burgess, Nunn & Emms 2004:536)

  • Burgess, Graham; Nunn, John; Emms, John (2004), The Mammoth Book of the World's Greatest Chess Games, Carroll & Graf, ISBN 0-7867-1411-5. Bubba73 (talk), 05:16, 16 January 2008 (UTC)
I'm still not convinced. How was that different to developing it to defeat Karpov (who was FIDE World Champion in 1997)? IOW does "developed specifically with the aim of defeating Garry Kasparov in a match" really mean "developed specifically with the aim of defeating the world champion in a match"? Peter Ballard (talk) 05:33, 16 January 2008 (UTC)
As I said, it could analyze Kasparov's openings. It could spend perhaps an hour evaluating each of many positions that come up in openings that Kasparov has played in the past. It could tabulate those computations and in some cases find better moves than other players have played against Kasparov. Then it could simply play those moves from its book. I.e., it could search for theoretical novelties in openings Kasparov plays in advance. Bubba73 (talk), 05:42, 16 January 2008 (UTC)
From the IBM Deep Blue article: "Deep Blue's programmers tailored the computer program to beat Kasparov by studying in great detail prior games Kasparov had played". Bubba73 (talk), 05:47, 16 January 2008 (UTC)
I'm not convinced, but I'll concede the point and challenge what the article actually says, which is that it grants an unfair advantage. So (1) who has argued that is an unfair advantage? (2) Does anyone still say this is an unfair advantage, i.e. even if it was said about the 1997 Kasparov match, did anyone apply it to the 2005 Adams match, the 2005 mini-tournament, or the 2006 Kramnik match? Peter Ballard (talk) 05:50, 16 January 2008 (UTC)
I'm with Peter on this. I think the statement about chess computers being tailored to beat specific human opponents is just wrong. For one thing, humans will often adopt "anti-computer" strategies so that their play doesn't resemble their normal play against humans very much at all. Kasparov used the Saragossa Opening in one game, and I'm not sure how the Deep Blue team would have prepared specifically for that. This claim needs a direct cite, and not one that is a horrible OR interpretation. When Burgess et al. say that Deep Blue was built with the intention of beating Kasparov in a match, that does not necessarily imply that it was specifically tailored against Kasparov's chess playing style. It could simply have been developed to play the strongest game of chess it could, with the hope that that would be sufficient to defeat Kasparov. I have the 2004 edition of The World's Greatest Chess Games and page 536 in no way supports the bald claim made in the IBM Deep Blue article. The Deep Blue article claims "Deep Blue's programmers tailored the computer program to beat Kasparov by studying in great detail prior games Kasparov had played (Burgess, Nunn & Emms 2004:536)." In truth Burgess et al. say no such thing and say absolutely nothing about studying Kasparov's games, in great detail or otherwise. The line quoted from Burgess by Bubba73 doesn't support what we have in the Deep Blue article so I'm going to take it out. A better source for this sort of claim would be Behind Deep Blue: Building the Computer that Defeated the World Chess Champion, but I don't have that book. I'm certain that the Deep Blue team did study Kasparov's games, but the claim that chess computers are tailored for individual opponents seems unlikely to me. In fact currently humans have a far greater ability to vary their game strategies based on their opponent than computers do. Quale (talk) 07:00, 16 January 2008 (UTC)
I don't have the book about deep Blue either, but I read in several other sources that it was specifically to beat Kasparov. Bubba73 (talk), 14:47, 16 January 2008 (UTC)
I just ordered a copy of the book. Bubba73 (talk), 14:51, 16 January 2008 (UTC)
Why did Kaspy play the Saragossa Opening? Either he had to play something that he usually plays, which the computer had analyzed in depth beforehand, or he had to play something less familiar to him. Bubba73 (talk), 15:00, 16 January 2008 (UTC)

There are several other books about Deep Blue versus Kasparov, does anyone have any of these books to check:

  • Kasparov and Deep Blue: The Historic Chess Match Between Man and Machine, by Bruce Pandolfini
  • Kasparov versus Deep Blue: Computer Chess Comes of Age, by Monty Newborn
  • Man Versus Machine: Kasparov Versus Deep Blue, by David Goodman and Raymond Keene

I think the one by Newborn would be most likely to have that sort of information. (talk), 19:35, 16 January 2008 (UTC)

I've ordered the Newborn book too, for an outside perspective. Bubba73 (talk), 19:51, 16 January 2008 (UTC)
I think I have Pandolfini and/or Goodman/Keene , somewhere, but I have to check. If so, I'll chime in with anything relevant. Baccyak4H (Yak!) 20:13, 16 January 2008 (UTC)
If I'm wrong, I'll pay for the books; but if I'm right someone owes me for two books! :-) Bubba73 (talk), 03:19, 17 January 2008 (UTC)

(outdent) I have Pandolfini and also Man vs Machine, but the authors are Keene and Byron Jacobs (with Tony Buzan) (ISBN 1-900780-00-3). This book looked at the first match, Kasparov's 4-2 victory. I looked at it and found no reference to optimizing against Kasparov. But I did find this (pg 53): "Deep Blue has been specifically designed to take advantage of the differences from human opponents". Not sure this helps. I'll check Bruce's book soon. Baccyak4H (Yak!) 03:44, 17 January 2008 (UTC)

Probably doesn't help. It isn't clear to me that it treats human opponents differently from each other. In addition, an editorial review of one of the Kaspy vs. Deep Blue books says that D.B. was rigged to go after Kaspy specifically between the matches. Bubba73 (talk), 04:30, 17 January 2008 (UTC)

I apologize for this long rant, but I've got a bunch of problems with the claim specifically that Deep Blue was "optimized" to beat Kasparov, or the general claim that this is occasionally done in other computer matches.

  1. The context is almost always that of an apologist trying to make an excuse why the computer won. "Yeah, the computer won, but it was optimized." (i.e., that's so unfair). I don't think that is the intention of those discussing the issue here, but that kind of POV is repeatedly inserted into these articles.
  2. The type of preparation Bubba73 described, examining an opponent's openings and looking for flaws, is hardly unique to computers. In fact when Kramnik used the Berlin Defense to frustrate Kasparov, that is what we usually call preparation, not optimization. This is absolutely standard preparation for any top level match, and because it's completely standard, to say that the Deep Blue team did it is to say nothing. Well nothing except to imply that this was some special, unfair thing that the computer took advantage of. Why does it get described breathlessly using bizarre terminology that would never be used for a human that did the same thing (and all top human players do)? Answer: see number 1. Suppose the opening book of a chess computer were set up before a match with Kramnik to always play 1.d4 as white to avoid the Berlin Defense. Does this qualify as "optimization"? Was Topalov "optimized" to avoid 1.e4 in his match with Kramnik?
  3. If this "optimization" is so potent, why couldn't any of Kasparov's other opponents do it? If this claim were true it would mean that the Deep Blue team had special optimization methods that other players, say Karpov, couldn't apply against Kasparov. The notion is absurd. I have the greatest respect for de Firmian and Fedorowicz, but they were not better than Karpov's team in his WC matches against Kasparov. If the "optimization" is something that only computers can do and not humans, then its specific nature should be described.
  4. Any "optimization" claim should be backed by specific evidence either in construction details of the machine or its program, or strong clues found in the game scores of the match. Either of these should be easy to provide if the claim is true. Don't just say "computers are optimized", tell us precisely what the optimization is, supported by a reliable source. I'm not aware of any such evidence. If Deep Blue was "optimized" to exploit weaknesses in Kasparov's opening repertoire then it should be simple to point to games in the matches where such flaws were exposed. I can recall only one game in which a flaw in Kasparov's opening preparation was uncovered, but it happened in an opening that wasn't part of K's usual repertoire. When Kasparov lost his suicidal Caro-Kann, there was some talk that the Deep Blue team had recently strengthened its opening book for that line. If so it was practically a miracle, because I don't know how anyone could have predicted that Kasparov would play it. He had played the Caro-Kann as a junior, but I don't think it had been part of his repertoire for years, and I don't think he had ever played that line before (as black). After the fact the best explanation I saw for Kasparov's odd and ultimately fatal choice was that Fritz (in 1997) handled the line very poorly as white and that K could easily beat it. Unfortunately for him, Fritz was not the equal of Deep Blue in 1997.
  5. The claim in this article is that "computers are occasionally optimized for the current opponent". Peter pointed out that we have at least one indisputable case where the computer wasn't optimized (Fritz v. Kramnik). Against this we have only one claim that this was done, Deep Blue, and I think that claim is wrong. Even if it were right, a single instance wouldn't support "occasionally"—it would support "once". This is the insidious problem with this kind of original research. It sounds plausible to a lot of people, so it might be true, it's probably true, it must be true—except I think it isn't true. This is the kind of thing that we shouldn't do in wikipedia: try to draw an inference from a single instance that (we think) we know ("Deep Blue was optimized") to make a broader general conclusion ("chess computers are occasionally optimized"). If the claim is true then it shouldn't be too hard to find a reference or two that simply states the claim directly without requiring inductive reasoning on the part of the wikipedia editor to make a broader claim. A lot has been written about computer chess. Quale (talk) 05:22, 17 January 2008 (UTC)
I have to agree with Quale. I believe computers are "optimised" to play differently against humans as compared to fellow computers, since it's pretty obvious what should be done (open lines, tactical positions). "Opening optimisation", oh sure... humans do it, computers (with the help of their opening book makers) do it, but that's not a big deal. But it's a huge claim to say that Deep Blue was optimised specifically against Kasparov beyond that. First off, Kasparov is/was one of the strongest players in history; oh sure, he had a certain style, certain preferences, but no real weaknesses, at least no weaknesses that any normal mortal could spot. An Anand or Kramnik might have a feel for Kasparov's weaknesses (but I wouldn't bet on it), but even they couldn't exploit them with any certain degree of reliability. So if DB was optimised it had to use some automated learning/weighting method... But how would that work? Run the evaluation weighting against every one of Kasparov's losses? lol. Seriously, if a method that allowed a computer to tune against specific human opponents had been found, it would be known/announced by now, particularly after DB and the team stopped... (Aarontay (talk) 14:28, 30 January 2008 (UTC)
I never said anything about it being fair or unfair. As Quale says, the only case I know of where I believe it was done was Deep Blue versus Kasparov, and I've ordered two books to look for references. It is common for people to prepare for specific opponents, but what a computer can do in that respect is several orders of magnitude beyond what a human can do. I didn't put that statement in either of the articles, I just provided a reference that it had been done at least once. Bubba73 (talk), 17:21, 17 January 2008 (UTC)
I know for a fact that Deep Blue was tuned for Kasparov. In fact not only was it tuned for him, they retuned it overnight, depending on how well it had played during the day. It totally did Kasparov's head in because it played completely differently from day to day, and he accused them of cheating.- (User) WolfKeeper (Talk) 17:38, 17 January 2008 (UTC)
Can you provide a reference to a book or magazine? Bubba73 (talk), 17:49, 17 January 2008 (UTC)
Complete bollocks. It did not play completely differently from day to day. Point out the evidence in the game scores. It's true that Kasparov accused the Deep Blue team of cheating, but Kasparov's understanding of computer chess was (at least at that time) as weak as his overall understanding of chess was strong. Computer chess programmers are generally very hesitant to make major changes in the middle of a contest except to fix problems in the opening book or major flaws, as without sufficient testing, a change is just as likely to make the program weaker as it is stronger. Quale (talk) 19:32, 17 January 2008 (UTC)
I'd like to point out that many tournaments involving computers allow program modification between rounds. The intent is not to prepare for specific opponents, but to fix bugs that become evident during games. If modifications were made to Deep Blue, it was likely for bug fixing rather than tuning. (You give the DB creators a lot of credit to even be able to tune their program for a particular opponent! Honestly, such a feat of opponent modeling would be worthy of several research papers in its own right!) --IanOsgood (talk) 18:47, 17 January 2008 (UTC)
They did though, I don't know the details. However, I would imagine fixing bugs changes the evaluation function, and that impacts the optimum weights used, and some of the weights are empirically derived from games played... so if I were them I would change things, and then run a subset of the test game set to retune the weights. I also suspect that you underestimate these guys; it was only 10 years ago, they were world class experts at the time, and techniques for tuning chess programs were known to me over a decade before that, and I'm not even any kind of expert. Using Kasparov's prior games as a dataset is not rocket science, and is likely to improve play against Kasparov... They also would have trained it on that day's games as well.- (User) WolfKeeper (Talk) 20:40, 17 January 2008 (UTC)
What techniques of tuning chess programs are you referring to? As far as I know, no automated tuning method is as successful as hand tuning, and even then the tuning methods that *are* used are aimed at achieving a certain level of general ability, not at specific tuning against any particular opponent! Can you give me an example of an algorithm used to tune a chess engine against a specific opponent? Aarontay (talk) 14:28, 30 January 2008 (UTC)
Oddly enough, humans do the same thing between rounds too. Fischer didn't fare too well on either side of the Sicilian Defense in his first match against Spassky, so he stopped playing it. I think you're right in saying that this super-optimization of computer programs to combat specific human players seems beyond our capabilities at present, and almost certainly was in 1997. Quale (talk) 19:40, 17 January 2008 (UTC)
No, no. Some of the positional evaluation function's feature weightings are very often empirical, and can be made automatically trainable.- (User) WolfKeeper (Talk) 20:40, 17 January 2008 (UTC)
From IBM Deep Blue: Deep Blue's evaluation function was initially written in a generalized form, with many to-be-determined parameters (e.g. how important is a safe king position compared to a space advantage in the center, etc.). The optimal values for these parameters were then determined by the system itself, by analyzing thousands of master games. ... The rules provided for the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play revealed during the course of the match. This allowed the computer to avoid a trap in the final game that it had fallen for twice before.- (User) WolfKeeper (Talk) 20:54, 17 January 2008 (UTC)
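For what it's worth, the automatic parameter fitting described in that quote can be sketched in a few lines. This is a toy illustration only, not Deep Blue's actual method; the feature set, the sample data, and the least-squares approach are all invented for the example:

```python
# Toy sketch of fitting evaluation weights from game outcomes.
# NOT Deep Blue's actual method: features, data, and the least-squares
# fit are all invented for illustration.
import numpy as np

# Each row: feature values for one position
# (material balance, king safety, central space) - all hypothetical.
features = np.array([
    [ 1.0,  0.2,  0.5],
    [ 0.0,  0.8,  0.1],
    [-1.0, -0.3,  0.4],
    [ 2.0,  0.1, -0.2],
    [ 0.0, -0.6,  0.9],
])
# Eventual game result from White's side: +1 win, 0 draw, -1 loss.
results = np.array([1.0, 0.0, -1.0, 1.0, 0.0])

# Least-squares fit: pick weights so that features . w best predicts results.
weights, *_ = np.linalg.lstsq(features, results, rcond=None)

def evaluate(position_features):
    """Linear evaluation: weighted sum of the position's features."""
    return float(np.dot(weights, position_features))
```

Whether anything like this was ever run against one specific opponent's games, as opposed to "thousands of master games" generally, is exactly the point in dispute above.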
None of this says anything about tuning against Kasparov... Yes, DB, like many programs today, could be tuned in thousands of ways. The problem is no-one knows how to do it properly. Do you really think all it takes to beat Kasparov is to run the weights against all his games?? If only tuning against opponents were so easy... Aarontay (talk) 14:28, 30 January 2008 (UTC)
Yes, and that almost certainly means they simply changed Deep Blue's opening book. That has absolutely nothing to do with optimization for a particular opponent, as the computer would have played the same sequence of moves against anyone. The sorts of modifications to evaluation parameters that you mention are extremely common and frequent while the program is being developed, and fairly rare during the middle of a tournament or match. The risk of making the program much weaker rather than stronger by tweaking these parameters when there is not adequate time for testing is too great. Changes to the opening book are easy and relatively low risk. Automatic training can be used to get the evaluation parameters started in development of the program, but to my knowledge has never been used during a computer vs. human match. (Why would it? The programmers can tweak the knobs themselves between games, no need to automate a tricky process.) But again, this has nothing to do with a specific opponent. Quale (talk) 19:14, 27 January 2008 (UTC)
[citation needed]- (User) WolfKeeper (Talk) 05:17, 28 January 2008 (UTC)
Point of order: this discussion is getting really long. Please archive or move to the IBM Deep Blue article. --IanOsgood (talk) 18:47, 17 January 2008 (UTC)
It's an active discussion about this article, so it should not be archived nor moved to a different page. --Prosfilaes (talk) 23:22, 17 January 2008 (UTC)
However I would like it to remain on topic, which was to discuss this sentence which I marked as dubious: "Seen as unfair is that human players must win their title in tournaments which pit them against a diverse set of opponents' styles, while computers are occasionally optimized for the current opponent". Can we at the very least agree that this should be removed because (a) "seen as unfair" uses weasel words - just who is claiming this? and (b) while it may have been true of Deep Blue, I assert that this claim of tailoring to an opponent has not been made of any chess computer since 1997. Peter Ballard (talk) 23:28, 17 January 2008 (UTC)
I think the "seen as unfair" part is too opinionated unless we have a good source saying it.
One reason why it might be "seen as unfair by some" is that the computers are allowed to violate FIDE's rule against using notes during the game (article 12.2a). Bubba73 (talk), 01:59, 18 January 2008 (UTC)
That seems irrelevant to this point. Furthermore, humans are allowed to use as many notes as they want, as long as they're stored mentally. Are computers really allowed to violate that rule? And does it matter unless we can come up with cites on the matter?--Prosfilaes (talk) 02:09, 18 January 2008 (UTC)

Humans are not allowed to use any external notes or databases during the game, according to the rules. I have the two books now. The Newborn book doesn't cover the last match. The Hsu book says that they had Deep Blue going through Kasparov's openings, looking for novelties. It found only one - a pawn sacrifice that the grandmasters working on the project told Deep Blue not to use. But it is apparent that Deep Blue was specifically pre-computing evaluations or responses to Kasparov's openings. Bubba73 (talk), 18:34, 27 January 2008 (UTC)

So? The computer opening book isn't external notes, it's internal to the computer. In any case, computers have played without opening books before, and they can be successful that way. The computer's opening book is different from a human's only in its size and in the nature of the storage used. Computer memory is better than human memory in some ways, but worse in others (humans are still superior at recognizing patterns, at least for now). Anyone who plays a computer accepts this as a precondition, and no one is forced to play a computer in either a match or a tournament. Kasparov agreed to these conditions. He didn't find them at all objectionable in the first match when he won, but as soon as he lost, he had a lot of complaints. Sour grapes. Even if Deep Blue examined Kasparov's opening repertoire (and this is almost certain), how did it help it in the match? In which games did Kasparov play openings in his standard repertoire? And how is Deep Blue's preparation different from the way that Kramnik would prepare for a match with Kasparov? Quale (talk) 19:14, 27 January 2008 (UTC)
It is different because the computer isn't thinking about what move to make during the game - it is just looking it up in the database. Even if a human has studied the positions, he is still thinking about whether or not he wants to play the book move. And yes, it would make the same move against anyone in the position, but it based that on Kasparov's games specifically. I haven't read anything about it going through all grandmaster games or all of ECO, for example. In some sense, it was going specifically after Kasparov. Hsu states in his book that he was going specifically after Kasparov, after Kasparov won the first match. But how much impact did it have? Since it was instructed to not use the only novelty it pre-computed, probably little in that sense. However, it seems that Kasparov knew that it had analyzed his openings and sometimes played openings that he wouldn't normally play.
It differs from the way a human prepares for a specific opponent by being able to do many orders of magnitude more analysis beforehand. Then it doesn't have to think during the game - just blindly playing the move it has in its database.
However, this is so controversial that the best thing to do is probably take it out of the article. I'm not against taking it out. Bubba73 (talk), 21:13, 27 January 2008 (UTC)
I'm not sure it does orders of magnitude more analysis; it spends roughly the same amount of time a human does in studying the problem, and if it's a roughly equal player, that should be functionally the same amount of analysis. It thinks as much as a human does playing a pre-studied book; very little. Both a computer and a human have equal opportunity to second-guess the book in play; even if a computer is less likely to, that's a choice on both their part.--Prosfilaes (talk) 21:32, 27 January 2008 (UTC)
Deep Blue could examine 200 million positions per second, or something like that. Also, it had access to as many of Kasparov's games as they wanted, whereas Kasparov didn't have access to any of Deep Blue's games. As far as second-guessing the book, Deep Blue was specifically kept from playing the pawn sacrifice that it thought was the best move (if the position came up). Bubba73 (talk), 23:09, 27 January 2008 (UTC)

10 days on and still no one has produced evidence of any WP:RS claiming the computers have an unfair advantage. So I've deleted the contentious sentences. As for whether Deep Blue was tuned, that is better discussed at the Deep Blue Talk page. Peter Ballard (talk) 11:40, 28 January 2008 (UTC)

For Peter Ballard: I am surprised you even ask for this. One clear (unfair) advantage is the opening book. No human has or ever had such an extensive repertoire of openings (yeah, not even Alekhine or Fischer). I mean I've seen Fritz play book moves against other engines without a sweat as far as move 20. That's crazy... it effectively gives the computer the advantage of perfect play well into the middlegame, without even wasting time or wrecking nerves. It's almost like solving chess. Engines should be endowed with in-depth knowledge of several openings, while maintaining only general knowledge of all others, just like humans.
Another advantage is the endgame databases/tablebases. While some argue that without them computers often struggle to solve endgame problems that even candidate master level players have no trouble tackling, current tablebases go way beyond that. It is again an unfair advantage because few people (if any) memorize or are able to play perfectly all endings with up to seven pieces. 192.55.8.36 (talk) 10:28, 18 February 2008 (UTC)

Playing strength versus computer speed

The notion that doubling the computer speed merely results in a rating increase of 50-70 points needs to be challenged. To me computer speed is a vague and loosely used term, and perhaps the author would like to clarify what he/she meant. If computer speed ~ CPU frequency, then the statement might be inaccurate. Some time ago I was running Fritz 7 on two computers, one working at 1.5 GHz, the other at 3.2 GHz (single core), both having 512 MB RAM (don't remember other specs such as cache and FSB though). After 10 games with time setting 20/5 the score was 7-3 (6 draws), which if I'm not in error corresponds to a rating difference of about 150 points. That's quite a lot more than the article reads. Did anybody else try the same, perhaps using different software? 81.96.124.212 (talk) 17:41, 20 February 2008 (UTC)
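The ~150 point estimate does follow from the standard Elo formula; a quick check (assuming the usual logistic expected-score model):

```python
import math

def elo_difference(score, games):
    """Invert the Elo expected-score formula s = 1/(1+10^(-d/400)):
    d = -400*log10(1/s - 1), where s is the fraction of points scored."""
    s = score / games
    return -400 * math.log10(1 / s - 1)

# 7 points out of 10 games (4 wins, 6 draws):
print(round(elo_difference(7, 10)))  # 147, i.e. roughly 150 Elo points
```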

Ten games is not statistically significant. The estimate of 50-70 points per doubling is measured on computer rating lists across dozens of different engines playing tens of thousands of games. That said, I'm not sure how meaningful that particular number is. A 50 point rating spread at master level (2200) means something very different than the same spread at GM level (2600) due to several factors, such as reduced k-factors in some rating systems and fewer data-points for the ratings at the high end of a rating pool. It is by no means clear that "50 points per doubling" will carry over to any software on any system. --IanOsgood (talk) 18:21, 20 February 2008 (UTC)
I've read three figures recently: 50-70, 80, and 100. But one book also showed that it dropped off as computers/programs got better. Bubba73 (talk), 20:36, 20 February 2008 (UTC)
Cite?- (User) WolfKeeper (Talk) 21:06, 20 February 2008 (UTC)
I think it was in the books by Hsu, Newborn, and Levy & Newborn. Bubba73 (talk), 21:19, 20 February 2008 (UTC)
"50-70" is cited in Ply (game theory) - I think it used to be in this article too, but someone took it out a week or two ago. Bubba73 (talk), 21:22, 20 February 2008 (UTC)
Scratch that - that was for each additional ply, not a factor of 2 in speed. Bubba73 (talk), 21:26, 20 February 2008 (UTC)
I dimly remember that a factor of 4 or something is equivalent to a ply, after allowing for alpha-beta pruning and suchlike.- (User) WolfKeeper (Talk) 22:53, 20 February 2008 (UTC)
Newborn, Kasparov versus Deep Blue, page 122, has a chart showing the effect of doubling the number of positions searched (essentially the same as doubling the time): 30 years ago, doubling resulted in approximately a 100 point gain, but that tapered off, and by ten years ago doubling achieved only what looks like a 10-15 point gain (estimating from the chart). Bubba73 (talk), 00:48, 21 February 2008 (UTC)
That's a lot, it's 100 points every 10 years, given Moore's law, without any other advances at all.- (User) WolfKeeper (Talk) 02:04, 21 February 2008 (UTC)
Well, 30 years ago a doubling increased the strength roughly 100 points. Ten years ago a doubling increased it only 10-15 points, and the curve is flattening out. It is approaching a point where increased speed isn't going to increase the strength. Also, speed will double about three times in ten years, and that is only about a 30 point increase, not 100. Bubba73 (talk), 02:17, 21 February 2008 (UTC)
And then how do you explain the above mentioned results? Fritz 8 wasn't released 10 years ago, and computers working at 1.5 GHz are still around. Yet the difference was measurable, to me at least. Also "statistically irrelevant" is an irrelevant statement (take it from someone who had to pass an exam in statistical physics), unless you have a very well defined error margin, and a good reason for that margin, too... Otherwise you can let two engines play 100 games, or 1000 games for that matter, without precisely knowing why you're doing that. Just when you think you've nailed it, someone else will come along claiming you need at least 10000 games :D. I have to challenge the statement "measured on computer rating lists across dozens of different engines playing tens of thousands of games." Would you be good enough to produce such evidence corroborated with the 50-70 points increase? 81.96.124.212 (talk) 21:45, 21 February 2008 (UTC)
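One way to put the requested error margin on a ten-game sample: treat each game's score as a data point, form a normal-approximation 95% interval on the mean score, and convert the interval endpoints to Elo. This is only a rough sketch, but it shows how wide the interval really is:

```python
import math

# Observed per-game scores for the faster machine: 4 wins, 6 draws.
scores = [1.0] * 4 + [0.5] * 6
n = len(scores)
mean = sum(scores) / n                          # 0.7 points per game
var = sum((x - mean) ** 2 for x in scores) / n  # 0.06
stderr = math.sqrt(var / n)                     # ~0.077

def elo(s):
    """Elo difference implied by a score fraction s."""
    return -400 * math.log10(1 / s - 1)

# Approximate 95% interval on the score, converted to Elo:
lo, hi = mean - 1.96 * stderr, mean + 1.96 * stderr
print(round(elo(lo)), round(elo(hi)))  # roughly 34 to 304 Elo points
```

On this crude estimate the ten games are consistent with anything from about 30 to about 300 points, which is why a handful of games cannot distinguish a 50-70 point effect from a 150 point one.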
Ten games really aren't enough for such a comparison. Also, the clock rate (frequency) of a CPU alone doesn't provide sufficient info about the "chess speed" of each; the CPU type matters a lot. Depending on the CPU architecture, some are more and some less effective in terms of a chess engine's speed, relative to the GHz. If you compared a 1.5 GHz P3 with a 3.2 GHz P4, then the P4 is less than twice as fast with chess engines. To find the speed factor, the node rate of the same engine(s) on each computer can be compared. The comparison should involve several different engines and positions.
As for the doubling gain itself, I think it cannot be determined exactly, and also it is changing as new engines come up... I recall that in the 1980s, a chess computer would typically need about five times more time for one extra ply of calculation depth. Nowadays, good engines need only twice the time. On the other hand, 6 vs. 5 plies mean a bigger Elo difference than 18 vs. 17 plies. (I am sorry that I have no comprehensive source at hand at the moment, for all this.)
Anyway, if you look for example at CCRL you'll often find several ratings for the same engine, but from 1, 2 and 4 cpu cores. The usual estimation for the speed gain from doubling the cpu cores used, is 1.7. From that, you could calculate the expected typical Elo gain for a factor of 2.0. --80.121.15.73 (talk) 12:09, 6 January 2009 (UTC)
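The conversion suggested at the end of that comment, assuming the Elo gain is proportional to the logarithm of the speed factor, would look like this (the 40-Elo figure is purely hypothetical, just to show the arithmetic):

```python
import math

def gain_for_factor(measured_gain, measured_factor, target_factor):
    """Rescale a measured Elo gain to a different speed factor, assuming
    the gain is proportional to log(speed factor)."""
    return measured_gain * math.log(target_factor) / math.log(measured_factor)

# Hypothetical: if going from 1 to 2 cores (effective speedup ~1.7x)
# were measured as +40 Elo, a true 2.0x speedup would be worth about:
print(round(gain_for_factor(40, 1.7, 2.0)))  # 52
```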
Let's be more precise about "statistical significance" and all that. Consider the hypotheses H1,H2,H3 where Hk := "the rating difference between your two machines was 50*k". How much evidence does your observation (that the better machine drew 6 games and won 4) provide to distinguish between those hypotheses? So, the key definition says that with a rating difference of d, the expectation of the stronger player's score (lose=0, draw=0.5, win=1) in a single game is 1/(1+10^(-d/400)) = 1/(1+q), say. So the difference in scores, stronger minus weaker, has expectation 1/(1+q) - 1/(1+1/q) = (1-q)/(1+q). Let's adopt the simplified assumption that in every game the better player either wins or draws; then P(win) = expected score difference = (1-q)/(1+q). Call this p. We can now calculate P(win 4, draw 6) for any given rating difference -- it's (10 choose 4)p^4(1-p)^6, and then Bayes' theorem will tell us how to update our beliefs about the hypotheses. If I've done my calculations right, the probabilities are roughly 0.035, 0.18, 0.25 respectively, which means e.g. that the match result should make you multiply your estimate of the odds of H3 rather than H1 being true by 7:1. (So, e.g., if you used to think 150 points was less likely than 50 points by a factor of 7, you should now think them equally likely.) That's enough evidence to take note of, though clearly not enough to outweigh a big pile of contrary evidence (which I think there is in favour of H1 over H3). There are at least four caveats. (1) I bet playing the same program against itself on different hardware exaggerates the effect of the hardware difference, compared with how each would perform against other players. (2) That's quite a fast time control (assuming it's 5 minutes, not 5 hours). I bet the effect of faster hardware is greater at shorter time controls.
(3) As someone else mentioned above, if the CPUs aren't the same apart from clock speed, the actual speed difference might be very different from the 3.2/1.5 one would naively expect. (4) There might be mistakes in my calculations :-). Gareth McCaughan (talk) 13:08, 7 January 2009 (UTC)
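The Bayesian calculation above is easy to reproduce; this short script checks the three likelihoods under the same simplifying assumption that the stronger side never loses:

```python
import math

def likelihood(d, wins=4, draws=6):
    """P(exactly this win/draw count | rating difference d), under the
    simplifying assumption that the stronger side never loses."""
    q = 10 ** (-d / 400)
    p = (1 - q) / (1 + q)  # per-game win probability for the stronger side
    n = wins + draws
    return math.comb(n, wins) * p ** wins * (1 - p) ** draws

for d in (50, 100, 150):
    print(d, round(likelihood(d), 3))  # 0.035, 0.18, 0.251
```

The ratio likelihood(150)/likelihood(50) comes out near 7, matching the 7:1 odds update described in the comment.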