Talk:AlphaGo


Wiki Education Foundation-supported course assignment[edit]

This article was the subject of a Wiki Education Foundation-supported course assignment, between 28 May 2019 and 2 July 2019. Further details are available on the course page. Peer reviewers: Lolhah123456789.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:09, 17 January 2022 (UTC)[reply]

Start a page?[edit]

Can we start a page for AlphaGo? — Preceding unsigned comment added by Synesthetic (talkcontribs) 21:21, 28 January 2016 (UTC)[reply]

Yes, we can! Go! AlphaGo! --Neo-Jay (talk) 22:09, 28 January 2016 (UTC)[reply]
 Done --Neo-Jay (talk) 00:27, 29 January 2016 (UTC)[reply]

AlphaGo logo[edit]

The AlphaGo logo is looking less than wonderful at the moment on this page, as most of it is white text on a transparent background, and thus appears on the white page as white-on-white. -- The Anome (talk) 01:25, 29 January 2016 (UTC)[reply]

Processor power[edit]

Would it be relevant to state what processor power AlphaGo was using when it played against Fan Hui, and what processor power it used during training? This information is in the Nature paper. Maproom (talk) 09:09, 29 January 2016 (UTC)[reply]

Yes, I think so. Please feel free to add that information. --Neo-Jay (talk) 11:30, 29 January 2016 (UTC)[reply]
There's a policy problem here. When I wrote the above on 29 January 2016, the information was available at https://storage.googleapis.com/deepmind-data/assets/papers/deepmind-mastering-go.pdf , which I took to be a PDF of the Nature article. It stated the processor power used for training and for playing. The information is not in fact in the Nature article, and that PDF has been removed from the web. Can I still use the numbers I copied from the PDF a few days ago? Maproom (talk) 16:57, 1 February 2016 (UTC)[reply]
You can use those numbers if they can be verified by others. According to Wikipedia:Identifying reliable sources, Wikipedia articles should be based on reliable and published sources.--Neo-Jay (talk) 22:07, 1 February 2016 (UTC)[reply]
It seems that the document, about a Google project, was uploaded to a Google-owned domain, somehow became known to ordinary members of the public, and was then taken down again. I judge that it was reliable, but probably not "published". Maproom (talk) 23:55, 1 February 2016 (UTC)[reply]
It's still available at https://web.archive.org/web/20160127234038/https://storage.googleapis.com/deepmind-data/assets/papers/deepmind-mastering-go.pdf
I don't think it matters if it's been published intentionally or unintentionally.
"To efficiently combine MCTS with deep neural networks, AlphaGo uses an asynchronous multi-threaded search that executes simulations on CPUs, and computes policy and value networks in parallel on GPUs. The final version of AlphaGo used 40 search threads, 48 CPUs, and 8 GPUs. We also implemented a distributed version of AlphaGo that exploited multiple machines, 40 search threads, 1202 CPUs and 176 GPUs" The distributed version is said to have an Elo rating of 3140, versus 2890 for the non-distributed one. By skimming over the document I can't see any information on what CPUs or GPUs were used but it still gives a good impression of the scale of machine this thing is running on and there aren't very many options at the time to fit 48 cores into one machine anyway. --Mudd1 (talk) 13:18, 9 March 2016 (UTC)[reply]
From what I read, this thread is about adding processor-power information about the matches against Fan Hui described in the paper. Strange that it ended up in the section about the matches against Lee. I admit that the Economist report meets all the requirements, but with BusinessInsider stating that 1204 CPUs and 176 GPUs were used (and, unlike The Economist, which cited nothing, claiming the numbers came from the Nature paper, which discredits them), I would say that only "distributed version" is roughly the consensus among the press reports. I speculate that Google did not disclose this figure, and we probably want to present other opinions. --Lyuflamb (talk) 16:08, 12 March 2016 (UTC)[reply]
Is there a more reliable source than The Economist citing some unknown guy? The Nature paper says they tried both 1202 CPUs / 176 GPUs and 1920 CPUs / 280 GPUs and decided to stick with the former, because the Elo difference was too small. They used the 1202-CPU version to play Fan Hui, and Demis said on Twitter that the version that played Lee Sedol was roughly the same: https://twitter.com/demishassabis/status/708488229750591488. As it currently stands, the quote is misleading, because many people take it as an increase in compute power for the Lee Sedol match. — Preceding unsigned comment added by Eltoder (talkcontribs) 23:31, 12 March 2016 (UTC)[reply]

The current section Hardware is rather misleading, giving an Elo rating as a function of processor power. But these numbers were only true at a certain stage of the training, maybe last October, and the strength of AlphaGo depends much more on the amount of training than on the amount of hardware (according to Demis). — Preceding unsigned comment added by 2001:980:3FF4:1:3453:F38A:3E0B:F6B3 (talk) 23:58, 12 March 2016 (UTC)[reply]

The listed Elo ratings are quite wrong. AlphaGo has an Elo rating of 3586, compared to the listed 3140 and 3168. Ray Tomes (talk) 08:54, 16 March 2016 (UTC)[reply]

Well, those are two-seconds-per-move ratings. Also, you can't really determine a rating to single-point precision from 10 published games, as that website does. -Koppapa (talk) 12:39, 18 March 2016 (UTC)[reply]
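To illustrate the precision point: with the standard Elo logistic model, the statistical noise in a 10-game sample alone spans well over a hundred rating points. A back-of-the-envelope sketch with made-up numbers (this is not how the cited ratings were computed):

```python
import math

def expected_score(r, r_opp):
    # Standard Elo logistic model: expected score vs. a rated opponent.
    return 1.0 / (1.0 + 10.0 ** ((r_opp - r) / 400.0))

games = 10
score = 8 / games   # hypothetical: 8 wins in 10 games
r_opp = 2900        # hypothetical opponent rating

# Invert the model for a point estimate of the player's rating.
estimate = r_opp + 400 * math.log10(score / (1 - score))
assert abs(expected_score(estimate, r_opp) - score) < 1e-9

# Binomial standard error of the score after only 10 games, converted
# to Elo points via the slope of the logistic curve at this score.
se_score = math.sqrt(score * (1 - score) / games)
slope = math.log(10) / 400 * score * (1 - score)

print(f"point estimate: {estimate:.0f} Elo")           # ~3141
print(f"uncertainty: +/- {se_score / slope:.0f} Elo")  # ~137 points
```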

Wording of "first computer Go program to beat a professional"[edit]

This is not entirely correct; it is the first computer Go program to beat a professional Go player in an even match on a full-sized board. Arguably this is what really counts anyhow, but the pedant in me at least thinks of all the previous wins on small boards or even in handicap matches. Winning against humans with large handicaps is probably not interesting, though at least the 4-stone handicap win against the legendary Takemiya Masaki 9p by the program Zen on a full-sized board in March 2012 certainly is. Or the win against Yoshio Ishida 9p by Crazy Stone, also on a 19×19 board with a 4-stone handicap.

Computers have, however, won fair matches against pros on small boards; that should count, though I'm having trouble confirming exact matches and dates by googling. I think MoGo's wins in 2009/2010 are the earliest, but I'm not quite sure.

93.142.169.248 (talk) 18:25, 29 January 2016 (UTC)[reply]

You are right. I have added your point to the article. Thank you so much. --Neo-Jay (talk) 00:55, 30 January 2016 (UTC)[reply]

Slightly confusing[edit]

"In October 2015, it became the the first computer Go program to beat a professional human Go player ..."

"After IBM's computer Deep Blue beat world chess champion Garry Kasparov in 1997, it took almost another 20 years for programs using artificial intelligence techniques to be capable of achieving parity with amateur human Go players"

"almost another 20 years" from 1997 takes us to now. So, according to the second sentence, right now computers can achieve parity with amateur players. Yet in the first sentence it says that AlphaGo beat a professional player. It should be clear how these two things may seem contradictory. — Preceding unsigned comment added by 109.145.19.117 (talk) 03:57, 30 January 2016 (UTC)[reply]

"Almost another 20 years" may be interpreted as less than 20 years, thus the years before AlphaGo. And since AlphaGo is the first computer Go program to beat a professional human Go player in an even match on a full-sized board, we may say that other computer Go programs before AlphaGo were only "capable of achieving parity with amateur human Go players". This is providing background information about AlphaGo's breakthrough. I don't see contradiction in there.--Neo-Jay (talk) 07:01, 30 January 2016 (UTC)[reply]
Now is already at the extreme end of interpretations of "almost 20 years" from 1997. It can hardly mean the years before now. 109.145.19.117 (talk) 14:36, 30 January 2016 (UTC)[reply]
Deep Blue defeated Kasparov in May 1997, 18 years before AlphaGo defeated Fan Hui in October 2015. I think that 18 years can be described as "almost another 20 years". If you insist that it be changed to "18 years", that's fine with me. It's just a rhetorical issue. --Neo-Jay (talk) 23:33, 30 January 2016 (UTC)[reply]
OK, I changed "almost another 20 years" to "another 18 years". Hope that there is no confusion any more. --Neo-Jay (talk) 00:13, 31 January 2016 (UTC)[reply]
But being able to beat a professional Go player is - at least in my book - way above "achieving parity with amateur players" (emphasis mine). And having beaten 9-dan players (the "9p" needs an explanation as well; I would have done it myself but didn't want to conflict with the later explanation of the ranking), even with a handicap, in the previous years is nothing an amateur could do. So the phrasing is in my eyes still misleading because of the "amateur", which is a very wide range and depends on your definition anyway. --Ulkomaalainen (talk) 02:12, 31 January 2016 (UTC)[reply]
It is unquestionable that a strong amateur Go player can beat a (maybe weak) professional player with a 4-stone handicap (considering that Masaki Takemiya and Yoshio Ishida, although 9p, were old and not as strong as before when they were beaten by computer programs in 2012 and 2013). Chinese amateur players beat 9p players like Nie Weiping, Chen Zude, Ma Xiaochun, and Liu Xiaoguang with just a two-stone handicap as early as 1988. There is no handicap in matches between professional Go players. Beating a 9p Go player with a 4-stone handicap can only, and correctly, be described as being at, no matter how strong, amateur level.--Neo-Jay (talk) 03:23, 31 January 2016 (UTC)[reply]
Well, the problem is still that "amateur" is a wide term, which can include the strongest players who are not playing professionally as well as the regular guy who plays at a club and almost always gets beaten. And especially when juxtaposing it with "in chess the world champion was already beaten when no program could hurt an amateur player in Go", the phrasing "achieving parity" does not make one think of "the strongest players who will occasionally beat pros themselves". So at least the fact that we're talking about strong players and not some 15-kyu player should IMO be included. But then the development of the strength of Go programs within the lower echelons would make for a good addition. --Ulkomaalainen (talk) 21:09, 2 February 2016 (UTC)[reply]
Adding only "strong" to qualify "amateur" is not an improvement. "Amateur Go player" at least has a clear definition, while "strong amateur Go player" is too ambiguous. But if you find any reliable source that can narrow the term "amateur" to indicate how strong the amateur level is (e.g., which amateur dan), please feel free to add it. Otherwise the current content is acceptable. The article has not said "in chess the world champion was already beaten when no program could hurt an amateur player in Go". It only says that computer programs are "capable of achieving parity with amateur human Go players". The phrase "parity with" is not "weaker than", and indicates that programs could hurt an amateur player in Go from time to time. By the way, defeating old and weak professional Go players like Masaki Takemiya and Yoshio Ishida with as many as 4 handicap stones is far from being "the strongest [amateur] players who will occasionally beat pros themselves". --Neo-Jay (talk) 00:24, 3 February 2016 (UTC)[reply]
OK, per Google's article in Nature, Zen and Crazy Stone were at about amateur 5 dan (out of 9 dan possible) level. I have added this information to the article. Thanks for your discussion. --Neo-Jay (talk) 01:48, 3 February 2016 (UTC)[reply]
Thanks for the find and the addition. I think we are (or were) talking circles around each other. While the term "amateur" has a clear definition in Go, the playing strength of an amateur is not defined and can vary wildly, so "achieving parity" is still an ambiguous wording. Parity with strong amateurs means the programs can easily beat weaker ones; parity with weak amateurs means they are still a pushover for the better ones. "Parity" in my understanding does not mean "not weaker than" but "about as strong as". And that is confusing, since there is no "amateur strength" as such with which to compare. --Ulkomaalainen (talk) 17:58, 4 February 2016 (UTC)[reply]
@Ulkomaalainen: The fact is that computer programs' strength can also vary widely, as widely as amateur players' strength. There are strong programs, and there are also very stupid and weak ones. And programs can even allow us to decrease their strength level to meet weaker human players' needs, so I believe 15-kyu-level programs do exist. All the computer programs before AlphaGo were at amateur level, with the strongest programs like Zen and Crazy Stone reaching about amateur 5 dan level. This claim rightly clarifies the range of computer programs' strength, and is well sourced. I don't understand why you still feel confused, or how you plan to change it. --Neo-Jay (talk) 02:40, 5 February 2016 (UTC)[reply]
OK, I changed "After IBM's computer Deep Blue beat world chess champion Garry Kasparov in the 1997 match, it took almost two decades for programs using artificial intelligence techniques to be capable of achieving parity with amateur human Go players, with the strongest programs reaching about amateur 5 dan level" to "Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in the 1997 match, the strongest Go programs using artificial intelligence techniques only reached about amateur 5 dan level, and still could not beat a professional Go player without handicaps". Is this OK for you? --Neo-Jay (talk) 09:12, 5 February 2016 (UTC)[reply]
This is absolutely okay and I am very happy, because this does what I wanted: tell the reader what computers were capable of. Of course Go programs vary in strength, and so do chess programs; Garry would at the time still have beaten everybody else. Maybe "Slightly confusing" wasn't the right section to add my comments to. But we were comparing what the best computers with the best programming at that time could do, and that included beating (weaker) amateurs. Thanks for the edit. --Ulkomaalainen (talk) 22:26, 7 February 2016 (UTC)[reply]
Thanks for improving the article. --Neo-Jay (talk) 06:12, 8 February 2016 (UTC)[reply]

I think the point to bring across here is quite how rapid this recent development has been: a vast amount of effort was put into domain-specific work by many different teams of researchers over twenty years to reach parity with amateurs, and then, quite suddenly, using mostly the generic techniques of deep learning and Monte Carlo tree search (albeit both of them recent breakthroughs in AI, and with a lot of compute power behind them), AlphaGo steamed past the rest of the pack to reach parity with professionals, who play at a much stronger level. -- The Anome (talk) 10:32, 30 January 2016 (UTC)[reply]

Perhaps that can be clarified in the text. 109.145.19.117 (talk) 14:36, 30 January 2016 (UTC)[reply]
Making it look more accurate does not necessarily make it better or more correct. "20" can be interpreted as having 1 or 2 significant digits, while "18" must be interpreted as having 2 significant digits. It does not look like the 2-digit accuracy of "18 years" can be justified, e.g. because of the ambiguity of the criterion of being able to beat "amateurs", whereas "almost 20" (with 1 significant digit, i.e. in the sense of "almost 2 decades") possibly can. The period implied by "almost 20" (with 1 significant digit) actually includes 18, so the original text, while less accurate, was almost certainly more correct, and only possibly confusing to those who are unfamiliar with numbers and rounding. I would suggest rephrasing the original to "almost 2 decades" to improve on both options and make the statement more likely to be correct as well as less ambiguous. 110.23.118.21 (talk) 04:22, 31 January 2016 (UTC)[reply]
I changed "another 18 years" to "almost two decades" per your suggestion. Hope that everyone is happy now. --Neo-Jay (talk) 04:54, 31 January 2016 (UTC)[reply]

Style of play[edit]

It would be interesting to have some mention of AlphaGo's style of play during the five-game match against Fan Hui. The reports I read suggested that the computer's play appeared to be very methodical and cautious, with very little aggressive play unless it was behind.-- The Anome (talk) 21:19, 2 February 2016 (UTC)[reply]

https://www.youtube.com/watch?v=NHRHUHW6HQE and http://siliconangle.com/blog/2016/02/01/googles-alphago-plays-just-like-a-human-says-top-ranked-go-player/ (just citing the video) indicate that, according to a 9p player, AlphaGo's style was "Japanese style", meaning less direct starting of fights. Be warned that while it is pretty obvious that Kim knows his Go really well, he doesn't know AI, so I'd be careful about citing him on how AlphaGo actually came to this style, but he's a good source on the "results". (E.g., in Game 5 AlphaGo apparently did a very good counting of ko threats, and dealt well with ko elsewhere, but that doesn't actually mean AlphaGo "understands" ko or has some special system for it.) SnowFire (talk) 17:06, 3 February 2016 (UTC)[reply]

Possible references and information[edit]

I found this webpage that might be helpful in improving this article: http://www.dcine.com/2016/01/28/alphago/ Zamaster4536 (talk) 11:26, 22 February 2016 (UTC)[reply]

Some AI community "Responses to 2016 victory against Lee Sedol" are not really responses[edit]

In particular, Stephen Hawking's remark was not about AlphaGo. It was issued before the AlphaGo paper in Nature was even published (according to the reference provided, the remark was made in May 2015, versus the January 2016 publication of the Nature paper and the March 2016 match against Lee Sedol), so it cannot possibly be a response to AlphaGo's victory (unless, of course, Hawking succeeded in developing a time machine). Some other statements in this section also seem more like sentiment towards progress in AI research in general than responses to that specific match.

I would suggest using

at the beginning of the subsection instead and remove sentences that are not direct responses to the match. Piotr Zaborski (talk) 16:20, 16 March 2016 (UTC)[reply]

Hi Piotr, I added the "AI community" section; feel free to be WP:BOLD and fix it as desired. The reason I added the material is that all of the quotes apart from Hawking's were specifically in response to the AlphaGo match, and seem on-topic and interesting to me, but it sounds like YMMV. For Hawking's quote, the source (http://phys.org/news/2016-03-game-ai-human-smarts.html), which is explicitly about AlphaGo's 2016 victory, cites Hawking's 2015 statement for the purpose of creating a context for Sandberg's and Ganascia's reactions, which are explicitly about the 2016 victory. I'm not sure how we would explain the context of their March 2016 comments without, like phys.org did, introducing some kind of material published before March 2016. Rolf H Nelson (talk) 07:03, 17 March 2016 (UTC)[reply]

Error in diagram[edit]

When checking a recent edit of the caption under the first diagram at AlphaGo#Example game, I realized that there is an error in the second diagram. There is a black stone surrounded by four white stones at 4-down 14-across. This is impossible; the black stone should have been removed from the board (captured) when the ninety-sixth move was played by white at 3-down 14-across. Could someone please fix the diagram! JRSpriggs (talk) 15:46, 24 January 2017 (UTC)[reply]

I am no expert, but white 10 was captured with black 93, and then black 93 was captured with white 96 (see the note below the board) at the same position as white 10. I think it is standard to leave them shown like this, even though during the game captured stones would be removed. Similarly, white 82 is shown despite being captured by black 123, so that we know where white 82 was played; if it were removed, that wouldn't be obvious. crandles (talk) 16:40, 24 January 2017 (UTC)[reply]
White 10 and black 93 should be left there in the 1st diagram, and white 82 in the 2nd diagram. But the black stone at 4-down 14-across (black 93) in the 2nd diagram should be removed because, as Evercat explains below, that black stone is not present in the 2nd diagram's start position (move 100). Very sorry that I made that mistake when I added that kifu on 2 February 2016. --Neo-Jay (talk) 09:59, 27 March 2017 (UTC)[reply]

The 2nd diagram's start position (i.e. the unnumbered stones) should be the position at 100. I've fixed it. Evercat (talk) 13:42, 6 March 2017 (UTC)[reply]

That was my fault when I added that kifu on 2 February 2016. Many thanks for fixing it finally. --Neo-Jay (talk) 09:43, 27 March 2017 (UTC)[reply]
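For anyone following this thread who is unfamiliar with the rule at issue: a chain of stones is captured, and comes off the board, as soon as it has no liberties (no adjacent empty points). Here is a minimal sketch of that check (illustrative only; it has nothing to do with the diagram template's actual code):

```python
def has_liberty(board, row, col):
    """Return True if the chain of stones containing (row, col) has a liberty."""
    color = board[row][col]
    seen, stack = set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(board) and 0 <= nc < len(board[0]):
                if board[nr][nc] == ".":      # an empty point: a liberty
                    return True
                if board[nr][nc] == color:    # same-color stone: same chain
                    stack.append((nr, nc))
    return False

# A lone black stone surrounded by four white stones, as at 4-down
# 14-across after white 96: no liberties, so it must be removed.
board = [
    [".", "W", "."],
    ["W", "B", "W"],
    [".", "W", "."],
]
print(has_liberty(board, 1, 1))  # False -> the black stone is captured
```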

External links modified[edit]

Hello fellow Wikipedians,

I have just modified one external link on AlphaGo. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 05:40, 8 May 2017 (UTC)[reply]

AlphaGo Zero[edit]

Apparently, a new version, named AlphaGo Zero, beat the version that played against Lee Sedol by 100 to 0.--Alexmagnus (talk) 20:45, 18 October 2017 (UTC)[reply]

Here is a link to the abstract in Nature (journal). JRSpriggs (talk) 01:32, 19 October 2017 (UTC)[reply]