Talk:Generative adversarial network
This article is rated B-class on Wikipedia's content assessment scale. It is of interest to a number of WikiProjects.
History: Adversarial Entities
The idea of adversarial entities is an old concept; singling out Schmidhuber is awkward.
+ https://stats.stackexchange.com/questions/251460/were-generative-adversarial-networks-introduced-by-j%C3%BCrgen-schmidhuber. + https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips27/reviews/1384.html — Preceding unsigned comment added by Saviourmachine (talk • contribs) 13:08, 1 November 2017 (UTC)
"photographs that look authentic to human observers"
I'm going to fix the intro paragraph, which currently claims: "This technique can generate photographs that look authentic to human observers. For example, a GAN can be used to generate a synthetic photograph of a cat that fools the discriminator into accepting it as an actual photograph.[Salimans et al 2016]" Two things are wrong. Firstly, the paragraph conflates human and machine discrimination.
More fundamentally, "photographs that look authentic to human observers" is not demonstrated in the Salimans et al paper. In fact, they specifically say they could tell the difference between real and generated photos in over 95% of cases.
If someone has citations showing that the realism has improved since then, then please cite them (and cite properly!). Otherwise, we have to stick with the slightly less grand claim that I'm about to edit it to.--mcld (talk) 16:11, 29 November 2017 (UTC)
The article discusses the technique only in terms of the adversarial/generative aspect of winnowing down candidates, without giving any information on HOW the photographs etc. are actually created. Is it magic? Pieces of other data? Certain software? Customized software? The part about the actual creation of the images needs to be addressed.
Debate over who invented GANs
There are several independent debates over who invented which aspects of GANs.
One user completely rewrote the history section, giving sole credit for GANs to Ian Goodfellow, using IP addresses such as 2620:149:... from Apple Inc., Cupertino, CA. This user writes: "Ian Goodfellow is generally recognized as having invented GANs[1][2] in 2014.[3]"
However, basic ideas of GANs were published in a 2010 blog post by Olli Niemitalo.[4] The NIPS 2014 GAN paper[3] does not mention it. User 2620:149:... wrote: "This idea was never implemented and did not involve stochasticity in the generator and thus was not a generative model." Olli Niemitalo[4] did, however, describe a GAN: a so-called cGAN.[5]
User 2620:149:... also writes: "Review papers about GANs from academic sources, such as [6] generally assign credit for the idea to Ian Goodfellow and make no mention of Juergen Schmidhuber."
On the other hand, Ian Goodfellow's own peer-reviewed GAN paper[3] does mention Jürgen Schmidhuber's unsupervised adversarial technique called predictability minimization or PM (1992).[7] Here, an agent contains two artificial neural networks, Net1 and Net2. Net1 generates a code of incoming data. The code is a vector of numbers between 0 and 1. Net2 learns to predict each such number from the remaining numbers. It learns to minimize its error. As a consequence, Net2 learns the conditional expected value of each number, given the remaining numbers. At the same time, however, the adversarial Net1 learns to generate codes that maximize the error of Net2. In the ideal case, absent local minima, Net1 learns to encode redundant input patterns through codes with statistically independent components, while Net2 learns the probabilities of these codes, and therefore the probabilities of the encoded patterns.[7] In 1996, this zero-sum game was applied to images, and produced codes similar to those found in the mammalian brain.[8]
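As a concrete illustration of the two-network game just described, here is a minimal sketch. It is a reconstruction for illustration only, not code from the PM papers;[7][8] the layer sizes, learning rates, batch of random stand-in inputs, and alternating gradient updates are all assumptions made for the example.

```python
import torch
import torch.nn as nn

CODE_DIM, IN_DIM = 8, 32

# Net1: encoder mapping input patterns to a code of numbers in (0, 1).
net1 = nn.Sequential(nn.Linear(IN_DIM, CODE_DIM), nn.Sigmoid())
# Net2: one predictor per code unit; each predicts unit i from the others.
net2 = nn.ModuleList(nn.Linear(CODE_DIM - 1, 1) for _ in range(CODE_DIM))

opt1 = torch.optim.SGD(net1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(net2.parameters(), lr=0.1)

def total_prediction_error(code):
    # The value function of the game: summed squared error of
    # predicting each code unit from the remaining units.
    err = 0.0
    for i, pred in enumerate(net2):
        others = torch.cat([code[:, :i], code[:, i + 1:]], dim=1)
        err = err + ((torch.sigmoid(pred(others)) - code[:, i:i + 1]) ** 2).mean()
    return err

for step in range(1000):
    x = torch.randn(64, IN_DIM)  # stand-in for real input patterns

    # Net2 minimizes the error (the code is held fixed via detach)...
    loss2 = total_prediction_error(net1(x).detach())
    opt2.zero_grad()
    loss2.backward()
    opt2.step()

    # ...while the adversarial Net1 maximizes the same quantity.
    loss1 = -total_prediction_error(net1(x))
    opt1.zero_grad()
    loss1.backward()
    opt1.step()
```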
There is a blog post by Jürgen Schmidhuber[9] which disputes the claim of the GAN paper[3] that PM is not a minimax game (references adjusted): "The well-known NIPS 2014 GAN paper[3] claims that PM is not based on a minimax game with a value function that one agent seeks to maximise and the other seeks to minimise, and that for GANs "the competition between the networks is the sole training criterion, and is sufficient on its own to train the network," while PM "is only a regularizer that encourages the hidden units of a neural network to be statistically independent while they accomplish some other task; it is not a primary training criterion"[3]. However, good old PM is indeed a pure minimax game, too, e.g.,[8], Equation 2 (there is no "other task"). In particular, PM was also trained[8][7] (also on images[8]) such that "the competition between the networks is the sole training criterion, and is sufficient on its own to train the network."[9]
After checking the paper,[8] this claim seems to be true, and the GAN paper[3] seems to be wrong.
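For reference, the two objectives can be set side by side. The GAN value function below is as given in the GAN paper;[3] the PM objective is a schematic rendering of the game described above, not a verbatim copy of Equation 2 of the 1996 paper.[8] GAN:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

PM, with code $y = (y_1, \dots, y_n)$ produced by Net1 and predictors $P_1, \dots, P_n$ forming Net2:

$$\min_{P_1, \dots, P_n} \max_{\text{Net1}} \sum_{i=1}^{n} \mathbb{E}\left[\left(y_i - P_i(y_{\setminus i})\right)^2\right]$$

where $y_{\setminus i}$ denotes all code units except the $i$-th. In both cases a single scalar value function is minimized by one player and maximized by the other, which is the sense in which both are minimax games.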
- This isn't remotely how Wikipedia works. Please submit to a peer-reviewed journal, not here. Rolf H Nelson (talk) 03:51, 16 July 2019 (UTC)
Furthermore, even before PM, other unsupervised adversarial networks were proposed by Jürgen Schmidhuber in 1990 for artificial curiosity. An agent contains two artificial neural networks, Net1 and Net2. Net1 generates an output action that produces new data. Net2 tries to predict the data. It learns to minimize its error. At the same time, however, the adversarial Net1 learns to generate outputs that maximize the error of Net2. Thus Net1 learns to generate difficult data from which Net2 can still learn something. In the original work, both Net1 and Net2 were recurrent neural networks, so that they could also learn to generate and perceive sequences of actions and data.[10][11] In the 1990s, this led to many papers on artificial curiosity and zero-sum games.[9] The NIPS 2014 GAN paper[3] does not cite this work, although artificial curiosity closely resembles GANs.
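A toy rendering of this 1990 setup, in the same style as the PM sketch above, might look as follows. It is again an illustrative reconstruction under simplifying assumptions: feedforward nets instead of the original recurrent ones, and a differentiable stand-in "world" so that plain backpropagation suffices, where the original formulation used reinforcement learning signals.

```python
import torch
import torch.nn as nn

ACT_DIM, OBS_DIM = 4, 16
W = torch.randn(ACT_DIM, OBS_DIM)  # fixed, unknown world dynamics

def world(action):
    # Toy differentiable environment: maps an action to new data.
    return torch.sin(action @ W)

net1 = nn.Sequential(nn.Linear(OBS_DIM, ACT_DIM), nn.Tanh())  # controller
net2 = nn.Linear(ACT_DIM, OBS_DIM)                            # predictor

opt1 = torch.optim.SGD(net1.parameters(), lr=0.05)
opt2 = torch.optim.SGD(net2.parameters(), lr=0.05)
mse = nn.MSELoss()

obs = torch.zeros(32, OBS_DIM)
for step in range(1000):
    action = net1(obs)
    data = world(action)

    # Net2 learns to predict the data caused by Net1's actions.
    loss2 = mse(net2(action.detach()), data.detach())
    opt2.zero_grad()
    loss2.backward()
    opt2.step()

    # Net1 seeks actions whose outcomes Net2 predicts badly:
    # maximizing Net2's error is the curiosity signal.
    loss1 = -mse(net2(action), data)
    opt1.zero_grad()
    loss1.backward()
    opt1.step()

    obs = data.detach()
```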
The debate above attracted many comments from the machine learning community[12][13][5][14] and was even picked up by the popular press,[15] so far without a generally accepted conclusion.
Also relevant for the history section: In 2012, Yan Zhou et al. applied the adversarial principle to Support Vector Machines.[16]
The idea to infer models in a competitive setting (model versus discriminator) was also adopted by Li, Gauci and Gross in 2013.[17] Their method is used for behavioral inference. It is termed Turing Learning,[18] as the setting is akin to that of a Turing test. Turing Learning is a generalization of GANs.[19] Models other than neural networks can be considered. Moreover, the discriminators are allowed to influence the processes from which the datasets are obtained, making them active interrogators as in the Turing test.
Starbag (talk) 16:47, 28 May 2019 (UTC)
- I'm staying out of editing the page itself, but here's my take: My idea matches that of a conditional GAN (cGAN) with a least squares loss function (thus cLSGAN) conditioned on a part of the data and without a noise input vector. The example I use to present the idea is image inpainting. Olli Niemitalo (talk) 04:43, 31 May 2019 (UTC)
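For concreteness, here is a rough sketch of the losses that description implies. It is one reading of the setup, not code from the 2010 post: G and D are placeholder networks, the generator completes a masked region directly from the surrounding context (no noise input), and the least-squares targets of 1 for real and 0 for fake follow the usual LSGAN convention.

```python
import torch

def d_loss(D, context, real_patch, fake_patch):
    # Discriminator: push scores for real completions toward 1
    # and scores for generated completions toward 0.
    real_score = D(context, real_patch)
    fake_score = D(context, fake_patch.detach())
    return ((real_score - 1) ** 2).mean() + (fake_score ** 2).mean()

def g_loss(D, context, fake_patch):
    # Generator: make its completions score as real; there is no
    # noise vector, so the completion is a function of the context alone.
    return ((D(context, fake_patch) - 1) ** 2).mean()

# Hypothetical usage: fake_patch = G(masked_image); the two losses
# are then minimized alternately by the two networks' optimizers.
```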
- Thanks for this! There is a new paper called Unsupervised Minimax: Adversarial Curiosity, Generative Adversarial Networks, and Predictability Minimization by Jürgen Schmidhuber.[20] It mentions your cGAN and LSGAN. Sections 2 and 2.1 formulate GAN and cGAN as special cases of his Adversarial Curiosity (1990).[10][11] Section 5.1 again disputes[9] the claim of the GAN paper[3] that PM is not a minimax game, citing Equation 2 of the 1996 paper on PM.[8] There is yet another thread on this at reddit.[21] This debate should be reflected in the history section. Starbag (talk) 18:09, 13 June 2019 (UTC)
1. Klok, Christie (February 21, 2018). "The GANfather: The man who's given machines the gift of imagination". MIT Technology Review. Retrieved May 14, 2019.
2. "Invented a way for neural networks to get better by working together". Innovators Under 35. MIT Technology Review.
3. Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks" (PDF). Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014). pp. 2672–2680.
4. Niemitalo, Olli (February 24, 2010). "A method for training artificial neural networks to generate missing data within a variable context". Internet Archive (Wayback Machine). Retrieved February 22, 2019.
5. "GANs were invented in 2010?". reddit r/MachineLearning. 2019. Retrieved May 28, 2019.
6. Creswell, Antonia; White, Tom; Dumoulin, Vincent; Arulkumaran, Kai; Sengupta, Biswa; Bharath, Anil (January 2018). "Generative Adversarial Networks: An Overview". IEEE Signal Processing Magazine.
7. Schmidhuber, Jürgen (November 1992). "Learning Factorial Codes by Predictability Minimization". Neural Computation. 4 (6): 863–879. doi:10.1162/neco.1992.4.6.863.
8. Schmidhuber, Jürgen; Eldracher, Martin; Foltin, Bernhard (1996). "Semilinear predictability minimization produces well-known feature detectors". Neural Computation. 8 (4): 773–786.
9. Schmidhuber, Jürgen (2018). "Unsupervised Neural Networks Fight in a Minimax Game". Retrieved February 20, 2019.
10. Schmidhuber, Jürgen (1990). "Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments" (PDF). TR FKI-126-90. Tech. Univ. Munich.
11. Schmidhuber, Jürgen (1991). "A possibility for implementing curiosity and boredom in model-building neural controllers". Proc. SAB'1991. MIT Press/Bradford Books. pp. 222–227.
12. "What's happening at NIPS 2016? (Jurgen Schmidhuber)". reddit r/MachineLearning. 2016. Retrieved May 28, 2019.
13. "Schmidhuber's new blog post on Unsupervised Adversarial Neural Networks and Artificial Curiosity in Reinforcement Learning". reddit r/MachineLearning. 2018. Retrieved May 28, 2019.
14. "Hinton, LeCun, Bengio receive ACM Turing Award". reddit r/MachineLearning. 2019. Retrieved May 28, 2019.
15. Vance, Ashlee (May 15, 2018). "Quote: It was another way of saying, Hey, kid, you didn't invent this". Bloomberg Businessweek. Retrieved January 16, 2019.
16. Zhou, Yan; Kantarcioglu, Murat; Thuraisingham, Bhavani; Xi, Bowei (August 12–16, 2012). "Adversarial Support Vector Machine Learning". Proceedings of KDD '12. Beijing, China: ACM.
17. Li, Wei; Gauci, Melvin; Groß, Roderich (July 6, 2013). "A Coevolutionary Approach to Learn Animal Behavior Through Controlled Interaction". Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO 2013). Amsterdam, The Netherlands: ACM. pp. 223–230. doi:10.1145/2463372.2465801.
18. Li, Wei; Gauci, Melvin; Groß, Roderich (August 30, 2016). "Turing learning: a metric-free approach to inferring behavior and its application to swarms". Swarm Intelligence. 10 (3): 211–243. doi:10.1007/s11721-016-0126-1.
19. Groß, Roderich; Gu, Yue; Li, Wei; Gauci, Melvin (December 6, 2017). "Generalizing GANs: A Turing Perspective". Proceedings of the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS 2017). Long Beach, CA, USA. pp. 1–11.
20. Schmidhuber, Jürgen (2019). "Unsupervised Minimax: Adversarial Curiosity, Generative Adversarial Networks, and Predictability Minimization". arXiv:1906.04493 [cs.NE].
21. "[R] [1906.04493] Unsupervised Minimax: Adversarial Curiosity, Generative Adversarial Networks, and Predictability Minimization (Schmidhuber)". reddit r/MachineLearning. 2019. Retrieved June 12, 2019.
Physics applications
Hi, just in case people weren't aware: CERN actively uses this method to train particle track algorithms. In this case, the model tries to predict how a given decay "should" look given known data, and then copies information on the fly in the areas of interest to improve data throughput. — Preceding unsigned comment added by 88.81.156.140 (talk) 06:07, 26 March 2021 (UTC)
Some necessary details aren’t yet discussed or clear
Several details that would be needed to actually create a GAN aren't yet discussed. For example, the article describes generation that targets statistical similarity. In what specific ways? You'd think the choice of characteristics to analyze would be key to succeeding. So how do you choose those characteristics? And how would possible candidates be generated? Just randomly flipping pixels on and off and evaluating the result? That would be pretty inefficient. Davidmsm22 (talk) 15:50, 25 December 2021 (UTC)
A Commons file used on this page or its Wikidata item has been nominated for deletion
The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:
Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 09:22, 25 February 2022 (UTC)
A Commons file used on this page or its Wikidata item has been nominated for deletion
The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:
Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 23:07, 10 February 2023 (UTC)