
Inceptionv3

From Wikipedia, the free encyclopedia
[Figure: An individual module of the Inception v3 model; a standard module on the left and a dimension-reduced module on the right.]

Inception v3[1][2] is a convolutional neural network used for image analysis and object detection, and originated as a module of GoogLeNet. It is the third edition of Google's Inception convolutional neural network, originally introduced during the ImageNet Large Scale Visual Recognition Challenge. The design of Inception v3 allows deeper networks while keeping the number of parameters from growing too large: the network has "under 25 million parameters", compared with 60 million for AlexNet.[1]
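
The parameter-count comparison above can be checked directly. The following is a minimal sketch, assuming the TensorFlow/Keras implementation of the architecture (tf.keras.applications.InceptionV3) rather than Google's original code; it builds the network with randomly initialized weights and prints the total parameter count, which comes out to roughly 23.9 million.

    import tensorflow as tf

    # Build Inception v3 with its standard 1000-class ImageNet head and
    # random (untrained) weights; weights=None avoids any download.
    model = tf.keras.applications.InceptionV3(weights=None)

    # Total parameter count: roughly 23.9 million, consistent with the
    # "under 25 million parameters" figure cited above.
    print(f"Inception v3 parameters: {model.count_params():,}")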

Just as ImageNet can be thought of as a database of classified visual objects, Inception helps with the classification of objects in computer vision.[3] The Inception v3 architecture has been reused in many applications, often starting from weights pre-trained on ImageNet. One such use is in the life sciences, where it aids leukemia research.[4]
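
The "pre-trained on ImageNet" reuse pattern mentioned above typically looks like the following sketch. It assumes the TensorFlow/Keras distribution of the ImageNet weights and a hypothetical local image file named example.jpg; real applications often go a step further and replace the classification head, then fine-tune on domain-specific data (as in the leukemia work cited above).

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.inception_v3 import (
        InceptionV3, preprocess_input, decode_predictions)

    # Load Inception v3 with weights pre-trained on ImageNet.
    model = InceptionV3(weights="imagenet")

    # Inception v3 expects 299x299 RGB inputs; "example.jpg" is a placeholder path.
    img = tf.keras.utils.load_img("example.jpg", target_size=(299, 299))
    batch = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

    # Print the top-3 ImageNet labels with their scores.
    print(decode_predictions(model.predict(batch), top=3)[0])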

The name Inception was chosen as a codename after the popular "we need to go deeper" internet meme went viral, quoting a line from Christopher Nolan's film Inception.[1]

References

  1. Szegedy, Christian (2015). "Going deeper with convolutions" (PDF). CVPR 2015.
  2. Tang (May 2018). Intelligent Mobile Projects with TensorFlow. Packt Publishing. Chapter 2. ISBN 9781788834544.
  3. Karim; Zaccone (March 2018). Deep Learning with TensorFlow. Packt Publishing. Chapter 4. ISBN 9781788831109.
  4. Milton-Barker, Adam. "Inception V3 Deep Convolutional Architecture For Classifying Acute Myeloid/Lymphoblastic Leukemia". intel.com. Intel. Retrieved 2 February 2019.