Quoc V. Le

Quoc V. Le
Born: Lê Viết Quốc, 1982 (age 41–42)
Education: Australian National University; Stanford University
Known for: seq2seq; Google Neural Machine Translation
Scientific career
Fields: Machine learning
Institutions: Google Brain
Thesis: Scalable feature learning (2013)
Doctoral advisor: Andrew Ng
Other academic advisors: Alex Smola

Lê Viết Quốc (born 1982),[1] or in romanized form Quoc Viet Le, is a Vietnamese-American computer scientist and a machine learning pioneer at Google Brain, which he co-founded with other Google researchers. He co-invented the doc2vec[2] and seq2seq[3] models in natural language processing. Le also initiated and led the AutoML initiative at Google Brain, including the proposal of neural architecture search.[4][5][6][7]

Education and career

Le was born in Hương Thủy in Thừa Thiên Huế province, Vietnam.[5] He studied at Quốc Học Huế High School.[8] In 2004, Le moved to Australia and enrolled in a bachelor's program at the Australian National University, where he worked under Alex Smola on kernel methods in machine learning.[9] In 2007, Le moved to Stanford University for graduate studies in computer science, where his PhD advisor was Andrew Ng.

In 2011, Le became a founding member of Google Brain along with his then PhD advisor Andrew Ng, Google Fellow Jeff Dean, and Google researcher Greg Corrado.[5] Le led Google Brain's first major breakthrough: a deep learning algorithm, trained on 16,000 CPU cores, that learned to recognize cats solely by watching YouTube videos, without ever having been told what a "cat" is.[10][11]

In 2014, Ilya Sutskever, Oriol Vinyals and Le proposed the seq2seq model for machine translation. In the same year, Tomáš Mikolov and Le proposed the doc2vec model for representation learning of documents. Le is among the lead authors and researchers of Google Neural Machine Translation.[12]

Le initiated and led the AutoML project at Google Brain, including the proposal of neural architecture search. Le is among the authors of LaMDA, a conversational large language model that was originally developed and introduced as Meena in 2020. In 2022, Le and co-authors proposed chain-of-thought prompting as a method to improve the reasoning ability of large language models.[13]

Honors and awards

Le was named to MIT Technology Review's Innovators Under 35 list in 2014.[14] He has been interviewed by, and his research has been reported in, major media outlets including Wired,[6] The New York Times,[15] The Atlantic,[16] and the MIT Technology Review.[17] Le was named an Alumni Laureate of the Australian National University School of Computing in 2022.[18]

References

  1. ^ "'Quái kiệt' AI Lê Viết Quốc - người đứng sau thuật toán Transformers của ChatGPT". Viettimes - tin tức và phân tích chuyên sâu kinh tế, quốc tế, y tế (in Vietnamese). 2023-02-09. Retrieved 2023-07-03.
  2. ^ Le, Quoc V.; Mikolov, Tomas (2014-05-22). "Distributed Representations of Sentences and Documents". arXiv:1405.4053 [cs.CL].
  3. ^ Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V. (2014-12-14). "Sequence to Sequence Learning with Neural Networks". arXiv:1409.3215 [cs.CL].
  4. ^ Zoph, Barret; Le, Quoc V. (2017-02-15). "Neural Architecture Search with Reinforcement Learning". arXiv:1611.01578 [cs.LG].
  5. ^ a b c "Le Viet Quoc, a young Vietnamese engineer who holds Google's brain". tipsmake.com. Retrieved 2022-11-24.
  6. ^ a b Hernandez, Daniela. "A Googler's Quest to Teach Machines How to Understand Emotions". Wired. ISSN 1059-1028. Retrieved 2022-11-25.
  7. ^ Chow, Rony (2021-06-07). "Quoc V. Le: Fast, Furious and Automatic". History of Data Science. Retrieved 2022-11-26.
  8. ^ "Fulbright scholars Vietnam - Le Viet Quoc".
  9. ^ "Meet Le Viet Quoc, a Vietnamese talent at Google". Tuoi Tre News. 2019-02-15. Retrieved 2022-11-25.
  10. ^ Markoff, John (June 25, 2012). "How Many Computers to Identify a Cat? 16,000". The New York Times.
  11. ^ Ng, Andrew; Dean, Jeff (2012). "Building High-level Features Using Large Scale Unsupervised Learning". arXiv:1112.6209 [cs.LG].
  12. ^ "A Neural Network for Machine Translation, at Production Scale". Google Research Blog. 2016-09-27. Retrieved 2023-07-02.
  13. ^ "Language Models Perform Reasoning via Chain of Thought". Google Research Blog. 2022-05-22. Retrieved 2023-07-02.
  14. ^ "Quoc Le". MIT Technology Review. Retrieved 2022-11-24.
  15. ^ Lewis-Kraus, Gideon (2016-12-14). "The Great A.I. Awakening". The New York Times. ISSN 0362-4331. Retrieved 2022-11-26.
  16. ^ Madrigal, Alexis C. (2012-06-26). "The Triumph of Artificial Intelligence! 16,000 Processors Can Identify a Cat in a YouTube Video Sometimes". The Atlantic. Retrieved 2022-11-26.
  17. ^ "AI's Language Problem". MIT Technology Review. Retrieved 2022-11-26.
  18. ^ "Celebrating 50 years of teaching computer science at ANU". ANU College of Engineering, Computing and Cybernetics. Retrieved 2023-07-02.