P(doom)

From Wikipedia, the free encyclopedia

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence.[1][2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.[3]

Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton[4] and Yoshua Bengio[5] began to warn of the risks of AI.[6] In 2022, a survey of AI researchers, which had a 17% response rate, found that the majority believed there is at least a 10% chance that our inability to control AI could cause an existential catastrophe.[7]

Sample P(doom) values

Name | P(doom) | Notes
Dario Amodei | 10-25%[6] | CEO of Anthropic
Elon Musk | 10-20%[8] | Businessman and CEO of X, Tesla, and SpaceX
Paul Christiano | 50%[9] | Head of research at the US AI Safety Institute
Lina Khan | 15%[6] | Chair of the Federal Trade Commission
Emmett Shear | 5-50%[6] | Co-founder of Twitch and former interim CEO of OpenAI
Geoffrey Hinton | 10%[6][Note 1] | AI researcher, formerly of Google
Yoshua Bengio | 20%[3][Note 2] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms
Jan Leike | 10-90%[1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI
Vitalik Buterin | 10%[1] | Co-founder of Ethereum
Dan Hendrycks | 80%+[1][Note 3] | Director of the Center for AI Safety
Grady Booch | c. 0%[1][Note 4] | American software engineer
Casey Newton | 5%[1] | American technology journalist
Eliezer Yudkowsky | 99%+[10] | Founder of the Machine Intelligence Research Institute
Roman Yampolskiy | 99.9%[11][Note 5] | Latvian computer scientist
Marc Andreessen | 0%[12] | American businessman
Yann LeCun | <0.01%[13][Note 6] | Chief AI Scientist at Meta

Criticism

There has been some debate about the usefulness of P(doom) as a term, in part due to a lack of clarity about whether a given prediction is conditional on the existence of artificial general intelligence, what time frame it assumes, and what precisely is meant by "doom".[6][14]
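
Part of the ambiguity is that a forecast made conditional on artificial general intelligence being built differs from an unconditional one by the probability that such a system is ever built. As an illustrative sketch only (the events "doom" and "AGI" here are assumptions introduced for exposition, not definitions taken from the cited sources), the law of total probability relates the two readings:

    P(\mathrm{doom}) = P(\mathrm{doom} \mid \mathrm{AGI})\,P(\mathrm{AGI}) + P(\mathrm{doom} \mid \neg\mathrm{AGI})\,P(\neg\mathrm{AGI})

Under this reading, two forecasters quoting the same number can mean different things if one is reporting the conditional term P(doom | AGI) and the other the unconditional P(doom).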

Notes

  1. ^ Conditional on AI not being "strongly regulated"; time frame of 30 years.
  2. ^ Based on an estimated "50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale."
  3. ^ Up from approximately 20% two years prior.
  4. ^ Equivalent to "P(all the oxygen in my room spontaneously moving to a corner thereby suffocating me)".
  5. ^ Within the next 100 years.
  6. ^ "Less likely than an asteroid wiping us out".

References

  1. ^ a b c d e f Railey, Clint (2023-07-12). "P(doom) is AI's latest apocalypse metric. Here's how to calculate your score". Fast Company.
  2. ^ Thomas, Sean (2024-03-04). "Are we ready for P(doom)?". The Spectator. Retrieved 2024-06-19.
  3. ^ a b "It started as a dark in-joke. It could also be one of the most important questions facing humanity". ABC News. 2023-07-14. Retrieved 2024-06-18.
  4. ^ Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times. ISSN 0362-4331. Retrieved 2024-06-19.
  5. ^ "One of the "godfathers of AI" airs his concerns". The Economist. ISSN 0013-0613. Retrieved 2024-06-19.
  6. ^ a b c d e f Roose, Kevin (2023-12-06). "Silicon Valley Confronts a Grim New A.I. Metric". The New York Times. ISSN 0362-4331. Retrieved 2024-06-17.
  7. ^ "2022 Expert Survey on Progress in AI". AI Impacts. 2022-08-04. Retrieved 2024-06-19.
  8. ^ Tangalakis-Lippert, Katherine. "Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway". Business Insider. Retrieved 2024-06-19.
  9. ^ "ChatGPT creator says there's 50% chance AI ends in 'doom'". The Independent. 2023-05-03. Retrieved 2024-06-19.
  10. ^ "TIME100 AI 2023: Eliezer Yudkowsky". Time. 2023-09-07. Retrieved 2024-06-18.
  11. ^ Altchek, Ana. "Why this AI researcher thinks there's a 99.9% chance AI wipes us out". Business Insider. Retrieved 2024-06-18.
  12. ^ Marantz, Andrew (2024-03-11). "Among the A.I. Doomsayers". The New Yorker. ISSN 0028-792X. Retrieved 2024-06-19.
  13. ^ Wayne Williams (2024-04-07). "Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees". TechRadar. Retrieved 2024-06-19.
  14. ^ King, Isaac (2024-01-01). "Stop talking about p(doom)". LessWrong.
  15. ^ "GUM & Ambrose Kenny-Smith are teaming up again for new collaborative album 'III Times'". DIY. 2024-05-07. Retrieved 2024-06-19.