
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

From Wikipedia, the free encyclopedia
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
California State Legislature
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Senate voted: May 21, 2024 (32–1)
Sponsor(s): Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Website: Bill Text

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist".[1] Specifically, the bill would apply to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations.[2] SB 1047 would apply to all AI companies doing business in California—the location of the company does not matter.[3] The bill creates protections for whistleblowers,[4] requires developers to perform risk assessment on their models prior to release, and establishes a Division of Frontier Models in the Government Operations Agency. It would also establish CalCompute, a public cloud computing cluster in the Department of Technology for startups, researchers and community groups.

Background


The rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to express concern about the risks associated with increasingly powerful AI systems.[5][6]

Governor Newsom and President Biden issued executive orders on artificial intelligence in 2023.[7][8][9] Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023.[10][11][12] SB 1047 was introduced in February 2024. Wiener says his bill draws heavily on the Biden executive order, and is motivated by the absence of federal legislation: "I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law."[13] Several technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit.[14][15] Wiener argues that his bill makes some of these voluntary tests mandatory: "The CEOs of Meta, Google, of OpenAI—all of them—have volunteered to testing and that's what this bill asks them to do."[16]

Provisions


SB 1047 would establish a new Frontier Model Division in the existing Government Operations Agency, to be funded in part by fees and fines charged to companies that ask permission to create, improve, or operate AI models.[citation needed] The bill would require a developer, beginning January 1, 2028, to annually retain a third-party auditor to perform an independent audit of compliance with the requirements of the bill, as provided.[citation needed]

The division would review the results of safety tests and incidents, and issue guidance, standards, and best practices. SB 1047 would also create CalCompute, a public cloud computing cluster intended to allow startups, researchers, and community groups to participate in the development of large-scale AI systems.

SB 1047 covers AI models with training compute over 10²⁶ integer or floating-point operations and a cost of over $100 million, as well as models that have been fine-tuned from covered models.[2][17]
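As an illustration of this two-part coverage test, the following minimal sketch in Python checks whether a hypothetical model would meet both thresholds as described above. The thresholds come from the bill as summarized here, but the constant and function names are invented for this example; the sketch does not capture the bill's treatment of fine-tuned derivatives.

```python
# Illustrative sketch only: thresholds are as described in this article,
# but the names below are invented for the example.

COMPUTE_THRESHOLD_OPS = 1e26       # > 10^26 integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000   # > $100 million in training costs

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model exceeds both coverage thresholds."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A model trained with 2e26 operations at a cost of $150 million would be covered:
print(is_covered_model(2e26, 150_000_000))  # True
# A model below the compute threshold would not be, regardless of cost:
print(is_covered_model(5e25, 150_000_000))  # False
```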

Prior to model training, developers of covered models and their derivatives are required to submit a certification, under penalty of perjury and subject to auditing, that they have mitigated "reasonable" risk of "critical harms" from the covered model and its derivatives, including post-training modifications. Critical harms are defined with respect to four categories:[1][18]

  • Creation or use of a weapon of mass destruction
  • Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
  • Autonomous crimes causing mass casualties or at least $500 million of damage
  • Other harms of comparable severity

Developers of covered models are also required to implement "reasonable" safeguards to reduce risk, including the ability to shut down the model. Whistleblower provisions protect employees who report safety problems and incidents.[4] What is "reasonable" will be defined by the California Frontier Model Division, which provides advice on jury instructions and also advises on an "AI state of emergency."[1][18]

The California Frontier Model Division would be governed by a five-member board appointed by the California Legislature and the Governor.[17]

Reception


Supporters of the bill include Turing Award recipients Yoshua Bengio and Geoffrey Hinton,[19] Kevin Esvelt,[20] former OpenAI employee Daniel Kokotajlo,[21] Lawrence Lessig,[22] Sneha Revanur,[23] Stuart Russell[22] and Max Tegmark.[24] The Center for AI Safety, Economic Security California[25] and Encode Justice[26] are sponsors. Yoshua Bengio writes that the bill is a major step towards testing and safety measures for "AI systems beyond a certain level of capability [that] can pose meaningful risks to democracies and public safety."[27] Max Tegmark likened the bill's focus on holding companies responsible for the harms caused by their models to the FDA requiring clinical trials before a company can release a drug to the market. He also argued that the opposition to the bill from some companies is "straight out of Big Tech's playbook."[24]

Andrew Ng, Fei-Fei Li,[28] Ion Stoica, Jeremy Howard, Turing Award recipient Yann LeCun, and Congressmembers Zoe Lofgren and Ro Khanna have come out against the legislation.[6][29][30] Andrew Ng argues specifically that there are better, more targeted regulatory approaches, such as targeting deepfake pornography, watermarking generated materials, and investing in red teaming and other security measures.[27] University of California and Caltech researchers have also written open letters in opposition.[29]

Industry


The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress,[a] the Computer & Communications Industry Association[b] and TechNet.[c][2] Companies including Meta[34] and OpenAI are opposed to or have raised concerns about the bill, while Google,[34] Microsoft and Anthropic[24] have proposed substantial amendments.[3]

Several startup founders and venture capital organizations are opposed to the bill, including Y Combinator,[35][36] Andreessen Horowitz,[37][38][39] Context Fund[40][41] and Alliance for the Future.[42]

Open source developers


Critics have expressed concerns about the liability the bill would impose on open-source developers who use or improve existing freely available models. Yann LeCun, Chief AI Scientist at Meta, has suggested the bill would kill open-source AI models.[27] As of July 2024, there are concerns in the open-source community that, due to the threat of legal liability, companies like Meta may choose not to make models (for example, Llama) freely available.[43][44] The AI Alliance, among other open-source organizations, has written in opposition to the bill.[29]

Public opinion polls


The Artificial Intelligence Policy Institute, a group founded to prevent existential risk from artificial general intelligence, ran two polls of California respondents in July and August 2024. Support for the policy increased from 59% to 65%, while opposition also increased from 20% to 25%.[45][46][47][48]

A David Binder Research poll commissioned by the Center for AI Safety, another group focused on existential risk, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations.[49][50][51]


Notes

  1. ^ whose corporate partners include Amazon, Apple, Google and Meta[31]
  2. ^ whose members include Amazon, Apple, Google and Meta[32]
  3. ^ whose members include Amazon, Anthropic, Apple, Google, Meta and OpenAI[33]

References

  1. ^ a b c Bauer-Kahan, Rebecca. "ASSEMBLY COMMITTEE ON PRIVACY AND CONSUMER PROTECTION" (PDF). California Assembly. State of California. Retrieved 1 August 2024.
  2. ^ a b c Daniels, Owen J. (2024-06-17). "California AI bill becomes a lightning rod—for safety advocates and developers alike". Bulletin of the Atomic Scientists.
  3. ^ a b Rana, Preetika (2024-08-07). "AI Companies Fight to Stop California Safety Rules". The Wall Street Journal. Retrieved 2024-08-08.
  4. ^ a b Thibodeau, Patrick (2024-06-06). "Catastrophic AI risks highlight need for whistleblower laws". TechTarget. Retrieved 2024-08-06.
  5. ^ Henshall, Will (2023-09-07). "Yoshua Bengio". TIME.
  6. ^ a b Goldman, Sharon. "It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill". Fortune. Retrieved 2024-07-29.
  7. ^ "Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence". Governor Gavin Newsom. 2023-09-06.
  8. ^ "President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". White House. 2023-10-30.
  9. ^ Riquelmy, Alan (2024-02-08). "California lawmaker aims to put up guardrails for AI development". Courthouse News Service. Retrieved 2024-08-04.
  10. ^ Perrigo, Billy (2023-09-13). "California Bill Proposes Regulating AI at State Level". TIME. Retrieved 2024-08-12.
  11. ^ David, Emilia (2023-09-14). "California lawmaker proposes regulation of AI models". The Verge. Retrieved 2024-08-12.
  12. ^ "Senator Wiener Introduces Safety Framework in Artificial Intelligence Legislation". Senator Scott Wiener. 2023-09-13. Retrieved 2024-08-12.
  13. ^ Myrow, Rachael (2024-02-16). "California Lawmakers Take On AI Regulation With a Host of Bills". KQED.
  14. ^ Milmo, Dan (2023-11-03). "Tech firms to allow vetting of AI tools, as Musk warns all human jobs threatened". The Guardian. Retrieved 2024-08-12.
  15. ^ Browne, Ryan (2024-05-21). "Tech giants pledge AI safety commitments — including a 'kill switch' if they can't mitigate risks". CNBC. Retrieved 2024-08-12.
  16. ^ Metz, Cade (2024-08-14). "A California Bill to Regulate A.I. Causes Alarm in Silicon Valley". The New York Times. Retrieved 2024-08-15.
  17. ^ a b "07/01/24 - Assembly Judiciary Bill Analysis". California Legislative Information.
  18. ^ a b "Analysis of the 7/3 Revision of SB 1047". Context Fund.
  19. ^ Kokalitcheva, Kia (2024-06-26). "California's AI safety squeeze". Axios.
  20. ^ Riquelmy, Alan (2024-08-14). "California AI regulation bill heads to must-pass hearing". Courthouse News Service. Retrieved 2024-08-15.
  21. ^ Johnson, Khari (2024-08-12). "Why Silicon Valley is trying so hard to kill this AI bill in California". CalMatters. Retrieved 2024-08-12.
  22. ^ a b Pillay, Tharin (2024-08-07). "Renowned Experts Pen Support for California's Landmark AI Safety Bill". TIME. Retrieved 2024-08-08.
  23. ^ "Assembly Standing Committee on Privacy and Consumer Protection". CalMatters. Retrieved 2024-08-08.
  24. ^ a b c Samuel, Sigal (2024-08-05). "It's practically impossible to run a big AI company ethically". Vox. Retrieved 2024-08-06.
  25. ^ DiFeliciantonio, Chase (2024-06-28). "AI companies asked for regulation. Now that it's coming, some are furious". San Francisco Chronicle.
  26. ^ Korte, Lara (2024-02-12). "A brewing battle over AI". Politico.
  27. ^ a b c Edwards, Benj (2024-07-29). "From sci-fi to state law: California's plan to prevent AI catastrophe". Ars Technica. Retrieved 2024-07-30.
  28. ^ Li, Fei-Fei. "'The Godmother of AI' says California's well-intended AI bill will harm the U.S. ecosystem". Fortune. Retrieved 2024-08-08.
  29. ^ a b c "SB 1047 Impacts Analysis". Context Fund.
  30. ^ "Assembly Judiciary Committee 2024-07-02". California State Assembly.
  31. ^ "Corporate Partners". Chamber of Progress.
  32. ^ "Members". Computer & Communications Industry Association.
  33. ^ "Members". TechNet.
  34. ^ a b Korte, Lara (2024-06-26). "Big Tech and the little guy". Politico.
  35. ^ "Little Tech Brings a Big Flex to Sacramento". Politico.
  36. ^ "Proposed California law seeks to protect public from AI catastrophes". The Mercury News.
  37. ^ "California's Senate Bill 1047 - What You Need to Know". Andreessen Horowitz.
  38. ^ "California's AI Bill Undermines the Sector's Achievements". Financial Times.
  39. ^ "Senate Bill 1047 will crush AI innovation in California". Orange County Register. 10 July 2024.
  40. ^ "AI Startups Push to Limit or Kill California Public Safety Bill". Bloomberg Law.
  41. ^ "The Batch: Issue 257". Deeplearning.ai. 10 July 2024.
  42. ^ "The AI Safety Fog of War". Politico. 2024-05-02.
  43. ^ Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-29.
  44. ^ Piper, Kelsey (2024-06-14). "The AI bill that has Big Tech panicked". Vox. Retrieved 2024-07-29.
  45. ^ Bordelon, Brendan. "What Kamala Harris means for tech". POLITICO Pro. (subscription required)
  46. ^ "New Poll: California Voters, Including Tech Workers, Strongly Support AI Regulation Bill SB1047". Artificial Intelligence Policy Institute. 22 July 2024.
  47. ^ Sullivan, Mark (2024-08-08). "Elon Musk's Grok chatbot spewed election disinformation". Fast Company. Retrieved 2024-08-13.
  48. ^ "Poll: Californians Support Strong Version of SB1047, Disagree With Anthropic's Proposed Changes". Artificial Intelligence Policy Institute. Retrieved 2024-08-13.
  49. ^ "California Likely Voter Survey: Public Opinion Research Summary". David Binder Research.
  50. ^ Lee, Wendy (2024-06-19). "California lawmakers are trying to regulate AI before it's too late. Here's how". Los Angeles Times.
  51. ^ Piper, Kelsey (2024-07-19). "Inside the fight over California's new AI bill". Vox. Retrieved 2024-07-22.