| Formation | 2022 |
|---|---|
| Founders | Dan Hendrycks, Oliver Zhang |
| Director | Dan Hendrycks |
| Headquarters | San Francisco, California, US |
| Website | safe.ai |
The Center for AI Safety (CAIS) is an American nonprofit organization based in San Francisco that promotes the safe development and deployment of artificial intelligence (AI). CAIS's work encompasses research in technical AI safety and AI ethics, advocacy, and support to grow the AI safety research field.[1][2] It was founded in 2022 by Dan Hendrycks and Oliver Zhang.[3]
In May 2023, CAIS published the statement on AI risk of extinction signed by hundreds of professors of AI, leaders of major AI companies, and other public figures.[4][5][6][7][8]
Research
CAIS researchers published "An Overview of Catastrophic AI Risks", which details risk scenarios and risk mitigation strategies. Risks described include the use of AI in autonomous warfare or for engineering pandemics, as well as AI capabilities for deception and hacking.[9][10] Another work, conducted in collaboration with researchers at Carnegie Mellon University, described an automated way to discover adversarial attacks on large language models that bypass safety measures, highlighting the inadequacy of current safety systems.[11][12]
Activities
Other initiatives include a compute cluster to support AI safety research, an online course titled "Intro to ML Safety", and a fellowship for philosophy professors to address conceptual problems.[10]
The Center for AI Safety Action Fund is a sponsor of the California bill SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.[13]
In 2023, the cryptocurrency exchange FTX, which went bankrupt in November 2022, attempted to recoup $6.5 million that it had donated to CAIS earlier that year.[14][15]
References
- ↑ "AI poses risk of extinction, tech leaders warn in open letter. Here's why alarm is spreading". USA TODAY. May 31, 2023.
- ↑ "Our Mission | CAIS". www.safe.ai. Retrieved April 13, 2023.
- ↑ Edwards, Benj (May 30, 2023). "OpenAI execs warn of "risk of extinction" from artificial intelligence in new open letter". Ars Technica.
- ↑ "Center for AI Safety's Hendrycks on AI Risks". Bloomberg Technology. May 31, 2023.
- ↑ Roose, Kevin (May 30, 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved June 3, 2023.
- ↑ "Artificial intelligence warning over human extinction – all you need to know". The Independent. May 31, 2023. Retrieved June 3, 2023.
- ↑ Lomas, Natasha (May 30, 2023). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved June 3, 2023.
- ↑ Castleman, Terry (May 31, 2023). "Prominent AI leaders warn of 'risk of extinction' from new technology". Los Angeles Times. Retrieved June 3, 2023.
- ↑ Hendrycks, Dan; Mazeika, Mantas; Woodside, Thomas (2023). "An Overview of Catastrophic AI Risks". arXiv:2306.12001 [cs.CY].
- ↑ Scharfenberg, David (July 6, 2023). "Dan Hendrycks from the Center for AI Safety hopes he can prevent a catastrophe". The Boston Globe. Retrieved July 9, 2023.
- ↑ Metz, Cade (July 27, 2023). "Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots". The New York Times. Retrieved July 27, 2023.
- ↑ "Universal and Transferable Attacks on Aligned Language Models". llm-attacks.org. Retrieved July 27, 2023.
- ↑ "Senator Wiener Introduces Legislation to Ensure Safe Development of Large-Scale Artificial Intelligence Systems and Support AI Innovation in California". Senator Scott Wiener. February 8, 2024. Retrieved June 28, 2024.
- ↑ Randles, Jonathan; Church, Steven (October 25, 2023). "FTX Is Probing $6.5 Million Paid to Leading Nonprofit Group on AI Safety". Bloomberg News.
- ↑ Ochsner, Evan (October 25, 2023). "FTX Moves to Subpoena AI Safety Group for Donation Information". Bloomberg Law.