AI Futures Project

Formation: October 2024
Founder: Daniel Kokotajlo
Founded at: Berkeley, California
Type: 501(c)(3) nonprofit research organization
Registration no.: EIN 99-4320292
Focus: Forecasting of artificial intelligence
Executive Director: Daniel Kokotajlo
COO: Jonas Vollmer
Website: ai-futures.org
Formerly called: Artificial Intelligence Forecasting Inc
The AI Futures Project is a nonprofit research organization based in the United States that specializes in forecasting the development and societal impact of advanced artificial intelligence. The organization is best known for its 2025 scenario forecast, AI 2027, which examines the potential near-term emergence of artificial general intelligence (AGI) and its possible global consequences.

History

The AI Futures Project was founded in October 2024 by Daniel Kokotajlo, a former researcher in the governance division of OpenAI.[1] Kokotajlo had resigned from OpenAI in April 2024, expressing concerns that the company prioritized rapid product development over AI safety and was advancing without sufficient safeguards. He founded the nonprofit to conduct independent forecasting and policy research.[2]

The organization is registered as a 501(c)(3) nonprofit in the United States and is funded through donations. It operates with a small research staff and a network of advisors drawn from fields including AI policy, forecasting, and risk analysis.[3][non-primary source needed]

Activities

The mission of the AI Futures Project is to develop detailed scenario forecasts of the trajectory of advanced AI systems to inform policymakers, researchers, and the public.

In addition to written reports, the group has conducted tabletop exercises and workshops based on its scenarios, involving participants from academia, technology, and public policy.[4]

AI 2027

In April 2025, the AI Futures Project released AI 2027, a detailed scenario forecast describing possible developments in AI between 2025 and 2027.[5] The report was authored by Daniel Kokotajlo along with Eli Lifland, Thomas Larsen, and Romeo Dean, with editing assistance from blogger Scott Alexander.[6]

The scenario depicts very rapid progress in AI capabilities, including the development of autonomous AI systems capable of recursive self-improvement. AI 2027 presents two alternative endings: one in which international competition over advanced AI leads to catastrophic loss of human control, and another in which coordinated global action slows down development and averts imminent disaster. The authors emphasize that the narratives are hypothetical and intended as planning tools rather than literal forecasts.[7]

Reception

AI 2027 attracted attention from technology journalists and AI researchers.[7][8] Some commentators praised the report for its level of detail and its usefulness as a strategic planning exercise, while others criticized the scenario as implausibly aggressive in its timelines.[9][10]

The report was cited in policy discussions about AI governance. U.S. Vice President J. D. Vance reportedly read AI 2027 and referenced its warnings in conversations about international AI coordination.[11][12][7]

More recent reporting noted that the authors of AI 2027 had publicly revised some of their timelines.[11][12] According to Kokotajlo, developments since the report's original publication suggested a slower path toward fully autonomous AI research systems than initially forecast.[13]

References

  1. Kokotajlo, Daniel; Ball, Dean (2024-10-15). "4 Ways to Advance Transparency in Frontier AI Development". Time.
  2. "OpenAI Insiders Warn of a 'Reckless' Race for Dominance". The New York Times. 2024-06-04. Retrieved 2025-04-19.
  3. "AI Futures Project". AI Futures Project.
  4. "Why this former OpenAI researcher thinks it's time to start gaming out AI's future". Fortune. 2025-06-26.
  5. "AI 2027". AI 2027.
  6. "About AI 2027". AI 2027.
  7. Piper, Kelsey (2025-04-03). "One chilling forecast of our AI future is getting wide attention. How realistic is it?". Vox.
  8. Roose, Kevin (2025-04-03). "This A.I. Forecast Predicts Storms Ahead". The New York Times. Retrieved 2025-05-21.
  9. Chau, Brian (2025-04-10). "Apocalyptic Predictions About AI Aren't Based in Reality". City Journal.
  10. Wong, Matteo (2025-08-21). "The AI Doomers Are Getting Doomier". The Atlantic.
  11. Sheridan, Leila (2026-01-02). "AI expert predicted AI would end humanity in 2027. Now he's changing his timeline". Inc.
  12. Down, Aisha (2026-01-06). "Leading AI expert delays timeline for possible destruction of humanity". The Guardian.
  13. Kokotajlo, Daniel (2025-12-14). "AI Futures Model: Dec 2025 Update". AI Futures Project Blog.