With the race to build smarter-than-human AI intensifying unchecked, humanity is almost certain to lose control. In "Keep The Future Human", FLI Executive Director Anthony Aguirre explains why we must close the 'gates' to AGI - and instead develop beneficial, safe Tool AI designed to serve us, not replace us. We're at a crossroads: continue down this dangerous path, or choose a future where AI enhances human potential rather than threatening it. 🔗 Read Anthony's full "Keep The Future Human" essay - or explore the interactive summary - at the link in the comments:
Future of Life Institute (FLI)
Civic and Social Organizations
Campbell, California · 20,569 followers
Independent global non-profit working to steer transformative technologies to benefit humanity.
About us
The Future of Life Institute (FLI) is an independent nonprofit that works to reduce extreme, large-scale risks from transformative technologies and to steer the development and use of these technologies to benefit life. The Institute's work primarily consists of grantmaking, educational outreach, and policy advocacy within the U.S. government, European Union institutions, and the United Nations, but also includes running conferences and contests. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
- Website: http://futureoflife.org
- Industry: Civic and Social Organizations
- Company size: 11-50 employees
- Headquarters: Campbell, California
- Type: Nonprofit
- Specialties: artificial intelligence, biotechnology, European Union, nuclear, climate change, technology policy, and grantmaking
Locations
- Primary: 300 Orchard City Dr, Campbell, California 95008, US
- Avenue des Arts / Kunstlaan 44, Brussels, 1040, BE
Employees at Future of Life Institute (FLI)
- David Nicholson: Director, Future of Life Award @ Future of Life Institute | Harvard University ALM
- Andrea Berman: Philanthropy - Partnerships - Program Development - Strategy
- Mark Brakel: AI Policy Director | Board Member
- Risto Uuk: Head of EU Policy and Research @ Future of Life Institute | PhD Researcher @ KU Leuven | Systemic risks from general-purpose AI
Updates
🚨 Governor Newsom has signed SB 53 into law, a landmark moment for the AI safety movement. "Lawmakers have finally begun establishing basic protections around advanced AI systems - the same safeguards that exist for every other industry," said Michael Kleinman, our Head of US Policy. The data is compelling: 82% of Republicans support limits on AI capabilities, and over 70% of voters want government safety standards. This summer, the Senate voted 99-1 against blocking state action. There is more work to do, but momentum is growing. Until we have strong federal standards, states will continue stepping up to protect our children, communities, and jobs. Full statement in the comments:
🇺🇳 At the UN General Assembly last week, FLI's Executive Director Anthony Aguirre joined Foreign Policy's Tech and AI Forum for a talk on "Safeguarding the Future: Coordinated Approaches to Global AI Policy". 📺 You can watch the full talk here, starting from 1:15:00: https://lnkd.in/gDWAmmRX 📷 credit: Jonathan Heisler
🗣️ 🇺🇦 "We need global rules – now – for how AI can be used in weapons. And this is just as urgent as preventing the spread of nuclear weapons." 🚨 "It's only a matter of time, not much, before drones are fighting drones, attacking critical infrastructure and targeting people all by themselves, fully autonomous and no human involved, except the few who control AI systems." 📢 "We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence." Ukrainian President Zelenskyy last week at the UN General Assembly: https://lnkd.in/ggWN_g9e
🤖 "If you have an agent that has very broad goals and a very open-ended autonomy, you're gonna lose a lot of meaningful oversight of that system, most likely. So, that's the biggest shift I think between a Tool AI and this more agentic path that we're on right now. I think you could have a Tool AI that's still an agent, but it would have a very bounded autonomy." -📻 Foresight Institute Existential Hope Program Director Beatrice Erkers on the newest FLI episode. 🔗 Tune in now on your favourite podcast player, or at the link in the comments!
🤝 👏 A wonderful collaboration on Capitol Hill last week! Thanks Federation of American Scientists. 👇
Worried about someone building AM, HAL, Skynet, or all of the above? Worry no longer. Last week, our team hosted a much-needed primer on AGI and Global Risk on Capitol Hill in partnership with the Future of Life Institute (FLI). Opening remarks from Congressman Bill Foster, Congressman Ted Lieu, and John Bailey helped set the stage for a powerful day of conversations, bringing together voices from government, academia, and industry to explore what AGI means for national and global security. Highlights include:
- Malo Bourgon, Joel Predd, and Oliver Stephenson focusing on AGI capabilities, not definitions.
- Mark Beall, Alexa Courtney, Brodi Kotila, JD, PhD, and Jon Wolfsthal on what an AGI grand strategy could look like.
- Jim Mitre, Jessica Brandt, Matt Sheehan, and Hamza Chaudhry on US-China dynamics and global risks.
We're grateful to all the speakers, participants, and partners who helped shape a thoughtful, strategic discussion. These conversations are just the beginning.
AI can deliver incredible benefits to humanity - but without guardrails, we're heading towards a dangerous future of mounting AI-driven harms. That's why we're joining the Global Call for AI Red Lines, along with 70+ top organizations and 200+ well-known leaders and experts. Signatories include former heads of state Mary Robinson, Juan Manuel Santos, and Enrico Letta; Nobel laureates including Joseph Stiglitz, Maria Ressa, and Daron Acemoglu; AI pioneers and Turing Award winners including Yoshua Bengio, Geoffrey Hinton, and Andrew Chi-Chih Yao; former President of the United Nations General Assembly Csaba Kőrösi; experts from leading AI companies, including Ian Goodfellow; and other influential voices like Yuval Noah Harari and Stephen Fry. Our call is clear: governments must reach agreement on red lines for artificial intelligence by the end of 2026, preventing the most severe risks to humanity and global stability. We can't afford to wait. Help us build awareness about the need for global #AIRedLines by sharing this post:
🆕 "A lot of elected officials are starting to realize the danger. A lot of common people are starting to realize the danger. But no one really knows that everyone else sees the issue... We're in this weird state where a lot of people are alarmed, but no one wants to look alarmist." 📻 "If Anyone Builds It, Everyone Dies" co-author and Machine Intelligence Research Institute President Nate Soares on the newest FLI Podcast episode. 🔗 Listen to it now at the link in the comments:
🆕📊 A new survey from the Institute for Family Studies finds that 90% of Americans want safeguards on AI, especially to protect children. Highlights:
➡️ Americans overwhelmingly agree that technology companies should be prohibited from deploying AI chatbots that engage in sexual conversations with minors.
➡️ Across all age groups, income brackets, and both parties, respondents agree that Congress should prioritize protecting children over keeping states from regulating AI companies.
➡️ 90% of Americans agree that families should be granted the right to sue an AI company "if its products contributed to harms such as suicide, sexual exploitation, psychosis, or addiction in their child."
🔗 Read more of the results at the link in the comments:
Tomorrow in DC, FLI's Hamza Chaudhry and the Federation of American Scientists will host a gathering on artificial general intelligence (AGI), its risks, and how they may be managed. Learn more: https://luma.com/hro96dyl
What is artificial general intelligence, and what could its implications be for global risks? Join us on Thursday, September 18, to hear from experts on the large-scale risks that AGI may pose and to chart effective strategies for managing those risks. Jointly hosted with Future of Life Institute (FLI), this gathering will explore the future of AGI, national security implications, and U.S.-China dynamics in a rapidly evolving technological landscape. Space is extremely limited, and RSVPs are subject to approval. 📍 https://luma.com/hro96dyl