Supporting House bills to boost AI safety

Chris Lehane

Chief Global Affairs Officer @ OpenAI

Today we’re announcing our support for three bipartisan U.S. House bills that will help advance our goal of building AI that is safe and broadly beneficial:

1) the AI Advancement and Reliability Act, which would back a center focused on ensuring frontier AI models are developed and deployed safely;
2) the CREATE AI Act, which would formalize a new effort to democratize access to AI research resources; and
3) the Workforce for AI Trust Act, which would strengthen the AI workforce and teach the next generation about AI tools.

These bills are important steps toward maximizing AI’s benefits while minimizing its potential risks, and they align with our support for similar legislation in the U.S. Senate, including the Future of AI Innovation Act and the NSF AI Education Act. The federal government has an important role to play in ensuring the safety of frontier AI models, and we want to thank Rep. Don Beyer (VA-08), Rep. Anna G. Eshoo (CA-16), Rep. Ted Lieu (CA-36), Rep. Zoe Lofgren (CA-18), Rep. Frank Lucas (OK-03), Rep. Michael McCaul (TX-10), and Rep. Jay Obernolte (CA-23) for their leadership on these issues.

We’re also taking other actions to ensure that our models are safe and reliable, including:

-> Partnering with the U.S. AI Safety Institute. Through our agreement with the USAISI, we’ll share access to major new models before and after their public release to inform AI safety research, testing, and evaluation. We’ve long supported the USAISI’s mission, and we’re excited that our collaboration will help support the development of best practices and standards for frontier AI model safety.

-> Adhering to our Preparedness Framework. To reiterate what we have previously shared: we will not release AI systems that pose a “High” or “Critical” level of risk unless our mitigations can lower those risks to “Medium,” defined in our Preparedness Framework as a model that does not introduce new types of threats, enable non-experts, or automate previously impossible processes. The Preparedness Framework lets us build and widely share the benefits of increasingly capable AI while helping us detect, and protect against, a specific set of risks as early as possible if they arise.

-> Following through on our voluntary commitments. Last year we signed onto a set of voluntary commitments from the White House to promote the safe use and development of AI. These commitments have guided our work over the past year, and we continue to work alongside governments, civil society, and other industry leaders to advance AI governance.

In keeping with our belief in AI built with democratic values, we appreciate the input we receive from the OpenAI community, where we solicit and welcome contributions to the conversations around these important issues. We’ll continue to work with AI stakeholders to push for policies that help make the technology safe and beneficial for as many people as possible.

Steven Cobb

Division Chief / Government Services Division

9mo

Chris, I can’t tell you how much I use this product every day for work. It’s a great tool, and I appreciate what you are all doing to make sure we have protections in place for this for years to come.

John Adams

Killarney Youthreach Co-ordinator at Kerry Education and Training Board

9mo

Best wishes Chris.

PHILLIP CHANG

Redevelopment from commercial property to multifamily home property

9mo

Congrats Chris!

Maura Tuohy Di Muro

Marketing Executive, Global Speaker, Advisor, Guest Lecturer

9mo

Great to see OpenAI partnering with government to lay the foundation for safe and trusted AI. We’ll need more safeguards and investment from good actors to realize the benefits of AI without damaging risks to our society.

Ron Fournier

Recovered White House Correspondent and media executive. Now: Consultant

9mo

Into the future …

