What is the U.S. AI Safety Institute? USAISI Explained as OpenAI, Amazon, Google Join Consortium

The U.S. AI Safety Institute (USAISI) launched last month, bringing together hundreds of the biggest names in AI and the wider tech space in the name of AI regulation and ethics.

But what is the USAISI? Who’s leading it, and which companies have joined in support?

In this blog post, we’ll walk you through everything you need to know about the U.S. AI Safety Institute (USAISI).

What is the U.S. AI Safety Institute (USAISI)?

The U.S. AI Safety Institute (USAISI) is an initiative by the United States federal government to address AI safety and trust. 

Led by the U.S. National Institute of Standards and Technology (NIST), it aims to work on priority actions outlined in President Biden's AI Executive Order. Signed in October 2023, Biden's EO calls for "developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

USAISI, which launched in February, also houses a consortium of major tech companies, banks, academic institutions and government agencies to help meet the priorities of Biden's EO.

“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem,” said Secretary of Commerce Gina Raimondo during the announcement of USAISI. “That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

Who is in the U.S. AI Safety Institute (USAISI)?

More than 200 companies have joined USAISI so far, including OpenAI, Google, Microsoft, Meta, Apple, Amazon, Nvidia, Bank of America, IBM and Mastercard. 

The consortium represents the largest collection of AI test and evaluation teams assembled to date, and will focus on laying the foundations for a "new measurement science in AI safety".

OpenAI is among the tech giants onboard with the USAISI


“Thanks to President Biden’s landmark executive order, the AI Safety Consortium provides a critical forum for all of us to work together to seize the promise and manage the risks posed by AI,” said Bruce Reed, White House deputy chief of staff in the announcement.

Earlier this week, USAISI also announced members of its leadership team: Elizabeth Kelly was named the institute's inaugural director, and Elham Tabassi was tapped to serve as its chief technology officer.

Kelly will provide executive leadership, management and oversight and will coordinate with other AI policy initiatives across government. Tabassi will lead the institute’s key technical programs and will help shape efforts to conduct research and evaluations of AI models and to develop guidance.

Why is AI regulation so crucial?

“Headlines and media buzz – although important for awareness – are dangerous in that they can easily whip up fear,” says Waseem Ali, CEO of Rockborne.

“Regulation will aim to ensure that AI is developed, deployed, and used in an ethical manner, with appropriate consideration given to privacy and data protection. 

“It will also help to set up parameters to prevent the misuse of AI that could lead to discrimination and other unethical practices. With AI’s transformative potential extending to the wider economy and the job market, regulation will serve to mitigate pitfalls and any negative impact on employment, ensuring fair labour practices.

“Finding a way to reap the benefits, whilst simultaneously building data governance and safety policies into the very structure of the movement, should be the goal, rather than implementing them as an afterthought.

“It requires collaboration between policymakers, industry experts and researchers to develop agile, adaptive, and forward-thinking regulatory frameworks that keep pace with the rapidly evolving landscape. We should be sharing experiences to devise frameworks that work across the board and instead of waiting on official legislation to catch up, data leaders should be taking matters into their own hands and implementing interim regulation.

“For this to work, it is important to open up easy lines of communication between industry leaders who too often operate in silos, only sharing information after an incident such as a data breach.

“Put simply, the universal impact of AI requires a universal response.”

