White House Launches AI Safety Consortium
The National Group Will Develop Guidelines for AI Safety, Security and Red-Teaming

The White House is recruiting more than 200 artificial intelligence companies, stakeholders and a wide array of organizations across civil society for the first-ever U.S. consortium dedicated to AI safety.
The Artificial Intelligence Safety Institute Consortium will develop guidelines for red-teaming, safety evaluations and other security measures, according to an announcement published Thursday by the Department of Commerce. The new coalition, housed under the National Institute of Standards and Technology's AI Safety Institute, aims to serve as a liaison between AI developers and federal agencies. It will also develop collaborative research and security guidelines for advanced AI models.
Cybersecurity experts, lawmakers and legal scholars have previously raised concerns that AI developers lack comprehensive regulations, standards or even a set of best practices to follow when building advanced models that can pose significant risks to national security and public health (see: G7 Unveils Rules for AI Code of Conduct - Will They Stick?). The consortium will provide a "critical forum" for the public and private sectors to work together on developing AI safeguards and security standards, according to Bruce Reed, White House deputy chief of staff.
"To keep pace with AI, we have to move fast and make sure everyone - from the government to the private sector to academia - is rowing in the same direction," Reed said in a statement.
The inaugural cohort of AISIC members includes a long list of nonprofits, universities, research groups and major corporations such as Amazon, Adobe, Google, Microsoft, Meta, Salesforce and Visa. It also features prominent academic AI hubs, including the University at Buffalo's Institute for Artificial Intelligence and Data Science and the University of South Carolina's AI Institute, as well as leading AI developers such as ChatGPT maker OpenAI.
The White House issued an executive order in October invoking the Defense Production Act and requiring AI developers to share the results of red-teaming and other safety evaluations with the federal government (see: White House Issues Sweeping Executive Order to Secure AI). According to NIST, AISIC is the "largest collection of test and evaluation teams established to date" and will focus on "establishing the foundations for a new measurement science in AI safety."
AISIC will be tasked with establishing a space where AI stakeholders can share knowledge and data and with improving information sharing among members of the consortium. The group will also recommend measures to facilitate "the cooperative development and transfer of technology and data between and among consortium members."