The United States will establish an AI safety institute to assess both known and emerging risks posed by advanced artificial intelligence models. Secretary of Commerce Gina Raimondo announced the plan during her keynote address at the AI Safety Summit in Britain.
Raimondo stressed that the effort must be collective, with academia and industry leaders joining a consortium dedicated to the mission. “We cannot embark on this critical mission in isolation; the active involvement of the private sector is imperative,” she said.
Raimondo also committed to forging a formal partnership between the proposed U.S. institute and the United Kingdom Safety Institute, a collaboration intended to foster global cooperation on AI safety.
The initiative will fall under the National Institute of Standards and Technology (NIST), positioning it at the forefront of the U.S. government’s AI safety efforts, with a particular focus on evaluating advanced AI models.
The institute is expected to drive the development of standards for the safety, security, and testing of AI models, as well as authentication standards for AI-generated content. It will also provide controlled testing environments where researchers can examine emerging AI risks and address known harms.
In a related development, President Joe Biden recently signed an executive order on artificial intelligence. Invoking the Defense Production Act, the order requires developers of AI systems that could affect U.S. national security, economic stability, public health, or safety to share the results of their safety evaluations with the U.S. government, with the aim of ensuring responsible AI deployment.
The executive order also directs federal agencies to set rigorous standards for AI testing and to address associated risks in the chemical, biological, radiological, nuclear, and cybersecurity domains.
Together, these developments mark a significant step toward strengthening AI safety and upholding ethical standards in the fast-evolving field of artificial intelligence.