U.S. Secretary of Commerce Gina Raimondo also revealed plans for global cooperation among AI safety institutes.
During the AI Seoul Summit, U.S. Secretary of Commerce Gina Raimondo released a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI). At President Biden’s direction, the National Institute of Standards and Technology (NIST) within the Department of Commerce launched the AISI, building on NIST’s long-standing work on AI.
The Strategic Vision
The strategic vision outlines steps the AISI plans to take to advance the science of AI safety and facilitate safe and responsible AI innovation. Since its establishment, the AISI has built an executive leadership team that brings together some of the brightest minds in academia, industry, and government.
The strategic vision describes the AISI’s philosophy, mission, and strategic goals. Rooted in two core principles—first, that beneficial AI depends on AI safety; and second, that AI safety depends on science—the AISI aims to address key challenges, including the lack of standardized metrics for frontier AI, underdeveloped testing and validation methods, and limited national and global coordination on AI safety issues.
The AISI will focus on three key goals:
- Advance the science of AI safety;
- Articulate, demonstrate, and disseminate the practices of AI safety; and
- Support institutions, communities, and coordination around AI safety.
To achieve these goals, the AISI plans, among other activities, to test advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations, risk mitigations, and other topics; and perform and coordinate technical research. The AISI will work closely with diverse members of the AI industry, civil society, and international partners to achieve these objectives.
An International Network of AI Safety Institutes
Secretary Raimondo also announced that the Department and the AISI will help launch a global scientific network for AI safety by engaging with AI Safety Institutes and other government-backed scientific offices focused on AI safety and committed to international cooperation. This network aims to promote safe, secure, and trustworthy artificial intelligence systems for people around the world by enabling closer collaboration on strategic research and public deliverables.