The US Department of Commerce and US Department of State have announced the inaugural meeting of the International Network of AI Safety Institutes, scheduled for November 20 to 21, 2024, in San Francisco.
This global network aims to foster international cooperation on artificial intelligence (AI) safety.
First introduced by US Secretary of Commerce Gina Raimondo at the AI Seoul Summit in May of this year, the network is designed to unite representatives from each member country’s AI safety institute, or the equivalent scientific office.
The meeting’s objectives include establishing collaborative priorities and advancing global knowledge on AI safety.
“AI is the defining technology of our generation. With AI evolving at a rapid pace, we at the Department of Commerce, and across the Biden-Harris Administration, are pulling every lever. That includes close, thoughtful coordination with our allies and like-minded partners,” said Raimondo in a Wednesday (September 18) release. “We want the rules of the road on AI to be underpinned by safety, security, and trust, which is why this convening is so important.”
The International Network of AI Safety Institutes includes 10 founding members: Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom and the US.
The San Francisco meeting will feature technical experts and representatives from member countries, with discussions on priority areas for the network and the start of detailed work on joint AI safety projects.
The meeting is intended to lay the groundwork for collaboration leading up to the AI Action Summit, which is scheduled to take place in Paris in February 2025.
US Secretary of State Antony Blinken emphasized the importance of this initiative as AI adoption accelerates on a global scale. The push for safety comes amid ongoing challenges in US legislative efforts to regulate AI technology.
In response to the rapid advancement and potential risks associated with AI, the US Department of Commerce has proposed new reporting requirements for advanced AI developers and cloud computing providers. These requirements are aimed at ensuring these new technologies are safe and resilient against cyber threats.
The initiative also aligns with broader US policy goals set by President Joe Biden.
In October 2023, he signed an executive order requiring AI developers to report safety test results for systems posing risks to national security, public health or safety before their public release.
As AI continues to evolve and integrate into society, the establishment of the International Network of AI Safety Institutes represents a proactive step toward addressing the complexities associated with this transformative technology.
Securities Disclosure: I, Giann Liguid, hold no direct investment interest in any company mentioned in this article.