The UK will host a conference in San Francisco for discussions with AI developers on how they can put into practice commitments made at the AI Seoul Summit.
Held on 21 and 22 November, the event will feature workshops and discussions focused on AI safety ahead of France hosting the AI Action Summit in February 2025.
Earlier this year, 16 companies from around the world, including those from the US, EU, Republic of Korea, China, and the UAE, agreed to publish their latest AI safety frameworks ahead of the next Summit.
These frameworks will lay out their plans to tackle the most severe potential AI risks, including the risk of bad actors misusing the technology.
As part of these commitments, companies also agreed not to develop or deploy a model at all if its potential risks cannot be sufficiently addressed.
Sharing AI safety frameworks
Through a targeted day of talks between signatory companies and researchers, the event will give AI companies a moment to take stock and to share the ideas and insights shaping the development of their AI safety frameworks.
Science, Innovation and Technology Secretary Peter Kyle explained: “The conference is a clear sign of the UK’s ambition to further the shared global mission to design practical and effective approaches to AI safety.
“We’re just months away from the AI Action Summit, and the discussions in San Francisco will give companies a clear focus on where and how they can bolster their AI safety plans building on the commitments they made at the last Summit in Seoul.”
From today, attendees are also urged to share their views on potential areas of discussion at November’s conference, including existing proposals for developer safety plans, the future of AI model safety evaluations, transparency, and methods for setting out different risk thresholds.
A network of AI institutes
Discussions, co-hosted with the Centre for the Governance of AI and led by the UK’s AI Safety Institute (AISI), will help build a deeper understanding of how the Frontier AI Safety Commitments are being implemented.
The UK’s AI Safety Institute is the world’s first state-backed body dedicated to AI safety. The UK has continued to play a global leadership role in developing the growing international network of AI Safety Institutes, including its landmark agreement with the US earlier this year.
The conference has been designed as a forum for attendees to exchange ideas on best practices in implementing the commitments.
This ensures a transparent and collaborative approach for developers as they refine their AI safety frameworks ahead of the AI Action Summit.
It follows the US government’s announcement yesterday of the first meeting of the International Network of AI Safety Institutes, which will take place on 20-21 November 2024 in San Francisco.
The UK launched the world’s first AI Safety Institute at Bletchley Park last November, and since then, nations around the world have raced to establish their own AI safety testing bodies.
The US-hosted convening will bring together technical experts on artificial intelligence from each country’s AI safety institute or equivalent government-backed scientific office to align on the network’s priority work areas and begin advancing global collaboration and knowledge sharing on AI safety.