UK launches the world’s first AI Safety Institute

The UK Government has announced the country will host the world’s first AI Safety Institute – a global hub for assessing the dangers of emerging AI technologies.

Announced at the AI Safety Summit at Bletchley Park, the Institute builds on four months of work by the Frontier AI Taskforce – the first team in the G7 able to evaluate the risks of frontier AI models. The Taskforce will now transition into the AI Safety Institute, with Ian Hogarth remaining as Chair.

The AI Safety Institute will also be advised by the Taskforce’s External Advisory Board, which consists of industry leaders in the national security and computer science fields.

The AI Safety Institute has drawn support from leading nations, AI companies, and research organisations, including Japan, Canada, OpenAI, DeepMind, the Alan Turing Institute, Imperial College London, TechUK, and the Startup Coalition.

The UK has also established two partnerships – one with the US AI Safety Institute and one with the Government of Singapore – to collaborate on AI safety testing.

This deep international collaboration will position the UK at the forefront of AI safety, ensuring the benefits of AI can be reaped nationwide.

Prime Minister Rishi Sunak commented: “Our AI Safety Institute will act as a global hub on AI safety, leading vital research into the capabilities and risks of this fast-moving technology.

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people. This is the right approach for the long-term interests of the UK.”

How will the AI Safety Institute de-risk AI?

The Institute will test new frontier AI models before they are released, in order to identify and address potentially dangerous capabilities.

The new global hub will work closely with the Alan Turing Institute on the full spectrum of risks, from social harms such as bias and misinformation to extreme risks such as humanity losing control of AI entirely.

This work will be essential: numerous powerful AI models are expected to be released over the next year, with capabilities that are not yet fully understood.

The Institute’s first job will be to put in place systems and processes to test these AI models, including open-source models, before launch.

The Institute will inform UK and international policy and provide technical tools for governance and regulation.

This includes analysing the data used to train these systems for bias and ensuring that AI safety is not left to developers' self-regulation alone.

Ian Hogarth added: “The support of international governments and companies is an important validation of the work we’ll be carrying out to advance AI safety and ensure its responsible development.

“Through the AI Safety Institute, we will play an important role in rallying the global community to address the challenges of this fast-moving technology.”

Growing UK investment in AI technology

Researchers at the Institute will have access to industry-leading computing power through the £300m AI Research Resource – a network of some of Europe's largest supercomputers, including Bristol's Isambard-AI and the Dawn supercomputer in Cambridge.

These supercomputers will be pivotal to the Institute's programme of research into frontier AI model safety.
