AI incident reporting: Addressing a gap in UK AI regulation

A new report by the Centre for Long-Term Resilience (CLTR) says that the UK needs an incident reporting system to log the misuse and malfunctions of artificial intelligence (AI).

The CLTR recommends that the government create an incident reporting system for logging AI failures in public services and consider building a hub where all AI-related issues can be collated.

It says such a system is vital if the technology is to be used successfully.

AI incidents are on the rise

AI has a history of failing unexpectedly: news outlets have recorded over 10,000 safety incidents involving deployed AI systems since 2014.

With greater integration of AI into society, incidents are likely to increase in number and scale of impact.

In other safety-critical industries, such as aviation and medicine, incidents like these are collected and investigated by authorities in a process known as ‘incident reporting’.

The CLTR believes that a well-functioning incident reporting regime is critical for the regulation of AI, as it provides fast insights into how AI is going wrong.

However, there is a concerning gap in the UK’s regulatory plans.

The urgent need for incident reporting

Incident reporting is a proven safety mechanism and would support the UK Government’s ‘context-based approach’ to AI regulation by enabling it to:

  • Monitor how AI is causing safety risks in real-world contexts, providing a feedback loop that can allow course correction in how AI is regulated and deployed.
  • Co-ordinate responses to major incidents where speed is critical, followed by investigations into root causes to generate cross-sectoral learnings.
  • Identify early warnings of larger-scale harms that could arise in future, for use by the AI Safety Institute and Central AI Risk Function in risk assessments.

Recommended next steps for the UK Government

The CLTR recommends three immediate next steps on incident reporting:

  • Create a system for the UK Government to report incidents related to its own use of AI in public services: These incidents could be fed directly to a government body and possibly shared with the public for transparency and accountability.
  • Commission UK regulators and consult experts to confirm where there are the most concerning gaps: This is essential to ensure effective coverage of priority incidents and to understand the stakeholders and incentives required to establish a functional regime.
  • Build capacity within the Department for Science, Innovation and Technology (DSIT) to monitor, investigate and respond to incidents, possibly including the creation of a pilot incident database: This should focus initially on the most urgent gap identified by stakeholders, but could eventually collect all reports from UK regulators.
