What does the EU AI Act mean in practice?

The EU AI Act came into force on 1 August 2024, regulating what artificial intelligence can and cannot do in the EU.

Now, a team led by computer science professor Holger Hermanns from Saarland University and law professor Anne Lauber-Rönsberg from Dresden University of Technology has examined how the EU AI Act impacts the practical work of programmers.

“The Act shows that politicians have understood that AI can potentially pose a danger, especially when it impacts sensitive or health-related areas,” said Hermanns.

But how does the AI Act affect the work of the programmers who actually create AI software? According to Hermanns, the question almost all programmers are asking about the new law is whether it applies to the software they are building.

How the EU AI Act impacts high-risk systems

The EU AI Act aims to protect future users of a system from the possibility that an AI could treat them in a discriminatory, harmful or unjust manner.

AI software that does not operate in such sensitive areas is not subject to the extensive regulations that apply to high-risk systems.

Professor Hermanns explained: “If AI software is created with the aim of screening job applications and potentially filtering out applicants before a human HR professional is involved, then the developers of that software will be subject to the provisions of the AI Act as soon as the programme is marketed or becomes operational.

“However, an AI that simulates the reactions of opponents in a computer game can still be developed and marketed without the app developers having to worry about the AI Act.”

A strict set of rules must be followed

However, high-risk systems, which also include algorithmic credit-rating systems, medical software and programmes that manage access to educational institutions such as universities, must conform to a strict set of rules set out in the EU AI Act.

“Firstly, programmers must ensure that the training data is fit for purpose and that the AI trained from it can actually perform its task properly,” said Hermanns.

“These systems must also keep records so that it is possible to reconstruct which events occurred at what time, similar to the black box recorders fitted in planes,” added Sarah Sterz, who collaborated with Hermanns on the research.

The EU AI Act also requires software providers to document how the system functions, as in a conventional user manual. The provider must also make all information available to the deployer so that the system can be properly monitored during use to detect and correct errors.
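As a rough illustration of the kind of record-keeping Sterz describes, a high-risk system might append every decision-relevant event to a timestamped, append-only log so that the sequence of events can later be reconstructed. The short Python sketch below shows this idea only in outline; the DecisionLog class, the event names and the application-screening fields are illustrative assumptions, not something prescribed by the AI Act or taken from any specific system.

    # Minimal sketch of timestamped, append-only event logging for traceability.
    # Class, event and field names are illustrative assumptions, not requirements of the AI Act.
    import json
    from datetime import datetime, timezone

    class DecisionLog:
        """Appends one JSON line per event so the sequence of events can be
        reconstructed later, similar to a flight recorder."""

        def __init__(self, path: str):
            self.path = path

        def record(self, event: str, **details) -> None:
            entry = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "event": event,
                "details": details,
            }
            with open(self.path, "a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")

    # Example: logging the steps of a hypothetical automated application-screening decision.
    log = DecisionLog("decisions.jsonl")
    log.record("input_received", applicant_id="A-123")
    log.record("model_score", applicant_id="A-123", score=0.42, model_version="1.7.0")
    log.record("outcome", applicant_id="A-123", decision="forwarded_to_human_review")

In practice, such a log would feed the monitoring and error-correction duties mentioned above: because every entry carries a timestamp and the relevant details, a deployer can trace how a given decision came about.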

Moving forward with new restrictions

Hermanns summarised the impact of the AI Act in the following way: “The EU AI Act introduces a number of very significant constraints, but most software applications will barely be affected.”

Things that are already illegal today, such as using facial recognition algorithms to interpret emotions, will remain prohibited. Non-contentious AI systems, such as those used in video games or spam filters, will hardly be affected by the AI Act.

Hermanns and his colleagues take an overall favourable view of the AI Act – the first piece of legislation that provides a legal framework for the use of artificial intelligence across an entire continent.

“I see little risk of Europe being left behind by international developments as a result of the AI Act,” Hermanns concluded.
