EU enforces landmark Artificial Intelligence Act

Today marks a significant milestone in the realm of AI regulation as the European Artificial Intelligence Act officially comes into effect.

The world’s first comprehensive regulation of its kind, the Artificial Intelligence Act is designed to ensure that AI developed and utilised in the EU is trustworthy and safeguards fundamental human rights.

The AI Act aims to foster a harmonised internal market for AI technologies while encouraging innovation and investment.

Thierry Breton, Commissioner for Internal Market, commented: “Today marks a major milestone in Europe’s leadership in trustworthy AI.

“With the entry into force of the AI Act, European democracy has delivered an effective, proportionate and world-first framework for AI, tackling risks and serving as a launchpad for European AI start-ups.”

Defining AI through a safety and risk-based lens

The AI Act adopts a forward-thinking approach to defining AI, categorising systems based on their risk levels:

Minimal risk AI

Most AI systems, including AI-enabled recommender systems and spam filters, fall into the minimal risk category.

These systems, deemed to pose a negligible threat to citizens’ rights and safety, face no obligations under the AI Act.

However, companies can choose to voluntarily adopt additional codes of conduct to enhance transparency and accountability.

Specific transparency risk AI

Certain AI systems, such as chatbots, must clearly inform users that they are interacting with a machine.

AI-generated content, including deep fakes, must be appropriately labelled, and users need to be aware when biometric categorisation or emotion recognition systems are in use.

Providers are required to ensure that synthetic content is marked in a machine-readable format, making it detectable as artificially generated or manipulated.

High-risk AI

AI systems identified as high-risk are subject to stringent requirements. These include risk mitigation measures, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and robust cybersecurity protocols.

Examples of high-risk applications include AI systems used in recruitment, in assessing loan applications, or in controlling autonomous robots. Regulatory sandboxes will support responsible innovation and the development of compliant AI systems.

Unacceptable risk AI

AI systems posing a clear threat to fundamental human rights will be banned. This includes AI applications that manipulate human behaviour, such as voice-assisted toys encouraging dangerous actions by minors, social scoring systems, and certain forms of predictive policing.

Additionally, some biometric systems, such as emotion recognition in the workplace or real-time remote biometric identification for law enforcement in public spaces, will be prohibited, subject to narrow exceptions.

Rules for general-purpose models

The Artificial Intelligence Act also introduces regulations for general-purpose AI models, which are highly versatile and capable of performing a wide range of tasks, such as generating human-like text.

These models will be subject to transparency requirements along the value chain to address potential systemic risks.

Implementation and enforcement

EU Member States have until 2 August 2025 to designate national competent authorities to oversee the application of AI rules and conduct market surveillance.

The Commission’s AI Office will serve as the primary implementation body at the EU level and enforce regulations for general-purpose AI models.

Three advisory bodies will support the implementation: the European Artificial Intelligence Board, which ensures uniform application across Member States; a scientific panel that provides technical advice and risk alerts; and an advisory forum composed of diverse stakeholders offering guidance.

Consequences for non-compliance

Companies that fail to comply with the Artificial Intelligence Act will face substantial fines.

Violations involving banned AI applications can result in fines of up to 7% of global annual turnover, breaches of other obligations can attract fines of up to 3%, and supplying incorrect information can lead to fines of up to 1.5%.

As the Artificial Intelligence Act sets a global precedent, the world will be watching how this pioneering regulation shapes the future of AI in Europe and beyond.
