Antonina Burlachenko, Head of Quality and Regulatory Consulting at Star, outlines the ways businesses will be affected by the EU AI Act and explains what businesses must do to prepare for this.
The EU’s Artificial Intelligence (AI) Act, as voted on by the European Parliament last month, has been described as the “toughest set of rules on AI in the world” as concerns over the potential risks of the technology are debated and calls for regulation grow louder.
As the first law on AI by a major regulator, the proposed legislation aims to provide developers and users with clear requirements and obligations regarding AI.
The EU AI Act focuses on managing risk and strengthening rules around AI use that align with EU values. This includes issues like data quality, transparency, human oversight, and accountability, but also looks to address more ethical questions around privacy and non-discrimination.
At its heart, the AI Act establishes a classification system that defines the level of risk an AI technology might pose – unacceptable risk, high risk, and low or minimal risk. For example, AI-based systems such as spam filters or video games are regarded as low risk, while real-time biometric identification systems in public spaces would be categorised as unacceptable.
Some of Europe’s businesses have already reacted to the proposals in an open letter to the European Parliament, claiming the regulation could harm competitiveness. More than 150 executives from major companies like Renault, Heineken, Airbus, and Siemens have criticised the EU AI Act for its potential to “jeopardise Europe’s competitiveness and technological sovereignty.”
The question is: how will this legislation affect businesses? Here are our observations.
Businesses must understand the boundaries for AI usage
With the emergence of classifications for how AI can be used, organisations need to be confident about the boundaries within which AI can and should be used, because the classification system may limit some use cases in the short term.
This particularly applies to the high-risk category, where AI technology is permitted but will have to adhere to strict regulations around machine learning model training, testing, data quality management, and an accountability framework that details human oversight.
Autonomous vehicles and medical devices fit into this category, so businesses will need to understand this, tailoring their AI product development strategy accordingly.
Businesses will also need to demonstrate that their AI systems adhere to the prescribed rules by conducting impact assessments and maintaining full transparency.
The EU AI Act proposes strict penalties for non-compliance
When it comes to compliance, the AI Act proposes steep non-compliance penalties. For companies, fines can reach up to €30m or 6% of global annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also result in fines.
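To make the scale of exposure concrete, the penalty cap works as the higher of two figures. The sketch below is a simplified illustration using the €30m/6% numbers cited above; the function name and the example turnover figure are hypothetical, and the final legislation may set different thresholds per infringement type.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Illustrative penalty cap: the higher of a fixed amount (EUR 30m
    in the proposal) or 6% of global annual turnover."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

# A hypothetical company with EUR 1bn global turnover: 6% (EUR 60m)
# exceeds the EUR 30m floor, so the cap is EUR 60m.
print(max_fine_eur(1_000_000_000))

# A smaller company with EUR 100m turnover: 6% is only EUR 6m,
# so the EUR 30m fixed cap applies instead.
print(max_fine_eur(100_000_000))
```

The "whichever is higher" structure means the fixed amount acts as a floor for smaller companies, while large multinationals face exposure proportional to turnover.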
In addition, compliance with the new regulations will become a prerequisite for entry to the European market, and any organisations wanting to trade or conduct business in the European Union will need to adhere to the regulatory requirements.
While many businesses will jump at the new product opportunities this presents, developing AI in a controlled, traceable, and compliant manner will be a key challenge in getting them off the ground.
This means businesses will need to start allocating the appropriate budgets, time, and resources to ensure they meet the regulatory requirements. This might have a negative short-term impact on levels of innovation and new product development, so working closely with a partner that has established regulatory expertise will help mitigate this.
An awareness of the shortcomings in certain generative models
We already know that human oversight, privacy, and non-discrimination will become even more top-of-mind as these regulations come into effect. AI products and solutions that address these issues will be fertile ground for innovation.
Beyond regulation, we are still seeing shortcomings in generative models around the ethical issues of diversity, representation, and inclusivity. These models tend to reinforce the most dominant view without making a judgement on how fair or correct it is.
Organisations need to be aware of these shortcomings and avoid echo chambers caused by AI, where people are exposed only to information or beliefs that align with their own – a dynamic often associated with mis/disinformation and extremist views. It will be interesting to see how these concerns are addressed in the future.
What will the future of AI look like when the Act comes into effect?
The EU AI Act could be adopted as early as next year with a two-year transition period, meaning it could come into force in 2026.
Whether this is enough time to ensure compliance across all stakeholders, from regulators to adopters, remains to be seen. Either way, any business looking at developing, innovating, or working with AI technologies should be studying the details of the Act and aligning its strategy now.