The Artificial Intelligence Act: Getting it right for EU companies

Nathalie Laneret, AI Issue Leader for The European Tech Alliance (EUTA), outlines the key points to consider to ensure that the EU’s Artificial Intelligence Act is delivered effectively and benefits companies as much as possible.

The EU Artificial Intelligence Act (AIA) has recently entered the trilogue phase, in which the co-legislators – the European Parliament and the Council of the European Union – negotiate the final text of the AIA with the European Commission facilitating. More than two years after the launch of the EU AI strategy, it is crucial that the co-legislators strike the right balance to provide EU companies with opportunities for innovation.

EU companies are all already, or soon will be, ‘AI-driven’ in some way, either because they develop Artificial Intelligence (AI) systems (AI components and AI models, whether built internally or by third parties) or because they use AI systems to create innovative products, serve their clients better, or improve their processes. AI technologies offer huge opportunities for the EU ecosystem to compete with global players and foster EU competitiveness. It is therefore of the utmost importance that lawmakers do not jeopardise this potential by failing to calibrate the rules properly to the specifics of AI technology and the variety of its uses.

The European Commission’s legislative initiative was originally based on the premise of promoting trustworthy AI aligned with EU fundamental values while limiting legislation to situations where the risks to individuals and society are highest. EU companies, including the European Tech Alliance, have always supported this approach. It seems, however, that the legislative process risks drifting from this minimalist and pragmatic objective by pivoting to a broader approach that could ultimately have a negative impact on EU businesses.

The success of the AI Act as an innovation enabler for EU companies rests on several essential pillars: providing EU companies with legal certainty and a safe space to innovate; maintaining a narrow, use-case- and risk-based approach to regulating AI; providing a wide array of innovation tools; and setting the conditions for effective enforcement of the AIA.

Making legal certainty a springboard for innovation

As AI technologies are borderless and the AI supply chain extends beyond EU borders, it is important that the legal definition of AI aligns with a definition widely accepted by like-minded countries, such as the OECD definition. A uniform definition of AI provides legal certainty for AI developers, AI users, and EU research labs, which can then innovate in confidence and, where necessary, enter into international discussions and partnerships on the basis of a shared taxonomy.

In addition, while it is necessary that the AI Act outlaws AI practices that clearly contradict core EU fundamental values (such as social scoring or public biometric mass surveillance), lawmakers need to ensure that any prohibition on AI uses does not inadvertently capture perfectly permissible and innocuous AI uses. The boundaries of ‘prohibited AI’ should therefore be clearly defined to avoid ambiguities that could make EU companies reluctant to develop or use AI in some circumstances, such as for personalisation purposes.

The same care in drafting should apply to AI categories that would require a greater degree of transparency. It is important that deepfakes are accompanied by adequate information and transparency so that individuals know the content has been modified and are not deceived.

This should, however, not extend to legitimate and longstanding commercial practices in the field of marketing and advertising. In the same vein, the creation of AI-generated content for artistic purposes should be clearly exempted from the scope of these provisions in the AIA.

To ensure legal certainty for citizens, consumers, and companies, it is crucial that the AIA only fills existing legislative gaps and does not add a layer on top of existing laws, including laws the EU has recently enacted and those yet to be implemented. Such an extra layer of legislation would add no value or, worse, could create conflicts of law.

In this context, particular vigilance is needed regarding the interaction between the AI Act and the General Data Protection Regulation (GDPR), as in many instances AI will involve the processing of personal data; the recently adopted Digital Services Act (DSA), which creates requirements regarding recommender systems for online platforms regardless of whether they are powered by AI; and the future Platform Work Directive (PWD), for similar reasons.

High-risk use-case based, not technology based

The AIA should only regulate specific uses of AI systems that pose the highest risks to security, safety, and fundamental rights. In other words, the AIA must have a focused approach – as set out in the original proposal of the European Commission – and stay away from any general or sector-based broad reach that would regulate all actors or the technology itself regardless of context.

If the AI Act fails to keep this risk-based approach, it could have unintended effects by regulating innocuous uses of AI, and it would unduly tie up company resources that could be better allocated to R&D and innovation.

It is equally important that the use cases classified as high risk, and therefore subject to stringent obligations, are selected and defined with special care so that innocuous AI uses are not incidentally captured.

As a consequence, the AIA should focus only on a narrow set of highly problematic use cases that have already harmed, or have a high potential to seriously harm, individuals. As it stands, some listed AI use cases do not meet the high threshold that the AIA intends to put in place. This concerns, for instance, AI-powered targeted job advertisements, which have a trivial impact compared with the use of AI to recruit, dismiss, or promote employees.

Similarly, task allocation is an essential part of digital labour platforms relying on AI systems to match tasks with workers and does not meet the high-risk threshold. Creditworthiness evaluations may equally not always create a high risk, especially in the case of lower-value consumer credits. Recommender systems for user-generated content, such as to recommend music or videos, for instance, should be outside of the scope of the AIA.

Another example is AI used in the management and operation of road traffic. Putting a blanket high-risk stamp on AI in this sector could hamper innovation that saves lives rather than endangering them.

The AI Act should also stay away from a very burdensome ‘ex ante’ approach, whereby AI providers or users would always have to obtain confirmation from the regulator that their AI system does not present a significant risk of harm to the health, safety, or fundamental rights of individuals. This type of outdated, bureaucratic approach has the potential to cause excessive formalities and delays.

Many harmless AI systems would be subject to a burdensome clearance procedure before they could be used or safely brought to the market.

The compliance burden would fall disproportionately on mid-sized EU companies rather than on global giants with greater legal and engineering resources. Non-EU countries could take advantage of this by offering faster approval processes to attract AI innovation and investment.

AI evolves very fast and instead needs agile and flexible processes to facilitate the development of a dynamic AI ecosystem in Europe and the smooth introduction of new AI systems and components into the EU single market.

Lastly, any inclusion in the scope of the AI Act of elements of the AI supply chain – such as foundation models, pre-trained models, generative AI, or general-purpose AI – must likewise fully abide by the risk-based approach: such elements should be captured by the AIA only insofar as they are used in the context of the narrow list of high-risk AI use cases.

Imposing extensive obligations on these AI components, even when they are intended for harmless purposes, would completely change the approach of the AI Act from a limited, use-case-based approach to a wide-ranging, horizontal one that regulates the technology as such. This would discourage AI development and use while putting European players at a competitive disadvantage against non-European competitors who have the resources to independently develop and reuse these AI components for their own products and services.

Provide for a robust AI innovation toolbox

To strengthen innovation, the AI Act should provide relevant tools to incentivise and facilitate research and innovation in Europe’s vibrant AI ecosystem. The AIA must provide a clear, comprehensive, and unambiguous exemption for research to ensure a safe space where European companies can perform their research activities and test their innovative products and services.

In addition, the AIA should clearly exclude from its scope AI components offered under free and open-source licences. Many European companies heavily depend on these components to develop their own AI systems, particularly in the case of foundation models that can be further fine-tuned and customised to meet their specific needs. Such open-source foundation models play a vital role in the development of innovative AI products and services by EU companies.

Finally, as AI is a developing technology that relies on experimentation and testing, the AIA must require the setting up of regulatory sandboxes as a safe space in which to discuss innovative products, their compliance with legal obligations, and regulatory interpretation.

The possibility of testing high-risk AI systems in real-world environments, beyond the confines of AI regulatory sandboxes, is also essential. Such testing will enable EU companies to gather pertinent data on the functioning of their AI systems, ensure compliance with the AIA, and accelerate the introduction of their high-risk AI systems to the market.

Set the conditions for effective enforcement

The enforcement of the AI Act must be aligned with its objective to reach a pragmatic balance between innovation and the protection of EU fundamental values. This requires authorities responsible for overseeing the Act to have the right resources, expertise, and culture to be able to understand sectoral market dynamics and deploy the AIA’s risk-based approach and innovation objective.

As a consequence, the right approach may be a mix of authorities that brings together the various relevant competences, rather than a single enforcer. The Data Protection Authorities, for instance, may not have the right experience to implement the AI Act due to their proven risk-averse approach in interpreting the GDPR.

Final thoughts on the AI Act

Writing a law is a complex exercise that requires striking a fine balance between several seemingly conflicting interests. In the case of AI, creating legal categories to capture the uses of a fast-evolving technology, while ensuring the EU leverages its potential economic and societal benefits, is a clear challenge.

If these categories are not properly scoped, the unintended consequences could be significant and the AIA may backfire, causing huge competitive disadvantages for Europe. Where the technology is complex and the stakes are very high, constructive engagement between lawmakers and relevant stakeholders has never been more critical to making a law that is fit for the needs of EU companies.
