Thomas Schneider, Chair of the Committee on Artificial Intelligence (CAI) at the Council of Europe, explains how the world is collaborating to ensure human rights and democracy are protected as the use of AI technology soars.
Artificial Intelligence (AI) systems have grown significantly in popularity in recent years, and are especially in the spotlight in 2023 following the introduction of a particular type of system known as ‘Large Language Models’ (LLMs). Examples of such systems include ChatGPT and Bard.
Though AI technology, of which LLMs are just one example, is still a work in progress, it is already clear that the use of AI systems will profoundly change our working lives, our private lives, and – perhaps most importantly in the bigger picture – how we organise and govern our societies. This is not because algorithms are truly more intelligent than humans, but because they offer economic stability and efficiency in executing numerous tasks, both basic and advanced, at a level with which most forms of human intelligence cannot compete.
Concerns surrounding AI use
Public debate generally tends to focus on the economic benefits and downsides of using AI technology. However, the introduction of AI systems in public administration and the judicial system – but also their use by private actors in the provision of certain essential services – gives rise to serious concerns about how to ensure continued protection of human rights and democracy, and respect for the rule of law, if AI systems assist or even replace human decision-making. Their use may also significantly impact democratic processes, including elections, the right to assembly and association, and the right to hold opinions and receive or impart information – in short, the very foundations of liberal democracy.
It is thus high time for States around the world and intergovernmental organisations to address the challenges posed by AI technology and to create the necessary legal framework – one that promotes much-needed innovation, but not at the cost of human rights and fundamental freedoms. We need to consider carefully how we can use AI systems to improve the way our societies function, protect our environment, and boost our economies without inadvertently creating a dystopian and undemocratic world governed by the rule of algorithm rather than the rule of law.
Governing AI technology use
In 2019, the Council of Europe, the continent’s oldest intergovernmental regional organisation with 46 Member States and perhaps most widely known around the world for its European Court of Human Rights, began groundbreaking work on the feasibility of, and need for, an international treaty on AI based on its own and other relevant international legal standards in the area of human rights, democracy, and the rule of law. The results of this pioneering work at the international level led to the creation in 2022 of the Committee on Artificial Intelligence (CAI). The task of the CAI is to elaborate a Framework Convention on AI technology that will set out legally binding requirements, principles, rights, and obligations in relation to the design, development, use, and decommissioning of AI systems from a human rights, democracy, and rule of law point of view.
A global approach
‘No man is an island’, as the saying goes, and no region in the world can stand entirely on its own. We all form part of a globalised economy and are ultimately facing the same challenges and policy choices. AI technology knows no borders, and meaningful international standard-setting for the human rights and democracy aspects of AI systems obviously cannot be limited to a specific region of the world. Accordingly, the Committee of Ministers of the Council of Europe has decided to allow interested non-European States sharing the values and aims of the Council of Europe to join the negotiations, and a growing number of States from around the globe have already joined, or are in the process of joining, our efforts.
Likewise, it has been important for the Council of Europe to closely involve relevant non-state actors in these negotiations. There are currently 61 civil society and industry representatives in the CAI as observers, participating in the negotiations together with States and representatives of other international organisations and relevant Council of Europe bodies and committees.
Protecting human rights and democracy
In the European region, the European Union (EU) plays a key role in the regulation of AI systems for its 27 Member States and, for that reason, is also directly involved in the Council of Europe negotiations on their behalf. Once they enter into force, the EU’s AI Act and the Council of Europe’s Framework Convention are set to mutually reinforce each other, providing an example of how best to make use of the combined strengths and competencies of both European organisations.
The draft Framework Convention (a consolidated ‘working draft’ is publicly available at the Council of Europe website for the CAI) is focused on ensuring that the use of AI technology does not create a legal vacuum in terms of the protection of human rights, the functioning of democracy and democratic processes, or respect for the rule of law. In line with the findings of the feasibility study prepared by the former Ad Hoc Committee on Artificial Intelligence (CAHAI), which preceded the CAI, its aim is not to create new substantive human rights specific to the AI context, but to guarantee that the existing human rights and fundamental freedoms protected most notably by international law cannot be violated. This will be achieved by requiring parties to oblige regulators, developers, providers, and other AI actors to consider risks to human rights, democracy, and the rule of law from the moment of conception and throughout the lifecycle of these systems. Moreover, the system of legal remedies available to victims of human rights violations should be updated in view of the specific challenges posed by AI technologies, such as those relating to transparency and explainability.
Threats to democracy
When it comes to the potential threats to democracy and democratic processes emanating from AI technology, the treaty will address, in particular, the capacity of such systems to be used to manipulate or deceive individuals. This includes the use of so-called ‘deep fakes’, microtargeting, and more direct interferences with the rights to freedom of expression, to form and hold an opinion, to freedom of assembly and association, and to receive or impart information. The Framework Convention will contain legally binding obligations for its parties to provide effective protection against such practices.
The ‘rule of law’ is a longstanding legal-philosophical concept encompassing, amongst other things, the ideas that governments, as well as private actors, are accountable under the law; that the law should be clear and publicised; that laws are enacted, administered, and enforced in an accessible, fair, and efficient manner; and that access to impartial dispute resolution is guaranteed for everyone. This basic notion of what constitutes a fair, liberal, law-abiding society must clearly be respected when designing and using AI systems in sensitive contexts, such as (but not limited to) the drafting of laws, public administration, and not least the administration of justice through the courts of law. The Framework Convention will also set out specific obligations for parties in this regard.
A balancing act: A risk-based approach to AI
The draft Framework Convention – and indeed all the work of the CAI – adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, and in doing so puts a premium on human dignity and agency. It is important that we are not carried away by the obvious possibilities offered by AI technology without carefully considering the potential negative consequences of using AI systems in various contexts. The draft Framework Convention therefore also obliges parties to raise awareness and stimulate an informed public debate on how AI technology should be used.
As is clear from the above, AI and other new and emerging digital technologies raise many fundamental questions and challenges for democratic societies. At the same time, these technologies also offer us the opportunity to make invaluable progress in science, medicine, and the protection and improvement of the environment, to mention just a few key areas. They also promise to boost the global economy and ultimately create better living conditions for all of humanity. Some influential voices in the public debate have recently been calling for a moratorium on, or even a ban of, AI technology because they consider that the dangers it poses outweigh the advantages it offers. While the legitimate concerns raised about AI need to be taken seriously, we must acknowledge that the genie is out of the bottle, and there is no way we can effectively roll back the scientific and technological developments that have enabled the creation of advanced and powerful AI systems.
Therefore, the realistic approach must be to find ways to use AI and other digital technologies responsibly and to make sure that as many people around the world as possible can benefit from them and enjoy protection from any abuse of such technologies.
This is a colossal task, and it requires a concerted effort by like-minded States and support from civil society, the tech industry, and academia to succeed. It is our hope and ambition that the Framework Convention, which the Council of Europe is elaborating together with States from all over the world, will provide much-requested legal clarity and guarantees for the protection of fundamental rights. We also hope it will become a focal point for current and future discussions on how to formulate balanced and durable policy solutions to the challenges that the introduction of new and powerful digital technologies poses to human rights and democracy.