On 17 December 2020, the Council of Europe’s* Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice in creating the necessary legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.

A wide range of legislative instruments applicable to AI are considered by the Study, including: (1) international legal instruments, such as the ECHR and EU Charter of Fundamental Rights; (2) AI ethics guidelines, including ones developed by private companies and public-sector organisations; and (3) national AI instruments and strategies. After weighing the advantages and disadvantages of these measures, the Study concludes that no international legal instrument specifically tailored to the challenges posed by AI systems exists, and that there are gaps in the current level of human rights protections. Such gaps include, amongst other things, the need to ensure:

  • sufficient human control and oversight;
  • the technical robustness of AI applications; and
  • effective transparency and explainability.

To respond to the human rights challenges presented by AI, the Study sets out the principles, rights, and obligations that could act as the main elements of a future legal framework. The proposed framework seeks to translate existing human rights into the context of AI by specifying more concretely what falls under a broader human right; how it could be invoked by those subjected to AI systems; and the requirements that AI developers and deployers should meet to protect such rights. The Study identifies nine principles that are essential to respect human rights in the context of AI:

  • Human Dignity: AI deployers should inform individuals that they are interacting with an AI system whenever confusion may arise, and individuals should be granted the right to refuse interaction with an AI system whenever this can adversely impact human dignity.
  • Prevention of Harm to Human Rights, Democracy, and the Rule of Law: AI systems should be developed and used in a sustainable manner, and AI developers and deployers should take adequate measures to minimise any physical or mental harm to individuals, society and the environment.
  • Human Freedom and Human Autonomy: Individuals should have the right to effectively contest and challenge decisions informed or made by an AI system and the right to decide freely to be excluded from AI-enabled manipulation, individualised profiling, and predictions.
  • Non-Discrimination, Gender Equality, Fairness and Diversity: Member States should impose requirements to effectively counter the potential discriminatory effects of AI systems deployed by both the public and private sectors, and to protect individuals from their negative consequences.
  • Principle of Transparency and Explainability of AI Systems: Individuals should have the right to a meaningful explanation of how an AI system functions, what optimisation logic it follows, what type of data it uses, and how it affects their interests, whenever it generates legal effects or has similar impacts on individuals’ lives. The explanation should be tailored to the particular context, and should be provided in a manner that is useful and comprehensible for an individual.
  • Data Protection and the Right to Privacy: Member States should take particular measures to effectively protect individuals from AI-driven surveillance, including remote biometric recognition technology and AI-enabled tracking technology, where such surveillance is not compatible with the Council of Europe’s standards on human rights, democracy and the rule of law.
  • Accountability and Responsibility: Developers and deployers of AI should identify, document, and report on potential negative impacts of AI systems on human rights, democracy and the rule of law, and put in place adequate mitigation measures to ensure responsibility and accountability for any harm caused. Member States should ensure that public authorities are able to audit AI systems, including those used by private actors.
  • Democracy: Member States should take adequate measures to counter the use or misuse of AI systems for unlawful interference in electoral processes, for personalised political targeting without adequate transparency mechanisms, and more generally for shaping voters’ political behaviours and manipulating public opinion.
  • Rule of Law: Member States should ensure that AI systems used in justice and law enforcement are in line with the essential requirements of the right to a fair trial. They should pay due regard to the need to ensure the quality, explainability, and security of judicial decisions and data, as well as the transparency, impartiality, and fairness of data processing methods.

The Study recommends that the Council of Europe adopt a binding legal instrument (such as a convention) establishing the main principles for AI systems, which would provide the basis for relevant national legislation. It also suggests that the Council of Europe develop further binding or non-binding sectoral instruments with detailed requirements that address specific sectoral challenges of AI. The Study recommends that the proposed legal framework pursue a risk-based approach, targeting specific AI application contexts and acknowledging that not all AI systems pose an equally high level of risk.

Next Steps

The Study was adopted by CAHAI during its plenary meeting in December 2020. Next, it will be presented to the Committee of Ministers of the Council of Europe, who may instruct CAHAI to begin developing the specific elements of a legal framework for AI. This could include a binding legal instrument, as well as non-binding and sectoral instruments.

CAHAI’s work joins similar international initiatives looking to provide guidance and build a global consensus on the development and regulation of AI, including the adoption by OECD member states of the OECD Principles on AI in May 2019 (the first intergovernmental AI standards agreed on by governments) and the establishment of the Global Partnership on Artificial Intelligence (GPAI) in June 2020. We anticipate further developments in this area in 2021, including the European Commission’s forthcoming proposals for AI legislation.

In particular, the principles and recommendations for further action set out in the Study share similar themes with ongoing EU initiatives on AI regulation, including the EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI and the European Commission’s White Paper on AI. Like the Council of Europe’s Study, these initiatives propose a risk-based approach to regulating AI, centred on upholding fundamental human rights like non-discrimination, and ensuring that AI applications are developed and deployed in a trustworthy, transparent, and explainable manner.

Stay tuned for further updates.

* The Council of Europe is an international organization that is distinct from the European Union. Founded in 1949, the Council of Europe has a mandate to promote and safeguard the human rights enshrined in the European Convention on Human Rights. The organization brings together 47 countries, including all of the 27 EU member states. Recommendations issued by the Council of Europe are not binding, but EU institutions often build on Council of Europe standards when drawing up legislation.

Lisa Peets


Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Marty Hansen


Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Sam Jungyun Choi


Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.

Marianna Drake


Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.