The EU has recently proposed a draft regulation to address the future challenges of artificial intelligence. The EU is attempting a balancing act between promoting the development of AI and avoiding over-regulation of new technologies. The regulation categorizes AI systems based on their perceived risk level, regulating more strictly those systems deemed higher-risk.

The draft regulation adopts a broad definition of AI that encompasses software developed with a variety of machine learning techniques. These include supervised, unsupervised and reinforcement learning (using methods such as deep learning), as well as logic- and knowledge-based approaches and statistical approaches such as Bayesian estimation and search and optimization methods. Relevant outputs include content, predictions, recommendations, or decisions influencing the environments with which the AI interacts.

Below are highlights of the main aspects of the draft regulation.

High-risk AI: The primary focus of the new Regulation is on “high-risk” and certain prohibited AI systems. The European Commission is encouraging the adoption of voluntary codes of conduct for other forms of AI. High-risk AI systems are listed in two annexes to the Regulation and include AI systems used in the transport sector, medical devices, radio equipment, protective equipment and a range of other safety-critical products. Other AI systems are also deemed high risk, including those used in critical infrastructure, recruitment, employee monitoring and evaluation, student admission and assessment, individual creditworthiness checks and credit scoring, and biometric identification or categorization of natural persons. This list can be amended in the future by the European Commission.

High-risk AI systems will need to undergo a conformity assessment to ensure they comply with the requirements of the regulation before they are placed on the market or put into service in the EU. Certain types of standalone high-risk AI systems must also be registered in an EU database. The regulation requires the establishment of risk and quality management systems, technical documentation and record-keeping, and imposes transparency, human oversight, accuracy, robustness and cybersecurity obligations.

Human oversight: High-risk AI systems must be designed and developed such that they can be effectively overseen by natural persons. Human oversight should aim to prevent or minimize the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose (including reasonably foreseeable misuse). Human oversight must include:

  • understanding the capacities and limitations of the high-risk AI system and the ability to monitor its operation, so that signs of anomalies, dysfunctions and unexpected performances can be detected and addressed as soon as possible;
  • awareness of the possible tendency of over-relying on the output produced by a high-risk AI system (‘automation bias’); and
  • the ability to correctly interpret the high-risk AI system’s output and to decide not to use the system or otherwise disregard, override or reverse its output.

Regarding real-time or remote biometric identification of natural persons (such as facial recognition applications), no action or decision can be taken by a user on the basis of the AI identification unless it has been separately verified and confirmed by at least two natural persons.

Data and data governance: Datasets used by high-risk AI systems will need to be of high quality to minimize the risk of adverse outcomes. They must be subject to appropriate data governance and management practices covering relevant design choices, data collection, data preparation, the formulation of assumptions, and assessments of quality and suitability. Such datasets must also be examined for possible biases, data gaps and other shortcomings, and for how those shortcomings can be addressed. This heightens the importance of access to good-quality and representative datasets. There are also obligations relating to training, validation and testing datasets.
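The regulation does not prescribe how such a review should be carried out. As a purely illustrative sketch (the file name, column names and threshold below are hypothetical, not drawn from the regulation), a provider might screen a training dataset for data gaps and for how well different groups are represented along these lines:

```python
import pandas as pd

# Hypothetical training data for a credit-scoring model; "training_data.csv",
# "age_band" and "approved" are illustrative placeholders.
df = pd.read_csv("training_data.csv")

# Data gaps: flag columns with a high share of missing values.
missing_share = df.isna().mean()
print("Columns with more than 5% missing values:")
print(missing_share[missing_share > 0.05])

# Representativeness: how large is each demographic group in the dataset?
print("\nShare of records per age band:")
print(df["age_band"].value_counts(normalize=True))

# A crude comparison of outcome rates across groups, as one possible starting
# point for a bias review (not a substitute for a proper fairness assessment).
print("\nApproval rate per age band:")
print(df.groupby("age_band")["approved"].mean())
```

Whether checks along these lines would satisfy the data governance requirements depends on the system and its intended purpose; the point is simply that dataset quality, gaps and representativeness become matters providers must be able to document.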

Accuracy and robustness: High-risk AI systems must be designed so that they achieve an appropriate level of accuracy, robustness and cybersecurity throughout their lifecycle. High-risk AI systems must also be resilient to errors, faults or inconsistencies, in particular those arising from their interaction with natural persons or other systems. High-risk AI systems that continue to “learn” after becoming operational must be developed so that possibly biased outputs arising from feedback loops (where the system’s own outputs influence its future training data) are subject to appropriate mitigation measures; in other words, an AI system should not be allowed to entrench biases generated by its own operation.

Technical documentation: Technical documentation for high-risk AI systems (demonstrating regulatory conformity) must be in place before the system becomes operational.

Transparency obligations: Users of high-risk AI systems must be able to interpret the system’s output and use it appropriately (providers are obliged to supply instructions for use). Additional transparency obligations for AI systems (not necessarily high-risk ones) include:

  • individuals must be notified that they are interacting with an AI system, where this is not evident;
  • users of an emotion recognition or biometric categorization system must inform the natural persons exposed to it that the system is in operation; and
  • users who generate or manipulate “deep fake” images or videos must disclose that the content has been artificially generated or manipulated.

There are criminal law enforcement exceptions for some of the transparency obligations.

Application to other AI systems: The draft regulation identifies certain AI practices as particularly harmful; these are prohibited for all AI systems. The prohibited practices include the use of subliminal techniques to materially distort behavior, the exploitation of vulnerabilities of specific groups, social scoring by public authorities, and the use of certain forms of remote biometric identification systems in public spaces for law enforcement. The regulation would allow the use of remote biometric identification systems to search for crime victims (such as missing children), to prevent the specific threat of a terrorist attack, or to detect, identify or prosecute serious crime, subject to certain protections (including prior judicial or administrative approval).

Affected parties: The draft regulation would apply to both providers and users:

  • Provider: a natural or legal person that develops an AI system with a view to placing it on the EU market or putting it into service;
  • User: a natural or legal person using an AI system, except where the AI system is used in the course of a personal, non-professional activity.

Extra-territorial reach: The draft regulation would have extra-territorial effect, applying to:

  • Providers (even if based outside and not established within the EU) that place AI systems on the market or put them into service in the EU;
  • Users of AI systems located in the EU; and
  • Providers and users of AI systems located in a third country, where the output produced by the system is used in the EU.

Providers based outside the EU must appoint an authorized representative if no importer can be identified. The extra-territorial reach of the regulation echoes the approach taken by the EU with respect to the protection of personal data under the GDPR. If an AI system is placed on the EU market or its use affects people in the EU, the regulation will apply, whether or not the relevant AI system providers or users are located in the EU. This long-arm approach will bring parties outside the EU within the scope of the regulation and potentially expose them to fines.

Other actors: The regulation will potentially have an effect on other actors in the AI value chain, including importers, distributors and third-party suppliers. Importers and distributors will have to verify that the required conformity assessment has been carried out, and importers, distributors, third parties and users may take on a provider’s obligations (and primary liability) where they substantially modify AI systems that are subject to conformity rules. Distributors who have reason to believe that a high-risk AI system is non-conforming must not make the system available on the market and must inform the provider or importer; distributors may also have product recall and other reporting obligations. Storage and transport arrangements must not jeopardize the system’s regulatory conformity.

Monitoring: After a high-risk AI system is placed on the market or put into service, providers must establish a post-market monitoring system to collect operational data, evaluate continuing compliance, and take any necessary corrective actions.

Enforcement: The regulation will be enforced by EU Member State authorities with the assistance of the European AI Board, which will facilitate cooperation between national authorities and provide guidance and oversight. Non-compliance can attract fines of up to €30 million or, for companies, up to 6% of total worldwide annual turnover (whichever is higher) for certain violations, as well as the possibility of forced withdrawal of the AI system from the market.
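As a simple illustration of how the headline cap scales (assuming the “whichever is higher” rule for the most serious violations described above; the turnover figure is hypothetical):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Headline fine cap for the most serious violations under the draft text:
    EUR 30 million or 6% of total worldwide annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
# 6% of turnover (EUR 120 million) exceeds EUR 30 million, so it sets the cap.
print(max_fine_eur(2_000_000_000))  # 120000000.0
```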


Next steps: The draft regulation provides insight into the European Commission’s stance towards AI, but much will depend on the views of EU lawmakers and the pressure applied by lobbyists during the adoption process. The draft proposals put forward by the EU Parliament last fall and subsequent submissions by Parliamentarians to the European Commission show a desire for stronger positions on fundamental rights and anti-discrimination. EU Member States also have different perspectives, with France adopting a more permissive approach to security measures and Germany placing greater emphasis on the constitutional rights associated with privacy.

This is a good time for affected businesses to begin evaluating possible risks to their business models and examining their AI supply chains to consider how any risks arising from the draft regulation can be mitigated.