The European Commission cast a wide regulatory net over artificial intelligence (AI) technologies and practices last month when it proposed new rules for AI on April 21, 2021. Much like the problems the regulated AI technologies are designed to solve, assessing whether the Regulations apply to a particular AI system or practice essentially reduces to a data-driven classification task, albeit a complex, legal one: assigning the AI system or practice to the most appropriate “prohibited,” “high risk,” “limited risk,” or “minimal risk” category (defined in the Regulations and on the EC’s website). (Also, the proposed Regulations would apparently not apply to private, non-professional uses of AI technologies, so another class could be “non-regulated,” for lack of a better label.)
Why perform a risk-based classification assessment? For the simple reason that learning about potential future regulatory obligations may help direct near-term planning efforts and resource spending. But it can also provide insight into other areas of one’s business. For example, a company may want to assess the adequacy of its insurance coverage or the sufficiency of existing financial risk factor assessments (U.S. companies). Others may wish to assess the capabilities of in-house resources to address potentially burdensome regulatory requirements, or whether making changes to an AI system and practices now could avoid the Regulations altogether. Whatever the reason, here is how the proposed Regulations and the EC define AI systems and practices under each classification:
- Prohibited: AI systems and practices classified as prohibited are those that possess or exhibit an unacceptable risk of infringement of the fundamental rights of others. Title II of the proposed Regulations describes the prohibited AI systems and practices, in some cases using functional and results-oriented language (some of which is ambiguous and could lead to uncertainty about a company’s status under the Regulations). Example prohibited AI systems and practices include certain remote biometric surveillance applications by law enforcement (with the exception of certain strictly necessary uses).
- High Risk: AI systems and practices classified as high risk are identified in Title III of the proposed Regulations. Generally, they include those that create a high risk of adverse impact on the health and safety or fundamental rights of natural persons, taking into account a system’s functions and the specific purposes and modalities for which the system is to be used. Specifically identified high risk systems are listed in Annex III (which may be updated as new high risk systems are identified by regulatory authorities), and include:
- systems and practices used in critical infrastructure;
- educational or vocational training;
- product safety;
- employment, workers management and access to self-employment;
- essential private and public services;
- certain law enforcement practices;
- migration, asylum and border control management; and
- administration of justice and democratic processes.
- Limited Risk: According to the EC’s website, AI systems classified as limited risk include those with a clear risk of manipulation, such as chatbots. In such cases, providers may be subject to basic obligations under the rules.
- Minimal Risk: The EC predicts that most AI systems and practices will be classified as having a minimal risk of adverse impacts on citizens’ rights or safety. Minimal risk AI systems include AI-enabled video games and spam filters.
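To make the classification-task framing concrete, the four risk tiers (plus a "non-regulated" catch-all) can be sketched as a simple rule-based decision procedure that checks the most restrictive categories first. This is a minimal illustrative sketch only, not legal advice or the Regulations' actual test; the descriptor fields and the list of high risk domains are hypothetical simplifications of the Annex III areas listed above.

```python
from enum import Enum

class RiskClass(Enum):
    NON_REGULATED = "non-regulated"
    PROHIBITED = "prohibited"
    HIGH_RISK = "high risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"

# Illustrative shorthand for the Annex III areas (not the official text)
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "education and vocational training",
    "product safety", "employment", "essential services",
    "law enforcement", "migration and border control",
    "administration of justice",
}

def classify(system: dict) -> RiskClass:
    """Assign a hypothetical AI system descriptor to a risk class,
    evaluating the most restrictive categories first."""
    # Private, non-professional uses apparently fall outside the Regulations
    if not system.get("professional_use", True):
        return RiskClass.NON_REGULATED
    # Title II prohibited practices (e.g., certain remote biometric surveillance)
    if system.get("prohibited_practice", False):
        return RiskClass.PROHIBITED
    # Title III / Annex III high risk domains
    if system.get("domain") in HIGH_RISK_DOMAINS:
        return RiskClass.HIGH_RISK
    # Limited risk: systems with a clear risk of manipulation, such as chatbots
    if system.get("interacts_with_humans", False):
        return RiskClass.LIMITED_RISK
    # Everything else: the EC expects most systems to land here
    return RiskClass.MINIMAL_RISK

chatbot = {"professional_use": True, "domain": "customer service",
           "interacts_with_humans": True}
print(classify(chatbot).value)  # limited risk
```

The ordering of the checks matters: a résumé-screening chatbot used in hiring would hit the "employment" domain rule before the chatbot rule and come out high risk, which mirrors how the more restrictive tiers take precedence in the Regulations.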
Determining which class one’s AI system and practices most likely fall into is the goal of performing a regulatory applicability assessment. Obviously, one should fully document the data and information used in reaching the decision, especially where an AI system and practices are not explicitly identified in the proposed Regulations and the classification assessment thus relies on subjective assumptions or rule interpretations. Moreover, an initial classification is not the end of the process. Making changes to an AI system or practices (e.g., to a system’s network architecture or its intended purpose) after the effective date of the Regulations may result in the updated system or practices being reclassified. For example, a limited risk AI system could be modified such that, in its new configuration, it should fairly be reclassified as high risk. Likewise, a technological change made to a high risk system could place it in a lower risk classification.
The post Approaching Applicability of Europe’s Proposed AI Regulations as a Classification Task first appeared on ARTIFICIAL INTELLIGENCE TECHNOLOGY AND THE LAW.