On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法（征求意见稿）》) (“draft Measures”) (official Chinese version available here) for public consultation. The deadline for submitting comments is May 10, 2023.
The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.” These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias and the quality of training data. The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency. The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.
Further, notwithstanding the requirements introduced by the draft Measures (described in greater detail below), the text states that the government encourages the indigenous development of, and international cooperation in relation to, generative AI technology, and encourages companies to adopt "secure and trustworthy software, tools, computing and data resources" to that end.
Notably, the draft Measures do not distinguish between generative AI services offered to individual consumers and those offered to enterprise customers, although certain requirements appear to be directed more at consumer-facing services than at enterprise services.
This blog post identifies a few highlights of the draft Measures.
Definition and Scope
The draft Measures apply to "research and development into, as well as the use of, generative AI" that is offered to "the public" within the territory of China. Generative AI is defined as technology that "generates content in the form of text, pictures, audio, video and code based on algorithms, models, and rules." (Article 2)
It is unclear from the wording of the draft Measures whether "the public" refers only to consumers in China, which would exclude generative AI services offered to enterprises from their scope. It is also unclear whether providers of generative AI located outside of China whose services do not specifically target the Chinese market will be subject to these rules.
The draft Measures define "a provider of generative AI" as an entity or individual that utilizes generative AI products to provide services, such as services that generate chat, text, image, or audio content. Note that this definition includes service providers that allow others to generate such content through an API or other means. (Article 5) The draft Measures do not distinguish between providers of generative AI offering back-end technologies and those that build services at the application level. Both are responsible for content produced by generative AI products and are required to protect personal information in accordance with China's Personal Information Protection Law ("PIPL").
Article 4 of the draft Measures requires providers of generative AI to adhere to the following principles:
- ensure that content created by generative AI is consistent with the “social order and societal morals,” and does not endanger national security;
- adopt measures to avoid discrimination when designing algorithms, training data sets, or providing services that incorporate generative AI;
- ensure that content created by generative AI is true, accurate, and free of fraudulent information; and
- respect intellectual property and comply with all other applicable laws and regulations.
Providers of generative AI are required to adopt measures to filter any inappropriate content created by generative AI and, within three months, to optimize their algorithms to prevent such content from being generated again. (Article 15) Providers of generative AI are also required to use tagging mechanisms to identify content (such as images and videos) created by generative AI, in accordance with the Provisions on the Management of Deep Synthesis of Internet Information Services (《互联网信息服务深度合成管理规定》). (Article 16)
Security Assessment and Filing
Under the draft Measures, before offering a generative AI service to the public at large, a provider must apply to the CAC for a security assessment in accordance with the Provisions on the Security Assessment of Internet Information Services with Characteristics of Opinions or Capable of Social Mobilization (《具有舆论属性或社会动员能力的互联网信息服务安全评估规定》) ("Assessment Provisions"). (Article 6)
The Assessment Provisions were released in 2018 with the objective of governing Internet information services such as public forums, live streaming, and other types of online information-sharing activities. Under the Assessment Provisions, in-scope service providers are required to carry out a self-assessment or engage a third-party agency to carry out the assessment. Factors to be considered in the assessment largely overlap with the requirements under the draft Measures, including, for instance: (1) verification of the real identity of users; (2) technical measures adopted to protect personal information; and (3) internal mechanisms for content review.
Providers of generative AI are also required to file certain information regarding their use of algorithms with the CAC in accordance with the requirements under the Provisions on the Management of Algorithm Recommendation of Internet Information Services (《互联网信息服务算法推荐管理规定》) (Article 6) – including, for instance, the name of the service provider, the service form, the algorithm type, and an algorithm self-assessment report.
Protection of the Rights and Interests of End Users
Providers of generative AI are required to ask end users to provide real identity information. (Article 9) Further, such providers must specify the targeted end users and use cases of the services provided, and adopt measures to prevent end users from becoming addicted to the services. (Article 10)
Providers of generative AI are also required to disclose information that might impact users’ choices, including a “description of the source, scale, type, quality, and other details of pre-training and optimized-training data, rules for manual labeling, the scale and types of manually-labeled data, as well as fundamental algorithms and technical systems.” At present, it is unclear how such information should be disclosed or to whom such information needs to be disclosed. (Article 17)
Providers of generative AI are further required to protect data submitted by end users, as well as the activity logs of end users. Providers are prohibited from conducting user profiling or sharing information related to end users with third parties. (Article 11)
Providers of generative AI must also establish a mechanism to intake and review complaints from end users. (Article 13)
Finally, providers of generative AI should "guide" end users to properly utilize generative AI, and not to use it to "damage the image, reputation, or other legitimate rights and interests of others" or to "engage in commercial hype or improper marketing." (Article 18) If a provider of generative AI discovers improper use of the technology by its users, it should suspend or terminate the services provided to such end users. (Article 19) In addition, an end user can report a provider of generative AI to the CAC if the generated content does not comply with the requirements of the draft Measures. (Article 18)
Discrimination and Training Data
Article 7 of the draft Measures also imposes obligations on the research and development of generative AI. Specifically, providers of generative AI must ensure that data used for training and optimization is obtained through legal means, and such data must:
- comply with requirements stipulated by the Cybersecurity Law;
- not contain content that infringes intellectual property;
- if it constitutes personal information, be obtained on the basis of consent from data subjects, or otherwise comply with the requirements provided under applicable Chinese laws and regulations;
- be accurate, objective and sufficiently diverse; and
- comply with other regulatory requirements related to generative AI released by the CAC.
Providers of generative AI must also define clear rules for data annotation and train employees involved in such annotation. (Article 8) Further, providers of generative AI should not generate discriminatory content based on the race, nationality, gender or other characteristics of the user. (Article 12)
Article 20 of the draft Measures specifies that a provider of generative AI that violates the requirements under the draft Measures will be penalized in accordance with the Personal Information Protection Law, the Cybersecurity Law, the Data Security Law, or other relevant regulations. If these laws and regulations do not specify a particular penalty, a violator may receive warnings, be ordered to take corrective action or suspend services, pay fines, or be held criminally liable.