Policy Monitor

China – Measures for the Management of Generative Artificial Intelligence Services

The draft rules apply to the research, development, and use of products with generative AI functions, and to the provision of such services to Chinese citizens. The rules respond to the rapid rise of generative AI applications such as OpenAI's ChatGPT (banned in China) and the various models announced by Baidu, Tencent, Alibaba and others. They add a new chapter to recent Chinese AI regulation: rules on deep synthesis technologies came into force in January, and legislation on recommender algorithms was introduced just last year.

What: Legislative proposal

Impact score: 2

For whom: policymakers, regulators, businesses, researchers

Key Takeaways for Flanders

  • The government imposes burdensome data governance requirements on providers of generative AI (covering pre-training and optimisation training). The primary impact of these regulations will most likely be that Chinese organisations struggle to compile the massive datasets needed to keep up with their global rivals.
  • Relevant in the context of the Belgian news surrounding the Eliza chatbot is that the government is also pushing strongly on end-user protection. For example, generative AI providers must implement measures that protect users from over-reliance and addiction.

Translations of the draft rules:

Original document:

http://www.cac.gov.cn/2023-07/...

Chart of Current vs. Draft rules for Generative AI

https://www.chinalawtranslate....

While previous regulations set by the Cyberspace Administration of China (CAC) were primarily concerned with harmful outcomes that posed a threat to national security, the latest regulation goes a step further. It requires models to be accurate and truthful, to align with a specific perspective (reflecting Socialist Core Values), and to refrain from discrimination based on factors such as race, religion, and gender. Moreover, the document introduces explicit limitations on how these models should be developed, which requires addressing challenges such as hallucination, alignment, and bias that currently have no reliable solutions.

An important adjustment between the initial draft and the final legislation is that the rules do not apply to generative AI services that are not offered to the general public in China. The initial draft also imposed unattainable criteria for the quality of training data and the accuracy of output, essentially requiring flawless performance. The revised version retreats significantly from these strict requirements and instead compels companies to implement 'effective measures' to strive towards these objectives.

The Chinese government wants to promote domestic innovation in generative AI and aims for widespread adoption. It is also open to international collaboration on foundational technologies, including AI algorithms and frameworks, and it advocates the use of reliable and secure software, tools, computing resources, and data.

Content moderation

Article 4 of the document contains five core requirements to which the provision of generative AI products or services must conform (a hypothetical output-screening sketch follows the list):

  • Comply with the Chinese government's vision

This clause stipulates that content created by generative AI must reflect the fundamental principles of socialism and must not pose a risk to societal stability. 'Core Socialist Values' is a propaganda campaign centred on promoting civic responsibility and ethical behaviour, and it has become deeply ingrained in both Chinese society and its legal framework.

  • Prevention of discrimination
  • Protection of Intellectual Property (IP) Rights

    The draft rules state that providers are responsible for ensuring that their AI training data does not include content that infringes copyright. However, the phrasing of the rule is vague and leaves many questions unanswered; it does not specify which uses of training material by AI tools count as infringement. The draft rules further hold that IP rights must not be violated in the provision of generative AI services: commercial secrets must be protected, and advantages in algorithms, data, or platforms cannot be used to establish monopolies.

  • Transparency & accuracy

    Companies should establish effective measures to increase transparency in generative AI services and to improve the accuracy and reliability of generated content.

  • Privacy and data protection
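
To illustrate what compliance with Article 4 might look like in practice, here is a minimal, hypothetical Python sketch of a pre-release screening step that a provider could run over generated output before returning it to users. The category list, the classifier, and the threshold are assumptions made for illustration; the draft rules themselves do not prescribe any particular mechanism.

    from dataclasses import dataclass

    # Categories a provider might treat as prohibited; purely an assumption.
    PROHIBITED_CATEGORIES = ["discrimination", "national_security_risk", "disinformation"]

    @dataclass
    class ScreeningResult:
        allowed: bool
        flagged_categories: list[str]

    def screen_output(text: str, classify) -> ScreeningResult:
        """Run a provider-supplied classifier over generated text and block
        anything scoring above a threshold in a prohibited category."""
        scores = classify(text)  # expected to return e.g. {"discrimination": 0.02, ...}
        flagged = [c for c in PROHIBITED_CATEGORIES if scores.get(c, 0.0) > 0.5]
        return ScreeningResult(allowed=not flagged, flagged_categories=flagged)

    # Usage with a dummy classifier that flags nothing:
    result = screen_output("example output", classify=lambda t: {"disinformation": 0.1})
    print(result)  # ScreeningResult(allowed=True, flagged_categories=[])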

Responsibility

The draft rules also determine that individuals or companies that use generative AI to provide services, or that provide access to generative AI via APIs, bear responsibility for all issues, including those that stem from decisions made by the client company, such as app design or limits on user behaviour (Article 9). The responsibility for misinformation generated with the technology lies with the provider, not with the user who believes it.

Providers must submit a security self-assessment to the state cybersecurity and informatization department and obtain approval before deploying the technology; this applies only when services are offered to the public. Providers must also carry out the procedures for algorithm filing, modification, and cancellation of filing.

Research & Development

The document contains broad and demanding requirements for data governance (Article 7). Data used for training and optimization must be obtained by legal means and must:

  • Use data and foundational models that have lawful sources
  • Avoid any intellectual property rights infringement
  • Where personal information is involved, obtain the consent of the personal information subject, or otherwise comply with the situations provided for by laws and administrative regulations
  • Employ effective measures to increase the quality of training data, and to increase the truthfulness, accuracy, objectivity, and diversity of training data
  • Adhere to other supervision requirements from the state cybersecurity and informatization department regarding generative AI functions and services.

The rules contain no further details, are demanding, and can create a lot of uncertainty for Chinese organisations. For example, the requirement that training data be objective would disqualify a large share of typical pretraining data. A minimal sketch of how a provider might operationalise such checks follows below.
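
As a purely illustrative sketch, and not anything mandated by the draft rules, the following Python snippet shows one way a provider might filter candidate training records against Article 7-style checks (documented lawful source, consent where personal information is involved). The record fields and helper logic are assumptions.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TrainingRecord:
        text: str
        source_licence: Optional[str] = None   # e.g. "CC-BY-4.0"; None if provenance unknown
        contains_personal_info: bool = False
        consent_obtained: bool = False

    def passes_article7_checks(rec: TrainingRecord) -> bool:
        # Lawful source: keep only records whose provenance/licence is documented.
        if rec.source_licence is None:
            return False
        # Personal information: require documented consent (other legal bases
        # allowed by the rules are not modelled here).
        if rec.contains_personal_info and not rec.consent_obtained:
            return False
        return True

    def filter_corpus(records: List[TrainingRecord]) -> List[TrainingRecord]:
        return [r for r in records if passes_article7_checks(r)]

    # Example: only the first record survives the filter.
    corpus = [
        TrainingRecord("licensed text", source_licence="CC-BY-4.0"),
        TrainingRecord("text with personal data and unknown provenance",
                       contains_personal_info=True, consent_obtained=False),
    ]
    print(len(filter_corpus(corpus)))  # 1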

Protection of end users

Generative AI service providers are required to sign service agreements with users who register for their services, clarifying the rights and obligations of both parties (Article 9), and to adopt measures that prevent excessive reliance on, and addiction to, the services (Article 10). They should also disclose information that could influence users' decisions, such as a "description of the source, scale, type, quality, and other details of pre-training and optimized-training data, rules for manual labelling, the scale and types of manually-labelled data, as well as fundamental algorithms and technical systems" (Article 17).
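
The anti-addiction requirement of Article 10 is left open-ended. As one hypothetical way a provider might operationalise it, the Python sketch below applies a per-user daily request cap and a break reminder after a long session. The specific limits and the in-memory storage are invented for illustration; the rules only require "measures", not any particular mechanism.

    import time
    from collections import defaultdict

    DAILY_REQUEST_LIMIT = 200            # assumed cap, not taken from the rules
    SESSION_REMINDER_SECONDS = 60 * 60   # nudge after one hour of continuous use

    _requests_today = defaultdict(int)   # user_id -> request count (reset daily elsewhere)
    _session_start = {}                  # user_id -> timestamp of first request in session

    def check_usage(user_id: str) -> str:
        """Return 'ok', 'remind' (suggest a break) or 'block' (daily cap reached)."""
        now = time.time()
        _session_start.setdefault(user_id, now)
        _requests_today[user_id] += 1
        if _requests_today[user_id] > DAILY_REQUEST_LIMIT:
            return "block"
        if now - _session_start[user_id] > SESSION_REMINDER_SECONDS:
            return "remind"
        return "ok"

    print(check_usage("user-123"))  # 'ok'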

In terms of data security, providers must safeguard user activity logs and the data that users have submitted. User profiling and the sharing of end users' information with third parties are prohibited (Article 11).

Finally, providers should put in place mechanisms for end users to file reports and develop a system to handle user complaints (Articles 18-19).

Penalties

If providers do not comply with this short but comprehensive list of requirements, they face not only suspension of their services but also fines and potential criminal liability.