policy monitor

China - Legislative proposal concerning deep synthesis technology

At the beginning of 2022, the Cyberspace Administration of China (CAC) introduced a draft regulation on the management of Deep Synthesis Internet Information Services. The overall aim of the regulation is the governance of deep synthesis technologies, a term defined very broadly in the draft, so that it also covers deepfake technology.
The draft law contains 25 articles that regulate activities using deep synthesis technologies to provide internet information services in the territory of China, as well as activities providing technical support to such services, while at the same time protecting the legitimate rights and interests of users. The draft is mostly targeted at so-called deep synthesis service providers (DSSPs), a term the CAC uses for all organizations that provide deep synthesis services as well as those that provide technical support to deep synthesis services.

Scope of the Regulation

A notable aspect of the draft is that, although it has been reported as mostly targeting deepfakes, the document actually takes a broader view of the capacity of algorithms to generate any type of content that can be understood under the broad meaning of "media." Article 2 defines which technologies are specifically covered: deep synthesis technology, i.e. the use of technologies employing generative sequencing algorithms, as represented by deep learning and virtual reality, to make text, images, audio, video, virtual scenes, or other information. The article then specifically identifies six sectors within which deep synthesis technology can generate content:

  • (1) Technologies for generating or editing text content, such as chapter generation, text style conversion, and question-and-answer dialogues;
  • (2) Technologies for generating or editing voice content, such as text-to-speech, voice conversion, and voice attribute editing;
  • (3) Technologies for generating or editing non-voice audio content, such as music generation and scene sound editing;
  • (4) Technologies for generating or editing biometric features such as faces in images and video content, such as face generation, face swapping, personal attribute editing, face manipulation, or gesture manipulation;
  • (5) Technologies for editing non-biometric attributes in images and video content, such as image enhancement and image restoration;
  • (6) Technologies for generating or editing virtual scenes such as 3D reconstruction.

Some commentators read the order of these sectors as an indication of which domains China considers most crucial: text-based AI-generated fake news comes first, and voice synthesis is placed ahead of video deepfakes in terms of potential impact. This probably stems from the fact that audio deepfakes have already been used in some major financial crimes, while video deepfakes have so far largely been confined to pornography-related offences.

Furthermore, specific provisions (Art. 4 and Art. 6) in the draft state that DSSPs should promote the advancement and improvement of their services in a way that respects Chinese laws and regulations, is in line with social mores and ethics, follows the mainstream orientation of public opinion, and adheres to the "correct political direction". Deep synthesis technology must not be used to engage in activities prohibited by Chinese laws and regulations, such as inciting subversion of state power or endangering national security. Article 6 also addresses infringements of the lawful rights and interests of others, such as their reputation, image, privacy, or intellectual property rights, and prohibits the use of deepfakes for pornography. In summary, the use of this technology would become quite restricted for both service providers and consumers.

Obligations on deep synthesis service providers

The majority of the provisions are addressed to service providers. One eye-catching obligation (Art. 13 - Art. 15) is to add marks, which must not affect users' usage, to deep synthesis information content produced using their services, so that the content identifies itself as synthetic and can be traced. This is complemented by an indicative list of domains in which marking is necessary when using deep synthesis technology; here, too, the primary focus is on text-based AI-generated fake news and on audio deepfakes. Where service providers discover that content has not been prominently labeled, they shall immediately stop the transmission of that information and label it before resuming transmission.

Attention is also given to the consent that must be obtained from anyone whose biometric data is used for a deepfake. Where service providers offer significant functions for editing biometric information, they shall prompt users of the deep synthesis service to inform the individual whose personal information is being edited and to obtain that individual's independent consent. Exceptions may, however, be established by other laws and regulations.

Furthermore, service providers are responsible for ensuring management procedures for algorithm review mechanisms, user registration, information content management, data security, and so on. They must review both the data input by users of the deep synthesis services and the synthesis results, and establish and complete a database of characteristics used to identify illegal and negative deep synthesis information content. They must also conduct security assessments and prevent information security risks when providing tools that can edit biometric information. In addition, deep synthesis service providers shall strengthen the governance of training data. On top of that, they need to establish rules, standards, and procedures for identifying illegal or adverse information, and take measures to penalize users who employ deep synthesis technologies to generate such information. Real-name identity verification must also be conducted for users of deep synthesis services in accordance with the law; providers may not offer information publishing services to unverified users. In any case, providers of deep synthesis technology (services) must register their pertinent applications with the state.

User-friendly complaint portals will also need to be developed, and service providers must publish expected time limits for processing such complaints, as well as offer 'rumor-refuting mechanisms'. The draft provides few details on these requirements, however, so it remains to be seen how they will be implemented in practice. Where the provisions of the draft are not respected, warnings and correction orders may be issued, as well as fines between 10,000 and 100,000 RMB (roughly €1,500 to €15,000).

Lastly, internet application store service providers shall perform security management responsibilities; where relevant state provisions are violated, they shall promptly take measures such as declining to list the application, suspending its availability, or removing it from the store.

Responsibility to track deepfakes

China's new regulation drafts are interesting for several reasons. Firstly, in most legal systems the existing laws regarding deepfakes put the responsibility on the creators themselves for uploading fake content, which shields the large organizations that offer deep synthesis services and the social media platforms. In the draft law, by contrast, and together with the newly proposed regulation on algorithmic recommendations, China targets the organizations that offer deep synthesis services and the platforms themselves. This appears to be the more pragmatic road, because deepfake creators are in practice rather hard to track. Secondly, it is also notable that pressure is placed on multiple actors across the value chain to comply with these rules: service providers, app stores, developers, platforms, and industry organizations.

There are, however, also many questions regarding the enforcement of this regulation. One reason is that deepfakes and manipulated media are notoriously hard to detect. Artificial intelligence needs a massive dataset of examples to learn from, which means that only the largest platforms would have the tools and datasets necessary to detect synthetic media. And even then, the most sophisticated detection algorithms are often not highly accurate.