policy monitor

China – General and specific ethical principles for AI

The Chinese government has published a first set of general and specific ethical principles related to AI. The principles emphasize protecting the rights of users and maintaining human control. The promulgation of these principles is part of China's larger goal of becoming the global leader in AI by 2030.

What: Policy orientation document/ethical guidelines

Impact score: 4

For whom: AI developers, citizens, civil society organizations, policy makers

URL:

Government website (original version): https://www.most.gov.cn/kjbgz/202109/t20210926_177063.html + Translation

South China Morning Post: https://www.scmp.com/tech/big-...

Summary

The Chinese Ministry of Science and Technology has published the first (Chinese) set of ethical principles related to AI (title in English: New Generation Artificial Intelligence Ethics Specifications), focusing on users' rights and maintaining human control. The list was prepared by the National Governance Committee of New Generation Artificial Intelligence and follows earlier general statements of principle, such as those framed within the NGAIDP or issued by, for example, BAAI.

The promulgation of these principles fits within China's broader goal of becoming the global AI leader by 2030, but also within its efforts to limit the influence of China's Big Tech companies (Baidu, Tencent, etc.).

Six general principles

Six basic general ethical principles are promulgated:

  1. Improving human well-being: includes, among other things, respecting fundamental rights, but also promoting "man-machine harmony" and prioritizing public interests
  2. Promoting fairness and justice: includes, among other things, a concern for inclusiveness (e.g., vulnerable or underrepresented groups) and a fair distribution of the benefits of AI throughout society
  3. Protecting privacy and security: includes respect for applicable data protection principles
  4. Ensuring controllable and trustworthy AI: humans must retain real decision-making power, have the right to accept or refuse AI services, to terminate interaction with an AI system at any time and to stop the operation of an AI system at any time, and AI must remain under meaningful human control at all times
  5. Strengthening accountability: includes, among other things, the rule that the ultimate responsibility must always lie with human beings and that accountability mechanisms must be created
  6. Improving ethical literacy: includes the general dissemination of knowledge about AI and ethics and the strong promotion of the application of AI ethics

Specific principles for four domains

In addition, specific, applied principles are promulgated for AI-related business management and research, as well as for the provision and use of AI systems.

  • With respect to business management, it emphasizes, among other things, the need to actively integrate AI ethics into business processes, to conduct systematic risk assessments, and to foster interdisciplinary collaboration in order to promote inclusiveness.
  • Regarding research, it states that research on AI systems that would violate ethical and moral obligations is prohibited, that data quality must be ensured, and that research should focus on safe, transparent, understandable, robust (etc.) AI systems while avoiding biased or discriminatory results.
  • When it comes to the provision of AI systems, it stipulates, among other things, that a data or platform monopoly may not be used to distort the market, and that users must be informed whether AI is used in products or services. In addition, the functions and limitations of AI systems must be explained, and simple mechanisms must be provided so that users have a (real) choice of whether or not to use AI products or services, without being hindered in any way. Finally, emergency mechanisms must be provided and systemic risks avoided. (Chinese Big Tech companies are clearly being targeted here, as many of their services rely on recommender algorithms without this always being clear to users.)
  • With regard to the use of AI systems, it must be ensured that AI products or services are not used for illegal activities, do not compromise national, public or industrial security, and do no harm to the public interest.