Guest Blog

Good intentions, unintended consequences? How the Proposal for the AI Act might kick innovation out of the EU

22.10.2021

Summary: the proposal for an AI Regulation introduces technology-oriented due diligence and compliance requirements for certain AI systems. It also bans a number of those AI systems. According to the author, this threatens to slow down the innovation of AI systems. There is, says Jan Czarnocki, a better way to both secure innovation and avoid risks to fundamental rights: technology-neutral regulation of AI aimed at abusive practices.

About the author: Jan Czarnocki is a doctoral researcher and Marie Skłodowska-Curie Fellow at the KU Leuven Centre for IT & IP Law, where he works on issues related to privacy and biometrics protection in health and activity tracking. He holds an LL.M. degree in comparative law from the China University of Political Science and Law in Beijing and a master's degree in law from the University of Warsaw. He was an exchange student at Peking University Law School and spent two years in Beijing studying Chinese law and deepening his knowledge of Chinese history, culture, and politics. After graduation, he came to Brussels, where he has been a trainee in the External Policies Directorate of the European People’s Party Group in the European Parliament and a “European View” editor-intern at the Wilfried Martens Centre for European Studies. Jan speaks fluent English and Mandarin Chinese and is currently learning French and Dutch.

With this opinion piece we want to contribute to the ongoing debate on AI. The contribution appeared on 20 July on the website of CiTiP KU Leuven. The Kenniscentrum republishes this blog (with the author's permission) because we believe it is of interest to our audience; its content does not necessarily reflect the views of the Kenniscentrum.


Good intentions, unintended consequences? How the Proposal for the AI Act might kick innovation out of the EU

BY JAN CZARNOCKI (@JANCZARNOCKI)

The new proposal for the AI Act introduces technology-oriented due diligence and compliance requirements for AI systems. It also bans some of them. In this way, it risks limiting AI innovation and driving innovators out of the EU. There is a better way to keep innovation inside the EU and risks to fundamental rights outside of it: technology-neutral regulation of AI aimed at abusive practices.

The tough art of regulation

It is said that the road to hell is paved with good intentions. For the EU, hell is where innovation and the development of AI systems, a huge part of the current and future economy, migrate outside its borders. The EU's AI legislative activism might be the road there. Although this activism gives me and my fellow lawyers plenty of work (for which I am grateful), it gives businesses and innovators a headache. The subtle art of regulating without hampering business and innovation is what the democratic process and legislators should aim for, but the effects are usually ambiguous. During the legislative process in the EU, political forces mix in a cauldron, pushing their agendas and clashing interests. The final bargain reflects the current power structure, is often far from the initial idea, and is frequently suboptimal. It is therefore not surprising that the European Commission (EC) puts forward drafts with maximized objectives whenever it initiates a legislative procedure; the EC is probably aware that these objectives will soon be watered down.

The same applies to the recently proposed AI Act. It defines and classifies AI systems according to the risks they pose to fundamental rights. It then bans the most harmful ones and puts in place compliance and due diligence procedures for high-risk AI systems before they may be introduced to the Single Market. For some systems, the Proposal also sets transparency and information requirements. Much will probably change in the Proposal before it takes effect, but its basic premises are already in place and will not be easy to change. That is why, although the objectives of the Proposal are reasonable, its effect might not be the development of trustworthy AI that respects fundamental rights, but simply a lack of AI innovation in the EU. I believe that will be the outcome if the final AI Act retains its basic tenets and is as complex and complicated as the Proposal.

It is a burden, especially for smaller players

Regardless of whether they are equitable and right, the current basic requirements in the EC's Proposal, as well as its overall direction, will put a huge administrative burden on AI developers and users. It is justified to hold technological giants accountable for trespasses against our rights, a hallmark of current platform capitalism. But not at the price of hampering the innovative efforts of smaller players, especially because fundamental rights breaches in the age of data are usually committed by data monopolies. Legal rules are usually unintelligible to everyone except lawyers, however well designed they are. For smaller players, it will thus be impossible to work on AI systems without legal counsel. How else will developers know whether their prospective AI systems count as harmful, or how to introduce them into the Single Market?

The lion's share of software development (including AI systems) in recent decades was driven by small businesses that later became giants. Freedom to act and few legal constraints allowed them to grow. Currently, no significant burdens are placed on smaller players developing AI systems (apart from, e.g., the GDPR). This will change once the AI Act is implemented, if its basic tenets are not changed. Instead of hiring a team of lawyers, it will be easier to set up a business in London or Zurich and avoid the chilling effect on innovation in the EU. No AI sandboxes (the idea, in Article 53 of the Proposal, of establishing safe spaces for start-ups to test AI systems) will remedy this, simply because elsewhere they will not be required. Innovators will move to where regulatory requirements are less stringent and they can act faster. They will not care about the EU.

The EU might not care either, counting on the political clout of its huge Single Market. Still, no one knows whether an AI Brussels Effect (in brief, the effective necessity for states around the world to adopt the EU's standards in order to gain access to its Single Market) will occur. Nor do we know whether the Proposal's jurisdiction can be bypassed, something that proved hard in the case of personal data processing under the GDPR; if it cannot, any system used by EU citizens will have to comply with the future AI Act. So it is still possible that AI systems developed outside the EU will fall within the scope of the regulation once EU citizens use them. But is it worth it for the EU to paint itself as effectively hostile to innovation?

The way to go

Protecting fundamental rights is a priority, but it needs to be reconciled with the economy's ability to grow and innovators' ability to innovate. Most abuses of fundamental rights, and most of AI systems' potential to commit them, stem from the concentration of data market power or from law enforcement overreach, where AI is merely a tool, rather than from AI systems themselves. The focus should therefore be on particular practices and on the broader risks of the platform economy, rather than on AI systems as such. Good examples of this approach are the proposals for a Digital Services Act and a Digital Markets Act. They target big established entities and abuses of their data power, striking where the real danger lies, and they do so by creating a catalog of prohibited practices rather than by regulating the technology itself. Regulating AI systems should follow this path, focusing on abusive practices related to AI systems rather than on the technology. In this way, AI systems will be free to develop further, while EU citizens are safeguarded from the harmful practices related to them.