19.12.2025

AI Sandboxes: Between Promises and Perils

Artificial Intelligence (AI) regulatory sandboxes, as introduced by the EU AI Act, are regulatory learning instruments designed to support responsible AI innovation while safeguarding safety, health and fundamental rights. They promise a supervised safe space for AI developers to test and validate systems, providing legal certainty and reducing regulatory burdens. 

In this piece, we argue that several unknowns remain regarding how AI sandboxes will function and be implemented, making it unclear whether sandboxes in their current form will be able to deliver on their intended purpose. To fully capture the potential benefits, sandbox design and implementation must address significant challenges, including an unclear participation scope, uneven resources, potential bureaucratic obstacles, and transparency concerns. We urge regulators to address these issues through the sandboxes’ awaited implementing acts or, where appropriate, within the Digital Omnibus initiative currently aimed at closing gaps in EU digital regulation.

What are the AI Sandboxes established under the EU AI Act?

The EU AI Act introduces a framework for AI regulatory sandboxes as a key governance mechanism. Defined in Article 3 (55) of the Act, a sandbox is a controlled framework set up by a competent authority, allowing AI system providers to develop, train, validate, and test innovations in collaboration with regulators. A sandbox plan outlines objectives, conditions, timelines, methodologies, and requirements for activities under regulatory supervision. In this context, understanding what sandboxes represent in practice is crucial. While their provisions will only take effect in August 2026, some Member States, such as Spain, have already launched national initiatives ahead of the sandboxes’ implementing act(s), which the European Commission is supposed to adopt under Article 58 (1) of the AI Act.

While there is no single agreed-upon model of what a sandbox should entail, the European Commission’s Joint Research Centre has emphasised regulatory learning across different types of experimentation spaces, including sandboxes, as a means for regulators to respond and adapt to rapidly evolving technologies. Other experimentation spaces exist, such as test beds and living labs, but these are primarily motivated by technology upscaling or co-creation. By contrast, regulatory sandboxes focus on generating techno-legal evidence by testing innovations and regulatory approaches under market conditions to improve legal certainty. The goal is to facilitate the development and testing of innovative AI systems (particularly from SMEs and startups) under regulatory oversight, allowing regulators and innovators to learn, adapt, and build trust, thereby creating proper incentives and conditions for innovation to occur.

Sector-specific sandboxes already exist in Europe, particularly in the financial and energy sectors, with Norway and France having set up privacy sandboxes. Unlike these sectoral initiatives, the approach of the AI Act, as set out in Article 57, is novel in mandating that each Member State establish at least one sandbox and allocate sufficient resources to meet the Act’s requirements. Member States can fulfil this obligation by participating in an existing sandbox instead of creating a new one. Drawing on existing sandbox experiences can help stakeholders situate AI sandboxes as part of the broader regulatory toolbox of experimentation spaces rather than an isolated regulatory learning novelty.

One of the purported hallmarks of the regulatory sandbox is the temporary relaxation of certain rules on issuing fines during testing (Article 57 (12)). This enables testing and validation to take place without the immediate risk of administrative fines, as long as guidance from competent authorities is respected. In practice, this could involve temporary waivers, streamlined obligations or phased compliance mechanisms. Unfortunately, the scope of the AI Act’s application in these scenarios remains somewhat ambiguous, while other applicable laws, such as the GDPR or sector-specific regulations, continue to apply during the sandbox process.

Who gets to participate in these sandboxes and why does it matter?

Numerous actors are involved in the sandbox process, with varying roles and responsibilities. Each EU Member State must designate a national competent authority responsible for creating an AI sandbox and admitting participants, providing the latter with a safe harbour that helps lower entry barriers and supports more inclusive regulatory learning. Besides these competent authorities, “providers or prospective providers of AI systems” may also participate in the sandboxes. While particular emphasis is placed on ensuring accessibility for SMEs, including startups (recital 139), the AI Act does not explicitly exclude larger companies from participation. Focusing on SMEs is a well-intentioned way to protect European innovators, but greater emphasis should be placed on the broader benefits of sandboxes for companies that, although not classified as SMEs, could also contribute substantially to European innovation.

Clarifying the scope of sandbox participation is particularly relevant given the structure of the EU economy: as the Draghi Report notes, only four of the world’s top 50 tech companies are European, and larger European firms tend neither to develop general-purpose AI (GPAI) models nor to operate very large online platforms or very large online search engines. However, many mid-sized or larger firms may possess the expertise and resources to deliver cutting-edge AI solutions. Critically, these companies may still need time and funding to strengthen their compliance capabilities amid global competition. Overlooking their access to sandboxes could mean missing opportunities for broader innovation and for insights that could help revive Europe’s waning competitiveness. The European Commission is already exploring similar action elsewhere: mid-sized companies are expected to benefit from simplified obligations under the GDPR, and test beds are being set up to accelerate the deployment of autonomous vehicles in the EU.

By incorporating civil society, innovation hubs, and standardisation bodies, recital 139 of the AI Act broadens the democratic legitimacy of AI experiments. At the same time, this diversity can also slow down sandbox efforts and increase administrative burden if coordination structures are unclear or fragmented. Therefore, while the AI Act currently requires national competent authorities to coordinate with the AI Board and report annually on sandbox implementation, we concur with the Digital Omnibus proposal to establish an EU-level sandbox for the actors and systems referred to in Article 75(1) of the AI Act, namely GPAI developers and AI systems embedded in very large online platforms or very large online search engines, while all other AI systems remain within the existing national scope.

What is the value of exit reports and knowledge sharing?

Under the AI Act, competent authorities must provide providers who participate in the AI regulatory sandbox with exit reports and documentation of the activities carried out, which can be used as proof of compliance. Exit reports are intended to document the outcomes of a system’s time in a regulatory sandbox and to guide its path to market. Yet the AI Act does not require the publication of these reports, not even of those parts that do not reveal trade secrets. Public availability of exit reports is contingent upon mutual agreement between the provider(s) and the national authority. As a result, lessons learned, best practices, and risk mitigation strategies developed within sandboxes may remain siloed within the participating organisations and authorities, limiting the potential benefits of these reports for other companies and the wider public, including civil society, which is supposed to be associated with the process.

Unfortunately, we believe the status quo suggests that the broader objectives of collaboration and knowledge-sharing that sandboxes are intended to promote will probably not be met. Critically, the siloing of knowledge might be further exacerbated given that industry is concerned about the financial (50%) and reputational (42%) risks that might arise from AI incidents. Such concerns could encourage selective public reporting, causing exit reports to present a veneer of safety and readiness that does not reflect a system’s actual performance or risk profile in live settings. Research on machinewashing, that is, misleading communication or actions about AI that create a facade of responsibility, warns that organisations may use selective disclosure to create an appearance of responsible AI while concealing unresolved risks. This problem is magnified by limited resources and expertise on the side of regulators, which may restrict the scope of testing and cause reasonably foreseeable risks to be overlooked.

Still, exit reports have an important role to play. For companies, the documentation can accelerate conformity assessment procedures to a reasonable extent and advance readiness for market entry, reducing uncertainty and easing the path toward authorisation. Successful completion of a sandbox process may also help signal reliability to investors and partners, reinforcing trust in the product. If made transparent, reports documenting lessons learned and effective mitigation strategies could help benchmark good practices, extending the benefits beyond the immediate participants of the sandbox. In other areas of EU digital regulation, high-profile cybersecurity incidents have prompted European entrepreneurs to advocate for enhanced reporting obligations under the Digital Omnibus initiative. For AI Act sandbox exit reports, however, a constructive approach could be to adopt a voluntary disclosure model, akin to the AI Literacy Repository, which could help mitigate knowledge siloing and encourage more transparent reporting.

From Promise to Action: Ensuring AI Sandboxes Deliver

AI sandboxes embody both promise and peril. They are envisioned as experimentation spaces for responsible innovation, where regulators and developers jointly test AI systems, anticipate risks, and refine oversight practices. If implemented inclusively, with sufficient resources and with unambiguous regulatory flexibility, they could deliver on this promise. The pitfalls, however, are equally significant. Limited resources may hinder implementation and oversight, and the absence of structured transparency schemes for exit reports risks siloing valuable lessons and limiting collective learning. Uneven participation and capacity disparities across Member States could also undermine the framework’s inclusiveness and exacerbate existing innovation gaps. Moreover, positive sandbox results should not be mistaken for guarantees of safety; doing so could fuel a false sense of security and enable machinewashing. Developing mechanisms to assess whether sandboxes genuinely support both regulators and participants is therefore crucial.

Ultimately, the success of the sandbox framework depends on whether EU institutions and Member States approach it as a genuine instrument of co-regulation and mutual learning or allow it to devolve into bureaucratic ritual. The challenge is to strike the right balance: sandboxes must remain flexible enough to foster innovation while robust enough to safeguard rights, ensure accountability, and strengthen public trust. Regulatory sandboxes for AI are only one element of the EU’s broader digital legislative stack, but their impact will hinge on coherent implementation and clarity. We therefore urge regulators to address these challenges through the implementing acts to be adopted pursuant to Article 58 (1), or, where necessary, within the broader Digital Omnibus initiative, and to consider perspectives that could help respond to the European urgencies underlined by the Draghi Report.

Disclaimer

This article was originally published on the KU Leuven website. It solely reflects the views of the authors and does not represent the position of the Knowledge Centre Data & Society.

The Knowledge Centre Data & Society is not responsible for the content of guest blogs and therefore will not correspond about the content. In case of questions, comments or concerns: contact the author(s)!

About

Copyright cover image: AdobeStock reference number: 1733857113

Authors


Aina Errando

Aina Errando is a doctoral researcher in the Media, Economics and Policy Unit at imec-SMIT, Vrije Universiteit Brussel, focusing on algorithms and news personalisation in media organisations.


Abdullah Elbi

Abdullah Elbi is a doctoral researcher at KU Leuven Centre for IT and IP Law, focusing on the regulation of AI, fundamental rights, data protection, biometrics, employee surveillance, and human oversight.


Alicja Halbryt

Alicja Halbryt is a philosopher of technology and human-centred designer with a focus on AI ethics and experience working across policy-making organisations and government.


Øystein Flø Baste

Øystein Flø Baste is a doctoral researcher at the Digital Welfare State project, Department of Public and International Law, University of Oslo.


Edoardo Peña

Edoardo Peña is a Regulatory Affairs Engineer for Autonomous Vehicles and General Safety at Nissan Motor Corporation, and a Doctoral Candidate in Electrical Engineering at KU Leuven (COSIC) researching Cybersecurity and Autonomous Driving Systems Governance. Opinions expressed are solely his own and do not express the views or opinions of his employer.