The EU AI Act introduces a framework for AI regulatory sandboxes as a key governance mechanism. Defined in Article 3 (55) of the Act, a sandbox is a controlled framework set up by a competent authority that allows AI system providers to develop, train, validate, and test innovations in collaboration with regulators. A sandbox plan outlines the objectives, conditions, timelines, methodologies, and requirements for activities conducted under regulatory supervision. Understanding what sandboxes represent in practice is therefore crucial. Although their provisions will only take effect in August 2026, some Member States, such as Spain, have already launched national initiatives ahead of the sandboxes’ implementing act(s), which the European Commission is required to adopt under Article 58 (1) of the AI Act.
Although there is no single agreed-upon model of what a sandbox should entail, the European Commission’s Joint Research Centre has emphasised regulatory learning across different types of experimentation spaces, including sandboxes, as a means for regulators to respond and adapt to rapidly evolving technologies. While other experimentation spaces exist, such as test beds and living labs, these are primarily motivated by technology upscaling or co-creation. By contrast, regulatory sandboxes focus on generating techno-legal evidence by testing innovations and regulatory approaches under market conditions to improve legal certainty. The goal is to facilitate the development and testing of innovative AI systems (particularly those from SMEs and startups) under regulatory oversight, allowing regulators and innovators to learn, adapt, and build trust, thereby creating the proper incentives and conditions for innovation to occur.
Sector-specific sandboxes already exist in Europe, particularly in the financial and energy sectors, and Norway and France have set up privacy sandboxes. Unlike these sectoral initiatives, the approach of the AI Act, as set out in Article 57, is novel in mandating that each Member State establish at least one sandbox and allocate sufficient resources to meet the Act’s requirements. Member States can fulfil this obligation by participating in an existing sandbox instead of creating a new one. Drawing on existing sandbox experiences can help stakeholders situate AI sandboxes as part of the broader regulatory toolbox of experimentation spaces rather than as an isolated regulatory-learning novelty.
One of the purported hallmarks of the regulatory sandbox is the temporary relaxation of certain rules on issuing fines during testing (Article 57 (12)). This enables testing and validation to take place without the immediate risk of administrative fines, provided that guidance from competent authorities is respected. In practice, this could involve temporary waivers, streamlined obligations, or phased compliance mechanisms. Unfortunately, the scope of the AI Act’s application in these scenarios remains somewhat ambiguous, and other applicable laws, such as the GDPR or sector-specific regulations, continue to apply throughout the sandbox process.