Italy has become the first EU member state to adopt a national AI law that explicitly aligns with the EU AI Act while also adding national supplements. The government presents the law as a way to accelerate innovation within the boundaries of public interest, with a strong emphasis on privacy, safety, transparency, and the protection of children.
The law applies across sectors and sets principles for domains such as healthcare, labor/workplace, public services, justice, education, and sport. Central to the law are the traceability of AI-driven decisions and human oversight (human-in-the-loop). These principles translate the EU requirements into national obligations and require organisations to adopt policies and documentation covering transparency, risk assessment, and accountability.
Oversight & institutional framework
Instead of creating a new authority, Italy assigns responsibilities to existing institutions:
- The Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN) take leading roles in AI development and safety.
- Sector-specific regulators (e.g. the Bank of Italy and Consob, the independent authority that regulates and supervises financial markets) retain powers over AI use in their domains (such as financial services and markets).
A striking national addition is the age limit: children under 14 may only access AI services with parental consent. This goes further than generic EU provisions and specifically targets platforms, educational apps, and consumer services. Providers will be expected to implement age verification and parental consent mechanisms.
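The access gate described above can be sketched as follows. This is a minimal illustration, not an implementation of any prescribed verification method: the law does not specify how age or consent must be verified, and the threshold constant, function names, and the assumption that consent status is already known are all hypothetical.

```python
from datetime import date

# Italy's national threshold (hypothetical constant name); stricter than generic EU provisions.
MIN_AGE_WITHOUT_CONSENT = 14

def years_old(birth_date: date, today: date) -> int:
    """Full years elapsed between birth_date and today."""
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )

def may_access(birth_date: date, has_parental_consent: bool, today: date) -> bool:
    """Gate access to an AI service: under-14 users need verified parental consent."""
    if years_old(birth_date, today) >= MIN_AGE_WITHOUT_CONSENT:
        return True
    return has_parental_consent
```

In practice the hard part is not this check but reliably establishing `birth_date` and `has_parental_consent`, which is where providers' age-verification and consent-collection mechanisms come in.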
The law also introduces new offences and aggravating circumstances:
- Harmful deepfakes and unlawful distribution of AI-generated content can lead to 1–5 years of imprisonment if damage is caused.
- Crimes such as identity theft or fraud committed with AI attract harsher penalties.
- For crimes against political rights (e.g. deception in electoral processes), the relevant penal articles provide for imprisonment of 2 to 6 years when AI is used to mislead.
This marks a shift from purely administrative sanctions to criminal deterrence.
On copyright and text-and-data mining (TDM), the law clarifies that AI-assisted works may be protected if they involve a human intellectual contribution. For TDM, use is in principle limited to non-protected content or scientific research by authorised institutions, which is stricter than some EU exemptions and enforces compliance more explicitly.
The Italian AI law also introduces mandatory labelling of AI-generated or AI-altered content to prevent deception. Audiovisual, broadcast and radio material that is wholly or partly created with AI in a way that could mislead must carry a clear AI marking, watermark, or audible notice at the beginning, end, and after ad breaks, unless the content is manifestly artistic, satirical, or fictional. Online video-sharing platforms must provide uploaders with a function to declare whether their videos contain AI-generated elements, and they are also required to adopt safeguards against misleading AI content. More broadly, editorial and informational media are subject to transparency duties, requiring AI-generated or AI-elaborated content to be clearly identified as such.
Sectoral rules – healthcare and labor
- Healthcare: AI may support diagnosis and care under conditions, with the physician remaining ultimately responsible and with a right to information for the patient. This requires clinical validation, logging, and transparency about AI’s role in treatment.
- Workplace: Employers must inform employees about AI use (e.g. in monitoring, scheduling, evaluation) and document how risks are mitigated. This reflects broader European labor law trends on algorithmic transparency.
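The traceability and human-oversight obligations in both sectoral bullets above boil down to keeping an auditable record of who reviewed each AI-assisted decision. A minimal sketch, with hypothetical field names and an in-memory list standing in for a real append-only audit store:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One traceable AI-assisted decision, with the responsible human on record."""
    system_id: str        # which AI system produced the suggestion
    subject: str          # pseudonymised patient or employee identifier
    ai_suggestion: str
    human_reviewer: str   # the physician/manager who remains ultimately responsible
    final_decision: str
    overridden: bool      # True if the human departed from the AI suggestion
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def log_decision(record: AIDecisionRecord, audit_log: list) -> None:
    """Append to the audit log (a plain list here, purely for illustration)."""
    audit_log.append(record)
```

Capturing `overridden` explicitly is what turns "human-in-the-loop" from a slogan into evidence: the log shows the reviewer actually exercised judgment rather than rubber-stamping the model's output.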
Public sector, education, sport & justice
The law also requires traceability and human oversight in government decision-making, education applications, and sport (such as performance analysis or refereeing assistance). In justice, AI must be handled cautiously, with an emphasis on explainability and human final decision-making to avoid bias and due process issues.
To strengthen Italy’s strategic AI position, the law allocates up to €1 billion through a state-backed venture capital fund that can make equity investments in SMEs and larger companies active in AI, cybersecurity, quantum, and telecoms. The fund is expected to crowd in private co-financing.
Relationship with the EU AI Act & implementation
Italy reiterates EU core obligations (risk-based approach, transparency, oversight) but adds national accents (youth protection, criminal law, sectoral rules, investment). Companies will face a dual compliance track: meeting EU regulation and Italy-specific requirements (e.g. age limit, sectoral protocols). Coordination between authorities (AgID/ACN, sectoral regulators, and the Garante, the data protection authority) will be key for predictable enforcement.
Practical implications for organisations
- Governance & DPIAs: review impact assessments with a focus on minors and criminal misuse risks (deepfakes/fraud).
- Transparency flows: update notice-and-consent processes for employees and consumers; implement parental consent where required.
- Human oversight: establish procedures for override and final clinical/operational responsibility.
- IP & TDM: clarify creative workflows and TDM bases; limit scraping to lawful corpora or research exceptions.
- Security: align with ACN expectations (supply chain, model security, logging).
- Funding: assess eligibility for the investment fund and possible co-financing opportunities.