13.11.2025

United States – California’s Transparency in Frontier AI Act is signed into law

Introduction – On 29 September 2025, California Governor Gavin Newsom signed Senate Bill No. 53, also known as the Transparency in Frontier Artificial Intelligence Act. By signing this law, the state of California follows jurisdictions such as the European Union that have opted for a risk-based regulatory framework. For California, the law marks a significant legal milestone, imposing various transparency-related obligations on frontier AI developers. In this blog, we delve deeper into these obligations and other upcoming measures that aim to enhance both the safety of AI models and innovation.

What: Law

 

Impact score:

 

For whom: Government, policy makers, businesses and citizens

 

URL: Senate Bill 53: Bill Text - SB-53 Artificial intelligence models: large developers. 

Background – On 29 September 2025, California Governor Gavin Newsom signed Senate Bill No. 53, also known as the Transparency in Frontier Artificial Intelligence Act (“TFAIA”). With this law, the state of California follows jurisdictions such as the European Union that have opted for a risk-based regulatory framework for high-impact AI systems. California, home to Silicon Valley, currently hosts more AI companies than any other state and has become the first state in the United States to enact a comprehensive framework for the transparency, safety and accountability of advanced AI models. The TFAIA can be seen as a watered-down version of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which Governor Newsom vetoed in 2024 and which would have required a mandatory annual third-party audit of frontier models and introduced a “kill switch” to shut down models when necessary.

 

Scope – The TFAIA will initially apply to a small number of companies, though this number is expected to grow. The law imposes its obligations on “frontier developers”: entities that have trained or are training a frontier model. A “frontier model” is defined by a compute threshold, namely a model trained, fine-tuned or modified using more than 10^26 integer or floating-point operations. “Large frontier developers” are frontier developers whose annual gross revenue exceeded five hundred million dollars in the prior calendar year; they are subject to additional obligations. To keep the law future-proof, the TFAIA allows the California Department of Technology to submit annual recommendations on these definitions to the legislature, ensuring that they reflect technological developments, the scientific literature and widely accepted national and international standards.
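Purely as an illustration of this two-tier structure, the sketch below encodes the compute and revenue thresholds from the bill as a simple classification. The function and field names are hypothetical, and the statutory definitions naturally control in any real assessment.

```python
# Illustrative sketch only: the thresholds come from the TFAIA's definitions,
# but this simplified check is not legal advice and the names are hypothetical.

FRONTIER_COMPUTE_THRESHOLD = 10**26    # integer or floating-point operations
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # gross revenue, prior calendar year

def classify_developer(training_ops: float, prior_year_revenue_usd: float) -> str:
    """Rough two-tier classification under the TFAIA's definitions."""
    if training_ops <= FRONTIER_COMPUTE_THRESHOLD:
        return "not a frontier developer"
    if prior_year_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        # Subject to additional obligations, e.g. the Frontier AI Framework.
        return "large frontier developer"
    return "frontier developer"

# Example: a model trained with 3e26 operations by a company with $2B revenue.
print(classify_developer(3e26, 2_000_000_000))  # -> "large frontier developer"
```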

 

Frontier AI Framework – The TFAIA places transparency obligations on frontier developers, and these obligations take several forms. One obligation for large frontier developers is the publication of a Frontier AI Framework. This framework should contain the technical and organizational protocols used to manage, assess and mitigate catastrophic risks. It should also describe the (inter)national best practices the developer follows and how the developer engages third parties to evaluate the potential for catastrophic risks and the effectiveness of mitigations. The framework can be seen as a self-regulating instrument, given that the large frontier developer must write, implement and also comply with it. These disclosures must be updated annually and whenever material modifications are made to the framework.

 

Transparency Report – Another transparency obligation is the publication of a Transparency Report by every frontier developer. This report should include, among other things, the model’s release date, a channel through which a natural person can communicate with the developer, the intended uses of the model and any applicable restrictions or conditions.

 

Critical Safety Incidents – The TFAIA also requires the California Office of Emergency Services (“Cal OES”) to set up a mechanism through which a frontier developer or a member of the public can report a critical safety incident. Frontier developers must report incidents to Cal OES within 15 days of discovery. When a critical safety incident poses an imminent risk of death or physical injury, the developer must notify an appropriate authority within 24 hours of discovery.

 

Whistleblower protections – The TFAIA also increases transparency through whistleblower protections for those working for developers of frontier models. It introduces protective measures for employees of all frontier developers who report violations or raise concerns, and it prohibits employers from retaliating against those employees. In addition, large frontier developers must set up an internal process through which employees can anonymously disclose information to the developer.

 

Enforcement – A frontier developer’s failure to comply with the TFAIA can have far-reaching consequences. Depending on the severity of the violation, the frontier developer may face a civil penalty, enforceable by the California Attorney General, of up to one million dollars per violation.

 

A consortium to advance safe AI – Besides introducing a regulatory framework for frontier models, the TFAIA promotes research and innovation. The law establishes a consortium tasked with creating a public cloud computing cluster named “CalCompute”. This state-backed initiative is intended to foster the safe, ethical and sustainable development and deployment of artificial intelligence that benefits the public, and to enable innovation by expanding access to computational resources.

 

Good things never come alone? – The TFAIA is not the only AI-related law that takes effect on 1 January 2026. Assembly Bill No. 2013 will also apply from that date, obliging developers to publish a high-level summary of the datasets used to train generative AI systems or services made available for public use. And Senate Bill No. 942 requires providers of generative AI systems with over one million monthly visitors or users to make an AI detection tool available to users at no cost.

 

Conclusion – With the signing of the TFAIA, California has reached a milestone, imposing various obligations in the name of enhanced transparency. The law is also designed to stay future-proof, as its definitions must be reviewed periodically. With initiatives such as CalCompute, California demonstrates its commitment to providing not just AI services as such, but AI services that are both safe and innovative.

Author

Shannen Verlee