Summary
In its National Policy Framework for Artificial Intelligence, the US federal government outlines its priorities for a national governance framework on the development and deployment of AI in the United States.
What: Policy vision paper
For whom: Policy makers, diplomats, industry
URL:
Press release: https://www.whitehouse.gov/articles/2026/03/president-donald-j-trump-unveils-national-ai-legislative-framework/
On 20 March 2026, the US federal government unveiled its National Policy Framework for Artificial Intelligence. In this document, the US federal government describes the objectives to be achieved as part of a comprehensive American policy framework for artificial intelligence governance.
The main objective of the Policy Framework is to ensure that the US “wins the AI race” by implementing a commonsense national framework that allows all Americans to thrive, while still ensuring the American public’s trust in how AI is developed and used in their daily lives.
The National Policy Framework lists seven objectives that a US national policy framework for AI must achieve. To achieve these objectives, the US federal government calls on Congress to take the following measures:
1. To achieve Objective 1 Protecting Children and Empowering Parents
Congress should:
2. To achieve Objective 2 Safeguarding and Strengthening American Communities
Congress should ensure that residents do not experience increased electricity costs as a result of new AI data center construction and operation, and should streamline federal permitting for AI infrastructure to allow on-site power generation. Law enforcement capacity to combat AI-enabled impersonation scams and fraud should be augmented, and the appropriate national security agencies should possess sufficient technical capacity to understand frontier AI model capabilities and mitigate potential concerns. Through grants, tax incentives and national assistance programs, wider deployment of AI tools in American industry should be encouraged.
3. To achieve Objective 3 Respecting Intellectual Property Rights and Supporting Creators
While the US federal government states that it does not believe that the training of AI models on copyrighted materials violates copyright laws, it “acknowledges arguments to the contrary” and supports allowing the courts to resolve this issue. Thus, Congress should not preemptively resolve this discussion and should instead follow the development of precedents in the court system. Congress should, however, consider licensing frameworks or a collective rights system through which rights holders can negotiate compensation from AI providers, without addressing whether such licensing is required.
Congress should, however, establish a federal framework protecting individuals from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness or other identifiable attributes, with the exception of parody, satire, news reporting and other works protected by the First Amendment. Congress should also prevent abuse of such a framework to stifle free speech online. This means that platforms should retain immunity under US law, in particular Section 230 of the Communications Decency Act, for any decisions regarding the speech they generate or curate.
4. To achieve Objective 4 Preventing Censorship and Protecting Free Speech
The Framework is succinct and clear on this particular matter: Congress should prevent the US government from coercing technology providers, including AI providers, to ban, compel, or alter content based on “partisan or ideological agendas”, and should provide an effective means for Americans to seek redress from the federal government for any agency efforts to censor expression on AI platforms or to dictate the information provided by an AI platform. This matches prior reactions, in particular to EU regulations such as the Digital Services Act, which the Administration has already characterized as censorship in prior executive orders, an opinion that a Republican Congressional committee has also expressed in its Foreign Censorship Threat report.
5. To achieve Objective 5 Enabling Innovation and American AI Dominance
To remove barriers to innovation, Congress should establish regulatory sandboxes for AI applications and provide resources to make federal datasets accessible to industry and academia in AI-ready formats for use in training AI models and systems. Congress should not, however, create any new federal regulator for AI; it should instead support the development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards.
6. To achieve Objective 6 Educating Americans and Developing an AI-Ready Workforce
The US federal government finds that Congress should use non-regulatory methods to ensure that existing education, workforce training and support programs incorporate AI training. Congress should also expand federal efforts to study trends in task-level workforce realignment driven by AI in order to inform policies supporting the American workforce, and should use capacities at grant institutions to provide assistance, demonstration projects and AI youth development programs.
7. To achieve Objective 7 Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws
The US federal government should establish a single framework for AI governance that is applicable across the United States and prevent AI growth from being encumbered by a patchwork of state-based AI legislation. To this end, Congress should preempt state AI laws imposing undue burdens, so as to ensure a minimally burdensome national standard. The Department of Justice already has a task force litigating against state AI laws based on another executive order; this objective must be understood in the same line.
Congress’ national standard for AI policy should, however, respect key principles of US federalism and therefore not preempt a) traditional police powers of US States to enforce laws of general applicability, such as criminal laws, b) State zoning laws determining the placement of AI infrastructure, and c) any requirements on a State’s own use of AI, whether through procurement or services.
Preemption under a national policy should ensure that State laws do not govern areas that are better suited to the federal level or that could otherwise act contrary to the United States’ national strategy to achieve global AI dominance. Therefore, States should not be permitted to regulate AI development, which the Framework considers inherently interstate. States should also not be permitted to “unduly burden” Americans’ use of AI for activities that, without AI, would be lawful, nor to penalize AI developers for a third party’s unlawful conduct involving their models.
This document does not replace any existing legislation, but it does clarify where the US federal government wants Congress to take action and where it does not. Further details will likely follow from subsequent Congressional action. The document also reaffirms the US federal government’s commitment to reducing red tape for AI development, while accommodating some concerns, such as data centers drawing electricity from local communities and the publication of non-consensual deepfakes. The emphasis on preventing any form of “partisan bias” in AI systems is, however, likely to put the US on a collision course with other jurisdictions, such as the EU and the UK, where tech laws addressing such content are being adopted and implemented. EU-based industry actors and policy makers should therefore continue to follow such initiatives in order to balance sovereignty with the ability to trade with US tech vendors.
Koen Vranckaert