United States | Publication | December 2025
Technology evolves rapidly, and artificial intelligence (AI) governance is evolving faster than ever, at times on a daily basis. As the year draws to a close and states like Texas prepare to enforce their state-level AI governance laws,1 the executive branch has weighed in. Yesterday, President Trump signed Executive Order 14365 (the AI EO), which takes a first step toward AI regulation at the federal level and directs an investigation of current state-level laws for potential inconsistencies with federal law and policy.2
This article begins with a summary of the AI EO’s key directives. It then provides a more detailed discussion and concludes with key takeaways and a look toward the future.
The AI EO aims to “remove barriers to and encourage adoption of AI applications across sectors.” To advance this goal, the AI EO seeks to avoid a “patchwork of 50 different regulatory regimes” and moves toward a national policy standard for AI governance that is “minimally burdensome.”
The AI EO does not itself establish that national standard, nor does it preempt, repeal or otherwise invalidate any state-level AI law. Instead, the AI EO takes several actions—discussed below—that could lead to preemption of, changes in or agreements not to enforce state-level AI laws and may also lead to new federal legislation around AI.
For now, all state-level AI governance laws remain in effect (or will go into effect as scheduled), and companies should act accordingly. However, companies should monitor this space closely, as the AI EO’s incentives and penalties may quickly change which laws remain in effect or under enforcement.
Though the AI EO does not create a regulatory framework, it instructs the Attorney General to establish an “AI Litigation Task Force” within 30 days to “challenge state AI laws inconsistent with [US] policy[.]” The AI EO instructs the Attorney General to challenge laws that (1) are inconsistent with the policies expressed in the AI EO, (2) unconstitutionally regulate interstate commerce, (3) are preempted by federal regulation or (4) are otherwise unlawful in the Attorney General’s judgment. This directive is quite broad and could lead to a wide range of challenges. Given the short timeframe to establish the task force, the first challenges may emerge shortly.
The AI EO further directs the Secretary of Commerce to evaluate all existing state AI laws and to publish an evaluation identifying those the Secretary determines to be “onerous laws that conflict with the policy” set out in the AI EO. When publishing that evaluation, the Secretary may also refer certain state laws to the AI Litigation Task Force. The evaluation must specifically identify state laws that (1) require AI models to alter their “truthful” outputs (the order does not currently define what constitutes a “truthful” output) or (2) compel AI developers or deployers to disclose or report information that might violate constitutional rights. The Secretary is also encouraged to identify state AI laws that promote AI innovation consistent with the order’s policies.
The AI EO also contains two mechanisms to dissuade states from enacting, or to penalize states that enact, AI laws the Secretary of Commerce finds to be “onerous.”
First, if the Secretary of Commerce finds that a state AI law is “onerous,” the Secretary must, within 90 days of the order, issue a notice specifying the conditions under which that state will be eligible (or ineligible) to receive funding from the Broadband Equity, Access, and Deployment (BEAD) Program.
Second, the order directs executive departments and agencies to assess their discretionary grants and determine whether to condition those grants on states either (1) not enacting laws that conflict with the AI EO or (2) entering a binding agreement to not enforce their state-level AI laws.
Companies should monitor this space closely to determine whether any states will decline to enact their own laws or enter agreements not to enforce them. Any such agreements would have significant ramifications for companies operating in those states.
Some states, like Texas and California, have enacted state-level AI governance laws that require AI developers and deployers to submit certain reports and disclosures regarding their AI models. The AI EO directs the Federal Communications Commission (FCC) to initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard that would preempt those state laws.
The AI EO directs the Chairman of the Federal Trade Commission (FTC) to issue a policy statement on how the FTC’s prohibition on unfair and deceptive acts or practices applies to AI models. Specifically, the policy statement must explain whether state laws that “require alterations” to the “truthful” outputs of AI models are preempted by the FTC’s prohibition on deceptive acts or practices. The policy statement is to be issued within 90 days of the date of the order, or by March 11, 2026.
The AI EO directs executive advisors and assistants to prepare a legislative recommendation that establishes a uniform federal policy framework for AI that preempts state AI laws conflicting with the policy goals of the AI EO.
However, the AI EO does have a large carve-out for some provisions of state AI laws. The AI EO states that the recommendation shall not propose preempting lawful state AI laws relating to:
While the first three carve-outs are relatively clear, the “other topics as shall be determined” carve-out is forward-looking and thus has uncertain parameters. This provision allows the executive’s legislative recommendation to carve out (and potentially incorporate) as-yet-unidentified provisions of state AI laws that align with the AI EO’s policy goals.
The signing of this executive order marks a landmark step for the AI industry in a year of significant developments. While the order itself does not directly change any state-level AI laws, the framework it creates will likely produce significant changes in the coming weeks and months, whether through preemption, litigation instituted by the newly created AI Litigation Task Force, agreements by states not to enforce their laws to avoid losing eligibility for certain federal funding, or state and federal reactions to the forthcoming legislative recommendation.
The AI landscape is evolving once again. This is not unprecedented; rapid change is a fact of life in this sector. For the time being, companies should take a wait-and-see approach to the order: continue to comply (and prepare for future compliance) with relevant state-level AI governance laws, and monitor this space closely so they can adjust quickly as new developments arise.
We will continue to monitor these developments closely. For advice regarding this order’s practical implications or for representation related to this order or other state AI laws, please contact us.
1. For information on Texas’s AI governance law, see our latest article: The Texas Responsible AI Governance Act.
2. Exec. Order No. 14365, Ensuring a National Policy Framework for Artificial Intelligence (Dec. 11, 2025) (unpublished).
© Norton Rose Fulbright LLP 2025