Constitutional Collision: Questions to Assess the 2026 National AI Framework

Building on Section 8 of the December 2025 Executive Order,[1] the Trump administration’s “National Policy Framework for Artificial Intelligence” (the “Framework”), released on March 20, 2026, provides a formal legislative blueprint for advancing the administration’s goal of a uniform, national policy for AI across the United States.[2] While the Executive Order provided only the outline of a national legislative scheme, the Framework provides greater specificity regarding the statutory goals of an AI bill. Yet an omnibus statute incorporating provisions meant to achieve all these goals will likely face a gauntlet of constitutional challenges. This article explores a series of questions to assess the viability of several of these goals. While these questions address only a subset of the potential friction points, they are where the rubber meets the road for any legislation seeking to achieve the Framework’s goals.

Question 1: Can Congress preempt AI “development” when the Framework acknowledges valid state regulation of AI “use”?

The Framework attempts to navigate the conflict between federal and state action by distinguishing AI “use” from AI “development.” The Framework provides states with greater latitude to regulate AI use than AI development. Under Section VII, states retain “police powers” to enforce generally applicable laws—such as those protecting children or preventing fraud[3]—provided they do not “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI.”[4] By contrast, the Framework allows fewer carve-outs for AI development, which it characterizes as an “inherently interstate phenomenon with key foreign policy and national security implications.”[5]

Congressional authority to regulate AI in this manner derives, at least in part, from the Commerce Clause, which grants Congress the power to regulate interstate and international commerce. State attempts to challenge congressional authority in this domain face an uphill battle because the Supreme Court has interpreted the reach of the Commerce Clause broadly. For example, in Gonzales v. Raich, the Court held that Congress could regulate the growth of marijuana for personal use because such activity involves the production and consumption of “a fungible commodity for which there is an established, albeit illegal, interstate market.”[6] Interacting with an AI model can involve sending data to and from servers, thereby consuming internet bandwidth and electricity—both commodities for which there is a legal interstate market. Thus, courts may be hesitant to recognize state power to regulate AI development and use outside the enumerated carve-outs.

State efforts to prevent the preemption of state AI statutes may instead focus on recasting these statutes as governing “AI use” rather than “AI development.” By doing so, these statutes may leverage one of the carve-outs regarding the regulation of AI use. But the line between “development” and “use” is tenuous. For example, updating the weights in AI models like Gemini and ChatGPT appears to be “AI development,” but those weights change based on interactions with—and use by—a user: companies developing AI models, like OpenAI and Anthropic, use the information a user provides and the model’s responses to update model weights. Therefore, without clear definitions of “AI use” and “AI development,” legislation seeking to advance the Framework’s goals may have limited preemptive effect.

Question 2: Can the statutes that established federal agencies be read to cover the development and use of AI?

The Framework rejects the creation of a “new federal rulemaking body to regulate AI,” instead advocating for regulating “sector-specific AI applications through existing regulatory bodies with subject matter expertise.”[7] Leveraging existing subject matter expertise is pragmatic. For example, the Consumer Financial Protection Bureau (CFPB) possesses expertise in mortgage regulation, so it is reasonable to extend that expertise to AI applications as they relate to mortgages. At the same time, the expansion of agency rulemaking into AI regulation, aggregated across the multitude of federal agencies, may leave little room for meaningful state regulation of AI.

However, such federal regulatory expansion is limited by the major questions doctrine, which requires federal agencies to have “clear congressional authorization” when regulating matters of vast economic and political significance.[8] Congress authorized the creation of most federal agencies decades ago, while state regulation of AI has arisen only in the past few years. For example, the Federal Trade Commission (FTC) was created by the Federal Trade Commission Act of 1914.[9] Therefore, a court may be skeptical that an act signed into law over a century ago should be interpreted in a manner that preempts a state from regulating an AI tool developed in the last year.[10] Without a new, explicit grant of authority from Congress to each agency, federal agencies may struggle to overcome increasing judicial skepticism of agency power in the absence of explicit statutory text.

Question 3: Can the proposed redress mechanism overcome Article III standing hurdles?

The Framework advances the creation of a redress mechanism for Americans based on “agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.”[11] However, such a mechanism runs head-first into the wall of Article III standing, which requires a plaintiff to allege an “injury-in-fact” that is traceable to the defendant’s conduct and redressable by the court. The Supreme Court recently held that neither states nor social media users have standing to enjoin government interactions with social media platforms without (1) a clear causal link between the government action and the effect on social media content and (2) a substantial risk of future injury traceable to federal government action.[12] Without a clear link between government action and the specific output from the AI model, prospective plaintiffs may have difficulty using this redress mechanism to combat alleged censorship.

Achieving the Framework’s goals

The Trump administration’s “National Policy Framework for Artificial Intelligence” is an attempt to harmonize a 50-state regulatory patchwork into a single national voice. While the blueprint is ambitious, achieving its goals depends on more than just political will; it requires threading a needle to (1) clearly differentiate “AI use” from “AI development,” (2) survive a major questions inquiry, and (3) provide a basis for Article III standing. As the Framework transitions from a white paper to an omnibus bill, the legal friction points identified here will determine the extent to which such legislation effectively advances the Framework’s goals.

 

[1] Executive Order, Ensuring a National Policy Framework for Artificial Intelligence (Dec. 11, 2025), https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.

[2] Legislative Recommendation, A National Policy Framework for Artificial Intelligence (Mar. 20, 2026), https://www.whitehouse.gov/wp-content/uploads/2026/03/03.20.26-National-Policy-Framework-for-Artificial-Intelligence-Legislative-Recommendations.pdf.

[3] Ensuring a National Policy Framework for Artificial Intelligence (Dec. 11, 2025) at Sec. VII.

[4] Id.

[5] Id.

[6] 545 U.S. 1, 19-20 (2005).

[7] A National Policy Framework for Artificial Intelligence (Mar. 20, 2026) at Sec. V.

[8] West Virginia v. EPA, 142 S. Ct. 2587, 2596 (2022). See also FDA v. Brown & Williamson Tobacco Corp., 529 U.S. 120, 160 (2000) (“We are confident that Congress could not have intended to delegate a decision of such economic and political significance to an agency in so cryptic a fashion.”).

[9] President Woodrow Wilson signed the Act into law in 1914, https://www.ftc.gov/about-ftc/history.

[10] This is especially the case following the Supreme Court’s overruling of Chevron deference in Loper Bright Enterprises v. Raimondo and Relentless, Inc. v. Department of Commerce.

[11] A National Policy Framework for Artificial Intelligence (Mar. 20, 2026) at Sec. IV.

[12] Murthy v. Missouri, 144 S. Ct. 1972, 1987-89 (2024).