The AI Action Plan and What It Means for US Governance Going Forward
On July 23, 2025, the White House formally released America’s AI Action Plan,[1] the centerpiece of a broader policy shift that also includes three related executive orders. Unlike recent state frameworks and the EU AI Act, which emphasize risk classification, transparency, and enforceable governance requirements, the federal plan outlines a strategy seemingly focused on accelerating US-based innovation, improving and scaling domestic infrastructure, and reinforcing national security priorities. The Action Plan reflects a policy shift away from the Biden era’s regulation-forward approach, placing less emphasis on regulatory oversight and more on promoting and enabling private-sector adoption of AI and expanding US-based semiconductor manufacturing and data center capacity. The Action Plan also seeks to promote worldwide adoption of the US AI technology stack and related standards through a combination of assertive export controls, efforts to secure AI-related supply chains, and a fast track for “full-stack”[2] AI packages that integrate a program of trusted technology, security, access, and use controls.
Read the introduction to our series on the AI Action Plan and America’s Evolving AI Posture here.
The Action Plan’s strategy is organized around three core pillars that will guide future federal priorities:
- Accelerating AI innovation by removing regulatory barriers and supporting open-weight model development
- Expanding domestic infrastructure with investments in chip manufacturing, data centers, and grid capacity
- Advancing US leadership in international AI diplomacy and security through export controls, supply chain protections, and coordination with allied governments
The plan appears to suggest a reduced federal role in setting AI governance—with the possible exception of export control enforcement and promotion (which A&M will address in a subsequent white paper). The White House has directed various federal agencies to withdraw prior executive orders and to revise frameworks such as the NIST AI Risk Management Framework (previously viewed as a benchmark standard), including by removing references to misinformation, DEI, and climate considerations.[3] It has also directed the Federal Trade Commission to revisit prior investigations and rulemaking efforts, signaling a broader intent to unwind aspects of the prior administration’s AI enforcement agenda.[4]
Although the federal plan emphasizes deregulation, it does not prevent state regulation. Indeed, several states have already moved forward with developing their own laws. California’s proposed Safe and Secure AI Act would require companies to disclose high-risk AI use cases and to complete impact assessments.[5] Colorado’s SB 205, effective February 2026, requires companies to implement risk-management programs, public disclosures, and bias-mitigation protocols for high-risk systems.[6] Texas recently passed the Responsible Artificial Intelligence Governance Act, which takes effect in January 2026. It prohibits AI-driven discrimination in employment and education, requires transparency for public-facing tools, limits biometric data collection, and introduces regulatory sandboxes for testing without triggering enforcement.[7] Other states, including Illinois, New York, and Connecticut, have AI-related bills under active consideration, reflecting continued momentum at the state level. These state-level efforts reflect a growing focus on transparency, explainability, and system accountability—issues on which the federal framework is largely silent.
The administration has suggested that, although it will allow states to implement their own legislation, it will keep its hand on one lever: state funding.[8] Federal agencies, it has signaled, may reconsider funding for jurisdictions that impose what the administration deems “burdensome AI restrictions.” Although it remains to be seen whether the federal government will follow through on this funding-related guidance, several state laws are already in effect or scheduled to take effect in 2026.
In addition to federal and state regulations and enforcement, the judiciary will play an important role in framing America’s approach to the use of AI. Several cases are playing out in the courts, including matters relating to the proper use and ownership of datasets scraped from various sources, traditional copyright violations, challenges to the “black box” nature of large language models, and product liability suits. Rulings in active and future legal battles could serve as another layer of requirements restricting and governing the use of AI.
The patchwork of state-level AI-related enforcement is not unique to the United States and exposes companies with a global footprint to significant and varied obligations. Countries around the world are developing their own AI-enforcement agendas. The EU AI Act, which came into force in August 2024, establishes a four-tier risk classification system, prohibits certain use cases (such as real-time biometric surveillance), and imposes extensive documentation, transparency, and post-market monitoring requirements on companies developing and deploying AI systems.[9] Under the EU AI Act, obligations for general-purpose AI models (like ChatGPT) begin in August 2025, while rules governing high-risk systems, such as those used in hiring, credit scoring, or public services, take force in August 2026. The EU law applies extraterritorially, meaning companies outside the EU may be subject to its requirements if their systems are offered in the EU market, used by EU residents, or impact individuals within the Union. Beyond the EU, other jurisdictions, including Canada, the United Kingdom, China, Brazil, and Singapore, are actively advancing their own AI regulations, adding further complexity to the global compliance landscape.
As we wrote in June, companies navigating these developments benefit from a structured, cross-functional approach to AI governance, awareness, and readiness. In today’s fragmented regulatory landscape, organizations should define and maintain internal governance standards that are clear, fact-based, transparent, explainable, and sufficient to withstand scrutiny from regulators across jurisdictions. A&M has developed a practical response framework to help legal departments, compliance leaders, and technical teams evaluate disclosures, mitigate litigation exposure, and respond to evolving enforcement priorities.[10]
Looking ahead, companies will need to reconcile relatively lax US federal AI governance with the increasingly assertive posture of state and international regulators. Conversely, AI providers, operators, and users may encounter more assertive export control enforcement by US authorities seeking to deny adversaries access to and use of advanced AI technology and systems.[11] Companies must develop governance models that scale globally and adapt to the evolving legal and operational landscape.
[1] The White House, Winning the Race: America’s AI Action Plan, July 23, 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[2] “Full-stack” includes “hardware, models, software, applications, and standards.” Winning the Race: America's AI Action Plan.
[3] “Trump administration to supercharge AI sales to allies, loosen environmental rules,” Reuters, July 23, 2025, https://www.reuters.com/legal/government/trump-administration-supercharge-ai-sales-allies-loosen-environmental-rules-2025-07-23.
[4] Winning the Race: America’s AI Action Plan; “Trump administration to supercharge AI sales to allies, loosen environmental rules,” Reuters, July 23, 2025, https://www.reuters.com/legal/government/trump-administration-supercharge-ai-sales-allies-loosen-environmental-rules-2025-07-23.
[5] Stanford Institute for Human-Centered Artificial Intelligence (HAI), 2025 AI Index Report – Chapter 6: Policy and Governance, May 2025, https://hai.stanford.edu/assets/files/hai_ai-index-report-2025_chapter6_final.pdf.
[6] Colorado General Assembly, SB24-205: Consumer Protections for Artificial Intelligence, Enacted May 2024, https://leg.colorado.gov/bills/sb24-205.
[7] “Texas Responsible AI Governance Act Enacted,” Wiley Rein LLP, June 26, 2025, https://www.wiley.law/alert-Texas-Responsible-AI-Governance-Act-Enacted.
[8] “States with strict AI laws could see federal dollars withheld under Trump’s new AI plan,” Business Insider, July 23, 2025, https://www.businessinsider.com/trump-admin-plans-block-funding-states-with-strict-ai-laws-2025-7.
[9] European Commission, Regulatory framework proposal on Artificial Intelligence, Updated July 2025, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
[10] Cameron Radis et al., “AI Litigation, Enforcement and Compliance Risk: A Structured Response Framework,” Alvarez & Marsal, June 4, 2025, https://www.alvarezandmarsal.com/thought-leadership/ai-litigation-enforcement-and-compliance-risk-a-structured-response-framework
[11] E.g., “Cadence Design Systems Agrees to Plead Guilty and Pay Over $140 Million for Unlawfully Exporting Semiconductor Design Tools to a Restricted PRC Military University,” U.S. Department of Justice, Office of Public Affairs, July 28, 2025, https://www.justice.gov/opa/pr/cadence-design-systems-agrees-plead-guilty-and-pay-over-140-million-unlawfully-exporting