June 4, 2025

AI Litigation, Enforcement and Compliance Risk: A Structured Response Framework

AI Compliance Risks Intensify Amid Shifting U.S. and Global Regulations

As companies integrate artificial intelligence (AI) more deeply into their operations, communications and product delivery, regulators and plaintiffs are paying closer attention to how companies describe, govern and deploy AI systems. From investor disclosures and marketing claims to backend model behavior, the gap between how companies describe AI tools and what those tools actually do has become a growing source of legal and regulatory scrutiny.

At the same time, the broader policy landscape is rapidly evolving. Companies that overstate their AI capabilities or maintain weak internal controls could face increased exposure due to potential violations of overlapping U.S. and international requirements. The EU Artificial Intelligence Act (EU AI Act),[1] adopted in 2024, begins to take effect this year. The U.S. legal landscape is also in flux: In early 2025, President Trump issued an executive order (EO) withdrawing the prior administration’s AI guidance, which primarily emphasized transparency, risk management and human oversight. The EO directs key U.S. federal advisors and agencies to develop a new national AI action plan, with formal guidance expected by mid-July. The new direction radically shifts the regulatory approach and introduces short-term uncertainty around compliance expectations for organizations marketing, using and selling AI.

Government-led and private litigation also reflects a shifting enforcement environment in which legal risks are materializing in real time. U.S. regulators such as the SEC and FTC have already asserted that inaccurate or misleading statements about AI may violate existing laws, including securities regulations, consumer protection statutes and unfair practices provisions. Meanwhile, consumer-filed class actions have begun targeting algorithmic bias, misuse of training data, lack of model transparency and unregulated outputs that could harm consumers or employees. In parallel, some state attorneys general have launched investigations and enforcement efforts aimed at deceptive AI practices and algorithmic harm. 

Companies can no longer afford vague intentions or loosely documented systems. A clear, defensible framework for AI oversight is no longer optional; it is critical.

How Alvarez & Marsal Can Help Navigate AI-Related Risk

Alvarez & Marsal (A&M) provides end-to-end support for organizations navigating legal, regulatory and reputational risks tied to artificial intelligence. Our team brings together deep technical and investigative capabilities, combining the expertise of AI developers, forensic technologists, data scientists, former regulators and prosecutors and compliance professionals. We support clients in both proactive risk assessments and reactive responses — often operating under privilege in coordination with counsel.

Our services are structured around four core areas:

  • AI Claims and Disclosures Risk Reviews (Proactive)
  • Governance and Control Evaluations (Proactive)
  • Investigation and Regulatory Response Support (Reactive)
  • Litigation and Class Action Readiness (Reactive)




[1] “Up-to-date developments and analyses of the EU AI Act,” The EU Artificial Intelligence Act, accessed May 22, 2025, https://artificialintelligenceact.eu/
