As artificial intelligence systems became increasingly integrated into consumer products and services in 2025, regulators and private litigants alike began testing how established privacy and consumer-protection laws apply to the collection, use, and monetization of personal data in AI development and deployment.
Over the course of the year, federal and state enforcement agencies intensified their scrutiny of AI practices, focusing in particular on unsubstantiated marketing claims, opaque data-use disclosures, and potential risks to children. At the same time, private litigants advanced a range of novel legal theories, challenging everything from AI training practices to the operation of automated chatbots under longstanding electronic-communications statutes. Courts responded with mixed results, signaling early but significant trends on disclosure and consent obligations and on the difficulty of applying traditional privacy laws to new technologies.
This article reviews significant litigation trends related to privacy involving AI in 2025, highlighting regulatory actions, private lawsuits, and legislative and judicial developments that will shape the future of AI governance. To remain informed about these trends, subscribe to the WilmerHale Privacy and Cybersecurity Law Blog.
I. Consumer-Protection Actions
In 2025, government enforcers continued to scrutinize the data-handling practices surrounding AI models and services. Although many enforcement actions did not rest solely on privacy-related legal theories, they nonetheless reflected growing concern about how companies manage, use, and represent data in connection with AI.
State Actions
State attorneys general ramped up their consumer-protection and privacy enforcement efforts around AI throughout 2025. As discussed in a recent blog post, AGs from across the political spectrum have expressed particular concern about AI chatbots. In August 2025, for instance, a bipartisan group of state AGs jointly warned leading AI developers that they would be held responsible for harms arising from AI systems’ access to and use of consumer data, especially where such systems may affect children.1 Texas’s attorney general, meanwhile, announced an investigation into companies whose chatbots allegedly held themselves out as providing therapeutic or mental-health services.2 This trend suggests that, even without comprehensive federal AI legislation, state regulators stand ready to use existing consumer-protection tools to shape AI product design and data governance practices.
Federal Actions
On the federal front, the Federal Trade Commission (FTC) continued to exercise its consumer-protection authority, scrutinizing companies that develop or deploy AI tools, particularly those making allegedly false or unsubstantiated marketing claims. Toward the end of the Biden administration, the FTC launched “Operation AI Comply,” a campaign aimed at curbing misleading claims about AI capabilities. That effort largely continued under the Trump administration, but late in the year the agency signaled a willingness to revisit prior decisions in light of shifting executive priorities favoring lighter-touch regulation of AI, finding in at least one instance that a prior consent order “unduly burden[ed] innovation in the nascent AI industry.”3
The FTC brought several Section 5 actions against companies accused of overstating the capabilities or benefits of their AI products, seeking injunctions, monetary relief, and, in one case, a permanent ban on offering AI-related services.4 Separately, the agency distributed more than $15 million to consumers in connection with allegations that a developer stored, used, and sold consumer information without consent.5 That action underscored the links between traditional privacy theories and consumer-protection enforcement for AI developers.
The FTC also invoked its Section 6(b) investigatory authority to examine the practices of technology companies offering AI-powered chatbots and companion tools.6 These inquiries sought detailed information about data-collection practices, model-training methods, retention policies, and safeguards for minors, including compliance with the Children’s Online Privacy Protection Act.7 Although the inquiries have not yet yielded sweeping public enforcement results, they reflect the FTC’s attention to the privacy implications of AI chatbots and foreshadow areas likely to draw future scrutiny.
Private Actions
Private plaintiffs have also begun to advance increasingly creative consumer-protection theories challenging AI development and deployment. In one case, a plaintiff alleged that a company unlawfully exploited the “cognitive labor” generated through users’ interactions with its AI system by capturing and using that data without compensation.8 Although the court ultimately dismissed the claims for failing to state a legally viable theory, the case illustrates the innovative, and at times expansive, strategies plaintiffs are using to frame AI data practices as deceptive or unfair.
II. Privacy Laws
A parallel, and increasingly significant, strand of AI-related litigation in 2025 involved efforts to apply existing electronic-communications and privacy statutes to AI-enabled tools and data-collection practices. Courts were asked to decide whether longstanding prohibitions on unauthorized interception, disclosure, or misuse of personal information could reach technologies that augment or replace human interactions, collect data at massive scale, and repurpose that data for model improvement.
AI Chatbots and Electronic-Communications Statutes
Several cases tested whether AI chatbots deployed in customer-service and consumer-interaction contexts could be treated as unlawful interceptors under state and federal electronic-communications laws. In Taylor v. ConverseNow Technologies, for instance, a federal court allowed a putative class claim under the California Invasion of Privacy Act (CIPA) against a SaaS company whose AI assistant processes customer calls to proceed past the motion-to-dismiss stage.9 The court focused on whether the chatbot provider could be considered a “third party” interceptor, distinguishing between data used for the consumer’s benefit and data used for the provider’s own commercial advantage, including system improvements. Because the consumer data allegedly served both purposes, the court found the wiretapping claims under CIPA plausible.10
Other courts, by contrast, have been skeptical of extending electronic-communications statutes to AI training practices. In Rodriguez v. ByteDance, for example, the court dismissed claims under CIPA and the federal Electronic Communications Privacy Act, concluding that allegations that personal data had been used for AI training were too speculative absent concrete facts showing interception or disclosure.11
AI Training Data and Invasion-of-Privacy Claims
Other suits alleged that companies collected or repurposed consumer data without adequate disclosure or consent. In Riganian v. LiveRamp, for example, a putative consumer class survived a motion to dismiss after alleging that a data broker used AI tools to collect, combine, and sell personal information drawn from both online and offline sources.12 The court held that the plaintiffs had adequately alleged invasive, non-consensual data practices supporting common-law privacy claims under California law, as well as claims under CIPA and the federal Wiretap Act.
III. Related Developments—State Legislative Action and the Courts
As privacy-related AI litigation continued to evolve in 2025, state legislatures and judicial systems also took steps that may influence the future of such litigation.
As our team at WilmerHale has noted, state legislatures nationwide turned their attention to AI regulation throughout 2025, with California, Colorado, and Texas advancing legislation specifically tailored to AI systems. More than half the states enacted laws addressing privacy harms from the creation and distribution of “deepfakes,” maliciously manipulated digital depictions of a person’s likeness or voice.13 Lawmakers also targeted AI-related privacy and data-transparency concerns more broadly, particularly regarding customer-service bots and the potential for bias in AI model outputs.14 And state legislators and AGs consistently opposed federal preemption of state AI laws, seeking to preserve a role for the states in AI governance.15
Courts, too, have emerged as important players in AI governance. The Arkansas Supreme Court, for example, adopted a rule requiring lawyers to verify that AI tools used in connection with court matters do not retain or reuse sensitive data, warning that noncompliance could constitute professional misconduct. Similarly, court systems in New York and Pennsylvania issued guidance restricting uses of generative AI that could jeopardize client confidentiality or the integrity of the judicial process.16
* * *
For companies developing or deploying AI technologies, it is crucial to monitor this rapidly evolving landscape as courts, regulators, and legislators continue to define the permissible bounds of data use. WilmerHale offers deep experience with AI-related matters across its litigation, regulatory, transactional, and intellectual property practices. The firm regularly advises developers, testers, and deployers of AI models and applications, guiding clients through the complexities posed by emerging federal, state, and international AI legislation and regulation. WilmerHale will continue to monitor these trends and to support clients in navigating the nuanced legal challenges associated with AI.
We invite you to join us at one of our East Coast offices (DC, NYC, or Boston) for a practical update on what lies ahead in 2026, including new state privacy and AI laws, enforcement and litigation trends, breach risks, and actionable compliance strategies. A networking reception will follow the briefing, and CLE credit will be available.
Full details and an RSVP link can be found here.
Footnotes:
1. Joint Letter to AI Industry Leaders on Child Safety, Nat’l Assoc. Attys. Gen., https://www.naag.org/policy-letter/joint-letter-to-ai-industry-leaders-on-child-safety (Aug. 25, 2025).
2. See Attorney General Ken Paxton Investigates Meta and Character.AI for Misleading Children with Deceptive AI-Generated Mental Health Services, Tex. Att’y Gen., https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-investigates-meta-and-characterai-misleading-children-deceptive-ai (Aug. 18, 2025).
3. FTC Reopens and Sets Aside Rytr Final Order in Response to the Trump Administration’s AI Action Plan, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/12/ftc-reopens-sets-aside-rytr-final-order-response-trump-administrations-ai-action-plan (Dec. 22, 2025).
4. See, e.g., FTC Sues to Stop Air AI from Using Deceptive Claims About Business Growth, Earnings Potential, and Refund Guarantees to Milk Millions from Small Businesses, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-sues-stop-air-ai-using-deceptive-claims-about-business-growth-earnings-potential-refund (Aug. 25, 2025); FTC Case Against E-Commerce Business Opportunity Scheme and Its Operators Results in Permanent Ban from Industry, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-case-against-e-commerce-business-opportunity-scheme-its-operators-results-permanent-ban-industry (Aug. 25, 2025); FTC Approves Final Order Against Workado, LLC, Which Misrepresented the Accuracy of Its Artificial Intelligence Content Detection Product, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-approves-final-order-against-workado-llc-which-misrepresented-accuracy-its-artificial (Aug. 28, 2025).
5. FTC Sends Payments to Consumers Impacted by Avast’s Deceptive Privacy Claims, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/12/ftc-sends-payments-consumers-impacted-avasts-deceptive-privacy-claims (Dec. 2, 2025).
6. FTC Launches Inquiry into AI Chatbots Acting as Companions, Fed. Trade Comm’n, https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions (Sept. 11, 2025).
7. 15 U.S.C. §§ 6501-6506 (2018).
8. Small v. OpenAI, 2025 U.S. Dist. LEXIS 201648 (E.D.N.C. Oct. 10, 2025).
9. 2025 WL 2308483 (N.D. Cal. Aug. 11, 2025).
10. The court followed reasoning shaped by Yockey v. Salesforce, Inc., 745 F. Supp. 3d 945 (N.D. Cal. 2024).
11. 2025 WL 2495865 (N.D. Ill. Aug. 25, 2025). For another example of a case addressing party status under electronic-communications statutes, see Q.J. v. PowerSchool Holdings, No. 1:23-cv-05689 (N.D. Ill. 2025).
12. 791 F. Supp. 3d 1075 (N.D. Cal. 2025).
13. As AI Tools Become Commonplace, So Do Concerns, Nat’l Conf. State Leg., https://www.ncsl.org/state-legislatures-news/details/as-ai-tools-become-commonplace-so-do-concerns (Nov. 11, 2025).
14. Id.
15. State Attorneys General Urge Congress to Preserve Local Authority on AI Regulation, Nat’l Assoc. Attys. Gen., https://www.naag.org/policy-letter/state-attorneys-general-urge-congress-to-preserve-local-authority-on-ai-regulation (Nov. 25, 2025).
16. Interim Policy on the Use of Artificial Intelligence, N.Y. Unif. Court Sys., https://www.nycourts.gov/LegacyPDFS/a.i.-policy.pdf (Oct. 2025); Interim Policy on the Use of Generative Artificial Intelligence by Judicial Officers and Court Personnel, Penn. Unif. Court Sys., https://www.pacourts.us/assets/opinions/Supreme/out/Attachment%20-%20106502825326188944.pdf?cb=1.