AI-Generated Documents: No Attorney-Client Privilege Protection

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York delivered a significant ruling in United States v. Heppner, concluding that documents produced via a consumer version of Anthropic’s Claude AI were not shielded by attorney-client privilege or the work-product doctrine in the specific situation presented. This landmark decision appears to be the first of its kind to directly address privilege and work-product claims arising from a non-lawyer’s use of an unsecured consumer AI tool for legal research, as well as the ramifications of introducing privileged information, shared with an individual by their attorney, into an AI platform. Notably, despite the novel AI context, Judge Rakoff’s analysis was rooted in traditional principles of privilege. He emphasized that disclosing privileged communications to a third party (here, the corporation providing the AI tool) in circumstances that undermine confidentiality can result in waiver. Ultimately, the ruling underscores the necessity of using secure AI tools for confidential information and of leaving the decision to employ AI in privileged contexts to those who best understand the associated risks: namely, attorneys.

What Happened?

Following the issuance of a grand jury subpoena and the retention of legal counsel, the criminal defendant, Heppner, used a consumer version of Anthropic’s Claude to explore legal issues connected to the government’s investigation. Operating without guidance or involvement from his attorneys, Heppner input information he had received from them into the AI tool, generating “reports outlining his defense strategy and potential arguments concerning the facts and the law.” He later shared these reports with his legal team, which claimed attorney-client privilege and work-product protection over the AI-generated documents on the theory that Heppner had created them to facilitate legal consultation. In response, the government moved for a ruling that the AI-generated documents fell outside both protections, a motion Judge Rakoff granted.

Key Takeaways

No Reasonable Expectation of Confidentiality

The court noted that the terms of service for the AI tool permitted the provider—Anthropic—to disclose user data to regulators and use users’ inputs and outputs for model training. In essence, the terms clearly indicated that using this specific tool involved disclosing information to a third party. Consequently, the court concluded that users lack a reasonable expectation of confidentiality regarding their inputs and outputs. Though this reasoning generally applies to consumer AI offerings, it remains to be seen whether enterprise-level products—with stricter data usage limits and contractual confidentiality protections—might allow for a different analysis of confidentiality expectations. Importantly, such protections alone do not automatically confer attorney-client privilege. Even when an enterprise AI product restricts data use and includes confidentiality clauses, courts will still evaluate whether communications were made with the intention of obtaining legal advice and whether confidentiality was sufficiently maintained to uphold privilege according to legal standards.

Use of Unsecured Consumer AI Tools May Undermine Privilege

The court reasoned that conversing with a non-enterprise AI platform amounts to discussing legal matters with a third party, particularly since the tool expressly disclaimed providing any legal advice. Employees who use consumer-grade AI tools to assess legal exposure, investigate complaints, research regulatory issues, or prepare for litigation therefore risk generating documents that adversaries can obtain in discovery. This ruling aligns with legal ethics opinions cautioning that feeding privileged information into certain unsecured AI tools may constitute a disclosure to the third-party operator of those tools. For example, a related American Bar Association opinion warned that using self-learning generative AI tools risks improperly disclosing information relating to a client’s representation.

Lack of Attorney Direction Weakens Work-Product Claim

The court ruled that because Heppner performed the AI research independently rather than at counsel’s direction, work-product protection did not apply. The court indicated that the outcome might differ if the AI tool had been employed under the direction of counsel in a Kovel-type arrangement; in that case, the tool might arguably function like a lawyer’s agent for purposes of attorney-client privilege. The court further explained that the crucial question for privilege is whether Heppner intended to obtain legal advice from Claude, not simply whether he later shared Claude’s outputs with his attorneys. Claude, however, is neither a person nor a legal professional. Future cases will likely address whether consumer AI tools meaningfully differ from AI technologies designed specifically for legal applications.

Recommended Next Steps

  • Be Intentional: Maintaining a reasonable expectation of confidentiality is crucial when selecting tools for handling confidential or privileged information. Ensure your organization conducts diligent assessments of potential tools and their permissible applications.
  • Audit AI Usage Policies: Verify whether the organization permits the use of consumer-grade (unsecured) AI tools, and ensure that only appropriate applications are allowed, namely those that do not involve confidential or privileged information.
  • Implement Guardrails: Prohibit the entry of privileged, confidential, or investigation-related information into consumer AI systems unless protected by a vetted enterprise agreement and clear internal guidelines. Recognize that even a secure AI tool carries some risk of privilege waiver depending on context, so require that decisions regarding privilege be made by those who best understand the associated risks: namely, attorneys.
  • Train Personnel: Educate employees about the factors influencing the appropriateness of specific AI tools for particular applications.
  • Review Kovel Agreements: Kovel agreements between tax advisors and attorneys remain standard practice. Assess existing agreements to ensure that they address AI usage and update Kovel templates accordingly.

What the Decision Does Not Address

The district court’s ruling was strictly confined to a criminal defendant’s engagement with a consumer, non-enterprise AI platform without attorney direction and under terms allowing provider access to user data, leaving several important questions unresolved.

  • Notably, the ruling does not consider whether using an enterprise-tier (i.e., secure) AI product could support a different analysis of confidentiality expectations.
  • The court also did not rule on whether AI research conducted under counsel’s direction, for example, in a Kovel arrangement or integrated into a formal legal process, might qualify for work-product protection.
  • Whether the same analysis would apply universally in civil contexts remains unanswered. The court referenced a Second Circuit case concerning protections for tax advice in a corporate merger, noting that protections for certain civil tax advice differ from those applicable in criminal contexts.
  • The opinion does not establish an overarching rule that all AI-assisted legal work lacks protection; instead, it applies established privilege principles to the specific facts before the court.

As the adoption of AI technologies continues to rise, courts will likely intensify their scrutiny of how these tools interact with legal principles of privilege, confidentiality, and waiver. Organizations should proactively reevaluate their AI governance frameworks to minimize the risk of litigation and regulatory repercussions.

This article includes contributions from Peter Cramer and Edward Wang.
