In the recent case of United States v Heppner, a federal judge held that documents generated by AI and later shared with the Defendant’s attorneys are not entitled to legal privilege.
According to a report by Law360, Judge Jed Rakoff considered whether the Defendant could claim privilege over 31 AI-generated documents, created using Anthropic’s Claude LLM tool, in relation to his legal matter. The Defendant sought to invoke both attorney-client privilege and the work-product doctrine to prevent the US Government from using these documents in court.
On the question of attorney-client privilege, the Judge firmly rejected the Defendant’s claims and upheld the US Government’s position for the following reasons:
- The documents were not created by a licensed attorney. Since Claude is not a qualified attorney, it cannot act as a representative for the purpose of providing legal advice.
- The documents were not generated with the intent of receiving legal advice. Anthropic’s guidelines indicate that Claude aims to provide responses that “least give the impression of legal advice” and often recommends consulting a lawyer instead. If prompted for legal advice, Claude will inform the user that it is unable to provide such guidance.
- The documents were not confidential. Anthropic’s privacy policy states that prompts and outputs are used to improve Claude and may be shared with government regulatory authorities and third parties. Users therefore could not reasonably expect their interactions with Claude to remain confidential.
- Sharing the documents with the Defendant’s legal team after their creation did not retroactively confer privilege upon them.
As for the work-product doctrine, the Judge concluded that the documents did not qualify for this protection either. To be covered, a document must have been created in anticipation of litigation by or for a party or their representative. Here, the Defendant generated the documents independently, without the involvement of his legal team.
The Judge referred to Anthropic’s terms and conditions for Claude, which expressly state that the tool does not provide legal advice and does not guarantee confidentiality. Several other widely used AI tools, including ChatGPT and Gemini, carry comparable terms, warning users against treating outputs as legal advice and against expecting confidentiality. A court would therefore likely reach the same conclusion for documents generated with those tools in similar circumstances.
Although this ruling comes from the US, it sheds light on how courts may approach AI tools in relation to the equivalent concepts in English law, namely legal advice privilege and litigation privilege:
- To qualify for legal advice privilege, a document must be (i) confidential, (ii) a communication between a client and their lawyer, and (iii) made for the dominant purpose of giving or receiving legal advice.
- To qualify for litigation privilege, a document must be (i) confidential, (ii) a communication between a lawyer and client, or between either of them and a third party, (iii) made when litigation is ongoing or reasonably in contemplation, and (iv) created for the dominant purpose of that litigation.
Applying the reasoning in this case, documents that clients generate using public AI tools in relation to their legal matters would likely similarly fail to attract privilege under English law.