A federal court has now made clear that using a public-facing large language model (“LLM”) to assess legal risk can create discoverable evidence. In a criminal securities and wire fraud prosecution in the Southern District of New York, Judge Jed S. Rakoff held that documents a defendant generated through Anthropic’s Claude large language model were not protected by the attorney–client privilege or the work-product doctrine, even though the documents were later shared with counsel. The court did not alter privilege doctrine. It applied it.
In United States v. Heppner,[1] the defendant used the Claude platform to generate analyses and potential defense arguments after learning of the government’s investigation. He argued that the materials were intended to organize his thinking for discussions with counsel. The government moved for a ruling that the materials were not privileged, emphasizing that the AI tool was not an attorney, that the communications were not confidential, and that privilege cannot be retroactively created by forwarding unprotected materials to counsel.
Judge Rakoff agreed. Sharing information with a third-party AI platform defeated the confidentiality required for attorney–client privilege. And because the materials were created independently by the client, not by or at the direction of counsel, they did not qualify as protected work product. The decision neither expanded nor narrowed privilege doctrine; it demonstrated how quickly traditional rules can transform modern efficiency tools into evidentiary liabilities.
The shift is not doctrinal. It is operational.
Companies and executives increasingly use LLMs as a form of legal triage. Executives test exposure scenarios. Employees ask whether conduct could constitute fraud. Finance teams model regulatory risk. These practices are often driven by speed and cost control. AI is viewed as a way to reduce legal spend before formal legal engagement.
The practical reality is straightforward: using public-facing LLMs for legal self-assessment may create discoverable evidence. Regulators and prosecutors now know where to look. AI prompt histories and usage logs are becoming part of the modern evidentiary footprint. When sensitive facts are entered into a third-party AI system, prompts and outputs may not be treated as confidential communications. Confidentiality is a prerequisite to privilege; where it is compromised, protection may be lost.
Nor does later transmission to counsel cure the problem. Forwarding preexisting, unprivileged materials to a lawyer does not retroactively cloak them in privilege. That rule is longstanding and was simply applied to AI-generated content.
The work-product analysis reinforces the same principle. The AI materials were not created by counsel or at counsel’s direction, and the court rejected the argument that a client’s independent preparation for anticipated litigation was, on its own, sufficient. Materials generated outside counsel’s supervision may fall outside work-product protection.
For companies, the implications are immediate.
Independent AI “legal research” by executives or employees may be discoverable in civil litigation and accessible in criminal investigations. Prompts exploring exposure, strategy, or intent can become exhibits. AI-generated summaries of internal facts or timelines may likewise be obtained by regulators or adversaries. Uploading internal documents (e.g., board decks, draft financials, investigative summaries, emails) into a public AI platform may constitute disclosure to a third party, with attendant waiver risk. And cost-saving measures that substitute AI for early legal engagement may increase downstream litigation exposure. The evidentiary consequences may outweigh the marginal savings.
The Heppner decision does not mean all AI use forfeits privilege. Context matters. Enterprise platforms with contractual confidentiality protections — used under counsel’s direction — present a materially stronger privilege posture. But absent clear structure and legal supervision, independent public-facing use should be assumed discoverable.
A prudent path forward requires structure, not avoidance.
Companies should adopt AI-use policies that distinguish general business use from legal-risk analysis. Employees should not input non-public facts relating to potential misconduct, regulatory exposure, or litigation strategy into public AI platforms without legal approval. AI logs and prompt histories should be treated as discoverable data sources and incorporated into litigation hold protocols.
Most importantly, when AI tools are used in connection with legal matters, counsel should be involved early and explicitly. If AI is deployed as part of legal strategy, counsel should direct and supervise the process and, where possible, operate within environments that provide contractual confidentiality protections. While no structure guarantees privilege, alignment with established doctrine materially strengthens the argument.
Generative AI is a powerful analytical tool. It is not legal counsel. Courts will continue to apply settled privilege doctrine to new technologies without accommodation for convenience or cost control. Organizations that treat AI platforms as informal advisors, particularly during investigations or crises, risk manufacturing their own evidence.
[1] No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026), Dkt. No. 22.
This memorandum is provided by Bradford Edwards LLP for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered to be advertising under applicable state laws.
