You Might Be Destroying Your Own Attorney-Client Privilege

Written by Amit Singh

How AI Tools Can Waive Attorney-Client Protection Without You Realizing It

Last week, a federal judge in Manhattan ruled that documents a client prepared using an AI chatbot were not protected by attorney-client privilege. The client, not the lawyer, used the tool. He did it to help his own defense. And the court said privilege was gone because he had disclosed confidential information to a third party: the AI provider.

If you’re a founder or startup executive who uses ChatGPT, Claude, or any AI tool to think through legal issues, draft strategy documents, or summarize advice from your lawyers, or if anyone on your team is using an AI meeting note taker on calls with counsel, this ruling directly affects you.

I. The Ruling

On February 10, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York ruled from the bench in United States v. Heppner that the defendant had waived attorney-client privilege by using Anthropic’s Claude chatbot on its consumer plan to prepare legal strategy documents.

Bradley Heppner, arrested in November 2025 on securities and wire fraud charges, used Claude to prepare approximately thirty-one defense strategy documents that he then shared with counsel. The government seized these documents and moved to compel their production, citing Anthropic’s consumer privacy policy, which permitted the company to use inputs for model training and to disclose data to governmental authorities.

Judge Rakoff agreed: the defendant “disclosed it to a third-party, in effect, AI, which had an express provision that what was submitted was not confidential.” This is the first major federal ruling directly holding that a client’s use of consumer AI tools destroys privilege.

The client used AI to prepare his defense. The court said: that’s a disclosure to a third party. Privilege waived.

II. Why Founders Should Care

If you’re running a startup, you’re almost certainly using AI somewhere in your workflow. Many founders use AI to summarize legal advice, draft contracts before sending to counsel, analyze regulatory questions, prepare for board meetings by uploading counsel’s memos, or process due diligence materials during fundraising or M&A. Every one of these activities may involve feeding privileged information into an AI system. After Heppner, every one is a potential privilege waiver if done on the wrong platform.

Privilege protects your communications with your lawyers from being forced into the open during litigation, investigations, or regulatory proceedings. If you waive it—even accidentally—the other side gets to see your legal strategy, risk assessments, and counsel’s candid advice. For a startup facing an IP dispute, an SEC inquiry, or an M&A earnout claim, that can be catastrophic.

III. Consumer AI Plans: The Core Problem

Attorney-client privilege requires that communications be made in confidence. Disclosing privileged material to a third party who is not bound by a duty of confidentiality can destroy the protection. Consumer AI providers are not your agents, owe you no fiduciary duties, and operate under terms of service that often reserve broad rights over your data:

Training on your inputs. Anthropic’s consumer plans (Free, Pro, and Max) permit the company to use your conversations for model training. The default setting is on; users must affirmatively opt out. OpenAI has similar provisions for ChatGPT. When your data is used for training, it may influence responses given to other users, meaning your privileged information has been functionally disclosed to the provider and potentially beyond.

Extended data retention. If training is enabled on Anthropic’s consumer plans, data retention extends to five years. Even with training disabled, the standard 30-day retention period means your data sits on third-party servers, accessible to the provider and potentially to law enforcement.

Disclosure provisions. Consumer terms typically permit the provider to disclose data to governmental authorities and in response to legal process. In Heppner, this was the critical factor: the terms made clear that submissions were not confidential.

The bottom line: if you’re using a consumer AI plan and you input anything related to legal advice, strategy, or communications with your attorney, you are taking a real and now court-validated risk of waiving privilege.

IV. AI Meeting Note Takers: The Bigger Blind Spot

AI chat tools require a deliberate choice to input information. AI meeting note takers are more insidious—they capture everything, often automatically, and create permanent records of conversations that would otherwise be ephemeral.

If you’re using Otter.ai, Fireflies.ai, Fathom, or a platform-native assistant like Zoom AI Companion or Microsoft Copilot during calls with your lawyer, here is what’s happening: the entire conversation—your attorney’s legal advice, your candid questions, your discussion of litigation strategy—is being recorded, transcribed, and transmitted to a third-party cloud service under terms of service that may permit training, broad retention, and disclosure.

A verbatim, time-stamped, searchable record of a privileged conversation now exists on a server you don’t control, accessible to a company that owes you no duty of confidentiality. That record is almost certainly discoverable in litigation.

The scenario most founders miss: You’re on a Zoom call with your lawyers discussing a potential acquisition. You didn’t turn on a note taker. But your VP of Corporate Development has Zoom AI Companion enabled by default, or a board member dialed in from an account with Fireflies auto-joining meetings. You didn’t consent. Your lawyer didn’t consent. But the AI was listening to every word. In states like Massachusetts and California that require all-party consent to record, the person who enabled the tool without everyone’s consent may also face wiretapping liability.

V. The Kovel Doctrine: Why Some Third-Party Sharing Doesn’t Destroy Privilege

Here’s something that may seem contradictory: lawyers share your confidential information with outside parties all the time, and privilege survives. Understanding why is the key to understanding when AI use can be safe.

In United States v. Kovel, 296 F.2d 918 (2d Cir. 1961), the Second Circuit held that an accountant hired by a law firm to help attorneys understand complex financial matters was protected by privilege. The reasoning: just as a client who speaks only Spanish can communicate through an interpreter without waiving privilege, a client with complex financial issues can communicate through an accountant retained by counsel. The Kovel doctrine has since been expanded to cover paralegals, forensic consultants, litigation support vendors, court reporters, e-discovery platform providers, and many others.

Privilege holds in all these cases because three conditions are met: (1) the outside party is engaged to facilitate legal advice; (2) the party operates under attorney direction or supervision; and (3) the party is bound by a duty of confidentiality, typically through an NDA, engagement letter, or vendor agreement.

Your lawyer shares your confidential information with accountants, consultants, e-discovery vendors, and court reporters every day, and privilege holds. The question is whether AI tools can meet the same standard.

Why consumer AI fails this test. When a lawyer or client uses ChatGPT Plus or Claude Pro, none of the three Kovel conditions are satisfied. The provider is not retained as counsel’s agent—it’s a general-purpose tool sold to millions. The lawyer has no control over how data is processed or stored. And consumer terms impose no confidentiality obligation—they expressly permit training, third-party disclosure, and governmental access. That is why Heppner came out the way it did.

How to use AI and preserve privilege. The Heppner ruling doesn’t say AI use destroys privilege. It says AI use under consumer terms that negate confidentiality destroys privilege. Enterprise and commercial-tier AI access, where the terms prohibit training on inputs, restrict access, and impose contractual confidentiality, is the AI equivalent of the vendor agreement your lawyer signs with every e-discovery provider and outside consultant. The path forward is to structure AI vendor relationships with the same contractual protections that preserve privilege in every other outside-party context.

VI. What “Safe” AI Use Looks Like

An important threshold point: under the Kovel framework, privilege extends to third parties who are facilitating legal advice under attorney direction. That means the strongest privilege protection exists when your lawyer is the one using the AI tool, on a commercial or enterprise plan, with proper contractual protections, as part of providing legal services to you. In that scenario, the AI vendor likely occupies the same role as an e-discovery provider or outside consultant retained by counsel: engaged to facilitate legal advice, operating under the attorney’s supervision, and bound by a contractual duty of confidentiality.

If you as the client are using AI independently to work through legal issues, even on an enterprise plan, the privilege argument is weaker. The AI provider wasn’t retained by your lawyer, isn’t operating under attorney direction, and the Kovel conditions likely aren’t met. Enterprise terms with a no-training commitment and contractual confidentiality put you in a materially better position than Heppner (where the consumer terms effectively negated confidentiality), but no court has yet confirmed that a client’s independent use of even an enterprise AI tool fully preserves privilege. Until one does, the safest course is to use AI for privileged work only through your attorney’s supervised workflow, or at a minimum, at your attorney’s direction and on a platform your attorney has vetted.

For AI Chat Tools

Assuming the AI tool is being used by or at the direction of counsel, the critical variable is the contractual framework governing the tool:

What you’re using, whether it trains on your data, the privilege risk, and what to do:

  1. Consumer plan, training ON (the default). Data is used for training and retained up to five years under consumer ToS. Privilege risk: HIGH. Never input privileged information.
  2. Consumer plan, training opted OUT. No training, but 30-day retention, and consumer ToS still apply. Privilege risk: HIGH. There is still no contractual confidentiality; don’t rely on this.
  3. API access under commercial terms. No training; the commercial ToS prohibit it. Privilege risk: MODERATE. Acceptable with care; verify the ToS.
  4. Enterprise plan with a negotiated DPA. No training; contractual prohibition plus audit controls. Privilege risk: LOW. The best available option for sensitive work.

A note on the training opt-out: many users assume that opting out of training solves the problem. It likely doesn’t. Opting out reduces data retention from five years to 30 days, but the underlying consumer terms of service remain unchanged. They still lack any contractual duty of confidentiality running to you, still permit disclosure to governmental authorities and in response to legal process, and still give the provider access to your data during the retention period. In Heppner, the court focused on the fact that the terms “had an express provision that what was submitted was not confidential.” Opting out of training does not change that provision. You are still governed by consumer terms that treat your data as non-confidential.

For AI Meeting Note Takers

The safest approach for privileged calls is straightforward: turn them off. Disable Zoom AI Companion, Teams Copilot, and any third-party note takers before any conversation involving legal advice.

If you want AI transcription for privileged meetings, the same attorney-involvement principle applies. A court reporter transcribing a privileged meeting does not destroy privilege because the reporter is retained by counsel, bound by confidentiality, and operating under the attorney’s direction. An AI transcription tool can occupy a similar position, but only if your attorney is the one selecting, deploying, and supervising the tool as part of providing legal services. This means:

  1. Proper contractual framework. The tool should process audio on local infrastructure or under an enterprise agreement with contractual confidentiality obligations, not a consumer cloud service under standard consumer terms.
  2. Attorney direction and supervision. Your attorney, not a team member or assistant, should be the one deciding to enable transcription and should have vetted the tool’s terms.
  3. Meeting protocol. At the start of every privileged call, confirm with all participants that no unauthorized AI tools are active. If your attorney is using a vetted transcription tool, they should disclose that to you.

If a non-lawyer (a founder, a VP, a board member) independently activates an AI note taker on a privileged call, even one running on enterprise terms, the Kovel conditions are not met. The tool was not retained by counsel, is not operating under attorney supervision, and the privilege argument is significantly weakened. This is why the “no AI” protocol for privileged meetings matters: it prevents well-intentioned team members from inadvertently creating the same problem that sank Heppner.

VII. Action Items

Based on Heppner and current bar association guidance, here is what founders and startup executives should do now:

  1. Audit your AI usage. Identify every AI tool you personally use that has ever touched legal advice, strategy documents, or communications with counsel. If any of that happened on a consumer plan, understand that the privilege for those communications may already be compromised.
  2. Stop inputting privileged content into consumer AI tools. This applies to ChatGPT Free, Plus, and Pro; Claude Free, Pro, and Max; and any other consumer-tier service. If you’re not sure which plan you’re on, assume it’s consumer.
  3. Opt out of training—but understand its limits. On Claude, go to Settings → Privacy and verify “Help improve Claude” is toggled off. On ChatGPT, check Settings → Data Controls. Opting out does not convert a consumer plan into an enterprise one; the underlying terms still lack the contractual protections that courts will look for.
  4. Disable AI note takers on privileged calls. Audit your Zoom, Teams, and Google Meet settings. Disable AI Companion, Copilot, and any auto-join transcription tools. Check whether team members have third-party note takers that auto-join meetings.
  5. Establish a “no AI” protocol for privileged meetings. At the start of every call involving legal counsel, confirm: “Is anyone using an AI recording or transcription tool on this call?” Make this as routine as asking who else is in the room.
