Technology & Privacy

Your AI Chats Can Be Used Against You: What You Need to Know in 2026

A federal court has ruled that your conversations with AI chatbots have no legal privilege. Whether you use ChatGPT, Claude, Gemini, or DeepSeek, here is what is actually happening with your data — and what you can do about it.


The Key Finding

In February 2026, a federal judge ruled that conversations between a person and an AI chatbot are not protected by attorney-client privilege and can be compelled as evidence in court. This applies to every major AI platform — ChatGPT, Claude, Gemini, and others.[1][2]


1. The Courts Have Spoken: AI Chats Are Evidence

If you have ever used an AI chatbot to talk through a legal question, process difficult emotions, draft sensitive documents, or discuss private matters, you should know: those conversations can be obtained by courts, law enforcement, and opposing attorneys just like your text messages or emails.

Here are the key rulings from 2025–2026:

US v. Heppner (February 2026)

Judge Jed Rakoff of the Southern District of New York ruled that conversations between a criminal defendant and Anthropic's Claude chatbot are not protected by attorney-client privilege or work product doctrine. The reasoning: an AI is not a lawyer, and "discussion of legal issues between two non-attorneys is not protected." The defendant had no reasonable expectation of confidentiality.[1][2]

NYT v. OpenAI: 20 Million Chat Logs (January 2026)

In the ongoing New York Times copyright lawsuit, a judge ordered OpenAI to turn over its entire 20-million-log sample of ChatGPT conversations. OpenAI tried to provide only cherry-picked logs; the court rejected that. Only Enterprise, Education, and Zero Data Retention customers are excluded from the preservation order.[3]

If you have ever used ChatGPT's consumer interface, your conversations may be in that dataset.

FBI Seizure of a CEO's AI Chats (2026)

The FBI seized a CEO's computer and sought access to his AI chat history. The court left only a narrow, untested exception: if an attorney specifically directed the client to conduct AI research as part of attorney work product.[4]

Criminal Prosecutions and Family Court

In 2025, three criminal cases cited ChatGPT conversations as evidence — involving child exploitation, arson, and vandalism. Multiple divorce and custody cases in 2024–2026 have used ChatGPT and Claude chat histories as evidence. California courts are now implementing AI document review for family law cases.[5][6]

Sam Altman, CEO of OpenAI, has publicly warned that people treating ChatGPT like a therapist or lawyer should understand those conversations can be subpoenaed.[7]

A 2025 survey found that half of AI users are unaware their chats can be compelled as evidence.[8]


2. What Each AI Company Does with Your Data

Not all AI services handle your data the same way. Here is what you need to know about the major providers as of March 2026.

OpenAI (ChatGPT, GPT-5.4)

| Setting | What Happens to Your Data |
|---|---|
| Chat history ON (default) | Retained indefinitely; used for model training |
| Chat history OFF | Retained 30 days for abuse monitoring, then deleted; not used for training |
| API (developer) | 30-day retention for safety; not used for training by default |
| Enterprise/Team | Never used for training; Zero Data Retention available for qualifying customers |

Critical risks: OpenAI operates a monitoring system that scans all ChatGPT conversations for potentially harmful content, escalating concerning conversations to human reviewers who can report to law enforcement.[9] Additionally, the NYT litigation hold requires preservation of all consumer data regardless of your privacy settings.[3]

New development (Feb–Mar 2026): OpenAI has contracted to provide AI for classified Pentagon networks. While OpenAI claims a set of "red lines" (including no mass domestic surveillance and no autonomous weapons), the Electronic Frontier Foundation has criticized the contractual language as insufficient to prevent misuse.[10][11]

In the first half of 2025 alone, OpenAI received 119 government requests for user account information and 26 requests for chat content.[9]

Anthropic (Claude)

| Setting | What Happens to Your Data |
|---|---|
| "Help improve Claude" ON | Retained 5 years (de-identified); used for training |
| "Help improve Claude" OFF | Retained 30 days, then deleted; not used for training |
| Trust & safety flagged | Up to 2 years (inputs/outputs); classification scores up to 7 years |
| API / Business / Enterprise | Anthropic acts as data processor; never used for training |

Notable: Anthropic refused a Pentagon contract for weapons and surveillance use, and was subsequently blacklisted by the current administration. This is a meaningful values signal for those who care about how their AI provider relates to military and surveillance uses.[12]

However, Anthropic is still US-based (San Francisco) and subject to the same legal framework as OpenAI — including National Security Letters, FISA courts, and subpoenas. The US v. Heppner ruling specifically addressed Claude conversations.[1]

Google (Gemini)

| Setting | What Happens to Your Data |
|---|---|
| Gemini app (default) | Stored up to 30 days for debugging and abuse detection |
| Gemini Apps Activity OFF | Not saved, not used for training; metadata still logged |
| Vertex AI (business) | Zero Data Retention available per API call; EU data residency configurable |
| Google Workspace | Customer data not used for AI training without permission |

Google offers the best EU data residency options among major US providers, including configurable data regions on Vertex AI (Belgium, Frankfurt, etc.).[13]

Watch out: A class action lawsuit (Oct 2025) alleges Google activated Gemini AI features across Gmail, Chat, and Meet without user consent, enabling AI to read private communications.[14] A separate federal jury awarded $425.7 million to ~98 million Google users misled about data collection opt-outs.[15]

DeepSeek

All data stored in the People's Republic of China. No specific retention periods disclosed. Third-party analysis found the DeepSeek app can capture login information and share it with China Mobile, a state-owned entity.[16]

DeepSeek has been banned outright in Italy and South Korea, and banned on government devices in Texas, New York, Tennessee, and Virginia, with pending federal legislation (HR 1121) to ban it on all US government devices. Belgium, France, Ireland, and Germany have active regulatory inquiries.[17][18]

Under China's National Intelligence Law, Chinese companies are required to provide data to the government on request, with no judicial oversight.


3. The Privacy Scorecard: Comparing Your Options

Here is how the major AI providers compare on privacy risk as of March 2026:

| Provider | US Jurisdiction | Data Retention | Training Opt-Out | Overall Risk |
|---|---|---|---|---|
| ChatGPT (consumer) | Yes (CA) | Indefinite (30 days if history off) | Available | HIGHEST |
| Claude (consumer) | Yes (CA) | 5 years if opted in; 30 days if not | Available | HIGHEST |
| Gemini (consumer) | Yes (CA) | 30 days | Available | HIGH |
| DeepSeek (API) | China | Unspecified; stored in PRC | Unclear | HIGHEST |
| Self-hosted (Llama 4, Mistral 3, etc.) | Your choice | You control | N/A | LOWEST |

4. Regulations Are Catching Up — But Slowly

GDPR Enforcement Is Real

  • Italy fined OpenAI €15 million (December 2024)[19]
  • Italy fined Replika €5 million (May 2025)[20]
  • Italy banned DeepSeek entirely (January 2025)[17]
  • Over $3.5 billion in AI governance fines and settlements paid by Big Tech in 2025 alone[21]

New US State Laws (Effective January 1, 2026)

| State | Law | What It Does |
|---|---|---|
| California | SB 53 | Risk management disclosure for frontier models (companies with revenue >$500M) |
| California | AB 2013 | Training data transparency requirements |
| Texas | TRAIGA | Prohibits AI systems that incite harm, deepfakes, CSAM |
| Colorado | Comprehensive AI Act | Takes effect June 2026 |
| Illinois | Employer AI Disclosure | AI disclosure requirements for employers |

EU AI Act Timeline

  • Already in force: Banned practices (manipulative AI, mass surveillance) since February 2025
  • August 2026: High-risk AI requirements become enforceable (employment, credit, education, law enforcement)
  • Penalties: Up to €35 million or 7% of global turnover[22]

5. How to Protect Yourself

You cannot make AI conversations legally privileged — no court has recognized that protection. But you can significantly reduce your risk.

Immediate Steps (Do These Today)

  1. Turn off training data sharing on every AI service you use:
    • ChatGPT: Settings → Data Controls → "Improve the model for everyone" → OFF
    • Claude: Settings → Privacy Settings → "Help improve Claude" → OFF
    • Gemini: Gemini Apps Activity → Toggle OFF "Save Gemini Apps Activity"
  2. Never put these things in an AI chat:
    • Real names of people in vulnerable situations
    • Immigration status information
    • Legal strategy (there is no attorney-client privilege)
    • Medical records or health details
    • Financial account information
  3. Delete your AI chat history regularly — but know that deletion does not affect data already preserved under litigation holds or law enforcement requests.
  4. Do not use DeepSeek's API if you or anyone in your conversations could be at risk from the Chinese government.
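
One way to enforce step 2 mechanically is to scrub obvious identifiers before a prompt ever reaches a provider. Below is a minimal, regex-based redactor sketch in Python; the `redact` helper and its patterns are illustrative assumptions, not a complete PII filter — real names, legal strategy, and context-dependent details still require human review before anything is pasted into a chat.

```python
import re

# Illustrative patterns only -- a real PII filter needs far broader coverage
# (names, addresses, case numbers, medical terms, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder before sending."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Reach me at jane@example.com or 555-867-5309."))
# -> Reach me at [EMAIL] or [PHONE].
```

A filter like this reduces accidental leakage, but it cannot make a conversation privileged; the safest identifier is the one that never enters the prompt at all.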

Stronger Protections (If Privacy Is Critical)

  1. Use API access instead of consumer interfaces. When you use ChatGPT.com or Claude.ai directly, your data is subject to consumer privacy policies with long retention periods and litigation holds. API access under a business agreement (a Data Processing Agreement) provides stronger protections: your data is not used for training, and the provider acts as a data processor rather than a data controller.
  2. Consider EU-hosted AI services. Google's Vertex AI offers configurable EU data residency, meaning your data can stay in European data centers subject to GDPR protections. This is the strongest GDPR posture among the major US providers.[13]
  3. Explore self-hosted open-source models. For maximum privacy, you can run AI models on your own hardware where no data ever leaves your control:
    • Llama 4 Scout (Meta, open-weight) — fits on a single GPU, 10 million token context window
    • Mistral Large 3 (Apache 2.0 license) — fully open-source, 256K context
    • DeepSeek V3.1 (MIT License) — self-hosted version has zero external data transmission

The safest option: Self-hosted, open-source AI models on servers in privacy-friendly jurisdictions (like Iceland or Switzerland). No company can be subpoenaed because no company has your data.
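
To make the self-hosting point concrete: a locally run model is typically exposed through a loopback HTTP endpoint, so prompts never traverse the public internet. The sketch below targets Ollama's local REST API (`POST /api/generate` on port 11434); the model name `llama3` and the endpoint are assumptions about your particular setup, and actually sending the request requires an Ollama server running on the same machine.

```python
import json
import urllib.request

# Loopback address: the prompt never leaves this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Prepare a request against a locally hosted model via Ollama's REST API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Draft a custody question without using real names.")
print(req.full_url)  # -> http://localhost:11434/api/generate

# To actually send it (requires a running local Ollama server):
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because the endpoint is localhost-only, there is no third-party log to subpoena; the only copy of the conversation is the one on your own disk.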


6. Special Concerns for Vulnerable Communities

If you work with or belong to communities that face heightened surveillance risk — immigrants, activists, journalists, domestic violence survivors, LGBTQ+ individuals in hostile jurisdictions — the stakes are even higher.

  • National Security Letters (NSLs) can be served directly to US-based AI companies with a gag order that prevents the company from telling you. All major AI providers (OpenAI, Anthropic, Google) are US-based and subject to this.[23]
  • FISA Section 702 is due to expire in April 2026 — its renewal could expand or limit surveillance powers.
  • The US CLOUD Act allows US authorities to compel data from US providers even if stored abroad.
  • OpenAI's content monitoring system scans all conversations and can escalate to law enforcement without your knowledge.[9]

“You can't encrypt your way out of bad infrastructure. You have to design privacy into the infrastructure itself.” — Nicholas Merrill, founder of the first ISP to challenge an FBI National Security Letter


7. The Bottom Line

The legal landscape around AI chat privacy changed dramatically in early 2026. Here is the reality:

  • AI conversations have no legal privilege. A federal court has explicitly ruled this for Claude, and the principle applies to all AI chatbots.
  • 20 million ChatGPT logs were ordered produced in a single court case. Your data may be included.
  • OpenAI scans all conversations and can report to law enforcement.
  • OpenAI now contracts with the Pentagon for classified AI networks.
  • DeepSeek data is stored in China and shared with state-owned entities.
  • GDPR enforcement is real — over $3.5 billion in fines and settlements in 2025.

Treat every AI conversation as a permanent, searchable record that could be read by a judge, a lawyer, a law enforcement agent, or a foreign government. Because in 2026, that is exactly what it is.



References

  1. HSF Kramer: New York court finds client chats with generative AI tool Claude are not privileged. Link.
  2. Crowell: Federal court rules some AI chats are not protected by legal privilege. Link.
  3. National Law Review: OpenAI loses privacy gambit — 20 million ChatGPT logs likely headed to copyright trial. Link.
  4. California Employment Law Report: The FBI seized a CEO's AI chats. Link.
  5. Kolmogorov Law: The ChatGPT subpoena revolution. Link.
  6. Van Ho Law: When chatbots become witnesses. Link.
  7. Bressler: The new frontier of evidence — AI chat histories. Link.
  8. Bressler: Survey on AI chat subpoena awareness. Link.
  9. OpenAI H1 2025 Transparency Report. Link.
  10. MIT Technology Review: OpenAI's compromise with the Pentagon. Link.
  11. EFF: Weasel words — OpenAI's Pentagon deal won't stop AI-powered surveillance. Link.
  12. NPR: Trump administration blacklists Anthropic after Pentagon AI refusal. Link.
  13. Google Cloud: Vertex AI data residency. Link.
  14. National Law Review: New lawsuit alleges Google uses Gemini AI to secretly read Gmail, Chat, and Meet. Link.
  15. ForThePeople: Google users win $425.7M verdict in data privacy lawsuit. Link.
  16. Conference Board: State and federal governments consider DeepSeek bans. Link.
  17. Two Birds: Italy's Garante imposes limitation on DeepSeek. Link.
  18. DemandSage: DeepSeek banned countries. Link.
  19. Euronews: Italy fines OpenAI €15 million. Link.
  20. EDPB: Italy fines Replika €5 million. Link.
  21. AI Governance Lead: $3.5B in AI governance fines (2025). Link.
  22. Secure Privacy: EU AI Act 2026 compliance. Link.
  23. EFF: AI chatbot companies should protect your conversations from bulk surveillance. Link.