Does the model train on your conversations? Is data retained? Can you turn it off? This table covers the consumer GUI, paid plans, and direct API access. Policies shifted significantly in late 2025 — most providers moved consumer users to opt-out training defaults.
| Model / Lab | GUI — Trains on chats? | GUI — On by default? | GUI — User can disable? | API — Trains on data? | API — On by default? | API — User can disable? | Notes |
|---|---|---|---|---|---|---|---|
| ChatGPT (OpenAI) | Yes (opt-out) | On by default | Yes — Free & Plus | No | Off by default | N/A — not trained | Consumer GUI defaults to training. Disable via Settings → Data Controls → "Improve the model for everyone." Disabling also disables chat history. Team/Enterprise plans: never trained. API data is never used for training by default; retained up to 30 days for abuse monitoring only. |
| Claude (Anthropic) | Yes (opt-out) | On by default | Yes — Free, Pro, Max | No | Off by default | N/A — not trained | Policy shifted Aug 2025. Free/Pro/Max GUI trains by default; opt out via Settings → Privacy → "Help improve Claude." Opting in means 5-year retention; opting out, 30-day retention. Deleted chats are always excluded. API, Claude for Work/Enterprise, Bedrock, and Vertex AI: never trained. |
| Gemini (Google) | Yes (opt-out) | On by default | Yes — free & Advanced | No | Off by default | N/A — not trained | As of Sep 2, 2025, the GUI defaults to using a sample of uploads/chats for training ("Keep Activity"). Disable via the Gemini Apps Activity / Keep Activity toggle. Paid Gemini Advanced still defaults to on — not enterprise-safe. Workspace/enterprise: not trained. Vertex AI API: not trained. Chats are kept 72 hours even with activity off. |
| Copilot (Microsoft) | Yes (opt-out) | On by default | Requires M365/Enterprise | No (Azure) | Off by default | N/A — not trained | Consumer Copilot is among the least transparent on opt-out options. Enterprise/M365: governed by Microsoft Purview; data is never used for training; full admin control via Customer Lockbox. Azure OpenAI API: not trained; 30-day abuse monitoring only. |
| Grok (xAI) | Yes (limited opt-out) | On by default | Cannot fully disable | No | Off by default | N/A — not trained | Deeply tied to X (Twitter). xAI trains on public X posts and Grok conversations. Users can delete threads but cannot fully disable training use of chat data. Unauthenticated users have almost no privacy controls. API data is not used for training by default. |
| Meta AI (Meta) | Yes (no opt-out) | On by default | No opt-out available | N/A — open weights | N/A | N/A | Ranked worst for privacy in multiple 2025 studies. No dedicated opt-out mechanism for training on Meta AI chats. Governed by Meta's broad platform Data Policy, which also covers training on public Facebook/Instagram posts. Llama model weights are openly available; self-hosting means you control your own data entirely. |
| Le Chat (Mistral AI) | Yes (opt-out) | Opt-in framing | Yes — all tiers | No | Off by default | Zero-retention option | Ranked the most privacy-friendly consumer AI in the 2025 Incogni study. GUI chats are stored until account deletion or manual deletion; API inputs are retained for 30 rolling days for abuse monitoring unless zero data retention (ZDR) is enabled. GDPR-aligned; EU-based. |
| DeepSeek (China) | Yes (no opt-out) | On by default | No opt-out | Opaque | Opaque | No controls published | Significant privacy concerns. Collects account info, prompts, and chat history. No published opt-out mechanism. Data is subject to PRC law, including government access. Ranked near the bottom in all 2025 privacy analyses. Self-hosting its open-weight models avoids the cloud data risks. |
| Llama, self-hosted (Meta / open weights) | N/A | N/A | Full control | N/A | N/A | Full control | Open weights — no data leaves your infrastructure when self-hosted. Training, retention, and logging are entirely your responsibility. No usage data is sent to Meta. The gold standard for sensitive/regulated workloads. |

Sources: Anthropic, OpenAI, Google, Mistral, and xAI privacy policies and help documentation; Incogni LLM Privacy Ranking 2025; Drainpipe.io AI Privacy 2026; Bonfireci Shadow AI report, Sep 2025; DigitalInformationWorld and WinBuzzer, Aug 2025. Policies are subject to change — always verify current terms before sharing sensitive data. Last verified: March 2026.
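For the self-hosted row, "full control" means serving the model yourself so no prompt ever reaches a third party. A minimal sketch of one common setup, assuming llama.cpp's `llama-server` and a locally downloaded GGUF checkpoint (the model filename below is a placeholder, not a specific recommendation):

```shell
# Serve an open-weight Llama checkpoint entirely on localhost.
# Binding to 127.0.0.1 keeps every prompt and completion on the machine;
# retention and logging are whatever you configure — nothing is sent to Meta.
./llama-server -m ./models/llama-3.1-8b-instruct-Q4_K_M.gguf \
  --host 127.0.0.1 --port 8080

# llama-server exposes an OpenAI-compatible endpoint, so existing client
# code can point at localhost instead of a cloud provider:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize this NDA."}]}'
```

The same pattern applies to other open-weight models (e.g., DeepSeek's released checkpoints): the training/retention columns become moot because the provider never sees the data.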
⚠ Note: OpenAI, Google, and Anthropic all made significant privacy policy reversals in Aug–Sep 2025, moving consumer products to opt-out training defaults. Even paid consumer tiers (ChatGPT Plus, Claude Pro, Gemini Advanced) are not enterprise-safe without explicit opt-out.