FAQ

Q1. With my personal paid account (ChatGPT Plus / Claude Pro), can I really not send sensitive data?

Technically, OpenAI's and Anthropic's personal paid accounts default to not using conversations to train models, but their terms reserve exceptions (abuse detection / legal subpoenas / system debugging may surface content). Also, "sensitive data" is defined by your company, not the vendor: company security policy may forbid personal accounts from touching customer data regardless of vendor terms, and violating the policy is still a violation.

Recommendations:

  • Low-sensitivity data (market news / aggregated analysis / public copy) → personal paid OK
  • Medium-sensitivity (internal but non-confidential) → use enterprise account or company-approved internal GenAI
  • High-sensitivity (PII / orders / financial / HR / business secrets) → don't send to public LLMs at all
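The three buckets above amount to a simple routing rule. A minimal sketch in Python (the bucket labels and verdict strings are illustrative, not the tool's actual schema):

```python
# Map a data-sensitivity bucket to the recommended destination.
# Labels and verdict strings are hypothetical examples.
ROUTING = {
    "low": "personal paid account OK",
    "medium": "enterprise account or company-approved internal GenAI",
    "high": "do not send to public LLMs",
}

def route(sensitivity: str) -> str:
    """Return the recommended destination for a sensitivity bucket."""
    try:
        return ROUTING[sensitivity]
    except KeyError:
        raise ValueError(f"unknown sensitivity bucket: {sensitivity!r}")
```

The point of forcing everything into three buckets is that the safe default ("don't send") applies whenever classification is ambiguous.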

Q2. Why are "Regional-Compliance-Restricted Models" always flagged red?

Not the model's fault — the issue is the legal geography of the data flow. DeepSeek / Qwen / ERNIE / Doubao / Kimi services are operated by Chinese companies. Under China's Data Security Law, Personal Information Protection Law, and Cybersecurity Law, these companies may be compelled to provide user data to local regulatory authorities.

For enterprise users, this means your prompt content theoretically falls under that jurisdiction. Without explicit cross-border data-handling provisions in your contract, sending data there is a compliance risk.

Exception: Self-hosting an open-weights version (e.g., running Qwen 2.5 locally) means inference happens entirely within your infrastructure — that's not "sending to a Chinese company" and falls under "Local Open-Source Models" classification.

Q3. How do you define "Enterprise" (Enterprise / Team / Workspace)?

Paid enterprise plans with a Data Processing Agreement (DPA) or "no-training" clause:

  • OpenAI: ChatGPT Enterprise / Team, API + Zero Data Retention setting
  • Anthropic: Claude Enterprise / Team, API (Commercial Terms)
  • Google: Gemini for Workspace (built into Workspace Business / Enterprise plans)
  • Microsoft: Copilot for Business / Microsoft 365 Copilot

Personal Pro / Plus / Premium does not count as Enterprise. Free tier definitely doesn't.

Q4. The verdict says ❌ but my company has Azure OpenAI Private Deployment / Vertex AI Private Endpoint. Now what?

This tool's scope is "public-internet LLM services". If your company has procured / built a private deployment where inference happens within your controlled cloud project or datacenter (data doesn't leave the company), most "do-not-send" restrictions don't apply — defer to your company's security policy.

All ❌ result cards include an escape-hatch note about this.

Q5. Does this tool store my selections?

No.

  • 3-axis wizard runs entirely in your browser
  • Share URLs are base64-encoded — obfuscation only, not encryption (your selections aren't human-readable in the URL, but anyone holding the link can decode them)
  • The LLM clarification (textarea) sends your text to Anthropic for processing, but we don't store the content — only anonymous metrics
  • No cookies, no analytics (v1.0 has zero tracking)
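The base64 obfuscation mentioned above is a reversible transform, which is why it hides selections from casual inspection but provides no confidentiality. A minimal sketch of how such a share token could be built (field names are hypothetical; the wizard's actual payload schema isn't published here):

```python
import base64
import json

def encode_selections(selections: dict) -> str:
    """Serialize wizard selections to a URL-safe base64 token.

    Obfuscation only: anyone with the token can decode it.
    Padding '=' is stripped so the token stays URL-clean.
    """
    raw = json.dumps(selections, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def decode_selections(token: str) -> dict:
    """Reverse of encode_selections: re-pad, base64-decode, parse JSON."""
    padded = token + "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Because decoding requires no key, treat a share URL exactly like the selections themselves: fine to pass around, but not a secrecy mechanism.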

Q6. References will expire / LLM vendors change terms. What then?

Every reference carries a snapshot_date (currently 2026-05). The author manually reviews key links every 3-6 months and updates the site. If you find a 404 link or outdated terms, please open an issue on GitHub.

Q7. Why isn't [my tool] listed?

v1.0 covers 7 categories: ChatGPT / Claude / Gemini / Microsoft Copilot / Perplexity / Regional-Compliance-Restricted Models / Local Open-Source Models. Other tools (Mistral / Cohere / xAI Grok / various RAG tools) are not yet covered.

To request a tool, open a GitHub issue or PR with: tool name + official data-handling terms link + account tier breakdown. We'll evaluate and add it.

Q8. Are there any exceptions for free tier?

Free tier terms almost always allow the service to use conversations to improve models / services (OpenAI Free, Gemini Apps personal, Claude Free, etc.). Exceptions:

  • API calls (even on free tier) usually default to "no training" — different from ChatGPT.com Free
  • Some services have opt-out settings to disable "data used for training"
  • Local inference (Ollama with open-source models) is completely exempt

Recommendation: If you must use free tier, at least enable opt-out (e.g., turn off ChatGPT's "Improve the model for everyone"), but still avoid sending sensitive data.

Q9. How does this differ from ChatGPT Enterprise's own trust center?

OpenAI's trust center only explains "how safe ChatGPT Enterprise is". This site's role:

  • Cross-tool comparison: When choosing between Claude / Gemini, this site offers neutral judgment
  • Cross-tier visibility: Emphasizes "same Claude — but enterprise vs personal vs free differ"
  • User-situation-driven: Not selling a product, just answering "can I send this data?"

Q10. Playing devil's advocate: isn't this legal advice?

We don't give legal advice. Every result card + About page + this page carries the disclaimer: "This tool provides information for educational purposes only and does not constitute legal advice." This is a technical framework — consolidating facts scattered across vendor terms into a decision tree. For specific contracts / personal data liability / cross-border compliance, consult a legal professional.

Last updated: 2026-05-06