Methodology (Transparent Version)

This page is for legal / IT / detail-curious users who want to see exactly how the verdicts are derived. All source code is available on GitHub.

The 3-Axis Framework

Users select 3 variables; the combination determines the verdict:

  • Tool: ChatGPT / Claude / Gemini / Microsoft Copilot / Perplexity / Regional-Compliance-Restricted Models / Local Open-Source Models
  • Account Tier: Enterprise / Paid Personal / Free / Unknown
  • Data Type: Public / Anonymized / Internal Marketing / Customer PII / Order Records / Financial / Business Secret / HR / Code
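In the project's own TypeScript, the three axes map naturally to string-literal unions. A minimal sketch (the type and member names below are illustrative assumptions, not the actual identifiers in src/data/decision.ts):

```typescript
// Hypothetical axis types -- names are illustrative, not the
// actual identifiers used in src/data/decision.ts.
type Tool =
  | "chatgpt" | "claude" | "gemini" | "copilot" | "perplexity"
  | "regional-restricted"   // DeepSeek / Qwen / ERNIE / Doubao / Kimi
  | "local-open-source";    // Ollama / LM Studio / llama.cpp

type Tier = "enterprise" | "paid-personal" | "free" | "unknown";

type DataType =
  | "public" | "anonymized" | "internal-marketing"
  | "customer-pii" | "order-records" | "financial"
  | "business-secret" | "hr" | "code";

type Verdict = "ok" | "caution" | "risk"; // ✅ / ⚠️ / ❌
```

Modeling the axes as closed unions means the compiler can verify that the decision logic handles every tool/tier/data-type combination.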

3-Layer Decision Logic

  1. Layer 1 — Tool Override (most decisive)
    • "Regional-Compliance-Restricted Models" (DeepSeek / Qwen / ERNIE / Doubao / Kimi etc.): Sending sensitive enterprise data is not recommended because of region-specific data security regulations
    • "Local Open-Source Models" (Ollama / LM Studio / llama.cpp): Fully local inference, data never leaves your device — green light across the board
  2. Layer 2 — Data Sensitivity Override
    • Customer PII / Order Records / Financial / Business Secret / HR: Default ❌ RISK regardless of tool/tier
    • Code: Enterprise tier ✅, personal tier ⚠️
  3. Layer 3 — Account Tier (for low-sensitivity data)
    • Enterprise (Enterprise / Team / Workspace): ✅ Has data isolation contract
    • Paid Personal: ⚠️ Training is off by default, but the terms reserve exceptions
    • Free / Unknown: ⚠️ Conversations typically used to improve services
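Because each layer overrides everything below it, the whole chain reduces to a sequence of early returns. The following is a minimal sketch of that logic (function and value names are assumptions; the real implementation lives in src/data/decision.ts):

```typescript
type Verdict = "ok" | "caution" | "risk"; // ✅ / ⚠️ / ❌

// Data types that trigger the Layer-2 sensitivity override.
const SENSITIVE = new Set([
  "customer-pii", "order-records", "financial", "business-secret", "hr",
]);

// Hypothetical sketch of the 3-layer decision chain.
function decide(tool: string, tier: string, dataType: string): Verdict {
  // Layer 1 -- tool override (most decisive)
  if (tool === "local-open-source") return "ok"; // data never leaves the device
  if (tool === "regional-restricted" && dataType !== "public") return "risk";

  // Layer 2 -- data sensitivity override
  if (SENSITIVE.has(dataType)) return "risk"; // regardless of tool/tier
  if (dataType === "code") return tier === "enterprise" ? "ok" : "caution";

  // Layer 3 -- account tier (low-sensitivity data only)
  return tier === "enterprise" ? "ok" : "caution";
}
```

The ordering is the point: a local model short-circuits to ✅ before data sensitivity is even considered, and sensitive data short-circuits to ❌ before account tier matters.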

Why "Data Type" is the Most Decisive Axis

Traditional "LLM security concern" discussions focus on "which model is safer", but this framing misses the point. OpenAI / Anthropic / Google's enterprise DPAs are largely similar in substance. What truly determines risk is what the user puts in: pasting customer ID numbers into "the safest" ChatGPT Enterprise can still violate GDPR / PDPA, while pasting public market news into free ChatGPT carries low actual risk.

Why "Regional-Compliance-Restricted Models" is a Separate Category

DeepSeek / Qwen / ERNIE / Doubao / Kimi and similar services operated by Chinese companies are not flagged because of model quality issues, but because of the legal geography of data flow. Under regulations like the Data Security Law, Personal Information Protection Law, and Cybersecurity Law, these companies may be required to provide user data to local regulatory authorities. If your enterprise contract doesn't explicitly handle cross-border data flow, sending data to these services creates compliance risk.

If you still want these models' capabilities, safer alternatives include:

  • Self-hosting open-weights versions (many have Apache / MIT licenses)
  • Switching to services with EU / US / TW data sovereignty guarantees (Anthropic / OpenAI / Google)

The ❌ Verdict's Escape Hatch

All ❌ RISK results include this note: "Unless you're using an enterprise private instance (Azure OpenAI Private Deployment / Vertex AI Private Endpoint / self-hosted inference), this verdict does not apply."

Reason: This tool's scope is "public-internet LLM services". If your company has procured / built a private deployment where inference happens within infrastructure you control (data doesn't leave your perimeter), many "do-not-send" restrictions don't apply. In that case, your company's GenAI policy supersedes our verdict.

Reference Citations + Timestamps

Every reference link in result cards carries a snapshot_date (currently 2026-05), meaning "we judged based on the terms as of that date". LLM vendor terms change over time; we recommend checking key links every 6 months. We periodically review and re-publish.
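The 6-month re-check cadence can be enforced mechanically. A sketch of a staleness check (the interface shape and function name are assumptions; the actual reference data lives in src/data/tools.ts):

```typescript
// Hypothetical shape of a reference link with its snapshot date.
interface Reference {
  url: string;
  snapshotDate: string; // "YYYY-MM", e.g. "2026-05"
}

// True when a snapshot is older than six months and the link
// should be re-verified against the vendor's current terms.
function isStale(ref: Reference, today: Date = new Date()): boolean {
  const [year, month] = ref.snapshotDate.split("-").map(Number);
  // Date#getMonth() is zero-based, so add 1 to compare calendar months.
  const ageMonths =
    (today.getFullYear() - year) * 12 + (today.getMonth() + 1 - month);
  return ageMonths > 6;
}
```

A check like this could run in CI so that a re-publish is prompted automatically rather than by calendar discipline.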

Limitations (What This Tool Cannot Do)

  • This tool is not legal advice. For specific contract / cross-border compliance / personal data liability questions, consult a legal professional.
  • The 3 axes cover most common scenarios but edge cases exist (e.g., "I'm using ChatGPT to summarize anonymized but unpublished operational trends") that require human judgment.
  • The decision logic is rule-based; it doesn't change based on individual context. For personalized clarification, use the textarea on the result page — Claude Haiku will provide context (without overturning the rule-based verdict).
  • References use LLM vendors' publicly available documents. If your company has a separate procurement contract (OpenAI Enterprise Agreement, Google Workspace Enterprise Plus, etc.), actual terms may be more lenient or stricter than the public version.

Source Code

Decision logic lives in src/data/decision.ts, tool reference links in src/data/tools.ts, and full unit tests in tests/decision.test.ts. PRs welcome to add tools, fix references, or add edge-case handling.

Last updated: 2026-05-06. Verdicts based on 2026-05 publicly available terms from each LLM vendor.