LLM Security Toolbox

Client-side tools to help keep your prompts safe before you send them to an LLM, and to review outputs afterwards. Nothing is uploaded; all processing happens in your browser.

Tip: Use this before pasting into ChatGPT or any LLM.

1. PII Detector & Highlighter

Paste text and quickly highlight emails, phone numbers, card numbers, and IDs.

Detects: emails, phone numbers, card-like numbers, and ID-like tokens.
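Since everything runs client-side, detection of this kind is typically a handful of regular expressions. A minimal sketch of how the four categories above could be matched (the patterns here are illustrative assumptions, not the tool's actual regexes):

```typescript
// Illustrative PII detection sketch. Each category gets one rough regex;
// real-world patterns (especially phone numbers) need more care.
type PIIMatch = { kind: "email" | "phone" | "card" | "id"; value: string };

const PII_PATTERNS: Record<PIIMatch["kind"], RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  phone: /\+?\d[\d\s().-]{7,}\d/g,            // loose: 9+ digit-ish chars
  card: /\b(?:\d[ -]?){13,16}\b/g,            // 13-16 digits, optional separators
  id: /\b[A-Z]{2,4}-\d{3,}\b/g,               // e.g. ABC-1234
};

function detectPII(text: string): PIIMatch[] {
  const matches: PIIMatch[] = [];
  for (const kind of Object.keys(PII_PATTERNS) as PIIMatch["kind"][]) {
    for (const m of text.matchAll(PII_PATTERNS[kind])) {
      matches.push({ kind, value: m[0] });
    }
  }
  return matches;
}
```

A highlighter would then wrap each match in a styled span; the matching step itself is the part sketched here.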

2. Prompt Scrubber (Redact PII)

Automatically replace sensitive details with placeholders before sending to an LLM.

Placeholders: {EMAIL}, {PHONE}, {CARD}, {ID}
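The scrub step is detection plus substitution: each match is replaced with its placeholder. A sketch using the placeholders above (the regexes are assumptions, as before; rule order matters so that 13-16 digit card numbers are caught before the looser phone pattern):

```typescript
// Scrub sketch: replace detected PII with the tool's placeholders.
// Card runs before phone so long digit runs become {CARD}, not {PHONE}.
const SCRUB_RULES: Array<[RegExp, string]> = [
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "{EMAIL}"],
  [/\b(?:\d[ -]?){13,16}\b/g, "{CARD}"],
  [/\+?\d[\d\s().-]{7,}\d/g, "{PHONE}"],
  [/\b[A-Z]{2,4}-\d{3,}\b/g, "{ID}"],
];

function scrubPrompt(text: string): string {
  return SCRUB_RULES.reduce((t, [re, placeholder]) => t.replace(re, placeholder), text);
}
```

The placeholders keep the prompt readable for the LLM while removing the actual values.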

3. Sensitive Keywords Scanner

Detect secrets or sensitive business terms that shouldn't be in prompts.

Default keywords: password, apiKey, secret, token, iban, passport, salary, otp.
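A scan over that default list can be as simple as a case-insensitive substring check. A minimal sketch (whether the tool also matches word boundaries or variants like API_KEY is not shown here):

```typescript
// Case-insensitive keyword scan over the default list above.
const SENSITIVE_KEYWORDS = [
  "password", "apiKey", "secret", "token",
  "iban", "passport", "salary", "otp",
];

function scanKeywords(text: string): string[] {
  const lower = text.toLowerCase();
  // Returns the keywords found, in list order.
  return SENSITIVE_KEYWORDS.filter(k => lower.includes(k.toLowerCase()));
}
```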

4. Token / Size Estimator

Estimate roughly how big your prompt is before sending it to an LLM.

Approx tokens ~= max(words × 1.3, characters ÷ 4). This is a rough estimate, not model-accurate.
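The stated formula translates directly to code. One detail the formula leaves open is rounding; rounding up is an assumption here:

```typescript
// Approx tokens ~= max(words * 1.3, characters / 4), rounded up (rounding
// choice is an assumption; the formula itself is the tool's stated estimate).
function estimateTokens(text: string): number {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  const chars = text.length;
  return Math.ceil(Math.max(words * 1.3, chars / 4));
}
```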

5. Prompt Diff Viewer

See what changed between two prompt versions to catch hidden additions or prompt injection.

This is a simple line-based diff meant for quick checks. For deep reviews, use a proper diff tool.
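In the spirit of "simple line-based diff", a quick check can just compare the sets of lines in each version. This naive sketch (an assumption about the implementation; it ignores ordering and duplicate lines, which is exactly why deep reviews need a proper diff tool) is still enough to surface a newly added instruction:

```typescript
// Naive line-set diff: which lines appear only in one version.
function lineDiff(oldText: string, newText: string): { added: string[]; removed: string[] } {
  const oldLines = new Set(oldText.split("\n"));
  const newLines = new Set(newText.split("\n"));
  return {
    added: [...newLines].filter(l => !oldLines.has(l)),
    removed: [...oldLines].filter(l => !newLines.has(l)),
  };
}
```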

6. Data Classification Helper

Decide how sensitive your content is and whether it's safe to send to a cloud LLM.
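The tool's actual tiers and rules aren't listed, but a helper like this usually reduces to a few questions mapped to a tier. A hypothetical sketch, assuming a three-tier model:

```typescript
// Hypothetical classification rule of thumb (tiers and rules are assumptions,
// not the tool's actual policy): public < internal < confidential.
type Tier = "public" | "internal" | "confidential";

function classify(hasPII: boolean, hasSecrets: boolean, internalOnly: boolean): Tier {
  if (hasPII || hasSecrets) return "confidential"; // do not send to a cloud LLM
  if (internalOnly) return "internal";             // scrub and review first
  return "public";                                 // generally safe to send
}
```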

7. Policy Checklist

Quick pre-flight checklist based on your internal LLM usage policy.

8. Output Risk Evaluator

Paste an AI-generated answer and quickly assess sharing risk.
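An assessment like this can be built from the same signals the other tools detect. A hedged sketch with illustrative signals and weights (both are assumptions; the tool's actual scoring is not shown):

```typescript
// Illustrative risk score for an AI-generated answer: count simple signals.
// Patterns and weights are assumptions, not the tool's actual rules.
function outputRiskScore(answer: string): number {
  let score = 0;
  if (/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/.test(answer)) score += 2; // email present
  if (/https?:\/\/\S+/.test(answer)) score += 1;                                  // link present
  if (/\b(password|secret|token|apikey)\b/i.test(answer)) score += 2;             // secret-ish term
  return score; // 0 = low risk; higher scores warrant review before sharing
}
```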

9. URL & ID Extractor

Find links and ID-like tokens in your text that might leak internal systems.

IDs considered: patterns like ABC-1234, TKT-2024-001, etc.
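Both extractions are regex passes. A sketch matching URLs plus ID shapes like ABC-1234 and TKT-2024-001 (the exact patterns are assumptions):

```typescript
// Extract URLs and ID-like tokens (e.g. ABC-1234, TKT-2024-001).
function extractUrlsAndIds(text: string): { urls: string[]; ids: string[] } {
  const urls = text.match(/https?:\/\/[^\s)]+/g) ?? [];
  const ids = text.match(/\b[A-Z]{2,5}-\d+(?:-\d+)*\b/g) ?? [];
  return { urls, ids };
}
```

An internal hostname or ticket ID in a prompt can reveal more about your systems than the surrounding text, which is why both are worth surfacing before sending.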

10. Ephemeral Secure Scratchpad

A local-only note area that auto-clears after a short time so you don't leave sensitive data lying around.

This only runs in your browser. For extra safety, avoid using it on a shared device that others can access.
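The auto-clear behavior boils down to a note with an expiry. This sketch models it with an injectable clock so the logic is testable without timers (the real tool presumably uses setTimeout; this structure is an assumption):

```typescript
// Expiring note sketch: the note clears itself once the TTL has passed.
class EphemeralScratchpad {
  private note: string | null = null;
  private expiresAt = 0;

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  write(text: string): void {
    this.note = text;
    this.expiresAt = this.now() + this.ttlMs;
  }

  read(): string | null {
    // Auto-clear on access once expired.
    if (this.note !== null && this.now() >= this.expiresAt) this.note = null;
    return this.note;
  }
}
```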