AI Privacy Tool Playground

Test prompt sanitization before production traffic.

Simulate Kamora Privacy Guard behavior: detect sensitive values, map them to safe tokens, and return a clean payload ready for LLM processing.
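The detect-and-tokenize flow described above can be sketched in a few lines. This is a minimal illustration, not Kamora Privacy Guard's actual implementation (which is not public): the regex patterns are assumptions that approximate email, phone, and employee-ID detectors, and PERSON detection is omitted because names typically require an NER model rather than a regex.

```python
import re

# Hypothetical detector patterns -- placeholders for the real detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d ()\-]{7,}\d"),
    "ID": re.compile(r"\b[A-Z]{2}-\d{6}\b"),
}

def sanitize(prompt):
    """Replace each detected value with a numbered token and record the mapping."""
    redaction_map = {}   # token -> original value
    counters = {}        # per-type counters for numbering tokens
    def tokenize(kind, value):
        counters[kind] = counters.get(kind, 0) + 1
        token = f"[{kind}_{counters[kind]}]"
        redaction_map[token] = value
        return token
    clean = prompt
    for kind, pattern in PATTERNS.items():
        clean = pattern.sub(lambda m, k=kind: tokenize(k, m.group()), clean)
    return clean, redaction_map
```

The returned `clean` string is the payload safe to forward to the LLM, while `redaction_map` stays on your side of the boundary so the originals never leave your environment.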

Raw Prompt

Create a support escalation for Dr. Elif Demir, email [email protected] and call +1 (415) 555-9123. Use employee id AB-938271 and include summary for Sarah Lee.

Sanitized Output

Create a support escalation for [PERSON_1], email [EMAIL_1] and call [PHONE_1]. Use employee id [ID_1] and include summary for [PERSON_2].

PII Detections: 5

Policy State: Safe to forward

Redaction Map

PERSON: 2
EMAIL: 1
PHONE: 1
ID: 1

Type     Token        Original
EMAIL    [EMAIL_1]    [email protected]
PHONE    [PHONE_1]    +1 (415) 555-9123
ID       [ID_1]       AB-938271
PERSON   [PERSON_1]   Dr. Elif Demir
PERSON   [PERSON_2]   Sarah Lee
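After the LLM responds, the redaction map above lets you swap the safe tokens back to the original values before showing the result to the user. A minimal sketch, assuming the map is a plain token-to-original dictionary like the table above:

```python
def restore(text, redaction_map):
    """Swap safe tokens in the LLM output back to their original values."""
    for token, original in redaction_map.items():
        text = text.replace(token, original)
    return text

# Example using two entries from the redaction map above.
redaction_map = {
    "[PHONE_1]": "+1 (415) 555-9123",
    "[PERSON_1]": "Dr. Elif Demir",
}
restore("Escalation created for [PERSON_1]; callback at [PHONE_1].", redaction_map)
```

Because the map never leaves your environment, the LLM only ever sees the tokens, and de-tokenization happens entirely on your side.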