AI Privacy Tool Playground
Test prompt sanitization before it sees production traffic.
Simulate Kamora Privacy Guard behavior: detect sensitive values, map them to safe tokens, and return a clean payload ready for LLM processing.
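That flow can be approximated with a short sketch. The detection patterns, token format, and field names below are illustrative assumptions for this playground, not the actual Kamora Privacy Guard rule set or API:

```python
import re

# Illustrative detection rules only -- the real Kamora Privacy Guard
# rule set is not shown on this page, so these two patterns are assumptions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def sanitize(prompt: str) -> dict:
    """Detect sensitive values, swap each for a safe token, and
    return a payload shaped like the playground's output panels."""
    redaction_map: dict[str, str] = {}
    sanitized = prompt
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(sanitized), start=1):
            token = f"<{label}_{i}>"
            redaction_map[token] = value
            # Replace one occurrence at a time so repeated values
            # each get their own token.
            sanitized = sanitized.replace(value, token, 1)
    return {
        "sanitized_prompt": sanitized,
        "pii_detections": len(redaction_map),
        # Every detected value was tokenized, so the payload is clean.
        "policy_state": "Safe to forward",
        "redaction_map": redaction_map,
    }
```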
The playground presents five panels (a hypothetical populated state is shown after this list):

- Raw Prompt: the prompt you paste in for testing.
- Sanitized Output: the same prompt with each sensitive value replaced by a safe token.
- PII Detections: a count of sensitive values found; it reads 0 until something matches.
- Policy State: whether the payload may be forwarded; "Safe to forward" means no unresolved detections remain.
- Redaction Map: the mapping from each safe token back to its original value; it shows "No sensitive pattern detected with current rules." while the prompt is clean.
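Running the sketch above on a prompt containing an email address would populate those panels as follows (hypothetical values):

```python
result = sanitize("Email jane.doe@example.com about the contract.")
# {
#     "sanitized_prompt": "Email <EMAIL_1> about the contract.",
#     "pii_detections": 1,
#     "policy_state": "Safe to forward",
#     "redaction_map": {"<EMAIL_1>": "jane.doe@example.com"},
# }
```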