LLM Guard
LLM Guard bundles dozens of input and output scanners — PromptInjection, Anonymize, Secrets, Toxicity, Bias, Regex, Sensitive — behind a simple Python API. You pass user prompts through scan_prompt() and model outputs through scan_output(); the library returns a sanitised string you can forward, plus per-scanner validity flags and risk scores.
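The per-scanner contract — a sanitised string, a validity flag, and a risk score — can be illustrated with a toy stand-in. This is a hand-rolled sketch of the pattern, not LLM Guard's own code; the function name and threshold are invented for illustration:

```python
import re

def toy_secrets_scanner(prompt: str, threshold: float = 0.5):
    """Toy stand-in for a Secrets-style scanner: redact anything that
    looks like a password assignment and report a risk score."""
    pattern = re.compile(r"(password\s+)(\S+)", re.IGNORECASE)
    sanitized, hits = pattern.subn(r"\1[REDACTED]", prompt)
    risk_score = 1.0 if hits else 0.0   # crude: any hit is maximum risk
    is_valid = risk_score < threshold   # flag the prompt when risk is too high
    return sanitized, is_valid, risk_score

sanitized, ok, score = toy_secrets_scanner("my password hunter2")
print(sanitized)   # my password [REDACTED]
print(ok, score)   # False 1.0
```

LLM Guard's real scanners follow the same shape, so a pipeline can chain them and short-circuit as soon as one marks the prompt invalid.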
Framework facts
- Category: orchestration
- Language: Python
- License: MIT
- Repository: https://github.com/protectai/llm-guard
Install
pip install llm-guard
Quickstart
from llm_guard import scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Secrets
scanners = [Anonymize(), PromptInjection(), Secrets()]
safe_prompt, results, scores = scan_prompt(scanners, 'my email is [email protected] and password hunter2')
print(safe_prompt)  # my email is [REDACTED_EMAIL] ...
Alternatives
- NeMo Guardrails — programmable rails
- Guardrails AI — RAIL spec
- Presidio — Microsoft PII engine
- Prompt Shield (Azure AI Content Safety)
Frequently asked questions
How heavy are the scanners?
Lightweight scanners (regex, secrets) run in milliseconds. ML-based scanners (prompt injection, toxicity) load transformer models and benefit from a GPU.
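To give a sense of the regex-class cost, here is a quick stdlib timing sketch. The pattern is a stand-in (an AWS-style key ID matcher), not LLM Guard's Secrets scanner, and absolute numbers depend on your hardware:

```python
import re
import time

# Stand-in for a lightweight regex-class scanner: AWS-style access key IDs.
SECRET_RE = re.compile(r"AKIA[0-9A-Z]{16}")

def scan(text: str) -> bool:
    """Return True when the text contains a key-shaped secret."""
    return SECRET_RE.search(text) is not None

prompt = "please use key AKIAABCDEFGHIJKLMNOP to fetch the bucket"
start = time.perf_counter()
for _ in range(1000):
    scan(prompt)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"1000 scans in {elapsed_ms:.1f} ms")
```

A transformer-based scanner, by contrast, pays model-load time up front and per-token inference cost on every call, which is why the ML scanners are the ones worth putting on a GPU.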
Can I deploy LLM Guard as a service?
Yes. The repo ships a FastAPI wrapper and Docker image so you can run it as an HTTP microservice in front of your model gateway.
Sources
- LLM Guard — GitHub — accessed 2026-04-20
- LLM Guard — docs — accessed 2026-04-20