Portkey
Portkey is a production AI gateway aimed at enterprises running many LLM calls. It adds semantic caching, automatic retries with fallback providers, PII redaction and policy guardrails, cost budgets per team, and a prompt library — all fronted by an OpenAI-compatible endpoint so existing code requires minimal changes. Deployable as SaaS or self-hosted.
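The automatic-retries-with-fallback behaviour can be illustrated with a minimal sketch in plain Python. This is not Portkey's implementation — the provider names and `call_fn` stubs are hypothetical — it just shows the control flow a gateway applies on your behalf: retry each provider a few times with backoff, then fall through to the next one.

```python
import time

def call_with_fallback(providers, prompt, max_retries=2, backoff=0.0):
    """Try each provider in order; retry transient failures before falling back.

    `providers` is a list of (name, call_fn) pairs; call_fn raises on failure.
    """
    errors = []
    for name, call_fn in providers:
        for attempt in range(max_retries + 1):
            try:
                return name, call_fn(prompt)
            except Exception as exc:  # a real gateway would catch timeouts/5xx only
                errors.append((name, attempt, exc))
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"all providers failed: {errors}")

# Stub providers: the primary always times out, the fallback answers.
def flaky(prompt):
    raise TimeoutError("upstream timeout")

def stable(prompt):
    return f"echo: {prompt}"

used, out = call_with_fallback([("primary", flaky), ("fallback", stable)], "hi")
```

With a real gateway this loop lives server-side, so the client sees a single request that quietly succeeded on the fallback provider.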
Framework facts
- Category: orchestration
- Language: API + Python / Node SDKs
- License: MIT + commercial
- Repository: https://github.com/Portkey-AI/gateway
Install

```shell
pip install portkey-ai
# or
npm install portkey-ai
```

Quickstart
```python
from portkey_ai import Portkey

pk = Portkey(
    api_key='pk-...',
    virtual_key='anthropic-prod'
)

resp = pk.chat.completions.create(
    model='claude-opus-4-7',
    messages=[{'role': 'user', 'content': 'Summarise MCP.'}]
)
print(resp.choices[0].message.content)
```

Alternatives
- LiteLLM — open-source self-hosted alternative
- Helicone — observability-led gateway
- OpenRouter — hosted router without guardrails
- Cloudflare AI Gateway — edge-deployed alternative
Frequently asked questions
How is Portkey different from LiteLLM?
Both proxy LLM calls. Portkey leans commercial with a polished UI, prompt library, and guardrail policies out of the box. LiteLLM is MIT-licensed and more DIY. Many teams use LiteLLM for the transport layer and Portkey for prompt management, or pick one and commit.
Do guardrails actually block bad outputs?
Portkey's guardrails run checks (PII regex, LLM-as-judge, schema validation) and can retry, rewrite, or reject responses before they reach your user. They reduce risk but are not a replacement for testing — use them with offline evals.
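The check-then-decide pattern described above can be sketched with the standard library. This is an illustrative stand-in, not Portkey's guardrail engine: the regexes and the `schema_keys` check are simplified assumptions, standing in for the PII and schema validators a gateway would run before a response reaches the user.

```python
import json
import re

# Toy PII patterns; production guardrails use broader detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]*\w"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def guard(text, schema_keys=None):
    """Return (verdict, text): 'pass', 'rewrite' (PII redacted), or 'reject'."""
    redacted = text
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"<{label}>", redacted)
    if schema_keys is not None:
        # Schema validation: response must be JSON with the required keys.
        try:
            obj = json.loads(redacted)
        except ValueError:
            return "reject", redacted
        if not all(key in obj for key in schema_keys):
            return "reject", redacted
    verdict = "rewrite" if redacted != text else "pass"
    return verdict, redacted

# An email address is redacted before the response is returned.
verdict, out = guard("Contact alice@example.com for access.")
```

A real deployment chains several such checks (regex, LLM-as-judge, schema) and can trigger a retry against the model instead of rejecting outright.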
Sources
- Portkey — docs — accessed 2026-04-20
- Portkey Gateway (OSS) — accessed 2026-04-20