
Helicone

Helicone started as a one-line proxy for OpenAI logging and has grown into a full observability platform for LLM apps. You change one base URL and get per-user cost tracking, request-level traces, prompt versioning, rate limit monitoring, and eval runs. The open-source edition is fully self-hostable, which is why Helicone shows up frequently in startups that need logging without sending prompts to a third party.

Framework facts

Category
observability
Language
TypeScript (core), with Python and Node SDKs
License
Apache 2.0 + commercial
Repository
https://github.com/Helicone/helicone

Install

# No SDK needed — just change base URL
pip install openai
# or use the helper SDK:
pip install helicone

Quickstart

from openai import OpenAI

# Route requests through Helicone's proxy: only the base URL and one
# auth header change; the rest of the OpenAI client usage is untouched.
client = OpenAI(
    base_url='https://oai.helicone.ai/v1',
    api_key='sk-...',  # your OpenAI key
    default_headers={'Helicone-Auth': 'Bearer sk-helicone-...'}  # your Helicone key
)
resp = client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'hi'}]
)
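Request-level metadata rides on headers as well. A hedged sketch, assuming a hypothetical helper that maps keyword arguments onto Helicone's `Helicone-Property-*` custom-property headers (the helper is ours; the header prefix is Helicone's documented mechanism for tagging requests):

```python
def with_properties(base_headers: dict, **props: str) -> dict:
    """Return a copy of headers with Helicone custom-property headers added.

    Helicone records any 'Helicone-Property-<Name>' header as searchable
    metadata on the request, e.g. an app name or session id.
    """
    headers = dict(base_headers)
    for name, value in props.items():
        # session_id -> Helicone-Property-Session-Id
        headers[f"Helicone-Property-{name.replace('_', '-').title()}"] = value
    return headers
```

These headers can also be set per request via `extra_headers` on an individual `create(...)` call rather than on the client.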

Alternatives

  • LangSmith — LangChain's observability suite
  • Langfuse — open-source alternative, tighter framework integration
  • Portkey — gateway with more guardrail features
  • Braintrust — eval-focused platform

Frequently asked questions

Proxy logging or SDK instrumentation — which is better?

Proxy logging (Helicone's default) is a one-line change and captures every request, but adds a network hop on the request path. SDK/OTel instrumentation (Langfuse, LangSmith) requires more integration work but gives richer traces across tools and chains. Many teams use both.
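The trade-off can be seen in a conceptual sketch of proxy-style logging (this is illustrative, not Helicone's code): the proxy records metadata around an upstream call it forwards unchanged, whereas SDK instrumentation logs from inside the application.

```python
import time

def proxied_call(forward, log, payload):
    """Log around an upstream call, the way a logging proxy would.

    forward: callable performing the upstream request (the extra hop)
    log:     callable receiving one record per request
    """
    start = time.monotonic()
    response = forward(payload)
    log({
        "payload": payload,
        "response": response,
        "latency_s": time.monotonic() - start,
    })
    return response
```

The application never changes, which is the appeal; the cost is that every request now transits the proxy.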

Will self-hosting work for compliance?

Yes. Helicone's self-hosted edition runs on your own infrastructure, so prompts and completions never leave your network. It's a common pick for healthcare and finance teams that need request logging but can't send prompts to a third-party SaaS.
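For the self-hosted case, the only change on the application side is the base URL. A minimal sketch, assuming an environment variable name of our choosing (`HELICONE_BASE_URL`) and a hypothetical internal hostname:

```python
import os

def resolve_base_url() -> str:
    """Pick the Helicone proxy endpoint: a self-hosted gateway if
    configured, otherwise Helicone's hosted OpenAI proxy."""
    return os.environ.get("HELICONE_BASE_URL", "https://oai.helicone.ai/v1")
```

Point `HELICONE_BASE_URL` at your self-hosted deployment (e.g. `http://helicone.internal/v1`, a placeholder) and pass the result as `base_url` to the OpenAI client.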
