OpenLLMetry (Traceloop)
OpenLLMetry is an OSS project by Traceloop that extends OpenTelemetry's semantic conventions to cover LLM calls, vector stores, and agent traces. You drop in one SDK and get structured spans for OpenAI, Anthropic, Bedrock, LangChain, LlamaIndex, Pinecone, and dozens more, exported to any OTLP collector (Datadog, Honeycomb, Grafana Tempo, Traceloop's own UI, or Langfuse/Phoenix). It is emerging as the vendor-neutral default instrumentation layer for LLM observability.
Framework facts
- Category
- observability
- Language
- Python / TypeScript / Go
- License
- Apache-2.0
- Repository
- https://github.com/traceloop/openllmetry
Install
pip install traceloop-sdk
Quickstart
from traceloop.sdk import Traceloop
Traceloop.init(app_name='my-agent') # auto-exports via OTLP
from openai import OpenAI
client = OpenAI()  # no wrapper needed; the SDK auto-instruments the client
response = client.chat.completions.create(model='gpt-4o', messages=[{'role': 'user', 'content': 'hi'}])
# → trace appears in Traceloop, Datadog, Honeycomb, etc.
Alternatives
- Langfuse — OSS but Langfuse-specific format
- Arize Phoenix — OSS
- Helicone
Frequently asked questions
OpenLLMetry or Langfuse?
Use OpenLLMetry when you already have an OpenTelemetry stack or want vendor-portable data. Use Langfuse when you want an opinionated LLM-specific UI out of the box. Many teams use both — OpenLLMetry for ingestion, Langfuse or Traceloop Cloud for the UI.
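Because the exported data is plain OTLP, "use both" is largely a configuration choice: point the exporter at whichever backend should receive the spans. A hedged sketch using standard OpenTelemetry exporter environment variables; the endpoint URLs and header value below are illustrative placeholders, not real credentials, so check your backend's docs for the actual OTLP ingest path:

```shell
# Route OpenLLMetry spans to an alternate OTLP backend via standard OTel env vars.
# Values are placeholders; substitute your backend's endpoint and auth header.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-backend.example.com/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-token>"
```

No application code changes are needed; the SDK picks these up at Traceloop.init() time.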
Is this affiliated with OpenTelemetry the project?
OpenLLMetry aligns with OpenTelemetry's GenAI semantic conventions working group; Traceloop actively contributes, and the conventions are converging.
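Concretely, those conventions are standardized attribute names on spans. As a rough illustration (the gen_ai.* attribute names follow the OTel GenAI semantic conventions; the plain dict and its values are stand-ins for a real span, invented for this example):

```python
# Illustrative attributes a chat-completion span might carry under the
# OpenTelemetry GenAI semantic conventions. The dict is a stand-in for a
# real OTel span; the values are made up for the example.
span_attributes = {
    "gen_ai.system": "openai",                      # which LLM provider
    "gen_ai.request.model": "gpt-4o",               # model the caller requested
    "gen_ai.response.model": "gpt-4o-2024-08-06",   # model that actually answered
    "gen_ai.usage.input_tokens": 9,                 # prompt tokens
    "gen_ai.usage.output_tokens": 12,               # completion tokens
}

# Every key shares the gen_ai namespace, so any OTLP backend can query or
# aggregate on these attributes without vendor-specific parsing.
namespaces = {key.split(".")[0] for key in span_attributes}
print(namespaces)  # → {'gen_ai'}
```

Because the names are shared across instrumented providers, a dashboard built on gen_ai.usage.input_tokens works the same whether the span came from an OpenAI or an Anthropic call.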
Sources
- OpenLLMetry GitHub — accessed 2026-04-20
- Traceloop docs — accessed 2026-04-20