Capability · Framework — observability
Langtrace
Langtrace (by Scale3 Labs) is an open-source LLM observability platform that speaks OpenTelemetry end to end. Its SDK captures spans for LLM calls, vector databases, and frameworks such as LangChain, LlamaIndex, and CrewAI, and ships them to either Langtrace Cloud or any OTel-compatible collector. Differentiators include built-in evals, a prompt playground, and Anthropic computer-use support.
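"OTel-native" means each LLM call becomes an ordinary OpenTelemetry span whose attributes follow the `gen_ai` semantic conventions. A minimal stdlib-only sketch of what such a span record might carry; the attribute keys come from the OTel conventions, and the exact set Langtrace emits may differ:

```python
from dataclasses import dataclass, field

@dataclass
class LLMSpan:
    """Illustrative span record for one LLM call (not Langtrace's internal type)."""
    name: str
    attributes: dict = field(default_factory=dict)

# Attribute keys follow the OTel gen_ai semantic conventions
span = LLMSpan(
    name="openai.chat.completions.create",
    attributes={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4o",
        "gen_ai.usage.input_tokens": 9,
        "gen_ai.usage.output_tokens": 12,
    },
)

print(span.attributes["gen_ai.request.model"])  # -> gpt-4o
```

Because the span is plain OTel data, any backend that ingests OTLP can index and query these attributes without a Langtrace-specific schema.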
Framework facts
- Category
- observability
- Language
- Python / TypeScript
- License
- AGPL-3.0
- Repository
- https://github.com/Scale3-Labs/langtrace
Install
pip install langtrace-python-sdk
# Self-host the UI
git clone https://github.com/Scale3-Labs/langtrace && cd langtrace && docker compose up
Quickstart
from langtrace_python_sdk import langtrace

# Initialize before importing the LLM client so its calls get instrumented
langtrace.init(api_key='LANGTRACE_KEY')

from openai import OpenAI

OpenAI().chat.completions.create(model='gpt-4o', messages=[{'role': 'user', 'content': 'hi'}])
Alternatives
- Langfuse — OSS
- Arize Phoenix — OSS
- OpenLLMetry — OTel SDK
Frequently asked questions
Langtrace or Langfuse?
Both are OSS and solid. Langtrace is OTel-native (any OTel backend works), while Langfuse has a slightly richer evals UI. If you're already on OpenTelemetry, Langtrace slots in more cleanly.
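"Any OTel backend works" is mostly a configuration question: route traces through a local OTel collector using the standard OTLP exporter environment variables. These variables are generic OpenTelemetry, not Langtrace-specific; whether the Langtrace SDK reads them directly (versus taking a host via its own `init()` options) is an assumption to verify against its docs:

```shell
# Standard OTLP exporter configuration honored by OTel-native SDKs and collectors
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```

With a collector listening on that endpoint, you can fan spans out to Jaeger, Grafana Tempo, or any other OTLP-capable backend.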
Is AGPL a problem?
Only if you offer Langtrace itself as a public SaaS; in-company or self-hosted use is fine. An enterprise license is available.
Sources
- Langtrace docs — accessed 2026-04-20
- Langtrace GitHub — accessed 2026-04-20