Capability · Framework — observability

Laminar

Laminar (lmnr.ai) is a YC-backed open-source LLM engineering platform that combines tracing, online evaluation, prompt management, and dataset building in one tool. The data plane is written in Rust and runs on Postgres, ClickHouse, and Qdrant, so self-hosted deployments scale well. It emphasises online evals (judge LLMs scoring production traffic) and agent-graph visualisation for LangGraph-style agents.

Framework facts

Category
observability
Language
Rust (server) / Python & TypeScript (SDKs)
License
Apache-2.0
Repository
https://github.com/lmnr-ai/lmnr

Install

pip install lmnr

Quickstart

from lmnr import Laminar
from openai import OpenAI

# Initialize before making LLM calls so they are auto-instrumented.
Laminar.initialize(project_api_key='LMNR_KEY')  # replace with your project API key

client = OpenAI()
client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'hi'}],
)
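Beyond auto-instrumented LLM calls, the SDK lets you group work into custom spans with an `observe` decorator. A minimal sketch follows; the `name` keyword and decorator usage reflect the Laminar docs, and the snippet falls back to a no-op decorator so it runs even where the `lmnr` package is not installed:

```python
# Sketch: wrapping a function in a custom Laminar span.
# Falls back to a no-op decorator when lmnr is unavailable,
# so the example stays runnable in any environment.
try:
    from lmnr import observe
except ImportError:
    def observe(name=None):
        def wrap(fn):
            return fn  # no-op fallback: just return the function unchanged
        return wrap

@observe(name="summarize")
def summarize(text: str) -> str:
    # Any instrumented LLM calls made here would appear as
    # child spans of the "summarize" span in the trace view.
    return text[:40]

summarize("A long document to summarize...")
```

With `Laminar.initialize` called first, each invocation of `summarize` produces a named span in the project's trace timeline.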

Alternatives

  • Langfuse — OSS
  • Langtrace — OTel-native OSS
  • Arize Phoenix — OSS

Frequently asked questions

Why a Rust server?

High ingestion throughput for production traces and low-latency dashboards. Laminar can sustain tens of thousands of spans/sec on modest hardware, a load that Python-based ingestion services struggle to match.

Does Laminar support online evaluators?

Yes — you write Python eval functions (LLM-as-judge or custom) that run asynchronously on every trace, and Laminar charts the results over time so you can alert on regressions.
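The general shape of such an evaluator can be sketched in plain Python. This is illustrative only: `correctness_eval` and `call_judge` are hypothetical names, and the 0-to-1 scoring convention is an assumption, not Laminar's required signature:

```python
# Sketch: the shape of an LLM-as-judge evaluator function.
# `call_judge` stands in for a real judge-LLM call (hypothetical);
# without one, the function falls back to a simple exact-match heuristic.

def correctness_eval(output: str, target: str, call_judge=None) -> float:
    """Score `output` against `target` on a 0-1 scale."""
    prompt = (
        "Rate how well the answer matches the reference, from 0 to 1.\n"
        f"Answer: {output}\nReference: {target}\nScore:"
    )
    if call_judge is None:
        # Fallback heuristic when no judge LLM is wired in.
        return 1.0 if output.strip().lower() == target.strip().lower() else 0.0
    # A real judge returns a numeric score as text; parse it.
    return float(call_judge(prompt))
```

Registered as an online evaluator, a function like this would run asynchronously against each incoming trace, and its scores would feed the time-series charts and alerts described above.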

Sources

  1. Laminar docs — accessed 2026-04-20
  2. Laminar GitHub — accessed 2026-04-20