W&B Weave

Weave is W&B's purpose-built product for generative-AI workflows, complementing its classic ML experiment-tracking tools. A single `@weave.op` decorator is enough to capture inputs, outputs, and latency for every call in your chain, whether that call is an LLM request, a tool invocation, or a retrieval step. Evaluations and datasets layer on top, tightly integrated with W&B's experiment dashboard.
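
To see why a single decorator is enough, here is a conceptual sketch in plain Python of what such a decorator captures per call: inputs, output, and latency. This is not Weave's implementation (Weave also handles nesting, async, and upload to the hosted platform); the `traced` name and the `retrieve` example function are illustrative only.

```python
import functools
import time

def traced(fn):
    """Conceptual stand-in for @weave.op: record inputs, output, latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)        # run the wrapped function
        latency_ms = (time.perf_counter() - start) * 1000
        # A real tracer would ship this record to a backend instead of printing
        print(f"{fn.__name__} args={args} kwargs={kwargs} "
              f"-> {result!r} ({latency_ms:.2f} ms)")
        return result
    return wrapper

@traced
def retrieve(query: str) -> list:
    """Toy retrieval step; any callable works the same way."""
    return ["doc-1", "doc-2"]

retrieve("weave")
```

Because the wrapper is transparent (it returns the original result unchanged), decorating a function changes nothing about how callers use it, which is what makes the one-decorator model low-friction.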

Framework facts

Category
observability
Language
Python / TypeScript
License
Apache-2.0 (SDK) / SaaS
Repository
https://github.com/wandb/weave

Install

pip install weave
wandb login   # authenticates the SDK against your W&B account

Quickstart

import weave
from openai import OpenAI

weave.init('my-llm-app')   # start a Weave project; traces stream to W&B

@weave.op                  # capture inputs, output, and latency per call
def answer(q: str) -> str:
    resp = OpenAI().chat.completions.create(   # requires OPENAI_API_KEY
        model='gpt-4o-mini',
        messages=[{'role': 'user', 'content': q}]
    )
    return resp.choices[0].message.content

print(answer('hello'))
# see trace at https://wandb.ai/your-entity/my-llm-app/weave

Alternatives

  • Arize Phoenix — open-source equivalent
  • LangSmith — LangChain-native alternative
  • Langfuse — self-hostable open-source
  • Humanloop — prompt-IDE first

Frequently asked questions

Is Weave only for LLM apps?

It's optimised for LLMs and agents but tracks any Python function. Teams frequently use Weave to observe traditional ML scoring pipelines alongside generative calls.
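
As a minimal sketch of that non-LLM use, the snippet below decorates a traditional IR metric. The `precision_at_k` function and its data are invented for illustration; the try/except fallback is only there so the snippet runs even where `weave` is not installed (with Weave installed and `weave.init` called, each invocation would be traced like any other op).

```python
# Weave tracks any Python function, not just LLM calls.
try:
    import weave
    op = weave.op          # real tracing when the SDK is available
except ImportError:
    op = lambda f: f       # no-op fallback so the sketch runs anywhere

@op
def precision_at_k(relevant: set, retrieved: list, k: int = 5) -> float:
    """Classic retrieval metric; Weave records its inputs and outputs."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k

print(precision_at_k({"a", "b"}, ["a", "x", "b", "y", "z"]))  # 0.4
```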

Is the SDK open-source?

Yes — the SDK is Apache-2.0 on GitHub. The storage, dashboards, and collaboration features live in the hosted W&B platform (free and paid tiers available).

Sources

  1. Weave — docs — accessed 2026-04-20
  2. Weave GitHub — accessed 2026-04-20