Capability · Framework — orchestration
Burr
Burr treats an LLM app as a finite state machine — each Action is a node that reads state, runs logic, and writes state. That makes the flow explicit, testable, and resumable: the same code runs in dev, in prod, and inside the Burr UI that renders the graph and traces state transitions. It's a lightweight alternative to LangGraph for teams who prefer pure-Python ergonomics.
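The action/state model is easy to picture in plain Python. The sketch below does not use Burr's API at all; it is a minimal, illustrative re-creation of the idea the paragraph describes (actions that read and write keyed state, plus an explicit transition table), with all names invented for the example.

```python
# Conceptual sketch (plain Python, NOT the Burr API): each action reads
# some state keys, computes, and writes others; a transition table picks
# the next node. All function and variable names here are illustrative.
State = dict

def build_prompt(state: State) -> State:
    # reads: question; writes: prompt
    return {**state, "prompt": f"Q: {state['question']}"}

def answer(state: State) -> State:
    # reads: prompt; writes: response
    return {**state, "response": f"echo: {state['prompt']}"}

# Explicit graph: node -> next node; None halts the machine.
actions = {"build_prompt": build_prompt, "answer": answer}
transitions = {"build_prompt": "answer", "answer": None}

def run(start: str, state: State) -> State:
    node = start
    while node is not None:
        state = actions[node](state)   # run the action against state
        node = transitions[node]       # follow the explicit edge
    return state

final = run("build_prompt", {"question": "hi"})
print(final["response"])  # echo: Q: hi
```

Because the graph and the state are both explicit data, each action can be unit-tested in isolation and the whole run can be replayed from any intermediate state, which is the property Burr's UI and persistence build on.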
Framework facts
- Category: orchestration
- Language: Python
- License: BSD-3-Clause
- Repository: https://github.com/dagworks-inc/burr
Install
pip install 'burr[start]'
burr  # launch the local UI at http://localhost:7241
Quickstart
from burr.core import ApplicationBuilder, State, action

@action(reads=['prompt'], writes=['response'])
def answer(state: State) -> State:
    return state.update(response=f'echo: {state["prompt"]}')

app = (
    ApplicationBuilder()
    .with_actions(answer)
    .with_transitions(('answer', 'answer'))
    .with_entrypoint('answer')
    .with_state(prompt='hello')
    .build()
)
action_, result, state = app.run(halt_after=['answer'])
print(state['response'])
Alternatives
- LangGraph — larger ecosystem, LangChain-tied
- Prefect — general workflow engine with LLM support
- Hamilton — DAGWorks sibling for dataflow
- Temporal — durable workflow alternative
Frequently asked questions
Burr vs LangGraph?
Both model LLM apps as graphs/state machines. Burr is pure Python with a lighter abstraction and a polished local UI; LangGraph is tied to the LangChain ecosystem with deeper built-in agent primitives. Choose based on how invested you already are in LangChain.
Does Burr replace my orchestration framework?
Usually not — it complements them. Burr owns the in-request state machine (the steps taken to serve one request), while something like Prefect or Airflow owns scheduled pipelines that run over many items. They compose well.
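One way to see the split is a scheduled job that fans out over a dataset while each item still runs through its own small state machine. This is a hedged, plain-Python sketch with hypothetical names; the real versions would be a Prefect/Airflow task on the outside and a Burr `Application` run on the inside.

```python
# Illustrative composition (hypothetical names, no real framework calls):
# the scheduler-level job owns batching; each item runs an in-request
# state machine, the part Burr would own.
def handle_request(prompt: str) -> str:
    # stand-in for running one Burr application over one request's state
    state = {"prompt": prompt}
    state["response"] = f"echo: {state['prompt']}"
    return state["response"]

def nightly_batch(prompts: list[str]) -> list[str]:
    # stand-in for a scheduled pipeline task iterating a dataset
    return [handle_request(p) for p in prompts]

print(nightly_batch(["a", "b"]))  # ['echo: a', 'echo: b']
```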
Sources
- Burr — docs — accessed 2026-04-20
- Burr GitHub — accessed 2026-04-20