DSPy
DSPy (Declarative Self-improving Python) from Stanford NLP treats LLM calls as programs rather than prompts. You specify behaviour with signatures and modules (Predict, ChainOfThought, ReAct), then DSPy's optimisers (MIPROv2, BootstrapFewShot, COPRO) compile the pipeline — tuning instructions and demonstrations against your metric. It's a compiler for LLM programs.
Framework facts
- Category: orchestration
- Language: Python
- License: MIT
- Repository: https://github.com/stanfordnlp/dspy
Install

```shell
pip install dspy
```

Quickstart
```python
import dspy

dspy.configure(lm=dspy.LM('anthropic/claude-opus-4-7'))

class QA(dspy.Signature):
    '''Answer a question with a short, factual response.'''
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

predict = dspy.ChainOfThought(QA)
print(predict(question='Capital of India?').answer)
```

Alternatives
- LangChain — prompt templates instead of optimisation
- Instructor — structured outputs without optimisation
- Outlines — grammar-constrained generation
- Mirascope — typed prompt framework
Frequently asked questions
How is DSPy different from LangChain?
LangChain composes hand-written prompts into chains. DSPy treats prompts as learnable — you write a Python program with declared I/O and DSPy's optimisers search for the best instructions and few-shot examples given a metric and dataset.
Do I need training data to use DSPy?
Not to run DSPy modules — they work zero-shot. To use the optimisers effectively you need a small set of examples (often tens to a few hundred) and a metric that scores outputs: exact match, an LLM-as-judge, or a custom function.
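As a sketch of what such a metric looks like: DSPy metrics are plain callables taking an example with gold fields and a prediction, returning a score. The `Example` stand-in below is a hypothetical substitute for `dspy.Example` so the snippet runs offline; the optimiser call is shown in comments for shape only and assumes a configured LM and a trainset.

```python
class Example:
    """Stand-in for dspy.Example so the metric is runnable offline."""
    def __init__(self, answer):
        self.answer = answer

def exact_match(example, prediction, trace=None):
    # DSPy-style metric: compare the gold answer to the predicted one,
    # case-insensitively. Optimisers treat the returned bool as 0/1.
    return example.answer.strip().lower() == prediction.answer.strip().lower()

# With a configured LM, a module such as the QA predictor above, and a
# trainset of dspy.Example objects, an optimiser run looks roughly like:
#   optimiser = dspy.BootstrapFewShot(metric=exact_match)
#   compiled = optimiser.compile(predict, trainset=trainset)

print(exact_match(Example("New Delhi"), Example("new delhi ")))  # → True
```

The same metric signature works across optimisers, so you can start with `BootstrapFewShot` and switch to MIPROv2 without rewriting the scoring logic.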
Sources
- DSPy — docs — accessed 2026-04-20
- DSPy — GitHub — accessed 2026-04-20