Instructor
Instructor is a focused library that makes structured outputs from LLMs a one-line change. You patch an OpenAI-compatible client and pass a Pydantic response_model; Instructor handles function calling, JSON mode, validation, retries with re-asks on validation errors, and streaming of partial objects. It supports OpenAI, Anthropic, Gemini, Cohere, Ollama, and anything reachable through LiteLLM, and has ports in TypeScript, Go, Elixir, and other languages.
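The retry-with-re-ask loop described above can be sketched without any provider SDK. This is a toy illustration, not Instructor's internals: `stub_llm`, `validate_user`, and `create_with_retries` are all hypothetical names, and the stub "model" simply returns a bad payload until it sees the validation error fed back into the conversation.

```python
import json

def stub_llm(messages):
    """Stand-in for a chat model: returns a mistyped payload on the
    first call, then a corrected one once the error is in the prompt."""
    saw_error = any("Validation failed" in m["content"] for m in messages)
    if saw_error:
        return '{"name": "Ada", "age": 36}'
    return '{"name": "Ada", "age": "thirty-six"}'  # wrong type for age

def validate_user(payload):
    """Minimal stand-in for Pydantic validating User(name: str, age: int)."""
    data = json.loads(payload)
    if not isinstance(data.get("name"), str):
        raise ValueError("name must be a string")
    if not isinstance(data.get("age"), int):
        raise ValueError("age must be an integer")
    return data

def create_with_retries(messages, max_retries=3):
    """Re-ask loop: on validation failure, append the error and retry."""
    for _ in range(max_retries):
        raw = stub_llm(messages)
        try:
            return validate_user(raw)
        except ValueError as err:
            messages = messages + [
                {"role": "user",
                 "content": f"Validation failed: {err}. Fix the JSON."}
            ]
    raise RuntimeError("model never produced a valid object")

user = create_with_retries([{"role": "user", "content": "Ada is 36."}])
print(user)  # {'name': 'Ada', 'age': 36}
```

The first attempt fails validation, the error text is appended as a new user message, and the second attempt succeeds; Instructor automates exactly this correction loop on top of real provider clients.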
Framework facts
- Category: orchestration
- Language: Python
- License: MIT
- Repository: https://github.com/567-labs/instructor
Install
pip install instructor
Quickstart
import instructor
from anthropic import Anthropic
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_anthropic(Anthropic())
user = client.messages.create(
    model='claude-opus-4-7',
    response_model=User,
    max_tokens=512,
    messages=[{'role': 'user', 'content': 'Ada is 36.'}]
)
Alternatives
- Outlines — grammar-constrained generation
- Pydantic AI — full agent framework with the same philosophy
- Marvin — simpler decorators for common patterns
- Native structured outputs (OpenAI/Anthropic) — built into each provider's API, no extra dependency
Frequently asked questions
When should I use Instructor vs native structured outputs?
Native structured outputs (OpenAI Structured Outputs, Anthropic tool use) are great for single providers. Instructor unifies the API across providers, adds retry-on-validation-error, and handles streaming of partial objects — useful once you work with more than one model or care about resilience.
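Streaming of partial objects can also be approximated without the library. The sketch below is illustrative only, not Instructor's actual implementation: it accumulates raw JSON chunks and, after each chunk, tries to close the buffer into valid JSON so that a progressively more complete dict can be yielded.

```python
import json

def stream_partial(chunks):
    """Yield progressively more complete dicts as raw JSON chunks
    arrive, by trying to close the buffer into valid JSON each time."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # Try plausible closing suffixes until one parses.
        for suffix in ("", '"}', "}"):
            try:
                yield json.loads(buffer + suffix)
                break
            except json.JSONDecodeError:
                continue

# Simulated token chunks for {"name": "Ada", "age": 36}
chunks = ['{"name": "A', 'da", "age"', ': 36}']
for partial in stream_partial(chunks):
    print(partial)
# {'name': 'A'}
# {'name': 'Ada', 'age': 36}
```

The caller sees a usable object before the response is complete, which is what makes partial streaming handy for rendering UI as fields fill in.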
Does it work with Anthropic Claude?
Yes — via instructor.from_anthropic(). Instructor uses Anthropic tool use under the hood. It also supports Gemini, Cohere, Groq, Mistral, Ollama, and anything LiteLLM can reach.
Sources
- Instructor — docs — accessed 2026-04-20
- Instructor — GitHub — accessed 2026-04-20