Curiosity · Concept

Prompt Chaining

Asking an LLM to plan, write, critique, and revise an essay in a single giant prompt often produces mediocre results because the model has to juggle too many goals at once. Prompt chaining breaks the work into steps — draft outline → expand → critique → polish — with each step specialized and its output validated before the next runs. Chains are easier to debug (you can see which step failed), can branch or loop, and let you put cheap models on easy steps and strong ones on hard steps. Most agent frameworks (LangChain, LangGraph, LlamaIndex, DSPy) revolve around chaining.
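The outline → expand → critique → polish pipeline can be sketched in a few lines. This is a minimal illustration, not any framework's API: `call_llm` is a hypothetical stand-in for a real model client (OpenAI, Anthropic, etc.), stubbed here so the chain's structure is visible without an API key.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in a real client in practice."""
    return f"<model output for: {prompt[:40]}...>"

def outline(topic: str) -> str:
    return call_llm(f"Write a 5-point outline for an essay on: {topic}")

def expand(outline_text: str) -> str:
    return call_llm(f"Expand this outline into a full draft:\n{outline_text}")

def critique(draft: str) -> str:
    return call_llm(f"List the three biggest weaknesses of this draft:\n{draft}")

def polish(draft: str, notes: str) -> str:
    return call_llm(f"Revise the draft to fix these issues:\n{notes}\n\nDraft:\n{draft}")

def essay_chain(topic: str) -> str:
    o = outline(topic)    # step 1: plan
    d = expand(o)         # step 2: write
    c = critique(d)       # step 3: critique
    return polish(d, c)   # step 4: revise
```

Each step gets one focused prompt, and each step's output is an explicit value you can log, inspect, or validate before the next call runs.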

Quick reference

Proficiency
Beginner
Also known as
LLM chains, prompt pipelines
Prerequisites
prompting basics

Frequently asked questions

What is prompt chaining?

Prompt chaining is decomposing a complex task into a sequence of simpler prompts, where each step's output flows into the next. Instead of one monolithic prompt, you build a pipeline of focused steps.

When should I chain instead of using one big prompt?

When the task has clearly separable sub-steps, when different steps need different tools or models, when intermediate validation helps, or when a single prompt is unreliable. If one prompt works, ship one prompt — chains add latency and cost with every extra call.

What's the failure mode of long chains?

Error compounding — every step has a small failure rate, and they multiply down the chain. Mitigate with validation gates, retries, a smaller number of strong steps rather than many weak ones, and structured output between steps.
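The arithmetic is unforgiving: if each step succeeds 97% of the time, a ten-step chain succeeds only about 0.97^10 ≈ 74% of the time. A common mitigation is a per-step wrapper that parses structured output and retries on failure. The sketch below assumes a hypothetical stubbed `call_llm` that returns JSON; the wrapper pattern is the point, not the stub.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical model call; stubbed to return valid JSON here."""
    return '{"summary": "..."}'

def run_step(prompt: str, validate, retries: int = 2) -> dict:
    """Run one chain step behind a validation gate, retrying on failure."""
    last_err = None
    for _ in range(retries + 1):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)  # structured output between steps
            validate(parsed)          # validation gate: raise ValueError to reject
            return parsed
        except (json.JSONDecodeError, ValueError) as err:
            last_err = err            # re-ask the model instead of passing bad data on
    raise RuntimeError(f"step failed after {retries + 1} attempts: {last_err}")
```

Failing loudly after the retry budget is spent is deliberate: a chain that halts at the broken step is far easier to debug than one that quietly feeds garbage downstream.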

Chaining vs agents?

A chain is a fixed DAG of prompts. An agent decides dynamically which step to run next based on the current state. Chains are simpler and more predictable; agents are more flexible but harder to debug.
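The distinction shows up directly in control flow. In this sketch (again with a hypothetical stubbed `call_llm`), the chain's step order is fixed in code, while the agent's next step is decided by the model at run time:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model call, stubbed so the example runs offline."""
    return "summarize" if "next action" in prompt else "<output>"

# Chain: the sequence of steps is fixed when you write the code.
def chain(task: str) -> str:
    draft = call_llm(f"Draft a response to: {task}")
    return call_llm(f"Polish this draft: {draft}")

# Agent: the model picks the next step at run time, so control
# flow depends on model output and needs a step budget.
def agent(task: str, max_steps: int = 3) -> str:
    state = task
    for _ in range(max_steps):
        action = call_llm(f"State: {state!r}. Pick the next action (search/summarize/done).")
        if action == "done":
            break
        state = call_llm(f"Apply {action} to: {state}")
    return state
```

Note the `max_steps` cap in the agent: because the model controls the loop, you need an explicit budget that a fixed chain gets for free.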

Sources

  1. Anthropic — Chain complex prompts for stronger performance — accessed 2026-04-20
  2. LangChain — Conceptual guide to chains — accessed 2026-04-20