Curiosity · Concept

Planning in LLM Agents

Reactive agents just pick the next tool. Planning agents first draft a multi-step plan — decompose the goal, order sub-tasks, identify required tools — and then execute against it, typically with a replanning step after each observation. Common approaches are ReAct (interleaved reason-act traces), Plan-and-Execute (plan up front, execute step by step), LLMCompiler (parallel DAG planning), and Reflexion-style replanning after failure. Planning buys reliability on long-horizon tasks at the cost of upfront tokens and complexity.
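The plan-then-execute loop described above can be sketched in a few lines. This is a minimal illustration, not any library's API: the hardcoded `plan` function and the `TOOLS` registry stand in for an LLM planner and real tool implementations, and the replanning hook is left as a comment.

```python
from dataclasses import dataclass

@dataclass
class Step:
    tool: str              # which tool this sub-task needs
    args: dict             # arguments for the tool call
    done: bool = False
    result: object = None

def plan(goal: str) -> list[Step]:
    # Stand-in planner: a real agent would prompt an LLM to decompose
    # `goal` into ordered sub-tasks. Here we hardcode a two-step plan.
    return [
        Step("search", {"query": goal}),
        Step("summarize", {"source": "search"}),
    ]

# Stand-in tool registry: real tools would hit APIs, databases, etc.
TOOLS = {
    "search": lambda args: f"results for {args['query']}",
    "summarize": lambda args: f"summary of {args['source']}",
}

def execute(goal: str) -> list[Step]:
    steps = plan(goal)
    for step in steps:
        step.result = TOOLS[step.tool](step.args)
        step.done = True
        # Replanning hook: after each observation, a real agent would ask
        # the LLM whether the remaining steps still make sense and revise.
    return steps
```

The key structural point is the separation of the planner (which produces `Step` objects) from the executor (which runs them one at a time and observes results).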

Quick reference

Proficiency
Intermediate
Also known as
agent planning, task decomposition
Prerequisites
tool calling, chain-of-thought

Frequently asked questions

What is planning in agents?

Planning is the step where an agent decomposes a high-level goal into a concrete sequence or tree of sub-steps before acting, and revises that plan when new information (tool results, user feedback) contradicts its assumptions.

What are common planning strategies?

ReAct (reasoning and acting interleaved step by step), Plan-and-Execute (draft full plan up front, then execute), LLMCompiler (parallel DAG of tool calls), and tree search (Tree-of-Thoughts-style exploration with evaluator scoring).
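The LLMCompiler idea of a parallel DAG of tool calls can be sketched with a level-by-level scheduler: at each pass, every node whose dependencies are satisfied runs concurrently. The `DAG` here is a hypothetical toy (arithmetic lambdas standing in for tool calls), not LLMCompiler's actual representation.

```python
import concurrent.futures

# Hypothetical DAG: each node is a "tool call" with a list of dependencies.
# Node fns receive the results dict so dependents can read upstream outputs.
DAG = {
    "a": {"deps": [], "fn": lambda r: 1},
    "b": {"deps": [], "fn": lambda r: 2},
    "c": {"deps": ["a", "b"], "fn": lambda r: r["a"] + r["b"]},
}

def run_dag(dag: dict) -> dict:
    results, remaining = {}, set(dag)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        while remaining:
            # All nodes whose dependencies are complete can run in parallel.
            ready = [n for n in remaining
                     if all(d in results for d in dag[n]["deps"])]
            futures = {n: pool.submit(dag[n]["fn"], results) for n in ready}
            for n, f in futures.items():
                results[n] = f.result()
            remaining -= set(ready)
    return results
```

Independent tool calls ("a" and "b") run in the same batch, which is the latency win over executing a plan strictly in sequence.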

When is a planner worth the overhead?

Long-horizon tasks (5+ steps), tasks with dependencies between tool calls, and anything where a wrong early step wastes many downstream steps. Short, single-lookup tasks don't need a planner — it just adds latency.

What breaks agent plans most often?

Hallucinated sub-steps (the plan assumes a tool or field that doesn't exist), no replanning after surprising tool output, and monster plans the executor can't keep track of. Keep plans short, validate each step, and replan aggressively.
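The "validate each step" advice above amounts to checking every planned tool call against the tools that actually exist before executing anything. A minimal sketch, with a hypothetical `KNOWN_TOOLS` registry and `book_flight` as the hallucinated step:

```python
KNOWN_TOOLS = {"search", "summarize"}

def validate(plan: list[dict]) -> list[str]:
    # Flag hallucinated sub-steps: any step naming a tool the agent
    # doesn't actually have. Non-empty result means "replan".
    return [step["tool"] for step in plan if step["tool"] not in KNOWN_TOOLS]

draft = [{"tool": "search"}, {"tool": "book_flight"}]
bad = validate(draft)  # ["book_flight"] -> send the plan back to the planner
```

The same check extends naturally to arguments (does the tool accept these fields?) and to post-execution validation (did the tool return what the next step assumes?).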

Sources

  1. Yao et al. — ReAct: Synergizing Reasoning and Acting in Language Models — accessed 2026-04-20
  2. Kim et al. — An LLM Compiler for Parallel Function Calling — accessed 2026-04-20