Capability · Comparison

Chain-of-Thought vs ReAct Pattern

Chain-of-Thought (CoT) and ReAct are the two foundational prompting patterns for getting LLMs to think through problems. CoT is the classic 'let's think step by step' approach — pure reasoning in a single call. ReAct (Reason + Act) interleaves reasoning steps with tool calls, producing agents that can actually interact with the world. Modern reasoning models (o1, o3, R1) internalize CoT; ReAct is still the dominant pattern for agent frameworks.

Side-by-side

| Criterion | Chain-of-Thought (CoT) | ReAct Pattern |
| --- | --- | --- |
| Turns per problem | 1 | Multi-turn loop |
| Uses external tools | No | Yes — core of the pattern |
| Strength | Pure reasoning, math, logic | Agents that need to fetch, compute, act |
| Latency | Low — one call | High — N calls + observation time |
| Token cost | Moderate (long outputs) | High (many round-trips) |
| Observability | Reasoning visible in the output | Each Thought/Action/Observation step logged |
| Built into modern models | Yes — reasoning models internalize it | No — framework-level pattern |
| Framework support | Any | LangChain, LangGraph, LlamaIndex, AutoGen |
| Error recovery | No — a wrong step poisons the final answer | Yes — a bad observation triggers a re-think |

Verdict

CoT and ReAct are complementary, not alternatives. CoT is about making reasoning visible in a single call — a technique any model supports. ReAct is about structuring an agent loop with tools — a multi-turn architectural pattern. For a math question, use CoT (or a reasoning model). For a 'look up the weather and book a flight' task, use ReAct. Reasoning models like o3 and R1 essentially bake CoT in, but you still need ReAct to give them tools.

When to choose each

Choose Chain-of-Thought (CoT) if…

  • The problem is pure reasoning — math, logic, deduction.
  • No external information is needed (model already knows enough).
  • You want a single call with minimal latency.
  • You're using a reasoning model (o3, R1) that internalizes CoT.
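In its simplest form, CoT is just a prompt-construction step. A minimal sketch, assuming a hypothetical `build_cot_prompt` helper (the elicitation phrase is the classic one; everything else here is illustrative, not any library's API):

```python
# Hypothetical helper: wrap a question so the model is asked to show
# its reasoning before committing to a final answer (classic CoT).
def build_cot_prompt(question: str) -> str:
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )

prompt = build_cot_prompt("A train travels 60 km in 45 minutes. Speed in km/h?")
print(prompt)
```

The returned string is then sent as a single model call; with a reasoning model you would skip the elicitation phrase entirely.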

Choose ReAct Pattern if…

  • The task requires looking things up or calling external APIs.
  • The problem decomposes into concrete action steps.
  • You need observability and the ability to recover from bad steps.
  • You're building an agent with LangGraph, LangChain, or CrewAI.
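The loop behind those bullets can be sketched in a few lines. This is a toy, assuming a stubbed `run_model` in place of a real LLM call and a made-up `Thought:`/`Action:`/`Observation:` text format; real frameworks use structured tool calling, but the control flow is the same:

```python
import re

# One toy tool; real agents register many (search, calculators, APIs).
TOOLS = {"add": lambda a, b: str(int(a) + int(b))}

def run_model(transcript: str) -> str:
    """Stub standing in for an LLM call: picks the next step from the transcript."""
    if "Observation:" not in transcript:
        return "Thought: I should add the numbers.\nAction: add(2, 3)"
    return "Thought: I have the result.\nFinal Answer: 5"

def react_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = run_model(transcript)       # Reason
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\((.*?)\)", step)
        if match:                          # Act, then observe
            tool, raw_args = match.group(1), match.group(2)
            args = [a.strip() for a in raw_args.split(",")]
            transcript += f"\nObservation: {TOOLS[tool](*args)}"
    return "no answer within step budget"

print(react_loop("What is 2 + 3?"))  # prints "5"
```

Note the error-recovery property from the table: because each `Observation` is fed back into the transcript, a bad tool result can prompt a corrective `Thought` on the next turn instead of silently poisoning the answer.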

Frequently asked questions

Is ReAct obsolete now that reasoning models exist?

No — reasoning models replace the internal reasoning part of CoT, but you still need ReAct (or similar) to give them tools. Every modern agent framework uses a ReAct-shaped loop.

Do I still need to prompt 'let's think step by step'?

Not really. Modern models (including non-reasoning ones) reason by default when the problem demands it. Explicit CoT prompts still help on hard problems with weaker models.

What's the simplest ReAct setup?

A system prompt that tells the model to output a `Thought` line and an `Action(tool, args)` line, then stop. Your framework runs the tool, appends the `Observation`, and loops until the model emits a final answer. LangGraph formalizes this loop.
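The framework-side step is parsing that `Action(tool, args)` line. A minimal sketch, assuming a plain-text output format (real frameworks such as LangGraph rely on structured tool calls rather than regex parsing):

```python
import re

# Matches e.g. "Action: get_weather('Paris')" -> ("get_weather", "'Paris'")
ACTION_RE = re.compile(r"Action:\s*(\w+)\((.*)\)")

def parse_action(model_output: str):
    """Return (tool_name, [args]) from the last Action line, or None."""
    matches = ACTION_RE.findall(model_output)
    if not matches:
        return None
    tool, raw_args = matches[-1]
    args = [a.strip().strip("'\"") for a in raw_args.split(",")] if raw_args else []
    return tool, args

out = "Thought: I need the weather first.\nAction: get_weather('Paris')"
print(parse_action(out))  # ('get_weather', ['Paris'])
```

If parsing fails, the usual recovery is to append an error `Observation` and let the model retry, which is the same loop mechanism that handles bad tool results.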

Sources

  1. ReAct paper (Yao et al., 2022) — accessed 2026-04-20
  2. Chain-of-Thought paper (Wei et al., 2022) — accessed 2026-04-20