Zero-Shot Prompting
Zero-shot is the simplest prompting style: describe the task and let the model do it, with no demonstrations. It was mostly impractical before instruction tuning and RLHF, but modern instruction-tuned models (Claude, GPT-4, Gemini) handle an enormous range of tasks zero-shot, and many even benefit from a simple 'Let's think step by step' cue (zero-shot chain-of-thought). Reach for few-shot only when zero-shot misreads the format or the task is genuinely rare.
Quick reference
- Proficiency: Beginner
- Also known as: zero-shot, instruction-only prompting
- Prerequisites: prompting basics
Frequently asked questions
What is zero-shot prompting?
Zero-shot prompting is giving the model a task instruction with no worked examples — it relies entirely on the model's pretrained knowledge and instruction-tuning to figure out what you want.
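In code, a zero-shot prompt is just an instruction plus the input, with no demonstrations. A minimal sketch (the helper name, task wording, and prompt layout are illustrative, not from any particular API):

```python
def build_zero_shot_prompt(task: str, text: str) -> str:
    """Compose an instruction-only prompt: task description plus the input.

    No worked examples are included -- the model must infer the expected
    output from the instruction alone.
    """
    return f"{task}\n\nInput:\n{text}\n\nAnswer:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the input as positive, negative, or neutral. "
    "Reply with a single word.",
    "The battery died after two hours.",
)
print(prompt)
```

The resulting string would be sent as-is to any chat or completion endpoint; the model's instruction tuning does the rest.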
When does zero-shot fail?
When the task format is unusual, the output schema is strict, or the domain jargon is rare. Symptoms include wrong field names, missing parts, or the model paraphrasing the instruction instead of executing it. The fix is to add one to three worked examples (few-shot) or an explicit JSON schema to the prompt.
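The few-shot upgrade is mechanical: prepend a handful of (input, output) demonstrations to the same instruction. A sketch, with hypothetical helper names and made-up example labels:

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """Prepend worked (input, answer) demonstrations to the instruction,
    then append the real input in the same layout so the model can copy
    the demonstrated format."""
    demos = "\n\n".join(f"Input:\n{inp}\nAnswer: {out}" for inp, out in examples)
    return f"{task}\n\n{demos}\n\nInput:\n{text}\nAnswer:"

# Illustrative demonstrations pinning down an unusual label ("mixed")
# that a zero-shot prompt might not produce.
examples = [
    ("Great keyboard, terrible trackpad.", "mixed"),
    ("Works exactly as advertised.", "positive"),
]
prompt = build_few_shot_prompt(
    "Label the sentiment as positive, negative, neutral, or mixed.",
    examples,
    "The screen flickers constantly.",
)
print(prompt)
```

Because the demonstrations show the exact output format, the model tends to imitate it rather than paraphrase the instruction.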
What is zero-shot chain-of-thought?
Appending 'Let's think step by step' to a zero-shot prompt. Kojima et al. (2022) showed it meaningfully improves math and reasoning accuracy on many benchmarks with no examples and no fine-tuning.
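The technique is literally a string append. A minimal sketch (the function name is illustrative; the cue text is the one from Kojima et al., 2022):

```python
COT_CUE = "Let's think step by step."

def add_zero_shot_cot(prompt: str) -> str:
    """Turn a plain zero-shot prompt into a zero-shot chain-of-thought
    prompt by appending the reasoning cue -- still no examples."""
    return f"{prompt}\n\n{COT_CUE}"

prompt = add_zero_shot_cot(
    "I have 3 apples and buy 2 bags of 4 apples each. How many apples do I have?"
)
print(prompt)
```

In practice a second call (or a follow-up instruction like "Therefore, the answer is") is often used to extract the final answer from the generated reasoning.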
Zero-shot vs few-shot — which should I start with?
Start zero-shot on any modern instruction-tuned model. It's shorter, cheaper, and often works. Add examples only when you see specific format or correctness failures that a demonstration would fix.
Sources
- Kojima et al. — Large Language Models are Zero-Shot Reasoners — accessed 2026-04-20
- OpenAI — Prompt engineering guide — accessed 2026-04-20