Curiosity · Concept
Few-Shot Prompting
Few-shot prompting was introduced as the headline capability of GPT-3 in the 2020 paper 'Language Models are Few-Shot Learners.' By showing two to eight example pairs before the actual input, you teach the model a task at inference time — classification, translation, extraction, style transfer — with zero gradient updates. Quality is sensitive to example selection, ordering, and label distribution; retrieving similar examples dynamically (KNN-style) often beats a fixed static prompt. It remains a cheap, fast alternative to fine-tuning for many structured tasks.
Quick reference
- Proficiency
- Beginner
- Also known as
- in-context learning, k-shot prompting
- Prerequisites
- prompting basics
Frequently asked questions
What is few-shot prompting?
Few-shot prompting is a technique where you include a small number of worked examples in the prompt before the real input. The model sees the pattern (input → output) and imitates it for your actual query, without any fine-tuning.
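A minimal sketch of what this looks like in practice, assuming a toy sentiment-classification task (the example pairs and the Review/Sentiment template are illustrative, not from any particular library):

```python
# Illustrative few-shot prompt assembly: worked (input -> output) pairs
# come first, then the real query using the same template with the
# label left blank for the model to complete.
EXAMPLES = [
    ("The battery died after a week.", "negative"),
    ("Shipping was fast and the fit is perfect.", "positive"),
    ("It works, nothing special.", "neutral"),
]

def build_prompt(query: str) -> str:
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in EXAMPLES
    )
    # The actual input reuses the exact template of the examples,
    # so the model's most likely continuation is a label.
    return f"{shots}\n\nReview: {query}\nSentiment:"

prompt = build_prompt("Arrived broken and support never replied.")
print(prompt)
```

The key detail is consistency: the query uses the same template as the examples, so completing the pattern is the model's path of least resistance.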
How many examples should I use?
Usually 2-8. More examples help on harder tasks but eat the context window; with very strong modern models you often get most of the lift from 1-3 carefully chosen examples. Diminishing returns set in fast.
How does few-shot prompting compare to fine-tuning?
Few-shot is instant, reversible, and costs only inference tokens. Fine-tuning permanently updates weights — better for very narrow domains, huge training sets, or when you need to shrink the prompt. Start with few-shot and only fine-tune when you hit its ceiling.
Why does example ordering matter?
LLMs are sensitive to recency and label balance. Putting all positive examples first can bias the output; matching the distribution you expect at inference time helps. Dynamic KNN example selection — picking examples semantically similar to the current input — typically wins over a fixed order.
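The KNN idea above can be sketched in a few lines. A real system would use an embedding model; here a toy bag-of-words vector and cosine similarity stand in for it (an assumption made for brevity), and the example pool is invented:

```python
# Hedged sketch of KNN example selection: rank a pool of stored
# (input, label) examples by similarity to the incoming query and
# keep the top k as the few-shot examples for this request.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag of lowercase tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

POOL = [
    ("refund took three weeks", "negative"),
    ("the manual is clear and helpful", "positive"),
    ("refund arrived quickly, great service", "positive"),
    ("screen cracked on day one", "negative"),
]

def select_examples(query: str, k: int = 2):
    q = embed(query)
    ranked = sorted(POOL, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
    return ranked[:k]

chosen = select_examples("still waiting on my refund")
print(chosen)  # the refund-related examples rank highest
```

The selected pairs would then be formatted into the prompt exactly as static examples would be, so only the retrieval step changes per request.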
Sources
- Brown et al. — Language Models are Few-Shot Learners (GPT-3) — accessed 2026-04-20
- Anthropic — Use examples (multishot prompting) to guide Claude — accessed 2026-04-20