Mistral Codestral 22B
Codestral 22B is Mistral AI's first code-specialised LLM. It covers more than 80 programming languages, supports fill-in-the-middle (FIM) for IDE autocomplete, and ships with open weights under Mistral's non-production licence plus a hosted endpoint on la Plateforme. It is a popular self-hosted base for coding copilots.
Model specs
- Vendor: Mistral AI
- Family: Codestral
- Released: 2024-05
- Context window: 32,768 tokens
- Modalities: text, code
- Input price: $0.20 / M tokens
- Output price: $0.60 / M tokens
- Pricing as of: 2026-04-20
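At the listed prices, per-request cost is simple arithmetic. A minimal sketch, using the prices above (as of 2026-04-20; check la Plateforme for current rates):

```python
# Rough cost estimate for Codestral on la Plateforme, using the
# card's listed prices: $0.20 per million input tokens and
# $0.60 per million output tokens.

INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a repo-scale prompt filling most of the 32k context,
# returning a 1,000-token completion.
print(f"${request_cost(30_000, 1_000):.4f}")  # → $0.0066
```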
Strengths
- Open weights — self-hostable on a single 48 GB GPU
- Fill-in-the-middle (FIM) support for IDE autocomplete
- 80+ programming languages, including long-tail languages
- 32k-token context for repo-scale prompts
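To make the FIM strength concrete, here is a sketch of a fill-in-the-middle request against the hosted Codestral endpoint. The URL, field names, and `codestral-latest` model id follow Mistral's public FIM API as documented around release; treat them as assumptions and verify against the current docs before relying on them.

```python
# Hedged sketch of a Codestral FIM completion request. The model
# fills in the code between `prompt` (the prefix) and `suffix`.
import json
import os
import urllib.request

def build_fim_payload(prefix: str, suffix: str,
                      model: str = "codestral-latest",
                      max_tokens: int = 64) -> dict:
    """Assemble the JSON body for a FIM completion request."""
    return {
        "model": model,
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": max_tokens,
    }

payload = build_fim_payload(
    prefix="def fib(n: int) -> int:\n    ",
    suffix="\n    return fib(n - 1) + fib(n - 2)",
)

# Only hit the network when an API key is configured.
if os.environ.get("MISTRAL_API_KEY"):
    req = urllib.request.Request(
        "https://codestral.mistral.ai/v1/fim/completions",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        # Response shape assumed to mirror chat completions.
        print(body["choices"][0]["message"]["content"])
```

This prefix/suffix split is exactly what IDE autocomplete needs: the editor sends the code before and after the cursor, and the model returns the middle.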
Limitations
- Mistral Non-Production License restricts commercial production use without a separate agreement
- Trails Claude Opus 4.7 and GPT-5 on agentic SWE-bench tasks
- Tool-use fine-tuning is lighter than in frontier chat models
- Requires 48 GB VRAM in FP16 — smaller variants (Codestral-Mamba) help
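The VRAM figures follow from arithmetic over the parameter count. A quick lower-bound sketch for the weights alone (activations and KV cache add more on top):

```python
# Back-of-envelope weight memory for a 22B-parameter model at
# common precisions. This is a lower bound: runtime overhead,
# activations, and the KV cache are not included.
PARAMS = 22e9

def weight_gib(bits_per_param: float) -> float:
    """Memory for the weights alone, in GiB."""
    return PARAMS * bits_per_param / 8 / 2**30

for label, bits in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label}: {weight_gib(bits):.1f} GiB")
# FP16 ≈ 41 GiB (fits a 48 GB card), 4-bit ≈ 10 GiB (fits 24 GB)
```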
Use cases
- IDE autocomplete and copilots (Continue, Cody, JetBrains plugins)
- Self-hosted code-review bots
- Refactoring and migration agents
- On-prem code LLM for regulated teams
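For the copilot use case, wiring Codestral into an editor extension is mostly configuration. A hedged sketch of a Continue `config.json` fragment; the field names and `mistral` provider id reflect Continue's documented Codestral setup, but check the current Continue docs before copying:

```json
{
  "models": [
    {
      "title": "Codestral",
      "provider": "mistral",
      "model": "codestral-latest",
      "apiKey": "<YOUR_CODESTRAL_KEY>"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral autocomplete",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "<YOUR_CODESTRAL_KEY>"
  }
}
```

The same model serves both chat and tab autocomplete here; the autocomplete path is where Codestral's FIM support does the work.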
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| HumanEval (Python) | ≈81% | 2024-05 |
| MBPP | ≈78% | 2024-05 |
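The scores above are pass@1 figures: the fraction of problems where a sampled completion passes the unit tests. When scoring with multiple samples per problem, the standard unbiased estimator from the HumanEval paper is:

```python
# Unbiased pass@k estimator: given n samples per problem with c
# correct, estimate the probability that at least one of k drawn
# samples is correct.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the raw success rate:
print(pass_at_k(n=10, c=8, k=1))  # → 0.8
```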
Frequently asked questions
What is Codestral 22B?
Codestral 22B is Mistral AI's first code-specialised open-weight LLM. It covers more than 80 programming languages, supports fill-in-the-middle (FIM) for IDE autocomplete, and has a 32k-token context for repo-scale prompting.
Can I use Codestral commercially?
Codestral's open weights ship under the Mistral Non-Production License, which restricts commercial production use. For production, either take a commercial licence from Mistral or use the hosted Codestral endpoint on la Plateforme.
What hardware does Codestral need?
In FP16, Codestral 22B fits on a single 48 GB GPU. With 4-bit or 8-bit quantisation it runs on 24 GB cards, useful for workstation-grade copilot deployments.
How does Codestral compare with Claude Opus 4.7 for coding?
Claude Opus 4.7 leads on agentic software-engineering tasks and long-horizon coding. Codestral 22B is a strong self-hosted base for autocomplete and on-prem tooling where data residency or cost rule out a frontier API.
Sources
- Mistral — Codestral announcement — accessed 2026-04-20
- Hugging Face — mistralai/Codestral-22B-v0.1 — accessed 2026-04-20