Codestral
Codestral is Mistral AI's code-specialized open-weights family, released across 2024–2025. The latest refresh, Codestral 25.01, delivers strong fill-in-the-middle completion, coverage of 80+ programming languages, and competitive HumanEval scores. Weights are public under the Mistral Non-Production License (MNPL), with a commercial path via the API.
Model specs
- Vendor
- Mistral AI
- Family
- Codestral
- Released
- 2025-01
- Context window
- 256,000 tokens
- Modalities
- text, code
- Input price
- $0.30/M tokens
- Output price
- $0.90/M tokens
- Pricing as of
- 2026-04-20
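At the listed rates, per-request cost is simple arithmetic. A minimal sketch using the prices above (the token counts in the example are hypothetical):

```python
# Estimate Codestral API cost from the listed per-million-token rates.
INPUT_RATE = 0.30   # USD per 1M input tokens (pricing as of 2026-04-20)
OUTPUT_RATE = 0.90  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# Example: a 100K-token repository prompt with a 2K-token completion.
print(round(estimate_cost(100_000, 2_000), 4))  # 0.0318
```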
Strengths
- Open weights released for research and non-production use
- Industry-leading fill-in-the-middle for IDE completion
- 256K context window — works at repository scale
- Broad language coverage — 80+ programming languages
Limitations
- Mistral Non-Production License — not an open-source license; commercial self-hosting is not permitted
- Commercial deployment requires Mistral's paid API or a separate license
- Behind DeepSeek Coder V2 on some benchmarks despite newer release
- Smaller than Qwen 2.5 Coder 32B but only marginally faster per-token
Use cases
- Self-hosted IDE extensions with low-latency completion
- Repository-scale refactoring with 256K context
- Multi-language code generation and review pipelines
- Fine-tuning base for language-specific coding assistants
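For repository-scale work, the practical step is deciding which files fit the 256K window. A minimal greedy-packing sketch, using a rough chars/4 token heuristic (a real setup would count with the model's own tokenizer; the budget split is an assumption):

```python
# Greedily pack repository files into a 256K-token context budget.
# Token counts are approximated as len(text) // 4, a rough heuristic.
CONTEXT_BUDGET = 256_000
RESERVED_FOR_OUTPUT = 8_000  # leave headroom for the completion

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def pack_files(files: dict[str, str]) -> list[str]:
    """Return the file names that fit the input budget, in the given order."""
    budget = CONTEXT_BUDGET - RESERVED_FOR_OUTPUT
    chosen, used = [], 0
    for name, text in files.items():
        cost = approx_tokens(text)
        if used + cost > budget:
            break  # stop at the first file that would overflow the window
        chosen.append(name)
        used += cost
    return chosen
```

Ordering files by relevance (e.g. call-graph distance from the file being edited) before packing tends to matter more than the exact token heuristic.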
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| HumanEval | ≈86% | 2025-01 |
| MBPP | ≈81% | 2025-01 |
| RepoBench | ≈38% | 2025-01 |
Frequently asked questions
What license is Codestral released under?
The Mistral Non-Production License (MNPL). Weights are downloadable for research, personal, and evaluation use, but production commercial use requires Mistral's paid API or a separate commercial license.
How does Codestral compare to DeepSeek Coder V2?
DeepSeek Coder V2 often scores higher on HumanEval and MBPP and is MIT-licensed. Codestral has better fill-in-the-middle behavior and longer context in the 25.01 release — pick based on license and use-case fit.
Is Codestral good for IDE completion?
Yes — fill-in-the-middle training makes it one of the strongest open models for IDE-style mid-line completion. Pair with a language server for best results.
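In practice an IDE extension sends the code before and after the cursor as separate fields. A sketch of building such a request body for Mistral's FIM completion endpoint (field names follow Mistral's public FIM API; treat the exact model identifier and parameter defaults here as assumptions):

```python
import json

def build_fim_request(prefix: str, suffix: str,
                      model: str = "codestral-latest") -> dict:
    """Build a JSON body for a fill-in-the-middle completion: the model
    fills the gap between `prefix` (code before the cursor) and
    `suffix` (code after the cursor)."""
    return {
        "model": model,
        "prompt": prefix,      # code before the cursor
        "suffix": suffix,      # code after the cursor
        "max_tokens": 64,      # mid-line completions are short
        "temperature": 0.0,    # deterministic output for IDE use
    }

body = build_fim_request("def add(a, b):\n    return ", "\n\nprint(add(2, 3))")
print(json.dumps(body, indent=2))
```

The body would be POSTed with an API key to Mistral's FIM completions endpoint; consult Mistral's API reference for the current path and authentication details.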
Sources
- Mistral AI — Codestral 25.01 — accessed 2026-04-20
- Hugging Face — mistralai/Codestral-22B-v0.1 — accessed 2026-04-20