Curiosity · AI Model
Mathstral 7B
Mathstral 7B, released in July 2024, is Mistral AI's STEM-focused open-weights model. Built in partnership with Project Numina, it fine-tunes Mistral 7B on large mathematical reasoning corpora and achieves competitive scores on MATH and AMC benchmarks — enough to be a useful classroom math tutor running on a single consumer GPU.
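To give a concrete flavour of the tutoring use, a front end typically wraps each problem in the Mistral-style `[INST]` instruct template (which Mathstral inherits from Mistral 7B) before sending it to the model. The helper below is a minimal, hypothetical sketch of that step; the function name and instruction wording are illustrative, not part of any official API:

```python
def build_math_prompt(problem: str) -> str:
    """Wrap a math problem in the Mistral-style [INST] instruct template.

    Asking explicitly for step-by-step work plays to Mathstral's
    chain-of-thought fine-tuning. The exact instruction text here is an
    assumption; only the <s>[INST] ... [/INST] framing follows the
    documented Mistral chat format.
    """
    instruction = (
        "Solve the following problem step by step, "
        "then state the final answer.\n\n" + problem
    )
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_math_prompt("What is the sum of the first 100 positive integers?")
```

The resulting string can be passed to any runtime that serves the model, such as a local `mistralai/Mathstral-7B-v0.1` checkpoint.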
Model specs
- Vendor: Mistral AI
- Family: Mistral 7B
- Released: 2024-07
- Context window: 32,768 tokens
- Modalities: text
Strengths
- Open weights under Apache 2.0
- Strong math reasoning for a 7B model
- Training data curated with Project Numina's math specialists
Limitations
- Not a general chat model — narrow specialisation
- Trails o1-class reasoning models by a wide margin on the hardest problems
- Limited multilingual math coverage
Use cases
- Classroom math tutors on modest hardware
- Olympiad-style problem practice
- Fine-tuning base for custom STEM reasoning models
- Low-cost math step-checker in tiered pipelines
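The tiered step-checker pattern in the last bullet can be sketched as follows. The two checker functions are toy stand-ins (a real deployment would call a locally hosted Mathstral and a frontier model API); all names and rules here are hypothetical, shown only to illustrate the escalation logic:

```python
from typing import Callable

def check_step(step: str,
               cheap_check: Callable[[str], bool],
               frontier_check: Callable[[str], bool]) -> bool:
    """Tiered verification: trust the cheap local checker when it accepts
    a step, and escalate to the expensive frontier checker only when it
    rejects, so most traffic never leaves the cheap tier."""
    if cheap_check(step):
        return True              # cheap tier accepted: done, no escalation
    return frontier_check(step)  # cheap tier rejected: get a second opinion

# Stub checkers standing in for real model calls (hypothetical toy rules):
calls = {"cheap": 0, "frontier": 0}

def cheap(step: str) -> bool:
    calls["cheap"] += 1
    return "2 + 2 = 4" in step   # toy rule: only recognises one known fact

def frontier(step: str) -> bool:
    calls["frontier"] += 1
    return "=" in step           # toy rule: accepts anything equation-shaped

ok_easy = check_step("2 + 2 = 4", cheap, frontier)        # handled by cheap tier
ok_hard = check_step("3^2 + 4^2 = 5^2", cheap, frontier)  # escalated to frontier
```

The economics work because a 7B checker on a consumer GPU handles the bulk of routine steps, while only contested steps pay for a frontier-model call.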
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| MATH | ≈56% | 2024-07 |
| AMC 2023 | strong for 7B class | 2024-07 |
| GSM8K | ≈84% | 2024-07 |
Frequently asked questions
What is Mathstral 7B?
Mathstral 7B is Mistral AI's open-weights, 7-billion-parameter math specialist model, fine-tuned in partnership with Project Numina for chain-of-thought math reasoning.
What benchmarks does Mathstral target?
Mathstral reports on MATH, GSM8K, and AMC-style competition problems, with strong results for the 7B parameter class.
How does Mathstral 7B compare to o1?
o1 is in a different league for frontier math, but Mathstral 7B offers useful math tutoring on commodity hardware where cloud reasoning models are too costly or too slow.
Sources
- Mistral — Mathstral launch — accessed 2026-04-20
- Hugging Face — mistralai/Mathstral-7B-v0.1 — accessed 2026-04-20