DeepSeek-Prover V2
DeepSeek-Prover V2 is DeepSeek's second-generation formal theorem prover for Lean 4, released in April 2025. Building on DeepSeek-Math and DeepSeek-Prover V1, it couples an RL-trained proof generator with a Lean verifier in a self-play loop, warm-started by supervised training on millions of auto-formalised proofs. At release, V2 set a new state of the art on the MiniF2F competition-math benchmark, and it is notable as a fully open alternative to DeepMind's AlphaProof, with weights released on Hugging Face.
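For intuition, the prover's input is a Lean 4 theorem statement and its output is a machine-checkable proof. A toy statement of this shape (our own illustration, not drawn from MiniF2F, and assuming Mathlib is available) might look like:

```lean
import Mathlib

-- A toy competition-style goal of the kind a Lean 4 prover is asked to close.
-- Real MiniF2F problems are considerably harder than this.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  -- `positivity` is a Mathlib tactic for discharging nonnegativity goals;
  -- a model like DeepSeek-Prover V2 must generate such a tactic script itself.
  positivity
```

The Lean compiler acts as the ground-truth verifier: a candidate proof counts as correct only if the file compiles without errors or `sorry`.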
Model specs
- Vendor
- DeepSeek
- Family
- DeepSeek-Prover
- Released
- 2025-04
- Context window
- 32,768 tokens
- Modalities
- text, code
Strengths
- Fully open — weights, code, and training recipes
- SOTA on MiniF2F among open systems
- Strong self-play + RL training pipeline
- Integrates cleanly with Lean 4 tooling
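The generate-and-verify pattern behind these strengths can be sketched in a few lines. This is a minimal illustration of best-of-n sampling against a Lean checker, not DeepSeek's actual harness; the function names (`check_proof`, `best_of_n`) and the checker invocation are our assumptions, and the checker is injectable so the loop works with any toolchain setup.

```python
import subprocess
import tempfile
from pathlib import Path


def check_proof(lean_source: str, run=None) -> bool:
    """Write a candidate Lean 4 proof to a file and compile it.

    `run` maps a file path to a compiler exit code. By default it shells
    out to the `lean` binary (assumed on PATH); it is injectable so the
    loop can be exercised without a Lean toolchain installed.
    """
    if run is None:
        run = lambda path: subprocess.run(
            ["lean", str(path)], capture_output=True
        ).returncode
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "Candidate.lean"
        path.write_text(lean_source)
        return run(path) == 0  # exit code 0 means the proof type-checks


def best_of_n(statement: str, sample, n: int = 8, run=None):
    """Sample up to n candidate proofs; return the first that compiles."""
    for _ in range(n):
        candidate = sample(statement)  # e.g. a call to the prover model
        if check_proof(candidate, run=run):
            return candidate
    return None  # no verified proof found within the sampling budget
```

The key design point is that verification is exact: the Lean compiler, not the model, decides success, which is what makes RL and self-play over such a loop well-grounded.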
Limitations
- Proof search is compute-heavy
- Behind AlphaProof on IMO-level problems
- Lean-only output — no natural-language explanations
- Requires Lean 4 expertise to use effectively
Use cases
- Open research on theorem proving
- Auto-formalisation of informal math proofs
- Competition math (Putnam / IMO-style) solvers
- Baseline for RL-over-Lean methods
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| MiniF2F (test) | new SOTA at release | 2025-04 |
| PutnamBench | best open system | 2025-04 |
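Theorem-proving benchmarks like MiniF2F and PutnamBench are typically reported as pass@k: the probability that at least one of k sampled proofs verifies. A common way to estimate it is the unbiased estimator from Chen et al. (2021), sketched here for reference:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n sampled proofs per problem,
    of which c verified, estimate the probability that at least one
    of k draws (without replacement) succeeds."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n = 2 samples of which c = 1 verified, pass@1 is 0.5.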
Frequently asked questions
What is DeepSeek-Prover V2?
An open-weights formal theorem prover for Lean 4 from DeepSeek, trained with RL and self-play on auto-formalised proof data.
How does it compare to AlphaProof?
AlphaProof is stronger on IMO-level problems but is closed and unreleased. DeepSeek-Prover V2 is the leading open alternative and, at release, held the state of the art on MiniF2F among publicly available systems.
What licence is DeepSeek-Prover V2 under?
Weights are released under the DeepSeek community licence; code is open-source on GitHub.
Sources
- DeepSeek-Prover V2 on Hugging Face — accessed 2026-04-20
- DeepSeek-Prover V2 paper (arXiv) — accessed 2026-04-20