Skywork-o1-Open
Skywork-o1-Open is Kunlun Tech's open-weight reasoning LLM family, released in late 2024. It comes in 8B and 32B sizes and was one of the first large-scale open reproductions of OpenAI's o1 reasoning paradigm, combining rejection-sampling fine-tuning with process reward models to produce explicit chain-of-thought traces.
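The recipe described above can be sketched in miniature. This is a minimal illustration, not Skywork's actual pipeline: `generate_candidates` and `prm_score` are hypothetical stand-ins for the policy model and the process reward model, and the aggregation rule (scoring a trace by its weakest step) is one common convention, not a documented Skywork detail.

```python
import random

def generate_candidates(question, n=8, seed=0):
    """Stand-in for sampling n chain-of-thought traces from the policy model.
    A real pipeline would call the LLM here; we fabricate short traces."""
    rng = random.Random(seed)
    return [[f"step {i + 1} (variant {rng.randint(0, 9)})" for i in range(3)]
            for _ in range(n)]

def prm_score(trace):
    """Stand-in for a process reward model: score each reasoning step,
    then aggregate. Here the per-step score is a dummy function of length."""
    step_scores = [1.0 / (1 + len(step) % 5) for step in trace]
    return min(step_scores)  # a trace is only as strong as its weakest step

def rejection_sample(question, n=8, keep=2):
    """Rejection sampling: keep only the highest-scoring traces.
    The survivors become fine-tuning targets for the reasoning model."""
    candidates = generate_candidates(question, n)
    ranked = sorted(candidates, key=prm_score, reverse=True)
    return ranked[:keep]

best = rejection_sample("What is 12 * 13?")
```

The key design point is that the reward model scores intermediate steps rather than only the final answer, so traces with a plausible conclusion but a broken middle step get filtered out.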
Model specs
- Vendor: Skywork
- Family: Skywork-o1
- Released: 2024-11
- Context window: 8,192 tokens
- Modalities: text
Strengths
- Strong math scores for an open-weight 32B model
- Comes in two sizes for different compute budgets
- Process reward model training recipe is publicly documented
Limitations
- Smaller max context than frontier models
- Surpassed by DeepSeek-R1 on hardest competition math
- English-first training; Chinese reasoning is solid but not class-leading
Use cases
- Open-weight o1 reproductions for research
- Math tutoring apps needing visible chain-of-thought
- Baselines for building custom reasoning LLMs
- Teaching process reward modelling
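For the tutoring use case, an app typically needs to separate the visible reasoning trace from the final answer before rendering. A minimal sketch, assuming a hypothetical `Final answer:` marker; the real delimiter depends on the model's chat template and should be adapted accordingly:

```python
def split_trace(model_output, marker="Final answer:"):
    """Split a chain-of-thought completion into (reasoning, answer).
    The marker is a hypothetical convention, not Skywork's documented
    output format; adjust it to what the model actually emits."""
    reasoning, sep, answer = model_output.partition(marker)
    if not sep:  # marker missing: treat the whole output as the answer
        return "", model_output.strip()
    return reasoning.strip(), answer.strip()

sample = ("Step 1: 12 * 13 = 12 * 10 + 12 * 3.\n"
          "Step 2: 120 + 36 = 156.\n"
          "Final answer: 156")
steps, answer = split_trace(sample)
```

Keeping the split in one place makes it easy to show or hide the reasoning per user preference without touching the rest of the app.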
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| MATH | ~73% | 2026-04 |
| GSM8K | ~95% | 2026-04 |
| AIME 2024 | ~20% | 2026-04 |
Frequently asked questions
What is Skywork-o1-Open?
Skywork-o1-Open is Kunlun Tech's open-weight family of o1-style reasoning language models, available in 8B and 32B sizes. It uses process reward models and rejection sampling to teach explicit step-by-step reasoning.
How does Skywork-o1-Open compare to Marco-o1?
Marco-o1 relies on Monte Carlo Tree Search (MCTS) at inference time, while Skywork-o1-Open is trained end-to-end with process rewards. Skywork's 32B variant generally scores higher, though it demands heavier hardware.
Sources
- Skywork-o1-Open on HuggingFace — accessed 2026-04-20
- Skywork AI homepage — accessed 2026-04-20