Hunyuan-Large
Hunyuan-Large is Tencent's 2024 open-weight Mixture-of-Experts model — 389 billion total parameters with 52 billion active per token. It was the largest open-weight MoE at release, with a 256k context window and strong performance on Chinese and multilingual benchmarks. It powers parts of Tencent's AI stack across WeChat and Tencent Cloud.
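The MoE design is why the total and active parameter counts diverge so sharply: a router selects only a few experts per layer for each token, so compute per token scales with the number of chosen experts rather than the total. Below is a minimal PyTorch sketch of generic top-k expert routing; the layer sizes, expert count, and k are illustrative only, and Hunyuan-Large's actual routing scheme (including any shared experts) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Generic top-k mixture-of-experts layer. Sizes and k are
    illustrative, not Hunyuan-Large's real configuration."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (n_tokens, d_model)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalise over the k chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # each token runs only k experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e         # tokens whose slot-th choice is e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TopKMoE()
y = moe(torch.randn(8, 512))  # 8 tokens; only 2 of 16 experts fire per token
```

The key property to note is that each token touches only k experts' weights, which is how a 389B-parameter model can activate just 52B parameters per token.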
Model specs
- Vendor: Tencent
- Family: Hunyuan
- Released: 2024-11
- Context window: 256,000 tokens
- Modalities: text, code
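Figures like these can be sanity-checked against the published config on Hugging Face. A quick sketch, assuming the repository id tencent/Tencent-Hunyuan-Large and that the remote config exposes fields under these names (both are assumptions; the model card is authoritative):

```python
from transformers import AutoConfig

# Repository id and field names below are assumptions; check the
# Hugging Face model card for the authoritative values.
cfg = AutoConfig.from_pretrained(
    "tencent/Tencent-Hunyuan-Large",
    trust_remote_code=True,  # Hunyuan ships custom modeling code
)

print(cfg.max_position_embeddings)         # context window, expected ~256k
print(getattr(cfg, "num_experts", "n/a"))  # MoE expert count, if exposed under this name
```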
Strengths
- Largest open-weight MoE at release
- Strong Chinese-language scores on C-Eval and CMMLU
- 256k context window
- Weights freely available on Hugging Face for research use
Limitations
- Requires substantial GPU infrastructure to self-host (multi-H100)
- English-language performance trails Western frontier models
- Outside China, inference providers are limited
Use cases
- Chinese-market LLM research and product development
- Self-hosted bilingual chatbots for Tencent Cloud customers
- Long-context summarisation and retrieval over Chinese corpora
- Academic benchmarking of large MoE architectures
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| MMLU | ~88% | 2026-04 |
| C-Eval | ~89% | 2026-04 |
| MATH | ~69% | 2026-04 |
Frequently asked questions
What is Hunyuan-Large?
Hunyuan-Large is Tencent's 389B-parameter Mixture-of-Experts language model, released as open weights in November 2024 with a 256k context window. It is strongest on Chinese and multilingual tasks.
Can I self-host Hunyuan-Large?
Yes. Tencent published the weights on Hugging Face, but serving the full 389B MoE requires a multi-H100 rig. Smaller Hunyuan variants exist for lighter deployments.
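As a starting point, here is a minimal loading sketch using Hugging Face transformers. The repository id is an assumption (verify it on the model card), and a production deployment would more likely use a tensor-parallel inference server such as vLLM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tencent/Tencent-Hunyuan-Large"  # assumed repo id; verify on Hugging Face

tok = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # ~780 GB of weights alone at 389B params
    device_map="auto",           # shard layers across all visible GPUs
    trust_remote_code=True,      # Hunyuan ships custom modeling code
)

prompt = "用一句话介绍混元大模型。"  # "Introduce the Hunyuan model in one sentence."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```

Note the bf16 arithmetic: 389B parameters at 2 bytes each is roughly 780 GB, which already exceeds a single 8×H100 (80 GB) node before activations and KV cache, hence the multi-node requirement above.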
Sources
- Tencent Hunyuan on Hugging Face — accessed 2026-04-20
- Tencent Hunyuan GitHub — accessed 2026-04-20