Japanese Stable LM 2
Japanese Stable LM 2 1.6B, released by Stability AI Japan in 2024, is a Japanese-language open-weights model built on the Stable LM 2 backbone. Stability AI continued pretraining it on a large curated Japanese corpus and then instruction-tuned it, producing a compact model that runs on modest hardware while delivering strong Japanese fluency and reading-comprehension scores for its size.
Model specs
| Spec | Value |
|---|---|
| Vendor | Stability AI |
| Family | Stable LM 2 |
| Released | 2024-05 |
| Context window | 4,096 tokens |
| Modalities | Text |
Strengths
- Small 1.6B footprint runs on modest hardware
- Trained on Stability AI Japan's curated Japanese corpus for strong native fluency
- Open weights for research and limited commercial use
Limitations
- Small capacity limits reasoning and long-context tasks
- Mostly Japanese-focused, with weaker English coverage
- Outperformed on Japanese benchmarks by larger models such as Qwen2.5 and EvoLLM-JP
Use cases
- Japanese-language chat on edge or consumer devices
- Fine-tuning base for Japanese domain assistants (see the LoRA sketch after this list)
- Classroom demos of small multilingual LLMs
- Japanese text classification and generation
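The fine-tuning use case can be sketched with LoRA adapters from the `peft` library. The repository id, the attention-projection module names, and the hyperparameters below are illustrative assumptions rather than Stability AI recommendations; verify the repo name and license terms on the model card before training.

```python
# Hedged sketch: LoRA fine-tuning setup for the 1.6B base model.
# Repo id and target module names are assumptions, not from Stability AI docs.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "stabilityai/japanese-stablelm-2-base-1_6b"  # assumed base-model repo id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

# Attach low-rank adapters to the attention projections; only these small
# matrices are trained, which keeps memory requirements modest.
lora = LoraConfig(
    r=16,                      # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter matrices receive gradients, a 1.6B-parameter base like this one can usually be adapted on a single consumer GPU.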
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| Japanese MT-Bench (jp-mt-bench) | competitive for 1.6B class | 2024-05 |
Frequently asked questions
What is Japanese Stable LM 2?
Japanese Stable LM 2 1.6B is Stability AI Japan's compact Japanese-language open-weights LLM, derived from the Stable LM 2 backbone via continued pretraining on Japanese text.
Where can I use it?
Weights are available on Hugging Face in the `stabilityai/japanese-stablelm-2-*` repositories. Stability AI Japan also provides hosted demos for exploration.
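As a minimal sketch, the instruct variant can be loaded with Hugging Face `transformers`. The exact repository id and the `trust_remote_code` flag are assumptions to verify against the model card, and the repositories may be gated behind a license acknowledgement.

```python
# Hedged sketch: load the instruct model and run one Japanese chat turn.
# The repo id below is an assumption; confirm it on Hugging Face first.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "stabilityai/japanese-stablelm-2-instruct-1_6b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # ~3 GB in fp16, fits consumer GPUs
    device_map="auto",
    trust_remote_code=True,
)

# "What is the capital of Japan?" as a single user turn.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```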
How does it compare to EvoLLM-JP?
EvoLLM-JP is a merged model aggregating multiple parents and usually scores higher on Japanese math and reasoning benchmarks. Japanese Stable LM 2 remains a clean, single-lineage baseline.
Sources
- Stability AI — Japanese Stable LM 2 — accessed 2026-04-20
- Hugging Face — Stability AI Japan — accessed 2026-04-20