Code Llama 13B
Code Llama 13B is the mid-size member of Meta's 2023 Code Llama family, fine-tuned from Llama 2 on a large corpus of code and long-context sequences. It supports code infilling, ships in instruction-tuned and Python-specialised variants, and was the go-to open coding LLM in 2023-24 before the DeepSeek-Coder and Qwen-Coder families surpassed it.
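Infilling works via a prefix-suffix-middle (PSM) prompt: the model sees the code before and after a gap and generates the missing middle, stopping at an end-of-infill token. A minimal sketch of building such a prompt, assuming the `<PRE>`/`<SUF>`/`<MID>` special-token spelling used in Meta's reference generation code:

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Build a Code Llama fill-in-the-middle prompt (PSM order).

    Token names follow Meta's published reference code; the model
    generates the span between prefix and suffix, ending with <EOT>.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Ask the model to fill in a function body.
prompt = build_infill_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n",
)
print(prompt)
```

The resulting string would be tokenized and passed to the base or Python variant (the Instruct variant is not trained for infilling); only the sequence generated after `<MID>` is the completion.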
Model specs
- Vendor
- Meta
- Family
- Code Llama
- Released
- 2023-08
- Context window
- 16,384 tokens
- Modalities
- text, code
Strengths
- Open weights under the Llama 2 Community License
- Good 16K-context code infilling in its era
- Widely supported in llama.cpp, vLLM, and Hugging Face
Limitations
- Far below DeepSeek-Coder V2 and Qwen2.5-Coder on modern evals
- No tool-use support; chat tuning only in the Instruct variant
- Licence restricts use by services with more than 700 million monthly active users
Use cases
- Local code completion in VS Code / Neovim plugins
- Offline coding assistants in air-gapped environments
- Fine-tuning baselines for bespoke coding corpora
- Classroom examples of open-weights coding LLMs
Benchmarks
| Benchmark | Score (pass@1) | As of |
|---|---|---|
| HumanEval | ≈36% | 2023-08 |
| MBPP | ≈47% | 2023-08 |
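These pass@1 figures are conventionally computed with the unbiased pass@k estimator from the HumanEval evaluation harness: sample n completions per problem, count the c that pass the tests, and estimate 1 − C(n−c, k)/C(n, k). A sketch (the sample counts below are illustrative, not Meta's actual evaluation settings):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    completions drawn from n samples (c of them correct) passes."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: guaranteed pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: 72 of 200 samples correct gives pass@1 = 0.36
print(pass_at_k(200, 72, 1))
```

For k = 1 the estimator reduces to the raw pass rate c/n; larger k rewards models whose correct answers are spread across more problems.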
Frequently asked questions
What is Code Llama 13B?
Code Llama 13B is Meta's 13-billion-parameter open-weights code-generation model, fine-tuned from Llama 2 and released in August 2023 alongside 7B and 34B variants.
Is Code Llama 13B still worth using?
For new projects, DeepSeek-Coder V2 Lite or Qwen2.5-Coder 14B is usually a better open-weights choice. Code Llama 13B remains relevant for legacy deployments and historical comparisons.
Where can I download Code Llama 13B?
Weights are available on Hugging Face under 'codellama/CodeLlama-13b-hf' and the related Instruct / Python-specialised repositories.
Sources
- Meta — Code Llama announcement — accessed 2026-04-20
- Hugging Face — codellama/CodeLlama-13b-hf — accessed 2026-04-20