LitGPT
LitGPT re-implements Llama, Mistral, Phi, Gemma, StableLM, and other LLMs as clean, readable PyTorch implementations built on Lightning. You get one recipe for pretraining, continued pretraining, full fine-tuning, LoRA, QLoRA, and serving — and the codebase is small enough to read end-to-end, which matters for research and debugging.
Framework facts
- Category: fine-tuning
- Language: Python
- License: Apache-2.0
- Repository: https://github.com/Lightning-AI/litgpt
Install
pip install 'litgpt[all]'
Quickstart
# Download a base model and LoRA fine-tune
litgpt download meta-llama/Llama-3.1-8B
litgpt finetune_lora meta-llama/Llama-3.1-8B \
--data Alpaca2k \
--train.epochs 3 \
--out_dir out/llama-lora
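The `finetune_lora` run above can also be driven from a YAML file passed via `--config`, which is handy for repeatable experiments. A minimal sketch — the key names below follow the style of LitGPT's bundled `config_hub` files, but treat them as assumptions and check `litgpt finetune_lora --help` for the exact schema your installed version accepts:

```yaml
# finetune_lora.yaml — hypothetical minimal config; verify key names
# against `litgpt finetune_lora --help` for your LitGPT version.
checkpoint_dir: checkpoints/meta-llama/Llama-3.1-8B
out_dir: out/llama-lora
precision: bf16-true

data:
  class_path: litgpt.data.Alpaca2k

train:
  epochs: 3
```

Invoke it with `litgpt finetune_lora --config finetune_lora.yaml`; CLI flags override values from the file.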
litgpt chat out/llama-lora/final
Alternatives
- Hugging Face TRL — widely used training library
- Axolotl — YAML-first fine-tune framework
- Unsloth — memory-efficient fine-tuning via custom kernels
- TorchTune — PyTorch-native fine-tuning library
Frequently asked questions
Who is LitGPT for?
Researchers and engineers who want to read, modify, and extend the training loop. If you just want a config-driven fine-tune, Axolotl may be quicker; if you need maximum GPU memory efficiency, Unsloth may be a better fit.
Does LitGPT support multi-GPU and FSDP?
Yes. It uses PyTorch Lightning's Fabric for FSDP, DeepSpeed, and distributed training out of the box.
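A sketch of what a multi-GPU LoRA run might look like from the CLI — the `--devices` flag for selecting GPU count is an assumption to verify against `litgpt finetune_lora --help` for your installed version:

```shell
# Sketch: LoRA fine-tune sharded across 4 GPUs on one node.
# --devices is assumed; confirm flag names with `litgpt finetune_lora --help`.
litgpt finetune_lora meta-llama/Llama-3.1-8B \
  --devices 4 \
  --data Alpaca2k \
  --out_dir out/llama-lora-4gpu
```

Fabric picks a distributed strategy under the hood, so the command stays the same shape as the single-GPU quickstart.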
Sources
- LitGPT — GitHub — accessed 2026-04-20
- LitGPT — docs — accessed 2026-04-20