Capability · Framework — fine-tuning

Axolotl

Axolotl (originally from the OpenAccess AI Collective, now maintained under the axolotl-ai-cloud organization) is the config-first alternative to code-heavy training loops. Instead of writing trainer scripts, you define everything in a YAML file — dataset format, base model, LoRA rank, learning rate, DeepSpeed config — and Axolotl composes the training run from it. It's a favourite of open-source model builders and has powered many fine-tunes released on Hugging Face.

Framework facts

Category
fine-tuning
Language
Python
License
Apache 2.0
Repository
https://github.com/axolotl-ai-cloud/axolotl

Install

git clone https://github.com/axolotl-ai-cloud/axolotl
cd axolotl && pip install -e '.[flash-attn,deepspeed]'

Quickstart

# config.yml
base_model: meta-llama/Llama-3.1-8B
datasets:
  - path: my-dataset.jsonl
    type: alpaca
adapter: qlora
load_in_4bit: true
lora_r: 16
micro_batch_size: 2
num_epochs: 3

# CLI
accelerate launch -m axolotl.cli.train config.yml
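The `type: alpaca` line tells Axolotl to expect instruction-tuning records with `instruction`, `input`, and `output` fields, one JSON object per line. A minimal sketch of producing such a file with the standard library (the records themselves are invented examples):

```python
import json

# Alpaca-style records: "input" provides optional context and may be empty.
records = [
    {
        "instruction": "Summarise the following text in one sentence.",
        "input": "Axolotl is a config-first fine-tuning framework driven by YAML.",
        "output": "Axolotl lets you define fine-tuning runs in a YAML file.",
    },
    {
        "instruction": "What is the capital of France?",
        "input": "",
        "output": "Paris.",
    },
]

def write_alpaca_jsonl(path: str, rows: list) -> None:
    """Write records as JSON Lines, the layout the alpaca loader reads."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_alpaca_jsonl("my-dataset.jsonl", records)
```

Point the `path:` key in config.yml at the resulting file and Axolotl handles prompt templating from there.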

Alternatives

  • Unsloth — speed- and memory-optimized single-GPU library
  • torchtune — PyTorch-native alternative
  • LLaMA-Factory — config-driven fine-tuner with a web UI
  • TRL — lower-level HF library

Frequently asked questions

Axolotl or Unsloth?

Unsloth wins for single-GPU speed and memory. Axolotl wins when you need multi-GPU/multi-node, custom dataset formats, or diverse training objectives (DPO, ORPO, full-parameter). Many teams prototype with Unsloth and scale with Axolotl.
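Scaling up mostly means adding keys to the same YAML rather than rewriting code. A hedged sketch of the relevant knobs — the DeepSpeed JSON path matches the `deepspeed_configs/` directory shipped in the repo, but both it and the `rl:` key should be checked against the current docs:

```yaml
# Multi-GPU: point at a shipped DeepSpeed config (paths are illustrative).
deepspeed: deepspeed_configs/zero2.json
gradient_accumulation_steps: 4   # effective batch = micro_batch * accum * n_gpus

# Preference tuning: switch the objective from SFT to DPO.
rl: dpo
datasets:
  - path: my-preference-data.jsonl   # must supply chosen/rejected pairs
```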

Can I run it on a cloud GPU service?

Yes — Axolotl runs cleanly on RunPod, Modal, Lambda Labs, and anywhere else with a CUDA-capable GPU and a recent Python environment. Example YAMLs for popular base models are published in the repo.

Sources

  1. Axolotl — docs — accessed 2026-04-20
  2. Axolotl on GitHub — accessed 2026-04-20