Capability · Framework — local inference
LM Studio
LM Studio is a free (for personal and some commercial use) desktop application that makes running local LLMs feel like using an app store. It integrates Hugging Face model search, automatic GGUF / MLX backend selection, an OpenAI-compatible REST server, structured-output support, and a headless `lms` CLI for scripted deployments. It is widely used by developers who want a GUI on top of llama.cpp rather than its CLI.
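The structured-output support mentioned above rides on the OpenAI-compatible server: the request carries a `response_format` field with a JSON schema, following the OpenAI chat-completions convention. A minimal sketch of such a payload (the model name is a placeholder, and applying this exact field layout to LM Studio's server is an assumption based on its OpenAI compatibility):

```python
import json

# Hypothetical structured-output request for LM Studio's OpenAI-compatible
# endpoint (default http://localhost:1234/v1/chat/completions).
# "local-model" is a placeholder; use whatever model you have loaded.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "List two primary colors."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "colors",
            "schema": {
                "type": "object",
                "properties": {
                    "colors": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["colors"],
            },
        },
    },
}
body = json.dumps(payload)  # ready to POST with any HTTP client
```

With a schema like this, the server constrains the model's output to valid JSON matching the declared shape, so the reply can be parsed directly instead of scraped out of free text.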
Framework facts
- Category
- local inference
- Language
- TypeScript / C++
- License
- Proprietary (free for personal and some commercial use)
Install
# Download the installer from https://lmstudio.ai
# Or via Homebrew cask:
brew install --cask lm-studio
Quickstart
# From the app: Developer → Start Server, then:
curl http://localhost:1234/v1/chat/completions \
-H 'Content-Type: application/json' \
-d '{"model": "local-model", "messages": [{"role":"user","content":"hello"}]}'
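The same request can be issued from any OpenAI-compatible client; a minimal stdlib-only Python sketch, assuming the server is running on the default port 1234 (the actual network call is commented out so the snippet stands alone):

```python
import json
import urllib.request

# Same request as the curl example above, built with the standard library.
# The URL and model name assume LM Studio's defaults.
url = "http://localhost:1234/v1/chat/completions"
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is started (Developer → Start Server, or `lms server start`):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```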
# Or headless
lms server start && lms load llama-3.2-3b-instruct
Alternatives
- Jan — open-source alternative
- Ollama — CLI-first
- llamafile — single-binary
Frequently asked questions
Is LM Studio open source?
No — the app is proprietary, though the inference engines it wraps (llama.cpp, MLX) are open. A free license is available for personal and some commercial use; check their Terms for details.
Does LM Studio support Apple Silicon acceleration?
Yes — it auto-selects Metal for GGUF models and can use Apple MLX for native quantised models, typically giving 2-4x speedups on M-series chips over CPU.
Sources
- LM Studio docs — accessed 2026-04-20
- LM Studio home — accessed 2026-04-20