gptme
gptme runs as a CLI REPL on your laptop, giving an LLM a toolbox of shell, Python, file-edit, and browser actions. It's designed to be hackable: plain Python, simple prompts, and first-class support for Anthropic, OpenAI, local Ollama, and Groq.
Framework facts
- Category: agents
- Language: Python
- License: MIT
- Repository: https://github.com/gptme/gptme
Install
pipx install gptme

Quickstart
# one-shot from the CLI
gptme 'read README.md then write a one-paragraph elevator pitch'
# interactive
gptme
> read src/app.py and suggest 3 refactors
> apply suggestion 1 and run pytest

Alternatives
- Open Interpreter — similar but more notebook-flavoured
- Aider — focused purely on Git-aware code editing
- Continue — IDE-embedded coding agent
- Claude Code — Anthropic's official terminal coding agent
Frequently asked questions
How is gptme different from Open Interpreter?
gptme leans toward a terminal chat workflow with Git-friendly file edits, while Open Interpreter centres on an interactive code-interpreter experience. The two overlap in capability but differ in ergonomics.
Can I run gptme fully locally?
Yes. Point it at Ollama, LM Studio, or any other OpenAI-compatible server via the OPENAI_BASE_URL environment variable. Expect slower reasoning on small local models.
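A minimal sketch of the local setup described above, assuming an Ollama server on its default port (11434); the endpoint URL and prompt are illustrative, not taken from gptme's docs.

```shell
# Point gptme at an OpenAI-compatible endpoint via OPENAI_BASE_URL.
# Assumption: Ollama is serving its OpenAI-compatible API locally.
export OPENAI_BASE_URL="http://localhost:11434/v1"

# Same one-shot usage as the Quickstart, now against the local model.
if command -v gptme >/dev/null; then
  gptme 'read README.md and write a one-line summary'
fi
```

Swap the URL for LM Studio's default (usually port 1234) or any other compatible server; gptme only needs the base URL to change.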
Sources
- gptme — GitHub — accessed 2026-04-20
- gptme — docs — accessed 2026-04-20