
Verba

Verba (The Golden RAGtriever) is Weaviate's opinionated RAG application. It provides end-to-end ingestion, chunking, embedding, retrieval, and a chat UI backed by a Weaviate vector database. You can run it entirely locally with Ollama and local embeddings or use hosted providers (OpenAI, Anthropic, Cohere, Google, HuggingFace). It's a popular starting point for teams prototyping a chat-with-your-docs experience.

Framework facts

Category: rag
Language: Python / TypeScript
License: BSD-3-Clause
Repository: https://github.com/weaviate/Verba

Install

pip install goldenverba
# then launch the server
verba start
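
If you prefer containers, the Verba repository also ships a Docker Compose setup that runs the app alongside a local Weaviate instance. This sketch assumes the compose file at the repo root still builds both services; check the repository before relying on it:

```shell
git clone https://github.com/weaviate/Verba
cd Verba
# build the Verba image and start it together with a Weaviate container
docker compose up -d --build
```

The containerized deployment serves the same UI on port 8000 as the pip install.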

Quickstart

# After pip install goldenverba
export OPENAI_API_KEY=sk-...
export WEAVIATE_URL_VERBA=...
export WEAVIATE_API_KEY_VERBA=...
verba start
# Open http://localhost:8000 and upload documents
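
For a fully local run without hosted API keys, Verba can use Ollama for both generation and embeddings. The variable names below match the Verba README at the time of writing, and the model names are illustrative; verify both against the current docs:

```shell
# assumes Ollama is installed and running locally (default port 11434)
ollama pull llama3              # generation model (example choice)
ollama pull mxbai-embed-large   # embedding model (example choice)

export OLLAMA_URL=http://localhost:11434
export OLLAMA_MODEL=llama3
export OLLAMA_EMBED_MODEL=mxbai-embed-large
verba start
```

With no Weaviate connection variables set, no hosted provider is needed at all; see the FAQ below on running without Weaviate Cloud.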

Alternatives

  • R2R — RAG server, no UI by default
  • Chat with LlamaIndex — reference chat UI
  • open-webui — chat UI with RAG plugins
  • AnythingLLM — similar open-source RAG app

Frequently asked questions

Do I have to use Weaviate Cloud?

No. Verba supports embedded Weaviate, local Docker Weaviate, and Weaviate Cloud. For a zero-infrastructure start you can use embedded mode with local Ollama embeddings.
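
A minimal sketch of the three deployment options expressed as environment configuration. The variable names follow the quickstart above; the cluster URL is a hypothetical placeholder, and the embedded-mode fallback behavior is worth confirming in the Verba docs:

```shell
# Option 1: embedded Weaviate (zero infrastructure) -- leave connection vars unset
unset WEAVIATE_URL_VERBA WEAVIATE_API_KEY_VERBA

# Option 2: local Docker Weaviate (default REST port 8080)
export WEAVIATE_URL_VERBA=http://localhost:8080

# Option 3: Weaviate Cloud
export WEAVIATE_URL_VERBA=https://your-cluster.weaviate.network  # hypothetical cluster URL
export WEAVIATE_API_KEY_VERBA=...

verba start
```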

Is Verba production-ready?

Verba is best treated as a high-quality reference implementation. It's fine for internal tools and demos, and many teams fork it for production. For heavy multi-tenant deployments you'd typically build on Weaviate directly.
