R2R
R2R from SciPhi is a batteries-included RAG system you deploy as a service. It handles document ingestion and chunking, hybrid search (vector + keyword), knowledge-graph construction and querying, agentic RAG, observability, and user-level document permissions. It ships a REST API, Python and TypeScript SDKs, and a dashboard, positioning itself as something like a Supabase for RAG applications.
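Hybrid search merges ranked results from the vector and keyword retrievers. A common way to do this is reciprocal rank fusion (RRF); the sketch below is a generic illustration of that idea, not R2R's internal fusion logic, and the `vector_hits` / `keyword_hits` lists are made-up inputs.

```python
# Reciprocal rank fusion (RRF): a common way to merge ranked lists
# from vector and keyword search. Generic sketch, not R2R's internals.

def rrf_merge(rankings, k=60):
    """Merge several ranked lists of doc IDs into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes 1/(k + rank); high ranks in
            # multiple lists accumulate the largest fused score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from the two retrievers
vector_hits = ["doc3", "doc1", "doc7"]
keyword_hits = ["doc1", "doc9", "doc3"]

fused = rrf_merge([vector_hits, keyword_hits])
print(fused[0])  # a document ranked well by both retrievers wins
```

A document that appears near the top of both lists outranks one that tops only a single list, which is why RRF is a popular default for hybrid retrieval.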
Framework facts
- Category
- rag
- Language
- Python / TypeScript
- License
- MIT
- Repository
- https://github.com/SciPhi-AI/R2R
Install
pip install r2r
# run server:
# docker compose -f r2r-compose.yaml up -d
Quickstart
from r2r import R2RClient
client = R2RClient('http://localhost:7272')
client.documents.create(file_path='paper.pdf')
result = client.retrieval.rag(
    query='What are the main findings?'
)
print(result.results.generated_answer)
Alternatives
- Haystack — pipeline framework, not a server
- LlamaIndex — framework with managed LlamaCloud
- Verba — Weaviate's opinionated RAG app
- txtai — embeddings-first toolkit
Frequently asked questions
Is R2R a framework or a service?
R2R is a service — a deployable server with a REST API. You don't compose pipelines in Python like with Haystack or LlamaIndex; you call retrieve/rag/ingest endpoints. That makes it fast to stand up an internal RAG platform.
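Because R2R is a server, any HTTP client can talk to it without the SDK. The sketch below only assembles the request; the `/v3/retrieval/rag` path and `{"query": ...}` payload shape are assumptions based on recent docs, so verify them against the API reference for your deployed version.

```python
# Minimal sketch of calling R2R over plain HTTP instead of the SDK.
# Endpoint path and payload shape are assumptions -- check the API
# reference shipped with your R2R version before relying on them.

def build_rag_request(base_url, query):
    """Assemble (url, payload) for a hypothetical RAG endpoint call."""
    url = f"{base_url}/v3/retrieval/rag"   # assumed v3 route
    payload = {"query": query}
    return url, payload

url, payload = build_rag_request(
    "http://localhost:7272", "What are the main findings?"
)
print(url)
# To send it, e.g.: requests.post(url, json=payload,
#                                 headers={"Authorization": "Bearer <token>"})
```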
What does 'Reason to Retrieve' mean?
R2R supports agentic retrieval — the LLM can decide what to search for, iterate, and use knowledge graphs — rather than a fixed embed-then-retrieve pipeline. This is the 'reason to retrieve' behaviour it's named after.
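The idea can be sketched as a loop in which the model chooses the next search instead of running one fixed retrieval. This is a conceptual illustration with stubbed-out `llm_decide` and `search` functions, not R2R's actual agent implementation.

```python
# Conceptual 'reason to retrieve' loop: the model decides what to
# search next, inspects the evidence, and repeats until it can answer.
# Every function here is a stand-in, not an R2R API.

def llm_decide(question, evidence):
    """Stub: return the next search query, or None when done."""
    if not evidence:
        return question          # first pass: search the question itself
    return None                  # enough evidence gathered; stop

def search(query):
    """Stub: pretend to hit a hybrid-search index."""
    return [f"chunk matching {query!r}"]

def agentic_rag(question, max_steps=3):
    evidence = []
    for _ in range(max_steps):
        query = llm_decide(question, evidence)
        if query is None:
            break                # the model chose to stop retrieving
        evidence.extend(search(query))
    return evidence

print(agentic_rag("What are the main findings?"))
```

The contrast with a classic pipeline is the loop: retrieval runs zero or more times under the model's control rather than exactly once before generation.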
Sources
- R2R — docs — accessed 2026-04-20
- R2R — GitHub — accessed 2026-04-20