txtai
txtai from NeuML is a compact embeddings-first framework. It bundles a local vector database (with optional backends like SQLite, DuckDB, Faiss, Postgres), full-text + graph + ANN hybrid search, RAG pipelines, workflows, agents, and a large catalogue of application recipes. It's a good choice when you want to run RAG and agentic search entirely inside your own process with minimal external infrastructure.
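Hybrid search here means fusing sparse keyword scores with dense vector similarity into one ranking. As a rough conceptual sketch only (not txtai's actual implementation), the fusion can be a weighted sum of the two scores:

```python
from math import sqrt

def cosine(a, b):
    # Dense similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    # Crude sparse signal: fraction of query terms present in the document
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, weight=0.5):
    # docs: list of (text, vector); weight blends dense vs. keyword scores
    scored = []
    for text, vec in docs:
        score = weight * cosine(query_vec, vec) + (1 - weight) * keyword_score(query, text)
        scored.append((score, text))
    return sorted(scored, reverse=True)
```

In txtai itself, hybrid scoring is a configuration option on the embeddings database rather than something you wire up by hand.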
Framework facts
- Category
- rag
- Language
- Python
- License
- Apache 2.0
- Repository
- https://github.com/neuml/txtai
Install
pip install txtai
Quickstart
from txtai import Embeddings

# Build an in-memory index with a small sentence-transformers model
embeddings = Embeddings(path='sentence-transformers/all-MiniLM-L6-v2')
embeddings.index([
    'US tops 5 million virus cases',
    "Canada's NHL returns to play",
    'The sky is blue today'
])

# Prints the top 2 matches as (id, score) tuples
print(embeddings.search('sports', 2))
Alternatives
- Chroma — pure embedded vector DB
- Haystack — pipeline framework
- LlamaIndex — RAG-focused framework
- LangChain + FAISS — DIY alternative
Frequently asked questions
Is txtai a vector database or a framework?
Both. At its core is an embeddings database, but it wraps enough pipelines, workflows, and agent primitives that you can build complete RAG and agentic apps without pulling in another framework.
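The RAG pattern those pipelines implement is: retrieve the most relevant passages, assemble them into a grounded prompt, then generate. A minimal sketch of that flow, with the retriever and LLM left as caller-supplied functions (this is the generic pattern, not txtai's exact prompt template or API):

```python
def build_prompt(question, passages):
    # Assemble retrieved passages into a grounded prompt
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def rag_answer(question, retrieve, generate, k=3):
    # retrieve: question -> ranked list of passages (e.g. embeddings.search)
    # generate: prompt -> answer text (any LLM callable)
    passages = retrieve(question)[:k]
    return generate(build_prompt(question, passages))
```

With txtai, `retrieve` would typically be backed by the embeddings database from the quickstart, and `generate` by one of its LLM pipelines.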
Does txtai need a GPU?
No. txtai runs CPU-only with reasonable defaults (MiniLM-style models). GPU accelerates embedding and inference if you have one, but all features work on a CPU-only laptop — useful for offline/edge deployments.
Sources
- txtai — docs — accessed 2026-04-20
- txtai — GitHub — accessed 2026-04-20