# pgvector vs Qdrant
pgvector and Qdrant represent a fundamental trade-off in vector storage. pgvector lives inside Postgres: embeddings sit in the same database as users, documents, and transactions, making joins and backups trivial. Qdrant is a dedicated Rust engine with richer indexing, quantisation, and clustering, but it runs as a separate service. The decision usually comes down to data locality versus specialised performance.
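As a concrete sketch of the pgvector side — the table name, column names, and 384-dimension embedding size below are assumptions, not prescriptions; adjust them to your schema and model:

```sql
-- Enable the extension and store embeddings next to business data
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    title     text NOT NULL,
    embedding vector(384)
);

-- HNSW index (pgvector 0.5+); vector_cosine_ops pairs with the <=> operator
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- 10 nearest neighbours to a query embedding passed as a parameter
SELECT id, title
FROM documents
ORDER BY embedding <=> $1::vector
LIMIT 10;
```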
## Side-by-side
| Criterion | pgvector | Qdrant |
|---|---|---|
| Architecture | Postgres extension (same DB) | Dedicated service, Rust engine |
| Index type | IVFFlat, HNSW (since 0.5) | HNSW with rich tuning |
| Quantisation | Limited — pgvector 0.7+ adds binary | Scalar, product, binary quantisation |
| SQL joins with business data | Native — same database | Not native — app-level join |
| Scale where it's still comfortable | Up to 10-50M vectors (hardware-dependent) | 100M+ with sharding and quantisation |
| Hybrid search | Via tsvector + pgvector (manual) | First-class with sparse vectors |
| Ops burden | Zero new service — use your Postgres | One new stateful service to run |
| License | PostgreSQL license | Apache 2.0 |
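The "manual" hybrid search row deserves a sketch: with pgvector you typically run a full-text (tsvector) query and a vector query separately, then fuse the two ranked lists in the application. One common fusion method — not specific to pgvector, and shown here as an illustrative sketch — is reciprocal rank fusion:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: merge several ranked ID lists into one.

    Each ranking is a list of document IDs, best first. k=60 is the
    conventional constant from the original RRF paper; it damps the
    influence of top ranks so no single list dominates.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Fuse a full-text ranking (tsvector) with a vector ranking (pgvector)
fulltext = ["a", "b", "c"]
vector   = ["b", "d", "a"]
print(rrf_fuse([fulltext, vector]))  # → ['b', 'a', 'd', 'c']
```

Documents appearing in both lists ("a" and "b") accumulate score from each and rise to the top — which is the behaviour Qdrant gives you natively via sparse vectors.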
## Verdict
For teams that already run Postgres and have well under 10 million vectors, pgvector is almost always the right starting choice: no new service, no data-sync pipeline, and your filters can join directly against business tables. For teams pushing 50-100M+ vectors, needing quantisation, or wanting low latency at high QPS, Qdrant's dedicated engine pulls ahead. A common pattern is to start on pgvector and migrate as scale demands; the API surface is small enough that porting takes days, not months.
## When to choose each
### Choose pgvector if…
- You already run Postgres and don't want another service.
- Your vector count is under ~10M.
- You want to filter by business data in the same query.
- Backups, replication, and compliance are solved by your Postgres stack.
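The same-query filtering point can be sketched as plain SQL — the `documents` and `accounts` tables and their columns are hypothetical, stand-ins for whatever business data lives alongside your embeddings:

```sql
-- Nearest neighbours restricted to one customer tier's live documents,
-- filtered by a join against a business table in the same database
SELECT d.id, d.title
FROM documents d
JOIN accounts a ON a.id = d.account_id
WHERE a.plan = 'enterprise'
  AND d.deleted_at IS NULL
ORDER BY d.embedding <=> $1::vector
LIMIT 10;
```

With a separate vector store, the equivalent requires either duplicating `plan` and `deleted_at` into the vector store's payload or doing a second lookup in the application.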
### Choose Qdrant if…
- You have 50M+ vectors and want consistent low-latency queries.
- Quantisation is needed to fit indexes in RAM.
- You want first-class hybrid search with sparse vectors.
- Query throughput (QPS) is a binding constraint.
## Frequently asked questions
### At what scale should I migrate from pgvector?
Roughly when your HNSW index stops fitting in `shared_buffers` and query p99 latency gets noisy — typically somewhere between 10M and 50M vectors, heavily dependent on hardware and dimensionality.
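A rough way to check where you stand — `documents_embedding_idx` is an assumed index name; substitute your own:

```sql
-- Compare the on-disk size of the HNSW index to shared_buffers.
-- If the index is much larger than shared_buffers, expect cache misses.
SELECT pg_size_pretty(pg_relation_size('documents_embedding_idx'));
SHOW shared_buffers;
```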
### Can I run pgvector on RDS / managed Postgres?
Yes — RDS, Aurora, Cloud SQL, and Azure Database for PostgreSQL all support pgvector as an extension.
### Does Qdrant support SQL?
No. Filters use Qdrant's own payload filter syntax. If SQL semantics matter more than pure vector performance, pgvector is the better fit.
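For a feel of the difference, here is a sketch of a Qdrant payload filter expressed as the dict/JSON structure the API accepts — the field names (`plan`, `price`) are hypothetical, chosen to mirror a simple SQL `WHERE` clause:

```python
# Roughly equivalent to SQL: WHERE plan = 'enterprise' AND price < 100
query_filter = {
    "must": [
        {"key": "plan", "match": {"value": "enterprise"}},
        {"key": "price", "range": {"lt": 100}},
    ]
}
```

`must` clauses are ANDed together; `should` and `must_not` cover OR and negation. Anything more relational — joins, aggregates, subqueries — has to happen in your application.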