Curiosity · AI Model

Voyage AI voyage-3

voyage-3 is Voyage AI's general-purpose retrieval embedding model, designed explicitly for RAG pipelines. Voyage publishes domain variants (voyage-code, voyage-law, voyage-finance) plus voyage-3-lite for high-throughput workloads. Anthropic recommends Voyage for Claude-based RAG, which makes it a natural pairing for enterprise agents.

Model specs

Vendor: Voyage AI
Family: Voyage 3
Released: 2024-09
Context window: 32,000 tokens
Modalities: text
Input price: $0.06/M tok
Output price: n/a
Pricing as of: 2026-04-20

Strengths

  • Leads general MTEB retrieval among similarly priced models
  • Domain-specialised embeddings (code, law, finance) outperform generalist models in-domain
  • 32k-token input — embed long chunks without splitting
  • Recommended by Anthropic for Claude-based RAG pipelines
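Because voyage-3 accepts up to 32k tokens per input, many long documents need no splitting at all. A minimal sketch of a guard that chunks only when an approximate token count exceeds the window; the 4-characters-per-token ratio is a rough English-text heuristic, not Voyage's actual tokenizer:

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Voyage's real tokenizer may count differently.
    return max(1, len(text) // 4)

def chunk_if_needed(text: str, max_tokens: int = 32_000) -> list:
    """Return [text] unchanged if it fits the context window;
    otherwise split on paragraph boundaries into fitting pieces."""
    if approx_tokens(text) <= max_tokens:
        return [text]
    chunks, current, size = [], [], 0
    for para in text.split("\n\n"):
        t = approx_tokens(para)
        if current and size + t > max_tokens:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += t
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

In practice you would verify chunk sizes against the tokenizer the API actually uses before embedding.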

Limitations

  • Closed API-only — no open weights
  • Rate limits can bite at very high ingest rates — batch carefully
  • Ecosystem smaller than OpenAI or Cohere for framework integrations

Use cases

  • Production RAG over enterprise knowledge bases
  • Code retrieval with voyage-code-2 / voyage-code-3
  • Legal and financial document search with domain variants
  • Agent memory stores paired with Claude or GPT
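All of the retrieval use cases above reduce to the same core operation: nearest-neighbour search over embedding vectors. A minimal cosine-similarity retriever, shown here with toy vectors standing in for real voyage-3 API output (the document IDs, vector values, and `top_k` default are illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    """store: list of (doc_id, embedding) pairs, e.g. vectors
    previously returned by an embedding model such as voyage-3."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in store]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

Production stores replace the linear scan with an ANN index, but the ranking logic is the same.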

Benchmarks

Benchmark                      Score   As of
MTEB retrieval (avg NDCG@10)   ≈58     2024-09
BEIR avg                       ≈55     2024-09

Frequently asked questions

What is voyage-3?

voyage-3 is Voyage AI's flagship general-purpose text embedding model, optimised for retrieval and RAG. It ships alongside domain-specialised siblings (voyage-code, voyage-law, voyage-finance) and a cheaper voyage-3-lite tier.

Why does Anthropic recommend Voyage for Claude RAG?

Anthropic published retrieval best practices that endorse Voyage models for Claude-based RAG. Voyage's domain variants and long 32k context pair well with Claude's long-context reasoning, letting teams index big chunks and hand them to Claude at query time.

How much does voyage-3 cost?

As of April 2026, voyage-3 is priced at roughly USD 0.06 per million input tokens on the Voyage API, with voyage-3-lite offering a further discount for high-throughput workloads.
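At that rate, embedding cost scales linearly with token count. A quick back-of-the-envelope helper; the $0.06/M figure is the April 2026 price quoted above and may change:

```python
VOYAGE_3_PRICE_PER_M = 0.06  # USD per million input tokens, as of 2026-04

def embedding_cost_usd(total_tokens, price_per_m=VOYAGE_3_PRICE_PER_M):
    # Linear pricing: tokens / 1M * per-million rate.
    return total_tokens / 1_000_000 * price_per_m

# Embedding a 100M-token corpus at this rate costs about $6.
```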

When should I pick a domain-specialised Voyage model?

If most of your corpus is code, law, or finance, the matching domain variant typically outperforms general-purpose voyage-3 by several NDCG points on in-domain queries. For mixed corpora, voyage-3 is the safer default.
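That choice can be encoded as a simple routing rule. The sketch below uses the variant names mentioned on this page, with voyage-3 as the mixed-corpus fallback; the exact model IDs accepted by the Voyage API may differ, so treat these strings as assumptions:

```python
DOMAIN_MODELS = {
    "code": "voyage-code-3",
    "law": "voyage-law",        # names per this page; confirm exact IDs in the API docs
    "finance": "voyage-finance",
}

def pick_model(corpus_domain):
    # Fall back to the generalist model for mixed or unknown corpora.
    return DOMAIN_MODELS.get(corpus_domain, "voyage-3")
```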

Sources

  1. Voyage AI — voyage-3 announcement — accessed 2026-04-20
  2. Voyage AI — Embeddings docs — accessed 2026-04-20