Mistral Embed
Mistral Embed is Mistral AI's hosted text embedding model. It produces 1024-dimensional vectors tuned for retrieval and classification, with particularly strong results in English and French. Teams already using Mistral's LLMs on la Plateforme get a natural single-provider stack for RAG.
Model specs
| Spec | Value |
|---|---|
| Vendor | Mistral AI |
| Family | Mistral Embed |
| Released | 2024-02 |
| Context window | 8,192 tokens |
| Modalities | Text |
| Input price | $0.10 / M tokens |
| Output price | n/a |
| Pricing as of | 2026-04-20 |
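As a sketch of the simple API shape noted below, the following builds the JSON body for an embeddings call. The endpoint URL and the `model`/`input` field names are assumptions based on Mistral's public docs; verify them against the current API reference before use.

```python
import json

# Assumed endpoint path for la Plateforme's embeddings API.
API_URL = "https://api.mistral.ai/v1/embeddings"

def build_embed_request(texts: list[str], model: str = "mistral-embed") -> dict:
    """Build the JSON body for an embeddings call.

    Each text should fit within the model's 8,192-token context window.
    """
    return {"model": model, "input": texts}

body = build_embed_request(
    ["What is the refund policy?", "Politique de remboursement"]
)
payload = json.dumps(body)  # send with an Authorization: Bearer <key> header
```

The response would contain one 1024-dimensional vector per input text.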
Strengths
- Tight integration with Mistral's LLM catalogue
- EU-hosted option for data-residency requirements
- Strong French-language quality
- Simple pricing and API shape
Limitations
- Smaller ecosystem than OpenAI embeddings
- Fewer domain-specialised variants than Voyage
- Multilingual coverage is narrower than Cohere Embed v3 multilingual
Use cases
- RAG on la Plateforme paired with Mistral Large or Codestral
- European enterprise search with EU data residency
- French-language semantic search
- Classification and intent routing
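The intent-routing use case above can be sketched as nearest-centroid classification over embedding vectors: embed one centroid per intent, then route each query to the most similar centroid. The intent names and the tiny 3-dimensional vectors below are toy stand-ins for real 1024-dimensional Mistral Embed outputs.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def route_intent(query_vec: list[float], centroids: dict[str, list[float]]) -> str:
    """Return the intent whose centroid is most similar to the query embedding."""
    return max(centroids, key=lambda name: cosine(query_vec, centroids[name]))

# Toy centroids; in practice, average the embeddings of a few example
# utterances per intent.
centroids = {
    "billing": [1.0, 0.1, 0.0],
    "support": [0.0, 1.0, 0.2],
}
print(route_intent([0.9, 0.2, 0.0], centroids))  # billing
```

The same scoring function works for zero-shot classification: label descriptions stand in for centroids.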
Benchmarks
| Benchmark | Score | As of |
|---|---|---|
| MTEB (English) | ≈62 | 2024 |
| MTEB (French) | ≈67 | 2024 |
Frequently asked questions
What is Mistral Embed?
Mistral Embed is the hosted text embedding model from Mistral AI. It produces 1024-dimensional vectors and is served on la Plateforme alongside Mistral's generative LLMs, giving teams a single-provider stack for RAG.
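The retrieval half of such a RAG stack reduces to ranking stored document embeddings by similarity to the query embedding. A minimal sketch, with toy 2-dimensional vectors and hypothetical document names standing in for real 1024-dimensional Mistral Embed outputs:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], doc_vecs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
    return ranked[:k]

docs = {  # toy index; a real one would hold embeddings from the API
    "refund-policy.md": [0.9, 0.1],
    "shipping-faq.md": [0.1, 0.9],
    "returns-howto.md": [0.8, 0.3],
}
print(top_k([1.0, 0.0], docs))  # ['refund-policy.md', 'returns-howto.md']
```

The retrieved passages would then be stuffed into the prompt of a Mistral generative model for the answer step.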
Is Mistral Embed available in the EU?
Yes. La Plateforme is based in France, and the service is a natural fit for teams that need European data residency for their embedding pipeline.
How much does Mistral Embed cost?
As of April 2026, Mistral Embed is priced at roughly USD 0.10 per million input tokens on la Plateforme, similar to Cohere Embed v3 and priced between OpenAI text-embedding-3-small and text-embedding-3-large.
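At that list price, cost estimation is simple arithmetic over input token counts. The 50M-token corpus below is an illustrative figure, not from the source:

```python
# List price as of April 2026 (USD per million input tokens).
PRICE_PER_M_INPUT_TOKENS = 0.10

def embed_cost_usd(total_tokens: int) -> float:
    """Estimated cost of embedding the given number of input tokens."""
    return total_tokens / 1_000_000 * PRICE_PER_M_INPUT_TOKENS

# One-time embedding of a hypothetical 50M-token corpus:
print(f"${embed_cost_usd(50_000_000):.2f}")  # $5.00
```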
When should I choose Mistral Embed over OpenAI?
Choose Mistral Embed when you already host your LLMs with Mistral, need EU hosting for data residency, or serve a French-language corpus. Choose OpenAI when you want maximum English MTEB quality or already use the OpenAI API.
Sources
- Mistral — Embeddings docs — accessed 2026-04-20
- Mistral — Pricing — accessed 2026-04-20