MCP Pinecone Server
The MCP Pinecone Server connects Model Context Protocol clients to a Pinecone vector database. With an API key scoped to a specific index, Claude Desktop or Cursor can search for top-k nearest neighbours, upsert embeddings, and pull document metadata — turning any private corpus into an LLM-searchable knowledge base over stdio.
MCP facts
- Kind: server
- Ecosystem: anthropic-mcp
- Language: TypeScript / Node.js
- Transports: stdio
Capabilities
- Tools: search_records — top-k semantic search with optional metadata filter
- Tools: upsert_records for adding documents, delete_records for cleanup
- Tools: describe_index, list_indexes for index introspection
- Works with Pinecone serverless and pod-based indexes
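Over stdio, clients invoke these tools with standard MCP `tools/call` requests. A sketch of such a request for search_records follows; the argument names (`query`, `topK`, `filter`) are illustrative assumptions — the authoritative schema comes from the server's `tools/list` response:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_records",
    "arguments": {
      "query": "refund policy",
      "topK": 5,
      "filter": { "source": "handbook" }
    }
  }
}
```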
Install
npx -y @pinecone-database/mcp
Configuration
{
  "mcpServers": {
    "pinecone": {
      "command": "npx",
      "args": ["-y", "@pinecone-database/mcp"],
      "env": {
        "PINECONE_API_KEY": "pcsk_xxx",
        "PINECONE_INDEX_NAME": "docs-prod"
      }
    }
  }
}
Frequently asked questions
Do I need to run my own embedding pipeline?
Yes — the MCP server is for query and upsert. Use a batch ETL (e.g. a Python script or a Vercel/Cloudflare Worker) to chunk, embed, and push documents. The agent then searches the index on demand.
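A minimal sketch of that offline ETL step, in TypeScript. The names `chunkText`, `toRecords`, and `PineconeRecord` are our own illustrative helpers, not part of the MCP server; the embedding call and SDK upsert are shown only as comments because they need credentials:

```typescript
// Sketch of the batch ETL that feeds the index the MCP server searches.

interface PineconeRecord {
  id: string;
  values: number[]; // embedding vector; dimension must match the index
  metadata: { text: string };
}

// Naive fixed-size chunker with character overlap between windows.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}

// Shape records for upsert. `values` would come from your embedding
// API (e.g. OpenAI's embeddings endpoint), one call per chunk or batched.
function toRecords(docId: string, text: string): PineconeRecord[] {
  return chunkText(text).map((chunk, i) => ({
    id: `${docId}#${i}`,
    values: [], // fill with the chunk's embedding before upserting
    metadata: { text: chunk },
  }));
}

// With @pinecone-database/pinecone you would then roughly do:
//   const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
//   await pc.index("docs-prod").upsert(records);
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.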
Which embedding model should I pair it with?
Pinecone is model-agnostic. Common pairings are OpenAI text-embedding-3-large, Cohere embed-v3, and Voyage voyage-3 — pick one, match its dimension to the index, and keep it consistent for upserts and queries.
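The dimension-matching rule can be enforced with a small guard in the ETL step — `checkDimension` below is our own illustrative helper, not part of any SDK:

```typescript
// Verify embedding dimensions match the index (e.g. 3072 for OpenAI's
// text-embedding-3-large) before sending an upsert; Pinecone rejects
// mismatched vectors at write time anyway, but failing early is cheaper.
function checkDimension(vectors: number[][], indexDim: number): void {
  vectors.forEach((v, i) => {
    if (v.length !== indexDim) {
      throw new Error(
        `vector ${i} has dimension ${v.length}, index expects ${indexDim}`
      );
    }
  });
}
```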
Can I scope the API key to a single index?
Yes — Pinecone API keys are scoped to a project, so for the tightest control create a dedicated project in Pinecone's console containing only the target index and issue the key there. Avoid org-level keys in agent configs.
Sources
- Pinecone MCP server — accessed 2026-04-20
- Pinecone API reference — accessed 2026-04-20