MCP MLflow Server
The MCP MLflow Server wraps the MLflow REST API for tracking and model registry. MCP clients can search experiments, read run artifacts and metrics, register a model version, and transition between Staging and Production. It's a natural companion to a CI pipeline that promotes the best model of the week.
MCP facts
- Kind: server
- Ecosystem: anthropic-mcp
- Language: Python (mlflow)
- Transports: stdio
Capabilities
- Tools: search_runs, get_run, register_model, transition_model_version, list_registered_models
- Resources: mlflow://run/{id}, mlflow://model/{name}/version/{version}
- Auth: MLflow tracking URI + optional Databricks PAT or basic auth
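The two resource templates above follow a predictable scheme, so a client can dispatch on them with a small parser. This is an illustrative helper, not part of the server; the `parse_mlflow_uri` name is an assumption:

```python
from urllib.parse import urlparse

def parse_mlflow_uri(uri: str) -> dict:
    """Split an mlflow:// resource URI into its components.

    mlflow://run/{id}                       -> {"kind": "run", "id": ...}
    mlflow://model/{name}/version/{version} -> {"kind": "model", "name": ..., "version": ...}
    """
    parsed = urlparse(uri)
    if parsed.scheme != "mlflow":
        raise ValueError(f"not an mlflow resource URI: {uri}")
    # urlparse puts the first path segment in netloc for custom schemes
    parts = [parsed.netloc] + [p for p in parsed.path.split("/") if p]
    if parts[0] == "run" and len(parts) == 2:
        return {"kind": "run", "id": parts[1]}
    if parts[0] == "model" and len(parts) == 4 and parts[2] == "version":
        return {"kind": "model", "name": parts[1], "version": parts[3]}
    raise ValueError(f"unrecognized mlflow resource URI: {uri}")
```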
Install
pipx install mcp-server-mlflow

Configuration
{
  "mcpServers": {
    "mlflow": {
      "command": "mcp-server-mlflow",
      "env": {
        "MLFLOW_TRACKING_URI": "https://mlflow.example.com",
        "MLFLOW_TRACKING_TOKEN": "${MLFLOW_TOKEN}"
      }
    }
  }
}

Frequently asked questions
Can it work with Databricks-managed MLflow?
Yes — set MLFLOW_TRACKING_URI to databricks and pass a Databricks personal access token.
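A Databricks-flavored configuration might look like the following, assuming the server forwards its `env` block to the MLflow client; `DATABRICKS_HOST` and `DATABRICKS_TOKEN` are the standard MLflow/Databricks environment variables, and the workspace URL is a placeholder:

```json
{
  "mcpServers": {
    "mlflow": {
      "command": "mcp-server-mlflow",
      "env": {
        "MLFLOW_TRACKING_URI": "databricks",
        "DATABRICKS_HOST": "https://your-workspace.cloud.databricks.com",
        "DATABRICKS_TOKEN": "${DATABRICKS_PAT}"
      }
    }
  }
}
```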
Is stage transition safe from an LLM?
Always gate promotions behind human approval, CI checks, or an evaluation tool. MLflow transitions are permissioned, but an LLM alone should not promote to Production.
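One way to implement that gate is a check the CI pipeline runs before it ever calls the server's transition_model_version tool. This is a minimal sketch; the `may_promote` helper, the `val_auc` metric name, and the 0.85 threshold are all illustrative assumptions:

```python
from typing import Optional

def may_promote(metrics: dict, approved_by: Optional[str], min_auc: float = 0.85) -> bool:
    """Return True only when both the evaluation check and a human sign-off pass.

    metrics     -- run metrics as logged in MLflow, e.g. {"val_auc": 0.91}
    approved_by -- reviewer identity recorded by the CI pipeline, or None
    """
    passes_eval = metrics.get("val_auc", 0.0) >= min_auc
    has_approval = approved_by is not None
    return passes_eval and has_approval
```

Only when this returns True would the pipeline invoke the transition; the LLM can propose a promotion, but never execute one unilaterally.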
Does it expose served endpoints?
Optionally — an invoke_model tool can send a payload to a deployed MLflow serving endpoint. Treat this like any other inference tool.
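If invoke_model is enabled, the payload it sends would follow MLflow serving's scoring protocol. A sketch of building the `dataframe_split` input format, assuming a deployed /invocations route; the endpoint URL and column names are placeholders:

```python
import json

def build_invocation_payload(columns, rows):
    """Build a dataframe_split payload for MLflow serving's /invocations route."""
    return {
        "dataframe_split": {
            "columns": list(columns),
            "data": [list(r) for r in rows],
        }
    }

payload = build_invocation_payload(["age", "tenure"], [[42, 7]])
body = json.dumps(payload)
# POST `body` to https://<serving-host>/invocations with Content-Type: application/json
```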
Sources
- MLflow REST API — accessed 2026-04-20
- Model Context Protocol — accessed 2026-04-20
- MCP servers repo — accessed 2026-04-20