Microsoft PromptFlow
PromptFlow lets engineers express LLM apps as DAGs of Python tools, LLM nodes, and prompt templates — with built-in bulk testing, evaluation, and tracing. It's Microsoft's recommended way to author and ship LLM flows onto Azure AI, with local SDK/CLI and a VS Code visual authoring experience.
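A "Python tool" in this sense is an ordinary function registered as a flow node. A minimal sketch of the idea, with the decorator stubbed out so the example runs standalone (the real one is imported from the promptflow package):

```python
# Sketch only: PromptFlow tools are plain Python functions marked as flow
# nodes. The decorator below is a stand-in for PromptFlow's @tool decorator
# (an assumption made so this snippet has no dependencies).
def tool(func):
    func.__is_tool__ = True  # tag the function as a flow node
    return func

@tool
def word_count(question: str) -> int:
    """A Python tool node: takes a flow input, returns a value to the DAG."""
    return len(question.split())

print(word_count("what is RAG?"))  # → 3
```

In a real flow, nodes like this are referenced from the flow definition and wired to prompt templates and LLM nodes.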
Framework facts
- Category: orchestration
- Language: Python
- License: MIT
- Repository: https://github.com/microsoft/promptflow
Install
pip install promptflow promptflow-tools
Quickstart
# create a flow
pf flow init --flow ./my_flow --type standard
# run it locally
pf flow test --flow ./my_flow --inputs question='what is RAG?'
# bulk eval with a CSV
pf run create --flow ./my_flow --data ./questions.jsonl --stream
Alternatives
- LangChain + LangSmith — more ecosystem, less Azure
- Dify — no-code competitor
- Semantic Kernel — Microsoft's sibling framework, more SDK-like
- LangFlow — visual-first alternative
Frequently asked questions
Is PromptFlow tied to Azure?
The SDK is open-source and runs fully locally. Azure AI Foundry adds managed deployment, managed evals, content safety, and lineage — but you can prototype and serve flows entirely on-prem.
PromptFlow vs Semantic Kernel?
PromptFlow is flow-graph oriented (DAG of tools and prompts, great for RAG + eval) while Semantic Kernel is SDK-style (plan and invoke functions). They interop, and many teams use both.
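The flow-graph orientation can be sketched in plain Python: each node is a function, edges are the upstream outputs a node consumes, and a runner executes nodes in dependency order. All names here (retrieve, build_prompt, fake_llm) are illustrative, not the PromptFlow API:

```python
# Minimal flow-graph sketch (illustrative names, not the PromptFlow API).

def retrieve(question: str) -> list[str]:
    # Stand-in retriever node: a real flow would query a vector index here.
    corpus = ["RAG = retrieval-augmented generation."]
    return [doc for doc in corpus if "RAG" in question]

def build_prompt(question: str, context: list[str]) -> str:
    # Prompt-template node: fills context and question into a template.
    return f"Context: {' '.join(context)}\nQ: {question}\nA:"

def fake_llm(prompt: str) -> str:
    # LLM node stub: reports the prompt size instead of calling a model.
    return f"(answer based on {len(prompt)}-char prompt)"

def run_flow(question: str) -> str:
    # Runner: walk the DAG in dependency order, threading outputs along edges.
    context = retrieve(question)
    prompt = build_prompt(question, context)
    return fake_llm(prompt)

print(run_flow("what is RAG?"))
```

An SDK-style framework inverts this: instead of a declared graph, application code plans and invokes functions imperatively, which is the Semantic Kernel shape.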
Sources
- PromptFlow — docs — accessed 2026-04-20
- Azure AI Foundry — PromptFlow — accessed 2026-04-20