OpenAI SDK (Python)
The OpenAI Python SDK is the de facto reference client for hosted LLM APIs. It powers the majority of Python-based LLM apps — directly, via the many OpenAI-compatible providers (Azure, Groq, Together, vLLM, Ollama), and indirectly through frameworks like LangChain that depend on it. It is actively maintained, fully typed, and ships with async support.
Framework facts
- Category: orchestration
- Language: Python
- License: Apache-2.0
- Repository: https://github.com/openai/openai-python
Install

```shell
pip install openai
export OPENAI_API_KEY=sk-...
```

Quickstart

```python
from openai import OpenAI

client = OpenAI()
res = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise the ReAct pattern."}],
)
print(res.choices[0].message.content)
```

Alternatives
- Anthropic Python SDK — native Claude client
- LiteLLM — provider-agnostic wrapper
- LangChain / Mirascope — higher-level abstractions
- Vercel AI SDK — JavaScript equivalent
Frequently asked questions
Can I use this SDK with non-OpenAI providers?
Yes — any provider with an OpenAI-compatible endpoint (vLLM, Groq, Together, Ollama, Azure OpenAI) works by setting `base_url` and `api_key`. That's why so many tools standardise on this SDK.
Chat Completions or Responses API?
For new code on OpenAI models, prefer the Responses API (it supports hosted tools, built-in retrieval, and MCP). For portability across providers, Chat Completions is still the widest-supported surface.
Sources
- OpenAI Python SDK GitHub — accessed 2026-04-20
- OpenAI API docs — accessed 2026-04-20