openai/openai-agents-python
# OpenAI Agents SDK
The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It is provider-agnostic, supporting the OpenAI Responses and Chat Completions APIs, as well as 100+ other LLMs.
> [!NOTE]
> Looking for the JavaScript/TypeScript version? Check out Agents SDK JS/TS.
**Core concepts:**
- Agents: LLMs configured with instructions, tools, guardrails, and handoffs
- Handoffs: A specialized tool call used by the Agents SDK for transferring control between agents
- Guardrails: Configurable safety checks for input and output validation
- Sessions: Automatic conversation history management across agent runs
- Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
Explore the examples directory to see the SDK in action, and read our documentation for more details.
## Get started
To get started, set up your Python environment (Python 3.9 or newer is required), then install the OpenAI Agents SDK package.
### venv
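For example, using the standard library's `venv` module (the activation command differs by platform):

```shell
python -m venv env
source env/bin/activate   # on Windows: env\Scripts\activate
pip install openai-agents
```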
For voice support, install with the optional voice group: `pip install 'openai-agents[voice]'`.
For Redis session support, install with the optional redis group: `pip install 'openai-agents[redis]'`.
### uv
If you're familiar with uv, using the tool is even simpler:
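A sketch of the equivalent uv workflow:

```shell
uv init
uv add openai-agents
```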
For voice support, install with the optional voice group: `uv add 'openai-agents[voice]'`.
For Redis session support, install with the optional redis group: `uv add 'openai-agents[redis]'`.
## Hello world example
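A minimal example, mirroring the SDK's documented quick-start API: `Runner.run_sync` drives the agent loop synchronously and returns a result whose `final_output` holds the agent's answer.

```python
from agents import Agent, Runner

# An agent is an LLM configured with instructions (and optionally tools,
# guardrails, and handoffs).
agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
```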
(If running this, ensure you set the `OPENAI_API_KEY` environment variable.)
(For Jupyter notebook users, see `hello_world_jupyter.ipynb`.)
## Handoffs example
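A sketch of a triage agent that hands off to language-specific agents: the `handoffs` parameter lists the agents the LLM may transfer control to (the agent names and instructions here are illustrative).

```python
import asyncio

from agents import Agent, Runner

spanish_agent = Agent(name="Spanish agent", instructions="You only speak Spanish.")
english_agent = Agent(name="English agent", instructions="You only speak English.")

# The triage agent can hand off to either specialist based on the input language.
triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)

async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)

asyncio.run(main())
```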
## Functions example
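A sketch of a function tool: decorating a plain Python function with `function_tool` exposes it to the agent, with the tool schema derived from the function's signature and docstring (the weather function is a stand-in).

```python
import asyncio

from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Weather agent",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)

async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)

asyncio.run(main())
```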
## The agent loop
When you call `Runner.run()`, we run a loop until we get a final output.
1. We call the LLM, using the model and settings on the agent, and the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output (see below for more on this), we return it and end the loop.
4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
5. We process the tool calls (if any), append the tool response messages, and go back to step 1.
There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
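The steps above can be sketched in plain Python. This is a hypothetical illustration of the control flow, not the SDK's actual implementation: `call_llm` and the response dict shape are stand-ins.

```python
def run_loop(agent, history, call_llm, max_turns=10):
    """Illustrative agent loop: call the LLM, handle handoffs and tool
    calls, and stop once a final output is produced."""
    for _ in range(max_turns):
        response = call_llm(agent, history)           # steps 1-2
        if response.get("final_output") is not None:  # step 3
            return response["final_output"]
        if response.get("handoff") is not None:       # step 4
            agent = response["handoff"]
            continue
        for call in response.get("tool_calls", []):   # step 5
            result = call["tool"](**call["args"])
            history.append({"role": "tool", "content": result})
    raise RuntimeError("Exceeded max_turns without reaching a final output")
```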
### Final output
Final output is the last thing the agent produces in the loop.
- If you set an `output_type` on the agent, the final output is when the LLM returns something of that type. We use structured outputs for this.
- If there's no `output_type` (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered the final output.
As a result, the mental model for the agent loop is:
- If the current agent has an `output_type`, the loop runs until the agent produces structured output matching that type.
- If the current agent does not have an `output_type`, the loop runs until the current agent produces a message without any tool calls/handoffs.
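For illustration, a sketch of the first case, assuming a Pydantic model as the `output_type` (the model and instructions here are hypothetical):

```python
from pydantic import BaseModel

from agents import Agent

class CalendarEvent(BaseModel):
    name: str
    date: str

# The loop runs until the LLM returns structured output matching CalendarEvent.
agent = Agent(
    name="Extractor",
    instructions="Extract the calendar event mentioned in the user's message.",
    output_type=CalendarEvent,
)
```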
## Common agent patterns
The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows, including deterministic flows, iterative loops, and more. See examples in `examples/agent_patterns`.
## Tracing
The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including Logfire, AgentOps, Braintrust, Scorecard, and Keywords AI. For more details about how to customize or disable tracing, see Tracing, which also includes a larger list of external tracing processors.
## Long running agents & human-in-the-loop
You can use the Agents SDK Temporal integration to run durable, long-running workflows, including human-in-the-loop tasks. Watch a demo of Temporal and the Agents SDK completing long-running tasks in this video, and view the docs here.
## Sessions
The Agents SDK provides built-in session memory that automatically maintains conversation history across multiple agent runs, eliminating the need to manually handle `.to_input_list()` between turns.
### Quick start
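A sketch using the built-in `SQLiteSession` (the session ID and prompts are illustrative):

```python
from agents import Agent, Runner, SQLiteSession

agent = Agent(name="Assistant", instructions="Reply very concisely.")

# History is stored under this session ID.
session = SQLiteSession("conversation_123")

# First turn
result = Runner.run_sync(
    agent, "What city is the Golden Gate Bridge in?", session=session
)
print(result.final_output)

# Second turn: the agent recalls the previous turn automatically,
# so "it" resolves without passing the history by hand.
result = Runner.run_sync(agent, "What state is it in?", session=session)
print(result.final_output)
```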
### Session options
- No memory (default): no session memory when the `session` parameter is omitted
- `session: Session = DatabaseSession(...)`: use a `Session` instance to manage conversation history
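For example (file name and session IDs are illustrative; `SQLiteSession` uses an in-memory database unless a file path is given):

```python
from agents import Agent, Runner, SQLiteSession

agent = Agent(name="Assistant")

# In-memory database: history is lost when the process exits.
temp_session = SQLiteSession("user_123")

# Persistent, file-backed database.
persistent_session = SQLiteSession("user_123", "conversations.db")

# Different session IDs maintain separate conversation histories.
result = Runner.run_sync(agent, "Hello", session=persistent_session)
```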
### Custom session implementations
You can implement your own session memory by creating a class that follows the `Session` protocol:
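A minimal in-memory sketch: the method names follow the `Session` protocol described in the docs (`get_items`, `add_items`, `pop_item`, `clear_session`), but the exact signatures here are assumptions.

```python
from __future__ import annotations

class MyCustomSession:
    """In-memory session store; a sketch of the Session protocol."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self._items: list[dict] = []

    async def get_items(self, limit: int | None = None) -> list[dict]:
        # Return the most recent `limit` items (all items if limit is None).
        return self._items[-limit:] if limit else list(self._items)

    async def add_items(self, items: list[dict]) -> None:
        self._items.extend(items)

    async def pop_item(self) -> dict | None:
        # Remove and return the most recent item, if any.
        return self._items.pop() if self._items else None

    async def clear_session(self) -> None:
        self._items.clear()
```

An instance can then be passed anywhere the SDK accepts a `session` argument, e.g. `Runner.run(agent, "Hello", session=MyCustomSession("my_session"))`.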
## Development (only needed if you need to edit the SDK/examples)
- Ensure you have `uv` installed.
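For example, to verify the installation:

```shell
uv --version
```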
- Install dependencies
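From a checkout of the repository, the Makefile's sync target installs everything (target name taken from the repo's Makefile; verify against your checkout):

```shell
make sync
```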
- (After making changes) lint/test
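A single target runs the whole suite:

```shell
make check   # runs tests, linter, and typechecker
```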
Or to run them individually:
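The individual targets (names from the repo's Makefile):

```shell
make tests          # run tests
make mypy           # run the typechecker
make lint           # run the linter
make format-check   # run the style checker
```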
## Acknowledgements
We’d like to acknowledge the excellent work of the open-source community, especially:
- Pydantic (data validation) and PydanticAI (advanced agent framework)
- LiteLLM (unified interface for 100+ LLMs)
- MkDocs
- Griffe
- uv and ruff
We’re committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.