Agents SDK
This page describes the OpenAI Agents SDK and explains how to integrate it with the Compass API.
What is the Agents SDK?
The OpenAI Agents SDK enables you to build agentic AI apps in a lightweight and easy-to-use package with very few abstractions.
The main features of the SDK include:
- Agent loop: Built-in agent loop that handles calling tools, sending results to the LLM, and looping until the LLM is done.
- Python-first: Use built-in language features to orchestrate and chain agents, rather than needing to learn new abstractions.
- Handoffs: A powerful feature to coordinate and delegate between multiple agents.
- Guardrails: Run input validations and checks in parallel to your agents, breaking early if the checks fail.
- Sessions: Automatic conversation history management across agent runs, eliminating manual state handling.
- Function tools: Turn any Python function into a tool, with automatic schema generation and Pydantic-powered validation.
- Tracing: Built-in tracing that lets you visualize, debug, and monitor your workflows, as well as use the OpenAI suite of evaluation, fine-tuning, and distillation tools.
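The agent loop named above can be sketched conceptually. The following is an illustration only, not the SDK's internal implementation: a stand-in `model` callable plays the role of the LLM, and the loop keeps invoking tools and feeding results back until the model returns a final answer.

```python
def toy_agent_loop(model, tools, user_message):
    """Minimal tool-calling loop: call the model, run any requested tool,
    append the result, and repeat until the model produces a final answer."""
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = model(messages)  # stand-in for a real LLM call
        if reply.get("tool_call") is None:
            return reply["content"]  # model is done; return final answer
        name, args = reply["tool_call"]
        result = tools[name](**args)  # invoke the requested tool
        messages.append({"role": "tool", "content": str(result)})

# A scripted fake model: first it requests a tool, then it answers.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": ("add", {"a": 2, "b": 3}), "content": None}
    return {"tool_call": None, "content": f"The sum is {messages[-1]['content']}"}

tools = {"add": lambda a, b: a + b}
print(toy_agent_loop(fake_model, tools, "What is 2 + 3?"))  # The sum is 5
```

In the real SDK, `Runner.run()` performs this loop for you, including tool-call parsing and message bookkeeping.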
The Agents SDK is supported for all Chat Completions models except the Mistral 7B and Mixtral 8x7B models.
Integration with Compass
Code Setup Instructions
Compass recommends using Python 3.10 or higher for Agents SDK integration.
Below are the required Python libraries along with the commands to install them before running the example code:
pip install openai
pip install openai-agents
Example Code
The Agents SDK provides the OpenAIChatCompletionsModel interface for targeting OpenAI-compatible APIs, as shown in the example below.
from openai import AsyncOpenAI
from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    set_tracing_disabled,
)
BASE_URL = "https://api.core42.ai/v1" # base URL for Compass API
API_KEY = "xxxxxxx" # replace with your API key value
MODEL_NAME = "gpt-4o" # change to available model that can be used for the provided API key
"""This example uses a custom provider for some calls to Runner.run(), and direct calls to OpenAI for
others. Steps:
1. Create a custom OpenAI client.
2. Create a ModelProvider that uses the custom client.
3. Use the ModelProvider in calls to Runner.run(), only when we want to use the custom LLM provider.
Note that in this example, we disable tracing under the assumption that you don't have an API key
from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
or call set_tracing_export_api_key() to set a tracing specific key.
"""
client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
set_tracing_disabled(disabled=True)
class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or MODEL_NAME, openai_client=client
        )
CUSTOM_MODEL_PROVIDER = CustomModelProvider()
agent = Agent(name="Assistant", instructions="You are a helpful assistant")
result = Runner.run_sync(
    agent,
    "Write a haiku about recursion in programming.",
    run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
)
print(result.final_output)
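Another feature listed above, automatic schema generation for function tools, can be illustrated with a stdlib-only sketch. The helper below (`sketch_tool_schema`, a hypothetical name for this illustration) derives a minimal JSON-schema-like description from a Python signature; the SDK's real `function_tool` decorator does this with Pydantic-powered validation.

```python
import inspect

# Rough mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def sketch_tool_schema(fn):
    """Build a minimal JSON-schema-like dict from a function's annotations.
    Illustration only -- the SDK's own schema generation is richer."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": PY_TO_JSON.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def get_weather(city: str, units: str) -> str:
    """Return the current weather for a city."""
    ...

schema = sketch_tool_schema(get_weather)
print(schema["name"])  # get_weather
print(schema["parameters"]["properties"])
```

With the real SDK, decorating `get_weather` with `@function_tool` and passing it to an `Agent(tools=[...])` makes it callable by the model inside the agent loop.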