
Agents SDK

Overview

This document outlines the OpenAI Agents SDK and provides instructions for integrating it with the Compass API.

What is Agents SDK?

The OpenAI Agents SDK enables you to build agentic AI apps in a lightweight, easy-to-use package with very few abstractions.

It is a production-ready upgrade of Swarm, our previous experiment in agent frameworks.

The main features of the SDK include:

  • Agent loop: Built-in agent loop that handles calling tools, sending results to the LLM, and looping until the LLM is done.

  • Python-first: Use built-in language features to orchestrate and chain agents, rather than needing to learn new abstractions.

  • Handoffs: A powerful feature to coordinate and delegate between multiple agents.

  • Guardrails: Run input validations and checks in parallel to your agents, breaking early if the checks fail.

  • Sessions: Automatic conversation history management across agent runs, eliminating manual state handling.

  • Function tools: Turn any Python function into a tool, with automatic schema generation and Pydantic-powered validation.

  • Tracing: Built-in tracing that lets you visualize, debug and monitor your workflows, as well as use the OpenAI suite of evaluation, fine-tuning and distillation tools.
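To make the first bullet concrete, here is a schematic sketch of an agent loop in plain Python. This is illustrative only, not the SDK's internals: `agent_loop`, `stub_model`, and the message shapes are hypothetical, and the real loop also handles streaming, handoffs, and guardrails.

```python
# Schematic sketch of an agent loop (illustrative only, not the SDK's real implementation).
def agent_loop(model, tools, user_input, max_turns=10):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = model(messages)                      # hypothetical model callable
        tool_call = reply.get("tool_call")
        if tool_call is None:
            return reply["content"]                  # no tool requested: the LLM is done
        name, args = tool_call
        result = tools[name](**args)                 # run the requested tool
        messages.append({"role": "tool", "name": name, "content": str(result)})
    raise RuntimeError("agent loop exceeded max_turns")

# Stub model: requests one tool call, then answers with the tool result.
def stub_model(messages):
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool_call": ("add", {"a": 2, "b": 3})}
    return {"content": f"The sum is {tool_msgs[-1]['content']}"}

answer = agent_loop(stub_model, {"add": lambda a, b: a + b}, "What is 2 + 3?")
print(answer)  # The sum is 5
```

The SDK runs this loop for you; you only supply the agent definition, tools, and input.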

Integration with Compass

The Agents SDK provides the OpenAIChatCompletionsModel interface for targeting any OpenAI-compatible API, as shown in the example below.

from openai import AsyncOpenAI

from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    set_tracing_disabled,
)

BASE_URL = "https://api.core42.ai"  # base URL for the Compass API
API_KEY = "xxxxxxx"  # replace with your API key value
MODEL_NAME = "gpt-4o"  # change to a model available for the provided API key

"""This example uses a custom provider for some calls to Runner.run(), and direct calls to OpenAI for
others. Steps:
1. Create a custom OpenAI client.
2. Create a ModelProvider that uses the custom client.
3. Use the ModelProvider in calls to Runner.run(), only when we want to use the custom LLM provider.

Note that in this example, we disable tracing under the assumption that you don't have an API key
from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
or call set_tracing_export_api_key() to set a tracing-specific key.
"""
client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
set_tracing_disabled(disabled=True)

class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or MODEL_NAME, openai_client=client
        )

CUSTOM_MODEL_PROVIDER = CustomModelProvider()

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(
    agent,
    "Write a haiku about recursion in programming.",
    run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
)
print(result.final_output)
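The Sessions feature listed earlier can be pictured the same way: history is accumulated across runs and replayed to the model each time. Below is a minimal pure-Python sketch of that idea; `ToySession` and `echo_runner` are hypothetical names for illustration, not the SDK's actual Session API.

```python
# Minimal sketch of session-style history management (illustrative only;
# the SDK's actual Session classes persist and manage history for you).
class ToySession:
    def __init__(self):
        self.history = []

    def run(self, runner, user_input):
        self.history.append({"role": "user", "content": user_input})
        reply = runner(self.history)        # the runner sees the full history
        self.history.append({"role": "assistant", "content": reply})
        return reply

# Stub runner that just reports how many messages it was given.
echo_runner = lambda history: f"seen {len(history)} messages"

session = ToySession()
print(session.run(echo_runner, "hi"))     # seen 1 messages
print(session.run(echo_runner, "again"))  # seen 3 messages
```

With the SDK, the equivalent is passing the same session object to successive Runner calls so each run starts from the accumulated conversation rather than a blank slate.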

Reference