OpenAI Agents SDK Integration
Route OpenAI Agents SDK calls through the Curate-Me gateway for cost tracking, rate limiting, PII scanning, and governance. The Agents SDK uses an OpenAI client internally — just set the base_url and register it as the default.
Prerequisites
# Python
pip install openai-agents
# TypeScript
npm install @openai/agents openai
Python Setup
Create a gateway-routed client and register it as the default for all agents:
from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client
client = AsyncOpenAI(
    api_key="sk-your-openai-key",
    base_url="https://api.curate-me.ai/v1/openai",
    default_headers={
        "X-CM-API-Key": "cm_sk_xxx",
        "X-CM-Tags": "project=my-agents,env=production",
    },
)
# All agents now route through the gateway
set_default_openai_client(client)
Defining Agents
Define agents as normal. The gateway is transparent — agents don’t need to know about it:
triage_agent = Agent(
    name="Triage",
    instructions="You route questions to the right specialist. For technical "
    "questions, hand off to the technical agent. For general questions, "
    "answer directly.",
    model="gpt-4o",
)

technical_agent = Agent(
    name="Technical Advisor",
    instructions="You are a technical advisor specializing in AI infrastructure. "
    "Give concise, actionable answers.",
    model="gpt-4o",
)

# Wire up handoffs
triage_agent.handoffs = [technical_agent]
Running Agents
import asyncio
async def main():
    # Single agent
    result = await Runner.run(
        triage_agent,
        input="What is an AI gateway and why do I need one?",
    )
    print(f"Agent: {result.last_agent.name}")
    print(f"Response: {result.final_output}")

    # With handoff -- triage routes to technical agent
    result = await Runner.run(
        triage_agent,
        input="How do I set up rate limiting for my LLM API calls?",
    )
    print(f"Agent: {result.last_agent.name}")
    print(f"Response: {result.final_output}")

asyncio.run(main())
Every LLM call made by every agent is routed through the gateway. The dashboard shows per-agent cost breakdowns when you use X-CM-Tags.
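Because the gateway enforces org-wide rate limits, concurrent agent runs can be rejected with a 429. A minimal, stdlib-only retry wrapper is sketched below; `run_with_backoff` is a hypothetical helper, not part of the Agents SDK, and in practice you would narrow `retryable` to the SDK's rate-limit error (`openai.RateLimitError` when calls go through the OpenAI client):

```python
import asyncio
import random


async def run_with_backoff(coro_fn, *, retries=3, base_delay=1.0,
                           retryable=(Exception,)):
    """Retry an async callable with exponential backoff and jitter.

    coro_fn is a zero-argument callable returning a coroutine, so a
    fresh coroutine is created for each attempt.
    """
    for attempt in range(retries + 1):
        try:
            return await coro_fn()
        except retryable:
            if attempt == retries:
                raise  # out of retries; surface the error
            # delays of ~1s, ~2s, ~4s, ... scaled by base_delay, with jitter
            await asyncio.sleep(base_delay * (2 ** attempt + random.random()))
```

Usage with an agent run might look like `await run_with_backoff(lambda: Runner.run(triage_agent, input="..."), retryable=(openai.RateLimitError,))`.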
TypeScript Setup
import OpenAI from "openai";
import { Agent, run, setDefaultOpenAIClient } from "@openai/agents";
const client = new OpenAI({
  apiKey: "sk-your-openai-key",
  baseURL: "https://api.curate-me.ai/v1/openai",
  defaultHeaders: {
    "X-CM-API-Key": "cm_sk_xxx",
    "X-CM-Tags": "project=my-agents,env=production",
  },
});

setDefaultOpenAIClient(client);

const technicalAgent = new Agent({
  name: "Technical Advisor",
  instructions: "You are a technical advisor. Give concise answers.",
  model: "gpt-4o",
});

const triageAgent = new Agent({
  name: "Triage",
  instructions: "Route technical questions to the technical agent.",
  model: "gpt-4o",
  handoffs: [technicalAgent],
});
const result = await run(
  triageAgent,
  "How do I set up rate limiting for my LLM API calls?",
);
console.log(`Agent: ${result.lastAgent.name}`);
console.log(`Response: ${result.finalOutput}`);
Governance with Multi-Agent Handoffs
When agents hand off to each other, each agent’s LLM calls are individually governed:
- Rate limiting applies per-org across all agents (shared quota)
- Cost tracking records each call separately with the agent name in metadata
- PII scanning checks every message at every handoff boundary
- Budget enforcement applies the same daily/monthly budget across the entire org
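As the examples above show, the X-CM-Tags header is a comma-separated list of key=value pairs. A tiny helper (hypothetical, not part of either SDK) keeps per-agent tag strings consistent and catches malformed values early:

```python
def cm_tags(**tags: str) -> str:
    """Build an X-CM-Tags header value from keyword arguments.

    Keys and values must not contain ',' or '=' because those
    characters delimit the pairs in the header.
    """
    for key, value in tags.items():
        if "," in value or "=" in value:
            raise ValueError(f"invalid tag value for {key!r}: {value!r}")
    return ",".join(f"{key}={value}" for key, value in tags.items())
```

It can then be used in the client's headers, e.g. `"X-CM-Tags": cm_tags(agent="triage", project="support-bot")`.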
Use X-CM-Tags to track costs per agent role:
# Per-agent cost tags
client = AsyncOpenAI(
    api_key="sk-your-key",
    base_url="https://api.curate-me.ai/v1/openai",
    default_headers={
        "X-CM-API-Key": "cm_sk_xxx",
        "X-CM-Tags": "agent=triage,project=support-bot",
    },
)
Environment Variables
CM_GATEWAY_URL=https://api.curate-me.ai
CM_API_KEY=cm_sk_xxx
PROVIDER_KEY=sk-your-openai-key
import os

from openai import AsyncOpenAI
from agents import set_default_openai_client

client = AsyncOpenAI(
    api_key=os.environ["PROVIDER_KEY"],
    base_url=f"{os.environ['CM_GATEWAY_URL']}/v1/openai",
    default_headers={
        "X-CM-API-Key": os.environ["CM_API_KEY"],
    },
)
set_default_openai_client(client)
Next Steps
- Gateway Quickstart — full setup walkthrough
- Cost Attribution — track costs by agent, project, team
- Orchestration Patterns — DAG-based multi-agent workflows
- Framework Integrations — other SDK integrations