
OpenAI SDK Integration

Route all OpenAI SDK calls through the Curate-Me gateway with two line changes. Works with both the Python and TypeScript/Node.js SDKs.

Before and after

The only changes are base_url and the X-CM-API-Key header. Your existing code, prompts, and model names stay the same.

Python

```python
# Before — direct to OpenAI
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
)

# After — through Curate-Me gateway
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://api.curate-me.ai/v1/openai",  # ← added
    default_headers={"X-CM-API-Key": "cm_sk_YOUR_KEY"},  # ← added
)
```

TypeScript / Node.js

```typescript
// Before — direct to OpenAI
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
});

// After — through Curate-Me gateway
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://api.curate-me.ai/v1/openai', // ← added
  defaultHeaders: { 'X-CM-API-Key': 'cm_sk_YOUR_KEY' }, // ← added
});
```

Full Python example

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://api.curate-me.ai/v1/openai",
    default_headers={
        "X-CM-API-Key": "cm_sk_YOUR_KEY",
        "X-CM-Tags": "project=my-app,env=production",  # optional cost tags
    },
)

# Non-streaming
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is AI governance?"}],
)
print(response.choices[0].message.content)

# Streaming
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain rate limiting."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
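The X-CM-Tags header above is a comma-separated list of key=value pairs. As an illustration, a hypothetical helper (format_cm_tags is not part of any SDK; the format is inferred from the example) can build that string from a dict:

```python
def format_cm_tags(tags: dict) -> str:
    """Join key=value pairs into the comma-separated form used by X-CM-Tags."""
    return ",".join(f"{key}={value}" for key, value in tags.items())

# Matches the literal header value used in the example above:
header = format_cm_tags({"project": "my-app", "env": "production"})
# → "project=my-app,env=production"
```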

Full TypeScript example

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://api.curate-me.ai/v1/openai',
  defaultHeaders: {
    'X-CM-API-Key': 'cm_sk_YOUR_KEY',
    'X-CM-Tags': 'project=my-app,env=production',
  },
});

// Non-streaming
const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'What is AI governance?' }],
});
console.log(response.choices[0].message.content);

// Streaming
const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Explain rate limiting.' }],
  stream: true,
});
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```

Environment variable approach

If your app reads OPENAI_BASE_URL from the environment, you can route through the gateway with zero code changes:

```shell
export OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai
export OPENAI_API_KEY=sk-your-openai-key
```

Then add your gateway key as a default header in your SDK initialization, or store your provider key in the gateway and use your cm_sk_ key directly as the API key.
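As a sketch of these two options, the hypothetical helper below (the CM_API_KEY variable name is an assumption, not a documented convention) derives the client keyword arguments from the environment, attaching the X-CM-API-Key header only when the API key is not already a cm_sk_ gateway key:

```python
def client_kwargs_from_env(env: dict) -> dict:
    """Build OpenAI(...) keyword arguments from environment variables.

    If OPENAI_API_KEY is a cm_sk_ gateway key (provider key stored in the
    gateway), no extra header is needed; otherwise attach X-CM-API-Key
    from the hypothetical CM_API_KEY variable.
    """
    kwargs = {
        "api_key": env["OPENAI_API_KEY"],
        "base_url": env["OPENAI_BASE_URL"],
    }
    if not env["OPENAI_API_KEY"].startswith("cm_sk_"):
        kwargs["default_headers"] = {"X-CM-API-Key": env["CM_API_KEY"]}
    return kwargs

# Usage: client = OpenAI(**client_kwargs_from_env(os.environ))
```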

What you get

Every request through the gateway automatically receives:

| Feature | Description |
| --- | --- |
| Cost tracking | Per-request token and dollar cost recorded in real time |
| Rate limiting | Per-org, per-key request throttling |
| PII scanning | Regex scan for secrets and PII before hitting the provider |
| Model allowlists | Only approved models per your org policy |
| Budget caps | Daily and monthly spend limits enforced |
| Audit trail | Full request metadata logged for compliance |
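When a request trips a rate limit or budget cap, the SDK surfaces the error to your code (the Python SDK raises openai.RateLimitError on HTTP 429; the gateway's exact status codes are an assumption here). One common pattern is a generic backoff wrapper, sketched below with pluggable exception types so it can be exercised without a network call:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying with exponential backoff on the given exceptions."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(base_delay * 2 ** attempt)

# Real usage (assumes throttled requests surface as RateLimitError):
# with_retries(
#     lambda: client.chat.completions.create(model="gpt-4o-mini", messages=msgs),
#     retry_on=(openai.RateLimitError,),
# )
```

Budget-cap rejections are typically not worth retrying, so keep them out of retry_on and handle them separately.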

Working examples

Next steps