# OpenAI SDK Integration
Route all OpenAI SDK calls through the Curate-Me gateway with a two-line change. Works with both the Python and TypeScript/Node.js SDKs.
## Before and after
The only changes are `base_url` and the `X-CM-API-Key` header. Your existing code, prompts, and model names stay the same.
### Python

```python
# Before — direct to OpenAI
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
)
```

```python
# After — through Curate-Me gateway
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://api.curate-me.ai/v1/openai",       # ← added
    default_headers={"X-CM-API-Key": "cm_sk_YOUR_KEY"},  # ← added
)
```

### TypeScript / Node.js
```typescript
// Before — direct to OpenAI
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
});
```

```typescript
// After — through Curate-Me gateway
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://api.curate-me.ai/v1/openai',        // ← added
  defaultHeaders: { 'X-CM-API-Key': 'cm_sk_YOUR_KEY' }, // ← added
});
```

## Full Python example
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://api.curate-me.ai/v1/openai",
    default_headers={
        "X-CM-API-Key": "cm_sk_YOUR_KEY",
        "X-CM-Tags": "project=my-app,env=production",  # optional cost tags
    },
)

# Non-streaming
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is AI governance?"}],
)
print(response.choices[0].message.content)

# Streaming
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain rate limiting."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

## Full TypeScript example
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-openai-key',
  baseURL: 'https://api.curate-me.ai/v1/openai',
  defaultHeaders: {
    'X-CM-API-Key': 'cm_sk_YOUR_KEY',
    'X-CM-Tags': 'project=my-app,env=production',
  },
});

// Non-streaming
const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'What is AI governance?' }],
});
console.log(response.choices[0].message.content);

// Streaming
const stream = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Explain rate limiting.' }],
  stream: true,
});
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```

## Environment variable approach
If your app reads `OPENAI_BASE_URL` from the environment, you can route through the gateway with zero code changes:

```bash
export OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai
export OPENAI_API_KEY=sk-your-openai-key
```

Then add your gateway key as a default header in your SDK initialization, or store your provider key in the gateway and use your `cm_sk_` key directly as the API key.
## What you get
Every request through the gateway automatically receives:
| Feature | Description |
|---|---|
| Cost tracking | Per-request token and dollar cost recorded in real time |
| Rate limiting | Per-org, per-key request throttling |
| PII scanning | Regex scan for secrets and PII before hitting the provider |
| Model allowlists | Only approved models per your org policy |
| Budget caps | Daily and monthly spend limits enforced |
| Audit trail | Full request metadata logged for compliance |
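Cost tags feed the cost tracking row above via the `X-CM-Tags` header shown in the full examples. If you build that header value programmatically, a small helper keeps the format consistent. This helper is hypothetical (not part of any SDK) and assumes only the comma-separated `key=value` format from the examples:

```python
def format_cm_tags(tags: dict[str, str]) -> str:
    """Build an X-CM-Tags header value from a dict of cost tags.

    Assumes the comma-separated key=value format used in the examples
    above, so keys and values must not contain ',' or '='.
    """
    for key, value in tags.items():
        if "," in key or "=" in key or "," in value or "=" in value:
            raise ValueError(f"invalid tag: {key}={value}")
    return ",".join(f"{key}={value}" for key, value in tags.items())

print(format_cm_tags({"project": "my-app", "env": "production"}))
# → project=my-app,env=production
```

You can pass the result in `default_headers` at client construction, or per request via the SDK's `extra_headers` parameter when different calls belong to different cost buckets.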
## Working examples
- Python example — non-streaming, streaming, system prompts
- TypeScript example — non-streaming, streaming, system prompts
## Next steps
- Gateway Quickstart — full setup walkthrough
- Cost Tracking — budget alerts and cost attribution
- Governance Chain — deep dive into each governance step