# LangChain Integration

Route all LangChain LLM calls through the Curate-Me gateway for automatic cost tracking, rate limiting, and PII scanning. Only one parameter changes — `base_url`.
## Before and after

```python
# Before — direct to OpenAI
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
```

```python
# After — through Curate-Me gateway
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://api.curate-me.ai/v1/openai",       # ← added
    api_key="sk-your-openai-key",
    default_headers={"X-CM-API-Key": "cm_sk_YOUR_KEY"},  # ← added
)
```

Every chain, agent, and tool call that uses this LLM instance is now governed by the gateway.
## Full example

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Point LangChain at the gateway
llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://api.curate-me.ai/v1/openai",
    api_key="sk-your-openai-key",
    default_headers={
        "X-CM-API-Key": "cm_sk_YOUR_KEY",
        "X-CM-Tags": "project=langchain-app,env=production",  # optional cost tags
    },
    max_tokens=150,
)

# Simple invocation
response = llm.invoke("What is an AI gateway proxy?")
print(response.content)

# Streaming
for chunk in llm.stream("List three benefits of LLM cost tracking."):
    print(chunk.content, end="", flush=True)

# Chains work identically
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer. Reply in one paragraph."),
    ("human", "{question}"),
])
chain = prompt | llm
result = chain.invoke({"question": "How does rate limiting protect AI apps?"})
print(result.content)
```

## LangGraph agents
LangGraph agents use the same `ChatOpenAI` instance. Swap the base URL once and every agent step goes through the gateway:

```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(llm, tools=[...])
result = agent.invoke({"messages": [("user", "Research AI governance trends")]})
```

## TypeScript (LangChain.js)
```typescript
import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({
  model: 'gpt-4o',
  configuration: {
    baseURL: 'https://api.curate-me.ai/v1/openai',
    defaultHeaders: { 'X-CM-API-Key': 'cm_sk_YOUR_KEY' },
  },
  openAIApiKey: 'sk-your-openai-key',
});

const response = await llm.invoke('What is AI governance?');
console.log(response.content);
```

## Prerequisites
```shell
pip install langchain-openai
```

The LangGraph example additionally requires `pip install langgraph`.

## What you get
Every LangChain call through the gateway automatically receives:
| Feature | Description |
|---|---|
| Cost tracking | Per-request token and dollar cost recorded in real time |
| Rate limiting | Per-org, per-key request throttling |
| PII scanning | Regex scan for secrets and PII before hitting the provider |
| Model allowlists | Only approved models per your org policy |
| Budget caps | Daily and monthly spend limits enforced |
| Audit trail | Full request metadata logged for compliance |
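When a request is throttled or exceeds a budget cap, we assume the gateway answers with a standard HTTP 429, which the OpenAI client surfaces as a rate-limit error. A minimal backoff wrapper sketch — `GatewayRateLimitError` is a stand-in name for whatever exception your client actually raises:

```python
import random
import time


class GatewayRateLimitError(Exception):
    """Stand-in for the client's rate-limit (HTTP 429) exception."""


def invoke_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a throttled gateway call with exponential backoff plus jitter.

    Makes up to max_retries + 1 attempts in total; the final attempt
    lets the exception propagate to the caller.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except GatewayRateLimitError:
            # Wait base_delay * 1, 2, 4, ... seconds, plus jitter.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
    return call()


# Usage with the llm instance from the examples above:
# answer = invoke_with_backoff(lambda: llm.invoke("What is an AI gateway proxy?"))
```

In production you would likely catch the real exception type (or rely on the client's built-in retries) rather than this placeholder.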
## Working examples
- Python example — non-streaming, streaming, chains
- TypeScript example — ChatOpenAI with chains
## Next steps
- Gateway Quickstart — full setup walkthrough
- CrewAI Integration — multi-agent crews with per-agent cost tags
- Cost Tracking — budget alerts and cost attribution