
LangChain Integration

Route all LangChain LLM calls through the Curate-Me gateway for automatic cost tracking, rate limiting, and PII scanning. Two constructor arguments change: `base_url` and a `default_headers` entry carrying your gateway key.

Before and after

```python
# Before — direct to OpenAI
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
```

```python
# After — through Curate-Me gateway
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://api.curate-me.ai/v1/openai",       # ← added
    api_key="sk-your-openai-key",
    default_headers={"X-CM-API-Key": "cm_sk_YOUR_KEY"},  # ← added
)
```

Every chain, agent, and tool call that uses this LLM instance is now governed by the gateway.
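Since every instance needs the same `base_url` and headers, one option is to centralize them in a small factory. This is a sketch; `make_gateway_kwargs` and its parameter names are our own, not part of LangChain or a gateway SDK:

```python
def make_gateway_kwargs(model: str, cm_key: str, openai_key: str,
                        tags: str = "") -> dict:
    """Build the ChatOpenAI constructor kwargs shared across the app."""
    headers = {"X-CM-API-Key": cm_key}
    if tags:
        headers["X-CM-Tags"] = tags  # optional cost-attribution tags
    return {
        "model": model,
        "base_url": "https://api.curate-me.ai/v1/openai",
        "api_key": openai_key,
        "default_headers": headers,
    }

# llm = ChatOpenAI(**make_gateway_kwargs("gpt-4o", "cm_sk_YOUR_KEY",
#                                        "sk-your-openai-key", tags="env=dev"))
```

Each chain or agent then builds its LLM from the same kwargs, so a gateway change touches one function.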

Full example

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Point LangChain at the gateway
llm = ChatOpenAI(
    model="gpt-4o",
    base_url="https://api.curate-me.ai/v1/openai",
    api_key="sk-your-openai-key",
    default_headers={
        "X-CM-API-Key": "cm_sk_YOUR_KEY",
        "X-CM-Tags": "project=langchain-app,env=production",  # optional cost tags
    },
    max_tokens=150,
)

# Simple invocation
response = llm.invoke("What is an AI gateway proxy?")
print(response.content)

# Streaming
for chunk in llm.stream("List three benefits of LLM cost tracking."):
    print(chunk.content, end="", flush=True)

# Chains work identically
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical writer. Reply in one paragraph."),
    ("human", "{question}"),
])
chain = prompt | llm
result = chain.invoke({"question": "How does rate limiting protect AI apps?"})
print(result.content)
```

LangGraph agents

LangGraph agents use the same ChatOpenAI instance. Swap the base URL once and every agent step goes through the gateway:

```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(llm, tools=[...])
result = agent.invoke({"messages": [("user", "Research AI governance trends")]})
```
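A placeholder for the `tools=[...]` slot can be a plain function with a docstring, which recent LangGraph versions wrap as a tool automatically. The `search_news` name and stub body below are illustrative, not part of the gateway docs:

```python
def search_news(query: str) -> str:
    """Search recent AI-governance news for the given query."""
    # Stub result; swap in a real search API call.
    return f"Top results for: {query}"

# agent = create_react_agent(llm, tools=[search_news])
```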

TypeScript (LangChain.js)

```typescript
import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({
  model: 'gpt-4o',
  configuration: {
    baseURL: 'https://api.curate-me.ai/v1/openai',
    defaultHeaders: { 'X-CM-API-Key': 'cm_sk_YOUR_KEY' },
  },
  openAIApiKey: 'sk-your-openai-key',
});

const response = await llm.invoke('What is AI governance?');
console.log(response.content);
```

Prerequisites

```bash
pip install langchain-openai
```

What you get

Every LangChain call through the gateway automatically receives:

| Feature | Description |
| --- | --- |
| Cost tracking | Per-request token and dollar cost recorded in real time |
| Rate limiting | Per-org, per-key request throttling |
| PII scanning | Regex scan for secrets and PII before the request reaches the provider |
| Model allowlists | Only approved models, per your org policy |
| Budget caps | Daily and monthly spend limits enforced |
| Audit trail | Full request metadata logged for compliance |
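Cost tracking happens on the gateway side, but you can sanity-check its numbers locally from the token counts LangChain exposes in `response.usage_metadata`. The per-1K prices below are illustrative placeholders, not current provider rates:

```python
# Illustrative per-1K-token prices; replace with your provider's current rates.
PRICES = {"gpt-4o": {"input": 0.005, "output": 0.015}}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost for one request, mirroring what the gateway records."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# usage = response.usage_metadata  # {"input_tokens": ..., "output_tokens": ...}
# print(estimate_cost("gpt-4o", usage["input_tokens"], usage["output_tokens"]))
```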
