# TypeScript SDK
The `@curate-me/sdk` TypeScript SDK provides a complete, type-safe client for the Curate-Me AI Gateway and managed runner platform. It is edge-runtime compatible (Vercel, Cloudflare Workers), uses the standard `fetch` API, and ships with full TypeScript definitions.
## Installation
```bash
npm install @curate-me/sdk
# or
yarn add @curate-me/sdk
# or
pnpm add @curate-me/sdk
```

Requires Node.js 18 or later.
## Quick Start
### Gateway Integration (Zero Code Changes)
Point your existing OpenAI or Anthropic SDK at the gateway:
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.curate-me.ai/v1/openai',
  defaultHeaders: { 'X-CM-API-Key': 'cm_sk_xxx' },
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
```

### Using the SDK Gateway Wrapper
For automatic configuration of provider SDKs:
```typescript
import { CurateGateway } from '@curate-me/sdk';
import OpenAI from 'openai';

const gw = new CurateGateway('cm_sk_xxx', 'https://api.curate-me.ai');

// Get config for the OpenAI SDK
const client = new OpenAI(gw.openaiConfig('sk-your-openai-key'));

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

### Using the Main Client
The `CurateMe` client provides access to all platform APIs:
```typescript
import { CurateMe } from '@curate-me/sdk';

// Initialize with an API key
const client = new CurateMe({
  apiKey: 'cm_xxx',
  orgId: 'org_xxx',
});

// Or from environment variables
const clientFromEnv = CurateMe.fromEnv();

// Use the APIs
const agents = await client.agents.list();
const costs = await client.costs.getSummary();
```

## Gateway Methods
### CurateGateway
| Method | Description |
|---|---|
| `gw.openaiConfig(providerKey?)` | Get OpenAI SDK config for the gateway |
| `gw.anthropicConfig(providerKey?)` | Get Anthropic SDK config for the gateway |
### GatewayAdmin
```typescript
import { GatewayAdmin } from '@curate-me/sdk';

const admin = new GatewayAdmin({ apiKey: 'cm_sk_xxx' });

// Usage and cost tracking
const usage = await admin.getUsage({ days: 7 });
const costs = await admin.getDailyCosts({ days: 30 });

// Governance policies
const policies = await admin.getPolicies();
await admin.updatePolicies({ dailyBudgetUsd: 50.0 });

// API key management
const keys = await admin.listKeys();
const key = await admin.createKey({ name: 'production' });
await admin.revokeKey(keyId);
```

## Runner Methods
The `client.runners` property provides full runner lifecycle management:
```typescript
import { CurateMe } from '@curate-me/sdk';

const client = new CurateMe({ apiKey: 'cm_xxx', orgId: 'org_xxx' });

// List runners
const { runners, total } = await client.runners.list({ state: 'running', limit: 10 });

// Create a runner
const runner = await client.runners.create({
  tool_profile: 'locked', // locked | web_automation | full_vm_tools
  provider_type: 'hetzner_vps',
  ttl_seconds: 3600,
});

// Start a session
const session = await client.runners.startSession(runner.runner_id);

// Execute a command
const result = await client.runners.execCommand(
  runner.runner_id,
  session.session_id,
  ['echo', 'Hello from runner!'],
  30, // timeout seconds
);
console.log(result.stdout); // "Hello from runner!"
console.log(result.exit_code); // 0

// Get audit events
const events = await client.runners.getEvents(runner.runner_id, 50);

// Stream real-time events via SSE
for await (const event of client.runners.streamEvents(runner.runner_id)) {
  console.log(event.event, event.data);
}

// Stop session and terminate
await client.runners.stopSession(runner.runner_id, session.session_id);
await client.runners.terminate(runner.runner_id);
```

### Runner Types
```typescript
interface RunnerResponse {
  runner_id: string;
  org_id: string;
  state: 'provisioning' | 'ready' | 'running' | 'stopped' | 'failed' | 'terminated';
  provider_type: string;
  tool_profile: string;
  ttl_seconds: number;
  created_at: string;
  updated_at: string;
  session?: SessionResponse | null;
}

interface CommandResponse {
  command_id: string;
  session_id: string;
  exit_code: number | null;
  stdout: string | null;
  stderr: string | null;
}
```

### Advanced Runner Features
```typescript
// Inventory with filtering and cursor pagination
const inventory = await client.runners.listInventory({
  state: 'ready',
  tool_profile: 'locked',
  limit: 10,
  sort: 'created_at_desc',
});

// Async command jobs (non-blocking execution)
const job = await client.runners.enqueueCommand(runnerId, sessionId, ['npm', 'test']);
const result = await client.runners.getCommandJob(runnerId, job.command_id);
await client.runners.cancelCommandJob(runnerId, job.command_id);

// Logs
const logs = await client.runners.getLogs(runnerId, sessionId, { limit: 100 });

// Artifacts
const artifact = await client.runners.uploadArtifact(runnerId, {
  filename: 'report.pdf',
  path: '/workspace/report.pdf',
  size_bytes: 102400,
});
const { artifacts } = await client.runners.listArtifacts(runnerId);

// Egress policy
const { policy } = await client.runners.getEgressPolicy();
await client.runners.updateEgressPolicy({
  allowed_domains: ['api.openai.com', 'pypi.org'],
  allowed_cidrs: [],
  allow_all: false,
});

// Quotas
const quotas = await client.runners.getQuotas();
await client.runners.updateQuotas({ max_runners: 10 });

// Auth tokens
const { token } = await client.runners.issueToken(runnerId);

// Desktop streaming
const streamToken = await client.runners.desktopStreamToken(runnerId, sessionId);
const screenshot = await client.runners.desktopScreenshot(runnerId, sessionId);
```

## Error Handling
The SDK provides specific error types for different failure modes:
```typescript
import {
  CurateMe,
  RateLimitError,
  BudgetExceededError,
  AuthenticationError,
  GatewayGovernanceError,
  isRetryableError,
  getRetryDelay,
} from '@curate-me/sdk';

try {
  const result = await client.agents.run('agent_id', { query: 'Hello' });
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${error.retryAfter}s`);
  } else if (error instanceof BudgetExceededError) {
    console.log('Daily budget exceeded');
  } else if (error instanceof AuthenticationError) {
    console.log('Invalid API key');
  } else if (isRetryableError(error)) {
    const delay = getRetryDelay(error);
    await new Promise(resolve => setTimeout(resolve, delay));
  }
}
```

| Error | When |
|---|---|
| `AuthenticationError` | Invalid or missing API key |
| `AuthorizationError` | Insufficient permissions |
| `RateLimitError` | Rate limit exceeded (HTTP 429) |
| `BudgetExceededError` | Daily budget exceeded |
| `NotFoundError` | Resource not found |
| `ValidationError` | Invalid request parameters |
| `ServerError` | Gateway or provider server error |
| `GatewayGovernanceError` | Governance policy denied the request |
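The `isRetryableError` and `getRetryDelay` helpers compose naturally into a generic retry loop. A minimal sketch; `withRetries` itself is not an SDK export, and the retryability predicate and delay function are taken as parameters (in practice you would pass the SDK's `isRetryableError` and `getRetryDelay`):

```typescript
// Generic retry wrapper: re-invokes `fn` while `isRetryable(error)` holds,
// sleeping `getDelay(error)` milliseconds between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  getDelay: (err: unknown) => number,
  maxRetries = 3,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries || !isRetryable(err)) throw err;
      await new Promise((resolve) => setTimeout(resolve, getDelay(err)));
    }
  }
}
```

With the SDK this would look like `withRetries(() => client.agents.run('agent_id', { query: 'Hello' }), isRetryableError, getRetryDelay)`.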
## Edge Runtime Support
The SDK works in edge runtimes (Vercel Edge Functions, Cloudflare Workers) because it uses only the standard fetch API with no Node.js-specific dependencies:
```typescript
// Vercel Edge Function
import { CurateGateway } from '@curate-me/sdk';

export const config = { runtime: 'edge' };

export default async function handler(req: Request) {
  const gw = new CurateGateway(process.env.CM_API_KEY!);
  const gwConfig = gw.openaiConfig();

  const response = await fetch(`${gwConfig.baseURL}/chat/completions`, {
    method: 'POST',
    headers: { ...gwConfig.defaultHeaders, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Hello!' }],
    }),
  });

  return new Response(response.body, {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```

## Configuration
| Environment Variable | Description | Default |
|---|---|---|
| `CURATE_ME_API_KEY` | Your API key | (required) |
| `CURATE_ME_ORG_ID` | Organization ID | (optional) |
| `CURATE_ME_BASE_URL` | API base URL | `https://api.curate-me.ai/api/v1` |
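The defaults in the table translate into a config object roughly like the sketch below. This is a hypothetical illustration of what `CurateMe.fromEnv()` is assumed to do, not its actual implementation; the helper name `configFromEnv` is made up for this example:

```typescript
// Hypothetical: builds client options from the environment variables above.
// In a real app you would pass `process.env` as the argument.
function configFromEnv(env: Record<string, string | undefined>) {
  const apiKey = env.CURATE_ME_API_KEY;
  if (!apiKey) throw new Error('CURATE_ME_API_KEY is required');
  return {
    apiKey,
    orgId: env.CURATE_ME_ORG_ID, // optional
    baseUrl: env.CURATE_ME_BASE_URL ?? 'https://api.curate-me.ai/api/v1', // default
  };
}
```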
Client options can also be passed explicitly:

```typescript
import { CurateMe } from '@curate-me/sdk';

const client = new CurateMe({
  apiKey: 'cm_xxx',
  orgId: 'org_xxx',
  baseUrl: 'http://localhost:8001/api/v1', // Local development
  timeout: 120000, // Request timeout in ms
  debug: true, // Enable debug logging
  retry: {
    maxRetries: 5,
    initialDelay: 500,
    maxDelay: 8000,
    backoffMultiplier: 2,
  },
  // Custom fetch for edge runtimes or testing
  fetch: globalThis.fetch,
});
```

## Streaming Error Recovery
The SDK includes resilient streaming utilities that automatically handle connection drops and retries:
```typescript
import { resilientOpenAIStream } from '@curate-me/sdk';

const stream = resilientOpenAIStream(
  () => openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
    stream: true,
  }),
  {
    maxRetries: 3,
    initialDelay: 1000,
    onRetry: (attempt, error) => console.log(`Retry ${attempt}: ${error.message}`),
  },
);

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```