Add Governance to Your OpenClaw Agents in 5 Minutes

Published February 27, 2026

If you are running OpenClaw agents in production, chances are they are making LLM calls with no cost limits, no PII scanning, and no audit trail. This guide shows you how to add all three in under five minutes using the Curate-Me governance gateway.

No Docker setup. No infrastructure changes. One environment variable.

Prerequisites

  • An existing OpenClaw setup (local or hosted)
  • A Curate-Me account (sign up for free)
  • Your cm_sk_xxx API key from the dashboard

Step 1: Install the SDK (Optional)

You do not need the SDK to use the gateway — a base URL swap is sufficient. But the SDK provides typed helpers for policy configuration.

Python

pip install curate-me

from curate_me import CurateMe

client = CurateMe(api_key="cm_sk_xxx")

TypeScript

npm install @curate-me/sdk

import { CurateMe } from '@curate-me/sdk'

const client = new CurateMe({ apiKey: 'cm_sk_xxx' })

Step 2: Set Your Base URL

This is the only required change. Point your LLM SDK at the Curate-Me gateway instead of the provider directly.

Environment Variable (Works with Any SDK)

# .env
OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai

Add the X-CM-API-Key header to your requests. Most OpenAI-compatible SDKs support custom headers:

Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    base_url="https://api.curate-me.ai/v1/openai",
    default_headers={"X-CM-API-Key": "cm_sk_xxx"},
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

TypeScript (OpenAI SDK)

import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://api.curate-me.ai/v1/openai',
  defaultHeaders: { 'X-CM-API-Key': 'cm_sk_xxx' },
})

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
})

That is it. Your requests now flow through the governance chain.

Step 3: Configure Policies

Open the Curate-Me dashboard and navigate to Settings > Policies. Enable the controls you need:

| Policy            | What It Does                             | Recommended Setting              |
|-------------------|------------------------------------------|----------------------------------|
| Daily Budget      | Maximum spend per day per org            | $25 for dev, $100 for production |
| Per-Request Limit | Maximum estimated cost per request       | $1.00                            |
| PII Scanning      | Scan for secrets/PII in request content  | Enabled                          |
| Model Allowlist   | Restrict which models can be used        | Allow only models you need       |
| HITL Gate         | Require approval for expensive requests  | Enable for requests > $5.00      |

All policies can be configured per organization, per API key, or globally.
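The recommended settings above can also be captured as plain configuration and sanity-checked before you apply them in the dashboard. This is a minimal sketch: the field names are illustrative assumptions, not the documented Curate-Me schema.

```python
# Illustrative policy settings mirroring the table above.
# Field names are assumptions, not the official Curate-Me schema.
dev_policies = {
    "daily_budget_usd": 25.00,  # use 100.00 for production
    "per_request_limit_usd": 1.00,
    "pii_scanning": True,
    "model_allowlist": ["gpt-4o", "gpt-4o-mini"],
    "hitl_threshold_usd": 5.00,
}

def policy_warnings(p: dict) -> list[str]:
    """Flag internally inconsistent settings before applying them."""
    warnings = []
    if p["per_request_limit_usd"] > p["daily_budget_usd"]:
        warnings.append("per-request limit exceeds the daily budget")
    if not p["model_allowlist"]:
        warnings.append("empty allowlist blocks every model")
    return warnings

print(policy_warnings(dev_policies))  # []
```

An empty list means the settings are at least self-consistent; anything flagged here would otherwise only surface as confusing rejections at request time.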

Step 4: Verify It Works

Make a test request and check the dashboard. You should see:

  1. Request logged in the Activity feed with latency, cost, and model details
  2. Cost tracked in real-time on the Costs page
  3. Governance chain applied — each policy evaluation is visible in the request detail view

Quick Verification

curl https://api.curate-me.ai/v1/openai/chat/completions \
  -H "Authorization: Bearer your-openai-key" \
  -H "X-CM-API-Key: cm_sk_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "ping"}]
  }'

If the response comes back normally, governance is active. Check the dashboard to confirm the request was logged.
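If a request is denied instead, the gateway responds with a non-2xx status. The mapping below is a debugging heuristic under assumed conventional HTTP semantics, not documented Curate-Me behavior — check the request detail view in the dashboard for the authoritative reason.

```python
def explain_gateway_status(status_code: int) -> str:
    """Suggest a likely cause for a gateway response status.

    The non-2xx mappings are assumptions for debugging, not
    documented Curate-Me behavior.
    """
    if 200 <= status_code < 300:
        return "request passed all policies"
    hints = {
        401: "missing or invalid API key (provider or X-CM-API-Key)",
        402: "daily budget or per-request cost cap reached",
        403: "blocked by policy, e.g. PII hit or disallowed model",
    }
    return hints.get(status_code, "gateway or upstream provider error")

print(explain_gateway_status(200))  # request passed all policies
```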

What You Get

After these four steps, every LLM call from your OpenClaw agents has:

  • Cost tracking — real-time spend per org, per agent, per model
  • Budget caps — daily limits that prevent runaway costs
  • PII scanning — secrets and PII blocked before reaching providers
  • Model control — only approved models can be used
  • Audit trail — every request logged with full context
  • HITL approvals — human sign-off on expensive operations

All without changing a single line of your agent code.


Questions? Email support@curate-me.ai or read the full gateway documentation.