
Introducing Curate-Me: The Governance Layer for AI Agents

Published February 27, 2026

AI agents are going to production. OpenClaw has 233K+ GitHub stars, and thousands of teams are deploying autonomous agents that make LLM calls, execute code, browse the web, and interact with external services. The tooling for building agents is excellent. The tooling for governing them barely exists.

Today we are launching Curate-Me — the governance gateway for AI agents.

The Problem

When you deploy an AI agent to production, you take on three categories of risk that the agent frameworks do not address:

Cost explosion. An agent in a retry loop can burn $500 in LLM credits in under an hour. Without per-request cost limits or daily budget caps, a single misconfigured agent drains your entire month’s budget overnight. There is no kill switch.

Data leakage. Agents process user inputs that frequently contain API keys, passwords, and PII. Without scanning, this data flows to third-party LLM providers in plaintext. One leaked credential in a prompt compromises your infrastructure.

No audit trail. When an autonomous agent sends an email, creates a PR, or modifies a database, there is often no record of why. If something goes wrong, you cannot reconstruct the decision chain. Regulators are starting to require this — the EU AI Act enters full enforcement in August 2026.

Existing tools solve pieces of this. Portkey and Helicone provide LLM proxy logging. E2B and Daytona offer sandboxed execution. But nobody combines governance, execution, and observability in a single platform.

What Curate-Me Does

Curate-Me is a reverse proxy that sits between your AI agents and the LLM providers they call. Integration takes two minutes — change one environment variable:

```shell
# Before (direct to provider):
OPENAI_BASE_URL=https://api.openai.com/v1

# After (through Curate-Me):
OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai
X-CM-API-Key: cm_sk_xxx
```

Zero code changes. Your existing SDK calls to OpenAI, Anthropic, Google, Groq, Mistral, xAI, or any of the 17+ supported providers work unchanged. Every request now flows through a 5-step governance chain before reaching the upstream provider:

  1. Rate Limiting — Per-org, per-key request throttling. Stop runaway loops before they start.
  2. Cost Estimation — Estimate cost before execution. Compare against per-request and daily budget limits. Reject if the request would exceed caps.
  3. PII Scanning — Regex scan for secrets, API keys, passwords, and PII in request content. Block before data leaves your network.
  4. Model Allowlists — Control which models each team or API key can use. Prevent accidental use of expensive models.
  5. Human-in-the-Loop — Route high-cost or sensitive operations to an approval queue. A human reviews before the request proceeds.

The chain short-circuits on the first denial. If a request fails the cost check, it never reaches the PII scanner.
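To make the short-circuit behavior concrete, here is a minimal sketch of how such a chain could be wired up. The check names follow the five steps above, but every function, field, and pattern here is illustrative, not Curate-Me's actual API or rule set.

```python
import re

# Illustrative per-request state (field names are assumptions, not the real API).
REQUEST = {
    "org": "acme",
    "model": "gpt-4o",
    "estimated_cost_usd": 0.42,
    "content": "Summarize this log: AWS_SECRET_ACCESS_KEY=abc123xyz",
}
LIMITS = {"per_request_usd": 1.00, "daily_remaining_usd": 5.00}
ALLOWED_MODELS = {"gpt-4o", "gpt-4o-mini"}
SECRET_PATTERNS = [re.compile(r"AWS_SECRET_ACCESS_KEY\s*="),
                   re.compile(r"sk-[A-Za-z0-9]{20,}")]

def check_rate_limit(req):  # step 1: stub; a real version tracks per-org counters
    return (True, "rate ok")

def check_cost(req):        # step 2: reject before any money is spent
    if req["estimated_cost_usd"] > LIMITS["per_request_usd"]:
        return (False, "exceeds per-request cap")
    if req["estimated_cost_usd"] > LIMITS["daily_remaining_usd"]:
        return (False, "exceeds daily budget")
    return (True, "cost ok")

def check_pii(req):         # step 3: regex scan before data leaves the network
    for pat in SECRET_PATTERNS:
        if pat.search(req["content"]):
            return (False, f"secret matched {pat.pattern!r}")
    return (True, "no secrets found")

def check_model(req):       # step 4: per-key model allowlist
    return (req["model"] in ALLOWED_MODELS, "model allowlist")

def check_hitl(req):        # step 5: stub; a real version enqueues for approval
    return (True, "auto-approved under threshold")

CHAIN = [check_rate_limit, check_cost, check_pii, check_model, check_hitl]

def run_chain(req):
    """Short-circuit on the first denial, as described above."""
    for check in CHAIN:
        ok, reason = check(req)
        if not ok:
            return ("denied", check.__name__, reason)
    return ("allowed", None, None)

# The sample content contains a leaked secret, so step 3 denies it and
# steps 4 and 5 never run.
print(run_chain(REQUEST))
```

Because the chain stops at the first denial, the audit log can record exactly which check rejected a request and why.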

Managed Runners

Most governance tools stop at the API proxy layer. But AI agents do more than make LLM calls — they execute code, browse websites, and interact with filesystems. This execution layer is ungoverned.

Curate-Me provides managed runners powered by OpenClaw. Each runner is a sandboxed container with:

  • 3 tool profiles: Full dev tools (shell, git, filesystem), browser automation (Playwright, MCP), or data processing only (no tools)
  • State machine lifecycle: Provisioned, ready, running, stopped, terminated — with immutable audit trail at every transition
  • Compute governance: CPU, memory, and time quotas per runner
  • Network control: Egress policies that restrict which external services an agent can reach
  • Time-travel debugging: Replay any agent execution step-by-step after the fact
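The lifecycle and audit-trail bullets can be sketched as a small state machine. The five states come from the list above; the legal-transition table, class, and method names are assumptions for illustration (a real implementation would also persist the trail immutably rather than keep it in memory).

```python
import time

# States from the list above; which transitions are legal is an assumption.
TRANSITIONS = {
    "provisioned": {"ready", "terminated"},
    "ready":       {"running", "stopped", "terminated"},
    "running":     {"stopped", "terminated"},
    "stopped":     {"ready", "terminated"},
    "terminated":  set(),  # terminal state: nothing transitions out
}

class Runner:
    def __init__(self, runner_id):
        self.id = runner_id
        self.state = "provisioned"
        self.audit = [("provisioned", time.time())]  # append-only trail

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.audit.append((new_state, time.time()))  # every transition recorded

r = Runner("runner-01")
r.transition("ready")
r.transition("running")
r.transition("stopped")
print([state for state, _ in r.audit])
# ['provisioned', 'ready', 'running', 'stopped']
```

Rejecting illegal transitions at the boundary is what makes the audit trail trustworthy: the recorded sequence is guaranteed to be a valid path through the lifecycle.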

No competitor offers managed execution environments with governance controls. Portkey proxies requests but cannot run agents. E2B runs code but does not apply governance policies.

The Dashboard

Every LLM call, every agent execution, every cost event is visible in the Curate-Me dashboard:

  • Real-time cost tracking: See spend per agent, per model, per org, updated live
  • Governance audit log: Every policy decision recorded — which checks passed, which denied, and why
  • Agent observability: Execution traces, token usage, latency metrics across all agents
  • Budget management: Set daily spending caps per org, get alerts before you hit them
  • Runner console: Provision, monitor, and control managed execution environments
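The budget-management behavior (caps plus alerts before you hit them) could look like the following sketch; the 80% alert threshold and all names are assumptions for illustration, not documented defaults.

```python
def budget_status(spent_usd, daily_cap_usd, alert_fraction=0.8):
    """Classify current spend against a daily cap.

    Returns 'ok', 'alert' (approaching the cap, fire a notification),
    or 'over' (cap reached, the cost check rejects new requests).
    The 0.8 alert fraction is an illustrative choice.
    """
    if spent_usd >= daily_cap_usd:
        return "over"
    if spent_usd >= alert_fraction * daily_cap_usd:
        return "alert"
    return "ok"

print(budget_status(30.0, 100.0))   # ok
print(budget_status(85.0, 100.0))   # alert
print(budget_status(100.0, 100.0))  # over
```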

SDKs and CLI

For teams that want programmatic access:

  • Python SDK: pip install curate-me — full gateway and runner control
  • TypeScript SDK: npm install @curate-me/sdk — v1.0.0 shipping today
  • CLI: npm install -g @curate-me/cli — manage agents, runners, and policies from the terminal
  • Embed widget: Drop-in chat widget for customer-facing agent interfaces

Pricing

We are launching with three tiers designed for teams at different stages:

|                  | Starter    | Growth          | Enterprise     |
|------------------|------------|-----------------|----------------|
| Price            | $49/mo     | $199/mo         | $499/mo        |
| Gateway Requests | 100K/mo    | 500K/mo         | Unlimited      |
| Managed Runners  | 2 concurrent | 10 concurrent | Unlimited      |
| Governance       | Full chain | Full chain      | Full chain     |
| Audit Retention  | 30 days    | 90 days         | 1 year         |
| Support          | Community  | Email (24h SLA) | Slack (4h SLA) |

All plans include the complete governance chain. No features are paywalled behind enterprise.

A free tier (10K requests/month, 1 runner) is available for evaluation.

What's Next

We are onboarding our first 10 design partners this week. If you are running AI agents in production and want cost control, security scanning, and an audit trail without building it yourself, we want to talk.

If you have questions, feedback, or want to discuss governance for AI agents, reach out at hello@curate-me.ai.


Curate-Me is the governance layer for AI agents. Cost caps, PII scanning, rate limiting, HITL approvals, managed runners, and a full audit trail — zero code changes.