
How Curate-Me Works

This page explains the platform in plain English.

If you are trying to answer “what does this actually do for us?” without reading backend code first, start here.

The Problem It Solves

Most teams build AI products in this order:

  1. get the first model call working
  2. add prompts, tools, and workflows
  3. ship
  4. only later discover they now need:
    • budget controls
    • request logs
    • provider routing
    • approval flows
    • safer execution environments
    • a way for non-engineers to see what is happening

Curate-Me is the layer you add when the AI system is useful enough that it now needs governance and operations, not just prompts.

The Simple Mental Model

Curate-Me has three jobs:

  • Gateway: checks each AI request before it leaves your system
  • Managed Runners: gives agents a safer place to run work
  • Dashboard: gives humans a way to see, control, and explain what the AI system is doing

What Happens When You Use It

1. Your app sends a request

Your code still uses an LLM SDK. The main difference is that instead of talking directly to a provider, it talks to Curate-Me first.

# Before
OPENAI_BASE_URL=https://api.openai.com/v1

# After
OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai
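To make the switch concrete, here is a minimal Python sketch. It assumes an OpenAI-compatible SDK that reads OPENAI_BASE_URL (or accepts a base URL argument); the gateway URL is the one shown above, and the endpoint path is illustrative.

```python
import os

# Most OpenAI-compatible SDKs read OPENAI_BASE_URL or take a base_url
# argument. Pointing it at Curate-Me routes every call through the
# gateway; the request and response shapes stay the same.
base_url = os.environ.get("OPENAI_BASE_URL", "https://api.curate-me.ai/v1/openai")

# The SDK then builds provider-style endpoints on top of that base:
endpoint = f"{base_url}/chat/completions"
print(endpoint)
```

No application code changes beyond the base URL: the provider still sees a normal API request, it just passes through Curate-Me on the way out.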

2. Curate-Me checks the request

Before the request reaches the provider, Curate-Me can answer questions like:

  • is this request within budget?
  • is this team allowed to use this model?
  • does the request contain secrets or personal data?
  • should this request be approved by a human before it runs?

If the request is safe and allowed, Curate-Me forwards it. If not, it can block it or pause it for approval.
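The checks above can be pictured as a single decision function. This is an illustrative sketch, not Curate-Me's actual code: the names (`Request`, `check_request`), the policy tables, and the budget figures are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Request:
    team: str
    model: str
    est_cost: float          # estimated cost of this call, in dollars
    contains_secrets: bool   # result of a secrets/PII scan

# Hypothetical policy state the gateway would consult.
BUDGET_REMAINING = {"search-team": 12.50}          # dollars left this period
ALLOWED_MODELS = {"search-team": {"gpt-4o-mini"}}  # per-team model allowlist

def check_request(req: Request) -> str:
    """Return 'forward', 'block', or 'pause_for_approval'."""
    if req.est_cost > BUDGET_REMAINING.get(req.team, 0.0):
        return "block"               # over budget
    if req.model not in ALLOWED_MODELS.get(req.team, set()):
        return "block"               # model not allowed for this team
    if req.contains_secrets:
        return "pause_for_approval"  # a human reviews it before it runs
    return "forward"                 # safe and allowed

print(check_request(Request("search-team", "gpt-4o-mini", 0.02, False)))
```

The point of the sketch is the shape of the outcome: every request resolves to exactly one of forward, block, or pause.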

3. Curate-Me records what happened

After the request finishes, Curate-Me records:

  • model used
  • latency
  • token usage
  • cost
  • whether a policy blocked or approved it

That information shows up in the dashboard for operators and other teams.
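The record for one request might look like the sketch below. The field names, token prices, and numbers are illustrative; the doc lists the categories, not an exact schema.

```python
# Hypothetical per-request record, covering the fields listed above.
# Token prices here are made up for illustration.
PROMPT_PRICE_USD = 0.15e-6       # assumed price per prompt token
COMPLETION_PRICE_USD = 0.60e-6   # assumed price per completion token

record = {
    "model": "gpt-4o-mini",
    "latency_ms": 840,
    "tokens": {"prompt": 1200, "completion": 300},
    "cost_usd": round(1200 * PROMPT_PRICE_USD + 300 * COMPLETION_PRICE_USD, 6),
    "policy_decision": "forward",  # or "block" / "pause_for_approval"
}
print(record["cost_usd"])
```

Because every request produces a record like this, cost and policy questions become queries over logs rather than guesswork.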

4. Runners handle AI work that needs an environment

Some agent tasks need more than a model call. They may need files, a shell, browser automation, a sandbox, or a scheduled workflow.

That is where managed runners come in.

A runner is a controlled execution environment for agent work. It can be:

  • started and stopped
  • given a template
  • attached to files or channels
  • monitored by operators
  • billed and governed separately
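To make the lifecycle above concrete, here is a hypothetical sketch. The class and method names are illustrative only, not Curate-Me's real runner API.

```python
# Illustrative runner lifecycle: start/stop, a template, and attachments.
class Runner:
    def __init__(self, template: str):
        self.template = template            # e.g. a browser-automation template
        self.state = "stopped"
        self.attachments: list[str] = []

    def start(self) -> None:
        self.state = "running"

    def attach(self, resource: str) -> None:
        self.attachments.append(resource)   # e.g. a file share or a channel

    def stop(self) -> None:
        self.state = "stopped"

runner = Runner(template="browser-automation")
runner.start()
runner.attach("files://reports/")
runner.stop()
```

The design point is that the environment, not just the model call, is a managed object: operators can see it, control it, and account for it separately.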

5. Humans stay in the loop

Curate-Me is not just for agents. It is also for the people responsible for them.

The dashboard lets teams:

  • inspect request logs
  • review approvals
  • watch health and provider status
  • track costs
  • manage API keys and secrets
  • manage runners and templates

Why Teams Buy This Instead Of Building It Themselves

Because the hard part is usually not “make one model call.”

The hard part is:

  • making the system safe enough for real customers
  • making costs understandable
  • giving non-engineers visibility
  • controlling agent execution environments
  • explaining failures when they happen

Curate-Me bundles those concerns into one operating layer instead of forcing teams to stitch together multiple tools and custom code.

Who Usually Benefits First

The first visible wins usually land with:

  • engineering teams that need cost and policy controls
  • platform or security teams that need guardrails
  • support and ops teams that need logs and traceability
  • finance and leadership teams that need spend visibility

If You Want The Technical Version

If You Want To Try It