EU AI Act Readiness Guide
Disclaimer: This guide and the linked readiness dashboard are tooling to help you identify configuration gaps. They are not legal advice and do not substitute for the conformity assessment required under Articles 43 and 47, nor for legal review of your obligations as a provider or deployer.
See also: Gateway EU AI Act Readiness for the automated readiness assessment engine and one-click remediation dashboard.
The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, with obligations phased in over three years.
This guide explains how Curate-Me’s gateway, governance chain, and managed runners help you meet the EU AI Act’s requirements for deploying and operating AI agents.
Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | Regulation enters into force |
| February 2, 2025 | Prohibited AI practices take effect |
| August 2, 2025 | Governance rules and obligations for general-purpose AI models |
| August 2, 2026 | High-risk AI system obligations (most relevant for agent operators) |
| August 2, 2027 | Full enforcement, including high-risk AI embedded in products regulated under Annex I |
Who Does This Apply To?
If you use LLMs or AI agents in your products or operations, the EU AI Act likely classifies you as a deployer. Deployers have specific obligations under Articles 26 and 27:
- Monitor AI system operation and report malfunctions
- Keep logs generated by the AI system for an appropriate period
- Ensure human oversight measures are in place
- Use the AI system according to the provider’s instructions of use
- Conduct a Data Protection Impact Assessment when required
If you build and distribute AI-powered products, you may also be a provider with additional obligations under Articles 16-25.
Risk Classification
The EU AI Act uses a four-tier risk framework. Your obligations scale with the risk level.
Unacceptable Risk (Article 5 — Prohibited)
AI practices that are banned outright:
- Social scoring (by public or private actors)
- Real-time biometric identification in public spaces (limited exceptions)
- Exploitation of vulnerable groups
- Subliminal manipulation techniques
Curate-Me: Does not provide or enable any prohibited AI practices.
High Risk (Articles 6-27, Annex III)
AI systems used in critical areas: biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
Runner mapping: Full VM Tools profile. These runners have unrestricted system access and may participate in high-risk decision pipelines, so the governance chain applies its controls mapped to Articles 9-15 to every request.
Limited Risk (Article 50)
AI systems with transparency obligations: chatbots must disclose they are AI, deepfake content must be labelled, emotion recognition systems must notify users.
Runner mapping: Web Automation profile. Desktop streaming provides real-time transparency. Audit trail captures all browser interactions.
Minimal Risk (Recital 15)
AI systems posing negligible risk: spam filters, AI-powered games, inventory management.
Runner mapping: Locked profile. Read-only filesystem, no network access, minimal risk surface.
Article-by-Article Mapping
Curate-Me’s readiness engine evaluates 8 EU AI Act articles and maps them to platform features:
Article 9 — Risk Management System
Requirement: Establish, implement, document, and maintain a risk management system throughout the AI system’s lifecycle.
Platform feature: The 6-step governance chain acts as a continuous risk management system. Every gateway request passes through:
- Rate limiting — prevents runaway agent loops
- Cost estimation — pre-flight cost check against budgets
- PII scanning — blocks sensitive data before it reaches providers
- Security scanning — detects prompt injection, jailbreaks, data exfiltration attempts
- Model allowlist — enforces approved models only
- HITL gate — routes high-risk operations to human reviewers
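The six steps can be pictured as a short-circuiting pipeline: the first failing check decides the outcome. The sketch below is illustrative only; the thresholds, model names, and toy PII/injection patterns are assumptions, not the gateway's actual rules.

```python
import re
from dataclasses import dataclass

@dataclass
class Request:
    model: str
    prompt: str
    estimated_cost: float

# Illustrative values -- real ones come from your governance policy
DAILY_BUDGET = 50.00                       # per-org daily spend limit (USD)
HITL_THRESHOLD = 5.00                      # cost above which a human must approve
ALLOWED_MODELS = {"gpt-4o", "claude-sonnet-4"}
PII_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}|\b\d{3}-\d{2}-\d{4}\b")  # API key / SSN

def governance_chain(req: Request, spent_today: float, requests_this_minute: int) -> str:
    """Run a request through the six steps in order; first failing step wins."""
    if requests_this_minute >= 60:                        # 1. rate limiting
        return "block:rate_limit"
    if spent_today + req.estimated_cost > DAILY_BUDGET:   # 2. cost estimation
        return "block:budget"
    if PII_PATTERN.search(req.prompt):                    # 3. PII scanning
        return "block:pii"
    if "ignore previous instructions" in req.prompt.lower():  # 4. security scan (toy)
        return "block:injection"
    if req.model not in ALLOWED_MODELS:                   # 5. model allowlist
        return "block:model"
    if req.estimated_cost > HITL_THRESHOLD:               # 6. HITL gate
        return "review:hitl"
    return "allow"
```

In the real gateway each step also writes its decision to the audit trail; that bookkeeping is omitted here.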
```bash
# Governance chain is enabled per-org via governance policies
curl -X GET https://api.curate-me.ai/gateway/admin/compliance/score \
  -H "Authorization: Bearer $JWT_TOKEN"
```
Article 11 — Technical Documentation
Requirement: Draw up and maintain technical documentation before the AI system is placed on the market.
Platform feature: Immutable audit trail records every gateway request, governance decision, runner lifecycle event, and cost calculation. Events are stored in MongoDB with no automatic deletion.
Article 12 — Record-Keeping
Requirement: Allow automatic recording of events (logs) over the lifetime of the system.
Platform feature: Time-travel debugging and session output recording capture every agent execution step. Enable via feature flags:
- `RUNNER_TIMELINE_DEBUG` — step-by-step replay of agent sessions
- `RUNNER_SESSION_OUTPUT` — full session I/O recording
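Assuming the flags are surfaced as environment variables (an assumption; your deployment may configure them elsewhere), enabling both could look like:

```shell
# Hypothetical deployment snippet -- flag names are from this guide,
# the env-var mechanism and values are illustrative
export RUNNER_TIMELINE_DEBUG=true    # step-by-step session replay
export RUNNER_SESSION_OUTPUT=true    # full session I/O recording
```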
Article 13 — Transparency
Requirement: Design AI systems so their operation is sufficiently transparent.
Platform feature: Multiple transparency layers:
- Desktop streaming — live VNC viewing of agent actions via Guacamole
- Agent traces — structured traces of every agent decision
- Request logging — full request/response capture in the audit trail
- Cost transparency — real-time cost tracking per request
Article 14 — Human Oversight
Requirement: Design AI systems so they can be effectively overseen by natural persons.
Platform feature: The HITL (Human-in-the-Loop) gate provides three configurable approval queues:
| Gate | Trigger | Use Case |
|---|---|---|
| Cost gate | Request estimated cost exceeds threshold | Prevent expensive runaway operations |
| Confidence gate | Model confidence below threshold | Review uncertain AI decisions |
| Content gate | Sensitive content detected | Review outputs before delivery |
Human reviewers can approve, reject, or modify requests before execution. All decisions are logged to the audit trail.
Additional oversight features:
- Runner emergency stop — one-click session termination
- Model allowlists — control which models each org can use
- Feature flags — instant platform-wide kill switches
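The three gates in the table above are checked in sequence; the first one that trips routes the request to its queue. A minimal sketch, with hypothetical thresholds and sensitive terms (the real values come from your policy):

```python
from typing import Optional

# Illustrative gate configuration
COST_GATE = 5.00          # USD per request
CONFIDENCE_GATE = 0.70    # model confidence floor
SENSITIVE_TERMS = {"medical", "termination", "credit decision"}

def hitl_route(cost: float, confidence: Optional[float], output: str) -> Optional[str]:
    """Return the first gate requiring human review, or None to auto-approve."""
    if cost > COST_GATE:
        return "cost_gate"
    if confidence is not None and confidence < CONFIDENCE_GATE:
        return "confidence_gate"
    if any(term in output.lower() for term in SENSITIVE_TERMS):
        return "content_gate"
    return None  # auto-approved; the decision is still logged to the audit trail
```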
Article 15 — Accuracy, Robustness, and Cybersecurity
Requirement: Achieve appropriate levels of accuracy, robustness, and cybersecurity.
Platform feature:
- PII scanning — regex-based detection of API keys, passwords, emails, SSNs, credit cards, and other sensitive patterns
- Model allowlists — restrict usage to tested and approved models
- Upstream resilience — retry logic with exponential backoff for provider failures
- Organization isolation — strict tenant isolation prevents cross-org data access
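Retry with exponential backoff is a standard resilience pattern; a minimal sketch, assuming transient provider failures surface as exceptions (the gateway's actual retry policy may differ):

```python
import random
import time

def call_with_backoff(call, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry a provider call, doubling the delay each attempt, with jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # 0.5s, 1s, 2s, ... plus random jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```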
Article 26 — Deployer Obligations
Requirement: Use high-risk AI systems in accordance with the provider’s instructions for use, with appropriate human oversight and monitoring of operation.
Platform feature:
- Daily budgets — per-org daily spending limits
- Per-request cost limits — maximum cost per individual API call
- Real-time cost tracking — Redis accumulator + MongoDB audit log
- Usage dashboards — visualize spend by model, provider, and time period
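The budget checks above amount to two comparisons before admitting a request. Here is a minimal in-memory stand-in for the Redis accumulator (the limits match the defaults mentioned in this guide; the data layout is hypothetical):

```python
from collections import defaultdict

spend = defaultdict(float)    # org_id -> spend accumulated today (USD)
DAILY_BUDGET = 50.00          # per-org daily limit
PER_REQUEST_LIMIT = 1.00      # maximum cost per individual call

def admit(org_id: str, estimated_cost: float) -> bool:
    """Admit a request only if both the per-request and daily limits hold."""
    if estimated_cost > PER_REQUEST_LIMIT:
        return False
    if spend[org_id] + estimated_cost > DAILY_BUDGET:
        return False
    spend[org_id] += estimated_cost  # accumulate on admit (audit write omitted)
    return True
```

In production the accumulator would live in Redis (so limits hold across gateway instances) and reset daily; the MongoDB audit log keeps the durable record.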
Article 19 — Record-Keeping Obligations
Requirement: Keep automatically generated logs for at least six months, unless Union or national law requires a longer period (Article 19 for providers; Article 26(6) for deployers).
Platform feature: Audit trail records are retained indefinitely by default. No TTL indexes or cleanup jobs delete records before the 6-month minimum. The readiness engine surfaces the oldest retained event so you can verify the retention window.
Readiness Scoring
Curate-Me’s readiness engine scores your organization’s platform configuration against each article on a 0-100 scale. The overall score is the average across all articles. A score is not a compliance certification — it is a signal about which controls are configured.
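The aggregation is an unweighted mean; as a sketch (the article IDs are illustrative):

```python
def overall_score(article_scores: dict[str, int]) -> float:
    """Overall readiness = unweighted mean of per-article scores (0-100)."""
    return sum(article_scores.values()) / len(article_scores)
```

So an organization scoring 100, 80, and 60 on three articles would see an overall score of 80.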
```bash
# Get readiness score
curl -X GET https://api.curate-me.ai/gateway/admin/compliance/score \
  -H "Authorization: Bearer $JWT_TOKEN"

# Get full report with per-article details
curl -X GET https://api.curate-me.ai/gateway/admin/compliance/report \
  -H "Authorization: Bearer $JWT_TOKEN"

# Apply one-click remediation
curl -X POST https://api.curate-me.ai/gateway/admin/compliance/remediate/art_9 \
  -H "Authorization: Bearer $JWT_TOKEN"
```
One-Click Remediation
Several configuration gaps can be fixed instantly:
| Fix ID | What It Does |
|---|---|
| `enable_governance_policy` | Creates a default policy with rate limits, budgets, and PII scanning |
| `enable_pii_scanning` | Enables PII scanning with blocking action |
| `enable_hitl` | Sets HITL approval threshold to $5.00 per request |
| `enable_daily_budget` | Sets daily budget to $50.00 and per-request limit to $1.00 |
| `enable_desktop_streaming` | Guidance to enable the RUNNER_DESKTOP_STREAMING feature flag |
Exporting Evidence Packs
For audits and regulatory reviews, export the platform-generated evidence the readiness engine relies on:
```bash
# JSON evidence pack (machine-readable)
curl -X GET "https://api.curate-me.ai/gateway/admin/compliance/export?format=json" \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -o compliance-evidence.json

# CSV audit export (spreadsheet-friendly)
curl -X GET "https://api.curate-me.ai/gateway/admin/compliance/export?format=csv" \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -o compliance-audit.csv
```
Evidence packs include:
- Full compliance report with per-article scores
- Governance policy configuration snapshot
- Remediation history with timestamps
- Audit trail event summary
PII Patterns Detected
The gateway PII scanner detects the following patterns before any data reaches LLM providers:
- API keys (OpenAI, Anthropic, AWS, Google, etc.)
- Passwords and bearer tokens
- Email addresses
- Social Security Numbers (SSNs)
- Credit card numbers
- Phone numbers
- JWT tokens
- Private keys (RSA, EC, etc.)
Detected PII is blocked in-memory and never persisted or forwarded to upstream providers.
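A scanner of this kind can be approximated with a table of compiled regexes checked against each outbound payload. The patterns below are simplified stand-ins for illustration, not the gateway's production rules:

```python
import re

# Simplified illustrative patterns -- production rules are more extensive
PII_PATTERNS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "email":          re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "jwt":            re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of all PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
```

A request whose `scan` result is non-empty would be blocked before leaving the gateway, with only the pattern names (never the matched values) recorded in the audit trail.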
Dashboard
The Compliance Dashboard in the Curate-Me console provides:
- Overall compliance score with circular progress indicator
- Per-article breakdown with status badges (compliant, partial, non-compliant)
- One-click remediation buttons for quick fixes
- Evidence export in JSON and CSV formats
- Comprehensive documentation with risk classification, transparency checklists, human oversight mapping, and data governance details
Access it at https://dashboard.curate-me.ai/compliance.
Further Reading
- EU AI Act full text (EUR-Lex)
- European Commission AI Act overview
- Curate-Me Governance Chain — feature flags that control compliance features
- Runner Security Model — security architecture that supports compliance