EU AI Act and Agent Deployment: What You Need to Know
By Diesel
governanceregulationeu-ai-actcompliance
The EU AI Act isn't coming. It's here. The regulation entered into force in August 2024, with enforcement provisions phasing in through 2027. If you're building or deploying AI agents that touch EU citizens, EU data, or EU markets, this applies to you. Even if your company is based in San Francisco.
I've watched teams react to this in three ways: panic, denial, and pragmatic adaptation. Let's skip the first two and get straight to the third.
## What the AI Act Actually Says (Without the Legal Jargon)
The AI Act classifies AI systems by risk level and imposes requirements proportional to that risk. Four tiers, ascending in regulatory burden.
**Unacceptable risk.** Banned outright. Social scoring by governments. Real-time biometric surveillance in public spaces (with narrow exceptions). Manipulation of vulnerable groups. Emotion recognition in workplaces and schools. If your agent does any of these, stop building it.
**High risk.** Heavy regulatory requirements. This includes AI used in critical infrastructure, education, employment, essential services, law enforcement, border control, and administration of justice. Also includes AI that's a safety component of products already covered by EU safety legislation.
**Limited risk.** Transparency obligations. If your system interacts with people, they need to know they're talking to an AI. If it generates synthetic content, that content needs to be labelled. If it's an emotion recognition or biometric categorisation system, users need to be informed.
**Minimal risk.** No specific requirements beyond existing law. Most AI applications fall here. Spam filters, game AI, inventory optimisation. The EU explicitly encourages voluntary codes of conduct for these systems, but doesn't mandate specific compliance measures.
## Where AI Agents Land
Here's where it gets interesting for agent builders. The AI Act was drafted primarily with traditional AI systems in mind: classifiers, recommenders, decision-support tools. Autonomous agents that reason, plan, and take actions are a newer paradigm that doesn't fit neatly into the original categories.
Most AI agents will be classified based on their domain of application, not their architecture. An agent that screens job applicants is high-risk because employment AI is high-risk. An agent that manages power grid operations is high-risk because critical infrastructure AI is high-risk. An agent that helps users find recipes is minimal risk regardless of how sophisticated its reasoning is. The related post on [enterprise governance frameworks](/blog/ai-governance-frameworks-enterprise) goes further on this point.
But agents have properties that amplify risk classification. An agent's ability to take autonomous actions, chain multiple decisions, and operate with reduced human oversight pushes it toward higher risk categories compared to a passive AI system in the same domain.
If your agent makes decisions that materially affect people (employment, credit, insurance, education, healthcare), assume it's high-risk until a qualified legal review says otherwise.
## High-Risk Requirements That Matter for Agent Architecture
If your agent is classified as high-risk, the AI Act requires specific technical and organisational measures. Here's what actually impacts your architecture.
### Risk Management System
You need a documented, ongoing risk management process. Not a one-time risk assessment. A living system that identifies, analyses, evaluates, and mitigates risks throughout the AI system's lifecycle.
For agents, this means continuous monitoring of agent behaviour, regular reassessment as capabilities change, and documented procedures for when risks materialise. Every prompt update, model change, or tool addition should trigger a risk review.
### Data Governance
Training data, evaluation data, and operational data all need to be governed. Relevant, representative, free of errors to the extent possible, and appropriate for the intended purpose.
For agents that use retrieval-augmented generation, this extends to the data in your knowledge base. Outdated, biased, or incorrect documents in your retrieval system can lead to outputs that violate the Act's requirements for accuracy and robustness.
### Technical Documentation
You need comprehensive documentation of your AI system. Design specifications, development methodology, training procedures, performance metrics, known limitations, and the risk management measures you've implemented.
This is the documentation that the market surveillance authority reviews. "We used GPT-4 with a custom prompt" isn't sufficient. They want to see the full system architecture, data flows, decision processes, and validation results.
### Record Keeping (Logging)
High-risk AI systems must automatically log events to enable traceability. The logs must capture the system's operation at a level of detail appropriate to the system's purpose and risk level.
For agents, this means comprehensive audit trails. Every tool call, every decision, every external data access, logged and retained. I wrote about this in detail in my article on auditing agent decisions. The AI Act makes that audit trail a legal requirement, not just a best practice.
### Transparency and User Information
Users need to be informed that they're interacting with an AI system. They need to understand its capabilities and limitations. And for systems that make decisions affecting individuals, the affected person has a right to an explanation. This connects directly to [audit trails and explainability](/blog/auditing-ai-agent-decisions).
For agents, this means your system needs to be explainable. Not just "the model said so." A meaningful explanation of the factors that influenced the decision and how they were weighted.
### Human Oversight
High-risk AI systems must be designed to allow effective human oversight. This doesn't necessarily mean human-in-the-loop for every decision, but it does mean humans must be able to understand the system's behaviour, intervene when necessary, and override its decisions.
For autonomous agents, this is the big one. Full autonomy without human oversight capability doesn't comply. You need monitoring dashboards, intervention mechanisms, kill switches, and escalation paths.
### Accuracy, Robustness, and Cybersecurity
The system must achieve appropriate levels of accuracy and robustness, and be resilient against attacks. For agents, this includes resilience against prompt injection, data poisoning, and adversarial inputs.
The Act doesn't specify exact thresholds, but it does require that you test for these properties and document the results.
## General-Purpose AI Model Obligations
If you're building on top of a general-purpose AI model (which most agent builders are), there are separate obligations for the model provider. Technical documentation, training data transparency, copyright compliance, and for models with "systemic risk" (high compute training runs), additional requirements including red teaming and incident reporting.
As an agent builder using a third-party model, you need to verify that your model provider meets these obligations. Their compliance is a prerequisite for your compliance.
## Enforcement and Penalties
The penalties are designed to get attention. Up to 35 million euros or 7% of global annual turnover for using prohibited AI practices. Up to 15 million euros or 3% of turnover for violations of other requirements. Up to 7.5 million euros or 1% of turnover for supplying incorrect information.
The percentages apply to global turnover, not just EU revenue. And "turnover" means consolidated group turnover. A subsidiary deploying AI in the EU exposes the parent company to penalties based on the entire group's revenue.
## Practical Steps for Agent Builders
Here's what I recommend for any team deploying agents in or for EU markets.
**Classify your system now.** Don't wait for enforcement. Determine whether your agent falls into high-risk or limited-risk categories. If you're unsure, get legal advice. The cost of a legal opinion is trivial compared to a misclassification. This connects directly to [compliance monitoring agents](/blog/compliance-monitoring-ai-agents).
**Build compliance into the architecture.** Logging, monitoring, human oversight, and documentation aren't features you bolt on later. They're architectural requirements. Adding them retroactively is expensive, error-prone, and usually incomplete.
**Document everything.** The AI Act is documentation-heavy. Technical documentation, risk assessments, data governance records, performance evaluations. Start documenting now, even if enforcement is still phasing in.
**Prepare for conformity assessments.** High-risk systems require a conformity assessment before market deployment. Some categories require third-party assessment. Know which category you're in and what the assessment involves.
**Watch the implementing acts.** The AI Act delegates many specifics to implementing acts and harmonised standards that are still being developed. The details of what "adequate logging" or "effective human oversight" mean in practice will become clearer as these standards emerge.
## The Strategic View
The EU AI Act will become a de facto global standard, just as GDPR became the global privacy standard. Companies are already adapting to comply with GDPR everywhere, not just in the EU. The same will happen with AI regulation.
Building compliant agent systems now isn't just a European market requirement. It's preparation for the regulatory environment that's coming everywhere. Every major economy is developing AI regulation, and they're all looking at the EU's framework as a starting point.
The companies that treat compliance as a competitive advantage rather than a burden will be the ones that scale globally while their competitors are still arguing with lawyers.