AI Agent Permissions: Principle of Least Privilege Applied to AI
By Diesel
Tags: security, permissions, least-privilege
Here's a pattern I see constantly. Someone builds an AI agent. They give it broad tool access because "it needs to be flexible." They connect it to databases, APIs, file systems, cloud consoles. They ship it. Then they act surprised when it does something catastrophic with permissions nobody thought to restrict.
The principle of least privilege has been a security fundamental since the 1970s. Saltzer and Schroeder wrote about it in 1975. Fifty years later, we're making the same mistakes with AI agents that we made with human user accounts in the early days of computing.
Let's not.
## Why Agents Need Stricter Permissions Than Humans
A human user with excessive permissions is a risk. An AI agent with excessive permissions is a multiplied risk. Here's why.
**Speed.** A human might accidentally delete a database table once. An agent can accidentally delete every table in a loop faster than you can reach for the kill switch.
**Context blindness.** Humans understand the consequences of their actions through experience and common sense. An agent doesn't have that. It follows instructions and uses available tools. If the tool is available, it's a valid option from the agent's perspective.
**Attack amplification.** If an agent gets compromised through prompt injection or any other attack vector, the attacker inherits every permission the agent has. Excessive permissions turn a contained breach into a total system compromise. For a deeper look, see [sandboxing](/blog/sandboxing-ai-agents-containment).
**Non-obvious action chains.** Agents chain tools together in ways developers don't always anticipate. Read access to a database plus write access to an API plus network access can combine into data exfiltration even if none of those permissions seem dangerous individually.
## The Permission Model Most Teams Use (And Why It's Wrong)
Most teams start with what I call the "admin by default" pattern. They give the agent access to everything during development because it's easier. Then they ship that same configuration to production because restricting permissions would require actual thought about what the agent needs.
I've reviewed agent architectures where a customer support bot had write access to the production database. Where a code review agent had push access to main. Where a document summariser could send emails. None of these capabilities were needed. All of them were available.
The excuse is always some variation of "but what if it needs to?" "What if" is not a security policy. If you can't articulate a specific, concrete scenario where the agent needs a permission, the agent doesn't need that permission.
## Implementing Least Privilege for Agents
Here's how I approach permissions in every agent system I build.
### Start With Zero
Begin with an agent that can do nothing. Literally nothing. No tools, no API access, no file system access. Then add permissions one at a time, each justified by a specific use case. Document why each permission exists. If you can't write a sentence explaining why the agent needs it, don't grant it.
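Here's a sketch of what that discipline can look like in code. The `PermissionRegistry` class and the tool names are hypothetical, not any framework's API; the point is that a grant without a written justification is refused outright:

```python
class PermissionRegistry:
    """The agent starts with nothing; every grant must carry a justification."""

    def __init__(self):
        self._grants = {}  # tool name -> documented reason it exists

    def grant(self, tool_name: str, justification: str):
        # If you can't write the sentence, you don't get the permission.
        if not justification.strip():
            raise ValueError(f"refusing to grant '{tool_name}' without a justification")
        self._grants[tool_name] = justification

    def is_allowed(self, tool_name: str) -> bool:
        return tool_name in self._grants


registry = PermissionRegistry()
registry.grant("read_customer_record",
               "Support bot must look up the caller's own account")
```

The registry doubles as documentation: the justification strings are exactly the audit trail you'll want later.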
### Scope Narrowly
"Database access" is not a permission. "Read access to the customers table, filtered to the current user's records" is a permission. The granularity matters enormously.
Instead of giving your agent a generic HTTP client, give it specific API wrappers that only expose the endpoints it needs. Instead of file system access, give it access to a specific directory. Instead of "can send emails," give it "can send emails to the current user's verified address."
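To make the directory example concrete, here's a minimal sketch of a path-scoped write tool. The `/tmp/agent_output` directory and the `safe_write` helper are assumptions for illustration; the technique is resolving every requested path and rejecting anything that escapes the designated directory:

```python
from pathlib import Path

# Assumed designated output directory; everything else on disk is off limits.
OUTPUT_DIR = Path("/tmp/agent_output")

def safe_write(relative_name: str, content: str) -> Path:
    """Write only inside OUTPUT_DIR; reject traversal like '../../etc/passwd'."""
    root = OUTPUT_DIR.resolve()
    root.mkdir(parents=True, exist_ok=True)
    # Resolve the full path first, THEN check containment, so '..' can't sneak out.
    target = (root / relative_name).resolve()
    if root not in target.parents:
        raise PermissionError(f"path escapes the output directory: {relative_name}")
    target.write_text(content)
    return target
```

The same shape works for API wrappers: expose one function per endpoint the agent needs, and no generic client at all.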
### Separate Read From Write
This one's obvious but constantly violated. An agent that analyses data rarely needs to modify it. An agent that generates reports doesn't need to update the underlying records. Read and write are different permissions. Treat them that way. The piece on [prompt injection](/blog/prompt-injection-attacks-ai-agents) is worth reading alongside this one.
### Use Temporal Scoping
Some permissions should only be active during specific workflow phases. An agent processing a refund needs payment API access during the refund step, not during the initial customer inquiry. Grant permissions for the duration of the task, then revoke them.
This is harder to implement than static permissions, but it dramatically reduces your attack window. Most frameworks support middleware or hook patterns where you can inject and remove tool access at step boundaries.
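A minimal sketch of that step-boundary pattern, assuming a hypothetical `AgentToolbelt` that tracks the currently active tool set: a context manager grants a tool on entry and revokes it on exit, so the permission only exists inside the workflow step that needs it:

```python
from contextlib import contextmanager

class AgentToolbelt:
    """Tracks which tools the agent may call right now."""

    def __init__(self):
        self.active = set()

    @contextmanager
    def scoped(self, *tools):
        # Grant on entry, revoke on exit -- even if the step raises.
        self.active |= set(tools)
        try:
            yield
        finally:
            self.active -= set(tools)


belt = AgentToolbelt()
with belt.scoped("payments.refund"):
    # Payment API exists only during the refund step.
    assert "payments.refund" in belt.active
# Back to the inquiry phase: the tool is gone.
assert "payments.refund" not in belt.active
```

One caveat with this sketch: revoking on exit removes the tool even if an outer scope had granted it too; a production version would track grant counts rather than a flat set.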
### Implement Permission Boundaries, Not Just Lists
A permission list says "the agent can call these tools." A permission boundary says "the agent can call these tools, with these parameter constraints, at this rate, during this context."
For example:
- The agent can query the database, but only with parameterised queries (no raw SQL)
- The agent can send up to 3 emails per conversation (rate limit)
- The agent can modify records only for the authenticated user's organisation (scope boundary)
- The agent can create files only in the designated output directory (path constraint)
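For instance, the email rule above might look like this. The `EmailBoundary` class and its numbers are illustrative, not a real framework API; the point is that the recipient constraint and the rate limit travel together with the tool, instead of living in a list somewhere else:

```python
from dataclasses import dataclass

@dataclass
class EmailBoundary:
    """A boundary, not a list: who, how often, within which conversation."""
    max_per_conversation: int = 3
    sent: int = 0

    def check(self, recipient: str, verified_address: str):
        # Scope boundary: only the current user's verified address.
        if recipient != verified_address:
            raise PermissionError("may only email the user's verified address")
        # Rate limit: at most N emails per conversation.
        if self.sent >= self.max_per_conversation:
            raise PermissionError("email limit reached for this conversation")
        self.sent += 1
```

The same structure fits the other bullets: a query boundary holds the parameterisation rule, a file boundary holds the path constraint.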
Boundaries are more work than lists. They're also more secure by orders of magnitude.
## The Runtime Enforcement Problem
Here's where it gets tricky. LLMs don't respect permissions natively. You can tell a model "you don't have permission to do X" in the system prompt, and it might still try. Permission enforcement has to happen at the tool layer, not the prompt layer.
When your agent calls a tool, the tool execution layer should validate:
1. Is this tool in the agent's current permission set?
2. Are the parameters within the allowed boundaries?
3. Is this action consistent with the current workflow state?
4. Has the rate limit been exceeded?
If any check fails, the tool call is rejected before execution. The agent gets an error message explaining why. This is non-negotiable. Prompt-level restrictions are suggestions. Code-level restrictions are enforcement. This connects directly to [data access control in RAG](/blog/rag-access-control-permissions).
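Those four checks can be sketched as a single gate in front of tool execution. Everything here is hypothetical scaffolding (the `execute_tool` function, the agent and tool dictionaries, their field names); the structure is what matters: validate in order, reject before execution, return the reason to the agent:

```python
def execute_tool(agent, tool_name, params, workflow_state, tools):
    """Gate every call at the tool layer; reject before execution, never after."""
    spec = tools.get(tool_name)
    # 1. Is this tool in the agent's current permission set?
    if tool_name not in agent["permissions"] or spec is None:
        return {"error": f"'{tool_name}' is not in your permission set"}
    # 2. Are the parameters within the allowed boundaries?
    if not spec["validate"](params):
        return {"error": "parameters are outside the allowed boundaries"}
    # 3. Is this action consistent with the current workflow state?
    if workflow_state not in spec["states"]:
        return {"error": f"'{tool_name}' is not allowed during '{workflow_state}'"}
    # 4. Has the rate limit been exceeded?
    used = agent["calls"].get(tool_name, 0)
    if used >= spec["rate_limit"]:
        return {"error": f"rate limit reached for '{tool_name}'"}
    agent["calls"][tool_name] = used + 1
    return {"result": spec["fn"](**params)}
```

Note that the error messages go back to the agent as tool output, so it can recover or escalate instead of silently retrying.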
## Permission Auditing
Permissions that aren't audited might as well not exist. Log every tool call, every permission check (passed and failed), and every permission change. This gives you:
- **Forensic capability.** When something goes wrong, you can trace exactly what the agent did and which permissions allowed it.
- **Optimisation data.** Permissions that are never used should be removed. If your agent has database write access but hasn't used it in three months, it probably doesn't need it.
- **Anomaly detection.** A sudden spike in permission usage or attempts to use revoked permissions signals a potential compromise.
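A minimal structured audit record, assuming an append-only list stands in for your real log sink, can capture enough for all three uses, forensics, pruning, and anomaly detection:

```python
import json
import time

def audit(log, agent_id, tool, params, decision, reason=""):
    """Append one structured record per permission check, passed or failed."""
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool,
        "params": params,
        "decision": decision,   # "allowed" or "denied"
        "reason": reason,       # why a denial happened, for forensics
    }
    log.append(json.dumps(entry))  # one JSON line per event
    return entry
```

Because every entry is a JSON line, "which permissions went unused for three months" and "who keeps hitting denials" become one-line queries over the log.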
## The Organisational Challenge
Technical implementation is the easy part. The hard part is convincing teams that restrictions are features, not limitations.
Developers resist permission restrictions because they slow down development. Product managers resist them because they limit flexibility. Executives resist them because they don't understand the risk until it materialises.
The argument I've found most effective: "Would you give a new hire admin access to every system on their first day?" The answer is always no. Your AI agent is a new hire that never gains experience, never develops judgement, and processes thousands of requests per hour. It should have stricter access controls than any human on your team.
Build the walls first. Open doors deliberately. Document every door you open and why. That's least privilege for AI agents. It's not complicated. It just requires the discipline to do it before the incident instead of after.