Blog | Mar 04, 2026

Treat Your AI Agents Like Interns, Not Code

We need a better way to think about AI agents.

Right now, many teams are thinking about them the wrong way.

We’re seeing more examples of AI being put into important roles and sometimes causing real damage. AWS reportedly experienced an outage linked to an AI-driven tool that destroyed and recreated an environment in production. A Meta superintelligence safety research lab executive had their email inbox wiped out by an autonomous OpenCLAW agent. These aren’t small test cases. These are real systems, with real impact.

The problem isn’t that AI is evil or useless.

The problem is that we’re using the wrong mental model.

Most people think of AI in one of two ways:

  • It’s intelligent, almost like a digital employee.
  • It’s automation, just smarter code.

It’s neither.

AI is not as predictable as normal software. It doesn’t produce the exact same output every time, and it isn’t perfectly reliable.

And it’s not truly intelligent. It doesn’t have experience. It doesn’t understand consequences. It doesn’t really know what happens after it takes an action.

AI agents are powerful. They can write, analyze, summarize, and even take actions quickly.

But they are inexperienced.

They don’t understand impact.

They are interns.

What We Should Use AI For

Interns are extremely helpful when used correctly.

They give your team more capacity. They take on real work. They help senior team members focus on the most important problems.

That’s how we should use AI.

Use AI to increase your team’s impact.

Use it to move faster.

Use it to handle work that would typically take too much time.

Ask yourself: what would I give to a bright intern with some guidance?

  • Researching competitors
  • Building sales materials
  • Writing documentation
  • Prototyping new ideas

These tasks benefit from speed and iteration. They need review, but not perfect accuracy on the first try.

Where AI should not be used is in place of clear, predictable automation.

If a task must work the same way every single time, use code.

If you are deploying production systems, running CI/CD pipelines, or making critical system changes, that work should be handled by deterministic systems with strict controls.

You wouldn’t replace your deployment pipeline with an intern.

You shouldn’t replace it with an AI agent either.

How We Should Use AI

The harder question with AI agents is permissions.

There are two common approaches today, and both have risks.

Option 1: Give the AI broad, independent permissions. 

This creates a privilege-escalation problem. Anyone who can prompt the AI can effectively borrow its permissions and get it to do things they wouldn’t be allowed to do themselves.

Option 2: Let the AI use the same permissions as the person asking it. 

This feels safer. But it still creates risk. If a user has permission to delete something important, the AI can now do that instantly and at scale. We’ve all seen someone run rm -rf in the wrong folder. Now imagine that happening automatically and quickly.

In both cases, we’re making the same mistake.

We’re assuming the AI will get it right. We assume it will fully understand our instructions. We assume it knows when to slow down or ask questions.

That’s not realistic.

Interns need guardrails. Interns need clear instructions. Interns need approvals for risky actions.

You would never let an intern restart a production cluster without review.

You would never give them unlimited access to critical systems and hope for the best.

AI should be treated the same way.

Give it only the access it needs for its specific job.

Separate environments to limit damage if something goes wrong.

Require human approval for destructive or irreversible actions.

Log and monitor what it does.

Design your systems expecting that it will sometimes misunderstand you.

Because it will.
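The guardrails above can be sketched in code. This is a minimal, hypothetical wrapper around an agent’s tool calls; the names (guarded_call, ALLOWED_TOOLS, human_approves, and the tools themselves) are illustrative assumptions, not any real agent framework’s API.

```python
# Hypothetical sketch of guardrails around an AI agent's tool calls.
# All names here are illustrative, not a real framework's API.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-audit")

# Least privilege: the agent may only invoke tools on this allowlist.
ALLOWED_TOOLS = {"read_file", "search_docs", "draft_email", "delete_file"}

# Destructive or irreversible actions require a human in the loop.
APPROVAL_REQUIRED = {"delete_file"}


def human_approves(tool: str, args: dict) -> bool:
    """Stand-in for a real approval flow (chat ping, ticket, etc.)."""
    answer = input(f"Agent wants to run {tool}({args}). Approve? [y/N] ")
    return answer.strip().lower() == "y"


def guarded_call(tool: str, args: dict, registry: dict):
    """Run a tool on the agent's behalf, enforcing the guardrails."""
    # 1. Only the access it needs for its specific job.
    if tool not in ALLOWED_TOOLS:
        log.warning("BLOCKED %s %s (not on allowlist)", tool, args)
        raise PermissionError(f"{tool} is not allowed for this agent")
    # 2. Human approval for destructive or irreversible actions.
    if tool in APPROVAL_REQUIRED and not human_approves(tool, args):
        log.info("DENIED %s %s (no human approval)", tool, args)
        raise PermissionError(f"{tool} requires human approval")
    # 3. Log and monitor everything the agent actually does.
    log.info("RUN %s %s", tool, args)
    return registry[tool](**args)
```

The point isn’t this exact code. It’s that the checks live outside the model, in deterministic code you control, so a misunderstood instruction hits a wall instead of production.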

The Right Mental Model

If you treat AI like code, you’ll be surprised when it acts unpredictably.

If you treat AI like true intelligence, you’ll be surprised when it shows poor judgment.

If you treat AI like an intern, you’ll build the proper guardrails around it.

Let it draft, research, build, and move work forward.

Let it increase your team’s output.

But don’t confuse speed and capability with experience and judgment.

AI can be a powerful force multiplier.

Just don’t hand the intern the keys to production and walk away.