Why Your AI Agents Are Headed for a Crash (And How to Stop It)
As teams race to deploy AI agents, a new kind of technical debt is emerging—one that could make past software failures look tame. Here's what 'agentic debt' means for your career.
Here's a question that keeps me up at night: Why do we keep forgetting the lessons we've already learned?
I'm watching teams sprint toward AI agents with the same breathless energy that defined the early cloud migration days, the microservices gold rush, and every other architectural shift that promised to change everything. And I'm seeing something troubling—a kind of collective amnesia where we're treating AI agents like they're somehow exempt from the architectural principles we fought so hard to establish.
Tracy Bannon, a senior principal software architect at MITRE, has a name for this phenomenon: architectural amnesia. At QCon AI NY 2025, she delivered a talk that should be required viewing for anyone building with AI right now. Her central warning? "Everyone is talking about AI 'productivity.' Very few are talking about the architectural amnesia that comes with it."
The Birth of Agentic Debt
Technical debt is a concept every developer knows intimately—those shortcuts and compromises that speed up initial development but come back to haunt you later. Now we're facing something new: agentic debt.
Bannon describes agentic debt as what happens when autonomy grows faster than architectural discipline. And this isn't just theoretical hand-wringing. According to InfoQ's coverage of her talk, research indicates that a majority of technology decision-makers expect technical debt severity to rise in the near term specifically due to AI-driven complexity.
What fascinates me from a cognitive science perspective is why this keeps happening. We're pattern-matching machines, but we're also terrible at applying old patterns to things that feel new and shiny. AI agents feel different, so our brains tell us the old rules don't apply. They do.
Not All "Agents" Are Created Equal
Part of the confusion comes from language. We're using words like "bots," "assistants," and "agents" interchangeably, but Bannon argues these represent fundamentally different risk profiles.
Bots are scripted responders—think of them as fancy if-then statements that react to predefined triggers. Assistants collaborate with humans and remain under human control (your IDE's autocomplete suggestions, for example). But agents? Agents are goal-driven actors capable of making decisions and taking actions across systems.
That distinction matters. An assistant that suggests code changes operates in a completely different risk universe than an agent that can autonomously commit to production, trigger deployments, or modify database schemas.
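One way to make that distinction concrete is to encode it in types. This is a minimal sketch, not Bannon's framework—the tier names and the actions in the allow-list are my own illustrative assumptions—but it shows the core idea: what an agent may do unattended should be an explicit, reviewable decision, not an emergent one.

```python
from enum import Enum, auto

class AutonomyTier(Enum):
    BOT = auto()        # scripted responder: reacts to predefined triggers
    ASSISTANT = auto()  # collaborates; a human applies every suggestion
    AGENT = auto()      # goal-driven: acts across systems on its own

# Hypothetical mapping from tier to the actions permitted without a
# human in the loop. An AGENT's allowed surface (its blast radius)
# is deliberately narrow and written down where reviewers can see it.
UNATTENDED_ACTIONS = {
    AutonomyTier.BOT: {"reply_to_trigger"},
    AutonomyTier.ASSISTANT: set(),          # suggestions only
    AutonomyTier.AGENT: {"open_pr", "run_tests"},
}

def is_permitted(tier: AutonomyTier, action: str) -> bool:
    """Check an action against the tier's unattended allow-list."""
    return action in UNATTENDED_ACTIONS[tier]
```

Note that `is_permitted(AutonomyTier.AGENT, "deploy_to_prod")` is false by construction: production deployment was never added to the allow-list, so an agent can't drift into it.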
Bannon outlined what she calls autonomy patterns—recurring levels of agent independence that appear across the software development lifecycle, from scripted bots reacting to triggers up to goal-driven agents acting across systems.
The progression isn't just about capability—it's about blast radius. The higher you go, the more damage a failure can cause.
What Agentic Debt Actually Looks Like
So what are we actually risking when we move too fast? Bannon connected agentic debt to a set of familiar failure modes—the same ones we've been fighting in large systems for years.
None of these are new problems. We've dealt with them in distributed systems, microservices architectures, and cloud environments. But AI agents magnify the stakes. As Bannon noted, "AI does not introduce fundamentally new failure modes, but it magnifies existing ones by accelerating change and increasing the blast radius of mistakes."
That acceleration is the key insight. In traditional systems, architectural mistakes reveal themselves gradually. With autonomous agents, they can compound exponentially.
The Identity Problem You're Probably Ignoring
If there's one takeaway that should change how you architect AI systems today, it's this: identity is foundational.
Bannon emphasized that every agent must have a unique, revocable identity. When something goes wrong (not if, when), you need to answer three questions immediately:
1. What can this agent access?
2. What actions has it taken?
3. How do we stop it?
Without proper identity management, you're flying blind. The industry is starting to recognize this—Microsoft recently introduced Entra Agent ID, and tools like Collibra's AI agent registry are emerging to bring identity governance to autonomous systems. But many teams are still treating agent identity as an afterthought.
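What would a revocable agent identity look like in practice? The sketch below is purely illustrative—the class and method names are my assumptions, not any particular product's API—but it shows how even a toy registry can answer all three questions above: scoped access, an append-only action log, and a kill switch.

```python
import uuid
from datetime import datetime, timezone

class AgentRegistry:
    """Toy registry: each agent gets a unique, revocable identity,
    a scoped access list, and an append-only action log."""

    def __init__(self):
        self._agents = {}

    def register(self, name: str, scopes: set[str]) -> str:
        agent_id = str(uuid.uuid4())  # unique identity per agent
        self._agents[agent_id] = {
            "name": name, "scopes": scopes,
            "revoked": False, "log": [],
        }
        return agent_id

    def record_action(self, agent_id: str, action: str) -> bool:
        entry = self._agents[agent_id]
        if entry["revoked"] or action not in entry["scopes"]:
            return False  # out of scope or revoked: refuse
        entry["log"].append((datetime.now(timezone.utc), action))
        return True

    def scopes(self, agent_id: str) -> set[str]:
        return self._agents[agent_id]["scopes"]     # Q1: what can it access?

    def history(self, agent_id: str) -> list:
        return list(self._agents[agent_id]["log"])  # Q2: what has it done?

    def revoke(self, agent_id: str) -> None:
        self._agents[agent_id]["revoked"] = True    # Q3: how do we stop it?
```

The design choice worth noticing: revocation is a single flag checked on every action, so "how do we stop it?" has a one-line answer instead of a scramble through credentials scattered across systems.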
The Decision-Making Discipline We Need
Here's where my cognitive science background makes me especially nerdy: Bannon encouraged teams to start with "why" rather than "how" when deploying agents. This maps directly to how our brains actually make good decisions versus impulsive ones.
She described decisions as optimizations that always involve tradeoffs—value versus effort, speed versus quality. Making those tradeoffs explicit before increasing autonomy is the difference between deliberate architecture and technical debt.
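Making a tradeoff explicit can be as mundane as refusing to raise an agent's autonomy level until someone has written down what's gained, what's risked, and who signed off. A hedged sketch—the field names here are my own, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class AutonomyChange:
    """A promotion request: raising an agent's autonomy must carry
    an explicit, reviewable statement of the tradeoff being made."""
    agent: str
    from_level: int
    to_level: int
    value: str = ""   # what we gain (speed, coverage, ...)
    cost: str = ""    # what we risk (blast radius, review burden, ...)
    approvers: list[str] = field(default_factory=list)

def may_apply(change: AutonomyChange) -> bool:
    # Deliberate architecture: no silent autonomy increases.
    return bool(change.value and change.cost and change.approvers)
```

A gate like this is trivial code, but it forces the conversation Bannon is asking for: the tradeoff gets stated before the autonomy ships, not reconstructed after the incident.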
Bannon put it bluntly: "We chase visible activity metrics... and quietly starve the work that keeps systems healthy: design, refactoring, validation, threat modeling."
This resonates because it's true not just technically but psychologically. We're wired to respond to immediate, visible wins. Long-term architectural health doesn't trigger the same dopamine hit as watching an agent autonomously complete a task. But guess which one actually matters for your system's longevity?
What This Means for Your Career
If you're building with AI agents right now—and statistically, you probably are or will be soon—this isn't just about avoiding technical debt. It's about positioning yourself as someone who gets it.
The industry is moving from "move fast and break things" to what I'd call "move deliberately and build things that last." The developers and architects who understand governance patterns, identity management, and observability in agentic systems are going to be the ones leading teams, not just writing code.
Gartner identified agentic AI as a top strategic technology trend for 2025. That means every organization is trying to figure this out. Be the person who can architect these systems responsibly.
The Path Forward
Bannon's closing message was optimistic: the core practices of software architecture remain valid. We don't need to learn entirely new disciplines—we need to remember and apply what we already know.
Start by asking better questions. Before granting an agent more autonomy, know what it can access, what actions it can take, and how you would stop it.
The recorded videos from QCon AI are expected to be available starting January 15, 2026, according to InfoQ. If you're serious about building with agents, Bannon's full talk should be on your watch list.
The Bottom Line
Architectural amnesia isn't inevitable—it's a choice. We can choose to treat AI agents as something entirely new and repeat every mistake we've made with distributed systems. Or we can apply the hard-won lessons of the past two decades to this new frontier.
The fascinating thing about agentic debt is that it's entirely preventable. We have the patterns. We have the principles. We know what good governance looks like. The question is whether we'll slow down enough to implement it.
Because here's what I keep coming back to: the teams that build disciplined, governed AI agents now will be the ones still running in production three years from now. The ones that sprint without architecture? They'll be the cautionary tales in someone else's conference talk.
Which one do you want to be?