AI Won't Save Your Broken Engineering Process: What the DORA Report Really Shows
The 2025 DORA report reveals a hard truth: 90% of developers now use AI, but it doesn't automatically improve delivery performance. AI amplifies what's already there—strength or dysfunction.
There's a particular kind of silence that follows when you tell an engineering leader that their new AI tools won't solve their delivery problems. I've sat in enough conference rooms to recognize it. The DORA team has now quantified what many of us suspected: AI doesn't fix teams. It amplifies them.
The 2025 DORA State of AI-Assisted Software Development report, drawing on nearly 5,000 survey responses and over 100 hours of qualitative research, delivers this finding with the clarity of well-designed empirical work. Approximately 90% of developers now report using some form of AI assistance in their daily work. More than 80% believe it has increased their productivity. Yet 30% still report little or no trust in the code these tools generate.
This tension—between adoption and trust, between speed and stability—sits at the heart of what the report reveals about AI's actual role in software development.
The Amplifier Effect
According to the DORA research, AI acts as a multiplier of existing organizational conditions. Strong teams with mature DevOps practices, well-defined workflows, and robust platform capabilities use AI to become measurably better. Teams struggling with fragmented processes and unclear development standards find that AI simply accelerates the creation of technical debt.
The data shows a positive relationship between AI adoption and software delivery throughput. Teams are learning where and how these tools work best. But AI adoption continues to show a negative relationship with software delivery stability. The interpretation matters here: AI accelerates development, but that acceleration exposes weaknesses downstream. Without robust control systems—automated testing, mature version control practices, fast feedback loops—increased change volume leads to instability.
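To make "throughput" and "stability" concrete, here is a minimal sketch of how a team might compute two of the familiar DORA delivery metrics from its own deployment records. The record shape, field names, and time window are assumptions for illustration only; the report does not prescribe any particular tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed record shape for illustration; adapt to whatever your deploy
# tooling actually logs.
@dataclass
class Deployment:
    deployed_at: datetime
    caused_incident: bool  # did this change trigger a rollback or incident?

def deployment_frequency(deploys: list[Deployment], window_days: int = 28) -> float:
    """Deployments per week over a trailing window (a throughput signal)."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [d for d in deploys if d.deployed_at >= cutoff]
    return len(recent) / (window_days / 7)

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Share of deployments that degraded service (a stability signal)."""
    if not deploys:
        return 0.0
    return sum(d.caused_incident for d in deploys) / len(deploys)

if __name__ == "__main__":
    history = [
        Deployment(datetime.now() - timedelta(days=1), caused_incident=False),
        Deployment(datetime.now() - timedelta(days=3), caused_incident=True),
        Deployment(datetime.now() - timedelta(days=10), caused_incident=False),
    ]
    print(f"Deploys per week: {deployment_frequency(history):.1f}")
    print(f"Change failure rate: {change_failure_rate(history):.0%}")
```

If AI adoption pushes the first number up while the second deteriorates, you are watching the throughput-versus-stability pattern the report describes play out in your own delivery data.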
This isn't a tool problem. It's a systems problem.
What Actually Determines Success
The report introduces the DORA AI Capabilities Model, identifying seven organizational capabilities that determine whether AI delivers value or chaos. These aren't about the sophistication of your AI tools. They're about the quality of the environment those tools operate within.
First, a clear organizational AI strategy matters. According to the research, organizations that define explicit policies around how AI should be used, governed, and integrated into workflows see better outcomes. This clarity reduces the risks of uncontrolled experimentation.
Second, a healthy data ecosystem proves critical. AI tools rely on access to reliable, well-structured information—internal documentation, architectural knowledge, historical development data. When this information is scattered or poorly maintained, AI struggles to generate meaningful assistance.
Closely related is AI-accessible internal knowledge. Teams that maintain high-quality documentation and searchable knowledge repositories enable AI systems to provide contextual recommendations that align with the organization's architecture and coding standards. The report emphasizes that without this foundation, AI tools operate in a vacuum.
Foundational engineering practices remain essential. Mature version control workflows, disciplined code review processes, and consistent development standards form the backbone of effective AI-assisted engineering. Rather than replacing these practices, AI depends on them. The report makes clear that increased development speed without these foundations creates operational risk.
User-centric development emerges as another critical factor. Teams that maintain a strong focus on user outcomes, rather than purely technical outputs, integrate AI more effectively. This orientation ensures AI accelerates the delivery of meaningful features rather than simply increasing code volume.
Platform engineering stands out as particularly important. The research shows that 90% of organizations have adopted at least one platform, and there's a direct correlation between high-quality internal platforms and successful AI adoption. Standardized development environments, deployment pipelines, and infrastructure services allow AI tools to operate within a consistent, predictable ecosystem.
Finally, working in small batches—smaller, incremental changes—improves code review quality, reduces deployment risk, and maintains system stability. When AI tools generate large or complex code changes, these practices become even more important for maintaining control.
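As an illustration of what "small batches" can look like in practice, here is a minimal sketch of a pre-merge check that flags oversized changes. The 400-line threshold and the use of `git diff --shortstat` against a `main` branch are assumptions for the example, not anything the report specifies.

```python
import subprocess
import sys

# Assumption for illustration: "small" means under ~400 changed lines
# relative to main. Tune or replace this threshold for your own workflow.
MAX_CHANGED_LINES = 400

def changed_lines(base: str = "main") -> int:
    """Count lines added plus deleted relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--shortstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    # --shortstat output looks like:
    # " 3 files changed, 120 insertions(+), 15 deletions(-)"
    total = 0
    for part in out.split(","):
        if "insertion" in part or "deletion" in part:
            total += int(part.strip().split()[0])
    return total

if __name__ == "__main__":
    size = changed_lines()
    if size > MAX_CHANGED_LINES:
        print(f"Change touches {size} lines; consider splitting it into smaller batches.")
        sys.exit(1)
    print(f"Change size looks fine ({size} lines).")
```

A check like this is deliberately blunt. Its value is less in the exact number than in forcing a conversation whenever an AI-assisted change balloons past what a reviewer can genuinely understand.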
The Platform Foundation
The report's emphasis on platform engineering deserves attention. Organizations investing in shared tooling, standardized environments, and well-defined developer workflows experience significantly better outcomes when introducing AI tools. Platforms provide the structured foundation that allows AI to scale across teams while maintaining consistency and reliability.
Without this foundation, AI adoption creates new forms of complexity. Developers may generate larger pull requests, introduce inconsistent coding patterns, or rely on AI suggestions that don't align with established architectural standards. Over time, according to the research, these challenges slow delivery and increase operational risk.
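To ground what a lightweight platform guardrail might look like, here is a minimal sketch that checks whether a repository follows a team's "golden path" conventions before AI-generated changes land. The required paths listed are purely hypothetical examples; a real internal platform would define its own.

```python
from pathlib import Path

# Hypothetical golden-path conventions, for illustration only.
REQUIRED_PATHS = [
    "README.md",          # onboarding context an AI assistant can also draw on
    "CODEOWNERS",         # review routing for generated changes
    ".github/workflows",  # standardized CI pipeline
    "tests",              # automated safety net before merge
]

def missing_conventions(repo_root: str = ".") -> list[str]:
    """Return the golden-path items this repository is missing."""
    root = Path(repo_root)
    return [p for p in REQUIRED_PATHS if not (root / p).exists()]

if __name__ == "__main__":
    gaps = missing_conventions()
    if gaps:
        print("Missing platform conventions:", ", ".join(gaps))
    else:
        print("Repository matches the golden path.")
```

The point isn't this particular script. It's that a platform gives AI-generated work a consistent surface to land on, so deviations become visible before they become operational risk.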
What This Means for Teams
The DORA team's research also reveals seven team archetypes based on cluster analysis—from "Foundational challenges" groups trapped in survival mode with low performance and high burnout, to "Harmonious high achievers" excelling across performance, stability, and well-being. The archetypes provide a diagnostic framework: AI won't move you from one category to another. It will make you more of what you already are.
For teams considering AI adoption, the report suggests starting not with tool selection but with organizational readiness. The Google Cloud blog post announcing the findings recommends clarifying AI policies, connecting AI to internal context, prioritizing foundational practices, fortifying safety nets, investing in internal platforms, and focusing on end users.
These aren't the kinds of recommendations that make for exciting conference talks. They're maintenance work, infrastructure investment, cultural change. But the data suggests they determine whether AI adoption succeeds or simply makes existing problems more visible.
The Work Ahead
I keep returning to that 30% trust figure. Nearly a third of developers using AI daily don't trust the code it generates. They're using it anyway, presumably because the pressure to move faster is real and constant. This creates a particular kind of technical debt—not just in the code itself, but in the relationship between developers and their tools.
The DORA report doesn't prescribe specific tools or technologies. Instead, it offers something more valuable: a framework for understanding what makes AI adoption successful. The answer isn't in the AI. It's in the systems, practices, and culture surrounding it.
For engineering leaders, this research provides both warning and roadmap. AI won't rescue poorly structured development processes. It won't compensate for missing automated tests or unclear architectural standards. It will, however, reveal exactly where those gaps exist—often by making their consequences more severe.
The teams that succeed with AI, according to this data, are the ones doing the unglamorous work: building internal platforms, maintaining documentation, standardizing workflows, investing in automated testing. They're treating AI adoption as organizational transformation rather than tool acquisition.
That might be the most important finding in the entire report.