Predictions of large-scale job disruption from AI—often within the next 12 to 24 months—are becoming increasingly common.
But the assumption driving those predictions is wrong.
Over the past few months, my colleagues and I have been writing about “AI That Works.” The articles draw on hands-on experience with a variety of AI tools, and on conversations with Atlassian ecosystem peers and customers exploring these technologies.
In those conversations, one pattern keeps coming up: media reports framing AI as a near-term replacement for humans in white-collar jobs.
As I see it, most of those predictions extrapolate directly from how quickly AI capability is improving. The assumption is simple: if AI can perform a task, the job associated with that task will soon follow.
But that’s not how change happens inside organizations.
Which is why the near-term disruption many are predicting is unlikely to play out at the scale or speed being suggested.
Across every major technological shift—from industrialization to electrification to the rise of computing and eventually the Internet—the limiting factor has never been capability alone. It has been the ability of organizations to absorb, integrate, and restructure around that capability. And that process moves much more slowly.
There is already a growing body of research pointing to this gap. Economists and practitioners alike have shown that the impact of new technologies is consistently delayed by the need for organizational learning, workflow redesign, and system integration. Early AI adoption is showing the same pattern: strong capabilities, uneven outcomes, and friction at the point of implementation.
This article argues that AI will follow that same trajectory. The reasons become clearer when you look at how change actually happens inside organizations.
In the near term, its impact on jobs is likely to be more limited and uneven than current narratives suggest—not because the technology isn’t advancing quickly, but because most organizations are not structured to absorb that change at the same pace.
The real constraint isn’t intelligence.
It’s alignment—across systems, workflows, and teams.
The dominant narrative around AI-driven job disruption follows a fairly simple line of reasoning.
1. AI capabilities are improving rapidly.
2. Those capabilities map to tasks currently performed by humans.
3. Therefore, those tasks—and by extension, the jobs tied to them—will be replaced quickly.
It’s a clean model. It’s intuitive. And on the surface, it makes sense.
But it rests on an assumption that doesn’t hold up in practice: that organizations can absorb new capabilities as quickly as those capabilities emerge.
In reality, capability and adoption operate on entirely different timelines.
AI systems may be able to perform individual tasks with increasing accuracy and speed. But most work inside organizations is not structured as a collection of isolated tasks. It exists within systems—interconnected workflows shaped by dependencies, handoffs, approvals, context, and constraints.
Replacing a task inside that system is not the same as replacing the system itself.
And that distinction matters.
Because even when a task can be automated, the surrounding system often cannot be easily reconfigured to accommodate that change. The work doesn’t disappear—it shifts, fragments, or creates new dependencies elsewhere.
This is where the model starts to break down.
It assumes that tasks are clearly defined, boundaries between activities are clean, and systems can be easily restructured. Most organizations don’t operate that way. They rely on partially defined processes, tacit knowledge, and fragmented systems, with human coordination bridging the gaps.
In that environment, introducing AI doesn’t lead to immediate substitution. It introduces friction.
In many cases, it also exposes where that friction already existed.
Workflows that appeared to function smoothly often rely on human interpretation, informal coordination, and unspoken assumptions. Those elements are hard to see when people are compensating for them in real time. They become much more visible when a system that depends on explicit structure tries to operate within them.
What looks like a limitation of the technology is often a reflection of how the work is actually structured.
New capabilities must be integrated into existing workflows, validated, monitored, aligned with other systems, and adapted to edge cases that weren’t visible at the task level.
All of that takes time.
Which is why extrapolating directly from what AI can do to what organizations will do with it leads to timelines that are consistently too aggressive.
The issue isn’t whether the capability exists. It’s whether the system and the people around it are ready to change.
It’s worth stepping back and asking a fair question.
If this pattern has repeated across multiple technological shifts—and if there’s already research pointing to the same constraints—why do predictions of rapid, near-term disruption keep surfacing?
Part of the answer comes down to perspective.
Many of the strongest predictions about AI-driven change are coming from the people building these systems. From that vantage point, the conclusions are entirely reasonable. They are watching capabilities improve at a remarkable pace. Tasks that were previously considered out of reach are now being handled with increasing reliability. Costs are falling, access is expanding, and the range of potential applications continues to grow.
If you are operating at that layer, it is not a stretch to imagine that widespread substitution is just around the corner.
But that perspective captures only one side of the equation.
What it doesn’t fully account for is how work actually happens inside organizations.
Many of the people making these predictions have deep insight into what the technology can do, but limited exposure to how it gets operationalized. Meanwhile, the people responsible for implementing change inside organizations experience a very different reality: one shaped by fragmented workflows, unclear ownership, inconsistent processes, and systems that don’t integrate cleanly.
From that vantage point, the constraint isn’t capability. It’s coordination.
This gap in perspective isn’t unique to AI. It shows up in nearly every major technological shift. The people closest to a new capability tend to extrapolate from what the technology makes possible. The people responsible for operationalizing that capability experience the friction of making it work in practice.
Which is why it’s entirely possible to be directionally right about where things are going—and still be wrong about how quickly it will happen.
When you’re building a transformative technology, it’s natural to assume the world will adapt to it. In practice, the technology has to adapt to the world it’s entering—and that world is shaped by systems and constraints that don’t change overnight.
That doesn’t stop progress. But it does stretch the timeline.
If you step back from the current moment, the dynamics around AI adoption are not particularly new.
They follow a pattern that has shown up repeatedly across major technological shifts. Capability advances quickly—but then runs into the slower-moving realities of systems and organizational change.
In some cases, the issue isn’t just that systems are slow to adapt. It’s that the improvements are being applied in the wrong place.
Increasing the speed of execution doesn’t necessarily improve outcomes if the surrounding system is constrained elsewhere. Work can be completed faster without becoming more useful.
Which means some early gains from AI look meaningful at the task level, but have limited impact at the system level.
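That bottleneck dynamic can be made concrete with a toy throughput calculation. The stage names and rates below are hypothetical, chosen only to illustrate the arithmetic: a pipeline delivers at the rate of its slowest stage, so accelerating a non-bottleneck step changes nothing end to end.

```python
# Toy model: a three-stage workflow (draft -> review -> approve).
# End-to-end throughput is capped by the slowest stage, so making
# one stage dramatically faster with AI may not change what the
# system as a whole actually delivers.

def throughput(stage_rates):
    """Items per hour the whole pipeline can sustain (slowest stage wins)."""
    return min(stage_rates.values())

before = {"draft": 4, "review": 2, "approve": 3}   # items/hour per stage
after  = {"draft": 40, "review": 2, "approve": 3}  # AI makes drafting 10x faster

print(throughput(before))  # 2 -- review is the constraint
print(throughput(after))   # 2 -- still 2: drafting was never the bottleneck
```

A 10x improvement at the task level produced a 0% improvement at the system level, because the constraint was elsewhere.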
During the Industrial Revolution, machines replaced specific forms of labor relatively quickly. But reorganizing work around those machines took decades. Roles had to be redefined, workflows redesigned, and entirely new operating models established. The result was not immediate job elimination, but a prolonged transition where old and new systems coexisted.
The same pattern appeared in the transition from horse-drawn transport to automobiles. The technology was viable long before it became dominant. What slowed adoption wasn’t preference—it was the surrounding ecosystem. Roads, fuel distribution, maintenance infrastructure, and regulation all had to evolve. For years, hybrid environments persisted.
Electrification followed a similar path. Early business adopters saw limited gains because they replaced the power source without redesigning the work. Real productivity improvements only emerged after factories reorganized around the new capabilities.
Personal computing expanded knowledge work rather than reducing it. As the cost of producing and processing information dropped, demand increased. New roles emerged. Expectations rose. The nature of work changed, but it didn’t disappear.
Even the internet followed a nonlinear trajectory—early hype, a correction, and then a long period of infrastructure build-out before transformation became visible.
Across all of these examples, the pattern is consistent.
Capability arrives early. Systems lag. Real transformation only happens after those systems are redesigned.
AI does not appear to be an exception. The capabilities are advancing quickly. The environments they depend on are not.
Which suggests that the question isn’t whether AI will drive meaningful change.
It’s whether that change will follow the same trajectory as every major technological shift before it.
If history is any guide, it will. Just not on the timeline many are predicting.
This dynamic isn’t just historical. It’s already visible in how AI is being adopted today.
Research increasingly points to the same conclusion: the constraint isn’t just the technology, or even the infrastructure, but the systems it has to operate within.
James Bessen has shown that the limiting factor is often not whether a technology works, but how long it takes organizations to learn how to use it effectively. That learning process—building new workflows, developing skills, restructuring roles—can take years.
“New technologies often require extensive organizational learning before their benefits can be realized.”
— James Bessen, AI and Jobs: The Role of Demand
In other words, capability can scale quickly. Organizational understanding does not.
A similar idea appears in Prediction Machines by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. AI reduces the cost of prediction, but value only emerges when organizations redesign how decisions are made.
That’s a much harder problem—and where timelines begin to slip.
It requires rethinking workflows, redefining ownership, integrating across systems, and unlearning how work was done before.
Even in environments where AI is already being used heavily, results are uneven. Ethan Mollick’s work shows that outcomes vary widely depending on how tools are integrated into workflows.
“The biggest variable in AI outcomes is not the model—it’s how people use it.”
— Ethan Mollick, Co-Intelligence
Which leads to a less obvious point.
AI doesn’t just remove work. It reshapes task composition and coordination.
Validation, oversight, and integration become central to making these systems work reliably.
And that gap between demonstration and deployment is where much of the friction lives. As Arvind Narayanan has pointed out, systems that perform well in controlled environments often behave differently in complex, real-world settings. At a broader level, institutional research has consistently found that the impact of automation is mediated by how organizations are structured.
Put simply, technology doesn’t operate in isolation. It is filtered through the systems it enters.
That gap becomes clearer when you look at what actually happens inside organizations.
Most are not structured in a way that allows AI to operate at its full potential. Not because they lack tools or intent, but because the systems that define how work gets done were never designed for this level of consistency, clarity, and integration.
This pattern isn’t unique to AI. Similar dynamics have shown up in other shifts in how work is organized, where adopting new practices or tools without changing the underlying system produces limited results.
In many environments, workflows are only partially defined. They exist as a mix of documentation, informal practices, and institutional knowledge. Work gets done, but not always in the same way twice.
As Atlassian’s Sven Peters has pointed out, most teams aren’t constrained by how fast work can be done, but by how consistently it flows through the system.
AI is significantly less reliable in that environment.
More specifically, it struggles in systems where the flow of work is inconsistent.
Work doesn’t just need to be defined—it needs to move predictably between steps, people, and systems. In many environments, that flow depends on informal coordination, implicit timing, and human intervention to keep things moving.
AI doesn’t handle that well. It assumes a level of continuity and structure that often isn’t there.
It performs best when tasks are well-defined and outcomes are predictable. When those conditions aren’t met, the burden shifts back to humans to interpret, validate, and correct.
Context is another constraint. Information is fragmented across systems, duplicated, and often out of date. How work connects, who owns it, what it means—that context is rarely captured in structured form.
Without it, AI can generate outputs, but it cannot reliably situate them within the broader operation.
So again, the burden shifts.
More validation. More interpretation. More coordination.
Work rarely happens in a single system. It moves across tools, each with its own assumptions. Connecting them is already difficult. Introducing AI increases that complexity.
Small inconsistencies become larger problems when decisions are being automated or augmented.
In many environments, the cost of being wrong is significant. Outputs need to be explainable. Decisions need to be auditable. Responsibility needs to be clear.
So humans stay involved longer than capability alone would suggest.
What tends to get missed in these discussions is what that actually looks like in practice for the people doing the work.
As AI gets introduced into these environments, the work doesn’t just get reduced. It often gets redistributed in ways that are less visible. Time shifts into validating outputs, interpreting edge cases, and coordinating across systems that don’t fully align. The effort doesn’t disappear—it changes shape.
That shift is part of the reason disruption doesn’t happen as cleanly or as quickly as the capability might suggest. The system isn’t just technical. It’s human. And people don’t adapt to new modes of working instantly, especially when the boundaries of responsibility and trust are still evolving.
Finally, there is the reality of change itself.
Even when the technology is ready, organizations don’t transform overnight. People need to understand new workflows. Teams need to align. Incentives and expectations need to shift.
The result is not a clean transition. It’s a prolonged hybrid state.
Old systems and new capabilities coexist. Gains are real, but inconsistent. Friction doesn’t disappear—it moves.
This is what adoption actually looks like.
If the dominant model doesn’t hold up, what replaces it?
AI adoption will look less like sudden disruption and more like gradual, uneven restructuring.
In the near term, most observable impact will happen at the task level: AI will primarily assist within existing workflows rather than fully replace them. Gains will be real, but variable.
In many cases, this doesn’t simplify the work—it changes it.
Tasks that were previously execution-heavy become oversight-heavy. The burden moves from doing the work to deciding whether the work is correct, complete, and appropriate in context. That introduces a different kind of cognitive load—one that is harder to standardize and slower to optimize.
Which again slows down how quickly organizations can fully restructure around these capabilities.
Efficiency gains often get absorbed into increased expectations rather than reduced headcount.
In many cases, increased capacity doesn’t reduce the amount of work. It changes what gets done.
As the cost of producing outputs drops, expectations tend to rise. More analysis gets requested. More scenarios get explored. More edge cases get handled. Work that was previously not worth doing becomes standard.
That dynamic absorbs a significant portion of the gains—especially in the early stages of adoption.
Teams will move faster, handle more volume, and expand scope. Capacity will be absorbed, not removed.
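A back-of-the-envelope sketch makes that absorption effect visible. The numbers below are purely hypothetical, but they show how a large per-task speedup can translate into a small reduction in total hours once demand expands.

```python
# Hypothetical: AI cuts effort per analysis from 5 hours to 1 hour,
# but the lower cost of producing analyses raises demand from
# 8 per week to 35 per week.
hours_before = 8 * 5    # 40 hours of analysis work per week
hours_after = 35 * 1    # 35 hours: most of the "saved" time is absorbed

saved = hours_before - hours_after
print(saved)  # 5 -- a 5x per-task speedup frees only 5 of 40 weekly hours
```

The team is dramatically more productive per task, yet total workload barely moves—the capacity was absorbed, not removed.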
At the same time, the nature of work will change.
Some tasks will become less central. Others—especially those involving coordination and judgment—will become more important. New work will emerge around managing and integrating AI outputs.
Over time, organizations will begin redesigning workflows around these capabilities. That’s when more meaningful gains appear.
But that phase requires deeper change. It depends on learning, alignment, and a willingness to rethink how work is structured. It happens unevenly. Some areas move quickly. Others lag.
Over a longer horizon, the cumulative effect can be significant. But it is the result of sustained adaptation, not immediate substitution. Which is why the timeline matters.
In the short term, impact is likely to be limited and uneven. In the long term, it may be broader than expected—but only after systems have had time to evolve.
The conversation around AI and jobs is often framed in terms of replacement.
How quickly can AI perform human tasks?
Which roles are at risk?
When will headcount decline?
Those are understandable questions. But they assume that capability translates directly into change.
A more useful question is this:
How quickly can organizations restructure themselves to make effective use of AI?
That shift changes the conversation.
It moves attention away from what the technology can do and toward how work is organized—how processes are defined, how systems connect, how decisions are made, and how people coordinate.
Because that is where the constraint lives. Not in intelligence. Not in capability. But in alignment.
Alignment between systems that were never designed to work together, between fragmented data, between teams with different assumptions, and between how work is done today and how it would need to be done to fully leverage new capabilities.
Those are not problems that resolve on a 12–24 month timeline. They require time, iteration, and structural change.
---
None of this suggests that AI won’t be transformative. It will be. But that transformation will move at the pace of organizational change, not the pace of technological advancement. Which means the real risk isn’t that AI will eliminate jobs overnight. It’s that we misunderstand where the friction is—and misjudge both the timeline and the work required to get there.
AI will absolutely change how work gets done. But it will move at the speed of the systems it enters—not the speed at which the technology improves.
This article is part of a series of articles on AI adoption in the Atlassian ecosystem. See the full list of related articles in this index.
See also…
Why 2026 will be the year AI grows up — Synthesizes research on the “AI‑native workforce” and AI generalists, emphasizing that AI amplifies existing cultures and systems instead of fixing them.
AI adoption is rising, but friction persists — Research showing that while nearly all developers save time with AI, those gains are cancelled out by organizational inefficiencies.
How to shift your mindset from “AI as a tool” to “AI as a partner” — Argues that treating AI as a shared team capability, rather than an individual shortcut, is what produces measurable ROI.
Dave Rosenlund, _Trundl_