AI coding tools are evolving quickly, and many teams are experimenting with Copilot, Claude, and now Rovo Dev inside real software development workflows.
Meanwhile, I’ve been hearing a lot of executive commentary about AI replacing engineers, but the reality inside enterprise environments feels more complicated from where I sit.
I wrote this piece to unpack the difference between code generation and software engineering, and I’d love to hear what others in the community are seeing.
There is no question that AI is changing the way software is developed.
Many engineering teams are already experimenting with AI-assisted coding inside their daily workflows — whether that means Copilot in their IDE, Claude generating scripts, or newer tools like Rovo Dev pulling context from Jira issues and Confluence documentation.
In the past year alone, we’ve watched serious, production-grade tooling emerge inside IDEs, embedded in version control systems, and increasingly connected to enterprise context. Atlassian recently made Rovo Dev generally available in Visual Studio Code with deep context integration across Jira, Confluence, and repositories. GitHub Copilot continues to expand into more environments. Claude models from Anthropic have gained attention among developers partly because of their early focus on security and alignment.
In practice, most organizations adopt a measured mix of these tools while watching how rapidly policies, capabilities, and constraints evolve.
That acceleration is real. So is the narrative surrounding it.
Industry commentary has included dramatic projections. For example, Anthropic CEO Dario Amodei suggested that AI could write 90% of code within three to six months and potentially most code within a year. Statements like this travel quickly through executive briefings and analyst reports.
They capture attention.
But they tend to focus on the volume of AI-generated code rather than the way AI is reshaping the discipline of software engineering.
In my opinion, this conflation — between writing code and doing software engineering — is at the root of unrealistic expectations.
Software engineering in real enterprise environments is about far more than the quantity of generated code.
It is about shaping and owning systems over time.
Software architects translate ambiguous requirements into architecture. They integrate legacy systems with modern services. They design for security, compliance, and reliability. They maintain systems through years of change.
Those are not responsibilities that any current AI system can truly own.
AI models are exceptional at pattern completion. They can scaffold interfaces, generate test stubs, draft automation logic, and refactor boilerplate quickly.
But real production success depends on something deeper:
understanding cross-project dependencies
anticipating edge cases
aligning implementations with business priorities
managing operational risk over time
Code that compiles is not the same thing as code that fits safely into a complex enterprise ecosystem.
Many experienced CIOs and CTOs describe AI-assisted development not as replacement, but as a force multiplier.
This framing captures both the promise and the caveat.
AI can increase output. But it also amplifies weaknesses that already exist in a team’s engineering practices.
When AI generates more code, the organization must still ensure that code is validated, tested, secured, and integrated correctly.
Teams experimenting with AI-generated integrations frequently encounter subtle issues:
automation rules referencing incorrect custom fields
webhook payloads that fail under production load
scripts that technically compile but violate API rate limits
automation loops triggered by status transitions
These are not problems that appear in a demo.
They appear in real environments.
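The rate-limit failure is a good illustration: a script can compile and pass a demo, yet still call an API faster than its quota allows under real load. A minimal sketch of one common mitigation, exponential backoff on HTTP 429 responses, might look like the following. The `request` callable and status codes here are illustrative stand-ins, not tied to any specific vendor API:

```python
import time


def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry a request when the API signals rate limiting (HTTP 429).

    `request` is any callable returning a (status_code, body) tuple;
    the endpoint and payload behind it are hypothetical.
    """
    for attempt in range(max_retries):
        status, body = request()
        if status != 429:
            return status, body
        # Exponential backoff: wait 1s, 2s, 4s, ... before retrying.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limit persisted after retries")
```

Even a wrapper this small is exactly the kind of operational detail that AI-generated scripts routinely omit, and that a human reviewer has to notice before the code ships.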
In other words, the work does not disappear. It shifts in shape and demand.
Broadly speaking, what I'm seeing today is tentative adoption and experimentation with multiple AI coding assistants.
Teams are evaluating multiple tools in parallel.
They compare Copilot, Claude, Rovo Dev, and emerging assistants side by side. They monitor licensing changes, security posture, and policy evolution carefully.
For example, Anthropic recently revised its Responsible Scaling Policy, removing a prior commitment that restricted model deployment without guaranteed safety measures. The company instead shifted toward transparency-focused reporting and ongoing risk monitoring.
For enterprise leaders, the lesson is straightforward.
Vendor commitments evolve.
Security posture, licensing terms, and governance frameworks can change as quickly as the models themselves.
Tools do not stand still — and neither do the risks they introduce.
In some corners of the developer community, a new phrase has gained traction: vibe coding.
The idea is simple.
Describe what you want in natural language, let AI generate the code, iterate until it feels right, and move on.
On a recent trip to the San Francisco Bay Area, an executive I spoke with told me he had “vibe coded” a small internal tool that helped him generate reports for his team. He was genuinely excited about it — and understandably so.
But he shared the story with a larger conclusion in mind: if he could build a useful tool himself, perhaps his organization would need fewer software engineers going forward.
That reaction is becoming increasingly common. The ability to generate working code quickly can create the impression that software engineering has suddenly become simple.
But generating a useful script or prototype is very different from designing, integrating, and operating production systems over time.
Sure. For prototypes and side projects, this approach can be remarkably productive. It lowers the friction of getting started and accelerates experimentation.
But enterprise systems are not built on vibes. They are built on traceability, ownership, and the ability to explain exactly why a system behaves the way it does.
The rise of vibe coding does not mean engineering rigor is obsolete.
It means the front edge of development culture is evolving quickly.
Consider a realistic enterprise environment built around Jira, Jira Service Management, and dozens of automation rules interacting with internal systems.
Now imagine most new rules, scripts, and integration components are generated with AI assistance.
Velocity increases.
But so does verification overhead.
Each artifact still requires human review to ensure it:
aligns with architectural standards
satisfies compliance requirements
avoids security vulnerabilities
integrates correctly with existing systems
Even the most capable models make subtle mistakes that only surface during integration testing or operational use.
Productivity metrics that focus on output rarely account for this hidden work.
If organizations reduce engineering headcount while expecting output quality to remain constant, the invisible work — validation, governance, and operational oversight — becomes a liability rather than a solved problem.
Routine coding tasks may well be accelerated by AI.
That will likely change the role of junior engineers.
But senior engineers — those who understand architectural patterns, institutional knowledge, and domain complexity — do not emerge fully formed from generative models.
Removing entry-level roles risks shrinking the talent pipeline, weakening mentorship, and eroding long-term capability.
Atlassian AI advocate Sven Peters recently summarized the concern succinctly:
“The real risk isn’t AI replacing junior developers. It’s leaders believing they don’t need juniors anymore.”
He argues that juniors are not simply “cheaper coders.” Many are already comfortable experimenting with AI tools, iterating quickly, and exploring new approaches to problem-solving.
Those instincts matter in a world where engineers increasingly guide AI systems rather than writing every line themselves.
As Atlassian co-founder Scott Farquhar once told new hires:
“We didn’t hire you to continue what we’ve been doing for years.
We hired you to challenge the way we work and make it better.”
Without juniors today, there are no seniors tomorrow.
Organizations that treat AI as a substitute for developing engineering expertise risk weakening the very pipeline that produces future architects, staff engineers, and technology leaders.
Source: Sven Peters on LinkedIn
Context-aware tools like Atlassian’s Rovo Dev demonstrate how valuable context can be.
Grounding AI suggestions in Jira issues, Confluence documentation, and repository history can improve relevance and reduce error rates.
But context alone cannot replace judgment.
AI can reference tickets.
It cannot arbitrate trade-offs between competing business priorities.
AI can draft a function.
It cannot be accountable for uptime in production.
Context reduces certain risks.
It does not eliminate the need for human stewardship.
There is also a practical question that hype-driven narratives rarely address.
Who owns the outcome?
In enterprise environments, systems have clear accountability chains.
When a system fails, someone investigates the issue, explains the root cause, and implements the fix.
AI can generate implementation suggestions.
But it cannot carry operational responsibility. It cannot participate in incident reviews. It cannot justify architectural decisions. It cannot accept accountability for downstream consequences.
Ownership remains human.
And in enterprise software, ownership is not optional.
Technical leaders who are thinking clearly about AI adoption are not proclaiming that engineers will disappear.
They are deliberately redesigning workflows to capture AI's gains.
They measure productivity gains alongside defect rates and operational risk. They keep human review central to the development process. They pilot new tools before rolling them out broadly.
They also monitor vendor policies, licensing commitments, and governance frameworks because they know these tools evolve quickly.
Many organizations are discovering that the real shift is not the elimination of engineers, but a change in what engineers spend their time doing.
Increasingly, engineers define requirements, supervise AI-generated implementations, and ensure system integrity.
AI increases leverage.
It does not eliminate responsibility.
Most investor narratives assume that AI productivity gains will translate directly into reduced engineering headcount.
That assumption models engineering as output volume.
But enterprise software development is not only about output. It is about accountable system design, operational reliability, and long-term stewardship.
The enthusiasm is understandable. But enthusiasm without systems thinking is fragility.
The organizations that succeed in this era will not be the ones that remove engineers fastest. They will be the ones that redesign workflows thoughtfully.
They will use AI to eliminate low-value repetition while increasing architectural discipline. They will invest in governance, validation, and long-term system ownership.
AI-assisted development is a meaningful evolution, but software engineering, in its full and accountable form, remains a human responsibility.
The work does not disappear.
It changes shape.
The question is whether organizations adapt deliberately — or react impulsively.
This article is part of an ongoing AI & Rovo Article Series exploring responsible AI adoption.
Anthropic Drops Flagship Safety Pledge, TIME — A recent shift in Anthropic’s Responsible Scaling Policy that abandons a prior safety guarantee in favor of transparency-focused commitments, illustrating how vendor stances on safety evolve.
Is your organization experimenting with AI-assisted development? Do you agree that outcomes vary widely depending on engineering maturity and governance practices?
Where have you seen the biggest real productivity gains?
Where has verification, integration, or governance become the new bottleneck?
How are you thinking about junior developer hiring and mentorship in this new landscape?
I’m curious to hear what others in the community are seeing.
Dave Rosenlund _Trundl_