
AI-Coding Is Not the Same as Software Engineering (And It Matters)


AI coding tools are evolving quickly, and many teams are experimenting with Copilot, Claude, and now Rovo Dev inside real software development workflows.

Meanwhile, I’ve been hearing a lot of executive commentary about AI replacing engineers, but the reality inside enterprise environments feels more complicated from where I sit.

I wrote this piece to unpack the difference between code generation and software engineering, and I’d love to hear what others in the community are seeing.

 


Introduction

There is no question that AI is changing the way software is developed.

Many engineering teams are already experimenting with AI-assisted coding inside their daily workflows — whether that means Copilot in their IDE, Claude generating scripts, or newer tools like Rovo Dev pulling context from Jira issues and Confluence documentation.

In the past year alone, we’ve watched serious, production-grade tooling emerge inside IDEs, embedded in version control systems, and increasingly connected to enterprise context. Atlassian recently made Rovo Dev generally available in Visual Studio Code with deep context integration across Jira, Confluence, and repositories. GitHub Copilot continues to expand into more environments. Claude models from Anthropic have gained attention among developers partly because of their early focus on security and alignment.

What organizations actually end up doing in practice is using a measured mix of these tools while they watch how rapidly policies, capabilities, and constraints evolve.

That acceleration is real. So is the narrative surrounding it.

Industry commentary has included dramatic projections. For example, Anthropic CEO Dario Amodei suggested that AI could write 90% of code within three to six months and potentially most code within a year. Statements like this travel quickly through executive briefings and analyst reports. 

They capture attention.

But they tend to focus on the volume of AI-generated code rather than the way AI is reshaping the discipline of software engineering.

In my opinion, this conflation — between writing code and doing software engineering — is at the root of unrealistic expectations.

 


Code Generation is Not System Design & Ownership

Software engineering in real enterprise environments is about far more than the quantity of generated code.

It is about shaping and owning systems over time.

Software Architects translate ambiguous requirements into architecture. They integrate legacy systems with modern services. They design for security, compliance, and reliability. They maintain systems through years of change.

Those are not responsibilities that any current AI system can truly own.

AI models are exceptional at pattern completion. They can scaffold interfaces, generate test stubs, draft automation logic, and refactor boilerplate quickly.

But real production success depends on something deeper:

  • understanding cross-project dependencies

  • anticipating edge cases

  • aligning implementations with business priorities

  • managing operational risk over time

Code that compiles is not the same thing as code that fits safely into a complex enterprise ecosystem.

 


The Force Multiplier Effect — and Its Consequences

Many experienced CIOs and CTOs describe AI-assisted development not as replacement, but as a force multiplier.

This framing captures both the promise and the caveat.

AI can increase output. But it also amplifies weaknesses that already exist in a team’s engineering practices.

When AI generates more code, the organization must still ensure that code is validated, tested, secured, and integrated correctly.

Teams experimenting with AI-generated integrations frequently encounter subtle issues:

  • automation rules referencing incorrect custom fields

  • webhook payloads that fail under production load

  • scripts that technically compile but violate API rate limits

  • automation loops triggered by status transitions
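One of these failure modes can be made concrete. The sketch below shows the kind of rate-limit handling that AI-generated scripts frequently omit: a small retry wrapper with exponential backoff around an HTTP call. This is an illustrative example only — `call_with_backoff` and the fake client in the usage note are hypothetical names, not part of any vendor SDK.

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a request when the API signals rate limiting (HTTP 429),
    backing off exponentially between attempts.

    request_fn: callable returning (status_code, body).
    sleep: injectable for testing, so the backoff can be verified
    without actually waiting.
    """
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:
            # Not rate-limited: hand the response back to the caller.
            return status, body
        # Honor the rate limit: wait 1s, 2s, 4s, ... before retrying.
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still rate-limited after {max_retries} attempts")
```

A generated script that "technically compiles" usually calls the API in a bare loop; it works in a demo with ten issues and fails in production with ten thousand. Making the `sleep` injectable also keeps the safeguard itself testable.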

These are not problems that appear in a demo.

They appear in real environments.

In other words, the work does not disappear. It shifts in shape and demand.

 


A Multi-AI Reality

Broadly speaking, what I'm seeing today is tentative adoption of, and experimentation with, multiple AI coding assistants.

Teams are evaluating multiple tools in parallel.

They compare Copilot, Claude, Rovo Dev, and emerging assistants side by side. They monitor licensing changes, security posture, and policy evolution carefully.

For example, Anthropic recently revised its Responsible Scaling Policy, removing a prior commitment that restricted model deployment without guaranteed safety measures. The company instead shifted toward transparency-focused reporting and ongoing risk monitoring.

For enterprise leaders, the lesson is straightforward.

Vendor commitments evolve.

Security posture, licensing terms, and governance frameworks can change as quickly as the models themselves.

Tools do not stand still — and neither do the risks they introduce.

A Brief Note on “Vibe Coding”

In some corners of the developer community, a new phrase has gained traction: vibe coding.

The idea is simple.

Describe what you want in natural language, let AI generate the code, iterate until it feels right, and move on.

On a recent trip to the San Francisco Bay Area, an executive I spoke with told me he had “vibe coded” a small internal tool that helped him generate reports for his team. He was genuinely excited about it — and understandably so.

But he shared the story with a larger conclusion in mind: if he could build a useful tool himself, perhaps his organization would need fewer software engineers going forward.
 
That reaction is becoming increasingly common. The ability to generate working code quickly can create the impression that software engineering has suddenly become simple.

But generating a useful script or prototype is very different from designing, integrating, and operating production systems over time.

Sure. For prototypes and side projects, this approach can be remarkably productive. It lowers the friction of getting started and accelerates experimentation.

But enterprise systems are not built on vibes. They are built on traceability, ownership, and the ability to explain exactly why a system behaves the way it does.

The rise of vibe coding does not mean engineering rigor is obsolete.

It means the front edge of development culture is evolving quickly.

 


The Verification Overhead That Productivity Metrics Miss

Consider a realistic enterprise environment built around Jira, Jira Service Management, and dozens of automation rules interacting with internal systems.

Now imagine most new rules, scripts, and integration components are generated with AI assistance.

Velocity increases.

But so does verification overhead.

Each artifact still requires human review to ensure it:

  • aligns with architectural standards

  • satisfies compliance requirements

  • avoids security vulnerabilities

  • integrates correctly with existing systems
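Parts of that review checklist can be automated, which is one way to keep verification overhead manageable. The sketch below is a hypothetical pre-deployment lint for automation rules: it mechanically checks two of the items above (unknown custom-field references and unapproved endpoints), while architectural and compliance review remains human work. The rule shape, field IDs, and function name are all illustrative assumptions, not any product's real schema.

```python
def review_rule(rule: dict, known_fields: set, approved_endpoints: set) -> list:
    """Return human-readable findings for one automation rule.

    Catches two common AI-generation slips before deployment:
    references to custom fields that don't exist in the target
    project, and calls to endpoints outside the approved list.
    """
    findings = []
    for field in rule.get("fields", []):
        if field not in known_fields:
            findings.append(f"unknown custom field: {field}")
    for url in rule.get("endpoints", []):
        if url not in approved_endpoints:
            findings.append(f"unapproved endpoint: {url}")
    return findings
```

Running a check like this in CI converts part of the hidden verification work into something measurable — the findings count per generated artifact is itself a useful signal of how much human review each tool's output actually requires.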

Even the most capable models make subtle mistakes that only surface during integration testing or operational use.

Productivity metrics that focus on output rarely account for this hidden work.

If organizations reduce engineering headcount while expecting output quality to remain constant, the invisible work — validation, governance, and operational oversight — becomes a liability rather than a solved problem.

 


The Junior Developer Question

Routine coding tasks may well be accelerated by AI.

That will likely change the role of junior engineers.

But senior engineers — those who understand architectural patterns, institutional knowledge, and domain complexity — do not emerge fully formed from generative models.

Removing entry-level roles risks shrinking the talent pipeline, weakening mentorship, and eroding long-term capability.

Atlassian AI advocate Sven Peters recently summarized the concern succinctly:

“The real risk isn’t AI replacing junior developers. It’s leaders believing they don’t need juniors anymore.”

He argues that juniors are not simply “cheaper coders.” Many are already comfortable experimenting with AI tools, iterating quickly, and exploring new approaches to problem-solving.

Those instincts matter in a world where engineers increasingly guide AI systems rather than writing every line themselves.

As Atlassian co-founder Scott Farquhar once told new hires:

“We didn’t hire you to continue what we’ve been doing for years.
We hired you to challenge the way we work and make it better.”

Without juniors today, there are no seniors tomorrow.

Organizations that treat AI as a substitute for developing engineering expertise risk weakening the very pipeline that produces future architects, staff engineers, and technology leaders.

Source: Sven Peters on LinkedIn

 


Context Is a Differentiator — Not a Substitute for Judgment

Context-aware tools like Atlassian’s Rovo Dev demonstrate how valuable context can be.

Grounding AI suggestions in Jira issues, Confluence documentation, and repository history can improve relevance and reduce error rates.

But context alone cannot replace judgment.

AI can reference tickets.

It cannot arbitrate trade-offs between competing business priorities.

AI can draft a function.

It cannot be accountable for uptime in production.

Context reduces certain risks.

It does not eliminate the need for human stewardship.

 


The Accountability Gap

There is also a practical question that hype-driven narratives rarely address.

Who owns the outcome?

In enterprise environments, systems have clear accountability chains.

When a system fails, someone investigates the issue, explains the root cause, and implements the fix.

AI can generate implementation suggestions.

But it cannot carry operational responsibility. It cannot participate in incident reviews. It cannot justify architectural decisions. It cannot accept accountability for downstream consequences.

Ownership remains human.

And in enterprise software, ownership is not optional.

 


What Responsible Adoption Actually Looks Like

Technical leaders who are thinking clearly about AI adoption are not proclaiming that engineers will disappear.

They are redesigning workflows deliberately for AI gains.

They measure productivity gains alongside defect rates and operational risk. They keep human review central to the development process. They pilot new tools before rolling them out broadly.

They also monitor vendor policies, licensing commitments, and governance frameworks because they know these tools evolve quickly.

Many organizations are discovering that the real shift is not the elimination of engineers, but a change in what engineers spend their time doing.

Increasingly, engineers define requirements, supervise AI-generated implementations, and ensure system integrity.

AI increases leverage.

It does not eliminate responsibility.

 


A Perspective on the Investment Narrative

Most investor narratives assume that AI productivity gains will translate directly into reduced engineering headcount.

That assumption models engineering as output volume.

But enterprise software development is not only about output. It is about accountable system design, operational reliability, and long-term stewardship.

The enthusiasm is understandable. But enthusiasm without systems thinking is fragility.

The Real Opportunity

The organizations that succeed in this era will not be the ones that remove engineers fastest. They will be the ones that redesign workflows thoughtfully.

They will use AI to eliminate low-value repetition while increasing architectural discipline. They will invest in governance, validation, and long-term system ownership.

AI-assisted development is a meaningful evolution, but software engineering, in its full and accountable form, remains a human responsibility.

The work does not disappear.

It changes shape.

The question is whether organizations adapt deliberately — or react impulsively.

 


📚 Further Reading

This article is part of an ongoing AI & Rovo Article Series exploring responsible AI adoption.

See also:

 


Discussion

Is your organization experimenting with AI-assisted development? Do you agree that outcomes vary widely depending on engineering maturity and governance practices?

  • Where have you seen the biggest real productivity gains?

  • Where has verification, integration, or governance become the new bottleneck?

  • How are you thinking about junior developer hiring and mentorship in this new landscape?

I’m curious to hear what others in the community are seeing.

2 comments

Kris Klima _K15t_
Community Champion
March 6, 2026

@Dave Rosenlund _Trundl_ 

Can I please have a slab of rock and a chisel to carve that in stone? :) 

I'm glad you included the concept of context here. Not just on the mere information/data level but on the organizational, management, and judgement level.

Yes, it's important to 'set the scene' for any vibe coding or content generation attempt.

Being a linguist and a writer in a multilingual environment, I would always annoy anyone who asked me 'how do you translate [word]' with 'tell me the whole sentence'. But then the sentence would need a different translation in the context of the paragraph, page, book...

When I see LLMs struggle, even when given the context, with basic things (on-prem vs cloud Confluence, for example), even in closed garden environments (where Rovo operates), it's obvious that the contextual awareness of LLMs is not great. As the contextual complexity increases, the demands on human oversight increase exponentially.

Combined with a strong penchant for filling the gaps with nonsense...

The center of gravity will shift from production to verification. Which would be fine, we won't be digging, we'll be checking. But as you said, for verification you need seniors who know what's going on. And you will eventually run out of the current generation of architects and engineers. There is a precedent, mainframes had the same problem in reverse. They were to disappear 25 years ago. They did not. And you had a huge generational skill gap. Guys in their 60s trying to crash-course feed 40 years of experience and continuous development into 20-somethings.

Personal anecdote on contextual struggles of LLMs: I often use LLMs to write a piece of text for me in Czech. The prompt is always in English. I get my Czech text ... and 3 LLMs I tried always switch to conversing with me in Czech. Every time. No matter how many times, in previous such instances, I said that it needs to talk to me in English. I tried reasoning - my prompt's in English, I'm asking for help with Czech... why do you flood me with Czech. "Oh, I'm sorry, you're right, I'll use English in our future communication." Except it doesn't :) The one I use the most, ChatGPT, remembers the name of my fake band, my musical and guitar preferences, but not the elementary communication pattern.

Dave Rosenlund _Trundl_
Community Champion
March 7, 2026

I agree, @Kris Klima _K15t_ ☝🏻 

AI is good, and getting better (rapidly) at many things. It’s helping me be way more productive than I was without it. 

However…

There’s a bit of irony in the fact that the AI companies are staffing up aggressively on the software engineering front, while their founders are telling us we don’t need software engineers any more. 

And your example illustrates that this is true in other areas, too. 

Working in tech has taught me many things… One of them is, “good enough” almost always is not.

