When AI Stops Helping and Starts Hiding Risk

[Image: I think we may be creating AI workslop]

The Risks of Over-Reliance on Rovo (and Friends)


If you're a project management leader in an Atlassian-powered organization, you may already be experimenting with Rovo — and other generative AI platforms.

Maybe you’re drafting updates or tracking progress in Atlassian Home, summarizing pages in Confluence, or watching Rovo take its first steps across your workflows, often working alongside other generative AI tools like ChatGPT, Claude, or Cursor.

These capabilities are often helpful. But there’s a question we need to ask:

What happens when "good enough" AI work becomes the default?

I’m not making an anti-AI argument here. I’m bullish on Atlassian’s AI direction.

In fact, I use Rovo and other generative AI tools daily, and I rely on AI for editing, image generation, and research for my community articles.

But as AI assistants become more embedded in how we work, we need to look closely at how much trust we’re placing in them — and what happens when that trust becomes lazy, uncritical, or invisible.

In this piece, I’ll be building on two earlier articles I co-wrote with my colleague and fellow Atlassian Community Champion, Denis Boisvert. In Rovo Adoption Should Start with Privacy & Security we outlined the foundational risks and data boundaries teams should understand before rolling out generative AI. In When to Use Rovo (and When Not To) we focused on how to match AI capabilities to the right use cases.

This article takes the next step: What happens when helpful becomes habitual — and we stop noticing where AI ends and human thinking begins?

 


 

The State of Generative AI in the Atlassian Customer Base

From my perspective, Rovo is in its early adolescence — and evolving quickly. But most of us are still in the early learning stages. Sure, some are further along, but we’re all students in this new AI era.

What I’m seeing “out there in the trenches” is Atlassian customers cautiously testing this new frontier. They’re trying Rovo’s automation and summarization features, usually with limited guidance on usage policy and lingering questions about data governance and model boundaries.

In the Rovo Adoption Should Start with Privacy & Security article, Denis and I outlined why every team needs a baseline understanding of what Rovo can see, should see, and must never see.

Right now, many organizations are testing Rovo and Confluence AI in pockets, often without formal acceptable use guidelines. Teams struggle to track what’s human-authored versus AI-generated and aren’t always sure how to review or validate what AI creates. It’s an exciting but uneven landscape. In that ambiguity, over-reliance creeps in.

This isn’t just an Atlassian-specific concern. Across industries, we’re seeing early signs of generative AI overuse or blind trust.

Harvard Business Review recently called out the rise of “AI workslop” — floods of auto-generated content that look polished but undermine productivity and clarity. The Financial Times has reported on the psychological toll of always-on AI tools in the workplace. And think tanks like the World Economic Forum have warned that AI adoption is outpacing policy, skill development, and ethical guidelines in many organizations.

Where Rovo (and Friends) Shine

Rovo is strong at cross-product search and natural language questions. In Confluence, it can help teams draft, summarize, and polish content quickly. In Jira, it can reduce the pain of field population and repetitive work. In Atlassian Home, it can turn scattered goals and status updates into a more digestible feed.

Rovo can remove friction and save time. It can’t, however, fully navigate context, team alignment, or leadership.

When we forget that, I start to wonder whether Rovo is helping or hindering.

The Slippery Slope of Over-Reliance

Here’s what I’m noticing and hearing from other PMs: weekly updates copied from Atlassian Home via AI with minimal revision; Confluence pages created by AI, then barely reviewed; decisions leaning on Rovo summaries that miss nuance.

When these habits settle in, teams begin defaulting to AI instead of using AI as a tool. That default chips away at what makes Atlassian tools powerful in the first place: clarity, ownership, and shared understanding.

Five Risks Leaders Can’t Ignore

These are not hypotheticals; they’re showing up today in AI-assisted work:

Loss of context: AI can connect information, but it can't read organizational history, politics, or intent.

Stale or biased data: If your Confluence space is messy or outdated, Rovo may amplify the wrong content.

Accountability drift: When AI drafts everything, who owns the result? Who checks it?

Documentation debt: Auto-generated content adds volume but not always clarity or value.

Compliance blind spots: In regulated environments, relying on unreviewed, unversioned AI output is risky.

These echo broader industry findings, including Harvard Business Review’s look at AI “workslop” — the flood of low-quality AI-generated content that adds overhead instead of clarity. Productivity drops when output goes up and critical thinking goes down.

Dev Teams Aren’t Immune

While most of this article focuses on project and content workflows, we’re seeing similar risks emerge in software development teams.

Tools like Rovo, Cursor, and Claude are streamlining common development tasks — generating code, suggesting patterns, writing tests, and even explaining legacy systems. They’re undeniably powerful — and we’ve seen the risks firsthand when they’re used without checks.

When developers start relying too heavily on AI suggestions:

  • Code quality can degrade subtly, as context-specific nuances are missed.

  • Security risks increase, especially when AI confidently generates flawed patterns or insecure defaults.

  • Knowledge transfer stalls, since teams may stop documenting or reviewing the “why” behind what the bot wrote.

  • Debugging becomes harder, because no one remembers how a piece of logic got there — or who validated it.

We’ve seen these effects firsthand: AI saves time on the surface while creating hidden technical debt beneath.
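To make the "insecure defaults" risk concrete, here is a minimal, hypothetical sketch of the kind of shortcut an AI assistant can confidently suggest, next to the version a human reviewer should insist on. The endpoint, token handling, and function names are illustrative only, not from any real codebase.

```python
import requests

# What an AI assistant may confidently suggest: it "works", but it quietly
# disables TLS certificate verification and never times out.
def fetch_report_unsafe(api_token: str):
    return requests.get(
        "https://example.internal/api/reports",   # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_token}"},
        verify=False,   # insecure default: certificate checks turned off
    )

# What a reviewer should push for: verified TLS, a timeout, and errors
# surfaced explicitly instead of failing silently downstream.
def fetch_report(api_token: str):
    response = requests.get(
        "https://example.internal/api/reports",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,              # fail fast instead of hanging the job
    )
    response.raise_for_status()  # raise on HTTP errors so they get noticed
    return response.json()
```

The difference looks trivial in isolation, which is exactly why it slips through when no one reviews the "why" behind what the bot wrote.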

It’s the same theme: AI is useful, but only when embedded in a process that includes human judgment, peer review, and intentional architecture.

What to Watch For (and Talk About)

Red Flags in Everyday AI Use

Patterns tend to emerge across teams:

  • Status updates that start to sound generic, robotic, or oddly phrased
  • Pages that get published but not read, with no clear owner or reviewer
  • Developers or agents saying “the bot wrote that” instead of taking ownership
  • Tasks auto-assigned to the wrong team, or missing crucial context

Individually, these are manageable. But when they start stacking up, they suggest your team may be trusting the tool more than the process.

Practical Guardrails for PMs and Team Leads

You don’t need a company-wide AI policy to build smarter habits. Start here:

  • Treat AI as a first draft, not the final word.
  • Make it obvious when AI was involved. Label AI-generated content in Confluence pages, Jira work items, and internal docs so there’s no ambiguity (a minimal sketch of one way to do this follows this list).
  • Build review into your process. Look at AI-created content together in retros, grooming sessions, or planning. Make improvement part of the loop.
  • Define local working agreements. Don’t wait for IT or legal to hand you rules. Decide what’s okay (and what’s not) for your team, and write it down.
  • Emphasize judgment over speed. AI is fast. That’s its strength, but also its trap. Make space for human context, pushback, and validation.
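As referenced above, here is a minimal sketch of one way a team might make AI involvement visible: tagging a Confluence Cloud page with an agreed-upon label via the REST API. The site URL, page ID, credentials, and the "ai-assisted" label name are placeholders, assumptions for illustration rather than an Atlassian-recommended convention.

```python
import requests

# Sketch: add an "ai-assisted" label to a Confluence Cloud page so
# AI-involved content is easy to find, filter, and route for review.
# SITE, PAGE_ID, and AUTH are placeholders; use your own site and token.
SITE = "https://your-site.atlassian.net/wiki"
PAGE_ID = "123456"
AUTH = ("you@example.com", "your-api-token")  # Atlassian account + API token

def mark_ai_assisted(page_id: str) -> None:
    response = requests.post(
        f"{SITE}/rest/api/content/{page_id}/label",
        json=[{"prefix": "global", "name": "ai-assisted"}],
        auth=AUTH,
        timeout=10,
    )
    response.raise_for_status()  # fail loudly if the label was not applied

if __name__ == "__main__":
    mark_ai_assisted(PAGE_ID)
```

A script is only one option; the same convention can be enforced with an automation rule or simply by agreeing that authors add the label by hand. The point is that the convention exists and everyone follows it.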

The Real Risk Zone

The World Economic Forum and others have warned that AI adoption is outpacing policy, training, and cultural readiness. That gap — between what the tool can do and what your team knows how to use wisely — is where risk lives.

It’s up to leaders to close it.

Culture > Capability

The most important thing about AI in the Atlassian ecosystem isn’t the tech; it’s how your team uses it. Do they question it? Improve it? Ignore it? Defer to it?

Healthy teams build a culture where AI is useful but never blindly trusted, where humans stay in the loop, and where AI output is a starting point, not a deliverable.

The point isn’t just what the tools can do, but what we expect of each other when we use them.

AI Assistants Aren’t Architects

Rovo can fetch, format, summarize, and even write. It doesn’t know your customer, your team dynamics, or the “why” behind your roadmap. That’s your job.

AI is here to assist. When the assistant starts acting as the decision-maker, your biggest risks aren’t technical — they’re cultural.

Start the conversation in your next team retro: Where are we using AI today? Where might we be relying on it too much?

Just don’t forget to keep humans in the loop! 

 


 

Dave Rosenlund is an Atlassian Community Champion and the founder of the virtual Atlassian Community Events (ACE) chapter, CSX Masters (formerly ITSM/ESM Masters). He’s also a founding leader of the Program/Project Masters chapter and part of the Boston ACE leadership team. In his day job, he works with an amazing cast of colleagues at Platinum Atlassian Solution Partner, Trundl.

2 comments

Aaron Geister
Contributor
January 16, 2026

While I agree with 99% of this and love this article because it touches on the truth of the situation, the one thing I want to point out is: are we configuring our agents correctly? Are we setting up our AI-enabled environments to support the work we are doing?

As Atlassian has stated, it’s AI and humans working together for better outcomes. I have made my own mistakes with AI as we all learn to use it better. What can we change as humans to better the experience and how we use it?

The Human touch will always be needed.

Jimi Wikman
Community Champion
January 18, 2026

Great article Dave!

TAGS
AUG Leaders

Atlassian Community Events