If you’re a project management leader in an Atlassian-powered organization, you may already be experimenting with Rovo.
Maybe you’re drafting updates or tracking progress in Atlassian Home, summarizing pages in Confluence, or watching Rovo take its first steps across your workflows, often alongside other generative AI tools like ChatGPT, Claude, or Cursor.
These capabilities are often helpful. But there’s a question we need to ask:
What happens when "good enough" AI work becomes the default?
I’m not making an anti-AI argument here. I’m bullish on Atlassian’s AI direction.
In fact, I use Rovo and other generative AI tools daily, relying on them for editing, image generation, and research for my community articles.
But as AI assistants become more embedded in how we work, we need to look closely at how much trust we’re placing in them — and what happens when that trust becomes lazy, uncritical, or invisible.
In this piece, I’ll be building on two earlier articles I co-wrote with my colleague and fellow Atlassian Community Champion, Denis Boisvert. In Rovo Adoption Should Start with Privacy & Security we outlined the foundational risks and data boundaries teams should understand before rolling out generative AI. In When to Use Rovo (and When Not To) we focused on how to match AI capabilities to the right use cases.
This article takes the next step: What happens when helpful becomes habitual — and we stop noticing where AI ends and human thinking begins?
From my perspective, Rovo is in its early adolescence — and evolving quickly. But most of us are still in the early learning stages. Sure, some are further along, but we’re all students in this new AI era.
What I’m seeing “out there in the trenches” is Atlassian customers cautiously testing this new frontier. They’re trying Rovo’s automation and summarization features, usually with limited guidance on usage policy and lingering questions about data governance and model boundaries.
In the Rovo Adoption Should Start with Privacy & Security article, Denis and I outlined why every team needs a baseline understanding of what Rovo can see, should see, and must never see.
Right now, many organizations are testing Rovo and Confluence AI in pockets, often without formal acceptable use guidelines. Teams struggle to track what’s human-authored versus AI-generated and aren’t always sure how to review or validate what AI creates. It’s an exciting but uneven landscape. In that ambiguity, over-reliance creeps in.
This isn’t just an Atlassian-specific concern. Across industries, we’re seeing early signs of generative AI overuse or blind trust.
Harvard Business Review recently called out the rise of “AI workslop” — floods of auto-generated content that look polished but undermine productivity and clarity. The Financial Times has reported on the psychological toll of always-on AI tools in the workplace. And think tanks like the World Economic Forum have warned that AI adoption is outpacing policy, skill development, and ethical guidelines in many organizations.
Rovo is strong at cross-product search and natural language questions. In Confluence, it can help teams draft, summarize, and polish content quickly. In Jira, it can reduce the pain of field population and repetitive work. In Atlassian Home, it can turn scattered goals and status updates into a more digestible feed.
Rovo can remove friction and save time. What it can’t do is supply organizational context, build team alignment, or exercise leadership.
When we forget that, I start to wonder whether Rovo is helping or hindering.
Here’s what I’m noticing and hearing from other PMs: weekly updates copied from Atlassian Home via AI with minimal revision; Confluence pages created by AI, then barely reviewed; decisions leaning on Rovo summaries that miss nuance.
When these habits settle in, teams begin defaulting to AI instead of using AI as a tool. That default chips away at what makes Atlassian tools powerful in the first place: clarity, ownership, and shared understanding.
These are not hypotheticals; they’re showing up today in AI-assisted work:
Loss of context: AI can connect information, but it can't read organizational history, politics, or intent.
Stale or biased data: If your Confluence space is messy or outdated, Rovo may amplify the wrong content.
Accountability drift: When AI drafts everything, who owns the result? Who checks it?
Documentation debt: Auto-generated content adds volume but not always clarity or value.
Compliance blind spots: In regulated environments, relying on unreviewed, unversioned AI output is risky.
These echo the broader industry findings noted earlier, including Harvard Business Review’s research on AI “workslop”: productivity drops when output goes up and critical thinking goes down.
While most of this article focuses on project and content workflows, we’re seeing similar risks emerge in software development teams.
Tools like Rovo, Cursor, and Claude are streamlining common development tasks — generating code, suggesting patterns, writing tests, and even explaining legacy systems. They’re undeniably powerful — and we’ve seen the risks firsthand when they’re used without checks.
When developers start relying too heavily on AI suggestions:
Code quality can degrade subtly, as context-specific nuances are missed.
Security risks increase, especially when AI confidently generates flawed patterns or insecure defaults.
Knowledge transfer stalls, since teams may stop documenting or reviewing the “why” behind what the bot wrote.
Debugging becomes harder, because no one remembers how a piece of logic got there — or who validated it.
On the surface, AI is saving time. Beneath it, hidden technical debt is accumulating.
It’s the same theme: AI is useful, but only when embedded in a process that includes human judgment, peer review, and intentional architecture.
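To make the “insecure defaults” risk above concrete, here’s a minimal sketch of the kind of snippet an assistant might confidently produce, next to what a human reviewer would ask for. The endpoint, token, and function names are hypothetical; the point is the review habit, not this particular code.

```python
# A minimal sketch, not taken from any real project: the kind of "works on the
# first try" code an AI assistant might confidently suggest, alongside the
# version a human reviewer would ask for. The URL and token are hypothetical.

import os
import requests

API_URL = "https://example.invalid/api/v1/reports"  # hypothetical endpoint


def fetch_reports_ai_suggested():
    """What the assistant drafted: looks fine at a glance, ships three problems."""
    token = "sk-live-1234567890abcdef"  # reviewer: secret hard-coded in source
    return requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        verify=False,    # reviewer: TLS verification silently disabled
    ).json()             # reviewer: no timeout, no status check


def fetch_reports_reviewed():
    """The same call after human review."""
    token = os.environ["REPORTS_API_TOKEN"]  # injected at runtime, not committed
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,      # fail fast instead of hanging the caller
    )
    response.raise_for_status()  # surface errors instead of parsing bad bodies
    return response.json()
```

Nothing in the reviewed version is exotic. The fix is ordinary peer-review discipline, which is exactly what quiet over-reliance on the assistant erodes.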
Patterns like these tend to emerge across teams. Individually, they’re manageable. But when they start stacking up, they suggest your team may be trusting the tool more than the process.
You don’t need a company-wide AI policy to build smarter habits. Start with the basics already covered here: agree on what AI can and can’t see, review and validate what it produces, and be clear about who owns the result.
The World Economic Forum and others have warned that AI adoption is outpacing policy, training, and cultural readiness. That gap — between what the tool can do and what your team knows how to use wisely — is where risk lives.
It’s up to leaders to close it.
The most important thing about AI in the Atlassian ecosystem isn’t the tech; it’s how your team uses it. Do they question it? Improve it? Ignore it? Defer to it?
Healthy teams build a culture where AI is useful but never blindly trusted, where humans stay in the loop, and where AI output is a starting point, not a deliverable.
The point isn’t just what the tools can do, but what we expect of each other when we use them.
Rovo can fetch, format, summarize, and even write. It doesn’t know your customer, your team dynamics, or the “why” behind your roadmap. That’s your job.
AI is here to assist. When the assistant starts acting as the decision-maker, your biggest risks aren’t technical — they’re cultural.
Start the conversation in your next team retro: Where are we using AI today? Where might we be relying on it too much?
Just don’t forget to keep humans in the loop!
Dave Rosenlund is an Atlassian Community Champion and the founder of the virtual Atlassian Community Events (ACE) chapter CSX Masters (formerly known as ITSM/ESM Masters). He’s also a founding leader of the Program/Project Masters chapter and part of the Boston ACE leadership team. In his day job, he works with an amazing cast of colleagues at Trundl, a Platinum Atlassian Solution Partner.