A poorly curated knowledge base produces suspect answers
Team ’26 in Anaheim has just ended, and the more I think about the service-management-related announcements, the more I keep coming back to the same uncomfortable realization:
A lot of organizations are nowhere near ready for the kind of AI-enabled operational workflows Atlassian is now offering with agentic AI.
Why? Because the operational foundations underneath many environments are still messy.
Service management teams already know this. They’re still fighting battles years in the making: stale KB articles, ownership that exists only partially in documentation, CMDB data nobody fully trusts, and operational workflows that make sense only because experienced people know how the organization actually works.
That last part matters more than many AI conversations admit.
People compensate for operational gaps constantly. Experienced operators notice when something feels wrong. They recognize outdated remediation guidance, know which service relationships are inaccurate, and understand where the documented workflow differs from operational reality.
A human knows that the "on-call" contact in the system is actually on sabbatical, or that a specific legacy server has a quirk that is not in the runbook.
AI systems do not compensate the same way.
Once AI systems begin participating directly in operational workflows—like the new AI-native Incident Command Center showcased in Anaheim—the quality of that operational context starts mattering a lot more than the quality of the AI itself.
The real story in AI-native operations is the loss of the human compensation layer. AI does not magically solve weak operational foundations; it amplifies them.
This is still the same "crap in / crap out" problem the industry has always struggled with, but with a very different risk profile: AI can operationalize bad context much faster than humans ever could.
An AI agent operating against stale documentation or inaccurate dependencies may simply continue reasoning forward as if the context is trustworthy.
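To make the contrast concrete, here is a minimal sketch of the two escalation paths. Everything in it is invented for illustration (the record shape, the names, the compensation logic); it is not anyone's real paging system. The point is only that the automated path reasons forward from the record as written, while the human path applies knowledge that lives outside the system of record.

```python
# Hypothetical sketch: an automated escalation trusts the on-call record as-is,
# while an experienced operator compensates with out-of-band knowledge.
# All names and data shapes here are invented for illustration.
from dataclasses import dataclass


@dataclass
class OnCallRecord:
    name: str
    status: str  # what the directory says, which may be stale


def automated_escalation(record: OnCallRecord) -> str:
    # An AI agent reasons forward as if the context were trustworthy.
    return f"Paging {record.name}"


def human_escalation(record: OnCallRecord, known_absences: set[str]) -> str:
    # A human knows the directory is stale and routes around it.
    if record.name in known_absences:
        return "Escalating to secondary on-call"
    return f"Paging {record.name}"


# The directory says "active" -- the person is actually on sabbatical.
stale = OnCallRecord(name="alex", status="active")
print(automated_escalation(stale))                       # Paging alex
print(human_escalation(stale, known_absences={"alex"}))  # Escalating to secondary on-call
```

The automated path is not wrong about the data it was given; it is wrong about reality, and nothing in the workflow tells it so.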
One thing that stood out to me during the Team ’26 keynotes is that service management platforms may actually be some of the safer places for operational AI adoption.
Not because the environments are cleaner, but because service management platforms already contain the governance structures organizations need: approvals, audit trails, escalation models, and permission boundaries.
Most productivity-focused AI conversations revolve around summarization or chat.
The moment AI systems begin participating in remediation workflows, incident response, or change execution, organizations start asking different questions:
Can we trust this system?
Can we explain what it did?
Can we roll it back safely?
Who is accountable when it gets something wrong?
Service management teams already think this way. That mindset may end up becoming a significant operational advantage.
The Model Context Protocol (MCP) announcements are especially interesting through this lens.
Atlassian seems to be responding to reality instead of fighting it:
AI interaction is not going to live inside a single interface.
Organizations are already standardizing elsewhere—Claude, Cursor, Copilot, or internal tooling. By making the Atlassian Rovo MCP Server generally available, Atlassian is signaling that the interface is no longer the primary value layer.
The operational context itself is the value.
This is where the Teamwork Graph becomes strategically important. It is not just storing information; it is modeling relationships across the operational layer itself: systems, ownership, dependencies, workflows, and historical context.
When you make that context portable via MCP, you allow AI systems to reason across your organization’s operational reality.
But that brings us back to the foundation: if those relationships are modeled incorrectly inside the graph, your "portable context" is just portable noise.
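A small sketch makes the stakes visible. The service names, the graph shape, and the stale edge below are all invented; the traversal itself is just a breadth-first walk over a dependency map, not anything specific to the Teamwork Graph. One mis-modeled relationship produces both a false positive and a false negative in the assessed blast radius, and any AI reasoning over the graph inherits both.

```python
# Hypothetical sketch: a mis-modeled dependency graph distorts every
# downstream inference. Service names and edges are invented for illustration.
from collections import deque


def blast_radius(graph: dict[str, list[str]], failed: str) -> set[str]:
    """Return every service reachable from the failed one (breadth-first)."""
    seen: set[str] = set()
    queue = deque([failed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen


# What production actually looks like:
reality = {"auth": ["checkout"], "checkout": ["notifications"]}

# What the graph says: billing was migrated off auth months ago but the edge
# was never removed, and checkout -> notifications was never recorded.
stale_graph = {"auth": ["checkout", "billing"]}

print(blast_radius(reality, "auth"))      # {'checkout', 'notifications'}
print(blast_radius(stale_graph, "auth"))  # {'checkout', 'billing'}
```

An agent working from the stale graph would page the billing team for no reason and never warn the notifications team at all, while being internally consistent the entire time.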
AI is not going to reduce the importance of service management maturity.
It is going to raise it.
Once operational context becomes executable infrastructure instead of passive documentation, weak operational discipline becomes much harder to hide behind tribal knowledge and human compensation layers.
The organizations that succeed with AI-native service management will not be the ones with the flashiest tools.
They will be the organizations that did the hard work of cleaning up the operational foundations underneath them.
This is the second article in a "Beyond ITSM" series from the Atlassian Community Champions behind CSX Masters, a virtual Atlassian Community Events chapter.
Stay tuned for more.
Dave Rosenlund, Trundl