by @Denis Boisvert (Trundl) & @Dave Rosenlund _Trundl_
Organizations considering Rovo (and generative AI in general) frequently focus on the wrong risk first. The core question isn’t whether AI will provide incorrect results.
The core question is whether your internal data is stored, reused, or exposed without your control.
Enterprise Gen-AI incidents are going to stem from data governance failures: broad permissions, leaky integrations, unclear retention policies, or hidden sub-processors.
The risk: internal content becomes easier to discover and share once AI is implemented, especially across multiple connected systems.
The fix: responsible adoption means insisting on zero-retention processing, contractual and technical clarity on training boundaries, tenant isolation, and a hard look at permissions and connectors.
Atlassian’s commitment: Atlassian states that customer inputs and outputs in Rovo are not used to train, fine-tune, or improve Atlassian or third-party models.
They also state that the LLM providers used by Rovo do not retain your inputs/outputs or use them to improve their services.
For many non-regulated organizations, those commitments may be sufficient. For regulated or privacy-sensitive environments, you still need to validate auditability, data residency, sub-processors, connector scope, and any custom AI integrations you introduce.
We wrote this article based on conversations with dozens of Atlassian customers and research into secure enterprise AI adoption. We are not AI experts, but we are committed to learning and sharing what we learn.
Generative AI models are trained on massive datasets. If enterprise content — source code, internal strategy, PII, regulated customer data — ends up in a training set or retained outside your organization’s boundaries, the risk is obvious: sensitive data can be surfaced later, unintentionally or maliciously.
Enterprise-grade AI adoption therefore depends on architectural and contractual guardrails:
Zero-retention inference: data exists only during the inference window and is not stored for later reuse.
Clear training boundaries: vendors must explicitly state whether inputs/outputs are used for training or service improvement.
Tenant isolation and auditability: organizations need proof of who accessed what, when, where, and why.
Data residency: regulated businesses must keep AI-handled data in approved regions.
These are no longer “nice-to-haves.” They are table stakes for buying decisions in 2026 and beyond.
Rovo is Atlassian’s cross-product AI layer: search, chat, and agents that work across Jira, Confluence, and connected third-party tools.
That cross-product, connector-driven design creates value, but it also widens the surface you have to govern.
Atlassian’s current policy is clear: customer data submitted to or generated by Rovo is not used to train, fine-tune, or improve Atlassian models.
The same “no training / no retention for training” rule applies to third-party LLM providers used by Rovo.
Atlassian’s approach to AI begins with a clear boundary: customer data is not used to train any model—period. This applies to Atlassian's internal models and any third-party LLMs engaged via the Rovo platform.
In straightforward terms: your prompts and Rovo’s outputs are used to answer your question, but they are not kept for training.
Rovo does not rely on a single model. Atlassian documents a hybrid architecture that can select among multiple LLM families to optimize latency and task fit.
Atlassian documents which model families may be used, but selection is handled automatically by Atlassian’s orchestration layer; customers can’t currently pin a specific model.
Security implication: hybrid orchestration can reduce risk (by routing some tasks to Atlassian-hosted models) but increases the importance of understanding the provider chain. You should validate which services are in scope for your tenant and contract.
Rovo now supports data residency. With residency enabled, in-scope Rovo data remains stored in your pinned region, aligned with Jira/Confluence residency settings.
Residency was rolled out gradually through 2025 and is now generally available across supported regions.
Security implication: for regulated customers, residency is a gating control. Confirm your required region is supported before rollout.
Rovo activity is now captured in Atlassian’s organization audit log, with Rovo-specific events recorded alongside other product activity.
Audit logs can be filtered by product (“Atlassian Rovo”) and exported via Atlassian’s audit log tooling.
Security implication: this enables Security Information and Event Management (SIEM)* ingestion and incident investigation, but note that some Rovo audit depth may depend on your Atlassian Guard configuration.
*SIEM is a cybersecurity framework — software (or sometimes hardware / managed service) — that consolidates security-related data from across an organization’s IT environment. The term refers to a merger of two earlier concepts: Security Information Management (SIM) and Security Event Management (SEM).
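If you want to automate that ingestion, the sketch below pulls recent organization audit events and keeps the Rovo-related ones. This is a minimal sketch, assuming the Atlassian Organizations REST API and an org admin API key; the filter is our own heuristic, so check the exact event names against your tenant’s export before wiring it into a SIEM pipeline.

```python
# Rough sketch: pull recent organization audit events and keep the
# Rovo-related ones for forwarding to a SIEM. Assumes the Atlassian
# Organizations REST API (GET /admin/v1/orgs/{orgId}/events) and an
# org admin API key. The "rovo" substring match is our own heuristic;
# verify real event names against your tenant's audit log export.
import os
import requests

ORG_ID = os.environ["ATLASSIAN_ORG_ID"]        # your organization ID
API_KEY = os.environ["ATLASSIAN_ORG_API_KEY"]  # org admin API key

url = f"https://api.atlassian.com/admin/v1/orgs/{ORG_ID}/events"
headers = {"Authorization": f"Bearer {API_KEY}", "Accept": "application/json"}

rovo_events = []
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    payload = resp.json()
    for event in payload.get("data", []):
        # Crude filter: keep events whose attributes mention Rovo.
        if "rovo" in str(event.get("attributes", {})).lower():
            rovo_events.append(event)
    # The API paginates with a links.next URL; stop when it disappears.
    url = payload.get("links", {}).get("next")

print(f"{len(rovo_events)} Rovo-related events ready to forward to the SIEM")
```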
Rovo’s managed AI path looks aligned with modern privacy expectations. The biggest remaining risks are classic governance risks amplified by AI.
We increasingly see people trust generative AI more than they trust their own neighbors. That’s a problem—not because AI is inherently untrustworthy, but because humans often treat it as if it were. People need to learn how to interact with Gen-AI the way they learn to communicate with any new human: with context, caution, and boundaries.
In practice, many users type quickly and overshare without thinking. With AI, that habit becomes a security risk. The safest technical architecture in the world won’t help if employees paste sensitive data into prompts that don’t require it. Prompt hygiene—sharing only what’s necessary—is now part of enterprise security culture.
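Some teams back that culture up with lightweight tooling. Below is a toy sketch of a pre-prompt check that flags obvious secrets or PII in a draft prompt; the patterns are illustrative only, and a real deployment would lean on a proper DLP service instead.

```python
# Toy sketch: flag obvious secrets/PII before text goes into an AI prompt.
# The patterns below are illustrative, not exhaustive; real deployments
# typically use a DLP service rather than ad-hoc regexes.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def prompt_warnings(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a draft prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

draft = "Summarize this: contact jane@example.com, key AKIAABCDEFGHIJKLMNOP"
hits = prompt_warnings(draft)
if hits:
    print(f"Warning: draft prompt may contain sensitive data: {hits}")
```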
Rovo is permission-aware: it can only surface what a user is entitled to see in Jira, Confluence, and connected sources.
That sounds safe — until you remember most organizations carry years of accidental openness: Confluence spaces open to the whole company, Jira projects shared more broadly than anyone intended, and group memberships nobody has reviewed in years.
AI doesn’t need to leak data to create an incident.
It only needs to make already-exposed data easier to find.
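One way to get ahead of that is to sweep for obvious over-exposure before (and periodically after) enabling Rovo. Here is a minimal sketch that flags Confluence spaces readable by anonymous users, assuming the Confluence Cloud v1 space API with expand=permissions; verify the field names (such as anonymousAccess) against your instance before trusting the results.

```python
# Rough sketch: flag Confluence spaces that grant anonymous read access,
# i.e. content Rovo could legitimately surface to anyone. Assumes the
# Confluence Cloud v1 REST API with basic auth and an API token; the
# anonymousAccess field comes from the v1 space-permission schema and
# should be verified against your own instance.
import os
import requests

BASE = "https://your-site.atlassian.net/wiki"   # assumption: your site URL
AUTH = (os.environ["ATLASSIAN_EMAIL"], os.environ["ATLASSIAN_API_TOKEN"])

start = 0
open_spaces = []
while True:
    resp = requests.get(
        f"{BASE}/rest/api/space",
        params={"expand": "permissions", "limit": 50, "start": start},
        auth=AUTH,
    )
    resp.raise_for_status()
    page = resp.json()
    for space in page.get("results", []):
        for perm in space.get("permissions", []):
            op = perm.get("operation", {})
            # Flag spaces where anonymous users hold the read operation.
            if perm.get("anonymousAccess") and op.get("operation") == "read":
                open_spaces.append(space["key"])
                break
    if page.get("size", 0) < 50:   # last page reached
        break
    start += 50

print("Spaces readable by anonymous users:", sorted(set(open_spaces)))
```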
Rovo’s value increases with third-party connectors — Google Drive, Slack, GitHub, SharePoint, etc. But every connector widens the content Rovo can search, inherits the source system’s permission model, and raises its own retention and sub-processor questions.
Treat each connector as a mini vendor review, not a convenience toggle.
Atlassian’s no-training/no-retention commitments apply to the managed Rovo/Atlassian Intelligence path.
If you build custom AI integrations outside that managed path (your own agents, scripts, or direct calls to an LLM provider), you own that security surface: retention, encryption, residency, and access controls.
Atlassian’s Trust Center and Rovo privacy docs are better than most vendors in how plainly they state their data rules.
Still, enterprise security requires verification: procurement and risk review should confirm each of these commitments against your own contract, tenant configuration, connector scope, and sub-processor list.
The AI landscape is changing rapidly, so we suggest repeating the verify step on a regular basis (at least once a quarter).
To help with the trust-but-verify step, we suggest developing a scorecard that minimally covers the guardrails discussed above: training boundaries, zero-retention processing, tenant isolation, auditability, data residency, sub-processors, and connector scope.
We see this as a minimal baseline and urge you to build on it based on your own environment and your organization’s risk appetite.
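To make the quarterly review repeatable, the scorecard can live as a small piece of code next to your other compliance checks. The elements and evidence sources below are our suggestions, drawn from the guardrails discussed above; they are not an Atlassian artifact.

```python
# Rough sketch of a quarterly trust-but-verify scorecard. Each element maps
# to a guardrail discussed above; statuses are filled in manually (or by
# scripts like the audit-log and permission sweeps) during each review.
from dataclasses import dataclass

@dataclass
class Check:
    element: str       # what we are verifying
    evidence: str      # where the proof lives (doc, contract, export)
    passed: bool       # result of this quarter's review

scorecard = [
    Check("No training on customer inputs/outputs", "vendor docs + contract", True),
    Check("Zero-retention inference by LLM providers", "vendor docs + contract", True),
    Check("Data residency pinned to approved region", "admin console export", True),
    Check("Sub-processor list reviewed for changes", "trust center changelog", False),
    Check("Connector scopes reviewed (Drive, Slack, etc.)", "connector inventory", False),
    Check("Rovo audit events flowing into the SIEM", "SIEM dashboard", True),
]

failed = [c.element for c in scorecard if not c.passed]
print("Quarterly review:", "PASS" if not failed else f"FOLLOW UP on {failed}")
```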
While anticipating and avoiding wrong answers is a valid AI concern, it’s not the biggest risk. The bigger risk is silent data misuse or over-exposure.
Rovo’s managed AI path appears to meet modern enterprise expectations: no training on customer data, non-retaining providers, hybrid model orchestration, auditability, and residency controls.
However, AI amplifies the risks that exist in the environments it’s deployed into. If your permissions are messy or your connectors are uncontrolled, Rovo will surface sensitive information exactly as designed—to the wrong people.
Adopt Rovo like you’d adopt any enterprise data plane:
clean your access model,
vet your integrations,
monitor usage,
and treat Atlassian AI as infrastructure that you own.
Denis Boisvert and Dave Rosenlund are Atlassian Community Champions with experience on both the customer and partner sides of the Atlassian ecosystem. While neither claims to be an AI expert, both are deep in learning mode. That’s why this article was peer-reviewed by multiple Rovo explorers.
This is the first in a series of explorations. Feedback, questions, topic suggestions, and even pushback are not only welcome—they’re encouraged.