AI Privacy & Security: Rovo Adoption Should Start with Governance


Why data governance is your first line of defense


by @Denis Boisvert (Trundl) & @Dave Rosenlund _Trundl_  

Executive Summary

Organizations considering Rovo (and generative AI in general) frequently focus on the wrong risk first. The core question isn’t whether AI will provide incorrect results.

The core question is whether your internal data is stored, reused, or exposed without your control.

Most enterprise Gen-AI incidents will stem from data governance failures: broad permissions, leaky integrations, unclear retention policies, or hidden sub-processors.

The risk: internal content becomes easier to discover and share once AI is implemented, especially across multiple connected systems.

The fix: responsible adoption means insisting on zero-retention processing, contractual and technical clarity on training boundaries, tenant isolation, and a hard look at permissions and connectors.

Atlassian’s commitment: Atlassian states that customer inputs and outputs in Rovo are not used to train, fine-tune, or improve Atlassian or third-party models.

They also state that the LLM providers used by Rovo do not retain your inputs/outputs or use them to improve their services.

For many non-regulated organizations, those commitments may be sufficient. For regulated or privacy-sensitive environments, you still need to validate auditability, data residency, sub-processors, connector scope, and any custom AI integrations you introduce.

We wrote this article based on conversations with dozens of Atlassian customers and research into secure enterprise AI adoption. We are not AI experts, but we are committed to learning and sharing what we learn.


The General Landscape: Why Training and Retention Matter

Generative AI models are trained on massive datasets. If enterprise content — source code, internal strategy, PII, regulated customer data — ends up in a training set or retained outside your organization’s boundaries, the risk is obvious: sensitive data can be surfaced later, unintentionally or maliciously.

Enterprise-grade AI adoption therefore depends on architectural and contractual guardrails:

  • Zero-retention inference: data exists only during the inference window and is not stored for later reuse.

  • Clear training boundaries: vendors must explicitly state whether inputs/outputs are used for training or service improvement.

  • Tenant isolation and auditability: organizations need proof of who accessed what, when, where, and why.

  • Data residency: regulated businesses must keep AI-handled data in approved regions.

These are no longer “nice-to-haves.” They are table stakes for buying decisions in 2026 and beyond.

Atlassian's Policy, and What It Means

Rovo is Atlassian’s cross-product AI layer: search, chat, and agents over Jira, Confluence, etc. — and connected third-party tools.

That cross-product, connector-driven design creates value, but it also carries risk.

Data Use + Model Training Boundaries

Atlassian’s current policy is clear: customer data submitted to or generated by Rovo is not used to train, fine-tune, or improve Atlassian models.

The same no-training, no-retention-for-training rule applies to the third-party LLM providers used by Rovo.

In straightforward terms: your prompts and Rovo’s outputs are used to answer your question, but they are not kept for training.

The Hybrid LLM Architecture

Rovo does not rely on a single model. Atlassian documents a hybrid architecture that can select among multiple LLM families to optimize latency and task fit.

Atlassian’s AI documentation lists the specific model families that may be used; the mix includes Atlassian-hosted models and third-party LLMs.

Model selection is handled automatically by Atlassian’s orchestration layer; customers can’t currently pin a specific model.

Security implication: hybrid orchestration can reduce risk (by routing some tasks to Atlassian-hosted models) but increases the importance of understanding the provider chain. You should validate which services are in scope for your tenant and contract.

Data Residency + Locality

Rovo now supports data residency. With residency enabled, in-scope Rovo data remains stored in your pinned region, aligned with Jira/Confluence residency settings.

Residency was rolled out gradually through 2025 and is now generally available across supported regions.

Security implication: for regulated customers, residency is a gating control. Confirm your required region is supported before rollout.

Auditability

Rovo activity is now captured in Atlassian’s organization audit log. Logged events include:

  • Chat started
  • Agent created/updated/deleted
  • Bookmarks and definitions created/updated/deleted
  • Third-party connectors created/updated/deleted

Audit logs can be filtered by product (“Atlassian Rovo”) and exported via Atlassian’s audit log tooling.

Security implication: this enables Security Information and Event Management (SIEM)* ingestion and incident investigation, but note that some Rovo audit depth may depend on your Atlassian Guard configuration.

*SIEM is a cybersecurity framework — software (or sometimes hardware / managed service) — that consolidates security-related data from across an organization’s IT environment. The term refers to a merger of two earlier concepts: Security Information Management (SIM) and Security Event Management (SEM).
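To make the SIEM-ingestion point concrete, here is a minimal sketch of the pre-forwarding step: filtering Rovo-related events out of an audit log export. The event shape and action names below are illustrative assumptions, not Atlassian's actual schema; check what your tenant's audit log export really emits and adapt the allow-list accordingly.

```python
# Sketch: select Rovo-related events from an org audit log export before
# forwarding them to a SIEM. The event dict shape and action-name strings
# are assumptions for illustration; match them to your real export schema.

from typing import Iterable

# Hypothetical action names mirroring the logged-event categories above.
ROVO_ACTIONS = {
    "rovo.chat.started",
    "rovo.agent.created", "rovo.agent.updated", "rovo.agent.deleted",
    "rovo.connector.created", "rovo.connector.updated", "rovo.connector.deleted",
}

def filter_rovo_events(events: Iterable[dict]) -> list[dict]:
    """Keep only events whose action name is in the Rovo allow-list."""
    return [e for e in events if e.get("action") in ROVO_ACTIONS]

if __name__ == "__main__":
    sample = [
        {"action": "rovo.chat.started", "actor": "uid-1"},
        {"action": "jira.issue.updated", "actor": "uid-2"},
        {"action": "rovo.connector.created", "actor": "uid-3"},
    ]
    print(len(filter_rovo_events(sample)))  # 2
```

In a real pipeline this filter would sit between the audit log export (pulled on a schedule) and your SIEM's ingestion endpoint, so Rovo activity lands in its own alerting scope.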

The Risks That Matter Most in Real Deployments

Rovo’s managed AI path looks aligned with modern privacy expectations. The biggest remaining risks are classic governance risks amplified by AI.

The Human Layer: AI Trust and Prompt Hygiene

We increasingly see people trust generative AI more than they trust their own neighbors. That’s a problem—not because AI is inherently untrustworthy, but because humans often treat it as if it were. People need to learn how to interact with Gen-AI the way they learn to communicate with any new human: with context, caution, and boundaries.

In practice, many users type quickly and overshare without thinking. With AI, that habit becomes a security risk. The safest technical architecture in the world won’t help if employees paste sensitive data into prompts that don’t require it. Prompt hygiene—sharing only what’s necessary—is now part of enterprise security culture.

Risk #1: Permission Sprawl Becomes AI-Accelerated Disclosure

Rovo is permission-aware: it can only surface what a user is entitled to see in Jira, Confluence, and connected sources.

That sounds safe — until you remember most organizations carry years of accidental openness:

  • “Everyone can view” Confluence spaces
  • Legacy Jira project roles
  • Old shared drives connected through OAuth
  • External guests never removed

AI doesn’t need to leak data to create an incident.
It only needs to make already-exposed data easier to find.
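A practical first step is to inventory over-broad grants before Rovo makes them discoverable. The sketch below assumes a simplified, hypothetical permissions-export format (a space key plus a list of grants); it is not an Atlassian API, so adapt it to whatever your Confluence admin tooling actually produces.

```python
# Sketch: flag over-broad view access in an exported Confluence permissions
# report. The report structure and principal names are assumptions for
# illustration; adapt them to your real admin export.

OVERBROAD_PRINCIPALS = {"anonymous", "all-licensed-users"}

def flag_open_spaces(report: list[dict]) -> list[str]:
    """Return space keys where 'view' is granted to anonymous or all users."""
    flagged = []
    for space in report:
        for grant in space.get("grants", []):
            if (grant.get("operation") == "view"
                    and grant.get("principal") in OVERBROAD_PRINCIPALS):
                flagged.append(space["key"])
                break  # one over-broad grant is enough to flag the space
    return flagged
```

Running a check like this before enabling Rovo turns "years of accidental openness" into a concrete remediation list instead of a post-incident surprise.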

Risk #2: Connectors Widen the Data Plane

Rovo’s value increases with third-party connectors — Google Drive, Slack, GitHub, SharePoint, etc. But every connector:

  • expands what Rovo can index and retrieve
  • introduces another retention and sub-processor chain
  • increases blast radius if scopes are too broad

Treat each connector as a mini vendor review, not a convenience toggle.

Risk #3: Custom AI Integrations = Shared Responsibility

Atlassian’s no-training/no-retention commitments apply to the managed Rovo/Atlassian Intelligence path.

If you:

  • build custom agents that call external AI endpoints
  • use Marketplace/Forge apps that route data to non-Atlassian models
  • add third-party tools with their own AI layers

…you own that security surface: retention, encryption, residency, and access controls.

Trust but Verify

Atlassian’s Trust Center and Rovo privacy docs state their data rules more plainly than most vendors do.

Still, enterprise security requires verification. Here are the right questions for procurement and risk review:

  1. Provider enforcement: Are zero-retention policies technically enforced at each LLM provider, or primarily contractual?
  2. Connector scope: What data sources are connected, with what OAuth scopes, and who approved them?
  3. Permission drift: Do Jira/Confluence permissions reflect least privilege today, or legacy openness?
  4. Audit completeness: Are Rovo events exported and monitored in your SIEM?
  5. Sub-processor mapping: Can you map each Rovo AI feature to the providers in Atlassian’s AI documentation and sub-processor lists?

The AI landscape is changing rapidly, so repeat this verification regularly (at least quarterly).

Rovo Readiness Scorecard (What to Validate Before Deploying)

To help with the trust-but-verify step, we suggest developing a scorecard that contains at least the following verification elements:

Atlassian-owned controls (vendor baseline)

  • No model training on customer data; no retention for training.
  • Hybrid LLM routing with Atlassian-hosted/open-source options in the stack.
  • Rovo activities logged in Atlassian audit log.
  • Data residency supported and generally available.

Customer-owned controls (your gating items)

  • Permissions cleanup: review Confluence spaces, Jira projects, and guest access.
  • Connector governance: approve scopes, classify connected data, and review third-party retention policies.
  • Monitoring: export audit events to SIEM; alert on anomalous AI usage patterns.
  • Custom AI endpoint review: if any external model is involved, validate its retention and training terms separately.
  • Legal/security involvement early: AI adoption is a data-governance program, not a feature rollout.

We see this as a minimal baseline and urge you to build on it, based on your own environment and your organization's risk tolerance.
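If it helps to make the gating explicit, the customer-owned controls above can be expressed as a simple checklist with a hard gate. The item names and the all-must-pass rule are our suggestion for a minimal baseline, not an Atlassian requirement.

```python
# Sketch: the customer-owned scorecard items as data, with a hard gate.
# Item names mirror the checklist above; the all-must-pass rule is our
# suggested baseline, not an Atlassian requirement.

SCORECARD = {
    "permissions_cleanup": False,
    "connector_governance": False,
    "siem_monitoring": False,
    "custom_ai_review": True,   # True if no external models, or terms validated
    "legal_security_signoff": False,
}

def ready_to_deploy(scorecard: dict[str, bool]) -> bool:
    """Gate rollout on every customer-owned control passing."""
    return all(scorecard.values())

def open_items(scorecard: dict[str, bool]) -> list[str]:
    """List the controls that still block deployment."""
    return [item for item, done in scorecard.items() if not done]
```

Even a structure this small is useful: it forces each control to have an owner and a yes/no answer, and it makes "we'll clean up permissions later" visible as a blocked rollout rather than a forgotten action item.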

Summary

Anticipating and avoiding wrong answers is a valid AI concern, but it isn't the biggest risk. The biggest risk is silent data misuse or over-exposure.

Rovo’s managed AI path appears to meet modern enterprise expectations: no training on customer data, non-retaining providers, hybrid model orchestration, auditability, and residency controls.

However, AI amplifies the risks that exist in the environments it’s deployed into. If your permissions are messy or your connectors are uncontrolled, Rovo will surface sensitive information exactly as designed—to the wrong people.

Adopt Rovo like you’d adopt any enterprise data plane:

  • clean your access model,

  • vet your integrations,

  • monitor usage,

  • revisit your SIEM practices regularly,

and treat Atlassian AI as infrastructure that you own.

 


Denis Boisvert and Dave Rosenlund are Atlassian Community Champions with experience on both the customer and partner sides of the Atlassian ecosystem. While neither claims to be an AI expert, both are deep in learning mode. That’s why this article was peer-reviewed by multiple Rovo explorers.

This is the first in a series of explorations. Feedback, questions, topic suggestions, and even pushback are not only welcome—they’re encouraged.

 

1 comment

Susan Waldrip
Community Champion
December 11, 2025

@Dave Rosenlund _Trundl_ and @Denis Boisvert (Trundl), great article! I'll be sharing this with the AI working groups in my organization, it provides people a lot of things to consider AND to *know* before just releasing tools to "help make things faster and easier". I appreciate the time that you, Denis, and the Rovo folks took to put this together and share it!
