Do you trust AI in your work environment? Why or why not?

Mariia_Domska_SaaSJet
Atlassian Partner
April 17, 2026

Lately, I’ve seen more people talking about turning off Atlassian AI features, which got me thinking about the bigger picture.

I understand why companies worry about data security and following their own rules. But I’m more curious about how this affects people personally.

In my own experience exploring AI in the Atlassian environment, I sometimes worry, to be honest, that something I do might accidentally affect the larger work environment.

What concerns you most about using AI at work? 

Is it because of things like: 

  • trusting what the AI produces?
  • feeling like you’re losing control over your work processes?
  • worrying about your own data privacy?
  • or is there something else that bothers you?

I’d love to hear your honest thoughts and real experiences. Even if your company is okay with using AI, do you still feel unsure about it? If so, what makes you hesitate?

P.S. One more question: With Atlassian’s recent changes about using metadata for AI training, do you see this as a positive move, or does it make you more concerned?

4 comments

Richard Scholtes
Rising Star
April 17, 2026

There have already been cases of uncontrolled behaviour by AI models, or by agents based on them. Some even made national television news because they affected governments. So yeah, it remains a touchy topic in itself.

My company is heavily regulated and therefore has standards of controls and checks to uphold, which could easily be undermined by an AI engine that gets changed in its backend model or in how it uses data.

Therefore, also yes, the latest move of using the data is massively concerning. I will have to check very thoroughly which data is going where to prevent my company from shutting my Atlassian site down. This seems like such a short-sighted move by Atlassian.

I wonder if they thought: "If we show them that we encrypt everything and can't access that data without reason, it should be fine, right?", because that's what their explanations look like.

But the problem is a different one. The moment the question "Is my data used for training an AI model?" is answered with "yes", the grade of data confidentiality that can be allowed on the platform plummets to "internal". Why? Because nobody in their right mind would want an AI to make suggestions to anyone in the world based on your confidential or even secret data. That data is your leverage; that is your business case.

So what would that mean for a company? Usage of the platform gets restricted, and one evaluation later you realize, "if we only use it for that, it's pretty pricey". Another budget talk later, Atlassian is gone from your tool matrix, replaced by a platform that simply doesn't do this.

So, my question is: why would you, as Atlassian, make a move that drives people away from you instead of bringing them in? What desperation that we don't know of is driving them?

Mariia_Domska_SaaSJet
Atlassian Partner
April 17, 2026

Hi, @Richard Scholtes 

Thank you for sharing your thoughts. I began with a simple question, but your answer explored deeper issues and brought up more complex challenges. I really appreciate the insight you brought to our discussion.

Alexander Ruetzler
Contributor
April 17, 2026

In our environment, the main reason we cannot activate Atlassian AI features is not a lack of interest in AI, but a security and regulatory constraint.

Atlassian has done a lot in terms of data residency and transparency, and for “classic” Atlassian Cloud data (Jira, Confluence, etc.) we can pin in‑scope data to a region like the EU or Germany. However, once we look specifically at Atlassian Intelligence / Rovo, the picture becomes more complex:

  • Data residency ≠ fully regional AI processing
    Data residency ensures that in‑scope Atlassian Cloud data is stored in the selected region. But Atlassian’s own AI documentation makes it clear that, by default, data for AI processing can be transferred outside the site to third‑party LLM providers (e.g. OpenAI) in order to generate a response.
  • Multi‑region sub‑processors
    Atlassian’s sub‑processor list and AI transparency information show that the LLM layer uses globally distributed providers (OpenAI, AWS Bedrock, Google, and Atlassian‑hosted models). From a security/governance point of view, that means the LLM execution layer is only partially under Atlassian’s direct regional control and may involve US or other non‑EU infrastructure, even if the core Jira/Confluence data is pinned to the EU.
  • Enterprise option still not region‑aligned
    For Cloud Enterprise, Atlassian offers the option to use only Atlassian‑hosted LLMs so no data is sent to external LLM providers. That’s a positive step, but those Atlassian‑hosted LLMs are currently documented as running in a US data center, i.e. still not aligned with an EU‑only processing requirement.

For highly regulated financial institutions (and especially in an EEA / GDPR context), this leaves a residual risk at the AI/LLM layer that we cannot simply “accept away” right now, even with DPA, SCCs and the other contractual safeguards Atlassian provides. That’s why, in our case, Atlassian AI features are currently not enabled.

So to answer your question:

  • I do trust Atlassian to be transparent and serious about security and privacy.
  • But given our regulatory requirements, we need stronger guarantees around where AI processing happens (not just where data is stored) before we can responsibly turn these features on. Until Atlassian’s AI infrastructure is better aligned with strict regional processing expectations, our hands are tied, regardless of how attractive the functionality is.

From a user perspective, that can feel frustrating, because the decision is less about “trusting AI output” and more about “can we legally and contractually justify the AI processing geography for our data”.

Mariia_Domska_SaaSJet
Atlassian Partner
April 17, 2026

Hi, @Alexander Ruetzler 

Thank you for your responses. I really appreciate that you took the time to share your thoughtful insights.
Chris
Contributor
April 17, 2026

Our reason for not using it is GDPR. We cannot allow any customer data to even pass through the United States, as we don't fully trust that it won't be parsed in some way. It's 100% about the data privacy of our customers. I've been over this with our Atlassian representative, so it's not a matter of perception or of being overly cautious. The penalties for GDPR violations can be absolutely crippling to a company of any size and are not worth the risk.

Mariia_Domska_SaaSJet
Atlassian Partner
April 17, 2026

Hi, @Chris 

Thanks for sharing this. I didn’t realize how strict these GDPR rules can be. I’ll look into this more since it’s an important point I hadn’t really thought about before.
Barbara Szczesniak
Rising Star
April 17, 2026

My biggest concern is how much electricity and water are used to cool the data centers. Are the effects on the environment, and on the people who live near these data centers, worth me not having to use my own brain and time to come up with a better way to structure the content on a page?
