I'd like to start the AI Beta program, but my legal team and I are concerned about this statement in the terms that apply to Your Data, linked here.
It states that Atlassian has rights to our data to process, copy, distribute, export, and display it... can someone please clarify what this means? We would like to start the AI beta, but I want to ensure that our data is not shared outside our organization and that all existing security controls remain in place for our proprietary data.
I would also like to know whether Atlassian is using OpenAI and, if so, which OpenAI services, given that OpenAI does retain some data depending on how it is processed through the platform.
2. Terms that apply to Your Data. You retain all right, title, and interest (including all intellectual property rights) in and to any Input or Output. To the extent you have any right, title, or interest in Input or Output, you grant us a worldwide, perpetual license to access, use, process, copy, distribute, perform, export, and display, Input and Output in connection with improving Atlassian Intelligence, consistent with the Atlassian Trust Center. You will not, and you will direct End Users not to, represent that Output from Atlassian Intelligence was human-generated.
Hi,
We had the same questions. The answers we got back completely conflict with what the terms say, though.
And ultimately I've not been able to get approval to use it as a result.
I've pasted the responses they gave us below, in case they help.
Good morning, and thank you for your patience. My senior colleague had some time to review the case. I appreciate you bringing these concerns to us, and we’ve taken your feedback to the product team for the Atlassian Intelligence product.
The first thing to know is that all the AI features will follow the existing data security and privacy guidelines outlined in the Atlassian Trust Center. There is no exception rule for the new AI features. If you note a potential conflict, always remember that the Trust Center information will take priority. Quoting the relevant excerpts for ease of reading:
Does Atlassian send customer data to OpenAI’s platform to train its services?
The data you submit and the responses you receive via Atlassian Intelligence are not used to fine-tune or improve OpenAI’s models or service. Each data request is sent to OpenAI individually, over an SSL-encrypted service, to process and send back to Atlassian.
Does Atlassian Intelligence use my data to serve other customers?
The data you submit and the responses you receive are used only to serve your experience. They are not used to train models across customers or shared between customers.
Does OpenAI store Atlassian customer data?
No, OpenAI does not store the data you submit or the responses you receive.
Regarding the license provided to Atlassian:
Terms that apply to Your Data. You retain all right, title, and interest (including all intellectual property rights) in and to any Input or Output. To the extent you have any right, title, or interest in Input or Output, you grant us a worldwide, perpetual license to access, use, process, copy, distribute, perform, export, and display, Input and Output in connection with improving Atlassian Intelligence, consistent with the Atlassian Trust Center.
We can confirm that Atlassian doesn’t use customer data to train the AI. However, as we gather data on how our non-AI products are used in order to improve the product, we do the same for our AI. The same rules apply to our use of this data as in the Trust Center.
In Atlassian Technical Support, we are unfortunately not qualified to speak directly to the legal documents. You can try reaching out for more specifics here:
https://www.atlassian.com/company/contact/general-inquiries#/
Alternatively, if you want further clarification on the matter, we recommend that you have a lawyer go over the information in our policies and determine whether the risk to you is too great; you can opt out of the feature as a result.
We also had this -
I have looked over the legal page, and I must provide the disclaimer that I am not a lawyer, however, under the link provided regarding the "license" provided to Atlassian:
2. Terms that apply to Your Data. You retain all right, title, and interest (including all intellectual property rights) in and to any Input or Output. To the extent you have any right, title, or interest in Input or Output, you grant us a worldwide, perpetual license to access, use, process, copy, distribute, perform, export, and display, Input and Output in connection with improving Atlassian Intelligence, consistent with the Atlassian Trust Center.
Here, the specific item to take note of is "consistent with the Atlassian Trust Center," which means that any restrictions that normally apply to the use of their content also apply here. Wherever the customer believes a conflict exists, there is none, because that qualifier indicates that the Trust Center information takes precedence in any conflict. Just as we collect data regarding the use of our non-AI products and use it to make improvements, so too are we collecting data to make improvements to our AI, and all the same restrictions on our use of that data apply, as per those documents.
Ultimately, we might be able to assure them that we will not be using customer data to train the AI in whatever capacities they are most worried about, if the system doesn't do that today, but there is no telling how we might change Atlassian Intelligence in the future. I would recommend that they have a lawyer go over the information in our policies and determine whether the risk to them is too great and whether they should opt out of the feature or abandon our products as a result.
All things considered, they really can only trust the law to protect them here, and they can only do that if they can trust our policies. Our policies are on the web site which you linked. We are support engineers, not lawyers, and cannot really advise them further on this matter with any real degree of confidence.
They can try reaching out for more specifics here:
https://www.atlassian.com/company/contact/general-inquiries#/
They pretty much echo that there is no conflict, saying:
All the AI features will follow the existing data security and privacy guidelines. There is no exception rule for the new AI features.
So, if adding input or output into the Atlassian Intelligence system would pose a potential violation of those policies, we would be legally on-the-hook.
Maybe they will expound on this by the time you see this.
When it comes to customers having concerns regarding the policies and any legal implications, all we have traditionally been able to do is point them to those pages. We do maintain compliance and have information on that here:
https://www.atlassian.com/trust/compliance/resources
If the customer in question requires that our products maintain a level of compliance, they can find that information there. However, Atlassian Intelligence currently has some gaps in coverage, as outlined on this page:
https://www.atlassian.com/trust/atlassian-intelligence
For example, customers that require HIPAA, SOC 2, or ISO compliance will find that Atlassian Intelligence is not currently up to standard.
So, yes, customers in regulated industries may be unable to use the feature at this time, as it may not meet their compliance standards; the same could be true of any of our cloud offerings.
NEXT STEPS
Honestly, I am not sure that there is any more that we can do here. It is not strictly normal for PMs to talk to customers, although I have known some of them in the past to talk with customers directly to get feedback on customer use-cases.
In this case, it really just seems that the customer does not understand that our use of their data is limited by the other policies as outlined. If that is not enough for them, then they shouldn't use the product.
If this were my case, I would let the customer know that we have taken their feedback to the product team for the Atlassian Intelligence product. I might also emphasize that the document they have pointed to indicates, in the same statement, that the "license" we are given is in deference to the statements made in the Trust pages. Their specific concerns are addressed there; here are the relevant excerpts:
Does Atlassian send customer data to OpenAI’s platform to train its services?
The data you submit and the responses you receive via Atlassian Intelligence are not used to fine-tune or improve OpenAI’s models or service. Each data request is sent to OpenAI individually, over an SSL-encrypted service, to process and send back to Atlassian.
Does Atlassian Intelligence use my data to serve other customers?
The data you submit and the responses you receive are used only to serve your experience. They are not used to train models across customers or shared between customers.
Does OpenAI store Atlassian customer data?
No, OpenAI does not store the data you submit or the responses you receive.
So, according to the one line item that is giving them cause for concern, it says that the "license" that we have must be consistent with those rules.
I'm still waiting for the other two topics opened around this question to be properly answered by someone at Atlassian.
I don't need the terms doc quoted back at me; I just need a yes-or-no answer to "is our data used across the whole platform and thus exposed to other customers?"