Moving from AI "No" to AI "Go" — A Practical Governance Framework

Hello Atlassian Community! I'm @Alan Ross, Chief Security Architect here at Atlassian.

Most security teams I talk to today are dealing with the same tension: employees want to use AI tools now, while security, legal, and procurement still need time to review them. When the default answer is simply "wait," people often find their own workaround. That creates exactly the kind of visibility gap security teams are trying to avoid.

At Atlassian, we’ve developed a four-phase AI governance framework to manage that tension. Instead of treating AI adoption as a one-time allow-or-block decision, the framework lets teams move from controlled experimentation to production use in stages. We aren't lowering the bar for security; we are giving people a safe place to experiment early and applying deeper reviews only as a tool gets closer to real corporate data.

The Four Phases of AI Adoption

The framework separates AI adoption into four practical stages, making it easier to move quickly when risk is low and slow down when a tool starts to touch more sensitive systems or data.

| Phase | Isolation Level | Data Permitted | Review Type |
|---|---|---|---|
| 01 Experiment | Managed browser | No corporate data | None required |
| 02 Testing | Total isolation (VDI) | Synthetic data only | Light intake |
| 03 Proof of Concept | Managed pilot | Limited corporate data | Accelerated review |
| 04 Production | Integrated | Full corporate data | Full business case |

Why This Works

The core advantage of this model is proportionality. A low-risk sandbox should not trigger the same overhead as a production deployment that handles sensitive internal or customer data. Splitting the journey into stages helps teams avoid both extremes: over-reviewing harmless experimentation and under-reviewing real operational risk.
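One way to keep review effort proportional is to encode the phase table as data and gate requests against it. The sketch below is a minimal, hypothetical Python encoding of the four phases described above; the class and function names are illustrative, not part of any Atlassian tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    name: str
    isolation: str
    data_permitted: str
    review: str

# Hypothetical encoding of the four-phase table above.
PHASES = {
    1: Phase("Experiment", "Managed browser", "No corporate data", "None required"),
    2: Phase("Testing", "Total isolation (VDI)", "Synthetic data only", "Light intake"),
    3: Phase("Proof of Concept", "Managed pilot", "Limited corporate data", "Accelerated review"),
    4: Phase("Production", "Integrated", "Full corporate data", "Full business case"),
}

def required_review(phase_number: int) -> str:
    """Return the review type a tool must clear before entering this phase."""
    return PHASES[phase_number].review

def max_data(phase_number: int) -> str:
    """Return the most sensitive data class permitted in this phase."""
    return PHASES[phase_number].data_permitted
```

Keeping the policy in one structure like this makes the proportionality explicit: a Phase 1 experiment triggers no review at all, while moving a tool toward Phase 4 requires a full business case.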

Getting Started

Most organizations do not need a large multi-year program to begin. A staged rollout can start with a few concrete steps:

  • Weeks 1-2: Stand up managed access. Launch a managed browser application or profile with basic DLP controls to give employees an approved place to test public AI tools immediately.
  • Weeks 3-4: Define a light intake process. Create a short checklist covering terms of service and data retention.
  • Months 2-3: Build reusable synthetic data. Start with platforms like Jira or Confluence to test realistic workflows without exposing live records.
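To make the "reusable synthetic data" step concrete, here is a small sketch of a generator for Jira-like test issues. Everything in it is hypothetical: the field names are illustrative and do not match the real Jira export schema, and the fake assignees deliberately avoid real identities.

```python
import random

# Illustrative values only; no live records involved.
SUMMARIES = ["Fix login timeout", "Update onboarding docs", "Refactor billing job"]
STATUSES = ["To Do", "In Progress", "Done"]

def synthetic_issue(key_number: int, rng: random.Random) -> dict:
    """Build one fake, Jira-like issue record."""
    return {
        "key": f"TEST-{key_number}",
        "summary": rng.choice(SUMMARIES),
        "status": rng.choice(STATUSES),
        "assignee": f"user{rng.randint(1, 5)}@example.com",  # no real identities
    }

def synthetic_backlog(n: int, seed: int = 0) -> list:
    """Generate a reproducible backlog of n synthetic issues."""
    rng = random.Random(seed)  # fixed seed -> same dataset on every run
    return [synthetic_issue(i, rng) for i in range(1, n + 1)]
```

Seeding the generator makes the dataset reproducible, so a Phase 2 test run can be repeated exactly without ever touching production Jira or Confluence data.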

I’ve detailed our entire approach—including our design principles and how we handle the "observability layer" for AI agents—in our latest whitepaper: "From Sandbox to Agents: A Practical Governance Framework for Security Leaders."

I’d love to hear from you in the comments: how is your organization balancing the need for speed with AI against the need for security oversight?

Visit the Atlassian Trust Center for more information.
