Are You Secure? Science, Red Teams and Findings at Atlassian


Is your organisation secure?

How can you tell?

Maybe you’re secure when all of your systems are patched, or when all of your staff do security training. Does that work?

Of course these things don’t mean that your organisation is secure. So the question remains: How can you tell if you’re secure?

Say you defined what “secure” meant in some terms. “We will be secure when we do x, y and z”. When you met that definition, would you really be secure in the general sense of the word? You would not. There would always be something new, unexpected, or overlooked that would cause you to adjust your definition. You would never be “done”.

It seems impossible for any organisation to know if it’s secure or not in a yes/no sense. The reason for this is “you don’t know what you don’t know”. Information security by its nature includes unknowns - things you either know (or don’t know) that you don’t know. Managing unknowns is hard. People naturally want to deal with knowns, concrete things, things we can touch. Our everyday vocabulary even lacks good terms for talking about unknowns.

We can use estimates (a range plus a confidence level) to express uncertainty in a meaningful way - can we apply this to security? Say we estimate how secure we are, and then estimate how secure we need to be. Can we just put the two side by side and, as long as the first is higher than the second, call ourselves OK? Then we’d be “secure”?

No - sorry again. Estimates are good (necessary, even) but they won’t save you when the bad guys come knocking. Even if you put a lot of time into making high-quality estimates of your risk and your risk tolerance, and then work long and hard to push your estimated risk below your tolerance, it still would not be enough. It’s something you need to do, yes - but it’s not enough by itself.

Why not? Biases. “You can’t grade your own homework”, as Micah Zenko puts it in this 30-minute talk about Red Teaming. An organisation’s conception of its own security posture is inevitably tainted by biases, among them:

  • Status Quo bias, where whatever you’re doing now becomes what needs to be done;

  • Groupthink, where everyone tends to agree with everyone else; and

  • Blindspot bias, where you know bias exists but believe it only applies to other people.

So even if you think you’re doing a great job, you’re still going to have gaps. You can do everything right, catalog and manage all the risks, and be humming along with all the gauges, stoplights and pie charts on your Executive Security Dashboard™ a comforting shade of green, while unbeknownst to you someone who isn’t afraid to use View Source is covertly ransacking your data.

There are always going to be things you didn’t know, things you thought you knew but didn’t, thought someone else was doing, thought you were ready for, thought you were watching, thought someone else would be watching, etc etc. And there’s no good way to know what they are or how bad they are.

There’s no good way to know…

…unless you use SCIENCE.

Science and Red Teaming

What on earth does science have to do with red teaming? This:

  • Question: Are we secure?

  • Hypothesis: We are secure!

  • Prediction: A gang of attackers won’t be able to compromise us because we are secure.

  • Test: Employ an independent gang of attackers! Press the “go” button!

  • Analysis: Here are all of your passwords! Or not.

A Red Team conducts experiments on your security. Their job is to test the hypothesis that the organisation is secure. They do this by imitating attackers and hacking their own organisation. People do a million things in the name of security (including: threat modelling, finding and fixing vulnerabilities, bug bounties, reporting, telling people not to click on things, tuning detections, going to conferences, keeping vast logfiles, scanning, training, staffing, etc) but none of it can “prove” they’re secure. A Red Team, on the other hand, takes the environment resulting from all of these activities and puts it to the test. They try to disprove the hypothesis that the organisation is secure, by hacking it.

Similar to the old saying “the proof of the pudding is in the eating”, you could say that “the (dis-)proof of the security is in the hacking”. A Red Team applies an experimental methodology to security. They test and exercise your defences so you can improve them.

Some related security activities include:

  • Threat modelling where you list threats to a system then decide on appropriate controls & mitigations

  • Penetration testing where you test an information system to find weaknesses

  • Bug bounty where you pay external security researchers real dollars to find vulnerabilities

The major differences between red teaming and these activities are that a Red Team 1. is as independent as you can make it; 2. operates continuously; and 3. works with the scope and methods a real attacker would have. Red teaming isn’t a “better” or “more advanced” method than these other things; it’s different and complementary, and (like any tool) needs to be used on the right problems in the right ways at the right times in order to pay off for an organisation.

There is a lot of material out there about what red teaming is and where it fits in an organisation’s array of good information security practices. My personal favourite is Daniel Miessler’s work on the topic (lots of links from there).

So, in summary, red teaming is an excellent way both to understand how secure you are and to move your actual level of security in the right direction. A red team both tests your security and exercises your security capabilities.

So: Are you secure? Not sure - but you can use a Red Team to test it, learn and improve!

Would you be able to respond in a way that stops an attacker? You can test that too!

This unique dynamic makes having a Red Team an attractive proposition for many organisations.

Findings

Atlassian has a Red Team, and I’m on it. I’m not an operator though - I’m more of a program manager-type responsible for operation design, setting goals, solving problems and ultimately making sure the company is getting good value for having a Red Team. Part of that is keeping track of what we find and what’s being done about it.

A Red Team is a great way to learn about gaps in your security. But if you don’t keep track of the gaps you find and what you’re doing about them then it’s hard to say if the Red Team is really making a difference.

Say for example that in the course of an operation the Red Team successfully uses a new persistence technique. Afterwards we get together with relevant stakeholders and go through what we did and how we did it. Let’s say in this case we decide that Team A is going to reconfigure an allowlist that will prevent this technique from working.

We need to keep track of what the Red Team found (“this persistence technique was experimentally proven to work”), and what’s being done about it (“Team A will reconfigure this allowlist to prevent it”), and then keep track of whether or not that’s actually been done over time. Afterwards, when the fix is complete, we might go back and validate it by repeating the technique.

That’s a simple example. Often we find things that are broader in scope and more difficult to fix. Keeping track of these over time can be especially challenging because things change (projects get cancelled, staff change jobs, technology changes, etc) or people just forget and move on. If this happens then the value of the experiment - all that trouble you went through to legally break into your own employer - can be lost.

The Atlassian Red Team uses Jira to track our findings and what’s being done about them. It gives us a record of the gaps we find in our various activities, who’s doing what about them, and what the status of each fix is over time. We do this by tracking findings as issues and using issue links to “point” to the fixes. The essence of this approach is to track the thing you find independently of the thing that’s being done to fix it. If you have a Red Team and use Jira you might find this useful, and the same approach should work for any audit or assessment function.

Here’s the detail of how we do it:

We typically discover findings during a Red Team operation. We track them using a “Finding” issue type in our Red Team Jira project. Any team member can create them. Findings start in “Draft” status and can include recommendations for action.
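As a rough illustration only (not our actual tooling), creating one of these findings through the Jira Cloud REST API might look something like the sketch below. The site URL, credentials, project key “RT” and the summary text are placeholders; the “Finding” issue type is the one described above, and a newly created issue simply lands in the workflow’s initial “Draft” status.

```python
# Sketch only: create a "Finding" issue via the Jira Cloud REST API.
# The site URL, project key "RT", and credentials are placeholders.
import os
import requests

JIRA_URL = "https://your-domain.atlassian.net"
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def create_finding(summary: str, project_key: str = "RT") -> str:
    """Create a Finding; new issues start in the workflow's initial status ("Draft")."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Finding"},  # custom issue type described above
            "summary": summary,
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

print(create_finding("Persistence technique X works on build agents"))
```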

At the end of each operation, after the reporting is done, we meet to review the draft findings, look for duplicates, and make sure each finding is coherent and complete. Once we’re satisfied that we have a complete set we move them to “Looking for Link” status.
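If you wanted to script that status move rather than click through it, a minimal sketch might look like the following. It assumes the workflow exposes a transition into the target status, and “RT-42” is a placeholder issue key.

```python
# Sketch only: move an issue to a named status via a workflow transition.
import os
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def transition_issue(issue_key: str, target_status: str) -> None:
    """Find the transition whose destination matches target_status and execute it."""
    url = f"{JIRA_URL}/rest/api/2/issue/{issue_key}/transitions"
    available = requests.get(url, auth=AUTH).json()["transitions"]
    match = next(t for t in available if t["to"]["name"] == target_status)
    requests.post(url, json={"transition": {"id": match["id"]}}, auth=AUTH).raise_for_status()

transition_issue("RT-42", "Looking for Link")  # "RT-42" is a placeholder key
```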

Then I work on finding out who’s doing what about the findings. Most of the Red Team’s operations involve a security incident, which has a post-incident review, during which the appropriate people agree and commit to actions. Otherwise I just find the right people and work with them to agree on appropriate mitigations. We use Jira’s native issue links to link our Red Team findings to other tickets (as “has action” or similar) and Confluence pages. Once a finding is linked to an action we move it into a “Linked” status.
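For illustration, creating that link with the REST API could look like the sketch below. The issue keys are placeholders, and the link type name depends on your instance: “has action” would have to exist as a custom link type, so the sketch falls back to Jira’s built-in “Relates” type.

```python
# Sketch only: link a Red Team finding to the ticket tracking its fix.
import os
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def link_finding_to_action(finding_key: str, action_key: str,
                           link_type: str = "Relates") -> None:
    """Create an issue link between the finding and the action ticket.

    "Relates" is a default Jira link type; a custom type like "has action"
    would need to exist in your instance before you could name it here.
    """
    payload = {
        "type": {"name": link_type},
        "inwardIssue": {"key": action_key},    # the fix / mitigation ticket
        "outwardIssue": {"key": finding_key},  # the Red Team finding
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issueLink", json=payload, auth=AUTH)
    resp.raise_for_status()

link_finding_to_action("RT-42", "TEAMA-123")  # placeholder issue keys
```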

Sometimes (rarely) we can’t marry up a finding to an action. In this case we move the finding to a “Link not found” status. The benefit here is that we haven’t lost track of what we found even if we can’t figure out what’s being done about it yet.

Our findings workflow looks like this:

[Workflow diagram: Draft → Looking for Link → Linked / Link not found]

Periodically I review our findings and follow up on them as needed. Usually this is simply a matter of following the outbound links and seeing where those tickets are up to. Sometimes I need to ping people to work out what happened with a given piece of work. When I’m doing this I often discover new related efforts that address the finding, and I link those too. Sometimes I discover that a fix got completed on time and I send kudos. Other times I might discover that it’s been delayed or cancelled, and I can enquire further as needed.
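To give a flavour of what that periodic review could look like if scripted, here’s a hedged sketch that pulls every “Linked” finding with a JQL query and prints the current status of each linked ticket. The project key, statuses and fields mirror the workflow described above; everything else (site URL, credentials) is a placeholder.

```python
# Sketch only: list "Linked" findings and the current status of each linked ticket.
import os
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder site
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def review_findings(project_key: str = "RT") -> None:
    jql = f'project = {project_key} AND issuetype = Finding AND status = "Linked"'
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,issuelinks"},
        auth=AUTH,
    )
    resp.raise_for_status()
    for issue in resp.json()["issues"]:
        print(issue["key"], "-", issue["fields"]["summary"])
        for link in issue["fields"]["issuelinks"]:
            other = link.get("inwardIssue") or link.get("outwardIssue")
            if other:
                print("   ->", other["key"], other["fields"]["status"]["name"])

review_findings()
```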

Over time this process gives us a solid record of what our Red Team “scientists” discover during their “security experiments” and who’s committed to doing what to address it. The finding tickets and the issues they link to make it easy for me to show the Red Team’s impact and value to the business.

If you have a Red Team, an assessment or audit function, or if you do post-incident reviews, then I hope you find this approach informative and useful. If you have any questions please drop a comment here or email me at jseverino@atlassian.com.

Happy sciencing!
