The majority of Atlassian’s business runs on Amazon Web Services (AWS). Due to the large scale of our infrastructure, we allow teams to manage their own changes without centralised review. Atlassian operates on a “trust, but verify” model: we promote a set of best practices and guidelines for teams to follow, and we then check that these best practices are being implemented. Where the target is missed, we help the team readjust.
The most widely known example is S3 buckets that are publicly accessible and can be read by anyone. Countless companies have been caught off guard by accidentally putting confidential information in public buckets. This has prompted Amazon to offer additional safeguards in the form of bucket-level overrides that block any sort of public access, acknowledging the severity of this problem.
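AWS exposes this safeguard as the S3 Block Public Access configuration. A minimal sketch of what applying it looks like (the bucket name is hypothetical, and the actual boto3 call is only shown in the docstring rather than executed):

```python
# The four flags AWS exposes for blocking public access on a bucket.
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,       # reject new public ACLs
    "IgnorePublicAcls": True,      # ignore any existing public ACLs
    "BlockPublicPolicy": True,     # reject new public bucket policies
    "RestrictPublicBuckets": True, # restrict access to public buckets
}

def public_access_block_params(bucket: str) -> dict:
    """Build the request parameters for S3's PutPublicAccessBlock API.

    With boto3 installed and credentials configured, you would apply it via:
        boto3.client("s3").put_public_access_block(
            **public_access_block_params("my-bucket"))
    """
    return {
        "Bucket": bucket,
        "PublicAccessBlockConfiguration": PUBLIC_ACCESS_BLOCK,
    }
```

The same configuration can also be set once at the account level, which overrides every bucket in the account.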
At Atlassian, we have added a new tool to our vulnerability management belt so we can assist teams in following the best practices we have established: Cloud Conformity, a Sydney-based startup that specialises in continuously scanning the configuration of cloud infrastructure. While they offer support for multiple cloud providers as well as checks for all five pillars of the AWS Well-Architected Framework, we use the tool for its “Security” checks for AWS.
Nearly all of our AWS accounts are scanned on an hourly basis and the results are reported to the security team. We didn’t stop there, though: to enable our developers to move fast and remove security as a gatekeeper, we integrated Cloud Conformity with our vulnerability pipeline, which files Jira tickets for any findings these scans discover. Our developers live and breathe Jira day in, day out, so surfacing this information there is much more natural for them than having to look for findings in a third-party tool or needing security as an intermediary.
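To illustrate the shape of such an integration, here is a hedged sketch of mapping a scanner finding onto a Jira issue payload. The project key, issue type, and finding fields are hypothetical, not Atlassian’s actual pipeline; the payload structure follows Jira’s REST “create issue” format:

```python
# Sketch: turn a configuration-scanner finding into a Jira issue payload.
# Project key, issue type, and the finding's field names are assumptions.

def finding_to_jira_issue(finding: dict, project_key: str = "VULN") -> dict:
    """Map a scanner finding onto a Jira create-issue payload."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['account']}] {finding['rule_title']}",
            "description": (
                f"Rule: {finding['rule_id']}\n"
                f"Resource: {finding['resource']}\n"
                f"Risk level: {finding['risk_level']}"
            ),
        }
    }
```

In a real deployment this payload would be POSTed to the Jira REST API’s create-issue endpoint, with de-duplication against already-open tickets so re-scans don’t file the same finding twice.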
Anyone who has ever tried to deploy a security scanner inside an organisation knows that they are never set-and-forget. Instead, they require fine-tuning to ensure they only produce meaningful results. Every enterprise environment is different, and particularly at scale, edge cases exist that scanners do not anticipate. For example, our internal PaaS enforces a set of best practices that have been developed in collaboration with the security team. Some of the resulting configurations are secure in this context, but the scanner still reports on them because they generally wouldn’t be. As a result, we spent some time refining the set of rules we care about.
In our first iteration, we decided to focus on our highest severity AWS accounts. These accounts hold our customers' data or manage our infrastructure, for example our CI/CD. In addition, we narrowed the initial set of rules down to those we consider high severity. We then spent some time working closely with the teams that own these important AWS accounts to ensure all findings provide a meaningful security benefit. Based on this feedback, we adjusted the configuration of our rules to fit our organisation. We create Jira tickets only for this subset of accounts and rules, as we have verified the quality of these findings.
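The gating logic described above amounts to an allow-list on both dimensions. A minimal sketch, with hypothetical account names and rule IDs:

```python
# Sketch of the first-iteration gate: a finding only becomes a Jira ticket
# if it comes from a high-severity account AND matches a rule that has been
# reviewed and tuned. All identifiers below are hypothetical examples.

HIGH_SEVERITY_ACCOUNTS = {"prod-customer-data", "ci-cd"}
REVIEWED_HIGH_SEVERITY_RULES = {"S3-001", "EC2-007"}

def should_file_ticket(finding: dict) -> bool:
    """Gate a finding on both the account and the rule allow-lists."""
    return (
        finding["account"] in HIGH_SEVERITY_ACCOUNTS
        and finding["rule_id"] in REVIEWED_HIGH_SEVERITY_RULES
    )
```

Keeping both sets explicit makes later iterations a matter of growing the allow-lists as more accounts and rules are reviewed.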
The next iteration has already started: we are expanding the scope of accounts for which Jira tickets are created and reviewing additional rules. Eventually, all our AWS accounts will be under our security SLA and every check will have been reviewed and configured to the specifics of our environment.
We also continue working closely with Cloud Conformity, who are responsive to our feedback and quickly fix any bugs we discover in their product. They are great at including our feature requests in their roadmap and always keep us informed about when work starts on anything we care about. This way, we keep increasing the value their service provides to us, which directly translates into an ever-stronger security posture.
When the security researcher “benmap” presented at DEF CON 27 recently, the community learned just how vulnerable public EBS volumes can leave a company, a reminder that S3 buckets are not the only resources that can be made public and contain sensitive information. Naturally, we investigated our own environment for such public volumes. Since Cloud Conformity was already actively scanning all of our accounts, we were able to perform a fast investigation that gave a complete picture of all public volumes, and we could quickly confirm that none of them contained sensitive information. In addition, we will be alerted to any future volumes that are made public and can ensure we are not exposing sensitive information through them.
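For anyone wanting to run a similar one-off check, EC2’s DescribeSnapshots API can filter for snapshots that are restorable by everyone. A sketch, with the live boto3 call shown only in a comment and the response handling written as a testable function over a DescribeSnapshots-shaped dict:

```python
# Sketch: find your own publicly restorable EBS snapshots.
# With boto3 and credentials configured, the query would be:
#     resp = boto3.client("ec2").describe_snapshots(
#         OwnerIds=["self"], RestorableByUserIds=["all"])
# Below we only implement the response handling, so it can run offline.

def public_snapshot_ids(describe_snapshots_response: dict) -> list:
    """Extract the snapshot IDs from a DescribeSnapshots response."""
    snapshots = describe_snapshots_response.get("Snapshots", [])
    return [s["SnapshotId"] for s in snapshots]
```

An empty result from the public-snapshot query is the state you want; any IDs returned warrant immediate review of the snapshot’s contents.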
As a helpful side effect, these scans provide a forcing function for teams to go into their own environments and clean up any stale resources left over from development experiments. Atlassian enables our developers to iterate quickly, try out new features, and innovate on our services. As a security team, we are responsible for making sure these experiments happen within a suitable environment and in a way that doesn’t put customer data at risk. Part of this responsibility is making sure that unused resources are cleaned up, and Cloud Conformity helps us achieve this. When we notify developers about resources with insecure configurations, they sometimes realise they no longer need those resources and delete them.
With a tool like Cloud Conformity in our arsenal, we now have ongoing assurance that our cloud infrastructure is in a good and secure state. We go beyond just vulnerabilities and use it to actually enforce best practices, which ensures our cloud security posture is best of breed.