Cloud computing is undoubtedly a seismic shift in the technology world, unlocking efficiencies and innovation like never before. In today’s challenging climate, organizations increasingly believe cloud-first and cloud migration strategies are the best way to sustain agile, rapid development: extracting more value from limited resources, supporting increases in demand, and delivering new revenue-generating services.
However, the shift to cloud computing is also driving another key change, one that isn’t often discussed: the cloud has made infrastructure a part of the application.
This shift carries significant ramifications for how we practice security. On the whole, security tools and practices today are designed for central IT and security teams, and are built to address those teams’ skills and environments. In cloud apps, decisions around network access, OS patching, access permissions, and more, are made by developers, not IT. They’re made individually for every application, not centrally. And they’re made constantly as part of the development process, not at specific review gates.
And yet, the security implications of these decisions remain the same. An open port can compromise a cloud VPC just as it could a data center network segment. An unpatched container can be hacked just like a bare-metal machine. The same risks apply, and are often magnified by the scope of the app.
For that reason, we need to rethink how to tackle these threats, but this time with an app context — different teams, different processes, different skills.
In this post, I describe how the scope of the application grew to encompass infrastructure, and dig into the security ramifications. I believe this perspective is helpful when designing your security practices, picking your tools, and organizing your teams.
Style note: I’m using the word “cloud” to represent not only cloud computing, but also containers, serverless, and the technologies that followed. I also talk about “before and after” cloud, while in practice most enterprises are somewhere on the journey in between. This simple view is intentional, to help paint the bigger picture — I’m well aware this is a complicated world!
Applications in the pre-cloud era
Before the cloud, applications were built on a hefty IT stack. I’ll refer to this period in the past tense, though in practice most enterprises still operate primarily in this fashion.
You had a data center where central IT carefully managed capacity and allocated resources. Companies had to manage rack space, buy servers and bring them online, handle hardware failures, and track which server was used by whom. If an application needed a server, it required paperwork and approval, as another server meant either spending more money or not giving a server to someone else.
When virtualization came along, another IT layer came in, typically vSphere, managing virtual machines on top of the physical ones. This increased efficiency, but didn’t change the core process — capacity was still limited and shared, so getting a server meant filing a ticket, and a central IT group had to manage both server capacity and the virtualization layer on top of it.
Beyond the servers, IT also managed networks. Those were configured using physical switches and routers, and focused less on capacity and more on access controls — which users can access the network and which networks can interconnect. Networks are often complicated — so, again, central IT played a key role, managing complicated communication permissions across the data center, as well as allocating bandwidth.
Beyond hardware, IT also managed central resources. One such resource was golden images for virtual machines. These golden VMs held approved software, scrutinized for legal, security and overall quality purposes. Central IT also monitored these VMs, updating them for vulnerability fixes or corporate policy changes. Applications were installed over these VMs, often manually, and were restarted as necessary to accommodate such updates.
Managed services were another example of a central resource. For instance, IT might manage a big central Oracle database that applications required to function. One or more DBAs (Database Administrators) would manage indexes and tables, working with the different application teams to tune them to each team’s needs.
On top of all of those sat the application itself. Applications were made up of code and libraries, and had to be deployed in very specific environments to work. Any change to their hardware, base VMs, DB usage, CDN or any other aspect required filing a ticket, and waiting.
Back then, that made sense, because the resources were limited and had to be shared. Adding physical capacity — whether it was servers, network or storage — took substantial time and money. Implementing or extending a central application required significant effort from central IT, another shared resource where growing capacity is slow and expensive. And so, if one application got a bigger share, another app got less — a zero-sum game.
Applications in the post-cloud era
Then, the cloud arrived, and eliminated these constraints.
Hardware capacity became a non-issue. Developers only need access to a cloud account, and can then provision as many servers as their budget allows. These servers elastically grow and shrink using self-serve and software-driven controls, without any involvement from central IT.
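To make “software-driven controls” concrete, here is a minimal, provider-agnostic sketch of the kind of target-tracking scaling logic cloud platforms expose through self-serve APIs. The function name, target utilization, and bounds are all illustrative assumptions, not any specific provider’s interface.

```python
# A minimal, provider-agnostic sketch of target-tracking scaling logic;
# the function name, target utilization, and bounds are illustrative,
# not any specific cloud provider's API.

def desired_instance_count(current: int, cpu_utilization: float,
                           target: float = 0.6,
                           min_count: int = 1, max_count: int = 20) -> int:
    """Size the fleet so average CPU utilization moves toward the target."""
    if cpu_utilization <= 0:
        return min_count  # idle fleet: shrink to the floor
    # Scale proportionally to observed load relative to the target.
    desired = round(current * (cpu_utilization / target))
    return max(min_count, min(max_count, desired))

print(desired_instance_count(current=4, cpu_utilization=0.9))  # -> 6 (scale out)
print(desired_instance_count(current=4, cpu_utilization=0.3))  # -> 2 (scale in)
```

The point is not the formula, but who owns it: this decision now lives in the application’s own code and config, with no central IT in the loop.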
Networks are no longer a shared, centrally managed resource. Application teams can create their own Virtual Private Cloud (VPC), which the cloud platform ensures is separated from the rest. Access to these networks is granularly set, based on the application’s needs, and configurable purely in software — self-serve.
While cloud VMs are less centrally-managed than their data center predecessors, containers truly severed this tie. Instructions for building containers are typically defined in a source code repo and built with the app, making it hard for central IT to get visibility into them and, practically, impossible for IT to patch them. Even centrally-managed “golden images” lose their appeal, as patches to those images don’t apply until an app is rebuilt, and developers increasingly rely on external base images.
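As an illustration of why golden images lose their grip, here is a sketch of a check an application team might run over its own Dockerfiles to spot base images that will silently drift until the next rebuild. The helper and its heuristics are hypothetical, written for this post, not a feature of any real tool.

```python
import re

def unpinned_base_images(dockerfile_text: str) -> list[str]:
    """Return base images referenced with a mutable or missing tag.

    Patches to such images only reach the app when it is rebuilt --
    exactly the visibility gap that frustrates central IT.
    """
    findings = []
    for line in dockerfile_text.splitlines():
        match = re.match(r"\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if not match:
            continue
        image = match.group(1)
        if "@sha256:" in image:
            continue  # pinned by digest: an immutable reference
        # A missing tag is implicitly "latest", which is mutable.
        tag = image.rsplit(":", 1)[1] if ":" in image else "latest"
        if tag == "latest":
            findings.append(image)
    return findings

example = "FROM node:latest\nRUN npm ci\n"
print(unpinned_base_images(example))  # -> ['node:latest']
```

Note that the check lives beside the app’s source, and runs in the app’s build — the team, not central IT, decides when and how it applies.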
Central applications were replaced by easy-to-use services built into the cloud platform, such as databases, authentication, messaging, and many more. Unlike most central apps, these services are API-driven and designed for self-serve provisioning and consumption by dev teams. Packaged containers replaced smaller applications, easily consumed out of Docker Hub to be just another microservice amidst the app’s topology. In both cases, the need to file a ticket and wait for limited IT resources to provision your app is nowhere to be seen.
Lastly, DevOps teams (sometimes named SRE or Platform) were created, replacing central IT with aligned ops teams. These teams don’t attempt to control the infrastructure that applications use, but rather provide tools and services, such as Kubernetes, that allow developers to independently operate these infrastructure layers built into their apps.
Securing infrastructure as application
Over time, the cloud eliminates the need for most centrally-managed infrastructure. Instead, that infrastructure becomes a part of the application itself. This trend will inevitably continue, as we see CDNs, API gateways, middleware and more become a part of the application, improving the dev team’s independence and speed. This change started in the public cloud, but in practice carried over to the private cloud too, as it mimics the same practices.
The security concerns, however, have not gone away. An unpatched container can be hacked just as easily as a neglected VM or bare metal machine. A needlessly open port can give an attacker access to sensitive data regardless of where it’s hosted, and unencrypted data in a database can be compromised very quickly, especially if it’s stored on a shared service. The same attack vectors apply, and we should keep an eye on those security concerns while taking measures to protect our applications.
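As a sketch of what keeping an eye on these concerns can look like when infrastructure is part of the app, here is a minimal, hypothetical check over firewall-style ingress rules. The rule shape and the port list are assumptions made for illustration, not any cloud provider’s actual format.

```python
# A minimal sketch of checking firewall-style ingress rules for overly
# broad exposure; the rule shape and port list here are hypothetical.

SENSITIVE_PORTS = {22, 3306, 5432, 6379}  # SSH and common database ports

def risky_rules(rules: list[dict]) -> list[dict]:
    """Flag rules that open a sensitive port to the entire internet."""
    return [
        rule for rule in rules
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") in SENSITIVE_PORTS
    ]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: expected
    {"port": 5432, "cidr": "0.0.0.0/0"},  # database open to the world
    {"port": 22, "cidr": "10.0.0.0/8"},   # SSH from an internal range only
]
print(risky_rules(rules))  # -> [{'port': 5432, 'cidr': '0.0.0.0/0'}]
```

Because these rules are defined by the app team, in software, this kind of check can run inside the development pipeline — the open-port risk is the same as in the data center, but the place to catch it has moved.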
What needs to change is how we defend against these infrastructure threats. Today’s solutions and practices are designed for central IT teams, not independent application teams. They are sometimes retrofitted into a shape that separate teams may use, but it’s quite rare that they truly fit that use-case.
We need to embrace a new perspective, built on this new reality of “infrastructure as application.” Such rethinking is a big task, and not one I can summarize in a few simple bullets, but here are some examples of changes to consider:
In The Secure Developer Episode #51, I interviewed Atlassian’s CISO Adrian Ludwig and asked him about the company’s approach to cloud security and the value of a dedicated security team. Adrian’s career began in the world of browsers, Flash, and Android, so he has never had to deal with the non-cloud world.
“Atlassian has a fairly consistent and pretty well-managed, both corporate and product infrastructure, all in the cloud, so we have a good inventory of it. It originates with developers who add new instances. We added PaaS capabilities so it’s easier for developers to add new things with the platform, which has built-in security services. The reason developers use it is that they get benefits such as debugging, tracing, and a whole bunch of scaling infrastructure that they wouldn’t otherwise have. So when you couple security benefits with developer benefits, it becomes harder to use a non-platform approach.”
For many, the transition won’t happen overnight. In a decade, pre-cloud apps will be deemed legacy, like mainframe environments today, with their legacy security controls. Now is the time to start building your new security generation.
As long-standing partners, Snyk and Atlassian can help you get started on this journey. We are both committed to making the software delivery and development process easier and safer.
Snyk’s new Code Insights support for Bitbucket Cloud is one of many examples of this shared mission: the integration allows Bitbucket Cloud users to view the results of Snyk’s security scanning as part of their natural development flow. More broadly, Snyk’s Atlassian integrations help enterprise developer teams quickly and easily embed security throughout the DevOps lifecycle, in full alignment with the security team’s compliance and governance requirements.
How have your security practices changed as more workloads move to the cloud? Do you have a dedicated cloud security team?