Containers & Kubernetes

Exploring container security: How containers enable passive patching and a better model for supply chain security

December 10, 2018
Dan Lorenc

Software Engineer, Container Tools

Maya Kaczorowski

Product Manager, Container Security

Adopting containers and container orchestration tools like Kubernetes can be intimidating, but if you’re on the security team, it can feel like yet another technology that you’re now responsible for securing. We talk a lot about how to secure containers and avoid common container security pitfalls (for example, in the other blog posts in this series), but did you know that you can use containers to improve your overall security posture?

Containers give you a software supply chain

With a monolithic application running on a virtual machine, developers usually make changes by SSH-ing into the machine or pushing code changes manually. This is not only hard to debug, it’s also a very informal process: the next time the developer needs to make a change, they can just SSH into the VM again to debug, patch, update, restart, or otherwise adjust the app. That’s not a great security story, and it’s hard on the ops team, who no longer know exactly what’s running.

With containers, things are a bit different. Containers have a defined development pipeline, also known as a software supply chain. You write your code and ensure that it meets your requirements for build, test, scan, and whatever else, before you deploy it. Further, code can be intercepted at any step in the chain if it doesn’t meet your requirements.
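As a sketch (the stage names and pass/fail codes here are hypothetical stand-ins for your real build, test, and scan tools), a supply chain behaves like a series of gates: if any stage’s check fails, the image is intercepted there and never reaches deploy.

```shell
#!/bin/sh
# Hypothetical supply-chain gate: each stage must pass before the next runs.
gate() {
  # gate <stage> <status>: pass the image through, or block the chain
  stage=$1; status=$2
  if [ "$status" -eq 0 ]; then
    echo "passed: $stage"
  else
    echo "blocked at: $stage"
    return 1
  fi
}

# 0/1 stand in for the real exit codes of your build, test, and scan steps;
# here the scanner "fails", so the && chain stops and deploy never runs.
gate build 0 && gate test 0 && gate scan 1 && gate deploy 0 \
  || echo "image not shipped"
```

Because the stages are chained, a failure anywhere stops the image from moving further down the pipeline, which is exactly the interception property described above.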

Containers let you patch continuously, automatically

Even today, many security attacks that occur in the wild, especially for containers, are ‘drive-by’ attacks—attackers looking for deployments with known vulnerabilities that they can exploit. And those vulnerabilities are rarely zero days—we’re talking vulnerabilities that have been around for years, left unpatched. Like wearing sunscreen, scanning for vulnerabilities (and patching them) is one of those boring best practices you really should be doing. (May we recommend Container Registry Vulnerability Scanner?)

But patching containers is different from patching VMs. Containers are meant to be immutable, meaning they don’t change once they’re deployed; instead of SSH-ing into the machine, you rebuild and redeploy the whole image. This happens often, because containers are short-lived: Sysdig estimates that 95% of containers live for less than a week. But wait…that’s really often! In traditional patch management, Patch Tuesday comes just once a month. If you’re extra busy, you might also have to manage some weekly patch sets. You might still need Sunday 2 a.m. maintenance windows to apply your patches (and there’s a poor soul who has to stay up for this), but there’s simply not enough time in the day or coffee in the world to handle that manually for deployments that only live a week!

Here’s the thing, though: with containers, you don’t patch live containers, you patch the images in your container registry. The fully patched container image can then be rolled out (or rolled back) as one unit, so the patch rollout process becomes the same as your (obviously very frequent) code rollout process, complete with monitoring, canarying, and testing. This way, your patch rolls out through your normal process, in a predictable way. An alternative (though less preferable, because it happens on an unpredictable schedule) is to let the rollout happen ad hoc: the next time a container dies, Kubernetes spins up another to replace it, and any patches you’ve applied naturally roll out across your infrastructure. Depending on the lifespan of your containers, you should be fully patched within a matter of days.
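As an illustrative sketch (the project, image name, and tag are hypothetical), the image reference in your deployment config is the unit you patch; changing it triggers the same gradual rollout as any code change:

```yaml
# Hypothetical Kubernetes Deployment: patching means pointing the image
# field at a rebuilt, fully patched image and letting the rollout proceed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace pods gradually, so the app stays up
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:2018-12-10  # rebuild, retag, redeploy
```

Applying the manifest with an updated `image` rolls pods over a few at a time, and `kubectl rollout undo deployment/my-app` rolls the same unit back if monitoring flags a problem.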

Containers mean you can actually tell if you’re affected by a new vulnerability

Since containers are immutable, they give you content addressability: each image is stored and retrieved based on a digest of its contents. This means you actually know what’s running in your environment, for example, exactly which images you deployed.
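In miniature (using `sha256sum` in place of a real registry), content addressing just means an object’s address is a hash of its bytes, so the same contents always resolve to the same digest:

```shell
#!/bin/sh
# A stand-in for an image manifest; real registries hash the manifest
# similarly and let you pull by registry/repo@sha256:<digest>.
manifest='{"layers":["abc123"]}'
digest="sha256:$(printf '%s' "$manifest" | sha256sum | cut -d' ' -f1)"
echo "$digest"

# The same bytes always produce the same address, and changed bytes change
# it, which is why an immutable image is identified by its digest alone.
```

This is what makes the registry a trustworthy record of what you deployed: a digest can only ever refer to one exact set of contents.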

What does this mean from a security point of view? Suppose that when you scan your image, it’s fully patched, and so you deploy it. At a later point in time, a new vulnerability is discovered. Rather than scanning your production clusters directly, you can just check your registry to see which versions are susceptible.

This also simplifies your patch management by decoupling decisions and processes about when to patch from actual patching. Instead of trying to answer, “Is my container patched?” your security team can ask, “Is my container image patched?” Then, your ops team can ask, “Is my (patched) image running?” This also lets you answer the inevitable question from your CISO: “Are we affected?”
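The decoupling above can be sketched with two lists (the digests are hypothetical): the registry scan tells you which images are vulnerable, the cluster tells you which images are running, and their intersection answers “are we affected?”

```shell
#!/bin/sh
# Hypothetical digests: in practice these come from your registry's
# vulnerability scan and from your cluster's running pod specs.
printf 'sha256:aaa\nsha256:bbb\n' | sort > vulnerable.txt  # registry scan
printf 'sha256:bbb\nsha256:ccc\n' | sort > running.txt     # cluster state

# comm -12 prints only the lines common to both sorted files.
affected=$(comm -12 vulnerable.txt running.txt)
echo "affected: $affected"   # prints "affected: sha256:bbb"
```

The security team owns the first list, the ops team owns the second, and neither needs to scan production directly to answer the CISO’s question.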

Containers made Google more secure (and more reliable)

Thankfully, you don’t just have to take our word for it. Google's infrastructure is containerized, based on our Borg container orchestration system (the inspiration for Kubernetes), and we use it to deploy services and security patches on upwards of four billion containers per week.

By now it should be obvious how that’s possible—by patching continuously and deploying patched containers. In the event of a disruptive incident, such as hardware maintenance or a critical security patch, we use something called live migration. For GCP workloads, live migration is essentially a blue/green deployment: the new workload is deployed alongside the existing workload, and a load balancer gradually moves traffic over until it’s fully handled by the new instance. This means you can effectively patch a running containerized workload with no downtime and without users noticing. This is what let us patch Heartbleed in 2014 with no downtime, and more recently Spectre/Meltdown.
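A blue/green cutover, in miniature (the weights below are illustrative, not a real load balancer API): traffic shifts gradually from the old workload to the new, patched one, and either side can still take 100% if you need to roll back mid-shift.

```shell
#!/bin/sh
# Illustrative traffic shift: "blue" is the running workload, "green" is
# the patched one deployed alongside it.
final=""
for green in 0 25 50 75 100; do
  blue=$((100 - green))
  final="blue=${blue}% green=${green}%"
  echo "$final"
done
echo "cutover complete: the patched workload serves all traffic"
```

The gradual shift is what turns a patch into a zero-downtime event: at every step, some instance is fully able to serve every request.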

In short, using containers allows you to easily patch your infrastructure with no downtime, and to do so quickly in the event that you’re affected by a newly discovered vulnerability. Better yet, you can automate all the boring patching stuff you never liked doing anyway. If you’re serious about the security of your production system, make sure your infrastructure team is using containers to make patching your production environment safer, faster, and easier. For more resources and to learn more, visit our container security page.
