As organizations continue to pursue their digital transformations, their IT infrastructures are expanding in both size and diversity. Many are seeing the addition of two new technologies in particular: containers and Kubernetes.
Containers
Kubernetes defines containers as “technology for packaging an application along with its runtime dependencies.” Containers are especially useful in that those dependencies make behavior repeatable across different operating systems, thereby decoupling them from the host infrastructure. This property makes it easy to deploy containers across different cloud environments and other types of IT infrastructure.
That’s not the only benefit of containers: container images are also easier to create than virtual machine images, which improves the agility of the overall application creation and deployment process. Because container images are immutable, organizations can execute rollbacks quickly and easily. Such rollbacks make the build and deployment phases more fluid as part of a continuous development flow. Contributing to this dynamism is the fact that containers enable microservices, which allow organizations to break applications into smaller pieces and manage them more dynamically.
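As a sketch of how immutability supports rollbacks, consider a minimal Deployment that pins a versioned image tag (all names and tags below are hypothetical). Reverting is simply a matter of re-applying the manifest with the previous tag, or running `kubectl rollout undo deployment/my-app`:

```yaml
# Hypothetical Deployment pinning an immutable, versioned image tag.
# To roll back, re-apply with the prior tag (e.g. my-app:1.3.9) or
# use "kubectl rollout undo deployment/my-app".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.4.0  # immutable, versioned tag
```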
Kubernetes
There would be no Kubernetes without containers. As the Kubernetes documentation notes, Kubernetes addresses the problem of managing many containers at once while avoiding downtime. For example, administrators need to make sure that a replacement container starts if another unexpectedly stops. Kubernetes is designed for exactly this: it runs those distributed systems resiliently and accounts for such unexpected incidents.
Kubernetes comes with many benefits. First, it’s capable of load balancing and distributing network traffic to keep a deployment stable. Second, it allows administrators to orchestrate storage by automatically mounting storage systems from public cloud providers and elsewhere. Finally, it can kill and replace containers that stop responding, and it can manage sensitive information, such as administrator-defined passwords, as secrets.
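A minimal Pod manifest sketches two of those benefits in practice: a liveness probe that tells Kubernetes when to kill and replace an unresponsive container, and a Secret reference that keeps a password out of the manifest itself (the image and Secret names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      livenessProbe:            # Kubernetes restarts the container if this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      env:
        - name: DB_PASSWORD     # sensitive value pulled from a Secret, not hardcoded
          valueFrom:
            secretKeyRef:
              name: db-credentials   # hypothetical Secret created by an admin
              key: password
```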
Acknowledging the Challenges of Containers and Kubernetes
The technologies discussed above introduce their fair share of security challenges. With containers there’s the issue of container images, software packages that contain everything needed to run an application. Developers commonly use container registries as a means of pulling down container images to meet their application creation needs. Organizations put themselves at risk if they’re not familiar with the sources from which they’re pulling down those container images. Unfamiliar container registries are the equivalent of suspicious software distribution websites. There’s no telling what a container image could do in that scenario—it could easily conceal malware that’s capable of stealing the organization’s sensitive information. Alternatively, organizations could unknowingly pull vulnerable container images that attackers could then weaponize to access their victims’ Kubernetes environments.
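One common mitigation, sketched below with placeholder names, is to reference images by their immutable digest rather than a mutable tag, so the image that runs is byte-for-byte the one that was vetted, regardless of later changes in the registry:

```yaml
# Container spec fragment; the registry, name, and digest are placeholders.
spec:
  containers:
    - name: my-app
      # Pinning by digest prevents a tag from being silently repointed
      # at a different (possibly malicious) image.
      image: registry.example.com/my-app@sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
```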
There’s also the issue of container visibility. Containers don’t generally last long. By design, containers are meant to spin up and wind down according to an organization’s evolving business needs. That might work from the standpoint of agility and adaptability, but it’s not so great in terms of security. A constantly changing environment makes it all the more difficult for organizations to maintain visibility over their containers. If they don’t know what’s there, they can’t come up with a suitable defense strategy.
Kubernetes suffers from its fair share of security issues as well. The default configuration settings for Kubernetes don’t limit communication between pods and containers. That’s because the purpose of Kubernetes is to connect different components in the name of speed and agility, not to isolate them. If admins leave those defaults in place, an attacker who gains access to a single container can potentially compromise the entire Kubernetes environment.
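A common first step toward tightening those defaults is a namespace-wide default-deny NetworkPolicy, after which traffic is re-enabled explicitly per workload (the namespace name here is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production    # hypothetical namespace
spec:
  podSelector: {}          # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
# With no ingress or egress rules listed, all traffic to and from the
# selected pods is denied; allow rules are then added per workload.
```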
Operationalizing Security Measures in Your Kubernetes Environment
The challenges discussed above are obstacles that many organizations haven’t dealt with before. Even so, the underlying security mission hasn’t changed. StackRox made this point clear:
The advent of containers and Kubernetes hasn’t changed the security mission. Your goal is still to make it difficult for bad actors to break into your applications and its infrastructure – and if they succeed, to catch them and stop them as quickly as possible. The tools and methodologies, however, must adapt to fit the needs of DevOps practices and cloud-native principles.
Administrators can take this observation to heart by operationalizing security measures in their Kubernetes environment. They can do this by using the following guidelines:
- Embed security earlier into the container lifecycle. Integrating security from the start of the container lifecycle allows DevOps personnel and developers to build and deploy secure applications.
- Use Kubernetes-native security controls to reduce risk. Kubernetes comes with a number of security controls that administrators can use to enforce their security policies. These measures will help admins to ensure secure network communication.
- Leverage Kubernetes’ context to prioritize remediation efforts. It takes a lot of time for administrators to triage potential security incidents. That’s why admins need to use context like vulnerability information to prioritize their remediation efforts.
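As one example of the Kubernetes-native controls mentioned above, a Pod-level securityContext can enforce non-root execution and a read-only filesystem (the workload and image names below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app       # hypothetical workload
spec:
  securityContext:
    runAsNonRoot: true     # refuse to start containers that run as root
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]    # drop all Linux capabilities by default
```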
The Kubernetes site has even more information on how to operationalize your Kubernetes security.