Go Beyond Default Kubernetes Configs for Greater Security

Source: containerjournal.com

Some organizations might be inclined to go with Kubernetes’ default configurations in the hope that these settings provide a reasonable degree of security. That would be a mistake; on the contrary, organizations can actually put themselves at risk by running their Kubernetes environments on default settings, because those standard options don’t raise security to where it needs to be. Fortunately, organizations can address these shortcomings by choosing custom settings that give them and their data stronger protection.

Lack of Security a Concern in Kubernetes’ Default Configs

The security issues associated with Kubernetes’ default configurations trace all the way back to why most organizations decide to use this container orchestration system in the first place. Kubernetes security platform provider StackRox identified this motivation when it surveyed 540 IT professionals for its “State of Container and Kubernetes Security Report.” As reported by TechRepublic, 39% of respondents told StackRox that their organizations had chosen to deploy containers and Kubernetes for the benefit of developing and releasing applications to the market more quickly.

Interestingly enough, security concerns prevented many organizations from realizing that benefit. Nearly half (44%) of organizations had decided to delay deploying their apps into production because of security issues. Those concerns, therefore, robbed organizations of the very agility they had been hoping for.

These findings from StackRox point to an important consideration: Because agility is top-of-mind for organizations, Kubernetes’ default configurations are designed to serve the speed of app development and release. They are not necessarily designed with security in mind.

Take Kubernetes network policies as an example. These policies function in a way that’s similar to firewall rules in that they govern the communication between pods and endpoints. By default, however, no network policies are applied, so every pod can communicate with every other component in an organization’s Kubernetes environment. Nothing limits that communication, a configuration that could spell trouble for organizations if an attacker managed to compromise a pod or otherwise infiltrate their environment.
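Because that permissive behavior comes from the absence of any policy, tightening it means applying one explicitly. The following is a minimal sketch of a “deny all” NetworkPolicy; the production namespace name is an assumption for illustration.

```yaml
# Minimal sketch (hypothetical namespace "production"): once this policy selects
# a pod, only traffic explicitly allowed by other NetworkPolicies is permitted.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules are defined, so all traffic is denied.
```

Note that NetworkPolicy objects are only enforced when the cluster’s network plugin supports them.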

Another example of Kubernetes’ default configurations not working in the interest of security is the way this container orchestration system manages secrets. Depending on what those secrets contain, attackers could expose an organization’s encryption keys, intellectual property or other sensitive data. As the Kubernetes documentation notes, a user who is empowered to create a pod that uses a secret can expose that secret simply by running the pod, and anyone with root permission on any Kubernetes node can read any secret from the API server. These risks demonstrate how an attacker could expose an organization’s data just by doing some research and compromising the accounts of privileged individuals within the organization.
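To make the first of those risks concrete, the sketch below shows how a user who is merely allowed to create pods in a namespace could surface a secret’s value in the pod logs, even without permission to read the secret directly. The namespace, secret name and key are hypothetical.

```yaml
# Hypothetical illustration of the documented risk: a pod that mounts a secret
# as an environment variable and echoes it, exposing the value via `kubectl logs`.
apiVersion: v1
kind: Pod
metadata:
  name: secret-exposure-demo
  namespace: production
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: busybox
      command: ["sh", "-c", "echo $DB_PASSWORD"]   # prints the secret value to stdout
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-credentials   # hypothetical secret name
              key: password           # hypothetical key
```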

What Organizations Should Be Doing

As the above examples illustrate, Kubernetes’ default configurations aren’t suitable for security-minded organizations. These organizations should therefore consider adopting more secure custom settings for their Kubernetes environments, including the following:

Reserve “watch” and “list” requests for the most privileged components: These requests enable a user to view the values of all secrets stored within a namespace, so it’s good practice not to grant them broadly. Per Kubernetes’ secrets management best practices, admins should reserve these capabilities for their environments’ most privileged, system-level components; an RBAC sketch follows this list.
Set stricter network policies: In response to the weaknesses identified above, StackRox urges organizations to create stricter network policies that keep security top-of-mind. The company recommends beginning with a network policy designed to keep pods isolated, applying a “deny all” communication policy to all new pods by default (as in the sketch shown earlier). From there, organizations can use labels to whitelist the pods that need to communicate with the internet, and they can enable inter-pod interaction by allowing communication only between pods within the same namespace; an example of both appears after this list.
Set up pod security policies: Organizations can limit the risks posed by malicious pods and containers by using pod security policies. To do this, they first need to enable the “PodSecurityPolicy” admission controller via kube-apiserver --enable-admission-plugins=PodSecurityPolicy. Doing so allows Kubernetes admins to create a set of conditions with which running pods must comply, as noted by Replex. Admins have various options for applying security to their pods. As an example, a restrictive pod security policy can require that all containers run without privileges, disable privilege escalation and enforce other controls; a sketch follows this list.
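On the first point, the following is a minimal sketch of an RBAC Role that grants a workload get access to one named secret instead of the namespace-wide list and watch verbs. The role, secret, namespace and service account names are all hypothetical.

```yaml
# Hypothetical sketch: grant "get" on a single named secret rather than
# "list"/"watch", which would expose every secret in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-secret-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-credentials"]   # only this secret
    verbs: ["get"]                       # deliberately no "list" or "watch"
---
# Bind the role only to the workload's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-secret-reader-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app
    namespace: production
roleRef:
  kind: Role
  name: app-secret-reader
  apiGroup: rbac.authorization.k8s.io
```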
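On the second point, once a default deny-all policy is in place, allow rules can be layered on top of it. The sketch below shows one hypothetical way to permit traffic only between pods in the same namespace and to whitelist internet egress for pods carrying an internet-egress=true label; the label, namespace and excluded private IP ranges are assumptions about the cluster.

```yaml
# Hypothetical sketch: allow ingress only from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}     # any pod in this namespace
---
# Hypothetical sketch: permit internet egress only for explicitly labeled pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internet-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      internet-egress: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:             # keep private, cluster-internal ranges off the whitelist
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
```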
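On the third point, the following is a minimal sketch of a restrictive PodSecurityPolicy of the kind described above: it blocks privileged containers, disables privilege escalation and forces containers to run as non-root. (PodSecurityPolicy was later deprecated and removed in Kubernetes 1.25 in favor of Pod Security admission; the sketch applies to clusters that still support it.)

```yaml
# Hypothetical sketch of a restrictive PodSecurityPolicy.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                 # no privileged containers
  allowPrivilegeEscalation: false   # block privilege escalation
  requiredDropCapabilities:
    - ALL
  runAsUser:
    rule: MustRunAsNonRoot          # containers must not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  hostNetwork: false
  hostPID: false
  hostIPC: false
  volumes:                          # allow only non-host volume types
    - configMap
    - secret
    - emptyDir
    - projected
    - downwardAPI
    - persistentVolumeClaim
```

A policy like this only takes effect for a given pod once RBAC grants its service account (or the user creating it) the “use” verb on the policy.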
Kubernetes admins’ work doesn’t end once they’ve settled on their secrets management configurations, created strict network policies and enabled pod security policies, however. They then need to monitor how these policies are shaping their employer’s Kubernetes environment and whether all components are behaving as intended. To that end, organizations should consider seeking out a security solution that can automatically monitor the environment for deviations in behavior and configuration. This step will help ensure that organizations can promote security within their Kubernetes environments on an ongoing basis and well into the future.
