3 tips to keep Kubernetes safe at scale

Source: techrepublic.com

As more companies adopt and scale their Kubernetes deployments, security has to be a central concern.

Kubernetes containers are now highly prevalent in multi-cloud environments and are being deployed widely across a variety of industries. In a survey report last year, Kalyan Ramanathan, vice president of product marketing at Sumo Logic, wrote that the open-source container orchestration system was “dramatically reshaping the future of the modern application stack.”

KubeCon 2019, the Kubernetes community conference held that November, drew more than 12,000 people, and about 400 attendees reported that their companies planned to run at least 50 clusters in production within the next six months.

SEE: What is Kubernetes? (free PDF) (TechRepublic)

But with that expanding adoption have come security concerns, which reared their heads throughout 2019 in a series of vulnerability disclosures. The first was found at the end of 2018 and involved a privilege escalation flaw that allowed any user to establish a connection through the Kubernetes API server to a backend server.

Many companies, like Capital One and Walmart, used the conference to advertise their goals and meet with experts while hunting for Kubernetes-trained talent. Kamesh Pemmaraju, head of product marketing at Platform9, said the number of clusters and nodes in use was increasing, with some respondents telling Platform9 that their company was running hundreds of nodes in only one or two clusters. The survey also highlighted the fact that many companies are running Kubernetes on both on-premises and public cloud infrastructure.

NeuVector CTO Gary Duan and Platform9’s co-founder and CTO Roopak Parikh spoke to TechRepublic about three ways to leverage Kubernetes’ security capabilities at scale.

“Kubernetes has some basic security features–primarily around securing its own infrastructure. These include role-based access controls (RBACs), secrets management, and pod security policies (which are more focused on resource management),” Duan said.

“However, Kubernetes should not be deployed in business-critical environments or for applications that manage sensitive data without dedicated container security tools. As we’ve seen in the past year, the Kubernetes system’s containers themselves–such as the critical API server–can have vulnerabilities that enable an attacker to bring down the orchestration infrastructure itself,” he added.

SEE: Deploying containers: Six critical concepts (free PDF) (TechRepublic Premium)

1. Role-based access control
Parikh said his team at Platform9 is seeing a lot more production applications than last year, and a lot more data and other workloads being deployed on Kubernetes. Companies should look at Kubernetes as a complex distributed system made up of multiple components, each of which needs to be secured when it is put into production.

“At the application level, we highly recommend putting in network policies and connecting Kubernetes clusters to an authentication provider that’s within the enterprise, such as Active Directory or OneLogin. A lot of times what we have seen is, you can secure all your systems all you want, but it’s also important to secure the keys, as in the passwords you use, with multi-factor authentication, and use certificates to make sure people cannot get in through those easy doors,” Parikh said in an interview.
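
As a concrete illustration of the network-policy piece of that advice, the sketch below is a minimal Kubernetes NetworkPolicy that only lets front-end pods reach an application's pods; the namespace, labels, and port are illustrative assumptions, not details from Platform9's environment.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only          # hypothetical policy name
  namespace: payments                # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: payments-api              # pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend          # only front-end pods may connect
      ports:
        - protocol: TCP
          port: 8080                 # illustrative application port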

“As always, security is about multiple layers. There is role-based access control, network policies, and a few others. When your applications are running, how and where are you storing some of the secrets? If your application needs access to a database, the application needs to connect to the database using some credentials, so where are you storing those credentials?”
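
On the question of where credentials live, one common pattern is to keep them in a Kubernetes Secret and inject them into the pod as environment variables rather than hard-coding them in the image or manifest. The sketch below uses hypothetical names, a hypothetical namespace, and a hypothetical image; in practice the secret values themselves would come from a vault or sealed-secrets workflow rather than being committed in plain text.

apiVersion: v1
kind: Secret
metadata:
  name: orders-db-credentials        # hypothetical secret name
  namespace: payments
type: Opaque
stringData:
  username: orders_app               # placeholder values only
  password: change-me
---
apiVersion: v1
kind: Pod
metadata:
  name: orders-api
  namespace: payments
spec:
  containers:
    - name: orders-api
      image: registry.example.com/orders-api:1.0   # hypothetical image
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:            # credentials pulled from the Secret at run time
              name: orders-db-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: orders-db-credentials
              key: password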

Kubernetes operators should make sure users have the correct roles and assign users to different namespaces so that specific users are associated with specific applications. These roles should also scale as systems are deployed and upgraded, Parikh added.
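
A minimal sketch of that kind of namespace-scoped role assignment, assuming a hypothetical "payments" namespace and a user identity that comes from the enterprise authentication provider:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-developer           # hypothetical role name
  namespace: payments                # scoped to one team's namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "deployments"]
    verbs: ["get", "list", "watch"]  # read-only in this sketch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-developer-binding
  namespace: payments
subjects:
  - kind: User
    name: jane@example.com           # hypothetical identity from the enterprise IdP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: payments-developer
  apiGroup: rbac.authorization.k8s.io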

SEE: Security Response Policy (TechRepublic Premium)

In general, each application in a cluster should be segregated or isolated so that the person in charge can decide which users can see different parts of the system.

Duan noted that a layered approach was best for defense in depth. Cloud-native security tools should be deployed to secure the entire lifecycle, from the CI/CD pipeline to run-time, because traditional security tools don’t work in Kubernetes environments, he said.

“This starts with vulnerability scanning during build and in registries, and then carries on into production. True defense in depth is not possible without deep network visibility and protection, such as with a Layer 7 container firewall. Such a container-focused firewall will be able to detect and prevent network-based attacks, probes, scanning, breakouts, and lateral movement between containers using container micro-segmentation techniques,” Duan said, adding that as companies expand their deployments, they face the headache of managing and securing dozens or even hundreds of separate Kubernetes clusters, some spanning public and private clouds.

Having global multi-cluster management with federated global security policies enforced centrally becomes a critical issue, he said.

“Layering a service mesh such as Istio or Linkerd on top of Kubernetes can also improve security by encrypting pod-to-pod communications. Service meshes have other benefits that DevOps teams are excited about. But again, this introduces additional attack surfaces into the infrastructure which must be secured,” Duan said.
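
As an illustration of the mesh-level encryption Duan mentions, Istio can require mutual TLS for all pod-to-pod traffic with a mesh-wide PeerAuthentication policy; this sketch assumes Istio is installed in the istio-system namespace.

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system            # applying it here makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT                     # reject any plaintext pod-to-pod traffic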

2. Insecure code checks
Parikh said monitoring code was vital to keeping Kubernetes secure at scale. If your company is running third-party applications, or even internal ones, how do you know whether the code is secure? System operators should ask themselves what versions they are running and what security vulnerabilities those versions carry.

“You may want to scan it to figure out if there is insecure code or if you’re using an older version of a Python library. There are tools out there which can help you find out those details and give you a report on that. Based upon that, you can either deny or allow running those applications in the form of containers,” Parikh noted.

Duan echoed those comments, adding that ultimately Kubernetes provides the mechanisms for automation that should be leveraged by security tools.

Companies have the ability to declare security policy as code, where the application behavior of new services being deployed is captured in a standard YAML file and deployed natively by Kubernetes as a custom resource definition. Security teams also have to make sure all of the patches are up to date.
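
The exact schema for declaring application behavior as code depends on the security tool in use. The custom resource below is purely hypothetical (the apiVersion, kind, and fields do not match any particular vendor's schema) and only illustrates the declarative pattern Duan describes: the expected behavior of a service is captured in YAML and applied like any other Kubernetes object.

apiVersion: security.example.com/v1  # hypothetical API group
kind: ApplicationSecurityPolicy      # hypothetical custom resource kind
metadata:
  name: orders-api-baseline
  namespace: payments
spec:
  allowedProcesses:
    - /usr/local/bin/orders-api      # only this binary may run in the container
  allowedEgress:
    - host: db.payments.svc.cluster.local
      port: 5432                     # outbound traffic limited to the database
  action: deny-unlisted              # anything not declared is blocked and alerted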

“With the server and operating system that you’re running, you need to make sure it’s patched and that it doesn’t have any vulnerabilities associated with the operating system itself. You’re using technology like AppArmor to make sure you’re giving the least privileges to the components that are running outside Kubernetes as well as on Kubernetes itself,” Parikh said.

“You need to make sure that every component you install is patched and up to date.”
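
A minimal sketch of the least-privilege settings Parikh refers to, combining a pod securityContext with the beta AppArmor annotation; the pod name, namespace, and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: orders-api
  namespace: payments
  annotations:
    # Asks the kubelet to confine this container with the node's default AppArmor profile.
    container.apparmor.security.beta.kubernetes.io/orders-api: runtime/default
spec:
  containers:
    - name: orders-api
      image: registry.example.com/orders-api:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]              # start from zero Linux capabilities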

3. Checking exposed ports
Duan said it is critical to define all the integration points in the pipeline where security should be automated and then build a roadmap for complete security automation. Initially, a few steps may be automated, such as triggering vulnerability scans and alerting on suspicious network activity.

According to Parikh, security teams need to protect applications that can connect to the outside world through load balancers or through ports that can be opened on the Kubernetes clusters themselves. Teams need to check whether those entry points are configured correctly and whether they are exposed through the right security groups.
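
On clouds that support it, one way to tighten a load-balancer entry point is the loadBalancerSourceRanges field on a Service, which limits which source addresses can reach the exposed port; the names, ports, and CIDR below are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: orders-api
  namespace: payments
spec:
  type: LoadBalancer
  selector:
    app: orders-api
  ports:
    - port: 443
      targetPort: 8443               # illustrative ports
  loadBalancerSourceRanges:
    - 203.0.113.0/24                 # only this CIDR may reach the load balancer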

This is especially key for API servers exposed to the external world. Parikh said that in the logs of Platform9’s customers, the company has seen a lot of scanning services trying to figure out whether the servers are running PHP.

“Someone figured out that something was exposed to the internet by other companies, and we saw in the logs some people trying to probe our servers to figure out whether those insecure ports were open. Kubernetes itself has an API server, and in the past we have seen exposed ports,” Parikh said.

“Are you securing your host policies to make sure you have the correct firewall running? If you’re running in a public cloud, make sure you’re using the correct security groups and allowing only the access you actually need. Are you making sure your Kubernetes components have the right kind of authentication and authorization so that only certain users are able to log in?”
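
On kubeadm-based clusters the API server typically runs as a static pod defined in /etc/kubernetes/manifests/kube-apiserver.yaml. An abbreviated, illustrative sketch of flags relevant to that last question might look like the following; a real manifest carries many more flags, and the version shown is arbitrary.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.17.0   # illustrative version
      command:
        - kube-apiserver
        - --anonymous-auth=false            # reject unauthenticated requests
        - --authorization-mode=Node,RBAC    # enforce RBAC on every API call
        - --insecure-port=0                 # disable the legacy plaintext port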
