4 reasons you should use Kubernetes
As most modern software developers can attest, containers have provided us with dramatically more flexibility for running cloud-native applications on physical and virtual infrastructure. Containers package up the services comprising an application and make them portable across different compute environments, for both dev/test and production use. With containers, it’s easy to quickly ramp application instances to match spikes in demand. And because containers draw on resources of the host OS, they are much lighter weight than virtual machines. This means containers make highly efficient use of the underlying server infrastructure.
So far so good. But though the container runtime APIs are well suited to managing individual containers, they’re woefully inadequate when it comes to managing applications that might comprise hundreds of containers spread across multiple hosts. Containers need to be managed and connected to the outside world for tasks such as scheduling, load balancing, and distribution, and this is where a container orchestration tool like Kubernetes comes into its own.
An open source system for deploying, scaling, and managing containerized applications, Kubernetes handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure they run as the user intended. Instead of bolting on operations as an afterthought, Kubernetes brings software development and operations together by design. By using declarative, infrastructure-agnostic constructs to describe how applications are composed, how they interact, and how they are managed, Kubernetes enables an order-of-magnitude increase in operability of modern software systems.
Kubernetes was built by Google based on its own experience running containers in production, and it surely owes much of its success to Google’s involvement. Google has some of the most talented software developers on the planet, and it runs some of the largest software services in the world. This combination ensured that Kubernetes would become a rock-solid platform that can meet the scaling needs of virtually any organization. This article explains why Kubernetes is important and why it marks a significant step forward for devops teams.
An infrastructure framework for today
These days, developers are called on to write applications that run across multiple operating environments, including dedicated on-prem servers, virtualized private clouds, and public clouds such as AWS and Azure. Traditionally, applications and the tooling that supports them have been closely tied to the underlying infrastructure, so it was costly to use other deployment models despite their potential advantages. This meant that applications became dependent on a particular environment in several respects, including performance issues related to a specific network architecture; adherence to cloud provider-specific constructs, such as proprietary orchestration techniques; and dependencies on a particular back-end storage system.
PaaS tries to get around these issues, but often at the cost of imposing strict requirements in areas like programming languages and application frameworks. Thus, PaaS is off limits to many development teams.
Kubernetes eliminates infrastructure lock-in by providing core capabilities for containers without imposing restrictions. It achieves this through a combination of features within the Kubernetes platform, including Pods and Services.
Better management through modularity
Containers allow applications to be decomposed into smaller parts with clear separation of concerns. The abstraction layer provided for an individual container image allows us to fundamentally rethink how distributed applications are built. This modular approach enables faster development by smaller, more focused teams that are each responsible for specific containers. It also allows us to isolate dependencies and make wider use of well-tuned, smaller components.
But this can’t be achieved by containers alone; it requires a system for integrating and orchestrating these modular parts. Kubernetes achieves this in part using Pods—typically a collection of containers that are controlled as a single application. The containers share resources, such as file systems, kernel namespaces, and an IP address. By allowing containers to be collocated in this manner, Kubernetes removes the temptation to cram too much functionality into a single container image.
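To make the Pod idea concrete, here is a minimal sketch of a two-container Pod. The names and images are placeholders, not a prescribed setup: a web server writes logs to a shared volume, and a sidecar container ships them elsewhere. Both containers share the Pod’s IP address and can exchange data through the volume, so each image stays focused on one job.

```yaml
# Hypothetical Pod: two cooperating containers sharing a network
# namespace and an ephemeral volume. Image names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper
  labels:
    app: web
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}          # scratch volume that lives as long as the Pod
  containers:
    - name: web
      image: example.com/web:1.0        # placeholder image
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/web
    - name: log-shipper
      image: example.com/log-shipper:1.0  # placeholder sidecar image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/web
          readOnly: true
```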
The concept of a Service in Kubernetes is used to group together a collection of Pods that perform a similar function. Services can be easily configured for discoverability, observability, horizontal scaling, and load balancing.
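A Service groups Pods by label selector rather than by name, so Pods can come and go while clients keep a stable endpoint. A minimal sketch, assuming Pods labeled `app: web` that listen on port 8080:

```yaml
# Hypothetical Service that load-balances across all Pods
# carrying the label app: web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # which Pods this Service routes to
  ports:
    - port: 80          # port the Service exposes inside the cluster
      targetPort: 8080  # port the selected containers listen on
```

Inside the cluster, other workloads can then reach the group simply as `web:80`, with Kubernetes handling discovery and load balancing.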
Deploying and updating software at scale
Devops emerged as a method to speed the process of building, testing, and releasing software. Its corollary has been a shift in emphasis from managing infrastructure to managing how software is deployed and updated at scale. Most infrastructure frameworks don’t support this model, but Kubernetes does, in part through Kubernetes Controllers. Thanks to controllers, it’s easy to use infrastructure to manage the application lifecycle.
The Deployment Controller simplifies a number of complex management tasks. For example:
- Scalability. Software can be deployed for the first time in a scale-out manner across Pods, and deployments can be scaled in or out at any time.
- Visibility. Identify completed, in-process, and failing deployments with status querying capabilities.
- Time savings. Pause a deployment at any time and resume it later.
- Version control. Update deployed Pods using newer versions of application images and roll back to an earlier deployment if the current version is not stable.
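The capabilities above are all expressed through a single Deployment object. A minimal sketch, with a placeholder image name, might look like this:

```yaml
# Hypothetical Deployment managing three replicas of a web app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # desired scale; change at any time
  selector:
    matchLabels:
      app: web
  template:                 # Pod template stamped out per replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0  # bump this tag to roll out a new version
          ports:
            - containerPort: 8080
```

From there, the management tasks in the list map onto standard commands: `kubectl scale deployment web --replicas=5` for scaling, `kubectl rollout status deployment/web` for visibility, `kubectl rollout pause` and `resume` for pausing, and `kubectl rollout undo deployment/web` to revert to the previous version.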
Among other possibilities, Kubernetes simplifies a few specific deployment operations that are especially valuable to developers of modern applications. These include the following:
- Horizontal autoscaling. Kubernetes autoscalers automatically adjust the number of Pods in a deployment based on the usage of specified resources (within defined limits).
- Rolling updates. Updates to a Kubernetes deployment are rolled out gradually across the deployment’s Pods, honoring optional predefined limits on the number of Pods that can be unavailable and the number of spare Pods that may exist temporarily.
- Canary deployments. A useful pattern for releasing a new version is to first test it in production alongside the previous version, then scale up the new deployment while scaling down the previous one.
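Horizontal autoscaling is likewise declarative. As a sketch, the following HorizontalPodAutoscaler (the target name and thresholds are illustrative) keeps average CPU utilization near 70 percent across two to ten Pods of a Deployment named `web`; rolling-update limits, by contrast, are set on the Deployment itself via `spec.strategy.rollingUpdate.maxUnavailable` and `maxSurge`.

```yaml
# Hypothetical autoscaler for a Deployment named "web".
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:           # which workload to resize
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2            # never scale below this
  maxReplicas: 10           # never scale above this
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # target average CPU across Pods
```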
Unlike traditional, all-inclusive PaaS offerings, Kubernetes provides wide latitude for the types of applications supported. It doesn’t dictate application frameworks (such as Wildfly), restrict the supported language runtimes (Java, Python, Ruby), cater to only 12-factor applications, or distinguish “apps” from “services.” Kubernetes supports a wide variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run well on Kubernetes.
Laying the foundation for cloud-native apps
Not surprisingly, given the interest in containers, other management and orchestration tools have emerged. Popular alternatives include Apache Mesos with Marathon, Docker Swarm, AWS EC2 Container Service (ECS), and HashiCorp’s Nomad.
Each has its merits. Docker Swarm is bundled tightly with the Docker runtime, so users can transition easily from Docker to Swarm; Mesos with Marathon is not limited to containers, but can deploy any kind of application; AWS ECS is easier for current AWS users to adopt. However, Kubernetes clusters can run on EC2 and integrate with services such as Amazon Elastic Block Store, Elastic Load Balancing, Auto Scaling Groups, and so on.
These frameworks are starting to duplicate each other in features and functionality, but Kubernetes remains immensely popular due to its architecture, innovation, and the large open source community around it.
Kubernetes marks a breakthrough for devops because it allows teams to keep pace with the requirements of modern software development. In the absence of Kubernetes, teams have often been forced to script their own software deployment, scaling, and update workflows. Some organizations employ large teams to handle those tasks alone. Kubernetes allows us to derive maximum utility from containers and build cloud-native applications that can run anywhere, independent of cloud-specific requirements. This is clearly the efficient model for application development and operations we’ve been waiting for.