How containers and Kubernetes change deployment and DevOps
Source – techtarget.com
Containers, a technology that easily packages multicomponent applications for deployment, are well-suited for enterprises. Those same organizations likely find Kubernetes even more exciting.
Kubernetes is an orchestration tool that extends — and at the same time simplifies — management of containers to support large, distributed resource pools and application component redeployments.
The proliferation of containers and Kubernetes has some enterprises wondering if these technologies will change DevOps, in particular the tools organizations use to standardize and automate configurations. Kubernetes in particular has a lot of buzz, arguably even more than Docker, the most popular container software. Kubernetes has yet to fully address all of the issues with container deployment. Nevertheless, there are three reasons to believe containers and Kubernetes will shift how organizations implement DevOps.
1. Deployment that limits complexity
Containers are becoming the preferred framework for application deployment, over models such as VMs or physical stacks. The VM model is too complex to deploy and redeploy componentized applications. With containers, users define deployment units, not just pieces of applications. The underlying container architecture — whether managed with Docker, Kubernetes or another tool — frames deployments in a useful network context, keeps track of the pieces and generates replacements as needed.
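To make that concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the component name and image are hypothetical. The deployment unit and replica count are declared once, and the orchestrator tracks the pieces and generates replacements as needed:

```yaml
# Hypothetical sketch: a Deployment declares desired state;
# Kubernetes recreates any replica that fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical component name
spec:
  replicas: 3                 # keep three copies running at all times
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example.com/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

If a node dies or a container crashes, the controller notices the divergence from the declared state and schedules a replacement; no deployment script has to anticipate the failure.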
Tools for IT automation, popular in DevOps shops, can be adapted to container deployment. However, because these tools are typically designed to support everything from bare-metal servers to serverless functions, there’s a lot of unneeded flexibility when you apply them to containers alone. That flexibility has a collateral complexity dimension that works against the much-touted simplicity of containers.
2. Pooled resources abstract everything below the OS
Containers and Kubernetes have an architected model of pooled resources that greatly simplifies the application deployment process. For maximum simplicity and flexibility, the user sets up an application to run on an abstracted resource pool, not a specific set of servers.
Kubernetes defines clusters of resources that are treated by the software as a single virtual server. You can build a cluster from a single rack of servers or from an interconnected set of data centers — and everything in between. No matter how you build it, the operational processes are almost identical.
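A short sketch of what that abstraction looks like in practice, with hypothetical names: the workload requests resources rather than servers, and the scheduler places it on any node in the cluster that can satisfy the request, whether the cluster is one rack or several data centers.

```yaml
# Hypothetical sketch: the pod asks for CPU and memory,
# not for a specific server; the scheduler picks the node.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
  - name: worker
    image: example.com/worker:1.0   # hypothetical image
    resources:
      requests:
        cpu: "500m"       # half a CPU core
        memory: 256Mi
```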
Automated tools for DevOps are not inherently designed to work with these pooled resources for containers specifically. Enterprises must ask whether the extra flexibility these tools offer, compared with Kubernetes, offsets that old problem of software complexity.
DevOps tools are generally configuration management systems — like Chef or Puppet — focused on the deployment process. Configuration management in a virtual, containerized world isn’t obsolete, but managing infrastructure is increasingly unnecessary.
IT shops that adopt containers move from managing configurations to running virtualized resources that are abstracted from the server configurations. If configuration management tools are the primary bridge between traditional and new IT in the enterprise, as Chef CEO Barry Crist said in 2015, what should enterprises do with the bridge once they've crossed it and moved on?
3. Kubernetes is operations-centric
DevOps is supposed to facilitate the transition from development to operations. Yet operations-focused organizations often say that DevOps tools are disproportionately developer-centric. For example, tools like Chef and Puppet require IT operations users to learn a programming language. As a result, newer DevOps products, such as Ansible, overtly address complaints about programmer bias with simple formatting for configurations.
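A minimal sketch of that simpler format, using real Ansible modules but a hypothetical host group: the playbook is plain YAML, readable without knowing a programming language.

```yaml
# Hypothetical Ansible playbook: declarative YAML tasks rather than
# Ruby code, which is the "simple formatting" the article refers to.
- hosts: webservers        # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```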
Kubernetes is an operations-centric tool. The containerization process — handled by Docker — prepares applications from the development side, so Kubernetes orchestration can focus on deployment and redeployment.
Limitations for Kubernetes
It might sound like IT deployment with DevOps-centric automation as we know it is doomed, but there are limitations to the containers and Kubernetes model that vex enterprise users.
Users most frequently complain about Kubernetes' networking and load balancing. Kubernetes, and container systems in general, use a simplified network model sometimes called the application subnet, where each deployment runs inside an IP subnet with private IP addresses. No application is visible to the outside world, including its users, until an admin explicitly exposes it by translating private addresses to VPN or internet addresses. Applications can be scaled under load by instantiating additional components, but load balancing has to take that address translation into account.
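In Kubernetes terms, that explicit exposure is typically expressed as a Service. A minimal sketch, with hypothetical names matching a deployment labeled `app: web-frontend`:

```yaml
# Hypothetical sketch: pods hold private cluster IPs; a Service of
# type LoadBalancer exposes them on an external address and spreads
# traffic across replicas - the translation step the article describes.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer      # requests an externally reachable address
  selector:
    app: web-frontend     # routes to pods with this label
  ports:
  - port: 80              # external port
    targetPort: 8080      # container port inside the private subnet
```

In practice, the external address often comes from infrastructure outside Kubernetes, such as a cloud provider's load balancer, which is why experts recommend managing this layer with separate tools.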
Most experts recommend going outside of Kubernetes to manage the network and load balancing. DevOps tools can account for application addressing and networking, as well as load balancing, so these tools coexist with containers and Kubernetes.
Enterprises that migrate existing applications to containers also rely on DevOps-centric tools. Kubernetes falls short here because, rather than learn new operations practices and tools, enterprises continue to use the DevOps tool set that is already in place from the application's previous hosting environment, such as bare metal or VMs.
Vendors for containers and Kubernetes are addressing these flaws. In 2018, Red Hat moved to acquire CoreOS, which offers tools to manage large resource pools that can include VMs and even bare metal, as well as Tectonic, a Kubernetes distribution that integrates this resource support into Kubernetes.
A more universal vision of resource pools, combined with a more general model of Kubernetes orchestration, could decide the future of DevOps. If Red Hat utilizes CoreOS' assets fully, it could shift the industry decisively toward Kubernetes.