Kubernetes Container Management Is Not Application Management
Application performance management is different from container management in myriad ways. Don’t confuse the two.
It’s scary how quickly container technology achieved widespread adoption in the world’s mission-critical business applications. Consider that it took about 10 years after the introduction of Java for the programming language to become a near-ubiquitous standard (followed shortly by C#/.NET). In stark contrast, enterprise-ready Docker just turned 5 years old, and it’s been pretty much everywhere for a while.
Why so relatively quick? Are containers that much easier to deploy, control and manage than traditional Java app servers were? Uh, no! In fact, containers create a whole mess of control and management issues (especially performance management issues), and even the well-established application performance management (APM) companies struggled to gain any visibility inside containerized applications.
But when DevOps teams considered how useful and powerful containers were for deploying and operating microservice applications, collectively they figured it was worth the pain. To achieve the efficiencies of deployment, they were willing to put up with the management headaches associated with operating them, from over-committing infrastructure to application hangs and a lack of visibility, even with (or especially with) APM solutions.
And then came orchestration; more specifically, along came Kubernetes. While not the first or only system designed to manage containerized systems, Kubernetes simply rolled over the alternatives. How quickly did that occur? Quicker than you can spell Kubernetes—and I mean spelling it “K – 8 – S.” In what seemed to be one announcement on top of another, every cloud service provider announced its own enterprise version of Kubernetes. These enterprise distributions were announced by IBM, Microsoft, Red Hat, Amazon, Google and others; VMware announced it even before its cloud was available. Other orchestration tools simply didn’t know what hit them, and really didn’t stand a chance.
Kubernetes Becomes King of the Hill
Having a standard, especially when it comes to managing infrastructure (and especially application infrastructure), makes everybody happy. Thus, Kubernetes continued to win over the community—or to be more specific, communities—of developers and operations. K8s provided some semblance of control in the wild, wild west of containers and microservices. As a bonus, the multitude of distribution choices satisfied another requirement of modern application delivery teams—the desire to avoid vendor lock-in—pretty much for any part of their environment.
But even with the enterprise versions of K8s (whose ranks now include newer distributions such as Pivotal PKS and Rancher), container “management” is missing two major aspects of delivering applications that meet service requirements. The first gap is performance management; specifically, performance monitoring of the services running in Kubernetes. The second is a lack of understanding of how a specific set of services, hosts, deployments and pods contributes to the overall composition of a specific application. There’s a little chicken-and-egg problem there, but the salient point is that the orchestration system manages resources across containers—not applications and not application performance.
Orchestration certainly makes containers more malleable and enterprise-ready, but without visibility into application performance, service owners can’t prove (or even know) how well the application is actually servicing customer needs. To answer that question, the operations, DevOps, and/or development teams must have a way to tie application performance directly to Kubernetes (and Kubernetes orchestrated applications).
Ignoring Application Performance is Asking for Trouble
One could choose to simply ignore questions of how the application is performing: Is the app meeting its promised service levels (SLAs)? Does it scale appropriately? Is the response time fast enough? After all, if your applications are running properly, wouldn’t you be able to tell from overall infrastructure usage and container status? (Hint: That’s a trick question—the answer is, “Not on your life!”)
Think of the last time you were in a meeting with the CIO (or maybe you’re the CIO meeting with the application owner). When she asked how the apps were doing, I’m guessing the answer, “Well, our resource usage is normal,” somehow would not be satisfactory.
The onus of application performance management has shifted over the years. When the first APM tools appeared, keeping the apps running was simply a matter of survival, and those tools provided visibility into production code running on Java application servers. Only the largest organizations could afford (or would even consider) purchasing APM solutions.
Today, even the tiniest startups rely on applications to conduct their business. Application performance and availability, troubleshooting and optimization are no longer nice-to-have luxuries of the Fortune 50—every ops and development team needs to see the performance of its services and applications.
A Lesson to Learn From Capacity Management (and Capacity Planning)
One of the anecdotes I like to use when discussing the evolution of IT operations is the difference between the “old school” practice of capacity planning and the ITIL/ITSM practice of capacity management.
Capacity planning was the exercise of figuring out the configuration, number and size of servers (physical servers, no less) needed to execute the application being delivered to operations. The answer was static—CPU/core counts, I/O bandwidth, storage size, memory size—and a count. The planner was (mayyyyybe) thinking about the concept of providing resources for the application to run properly, but all they had to go on was an estimate of compute resources needed by the programmer.
Then along came the concept of capacity management. First of all, capacity management was not an exercise with a hard beginning and end. Capacity management was fluid, always in play. Why? Sure, the capacity manager determined the exact same resource configuration—CPUs, memory, hard drive, network. The difference is the phrase that follows the answer: “In order to meet our scalability targets (1,000 concurrent users) and performance targets (all transactions in less than half a second) …”
Capacity management is an always-active, always-questioning and always-reporting discipline. It requires adjustments to “the answer” depending on what’s happening with the applications.
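As a worked example of the arithmetic behind “the answer,” Little’s Law (concurrency = throughput × response time) turns targets like the ones above into a throughput requirement. A minimal sketch, where the 1,000-user and half-second figures are the article’s illustrative targets and the 250 requests-per-second-per-pod figure is purely an assumed benchmark number:

```python
# Rough capacity check using Little's Law: L = lambda * W
# (concurrency = throughput x response time). The targets come from
# the text; the per-pod throughput is an assumed benchmark, not a
# real measurement.
import math

def required_throughput(concurrent_users: float, response_time_s: float) -> float:
    """Throughput (req/s) needed to sustain the given concurrency."""
    return concurrent_users / response_time_s

def pods_needed(target_rps: float, rps_per_pod: float) -> int:
    """Pods required to serve the target throughput, rounding up."""
    return math.ceil(target_rps / rps_per_pod)

rps = required_throughput(concurrent_users=1000, response_time_s=0.5)
print(rps)                      # 2000.0 req/s
print(pods_needed(rps, 250.0))  # 8 pods at an assumed 250 req/s each
```

The point of the exercise is that the pod count is only “the answer” for as long as the measured response time and user load match the assumptions; capacity management re-runs this check continuously as the application changes.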
Kubernetes and Application Performance Management
To ensure that orchestrated microservice applications, from single mobile apps to complex financial suites, are running at optimal performance levels, it’s critical to add application performance management to the Kubernetes mix. Analogous to capacity management, K8s with APM provides resource allocation and management to meet your application scalability and performance targets. You can’t meet them if you’re not measuring them.
The final piece of the puzzle is picking an APM tool that can handle the three most difficult aspects of orchestrated container applications:
Visibility into containers, their stack and the performance characteristics of the services running on them (throughput, error rate and latency).
Observability in Kubernetes, understanding how the orchestration system is related to the services running on it.
Dealing with the complexity and constant change in microservice environments, being able to filter out the noise and show stakeholders everything they need to see about components and services they’re responsible for.
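To make the first of those points concrete, here is a minimal sketch of computing the three service signals it names (throughput, error rate and latency) over a window of request records. The `Request` records and the `checkout` service name are hypothetical illustrations; real APM agents collect these measurements automatically from instrumented services.

```python
# Sketch: per-service throughput, error rate and p95 latency computed
# from a window of hypothetical request records. APM tools gather
# these signals automatically; this only shows what they represent.
import math
from dataclasses import dataclass

@dataclass
class Request:
    service: str
    latency_ms: float
    status: int  # HTTP status code

def service_signals(requests: list[Request], window_s: float) -> dict:
    n = len(requests)
    errors = sum(1 for r in requests if r.status >= 500)
    latencies = sorted(r.latency_ms for r in requests)
    # Nearest-rank 95th percentile (rounding up to a real sample).
    p95 = latencies[min(n - 1, math.ceil(0.95 * n) - 1)] if latencies else 0.0
    return {
        "throughput_rps": n / window_s,
        "error_rate": errors / n if n else 0.0,
        "p95_latency_ms": p95,
    }

reqs = [Request("checkout", 120.0, 200),
        Request("checkout", 340.0, 200),
        Request("checkout", 90.0, 503)]
print(service_signals(reqs, window_s=60.0))
```

An APM tool layers the second and third capabilities on top of numbers like these: mapping each service back to the pods and deployments that serve it, and filtering the flood of per-service data down to what each stakeholder owns.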
If you’re going to run microservices, you’re probably running them in containers. And if your application is critical to your business, you should be orchestrating your containers. Kubernetes is a good choice, since enterprise versions are available from almost every major cloud provider. Always keep an eye on your application’s performance with a modern application monitoring solution that supports observability of your Kubernetes-orchestrated environments.