How devops changes monitoring
Source – computerworld.com.au
Devops is in many ways a modern evolution born out of the old way of doing things. Waterfall development methodologies were too slow, deployments to production were too infrequent, and the traditional separation of developers and operators was an obstacle to change. By combining a little philosophy and a lot of tools, devops unlocks dramatic increases in speed and efficiency.
Devops brings more automation to every stage of the application lifecycle, and the time to market for new applications is reduced significantly. Yet all of these changes have significant repercussions. With tectonic shifts in the requirements for developing, testing, and deploying applications, the needs of modern monitoring systems are changing as well.
Monitoring is one of the more overlooked areas when adopting devops methodologies. With a relatively static codebase, operations teams didn’t need particularly sophisticated monitoring; the tools in use generally offered insight only into basic statistics in production environments.
With frequent code changes becoming the new normal, organizations need a more comprehensive and real-time view of the production environment. Features such as real-time streaming, historical replay, and great visualizations become mission-critical components of application and service monitoring.
Monitoring for modern environments
Devops is speeding up the entire application lifecycle, from development to QA to production. Relatively static production applications are now being updated as frequently as multiple times a day. This leads to many challenges, some old and some new.
Developers have had to adapt by writing more comprehensive automated tests for their code, so that QA is as automated as possible. QA has become dependent on continuous integration, which automatically runs all of the unit and integration tests whenever new code is committed. Monitoring systems are now becoming more aware of every part of the devops toolchain.
Before devops, new application updates would be carefully administered by highly skilled technicians. Continuous deployment, by stark contrast, builds on all of the automation in the devops toolchain to move code into production whenever it passes all its tests.
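The gating logic described above — code moves to production only when every test passes — can be sketched in a few lines. This is a toy illustration, not any real CI product’s API; `deploy_if_green` and the stub deploy step are invented names.

```python
import subprocess
import sys

def deploy_if_green(test_cmd, deploy):
    """Run the test suite; call deploy() only if every test passes.

    A toy continuous-deployment gate: the function name and the
    commands passed in are illustrative, not a real CI system's API.
    """
    result = subprocess.run(test_cmd, capture_output=True)
    if result.returncode == 0:
        deploy()
        return True
    return False

# Example: a trivially passing "test suite" and a stub deploy step.
deployed = deploy_if_green(
    [sys.executable, "-c", "assert 1 + 1 == 2"],
    deploy=lambda: print("deploying build"),
)
```

Real pipelines layer many such gates (unit tests, integration tests, load tests), but the principle is the same: a failing exit code anywhere stops the promotion to production.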
If this sounds very “Wild West,” like something to be tried and tested only on the smallest and least important applications out there, you should know that Facebook has long been a proponent of these kinds of agile deployment systems. It’s well-known programming lore at Facebook that if your code breaks the application, it will be tracked back to you through the source control history and you will be held responsible.
It reminds me of the story of bridge builders in the Roman era. If you built a bridge in those times and it collapsed and killed someone, you were put to death. No wonder many of those bridges survived for so long. Adding personal accountability tends to increase the overall quality of whatever you are building.
But organizations can’t blindly trust a black box that automatically deploys code that hopefully works. Properly implemented monitoring systems can provide much-needed insight, helping you turn a would-be Wild West automated mess into a NASA control center.
Today’s monitoring systems have more real-time insight into every piece of the application stack than ever before. Developers of modern applications are writing API-driven code, which means those same APIs are now available to monitoring systems. In addition, many monitoring services have code hooks into the application logic itself.
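A “code hook into the application logic” often amounts to instrumentation like the following minimal sketch: a decorator that counts calls and accumulates latency per function. The registry names (`call_counts`, `total_seconds`) are invented for illustration; a real monitoring agent would ship these numbers to a backend rather than keep them in memory.

```python
import time
from collections import defaultdict
from functools import wraps

# In-memory metrics registry; a real agent would export these to a
# monitoring backend. Names here are illustrative only.
call_counts = defaultdict(int)
total_seconds = defaultdict(float)

def monitored(fn):
    """Decorator-style code hook: count calls and accumulate latency."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            call_counts[fn.__name__] += 1
            total_seconds[fn.__name__] += time.perf_counter() - start
    return wrapper

@monitored
def handle_request(path):
    # Stand-in for real application logic.
    return f"200 OK {path}"

handle_request("/health")
handle_request("/orders")
```

Because the hook lives inside the application, the monitoring system sees per-function call rates and latencies rather than just host-level CPU and RAM.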
Moreover, monitoring services have widened their focus from production environments to the entire application stack. This includes the compiling stage, the state of unit tests, integration tests, how well the code performs under load, and more. Google’s code deployment monitoring services are even known to watch its project management software, looking for and flagging individual files that have statistically more bug reports than others, marking them as hot spots to look out for in the future.
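The hot-spot idea — flagging files with statistically more bug reports than their peers — can be approximated with a simple outlier test. This is a toy stand-in for the kind of analysis described above, not Google’s actual system; the function name and the one-standard-deviation threshold are arbitrary choices for illustration.

```python
from statistics import mean, pstdev

def bug_hot_spots(bug_counts, threshold=1.0):
    """Flag files whose bug-report count exceeds the mean by more than
    `threshold` population standard deviations. Illustrative only."""
    counts = list(bug_counts.values())
    mu, sigma = mean(counts), pstdev(counts)
    if sigma == 0:
        return []  # all files look alike; nothing stands out
    return sorted(f for f, c in bug_counts.items()
                  if c > mu + threshold * sigma)

# Hypothetical bug-report counts per file.
hot = bug_hot_spots({"auth.py": 14, "cart.py": 3, "ui.py": 2, "api.py": 4})
```

Feeding the flagged files back into code review and test planning is what makes this kind of monitoring proactive rather than reactive.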
Proper monitoring in devops is proactive, not just reactive. It finds ways to improve the quality of your applications before problems even show up. Because this monitoring also watches the tools themselves, it can help improve the devops toolchain by highlighting areas that might need more automation.
When you have a complex application being updated and deployed multiple times a day and undergoing rapid QA cycles, you want to be able to pinpoint problems as quickly as possible. Sophisticated monitoring becomes a first line of defense against downtime. Thus monitoring has had to evolve to take all of the new data into account.
How can you tell the difference between old school monitoring services and devops-ready monitoring services? If you do not know what you are looking for, you could find yourself facing significant downtime and missing the boat on new, agile methodologies.
Choosing a monitoring system
Obviously, a lot has changed in modern application lifecycle development and deployment, but many monitoring vendors remain stuck in the past. For every monitoring vendor that is actually devops friendly, a dozen don’t fit the bill. That makes it vital to know what to look for when evaluating monitoring solutions.
Modern devops architectures for complex applications have lots of data to track. No longer is it sufficient to track only the simplest statistics such as RAM, CPU, and disk I/O. Now your monitoring solution should be API-aware and feed data directly from the applications themselves. In order to make sense of it all, the table stakes you should be looking for in a modern monitoring system are real-time streaming data, historical replay, and great visualization tools.
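Streaming plus historical replay can be reduced to a simple idea: keep a bounded window of timestamped samples so dashboards can show live values and also replay any past interval. The class and method names below are invented for illustration, not any vendor’s API.

```python
import time
from collections import deque

class MetricStream:
    """Minimal stream-plus-replay sketch: retain the last `maxlen`
    timestamped samples. Illustrative names, not a real product API."""

    def __init__(self, maxlen=10000):
        self.samples = deque(maxlen=maxlen)  # old samples fall off the back

    def record(self, value, ts=None):
        """Append a (timestamp, value) sample; defaults to 'now'."""
        self.samples.append((ts if ts is not None else time.time(), value))

    def replay(self, start, end):
        """Return samples whose timestamp falls within [start, end]."""
        return [(t, v) for t, v in self.samples if start <= t <= end]

# Record three seconds of hypothetical CPU load.
cpu = MetricStream()
for second, load in enumerate([0.2, 0.9, 0.4]):
    cpu.record(load, ts=1000 + second)
```

Production systems add persistence, aggregation, and downsampling on top, but the contract — stream in real time, replay on demand — is the same one you should demand from a vendor.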
The visualization tools in particular are important for understanding the state of all of your applications in a holistic way. Being able to pinpoint problems in an agile devops environment often comes down to the quality of your visualization tools. Tracking down problems by inspecting individual log files when there are so many moving pieces is not an efficient operations strategy.
The next most important way to evaluate a monitoring system is by the quantity and quality of its modular integrations. How many programming languages can the monitoring system be plugged into? Many older monitoring systems support only a few, with a focus on Java and .Net, even though higher-level languages are becoming increasingly important to enterprise application development. You’ll want a monitoring system that can tie into popular languages such as Python, Ruby, PHP, and Go.
Is the monitoring software able to hook directly into configuration management tools like Puppet and Chef? How many databases can it interface with? Can it talk to PaaS software like Cloud Foundry and OpenShift? How about Docker and Linux containers?
In fact, just looking at how well-supported Docker is within the monitoring software should tell you a lot about its support for other modern tools. Although Docker is one of the newest devops tools, it’s also one of the fastest growing in terms of adoption.
Enterprise software is still dominated by traditional tools and frameworks such as Java, .Net, Oracle, IIS, WebSphere, and Microsoft SQL Server. But to assume that monitoring need support only those stalwarts is a big mistake. The new software stacks are heavy with Linux, PHP, Python, Ruby, Perl, Go, Nginx, Apache, Redis, Memcached, MySQL, and PostgreSQL. Even large-scale production software uses these tools. Facebook is famously built on PHP. Google uses a lot of Python and Go.
The future of enterprise software is going to be a lot more diverse than it is today. The leading monitoring vendors are savvy to these changes. They have embraced the polyglot future and have built tools that can scale with the speed of agile devops deployments.
Due to increasingly compressed application lifecycles, proper real-time monitoring has become a critical cornerstone of any devops toolchain. Understanding all of the moving pieces and how they fit together will give you a leg up when evaluating which monitoring solution is right for your organization.