6 ways to reduce deployment risk without adding cost

Source: techrepublic.com

Anyone can push code to production more often; doing it without impacting users or driving up costs is the trick.

The drumbeat of the age is more software, faster, and that includes more frequent deploys. The classic software literature says that every deploy should get complete testing, which either drives up the cost of testing or reduces test coverage.

Let’s talk about other ways to reduce deployment risk.


Independent deploys
When the programmers at Etsy and IMVU invented continuous deployment, they learned to break the application into tiny pieces. Etsy, in particular, was using PHP. As long as a programmer did not change a code library or the database, they could deploy a single web page and only that web page. This arguably eliminated the need to test the entire system end-to-end.

Today’s systems are more likely to be composed of microservices along with a static front-end. Set the programmers free to deploy just their code frequently by making the sub-systems deploy independently. At the same time, expect the programmers to support that code in production.

In order to support their own code, teams likely need advanced monitoring.
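
To make the idea concrete, here is a minimal sketch of an independent deploy in a monorepo: figure out which service directories a commit touched and ship only those. The services/ layout and the deploy.sh script are assumptions for illustration, not anything Etsy or IMVU published.

# Sketch: deploy only the sub-systems whose code actually changed.
# Assumes a monorepo laid out as services/<name>/... and a hypothetical
# deploy.sh script; both are illustrative, not from the article.
import subprocess
from pathlib import PurePosixPath

def changed_services(base_ref: str = "origin/main") -> set[str]:
    """Return the set of service directories touched since base_ref."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    services = set()
    for path in diff:
        parts = PurePosixPath(path).parts
        if len(parts) >= 2 and parts[0] == "services":
            services.add(parts[1])
    return services

if __name__ == "__main__":
    for service in sorted(changed_services()):
        # Each service ships on its own; the rest of the system is untouched.
        subprocess.run(["./deploy.sh", service], check=True)

In practice this logic usually lives in the CI pipeline, but the point is the same: the unit of deployment shrinks to just the code that changed.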

Monitoring
It was Ed Keyes, back in 2007, who first claimed that "sufficiently advanced monitoring is indistinguishable from testing." Amazingly, Ed made that claim a year before anyone had spoken the term "DevOps" out loud.

The classic implementation of monitoring is something the "operations" people do, mostly looking at second-order effects: CPU, memory, and disk use. Advanced monitoring is more about capturing the user experience: the number of 400-series web errors, how long pages sit on the server, aggregate values passed into web pages (by count per minute), and so on. Imagine each member of a development team "watching the monitors" as they perform an independent rollout. If a programmer rolls out a web service and the numbers act unpredictably, say 404 errors spike, the programmer can roll the change back. The time a bug lives in production drops from two weeks (under a Scrum team) to two hours, without adding significant cost.
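
Here is a rough sketch of that "watch the monitors" loop as a script. The metrics URL, the threshold, and the deploy/rollback scripts are placeholders invented for illustration; a real setup would query whatever monitoring system you already run.

# Sketch: watch a 404-per-minute metric after a rollout and roll back on a spike.
# The metrics endpoint and the deploy/rollback scripts are hypothetical.
import subprocess
import time
import urllib.request

METRICS_URL = "https://example.com/metrics/http_404_per_minute"  # assumed endpoint
BASELINE = 5         # normal 404s per minute (assumed)
WATCH_MINUTES = 10   # how long to watch after the deploy

def current_404_rate() -> float:
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        return float(resp.read().decode().strip())

subprocess.run(["./deploy.sh", "search-service"], check=True)
for _ in range(WATCH_MINUTES):
    time.sleep(60)
    rate = current_404_rate()
    print(f"404s per minute: {rate}")
    if rate > BASELINE * 3:  # a spike, not ordinary noise
        print("404 spike detected; rolling back")
        subprocess.run(["./rollback.sh", "search-service"], check=True)
        break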

The best monitoring may be to take some of your key test automation scripts, chop them down to size, and run them in production.

Continuous testing of production
"Synthetic transactions" is a fancy term for taking tests, running them in production, and monitoring the results. That might be an entire user journey, from creating an account to logging in, searching, all the way through checkout. Imagine performing that operation, over and over again, all the time, in production. Perhaps you skip checkout; you might just log in continuously. Then add code to track how long the operations take. When people complain about the speed of login, you have real data about how long the experience takes, not just how long things take on the server.
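
A synthetic transaction can be as small as a timed login in a loop. The sketch below assumes a placeholder login URL, throwaway credentials, and a plain form post; swap in your own journey and feed the timings into whatever monitoring you already have.

# Sketch of a synthetic transaction: log in over and over and record how
# long the journey takes from the user's side. The URL, credentials, and
# form fields are placeholders, not from the article.
import time
import urllib.parse
import urllib.request

LOGIN_URL = "https://example.com/login"                     # assumed
CREDENTIALS = {"user": "synthetic", "password": "not-a-real-secret"}

def timed_login() -> float:
    """Perform one login and return the elapsed wall-clock seconds."""
    payload = urllib.parse.urlencode(CREDENTIALS).encode()
    start = time.monotonic()
    with urllib.request.urlopen(LOGIN_URL, data=payload, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

while True:
    try:
        elapsed = timed_login()
        print(f"login took {elapsed:.2f}s")   # feed this into monitoring
    except Exception as exc:
        print(f"login FAILED: {exc}")         # a failure is also a signal
    time.sleep(60)                            # run continuously, once a minute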

Best of all, you can likely do it by reusing test tooling. That's not quite free, but it will be a fraction of the cost, and you can do it for a fraction of the features–just the "hot spots" that are frequent trouble areas or core features.

Feature flags and canary rollouts
When I mentioned quick deploys and monitoring, I also implied the need for quick rollback. Feature flags push features into configuration, making them easy to turn on and off. Canary rollouts release a feature to a small percentage of the user base. That group could be internal users or "power users": people who want new features, will accept the risk of a few defects, and are committed to the product. Sending a new feature to canary users gives them the opportunity to complain if they find a problem, similar to the canary in a coal mine.

While the initial implementations of feature flags led to more complex code, where every flag required an "if" statement and two different blocks of code, that is not necessarily the case today. Asa Schachar, a developer advocate at Optimizely, suggests that a system designed around feature flags that are simply on or off can push some decisions into configuration, reducing accidental complexity.
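
To show what pushing decisions into configuration can look like, here is a tiny sketch of a flag check with a canary percentage. The flag name, the percentage, and the in-memory config dictionary are illustrative; a real system would read them from a flag service such as Optimizely.

# Sketch: a flag whose state lives in configuration rather than in code.
# The flag name and rollout percentage are made up for illustration.
import hashlib

FLAG_CONFIG = {
    "new_checkout": {"enabled": True, "rollout_percent": 5},  # canary: 5% of users
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Decide the flag for this user; the same user always gets the same answer."""
    cfg = FLAG_CONFIG.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash the user into a stable bucket from 0 to 99.
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]

print(is_enabled("new_checkout", "user-123"))

Because each user is hashed into a stable bucket, the same person stays in or out of the canary between requests, and turning the feature off is a configuration change rather than a deploy.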

Inspire and lead
It took David Hoppe, a senior developer with Itentional, to point out the obvious. As he put it: "How about help the team care about the product?"

Unless the team cares about something (the customer, the product, even its own pursuit of excellence), it is unlikely that any of the techniques above will have much impact. Personally, the consulting assignments I have been most satisfied with, the ones with the most long-term impact after I left, all involved helping the team generate its own continuous improvement, something I call the "skill snowball."

The proof for the techniques
The good news about the techniques above is that there is data behind them. A few years ago, Nicole Forsgren started a research effort that would become the State of DevOps report. In that project, she runs an annual survey that looks at how organizations structure the work and how they perform, then draws correlations. Forsgren published the results of her 2017 survey in the book Accelerate: The Science of Lean Software and DevOps.

It’s no surprise that most of my recommendations here also appear in the book. "Inspire and lead" does not, likely because it is so hard to quantify. I have worked with teams that admit they like to go home at five o’clock and get their meaning from their families. What folks don’t admit is that they don’t care, even when their very behavior seems designed to reduce effectiveness.
