Continuous delivery in DevOps: making releases dull and reliable

Source: devopsonline.co.uk

There is some healthy tension between a good DevOps team and the concept of continuous delivery. In the past, a typical agile team would build products and let the delivery manager worry about what happened in production. With the coming together of Dev and Ops, as the portmanteau term suggests, things are different.

The more operationally minded engineers will look closely at the performance of the product in the live environment and push back on design assumptions that are contradicted when real people use a service or product. A good DevOps team will always make clear the risks involved with breaking changes and try to strengthen the processes that mitigate them.

The story of the fat release

What continuous delivery means, at its simplest, is that the team no longer does traditional release planning, with many stories rolled into one fat release. Those were nervous times for an enterprise, as a single release might carry many vital changes for stakeholders.

The risk of having to roll changes back in case of problems would weigh heavily as the operations team carefully deployed the build overnight. With so many changes being deployed at once, problems were almost guaranteed over the following days.

With the growing number of ways to make progressive releases, this form of software delivery life cycle (SDLC) is no longer favoured; teams now aim for regular, small releases. Customers see improvements sooner, and issues can be responded to far more quickly.
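To make that concrete, here is a minimal sketch, assuming a Python service, of one progressive-release technique: a percentage-based canary gate. The function names and the five per cent figure are illustrative, not taken from any particular framework.

    import random

    CANARY_FRACTION = 0.05  # assumption: start by routing 5% of traffic to the new path

    def use_new_release() -> bool:
        # A production rollout would usually hash a stable user ID rather than
        # sample randomly, so each user gets a consistent experience.
        return random.random() < CANARY_FRACTION

    def handle_request(payload: str) -> str:
        if use_new_release():
            return f"new: {payload}"     # stand-in for the new code path
        return f"stable: {payload}"      # stand-in for the stable code path

    if __name__ == "__main__":
        hits = sum(handle_request("order-42").startswith("new") for _ in range(1000))
        print(f"{hits} of 1000 requests hit the canary")

If the canary misbehaves, rolling back is a one-line change to the fraction rather than an overnight redeployment.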

The DevOps team is the key to this – the conveyor belt no longer stops at the delivery manager but goes all the way to the customer. Smarter use of observability and monitoring helps the team see how a live product is actually used, and to predict what is likely to be needed in the near future.
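As a sketch of what that observability might look like, the snippet below uses the Python prometheus_client library to count feature usage in live. The metric and feature names are assumptions for illustration.

    import random
    import time

    from prometheus_client import Counter, start_http_server

    # Hypothetical counter tracking which features users actually touch in live.
    FEATURE_USES = Counter(
        "feature_uses_total",
        "How often each feature is used in production",
        ["feature"],
    )

    def record_use(feature: str) -> None:
        FEATURE_USES.labels(feature=feature).inc()

    if __name__ == "__main__":
        start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
        while True:  # simulate live traffic for demonstration
            record_use(random.choice(["search", "checkout", "wishlist"]))
            time.sleep(1)

Dashboards built over counters like this tell the team which journeys actually matter, which is far better input for planning than guesswork.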

The modern DevOps team started to form from the success of agile teams. While managers allowed enthusiastic developers to experiment with agile, they had no intention of letting their vital operations teams succumb to chaos. If a business made its money from properly running software, control of operations was key.

Even as operations teams were run from offshore, there was still a belief that the silo was necessary. But as massive-scale tech companies such as Google, Amazon and Facebook shipped asset-improving changes daily without fuss, leadership teams rightly wondered if something was amiss.

Code is eating the world

As the very tools that operations engineers used to deploy were themselves programmable, engineers who wrote code also started looking at the scripts from Puppet, Chef and Ansible. These scripts were code too, and so could be placed in a code repository and treated the same way as development code.
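A minimal sketch of what "treated the same way as development code" can mean in practice: a CI step, written here in Python, that syntax-checks an Ansible playbook before it can be merged. The playbook path is a hypothetical repository layout.

    import subprocess
    import sys

    PLAYBOOK = "deploy/site.yml"  # hypothetical path inside the shared repository

    def main() -> int:
        # `ansible-playbook --syntax-check` parses the playbook without running it,
        # acting as a compile gate for infrastructure code.
        result = subprocess.run(
            ["ansible-playbook", "--syntax-check", PLAYBOOK],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())

Once the scripts live in the repository, they earn the same review, testing and rollback discipline as any other code.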

Similarly, operations engineers started looking at the primitive environments that development engineers tested in, which often relied on a blackboard-like simplicity that ignored the problems seen in live environments. No development engineer was asking what would happen to a system when a million users tried to use it simultaneously after a marketing blitz.
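That question is now routinely answered with load tests run against a realistic environment before release. Below is a minimal sketch using Locust, a Python load-testing tool; the endpoints and payload are placeholders, not a real service.

    from locust import HttpUser, task, between

    class MarketingBlitzUser(HttpUser):
        # Simulated visitors pause one to three seconds between actions.
        wait_time = between(1, 3)

        @task(3)
        def landing_page(self):
            self.client.get("/")  # placeholder endpoint

        @task(1)
        def sign_up(self):
            self.client.post("/signup", json={"email": "test@example.com"})  # placeholder

Run it with something like "locust -f load_test.py --host https://staging.example.com" and ramp the user count up until something breaks, before marketing does it for you.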

Jez Humble described DevOps as “a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale.”

With the entry of cloud systems, it is now easier than ever to break down the silos and make these cross-disciplinary teams work. The common goal is to automate as much as possible and to improve the system continually. The teams maintain a set of environments in which components can be built collaboratively, tested, and customer journeys verified before release.
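As an illustration of verifying a customer journey before release, here is a small pytest-style smoke test in Python against a pre-live environment; the base URL, endpoints and payload are assumptions for the sketch.

    import requests

    BASE = "https://staging.example.com"  # hypothetical pre-live environment

    def test_browse_and_add_to_cart():
        # Walk the critical customer journey end to end before promotion to live.
        session = requests.Session()
        assert session.get(f"{BASE}/", timeout=5).status_code == 200
        assert session.get(f"{BASE}/products", timeout=5).status_code == 200
        response = session.post(
            f"{BASE}/cart", json={"sku": "ABC-123", "qty": 1}, timeout=5
        )
        assert response.status_code in (200, 201)

A journey test like this runs as a pipeline stage, so a release that breaks the checkout path never reaches live.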

In terms of the build pipeline, live simply becomes the final environment that the service or application is placed in. The popularity of containers and orchestration makes these environments much simpler to understand, and therefore easier to adopt. A cross-disciplinary team avoids excess specialism in favour of generalist tools that can be applied widely and experimented with.

Continuous delivery in DevOps

As in the Henry Ford revolution, where a car became the inevitable product of a healthy assembly line, a release should be the inevitable product of a healthy pipeline. Ford's intense commitment was to systematically lowering costs and introducing technical and business innovations, a mindset that by its very nature favours Continuous Integration and Continuous Delivery.

DevOps can also help close the security loop. Traditionally, the agile development model treated security as a story to be done when required, with operational engineers noticing problems but unable to have a timely influence.

This meant that industries such as banking found agile particularly questionable. Now penetration testing and similar practices can be done within the pipeline, with feedback coming from the people who understand why they are doing it. Suspicious user patterns in live can be used to challenge simplistic assumptions as stories are being discussed, making development engineers aware of issues as they design.
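As a toy sketch of turning live patterns into design feedback, the Python snippet below flags users with an unusual number of login attempts; the log format and threshold are invented for illustration.

    import collections

    # Hypothetical access-log lines: "<user_id> <path>"
    LOG_LINES = [
        "u1 /login", "u1 /login", "u1 /login", "u1 /login", "u1 /login",
        "u2 /login", "u2 /account",
    ]

    LOGIN_ATTEMPT_THRESHOLD = 3  # assumed cut-off for "suspicious"

    def suspicious_users(lines):
        attempts = collections.Counter(
            line.split()[0] for line in lines if line.endswith("/login")
        )
        return [user for user, count in attempts.items() if count > LOGIN_ATTEMPT_THRESHOLD]

    if __name__ == "__main__":
        print(suspicious_users(LOG_LINES))  # -> ['u1']

Findings like these land in the backlog as concrete stories, rather than as a vague instruction to be more secure.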
