Data-Driven Decision Making – Product Development with Continuous Delivery Indicators
Continuous Delivery is the standard technique for modern software development. However, measuring Continuous Delivery is not yet a very widespread practice.
Continuous Delivery Indicators introduced by Steve Smith in “Measuring Continuous Delivery” are helpful to measure Continuous Delivery in terms of stability and speed.
Introducing Continuous Delivery Indicators to development teams induces a new way of thinking about deployment pipelines: as the assembly lines of a team’s value stream, to be optimised for stability and speed.
Continuous Delivery Indicators provide a development process optimization framework where a development team can set stability and speed goals as well as track progress.
The introduction of Continuous Delivery Indicators to the development teams needs to be accompanied by a near real-time tool that transparently performs the raw data collection and indicators’ calculations.
The Data-Driven Decision Making Series provides an overview of how the three main activities in the software delivery – Product Management, Development and Operations – can be supported by data-driven decision making.
Software product delivery organizations deliver complex software systems on an ever more frequent basis. The main activities involved in software delivery are Product Management, Development and Operations (by this we mean activities, not the separate siloed departments we advise against). In each of these activities, many decisions have to be made fast to advance the delivery. In Product Management, the decisions are about feature prioritization. In Development, it is about the efficiency of the development process. And in Operations, it is about reliability.
The decisions can be made based on the experience of the team members. They can also be made based on data, which leads to a more objective and transparent decision-making process. Especially with the increasing speed of delivery and the growing number of delivery teams, an organization’s ability to be transparent is an important means of keeping everyone continuously aligned without time-consuming synchronization meetings.
In this article, we explore how the activities in Development can be supported by data from Continuous Delivery Indicators and how the data can be used for rapid data-driven decision making. This, in turn, leads to increased transparency and decreased politicization of the product development organization, ultimately supporting better business results, such as user engagement with the software and accrued revenue.
We report on the application of the Continuous Delivery Indicators in Development at Siemens Healthineers in a large-scale distributed software delivery organization consisting of 16 software delivery teams located in three countries.
Process Indicators, Not People KPIs
In order to steer Development in a data-driven way, we need to have a way of expressing the main activities in Development using data. That data needs to be treated as Process Indicators of what is going on, rather than as People Key Performance Indicators (KPIs) used for people evaluation. This is important because if used for people evaluation, the people may be inclined to tweak the data to be evaluated in favorable terms.
It is important that this approach to the data being treated as Process Indicators instead of people evaluation KPIs be set by the leadership of the product delivery organization in order to achieve unskewed data quality and data evaluation.
Continuous Delivery Indicators
The main activity in Development is building products. In that context, once it is known what to build, one of the central questions in Development is “how to build the product efficiently?”
The efficiency of the development process can be measured by analysing the value stream of a software development team. The value stream is Code → Build → Deploy and can be seen on the team’s deployment pipeline(s).
It is possible to measure the speed at which the value flows through the value stream. Likewise, it is possible to measure the stability of the value flow. The so-called Continuous Delivery Indicators of Stability and Speed do exactly that. The indicators are defined in “Measuring Continuous Delivery” by Steve Smith.
With the Continuous Delivery Indicators of Stability and Speed, a team is enabled to see their value stream and the bottlenecks therein at a glance. They can select the biggest bottleneck and invest time to eliminate it (in line with the Theory of Constraints).
The Continuous Delivery Indicators of Stability are Build Stability and Deployment Stability. The Continuous Delivery Indicators of Speed are Code Throughput, Build Throughput and Deployment Throughput. (Note: Steve Smith put forward a definition of speed as a combination of the lead time for a change and the interval between changes. This extends the focus on lead time alone proposed in “Accelerate” by Nicole Forsgren, Jez Humble and Gene Kim.) All Indicators should be generated on an ongoing basis for each team’s deployment pipeline to visualise the team’s value stream.
The Build Stability Indicator consists of the Build Failure Rate and Build Failure Recovery Time.
The Deployment Stability Indicator consists of the Deployment Failure Rate and Deployment Failure Recovery Time.
The Code Throughput Indicator consists of the Master Branch Commit Lead Time and Master Branch Code Commit Frequency.
The Build Throughput Indicator consists of the Build Lead Time and Build Frequency.
The Deployment Throughput Indicator consists of the Deployment Lead Time and Deployment Frequency.
All the Indicators are defined in such a way that low values are good, i.e. indicate good delivery.
Build Stability Indicator:
low Build Failure Rate is good.
low Build Failure Recovery Time is good.
Deployment Stability Indicator:
low Deployment Failure Rate is good.
low Deployment Failure Recovery Time is good.
Code Throughput Indicator:
low Master Branch Commit Lead Time is good.
high Master Branch Code Commit Frequency is good (it is measured as the interval between two successful commits, so a low interval is good).
Build Throughput Indicator:
low Build Lead Time is good.
high Build Frequency is good (it is measured as the interval between two successful builds, so a low interval is good).
Deployment Throughput Indicator:
low Deployment Lead Time is good.
high Deployment Frequency is good (it is measured as the interval between two successful deployments, so a low interval is good).
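As a minimal, hypothetical sketch, the paired metrics behind these Indicators can be computed from raw run records. The record format below (queued time, finished time, success flag) and the sample values are illustrative, not the schema of any particular CI system:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical build runs: (queued_time, finished_time, succeeded).
builds = [
    (datetime(2021, 3, 1, 9, 0),  datetime(2021, 3, 1, 9, 10),  True),
    (datetime(2021, 3, 1, 11, 0), datetime(2021, 3, 1, 11, 12), False),
    (datetime(2021, 3, 1, 13, 0), datetime(2021, 3, 1, 13, 9),  True),
    (datetime(2021, 3, 2, 10, 0), datetime(2021, 3, 2, 10, 11), True),
]

def failure_rate(runs):
    """Failure Rate: share of runs that failed."""
    return sum(1 for _, _, ok in runs if not ok) / len(runs)

def failure_recovery_times(runs):
    """Failure Recovery Time: from each failure to the next success."""
    recoveries = []
    for i, (_, failed_at, ok) in enumerate(runs):
        if ok:
            continue
        for _, finished, later_ok in runs[i + 1:]:
            if later_ok:
                recoveries.append(finished - failed_at)
                break
    return recoveries

def median_lead_time(runs):
    """Lead Time: median queued-to-finished duration."""
    return median(end - start for start, end, _ in runs)

def success_intervals(runs):
    """Frequency, measured as the interval between two successes."""
    finished = [end for _, end, ok in runs if ok]
    return [b - a for a, b in zip(finished, finished[1:])]
```

The same four computations apply to commits (Code Throughput) and deployments (Deployment Stability and Deployment Throughput) when fed the corresponding records.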
Following this, when plotting the Continuous Delivery Indicators on graphs, it is easy to tell good behaviour from bottlenecks. Wherever a graph stays close to the X axis, the behaviour is good. Wherever a graph shows high values, there is a bottleneck in the team’s value stream.
That is, with the graphs demonstrating the Continuous Delivery Indicators, a development team can easily uncover bottlenecks in their Dev / Test / Deploy processes and make data-driven prioritization decisions on where to improve technically to make the Dev / Test / Deploy processes more efficient.
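As an illustration of such a graph, here is a small matplotlib sketch with made-up weekly Deployment Failure Rate values; the dashed threshold line is our own assumption for marking a bottleneck, not part of the Indicators’ definition:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is required
import matplotlib.pyplot as plt

# Made-up weekly Deployment Failure Rate samples (fraction of failed deployments).
weeks = list(range(1, 9))
rates = [0.10, 0.08, 0.05, 0.06, 0.30, 0.45, 0.60, 0.55]

fig, ax = plt.subplots()
ax.plot(weeks, rates, marker="o", label="Deployment Failure Rate")
ax.axhline(0.15, linestyle="--", label="assumed bottleneck threshold")
ax.set_xlabel("Week")
ax.set_ylabel("Failure rate")
ax.legend()
fig.savefig("deployment_failure_rate.png")
```

Weeks 1–4 hug the X axis (good behaviour); the sustained high values from week 5 onwards are the kind of pattern a team would read as a bottleneck.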
A team that uses the Continuous Delivery Indicators to optimize their Dev / Test / Deploy processes deliberately invests in technical improvements like Test Automation, Deployment Automation, TDD Process Improvement, and BDD Process Improvement in areas where it helps most. A team that does not use the Continuous Delivery Indicators might either not invest in their development process optimization, do it only after major outages, or do it based on the opinions of the key people in the team.
With the Continuous Delivery Indicators defined, it was clear to us that their introduction to an organization of 16 development teams could only be effective if sufficient support could be provided to the teams.
We developed an in-house proprietary tool that helps the development teams see the bottlenecks in their value stream easily.
The tool can visualize the team’s value stream using the information available in the team’s code repository, Continuous Integration build agents and the deployment environments of the deployment pipeline. We query all of that information from Azure DevOps services such as Azure Repos and Azure Pipelines. We overrode the default Azure DevOps setting of storing this information for 30 days in order to keep 365 days of history.
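For illustration, the Azure DevOps Builds REST API returns, among other fields, each build’s queue time, finish time, result and source branch. The sketch below parses a trimmed, hand-written sample payload in that shape; the field selection and the master-branch filter are our assumptions for the example:

```python
import json
from datetime import datetime

# Trimmed, hand-written sample in the shape of an Azure DevOps
# "list builds" response; a live query would look like
#   GET https://dev.azure.com/{org}/{project}/_apis/build/builds?api-version=6.0
# authenticated with a personal access token.
sample = json.loads("""
{"value": [
  {"queueTime": "2021-03-01T09:00:00Z", "finishTime": "2021-03-01T09:10:00Z",
   "result": "succeeded", "sourceBranch": "refs/heads/master"},
  {"queueTime": "2021-03-01T11:00:00Z", "finishTime": "2021-03-01T11:12:00Z",
   "result": "failed", "sourceBranch": "refs/heads/master"},
  {"queueTime": "2021-03-01T12:00:00Z", "finishTime": "2021-03-01T12:05:00Z",
   "result": "succeeded", "sourceBranch": "refs/pull/42/merge"}
]}
""")

def master_builds(payload):
    """Extract (queued, finished, succeeded) for master-branch builds,
    excluding Pull Request builds."""
    def parse(ts):
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
    return [
        (parse(b["queueTime"]), parse(b["finishTime"]),
         b["result"] == "succeeded")
        for b in payload["value"]
        if b["sourceBranch"] == "refs/heads/master"
    ]
```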
The goal for a software delivery team is to have a stable and fast Value Stream. Following this, the tool displays the value stream in two parts: the Value Stream Stability (on the left hand side) and the Value Stream Speed (on the right hand side).
In addition to the value stream visualization, the tool automatically performs the value stream analysis. On its own, it detects the biggest bottlenecks in the stability and speed of the value stream and displays them in red. This way, the teams can be focused on the bottlenecks immediately without the need to understand the wealth of other data available.
Moreover, automated suggestions for relieving the bottlenecks are provided. These help with the resolution of the biggest bottlenecks in the stability and speed of the team’s value stream.
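The detection logic of the in-house tool is not public; as a minimal sketch, bottleneck detection can be as simple as comparing each measured indicator against a target and flagging the worst offenders. The metric names and target values here are illustrative assumptions:

```python
# Assumed per-metric targets; metrics worse than target count as bottlenecks.
TARGETS = {
    "deployment_failure_rate": 0.10,    # fraction of failed deployments
    "deployment_recovery_time_h": 4.0,  # hours to recover from a failure
    "commit_lead_time_h": 24.0,         # hours from commit to master
    "build_interval_h": 8.0,            # hours between successful builds
}

def find_bottlenecks(measured, targets=TARGETS):
    """Return metrics exceeding their target, worst-first by the ratio of
    measured value to target (these would be shown in red in a tool)."""
    over = {
        name: measured[name] / target
        for name, target in targets.items()
        if measured.get(name, 0) > target
    }
    return sorted(over, key=over.get, reverse=True)
```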
Besides the automated suggestions to resolve the bottlenecks, we also want to connect the tool with the teams’ Slack channels. This way, the teams would receive at the end of the month a tool screenshot with an overview of the stability and speed of their value stream for the month. For more detailed information, a link back to the tool would be provided.
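The planned Slack integration could be built on Slack incoming webhooks, which accept a plain JSON message via HTTP POST. The team name and tool URL below are placeholders:

```python
import json

def monthly_summary_payload(team, stability_ok, speed_ok, tool_url):
    """Build a Slack incoming-webhook payload (a plain {"text": ...} message)
    summarizing a team's value stream for the month."""
    text = (
        f"Value stream summary for {team}: "
        f"stability {'OK' if stability_ok else 'bottlenecked'}, "
        f"speed {'OK' if speed_ok else 'bottlenecked'}. Details: {tool_url}"
    )
    return json.dumps({"text": text})

# Sending (not executed here) would be an HTTP POST of this payload,
# with a Content-Type of application/json, to the team's webhook URL.
```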
We will also run small workshops with all the teams to familiarize them with how the Continuous Delivery Indicators can be used for stability and speed goal setting and tracking.
We introduced the Continuous Delivery Indicators generally to an organization of 16 development teams working on “teamplay” – a global digital service from the healthcare domain by Siemens Healthineers (more about “teamplay” can be learned at Adopting Continuous Delivery at teamplay, Siemens Healthineers).
It required quite a bit of explanation as it induced a new way of looking at software development through the lens of a value stream analysis, which is not something done routinely in the software domain. It was about a new way of thinking about the deployment pipeline as a carrier, or assembly line, of a value stream to be optimized for stability and speed.
There was a lot of feedback from the teams related to data collection, to ensure the raw data was sound. For example:
Take into account only Master Branch builds, not Pull Request builds
Include Pull Request lead times specifically to understand whether there is a bottleneck in wait times for Pull Request reviews
In addition to accumulated Deployment Lead Time and Deployment Interval across all pipeline environments, provide these for each pipeline environment individually
By now, the Continuous Delivery Indicators have been rolled out in depth to a small selection of development teams. With the overview visible in the Continuous Delivery Indicators Tool, the teams were positively surprised to see, for the first time ever, their own value stream (code → build → deploy) in terms of stability and speed.
A key insight here is that a development team, while working, is not necessarily aware of how well the value flows through their deployment pipeline in terms of stability and speed. That is, the developers are not aware of how their ways of working influence the value flow.
The Continuous Delivery Indicators Tool provides an overview of the value flow and acts as a real-time feedback loop on the stability and speed of the team’s development process.
One team, once having seen their Deployment Stability Indicator, became aware of the frequent Deployment Failures and, at the same time, very fast Deployment Failure Recovery Times. The team was able to significantly reduce the Deployment Failures within the next few days. The awareness of the team was the key enabler of the value stream improvement here. The improvement itself was not difficult to implement.
Below is an example of how a team got an overview of the stability and speed of their value stream and then drilled down into the details of the identified bottlenecks.
A team’s overview of the stability and speed of their value stream. The biggest bottlenecks are displayed in red:
The identified bottlenecks in the team’s value stream are:
Deployment Failure Rate
Deployment Failure Recovery Time (median)
Master Branch Commit Lead Time (median and standard deviation)
A drilldown into the Deployment Failure Rate and Deployment Failure Recovery Time bottlenecks is displayed below:
On the Deployment Failure Rate trend, we can see that from September onwards, the failure rate has been steadily increasing, reaching nearly 100% recently. Likewise, the Deployment Failure Recovery Time has significantly increased recently. This means that the deployment pipeline is staying mostly red as of late. This is the biggest bottleneck to tackle by the team to enable the value flow towards production environments on the team pipeline.
A drilldown into the Master Branch Commit Lead Time bottleneck is displayed below:
On the Master Branch Commit Lead Time trend, we can see improvements in recent days compared to the past. So, the goal here would be to keep the status quo.
A drilldown into the Build Interval bottleneck is displayed below:
On the Build Interval trend, we can also see good improvements in recent weeks compared to the past. Also here, the goal would be to keep the status quo.
Our teams need more experience with the Continuous Delivery Indicators in order to consistently use the data at hand as an input for prioritization. The data comes in different forms:
Most / in-between / least stable pipelines in terms of failure rates and failure recovery times
Fastest / in-between / slowest pipelines in terms of:
Lead times between pipeline environments
Intervals between respective activities in the environments
Now that the data is available, it needs to be taken into account by the development teams, and especially by product owners, to make the best prioritization decisions. The prioritization trade-offs are:
Invest in features to increase product effectiveness and / or
Invest in development efficiency based on CD Indicators Data and / or
Invest in service reliability
It might be possible to use machine learning to predict the stability and speed in a pipeline environment based on the stability and speed data of the preceding environments on the deployment pipeline. This is something we may explore in the future.
Additionally, we might automatically block deployments into some pipeline environments based on the CD Indicators Data.
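Such a gate could, as a hypothetical sketch, be a simple predicate over the CD Indicators of the preceding environment; the thresholds are illustrative assumptions:

```python
def allow_promotion(failure_rate, recovery_time_h,
                    max_failure_rate=0.2, max_recovery_h=8.0):
    """Allow a deployment into the next pipeline environment only if the
    preceding environment's Deployment Stability is within (assumed) limits."""
    return failure_rate <= max_failure_rate and recovery_time_h <= max_recovery_h
```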
In summary, if a team optimizes their development process using the Continuous Delivery Indicators, the team is able to gradually optimize their ways of working in a data-driven way. Over time, the team can reach a state where they demonstrably build features efficiently, without big bottlenecks in their value stream.
Finally, the Continuous Delivery Indicators offer a data-driven approach to the continuous improvement of common software delivery processes. They help depoliticize decision making in Development and make it transparent.
This article is part of the Data-Driven Decision Making for Software Product Delivery Organizations Series. The Series provides an overview of how the three main activities in the software delivery – Product Management, Development and Operations – can be supported by data-driven decision making. A previous article shed light on data-driven decision making in Product Management. Future articles will shed light on data-driven decision making in Operations and combinations of data-driven decision making in Product Management, Development and Operations.
Many people contributed to the thinking behind this article. Kiran Kumar Gollapelly, Krishna Chaithanya Pomar and Bhadri Narayanan ARR were instrumental to the implementation of the Continuous Delivery Indicators Tool, initially funded by Frances Paulisch. Thanks go to the entire team at “teamplay” with Siemens Healthineers for introducing and adopting the methods from this article.