How the World Measures DevOps Quality
With continuous everything, knowing whether each new release will ultimately enhance or undermine the overall user experience is essential. Yet most of today's go/no-go decisions still hinge on quality metrics designed for a different era.
Every other aspect of application delivery has been scrutinized and optimized for DevOps. Why not re-examine quality metrics as well?
Are classic metrics like the number of automated tests, test case coverage and pass/fail rate still important in the context of DevOps, where the goal is immediate insight into whether a given release candidate has an acceptable level of risk? What other metrics can help us ensure the steady stream of updates is truly fit for production?
To provide the DevOps community with an objective perspective on which quality metrics matter most for DevOps success, Tricentis commissioned Forrester to research the topic. The results are published in the 55-page report, Forrester Research on DevOps Quality Metrics that Matter: 75 Common Metrics—Ranked by Industry Experts. The report takes a deep dive into the global findings, complete with heat maps, quadrant mappings and some fun lists such as "Most Overrated," "Hidden Gems" and "Top DevOps Differentiators."
One of the most common questions we received after publishing the report was: “How do the results vary across regions?” In response, we performed some additional regional analysis—and I’d like to share those results here.
To start, let’s take a look at the global top 20. The following metrics were ranked as the most valuable by the DevOps experts who measure them (across all regions).
Europe DevOps Quality Metrics Trends
Looking specifically at Europe, the top 20 changes as follows:
Interesting trends in this region:
- There is a greater commitment to measuring quality metrics. European respondents reported a higher level of DevOps quality metrics measurement across the board. For almost all metrics, the usage rate was at least 6% higher than the global average. For metrics related to time, coverage, risk, effectiveness and efficiency, the usage rate was over 14% higher. This speaks to European organizations’ commitment to scrutinizing and continuously optimizing their quality processes—especially in terms of time and resource utilization.
- Risk and coverage metrics are valued more than in the global average. European respondents also ranked risk and coverage quality metrics a surprising 21% higher than the global average. This could be related to the fact that the respondents from this region came primarily from the financial services and insurance sector, with healthcare and government close behind. In such highly regulated industries, measuring and mitigating risk is certainly a core concern. This finding could also indicate that European organizations place a greater emphasis on protecting the corporate brand.
- Test data preparation time seems to be a greater concern. European respondents were more likely to measure (+18%) and highly value (+23%) time spent preparing test data than their global peers. Given the restrictions GDPR placed on test data as of May 2018, it seems likely that European organizations have significantly changed their test data management processes (e.g., shifting to data masking and more synthetic test data generation), and are closely monitoring how the changes are impacting their overall efficiency.
Asia Pacific DevOps Quality Metrics Trends
Now, let’s shift focus to Asia Pacific. The following 20 metrics were ranked as the most valuable by the Asia Pacific DevOps experts who measure them:
Notable trends in this region:
- End-to-end testing metrics are valued—and measured—more than in the global average. Although Asia Pacific respondents measured fewer build and functional validation metrics than the global average, they measured (and valued) end-to-end testing metrics much more than their peers around the world. For example, the percentage of automated end-to-end tests was measured by 47% of organizations (versus 36% globally) and highly valued by 84% (versus 70% globally). Risk coverage measurement was significantly higher as well; it was measured by 49% (versus 34%) and highly valued by 71% (versus 59%). This speaks to the region's focus on digital transformation and commitment to delivering exceptional user experiences.
- API testing metrics were also valued—and measured—more than in the global average. Asia Pacific respondents also measured and valued API testing quality metrics more than the global average. Overall, API quality metrics were measured by 16% more organizations in this region than globally. The highest valued API quality metrics were API test coverage (63% versus 39% globally) and API risk coverage (79% versus 62% globally). This prioritization of API testing is likely a side effect of the regional trend towards API-driven open banking (the majority of respondents indicated they were in the financial services and insurance sector).
- There is a greater leader/laggard quality metrics measurement gap. Part of the study involved classifying respondents as either DevOps leaders or DevOps laggards, based on their responses to various questions about the maturity of their processes. Although the percentage of DevOps leaders in the region was lower than the global average (18% versus 26%), the DevOps leaders from Asia Pacific generally measured quality metrics at a rate comparable to their global peers. However, the DevOps laggards in Asia Pacific generally measured quality metrics at a much lower rate than their global peers. This suggests that the select set of firms that have truly prioritized DevOps initiatives have made great strides, and that the laggards have a lot of catching up to do in order to remain competitive.