What’s Next in DevOps?
The shift from project to product continues, with an eye towards long-term maintainability and adaptability.
The shift to small, semi-autonomous teams continues, as does the market for platforms that integrate and aggregate information from disparate tools.
Data science and analytics play a growing role in the development lifecycle.
Concepts such as Value Stream Mapping that look at the entire development lifecycle become increasingly important, and pre-built tools to support them become more common.
Increasingly specialized vendors and SI partner offerings arise to assist with DevOps needs.
The DevOps movement continues to grow in terms of its impact and influence in the IT ecosystem. There’s no question that the DevOps movement is maturing to the point that it has a power and a voice comparable to that of other movements such as cloud and agile. One interesting characteristic of the DevOps movement is that it arose from a wish within IT to optimize and harmonize the activities of different teams; but it’s matured to take into consideration the needs of end users and a holistic view of how organizations need to operate in order to be both flexible and efficient at scale.
Business leaders may complain that IT workers don’t appreciate the underlying business context in which they’re operating; and historically the IT department has been seen as a cost centre rather than a value centre for organisations. The net result is that IT has struggled to justify infrastructure and process improvements that would increase costs, unless they were specifically tied to a new business-facing initiative. But IT is becoming increasingly central to business operations. Done well, the flexibility of software allows organisations to adapt much more quickly than was possible with pre-digital infrastructure, making effective IT teams the biggest enablers of value.
Project to Product
Digital transformation has become an incredibly popular buzzword in recent decades, but it is essential for companies to recognize that digital transformation is not a one-time activity. The work done by Mik Kersten in his book ‘Project to Product’ highlights the fallacy of thinking of IT improvements as one-time projects. Digging into the history of technological revolutions and witnessing the most advanced manufacturing processes of today helped Kersten recognize that shifting to a product-focused mindset is a far more beneficial way for companies to frame IT.
What does it mean to approach internal IT initiatives as products? One aspect is recognising that IT systems entail long-term maintenance responsibilities, and are not something that should simply be created and turned over to an unrelated team. Another aspect is re-thinking the way we finance projects. The annual budgeting cycle is optimised for relatively stable systems, where change can be anticipated a year in advance. Teams vie for a share of the budget, and ensure that they never underspend their allocation, lest they receive less the following year. The requirement to get up-front funding tends to promote large, expensive projects, while budget for subsequent operation is often not factored in. The result is excessively complex IT systems that are maintained by teams other than the ones who built them and that often don’t meet the needs of end users. Agility needs to extend into every aspect of a business, from how teams are structured, to how work is financed, to how and when an organization decides to make improvements.
An example of a product-mindset is to have the team in charge of a CPQ (Configure – Price – Quote) system be responsible for both building and maintaining that system over time – the “you build it, you own it” mantra. The power of this mindset is in promoting stability and flexibility in what is built. When a team has long-term responsibility for building and maintaining the product, and for getting feedback on how it’s used in production, they greatly increase the chances of this product bringing lasting value to the organisation. Feedback on how systems are used in production is the essential input for ongoing improvement. It’s for this reason that in the DevOps community the assembly line motif of software development has given way to a circular or infinity loop motif for software development. The loop indicates that ongoing feedback from end-users is as important as the original specifications.
Product companies and product teams are by nature more stable than teams who just build and move on. Project teams work great in construction and civil engineering, where infrastructure rarely needs to change once built. But software allows for an utterly different level of agility. It can be replatformed or refactored without end users being aware; its design or functionality can be changed at the last minute before release to users; and different versions can be exposed to different users to solicit feedback. This agility is why the knowledge that went into creating a software product must be retained in the team if at all possible. There is a “cone of uncertainty” where at the outset of building something we know the minimum that we will ever know about it. As we proceed with development and a system comes under use, we necessarily learn things that could never have been predicted even by the most intelligent and thoughtful planning team.
Related to this topic of “project to product” is the research that has been done around team cognitive load, and the importance of structuring teams in a way that optimises for communication and trust. There are both mathematical and empirical bases for saying that smaller teams that can act independently will perform more effectively than large teams or teams with many dependencies. Amazon’s two-pizza team size was not created to simplify their catering needs. It’s a wise structure that maximizes the power of teamwork while minimising the overhead required for constant coordination. Conway’s Law dictates that your architecture will come to reflect your team structure. Given that an optimal team size is constrained to five to ten people, modular architecture is essential if you want to be able to maintain your velocity over time. The idea is to give teams relative autonomy over particular systems. This feeds into practices like continuous integration, which strives to reduce the cost of integrating work done in parallel. These kinds of DevOps best practices began with research and experimentation in the field, moved to being DevOps community lore, became substantiated through the research of teams like DevOps Research and Assessment, and will gradually become the basis for dedicated tools and entrenched ways of working.
Much of the tool growth that we are seeing today includes integration and Value Stream Management platforms such as Copado, Tasktop, Plutora, XebiaLabs, GitLab and CloudBees, which are striving to bring information from many disparate systems together into one tool. I expect this integration and aggregation trend to continue as a practical way of dealing with the diverse reality of enterprise systems. In fact, teams benefit greatly from being able to choose their own tools. In their book ‘Team Topologies’, Matthew Skelton and Manuel Pais refer to standardization as one type of monolith, “monolithic thinking”, that can interfere with maximum effectiveness. If you are striving to avoid monolithic thinking, but nevertheless need an integrated view of your systems and processes, data integrations are your only option.
LaunchDarkly showed that there is a market for products that facilitate particular DevOps practices, in this case separating deployments from releases. That practice is integral to activities like A/B testing and canary deployments, which have become recognised as powerful ways to reduce risk and enable experimentation. I expect more tools to appear that enable DevOps practices which would otherwise require custom coding.
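As a sketch of what separating deployment from release can look like in practice, the snippet below implements a minimal percentage-based feature flag. The flag name, rollout percentage and checkout functions are hypothetical illustrations, not any vendor’s API; a product like LaunchDarkly manages flags centrally and adds targeting rules, auditing and kill switches on top of this basic idea.

```python
import hashlib

def is_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing (flag_name, user_id) keeps a given user's experience stable
    across requests, so a canary cohort doesn't flicker between variants.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number in 0..99
    return bucket < rollout_percent

# The new checkout flow is deployed to production but "dark": it only
# runs for users whose bucket falls inside the rollout percentage.
def checkout(user_id: str) -> str:
    if is_enabled("new-checkout", user_id, rollout_percent=10):
        return "new checkout flow"   # canary cohort (~10% of users)
    return "old checkout flow"       # everyone else
```

Releasing then becomes a configuration change (raising the percentage) rather than a deployment, and rolling back is instant.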
Although this is not a new trend, definitely expect to see services like Amazon Web Services (AWS), Azure and Google Cloud Platform (GCP) continue to roll out new service offerings, compete with one another, and gain market share from legacy infrastructure. Azure and GCP are still playing catch-up in terms of the variety of products that they offer as well as the market share that they control, but expect them to be following close on the heels of AWS. Offerings such as Kubernetes-as-a-service help to reduce the complexity of managing underlying infrastructure. Expect other kinds of complex systems to continue to be bundled into turnkey applications.
The State of DevOps Reports from Puppet and Google have set the standard for using data science to evaluate the effectiveness of development practices. Expect to see more tools begin to integrate analytics and data science into their offerings. And expect to see more teams requesting and making use of such capabilities to facilitate experimentation and to validate the results of those experiments.
Business functions such as marketing have been using A/B testing and quantifying their effectiveness for many years. There are a huge number of marketing-oriented tools that have been highly tuned to give metrics on click-through rate, adoption, time on site, return on investment, etc. The most long-standing of these is Google Analytics, but marketers have a vast range of tools to choose from. Ironically, IT teams are late to the party in terms of practices such as A/B testing and ensuring that applications are being adopted by users. It is often left to business teams to track adoption of an application created by an IT department. But internal IT departments are the ones who have the greatest opportunity to make improvements to meet the needs of users.
Expect to see tools for usage monitoring and even embedded feedback and user satisfaction surveys becoming more frequently used by internal development teams. These practices help close the gap between end users and development teams in the same way that marketing teams have been striving to close the gap between companies and their customers for many years. This kind of direct feedback is exceptionally powerful: routing every request for user feedback through a business analyst introduces inefficiency, and selective feedback initiatives also suffer from sample bias.
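As a minimal illustration of the kind of quantification marketing teams take for granted, the sketch below evaluates an A/B test with a standard two-proportion z-test, using only the standard library. The conversion counts are made up for the example; dedicated analytics tools handle this, along with sequential testing and segmentation, out of the box.

```python
from math import erf, sqrt

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate
    between variant A and variant B (two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# e.g. 120/1000 conversions on the old flow vs 150/1000 on the new one
p = ab_test_p_value(120, 1000, 150, 1000)
print(f"p-value: {p:.3f}")  # the smaller the p-value, the stronger the
                            # evidence that the change made a difference
```

The point is not the statistics itself but that development teams can apply the same discipline to their own releases that marketers apply to campaigns.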
Lean and Agile
DevOps is aimed at “actualizing agile” by ensuring that teams have the technical capabilities to be truly agile, beyond just shortening their planning and work cadence. Importantly, DevOps also has Lean as part of its pedigree. This means that there is a focus on the end-to-end lifecycle, flow optimisation, and thinking of improvement in terms of removing waste as opposed to just adding capacity. There are a huge number of organisations and teams that are still just taking their first steps in this process. For them, although the terminology and concepts may seem overwhelming at first, they benefit from a wide range of well-developed options to suit their development lifecycle needs. I anticipate that many software tools will be optimizing for ease of use, and continuing to compete on usability and the appearance of the UI.
Whereas most early DevOps initiatives were strictly script and configuration file based, more recent offerings help to visualise processes and dependencies in a way that is easily digested by a broader segment of the organization. Especially as companies try to capture the attention and wallet share of CIOs, CTOs and other organizational decision-makers, it becomes ever more important that a tool’s actual UI lives up to the way it is visualised and marketed. The most beautiful and graphical tools tend to sell themselves (to both business and technical users). Value Stream Mapping and delivery pipelines are particularly popular and effective ways to visualize the delivery process, while also providing day-to-day access to metrics and monitoring.
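To make Value Stream metrics concrete, the sketch below computes lead time for changes (commit to deploy), one of the key metrics popularised by the DevOps Research and Assessment team, from a few hypothetical change records. Real Value Stream Management platforms derive the same figures automatically by integrating with version control and deployment tooling.

```python
from datetime import datetime
from statistics import median

# Hypothetical change records: when each change was committed and deployed.
changes = [
    {"committed": "2023-05-01T09:00", "deployed": "2023-05-01T17:00"},
    {"committed": "2023-05-02T10:00", "deployed": "2023-05-04T10:00"},
    {"committed": "2023-05-03T08:00", "deployed": "2023-05-03T12:00"},
]

def lead_time_hours(change: dict) -> float:
    """Hours from commit to deployment for a single change."""
    committed = datetime.fromisoformat(change["committed"])
    deployed = datetime.fromisoformat(change["deployed"])
    return (deployed - committed).total_seconds() / 3600

lead_times = [lead_time_hours(c) for c in changes]
print(f"median lead time: {median(lead_times):.1f}h")  # median lead time: 8.0h
```

A pipeline dashboard is essentially this calculation, plus deployment frequency and failure rates, rendered over live data rather than a hand-built list.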
Finally, it’s clear that the market and demand for practices such as DevOps exceeds the availability of skilled people to implement them. Thus system integrators will continue to have a role in helping teams to ramp up on these technologies and processes, and in some cases to even help manage the development lifecycle for organizations. Research in the State of DevOps Reports indicates that functional outsourcing is predictive of low performance, so organizations should be very careful about delegating one particular activity such as testing or deployment to an outsourced contractor. Unless done carefully, functional outsourcing goes against the spirit of DevOps, which focuses on bringing all of the relevant stakeholders (from Dev to Ops) together with aligned goals, shared visibility, and shared technologies.
Consulting partners are an extremely powerful way of getting help in the absence of in-house talent. But they necessarily introduce organisational boundaries unless consultants are deeply embedded in the organisation, working long-term alongside full-time employees. Rely on DevOps consultants as enablers to help you to adopt technologies, improve your processes, design your metrics and so forth. But be careful about outsourcing a particular part of your process (such as testing) that is on the critical path to production.
It may work better to give the entire development process for a particular application to a consulting company. But consider the long-term lifecycle of this and how you want that company to maintain the application over time. In the spirit of project to product, there is risk in having one team build an application and a separate team maintain it. Think of the knowledge that you are building within the team as a critical part of your architecture. Just as it’s foolhardy to rip out a substantial chunk of your architecture just after go-live, so too it’s unwise to rip out a substantial chunk of knowledge from your team just after go-live.
In summary, the DevOps movement continues to grow, flourish, and gain influence in the IT world and the business world at large. As our organisations become increasingly digital, the agility of our IT systems becomes critical to the life and health of our companies. DevOps as a movement blends together psychology, sociology, technical management, automation, security, and practices such as lean and agile to optimise an organisation’s ability to thrive in a digital world. The consulting, tooling, infrastructure, and training ecosystem to support this is still evolving. The market for DevOps is in fact the market for digital success, thus expect continued growth through 2020 and beyond.