Continuous delivery orchestration: How to choose the best suite for your pipeline
Continuous delivery of software is hard. No single approach guarantees that you will be able to compile, integrate, build, deliver, and (if necessary) deploy an application with the push of a button, or even several buttons. Until recently, most of these steps were manual. Yet to deliver software rapidly and seamlessly for the business, you have to get a variety of tools working in concert.
How do teams accomplish continuous delivery? It’s more complex than it sounds. Delivering and deploying software used to take days or even weeks, rather than hours or minutes. Continuous delivery requires a greater emphasis on orchestration: automating the processes that deliver software updates.
No single solution offers the full toolchain needed to create a continuous delivery pipeline. Many have pieces of the puzzle, and someday they may be pulled into a coherent whole. But today teams have to search for the pieces that work for them. The best candidates are typically close to best-of-breed, integrate with multiple other tools, and offer an API that supports easy connections with still more.
The way to look at the DevOps toolchain is through the steps involved in a prototypical DevOps process. Here’s a review of tools in that context.
Understand the DevOps process
DevOps delivery processes typically begin at the user story stage, with an agile planning and tracking system. Developers code to the user story, run their code on their own system with their own builds, then commit that code into the repository.
Once committed, the code is automatically pulled and built into the product, then integrated with other services as needed. At this point, the build will usually undergo a set of smoke tests: predefined, automated tests that show the build isn’t obviously broken. It is then delivered, sometimes for further testing and sometimes directly to the production application, depending on the needs of the product and its users.
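The commit-to-delivery flow described above can be sketched as a series of ordered stages that stop at the first failure. This is a minimal illustration, not any particular product’s implementation; the stage names and bodies are placeholders for real build, integration, and test tooling.

```python
# Minimal sketch of a commit-to-delivery pipeline as ordered stages.
# Each stage returns True on success; the runner stops at the first failure.

def run_pipeline(stages):
    """Run named stages in order; return the first failing stage's name, or None."""
    for name, stage in stages:
        if not stage():
            return name
    return None

# Placeholder stages standing in for real build/integration/test steps.
stages = [
    ("build", lambda: True),        # compile and package the code
    ("integrate", lambda: True),    # pull in dependent services
    ("smoke-test", lambda: True),   # quick checks that nothing is obviously broken
    ("deliver", lambda: True),      # hand off for further testing or production
]

print(run_pipeline(stages))  # → None (all stages passed)
```

A real orchestration server does far more (parallelism, retries, artifact tracking), but the stop-on-first-failure ordering is the core idea.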
Most teams use an agile approach to planning for DevOps releases. Atlassian’s JIRA is the most popular tool here, with the ability to easily define multiple projects, stories for those projects, defects against stories, and much more. JIRA has become such a de facto standard that other tools, such as test and artifact management, code control, and test automation, have built dedicated and robust integrations with it.
But JIRA, while perhaps the best known, isn’t the only tool available. Other agile management tools, including VersionOne and Pivotal Tracker, are also popular. DevOps teams use these planning tools to define how the software interacts with users and which features are needed at various stages of product development. In many cases they are also used to record and track defects during testing. They enable teams to manage their backlog of tasks and the assignment of tasks to individual contributors.
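Those dedicated integrations matter because other tools can file and update issues programmatically through JIRA’s REST API. A hedged sketch, assuming JIRA’s standard issue-creation endpoint (`/rest/api/2/issue`); the project key, summary, and issue type below are hypothetical placeholders:

```python
# Sketch of the JSON body a tool would POST to JIRA's /rest/api/2/issue
# endpoint to file a story or defect. All field values are placeholders.
import json

def issue_payload(project_key, summary, issue_type="Story"):
    """Build the request body JIRA expects when creating an issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": issue_type},
        }
    }

body = json.dumps(issue_payload("DEV", "As a user, I can reset my password"))
print(body)
```

A test automation tool that can emit this payload can open a defect the moment a check fails, which is exactly the kind of integration that has grown up around JIRA.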
Development teams can take several different directions in developing and building their software. Most use versions of Microsoft Visual Studio (for Windows and web applications) or Eclipse (for general-purpose and web development). Beyond those, a wide variety of editors and compilers can make turning source code into executable code easy and automated.
What you are building also plays into tool orchestration. An increasing number of teams are opting to build applications as collections of microservices: small, relatively simple services that can be combined, possibly into several different applications.
And don’t forget about the “make” build automation tool and its variants. While the fundamental concept is over four decades old, makefiles still represent a significant way of building software and continue to be valuable in automated builds.
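A minimal makefile shows why the model endures: you declare what each artifact depends on, and make rebuilds only what changed. The file and target names here are hypothetical:

```make
# Hypothetical makefile for a small C service; names are placeholders.
CC     = cc
CFLAGS = -Wall -O2
OBJS   = main.o service.o

app: $(OBJS)
	$(CC) $(CFLAGS) -o app $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c $<

.PHONY: clean
clean:
	rm -f app $(OBJS)
```

Because the dependency graph is explicit, the same file drives both a developer’s local build and an automated one on a build server.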
Docker is frequently the delivery vehicle for microservices and other applications. As a cross-platform container technology, it encapsulates runnable code into discrete elements for better testing and easier delivery and enhancement.
But Docker is not a silver bullet. Because it encapsulates applications and services in a container, it improves the likelihood that they will continue to work even as operating systems and other third-party software change. However, that isolation also means teams can be lax about updating critical open-source components inside the container, especially where security is concerned.
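As a sketch of how that encapsulation works, here is a minimal Dockerfile for a hypothetical Python microservice. The pinned base image illustrates both the benefit and the risk noted above: the environment stays frozen until someone deliberately updates it.

```dockerfile
# Hypothetical Dockerfile; the service name and dependencies are placeholders.
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```

Everything the service needs rides along in the image, which is what makes containers such a convenient hand-off point between build and deployment.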
Continuous integration is one of the key elements of modern software delivery. Much software today is a collection of services, some internally developed but many from third parties. These may include advertising, retail-partner, and data-feed services.
The open-source package Jenkins is a popular integration server. Jenkins serves as the glue that pulls code from repositories, open-source projects, third-party sites, and other locations into a coherent package of software. It helps manage the versioning of these components and provides confidence that the right versions are part of the final product.
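Jenkins pipelines can live in the repository alongside the code they build. A minimal declarative Jenkinsfile sketch, assuming hypothetical make targets for each stage:

```groovy
// Hypothetical declarative Jenkinsfile; the shell commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Checkout')   { steps { checkout scm } }
        stage('Build')      { steps { sh 'make' } }
        stage('Smoke test') { steps { sh 'make smoke-test' } }
        stage('Deliver')    { steps { sh 'make package' } }
    }
}
```

Keeping the pipeline definition in version control means the build process itself is reviewed and versioned like any other code.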
Integration is one of the true challenges of DevOps, and of building reliable software in general. Most software today includes an eclectic combination of brand-new code, code from older applications, code developed under contract, and a dozen or more open-source components and frameworks. Keeping all of this up to date and working together is a challenge for any team, and one that requires tools to get it right.
In DevOps, testing takes several forms. In many cases, testing involves unit testing at the individual developer level, and smoke testing once the product is built and integrated. After delivery, there should be a push for exploratory testing before deployment.
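Mechanically, a developer-level unit test and a post-build smoke test often look alike; the difference is scope and timing. A minimal sketch using Python’s unittest module, with a hypothetical discount() function standing in for real application code:

```python
# Sketch of automated unit/smoke tests; discount() is a hypothetical
# stand-in for the application code under test.
import unittest

def discount(price, percent):
    """Apply a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class SmokeTest(unittest.TestCase):
    # Smoke tests only confirm the build isn't obviously broken.
    def test_basic_discount(self):
        self.assertEqual(discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(discount(50.0, 0), 50.0)
```

An integration server can run such a suite with `python -m unittest` after every build and fail the pipeline on the first red test.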
There can also be testing in production. At DevOps Days Charlotte in early 2017, speaker James Huston exhorted DevOps professionals to be cognizant of the importance of alerting and monitoring. One technique is synthetic testing in production, in which organizations periodically send automated requests to their web applications to check performance and availability.
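A synthetic check can be as simple as probing a health endpoint and flagging failures or slow responses. This is a hedged sketch using only the Python standard library; the URL and latency thresholds are placeholders a team would tune:

```python
# Sketch of a synthetic production check: probe an endpoint and flag
# failures or slow responses. URL and thresholds are placeholders.
import time
import urllib.error
import urllib.request

def check_endpoint(url, timeout=5.0, max_latency=2.0):
    """Return (healthy, latency_seconds) for one synthetic probe."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        ok = False
    latency = time.monotonic() - start
    return ok and latency <= max_latency, latency
```

Run on a schedule and wired to an alerting system, a probe like this catches outages that in-pipeline tests cannot, because it exercises the application its users actually see.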
In fact, testing can occur at any stage after the product is built. Unit testing and integration testing should occur before the product is formally delivered to testing. Automated tests can run wherever a check can be scripted, and exploratory testing should occur whenever humans can exercise the product. A wide variety of test management tools support all of this, both general-purpose ones and others focused on agile projects.
Team communication isn’t generally thought of as part of the DevOps toolchain, but seamless communication among disparate team members is a fundamental hallmark of DevOps. Both Slack and HipChat have become de facto team communication tools for DevOps, with good reason. Both provide real-time chat services to teams working together on the same project.
These types of tools are often grouped under the label “ChatOps,” but they do more than simple chat. They allow team members to easily exchange files and receive notifications from systems, software, and other devices.
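Those system notifications are usually just HTTP posts. A hedged sketch of sending a build notification to a Slack incoming webhook, which expects a JSON body with a "text" field; the webhook URL would be a placeholder supplied by the team’s Slack workspace:

```python
# Sketch of a ChatOps notification via a Slack incoming webhook.
# The webhook URL passed to notify() is a team-specific placeholder.
import json
import urllib.request

def build_notification(job, status):
    """Build the JSON body for a chat notification."""
    return json.dumps({"text": "Build %s: %s" % (job, status)}).encode("utf-8")

def notify(webhook_url, job, status):
    """POST the notification; returns the HTTP status code."""
    req = urllib.request.Request(
        webhook_url,
        data=build_notification(job, status),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Calling notify() from the end of a pipeline run is how "the build broke" reaches the whole team in seconds rather than through the next status meeting.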
At the end of the day, you can build the best toolchain imaginable and still not be successful because you haven’t addressed team communication and cohesion. Spending time on the toolchain, and getting it right for your team and applications, can mean the difference between a seamless process and software that requires continuous tweaking.
But not getting communication right is a bigger issue. As the Agile Manifesto notes: “Individuals and interactions over processes and tools.” While some may consider that hyperbole, there is more than a grain of truth in it.
Work together to make it happen
Orchestration involves both tools and processes. Many DevOps professionals are focused on getting both the toolset and the process right. It’s only when the two work together that teams can realize the potential of DevOps.