Grow rapidly into a continuous delivery pipeline

Source: searchitoperations.techtarget.com

If the only thing colourful about your deployments is the language, evaluate canary and blue/green environments.

One application engineering team found that continuous delivery (CD) makes rapid-fire updates to a high volume of microservices orderly and manageable. But to truly reap the benefits, teams must find ways to minimize the blast radius of live code and to catch and roll back anything that mars the user experience (UX).

Ibotta, a Denver-based mobile rewards app provider, adopted a continuous delivery pipeline as it moved to a microservices-based app architecture, and it plans to advance to continuous deployment to move code straight to production for even faster releases. When Ibotta ran a solely monolithic application, team members deployed updates via a bunch of scripts. That method couldn’t scale with the migration to microservices — as many as 45 code updates in a day — which drove the adoption of a continuous delivery pipeline, said Scott Bassin, engineering director at Ibotta. That old path to production worked for the company’s monolithic application but wasn’t sustainable in the face of hundreds or thousands of microservices, he said.

In IT organizations, developers’ application delivery decisions shape the approach to infrastructure operations, whether in integrated engineering teams as at Ibotta or in more traditional enterprise setups.

Continuous pipelines rely on orchestration

It’s not enough to simply automate where an admin once clicked through by hand. Continuous delivery and deployment interconnect disparate tools used to provision and manage production systems.

“Often, you’re walking into a setup of tools that are half-tied into each other with funky scripts and such,” said Brandon Carroll, director of transformation, DevOps and cloud services at TEKsystems Inc., an IT services provider that works primarily with large enterprises. To create a continuous pipeline is less about individual tools and more about the orchestration of workflow, he said.
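As a rough illustration of that workflow orchestration, rather than any particular vendor's integration, the sketch below chains provisioning, deployment and verification behind one common interface and halts at the first failure instead of letting half-tied scripts run on. The stage commands are placeholders.

```python
# Minimal pipeline orchestration sketch: each stage wraps an existing tool
# behind a common interface, and any failure stops everything downstream.
# The commands are placeholders, not a specific product's integration.
import subprocess
import sys

STAGES = [
    ("provision", ["echo", "provision infrastructure (placeholder)"]),
    ("deploy",    ["echo", "push build artifact to servers (placeholder)"]),
    ("verify",    ["echo", "query monitoring for error rates (placeholder)"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- stage: {name}")
        if subprocess.run(cmd).returncode != 0:
            # One broken hand-off halts the pipeline rather than deploying blind.
            sys.exit(f"stage '{name}' failed; aborting pipeline")
    print("pipeline complete")

if __name__ == "__main__":
    run_pipeline()
```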

Ibotta has continuous integration (CI) in place, through which developers create and validate code with quick testing feedback and version control. The team wanted to extend into CD without a complex product setup and with out-of-the-box integration for the tools it already uses for validation, logging and application performance monitoring. After evaluating other options, such as Jenkins and AWS CodeDeploy, Ibotta implemented a CD engine called Harness and deployed it as a service.

“We see CI and CD as two different worlds,” said Steve Burton, DevOps evangelist at Harness. The pipeline engine, whether gated for CD or open for continuous deployment, must push code across infrastructures, verify deployments, roll back versions, manage secrets, provide auditability and handle other “unsexy stuff,” he said. Microservices migration is one of the prevalent customer use cases for Harness, because constant application updates to these independent pieces of code can expose brittle areas in a homegrown continuous delivery pipeline, such as poor connections that fail to deliver code in the proper format from one tool to the next.

Create a safe deployment environment

To continuously deploy to live users, organizations must consider the quality of the code and visibility into each update’s effects.

Testing should be part of a CI/CD strategy, but test environments are never an exact replica of production. “You can’t replicate that scale, and you can’t put customer data into a [traditional] staging environment,” said James Freeman, head of professional services at Quru, a consultancy focused on open source technologies. Code that tests fine and passes to production can still go live and fall over. “You’ve got to put good process behind deployments,” Freeman said in a presentation at AnsibleFest 2018 in Austin, Texas.

Ibotta uses blue/green deployment to handle the multitude of microservices updates per day. Blue and green setups mirror each other and trade off as staging and production environments. The team can quickly revert to a previous version of code without creating a bottleneck. The blue/green changeover currently serves as a gate between development/test and production. As the microservices count at Ibotta grows, Bassin plans to set up the CI pipeline to initiate deployment to production automatically via Harness.
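As a toy sketch of the mechanism, and not Ibotta's actual setup, a blue/green changeover amounts to deploying to the idle environment, checking it, and flipping a routing pointer; rolling back is the same flip in reverse. The router object and health check below are illustrative assumptions.

```python
# Toy blue/green changeover: two mirrored environments, a pointer that decides
# which one is live, and rollback by pointing back. Illustrative only.
from dataclasses import dataclass

@dataclass
class Router:
    live: str  # environment currently receiving traffic: "blue" or "green"

def healthy(env: str) -> bool:
    # Placeholder: in practice this would run smoke tests or query monitoring.
    return True

def deploy(router: Router, new_version: str) -> None:
    idle = "green" if router.live == "blue" else "blue"
    print(f"deploying {new_version} to idle environment: {idle}")
    if healthy(idle):
        previous, router.live = router.live, idle   # all traffic moves at once
        print(f"traffic switched from {previous} to {idle}")
    else:
        print(f"{idle} failed checks; {router.live} keeps serving traffic")

router = Router(live="blue")
deploy(router, "service-v42")   # rolling back later is just another switch
```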

Blue/green deployment is an effective method to quickly deploy and roll back production changes. But it is resource-intensive, and changes hit all users at once, Carroll said. It also requires tight configuration management so the two setups remain the same. “It’s reliable and great for back-end services,” he said. Canary releases are another option for a continuous deployment pipeline, well-suited for front-end UX features.

Canary deployment is so named for the early warning system of a canary in the coal mine. A small percentage of users experience the new code while the engineers monitor its performance and functionality before the entire deployment occurs. Ibotta plans to switch from blue/green deployment to canary releases eventually for its microservices. Canary deployment decreases the blast radius of problematic updates, Bassin explained.
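A minimal sketch of the idea, with invented traffic steps and an invented error threshold rather than anything Ibotta or Harness prescribes: shift a small share of traffic to the new version, watch a monitoring signal, then either widen the rollout or roll back.

```python
# Canary rollout sketch: expose the new version to a growing share of users
# and abort if the observed error rate blows the budget. Numbers are invented.
import random

CANARY_STEPS = [0.05, 0.25, 1.0]   # fraction of users on the new version
ERROR_BUDGET = 0.02                # roll back if error rate exceeds 2%

def observed_error_rate(traffic_share: float) -> float:
    # Placeholder for a real monitoring query; simulates a healthy service.
    return random.uniform(0.0, 0.01)

def canary_rollout() -> bool:
    for share in CANARY_STEPS:
        print(f"routing {share:.0%} of traffic to the new version")
        if observed_error_rate(share) > ERROR_BUDGET:
            print("error budget exceeded; rolling back to the old version")
            return False
    print("canary stayed healthy at every step; rollout complete")
    return True

canary_rollout()
```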

In a project with U.K. retailer Sports Direct, Quru upgraded 60 nodes successfully on a Friday afternoon with automated Ansible playbooks, but it didn’t roll out all 60 nodes at once. Instead, it started with a single node to prove that the process was set up correctly and the configuration update worked as intended. If the canary continues to sing, the team can deploy the remaining 59 nodes with confidence and head out the door for the weekend, Freeman said.
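The pattern reads the same in a few lines of Python, with node names and the validation check invented for illustration: upgrade one node, validate it, and only then batch through the rest. Ansible playbooks can express this kind of batching with the serial keyword on a play, which limits how many hosts each pass touches.

```python
# "One node first" rollout sketch for a 60-node estate: prove the process on a
# single canary node before touching the other 59. Names and checks are invented.
nodes = [f"node{i:02d}" for i in range(1, 61)]

def upgrade(node: str) -> None:
    print(f"applying configuration update to {node}")

def passes_checks(node: str) -> bool:
    # Placeholder for real validation (service status, smoke tests, log scan).
    return True

canary, rest = nodes[0], nodes[1:]
upgrade(canary)
if passes_checks(canary):
    for node in rest:   # proceed only once the canary proves the process works
        upgrade(node)
else:
    print(f"{canary} failed validation; halting before the other {len(rest)} nodes")
```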

Ibotta’s engineers currently deal with a cumbersome canary deployment process on the company’s monolithic application. Ibotta aims to move to canary deployments with microservices because the changeover to new code can happen faster than with blue/green, without staging and without the issues experienced on the monolithic architecture, Bassin said.

Carroll recommended that organizations model their operations toolchain and choose a deployment pattern that best fits the pipeline and objectives. Then, iterate on the CI/CD pipeline, with tests and post-deployment monitoring, until they achieve the target cycle time.

Hope for the best — expect the worst

Continuous delivery and deployment cannot succeed without a plan for failure. Even with rigorous control before code reaches production, things go wrong unexpectedly, and you cannot test for every possibility.
