Top 5 Issues Curtailing DevOps Adoption

Source – devops.com

DevOps may seem simple and logical on the surface, yet most of the companies we talk to are struggling in one aspect or another. This is no surprise: Successful DevOps initiatives require putting together a full toolset and simultaneously creating a smart team of developers, testers and project managers who are well-versed in cloud technologies and agile practices.

Those tasks alone can take many months of trial and error until you are ready to start DevOps in earnest. After a period of experimentation, it’s time to refine and optimize. Focus shifts to more mature issues such as creating a fully integrated tools stack, supporting a rich test data environment that doesn’t undermine privacy and security, and determining the best DevOps infrastructure strategy for your company.

Regardless of your DevOps maturity, we’ve outlined some top issues to consider and mitigate as your DevOps practice evolves.

Integrating Tools

As the practice of DevOps matures, we have seen an explosion of tools, both open-source and commercial. These tools serve every discrete need in the process: requirements management, test management, defect tracking, agile planning, source code management, build, deployment, monitoring and more. We calculate there are at least 1,500 such tools available today.

So, while developers, testers, project managers and business analysts have many sophisticated capabilities to streamline DevOps, those tools must work together. It’s imperative that tools and groups don’t work in silos, but are part of an integrated pipeline that lets code flow seamlessly, according to business objectives, from requirements definition through development and test to deployment and production. Organizations may choose to build these integrations themselves, though that gets costly fast, both at the outset and in ongoing maintenance. Some vendors offer out-of-box integrations with popular tools, although you must consider whether these fit your workflows and needs. A few vendors now offer the best of both worlds: connectors that are simple to set up and easy to customize when needed.
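As an illustration of what such a connector does under the hood, here is a minimal sketch that maps a test-failure record onto a defect-tracker payload. Every field name and severity code here is hypothetical, not any vendor’s real schema:

```python
# Hypothetical connector: translate a test-management tool's failure
# record into a defect tracker's issue payload. All field names and
# severity codes are illustrative.

def failure_to_defect(failure: dict) -> dict:
    """Map a test-failure record onto a defect-tracker payload."""
    return {
        "title": f"[{failure['suite']}] {failure['case']} failed",
        "description": failure.get("log", "")[:2000],   # trackers often cap body size
        "severity": {"critical": "P1", "major": "P2"}.get(
            failure.get("impact"), "P3"),               # default to lowest priority
        "labels": ["auto-filed", failure["suite"]],
    }

defect = failure_to_defect(
    {"suite": "checkout", "case": "test_refund", "impact": "major",
     "log": "AssertionError: refund not issued"}
)
print(defect["severity"])  # → P2
```

A real connector adds authentication, retries and webhook plumbing around this mapping, but the field translation is where most of the customization effort lives.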

Configuration as Code

A common problem in DevOps is aligning the environment configuration in place when and where the code is written with the environment configuration used in production. By adopting configuration as code (a combination of tools and processes), you store all configuration data in files alongside the code. This is valuable because if there are bugs or UI issues, your staff can quickly locate the version of the configuration that matches the code or feature in question and load it into the environment. If you are confident the environment configuration is accurate, you can narrow an app failure down to the code itself as the most likely cause.

Having both code and the configuration setup information in one place is also useful for testing purposes, as infrastructure configurations sometimes differ between development and testing worlds.
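As a minimal sketch of the idea (file name, keys and values are all hypothetical), the configuration can live in a version-controlled file next to the source, and a small helper can flag drift between the committed settings and a live environment:

```python
# Configuration as code, sketched: environment settings live in a
# version-controlled JSON file next to the source, so the config that
# matches any code revision can be checked out and compared.
import json
from pathlib import Path

CONFIG = Path("config/production.json")  # tracked in the same repo as the code

def load_config(path: Path) -> dict:
    """Read a committed configuration file."""
    return json.loads(path.read_text())

def config_drift(expected: dict, actual: dict) -> dict:
    """Return keys whose values differ between committed config and a live environment."""
    return {k: (expected.get(k), actual.get(k))
            for k in expected.keys() | actual.keys()
            if expected.get(k) != actual.get(k)}

committed = {"db_pool_size": 20, "feature_x": True}   # what the repo says
live      = {"db_pool_size": 5,  "feature_x": True}   # what the environment reports
print(config_drift(committed, live))  # → {'db_pool_size': (20, 5)}
```

With a check like this in the pipeline, a failing build can report whether the environment or the code diverged before anyone starts debugging the application itself.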

Fast is Not Free

Software teams have always wrestled with the need to balance speed with quality and cost. You can’t get the best of everything, so teams should figure out where they can compromise and where they can’t. In DevOps, however, the emphasis is almost always on speed. Once you reach maturity, you usually go fast and achieve high-quality results, yet this requires investing in tools, process and people.

Working smartly can go a long way toward moving faster without sacrificing quality. If your team can figure out how to be efficient and precise, with clear objectives, then you won’t have to do much rework, which adds cost and time. Many people will tout the benefits of full automation as a way to work faster. In many cases, however, automation ends up being the more expensive solution: it adds maintenance and overhead that must be outweighed by the effort it saves, and it means more tools and integrations to maintain and more pieces that can break.

Justify how much automation you need and acquire only the tools necessary to meet your goals. Ideally, organizations should determine the need for speed based on business requirements. Many industries are not driven by speed to market and benefit more by working in a steady, iterative fashion. Make sure you have a sound business reason for working faster and releasing faster, and that you can effectively manage the automation and staff needed to work at that pace.
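One way to make that justification concrete is a back-of-the-envelope break-even calculation. The formula and every figure below are illustrative, not from the article:

```python
# Back-of-the-envelope check: automation pays off only when the hours
# it saves exceed the hours spent building and maintaining it.
# All figures are hypothetical.

def breakeven_runs(build_hours: float, maint_hours_per_month: float,
                   months: int, manual_hours_per_run: float,
                   automated_hours_per_run: float) -> float:
    """Number of runs over the period at which automation breaks even."""
    total_cost = build_hours + maint_hours_per_month * months
    saved_per_run = manual_hours_per_run - automated_hours_per_run
    return total_cost / saved_per_run

# 80h to build, 10h/month upkeep over a year, each run saves 1.5h:
runs = breakeven_runs(80, 10, 12, 2.0, 0.5)
print(round(runs))  # → 133: below that many runs, manual is cheaper
```

If your team runs the pipeline far fewer times than the break-even point over the tool’s lifetime, the automation is a cost, not a saving.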

Acquiring Test Data

For high-stakes applications, such as a new e-commerce site or mobile customer app, there is a great deal of risk in launching an app that hasn’t been tested on representative data. Accumulating enough high-quality test data to find bugs and optimize all the key features and scenarios before going live can be a monumental task, given today’s big data initiatives, fragmented database technologies and data privacy/security concerns. This is especially tricky when companies handle sensitive data, such as customer information and credit card numbers.

One common option is to use real customer data and “scrub” it to remove identifying characteristics. If the cleansing process is not thorough, however, you’re creating tremendous security risks for customers. Another basic option is to create fake data that’s representative of production data. However, that data might not be accurate in all key scenarios, nor comprehensive enough to meet your QA objectives.
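Here is a rough sketch of the scrubbing approach, with hypothetical field names and masking rules; any production masking scheme would need review by security and compliance staff:

```python
# Sketch of the "scrub" approach: mask identifying fields in real
# records before they enter a test database. Field names and masking
# rules are illustrative only.
import hashlib

def scrub(record: dict) -> dict:
    out = dict(record)
    # Stable pseudonym (hash) so joins across tables still line up.
    out["customer_id"] = hashlib.sha256(
        record["customer_id"].encode()).hexdigest()[:12]
    out["name"] = "REDACTED"
    # Preserve format (last four digits) so card-handling code paths still run.
    out["card"] = "**** **** **** " + record["card"][-4:]
    return out

print(scrub({"customer_id": "C1001",
             "name": "Jane Doe",
             "card": "4111111111111111"}))
```

The weak points the article warns about show up even in this toy: a field the scrubber doesn’t know about passes through untouched, which is exactly how an incomplete cleansing process leaks customer data.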

Our analysis is that test data is a complex problem that no single tool can solve; it should be treated as a cross-functional initiative. It’s not unusual to see security, compliance, testing, development and business personnel involved in test data management activities, especially at companies handling sensitive data.

Internal vs. External Hosting

DevOps environments seem to run optimally in the cloud. Cloud infrastructure gives organizations the optimal scale, flexibility and speed to run more builds and test faster through highly automated services. Top providers including AWS and Azure offer a full range of DevOps-oriented services to manage infrastructure for you, such as load balancing, log and instance monitoring and automatic backup/failover. With many DevOps vendors hosting their products and services in the cloud as well, it’s logical to host the entire environment there.

However, this isn’t possible for many organizations with strict security and compliance requirements. Running a DevOps infrastructure internally requires a great deal of configuration, maintenance and software, not to mention staff to maintain it all. Automated testing can be time-intensive when only limited machines are available on the internal network; given the many hours required to complete those jobs, teams may have to wait for the weekend for resources to free up. This, in turn, causes significant delays in testing and code quality feedback.

One solution to this problem is using containers to scale the internal DevOps infrastructure. This allows more tests to run at once and reduces overall build time. Companies strictly doing web development can use open-source tools to do this; by contrast, large enterprises with apps on multiple platforms will need to build out much of their own infrastructure.
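Containers aside, the underlying win is running test jobs concurrently instead of one after another. The effect can be sketched with plain process-level parallelism standing in for a container farm; suite names and durations are invented:

```python
# Illustrative only: four "test jobs" run concurrently rather than
# serially -- the same scheduling win containers give a build farm.
import time
from concurrent.futures import ThreadPoolExecutor

def run_suite(name: str) -> str:
    time.sleep(0.2)          # stand-in for a real test suite
    return f"{name}: passed"

suites = ["unit", "api", "ui", "perf"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_suite, suites))  # preserves input order
elapsed = time.perf_counter() - start

print(results)
print(f"wall time ~{elapsed:.1f}s for four 0.2s jobs")  # roughly 0.2s, not 0.8s
```

Serially those four jobs would take the sum of their durations; run concurrently, wall time collapses toward the longest single job, which is why the nightly-or-weekend test queue disappears once the infrastructure can scale out.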

It’s a Process

All of this might seem overwhelming to organizations just getting started with DevOps. But remember that DevOps is a journey taken over time; it is more than just new tools and skills. DevOps is a dramatic and evolving way to develop software and services that meet the frequently changing demands of customers. DevOps is the only way organizations can truly become data-driven operations, supporting new revenue streams and product innovation. When viewing it through this lens, we know that change will be gradual. Celebrate the small wins with low-risk projects first, and work to drive efficiency wherever possible.
