Developing services with DevOps & legacy components

Source – devopsonline.co.uk

DevOps methodologies, tools and techniques are increasingly used to build services that must integrate with core legacy applications, and delivering a robust end-to-end service across both worlds is sure to be a challenge.

We tend to focus on DevOps and agile development, exploring the opportunities afforded by these techniques and the tools that support them. However, unless you are working in a start-up, most organisations have legacy systems that represent significant investments and continue to support core business processes. There is often little appetite to transform a system that supplies this level of support: the risks and costs of such a transformation need to be thought through carefully, and more often than not a policy of caution is chosen. Yet modern applications built from cloud-based, containerised microservices are not a natural fit with legacy systems developed on old, inflexible technologies and maintained in on-premises environments.

But, more often than not, that is our challenge as developers: to get the best out of the new technologies and DevOps processes, while exploiting the value in the core legacy components.

Maintaining software & infrastructure

For Google, eBay and Amazon it is necessary to provide fast, highly scalable and easily maintained software and infrastructure. These organisations commit vast sums of money to ensure they can constantly meet customer expectations. They continuously re-engineer their processing pipelines to exploit the latest technologies, and recruit and nurture the best software engineers they can find. They treat technology as the core enabler of their business. However, very few companies operate at the scale of these giants or, for that matter, have the budget.

While it is good to learn from the pioneering engineering and results achieved by these companies, the reality is they are solving a different problem to most organisations. This is not a one-size-fits-all equation. While we all would like a Ferrari, not all of us need one.

According to the World Quality Report 2017-18, most organisations are making use of DevOps at some level: 12% of companies surveyed had not used DevOps, 47% had used DevOps in less than 20% of projects, and 30% had used DevOps in between 20% and 50% of their projects. Few organisations have made it their primary development methodology. Most have exploited DevOps to support customer-facing operations and have left back-office operations to continue as is, until they have enough confidence with DevOps to attempt a full transformation. However, in most scenarios, the customer-facing DevOps solutions will need to interact with legacy systems.

By using DevOps to support customer-facing operations, organisations can exploit DevOps' ability to make quick changes that reflect the latest customer propositions. For example, new offers and promotions can be swiftly reflected in the customer-facing services, as can responses to a competitor's move, helping to maintain their position in the market.

Green fields and brown fields

The phrase "green field" describes a development where nothing already exists and we are free to do what we like to get our solution in place. If we were in a green field, we would just focus on DevOps, pure and simple. However, as I've stated above, most organisations are transforming slowly: they are not in a green field scenario, they are in a brown field scenario. "Brown field" is the term used for constructing on land that is being reclaimed from previous industrial use. I think the name is appropriate.

When we are in a Brown Field development we have to accept there are constraints on what development freedoms we have. The legacy components will impose data definitions and business rules on any new application. By definition, if these systems are too complex or important to risk transforming, then they will be the systems that define the way the business holds and processes data. If we fail to understand this, we may have a very long and painful integration phase. To get this right we really need to take a hybrid approach.

Suggestions for a hybrid approach

So, imagine we are a DevOps development about to deliver a service: one or more DevOps squads will produce a set of components to enable that service, but a sizable chunk of it will be delivered by existing legacy components. Our first task is to understand the constraints those legacy components impose.

Now, a lot of people shy away from this, as it really isn't as easy as it sounds. If a legacy system has been retained because of the risk and cost of transforming it, chances are the way it manages data and enforces business rules is known only to a small, dedicated support team with years of specific knowledge. Don't even begin to hope the documentation is comprehensive or up to date; it won't be.

Produce an end-to-end architecture

For the service under construction, you will need to show the integration points between the new development, which you fully control, and the legacy components, which you do not. It is necessary to understand these points and what data and processing they support. The legacy integration points become third-party APIs into your development. However, they are usually more complex in terms of data and rules processing than a typical third-party API. Remember, these are the systems everyone was afraid to touch. As such, they will require much more attention and understanding to integrate successfully. For example, a rule enforcing a date for a registration process may seem clear, but has it been enforced by the legacy system in the same way that you need to implement it? If it hasn't, you may have thousands of existing customers breaking your service on day one.

I once sat in a contact centre to see how a registration process was being completed. There was a date that was required, but the script didn't explain what was required or what the date meant. The staff were proud to inform me they had overcome this problem by setting it to a default date they all could use, which allowed them to proceed and didn't seem to cause any problems. To them, it was a problem solved... until someone like me came along and applied shiny new business-rule processing to that very field. The lesson was learned.
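To make the risk concrete, here is a minimal sketch of how such a mismatch surfaces. The names, dates and the legacy default value are all hypothetical; the point is that a new, stricter rule can reject data the legacy system happily accepted for years.

```python
from datetime import date

# Hypothetical scenario: the legacy system accepted any date, and contact-centre
# staff worked around an unclear field by entering a shared default date.
LEGACY_DEFAULT = date(1900, 1, 1)

def is_valid_registration_date(d: date) -> bool:
    """New business rule (illustrative): the date must be plausible."""
    return date(1990, 1, 1) <= d <= date.today()

# A mix of genuine dates and legacy default-date records already in the system
legacy_records = [date(1900, 1, 1), date(2015, 6, 3), date(1900, 1, 1)]

# Records the new rule would reject on day one
rejected = [d for d in legacy_records if not is_valid_registration_date(d)]
```

Running a check like this against a sample of real legacy data, before the new rule goes live, turns a day-one incident into a migration task.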

The end-to-end journey

As much as agile is about leaving heavyweight documentation behind and focusing on collaboration, the complexity of legacy systems means you need a clear understanding of what you can and can't do, and you need to document it in some manner. If it's a constraint, it's not up for discussion. The way the legacy system manages the data and processes you are dealing with is of critical concern. You gain that understanding by working with the legacy support teams and, in some instances, by performing exploratory testing on the legacy systems to confirm their behaviour. Exploratory testing of the legacy components lets you build up knowledge of their behaviour and document it as part of the knowledge needed to develop an end-to-end service. A short architectural sprint may be required. These are all potential approaches, and it is first necessary to take a realistic view of the problems your team might encounter. If these issues are not managed, they will appear in test (if you are lucky) or in live service (if you are not). Once understood and documented, the behaviours and constraints of the legacy components can be shared between all the development squads.
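One way to make exploratory findings durable and shareable is to capture them as characterization tests: small tests that assert what the legacy system actually does, not what you wish it did. The sketch below is hypothetical; `LegacyStub` stands in for the real system, and in practice these tests would run against a legacy test environment.

```python
class LegacyStub:
    """Stand-in for the real legacy system (hypothetical behaviour)."""

    def register(self, customer_id: str, reg_date: str) -> dict:
        # Observed during exploratory testing: the legacy system silently
        # truncates customer IDs to 8 characters before storing them.
        return {"id": customer_id[:8], "date": reg_date}


def test_legacy_truncates_long_ids():
    # This pins down *observed* behaviour, however surprising, so every
    # squad integrating with the legacy component knows about it.
    result = LegacyStub().register("CUST-0000123", "2024-01-01")
    assert result["id"] == "CUST-000"   # observed, not desired, behaviour


def test_legacy_keeps_short_ids_intact():
    result = LegacyStub().register("C123", "2024-01-01")
    assert result["id"] == "C123"
```

Because the tests are executable, they double as living documentation: when the legacy team patches the behaviour, the tests fail and every squad finds out.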

Data latencies across the services

Legacy systems are likely to use performance optimisation strategies built around batching updates, exploiting evening "downtime" to create overnight batch windows for processing data. So, while an update may be sent to the legacy system as part of a customer interaction with a mobile application, the downstream effect may depend on an overnight batch run. For example, a customer may complete an application in a mobile app and receive confirmation immediately, while the actual provisioning of the service on the legacy system only happens after the overnight batch. This causes issues in service design and implementation, and it places challenges on the end-to-end testing of the solution. It will certainly place restrictions on test automation and the prospects of continuously testing the end-to-end service.
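One design response is to model the intermediate state explicitly, so the new service never promises the customer something the legacy side has not yet done. This is a hedged sketch under assumed states and wording; the names are illustrative, not an established pattern from the article.

```python
from enum import Enum

class ApplicationState(Enum):
    # Assumed two-stage lifecycle: the new service accepts the application
    # immediately; the legacy batch run provisions it overnight.
    ACCEPTED = "accepted"          # confirmed to the customer straight away
    PROVISIONED = "provisioned"    # set only after the overnight batch completes

def confirmation_message(state: ApplicationState) -> str:
    """Customer-facing wording that is honest about the batch latency."""
    if state is ApplicationState.ACCEPTED:
        return "Application received - your service will be active tomorrow."
    return "Your service is now active."
```

Making the latency part of the domain model, rather than hiding it, also gives end-to-end tests a concrete state to assert on between the two systems.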

Testing, automation and continuous testing

The challenge of testing an end-to-end service built from such heterogeneous components is considerable. Generally, the modern products of the DevOps development can be tested via automation and continuously tested with little trouble. However, fragile legacy components come with constraints, typically around environments and test data. If a legacy component is hosted on premises, there will generally be a set of test environments that a new development can integrate with. These are usually shared with other development and support teams, and the test data in the legacy environment is often cumbersome to set up and difficult to refresh. This limits what can and can't be done to test the end-to-end service.

End-to-end automated service testing can be done with much work and a bespoke set-up, but it will be fragile, difficult to maintain and may generate some false results. A risk-based approach will be needed to manage end-to-end service testing, and the dynamics of testing will differ between the modern and the legacy components. The legacy component is often treated as a third-party API over which we have little control. The option to stub out the legacy component should be considered. But, however it is done, it won't be perfect. It will be risk-based, and it will require understanding and agreement as an approach.
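The stubbing option above can be sketched simply: the new service talks to the legacy system through an interface it owns, and automated runs swap in a stub that mimics the batch-based behaviour. All names here are assumptions for illustration, not a real legacy API.

```python
class LegacyProvisioningStub:
    """Replaces the real legacy system in automated end-to-end runs."""

    def __init__(self):
        self.requests = []  # record calls so tests can assert on them

    def provision(self, customer_id: str) -> dict:
        self.requests.append(customer_id)
        # Mirror the real system's batch behaviour: requests are queued,
        # not completed, so tests exercise the honest intermediate state.
        return {"status": "QUEUED"}


def submit_application(customer_id: str, legacy) -> bool:
    """New-service logic under test: succeed if the legacy side queued it."""
    response = legacy.provision(customer_id)
    return response["status"] == "QUEUED"


stub = LegacyProvisioningStub()
accepted = submit_application("C123", stub)
```

A stub like this keeps the automated pipeline fast and deterministic, while the riskier, shared-environment integration tests against the real legacy system run less frequently, in line with the risk-based approach described above.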

In creating an end-to-end service in a hybrid environment, we are often dealing with the meeting point between two different worlds. We need a clear understanding of how both of those worlds operate and of the risks involved in our own unique set-up, as each development incorporating legacy components will be individual. There will always be difficulties. However, doing as much up front as possible will reduce them and increase your chances of success.