10 Steps to True Mainframe DevOps

Source: it.toolbox.com

Compuware explained in this year’s Arcati Mainframe Yearbook how mainframe sites can get the best from DevOps and Agile methods.

They began by saying that enterprises must quickly and decisively transform their mainframe practices. IT leaders must bring the proven advantages of Agile, DevOps, and related disciplines to bear on the mainframe applications and data that run their business. Their article discussed a proven, phased approach for measurably modernizing mainframe practices.

To remain competitive in an app-centric economy, mainframes must be as adaptive as other platforms. And enterprise IT must be able to manage DevOps in an integrated manner across mainframe, distributed, and cloud.

Most mainframe development is still performed in antiquated ‘green screen’ ISPF environments that require highly specialized knowledge and, problematically, limit new staff productivity. Modernizing the mainframe begins with modernizing this developer workspace.

A modernized mainframe workspace should have the look and feel of an Eclipse-style IDE. This user-friendly interface allows staff at all experience levels to move easily between development and testing as they work on both mainframe and non-mainframe applications.

Unit testing is central to Agile. Frequently testing small increments of code enables developers to quickly and continuously assess how closely their current work aligns with immediate team objectives – so they can promptly make the necessary adjustments or move on to the next task.
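As a minimal sketch of that practice, the example below unit-tests a small, hypothetical interest-calculation routine; the routine, its module name, and its figures are assumptions made purely for illustration, not taken from the article. The point is that a developer can re-run a check like this against one small increment of code on every change.

```python
# Minimal unit-test sketch (hypothetical interest_for_period routine).
# The point is the practice: test one small increment of logic in isolation,
# run the tests on every change, and adjust immediately when they fail.
import unittest


def interest_for_period(balance: float, annual_rate: float, days: int) -> float:
    """Hypothetical extract of business logic pulled into a testable unit."""
    return round(balance * annual_rate * days / 365, 2)


class InterestForPeriodTest(unittest.TestCase):
    def test_typical_balance(self):
        self.assertEqual(interest_for_period(10_000.00, 0.05, 30), 41.10)

    def test_zero_days_accrues_nothing(self):
        self.assertEqual(interest_for_period(10_000.00, 0.05, 0), 0.00)


if __name__ == "__main__":
    unittest.main()
```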

Effective unit testing requires more than technology. Mainframe developers not accustomed to unit testing must learn how to best leverage the practice to work much more iteratively on much smaller pieces of code. Once you have completed the unit testing phase, you can work through the functional testing phase. Functional testing validates that the implementation works as specified in its requirements. This is different from “the code works correctly”, which is determined during unit testing.

After functional testing, you can start the integration testing phase. With integration testing, you evaluate whether the collaboration between two or more programs works as expected. This extends functional testing, which verifies the specification of a single program, to cover the interaction between several programs.
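A rough sketch of the difference is shown below: the two functions stand in for two separately tested programs, and the test exercises the call from one into the other rather than either in isolation. Both functions and the values they use are hypothetical assumptions for the example.

```python
# Integration-test sketch: verify that two units collaborate as expected.
# Both functions are hypothetical stand-ins for separately unit-tested programs.
import unittest


def lookup_tax_rate(region: str) -> float:
    """Hypothetical program A: returns the tax rate for a region."""
    return {"UK": 0.20, "US": 0.07}.get(region, 0.0)


def price_with_tax(net_price: float, region: str) -> float:
    """Hypothetical program B: calls program A and applies the rate."""
    return round(net_price * (1 + lookup_tax_rate(region)), 2)


class PricingIntegrationTest(unittest.TestCase):
    def test_price_includes_regional_tax(self):
        # Exercises the call from program B into program A, not just B alone.
        self.assertEqual(price_with_tax(100.00, "UK"), 120.00)


if __name__ == "__main__":
    unittest.main()
```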

Mainframe applications have typically grown large and complex, and are often poorly documented. To overcome this, you have to make it much easier for any new contributor to quickly ‘read’ existing application logic, program interdependencies, data structures, and data relationships. Developers and other technical staff also need to be able to understand application runtime behaviours – including the actual sequence and nature of all program calls as well as file and database I/O – so they can work on even the most unfamiliar and complex systems with clarity and confidence.
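As a very rough illustration of what ‘reading’ program interdependencies can mean, the sketch below scans a directory of COBOL sources for static CALL statements and prints a simple caller-to-callee map. The directory layout and file extension are assumptions, and real analysis tooling goes much further, covering dynamic calls, data structures, and file and database I/O.

```python
# Rough sketch: build a caller -> callee map from static CALL statements
# in COBOL source files. The directory layout and ".cbl" extension are
# assumptions; commercial tools also trace dynamic calls and file/DB I/O.
import re
from collections import defaultdict
from pathlib import Path

CALL_PATTERN = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)


def build_call_map(source_dir: str) -> dict[str, set[str]]:
    call_map: dict[str, set[str]] = defaultdict(set)
    for source in Path(source_dir).glob("*.cbl"):
        text = source.read_text(errors="ignore")
        for callee in CALL_PATTERN.findall(text):
            call_map[source.stem.upper()].add(callee.upper())
    return call_map


if __name__ == "__main__":
    for caller, callees in sorted(build_call_map("cobol/src").items()):
        print(f"{caller} -> {', '.join(sorted(callees))}")
```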

Successful mainframe transformation demands rigorous, reliable, and early detection and resolution of quality issues. There are three primary reasons for this. Firstly, mainframe applications often support core business processes that have little to no tolerance for error. Secondly, in transitioning from waterfall to Agile delivery cycles, continuous quality control reduces costs and prevents even relatively minor errors from adding friction that undermines the goal of faster, more streamlined application updates.

Thirdly, a new generation of developers with less mainframe experience and expertise is being called upon to maintain and evolve mainframe applications. These developers must be supported with quality controls and feedback above and beyond the automated unit testing already in place.

Continuous Integration (CI) is especially important because it ensures that quality checks are performed continuously as your code is updated.
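One minimal sketch of such a check is a pipeline step that runs the unit-test suite on every commit and fails the build when any test fails; the paths and command below are illustrative assumptions rather than a prescription for any particular CI server.

```python
# Minimal CI-step sketch: run the unit tests on every commit and fail the
# build if any test fails. Paths and commands are illustrative assumptions.
import subprocess
import sys


def run_quality_gate() -> int:
    """Run the test suite; a non-zero exit code fails the pipeline stage."""
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    print(result.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_quality_gate())
```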

The goal is to have developers for mobile, Web, and mainframe components collaborate on a single Scrum team. Teams focus on stories and epics that capture specific units of business value, rather than technical tasks in a project plan. By estimating the size of these stories and assigning them appropriate priorities, your teams can start engaging in Agile processes that allow them to iterate quickly towards their goals.

Training in Agile processes and work culture is therefore a must. Technical Leadership roles and Product Owners, in particular, need in-depth training and coaching. However, all team members should have at least some formal introduction to basic Agile concepts – especially if they’ll be expected to read Scrum or Kanban boards.

To ensure that your applications will perform optimally in your production environment, it’s not enough to just write good code. You also need to understand exactly how your applications behave as they consume processing capacity, access your databases, and interact with other applications.

One good way to gain this understanding is to leverage operational data continuously throughout the DevOps lifecycle. This provides dev and ops teams with a common understanding of the operational metrics/characteristics of an application throughout its lifecycle, helping them more fully and accurately measure progress towards team goals. Early use of operational data can also dramatically reduce your MIPS/MSU-related costs by allowing you to discover and mitigate avoidable CPU consumption caused by inefficient code.
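As a hedged illustration, the sketch below aggregates hypothetical per-transaction CPU figures exported from monitoring and flags the heaviest consumers – the kind of early signal that lets teams address inefficient code before it inflates MIPS/MSU charges. The CSV layout is an assumption made for the example.

```python
# Sketch: aggregate hypothetical per-transaction CPU figures exported from
# operational monitoring and flag the programs that consume the most CPU.
# The CSV layout (program, cpu_seconds) is an assumption for the example.
import csv
from collections import defaultdict


def top_cpu_consumers(path: str, limit: int = 5) -> list[tuple[str, float]]:
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["program"]] += float(row["cpu_seconds"])
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)[:limit]


if __name__ == "__main__":
    for program, cpu in top_cpu_consumers("ops_metrics.csv"):
        print(f"{program}: {cpu:.2f} CPU seconds")
```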

To truly enable Agile and DevOps on the mainframe, your Software Configuration Management (SCM) must do more than just provide automation, visibility, and rules-based workflows to your Software Development Life Cycle (SDLC). It must also integrate with other tools in your end-to-end toolchain.

The shift from waterfall-based SCM to Agile-enabling SCM is a pivotal moment in any mainframe transformation, and it should be carefully planned to avoid disruption to current work in progress.

To keep pace with today’s fast-moving markets, your business must also be able to quickly and reliably get new code into production. That means automating and coordinating the deployment of all related development artefacts into all targeted environments in a highly synchronized manner. You’ll also need to pinpoint any deployment issues as soon as they occur, so you can take immediate corrective action.
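The sketch below illustrates the idea in the simplest possible terms: a set of related artefacts is deployed to each target environment in sequence, and the release halts at the first failure so corrective action can be taken immediately. The deploy_artifact() function and the artefact names are hypothetical placeholders for whatever deployment tooling your site actually uses.

```python
# Sketch: deploy a set of related artefacts to each target environment in a
# coordinated sequence, stopping at the first failure. deploy_artifact() is a
# hypothetical placeholder for real deployment tooling.
from typing import Iterable


def deploy_artifact(artifact: str, environment: str) -> bool:
    """Placeholder for a real deployment call; returns True on success."""
    print(f"deploying {artifact} to {environment}")
    return True


def deploy_release(artifacts: Iterable[str], environments: Iterable[str]) -> bool:
    for environment in environments:
        for artifact in artifacts:
            if not deploy_artifact(artifact, environment):
                print(f"FAILED: {artifact} in {environment} - halting release")
                return False
        print(f"{environment}: all artefacts deployed")
    return True


if __name__ == "__main__":
    deploy_release(["ACCTUPDT.load", "ACCTUPDT.dbrm"], ["test", "qa", "prod"])
```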

If things go wrong, you have to be able to quickly and automatically fall back to the previous working version of the application. This automated fallback is, in fact, a key enabler of rapid deployment, since it is the primary means of mitigating the business risk associated with code promotion.
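Continuing the same hypothetical sketch, promotion and fallback can be paired so that a failed post-deployment check automatically reinstates the previous version; promote() and restore() are placeholders for real tooling, and the failure below is simulated for illustration.

```python
# Sketch: automated fallback to the previous working version when a
# deployment fails verification. The version identifiers and the
# promote/restore functions are hypothetical placeholders.
def promote(version: str) -> bool:
    """Placeholder: push a version to production; True means it verified OK."""
    print(f"promoting {version}")
    return False  # simulate a failed post-deployment check


def restore(version: str) -> None:
    """Placeholder: reinstate a previously deployed version."""
    print(f"falling back to {version}")


def promote_with_fallback(new_version: str, current_version: str) -> str:
    if promote(new_version):
        return new_version
    restore(current_version)  # automatic, no manual intervention required
    return current_version


if __name__ == "__main__":
    active = promote_with_fallback("ACCTUPDT v1.4.0", "ACCTUPDT v1.3.2")
    print(f"active version: {active}")
```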

DevOps teams must be able to fully synchronize the delivery of new approved code across all platforms. These deployment controls should also provide unified, cross-platform fallback and progress reporting.

The result is a de-siloed environment where the mainframe is ‘just another platform’ – an especially scalable, reliable, high-performing, cost-efficient, and secure one – that can be quickly and appropriately modified as needed.

You can also provide your IT service management (ITSM) team with a unified environment for both mainframe and non-mainframe applications. This ITSM model will become increasingly useful as more of your company’s digital value proposition is based on code that traverses multiple platforms – from back-end mainframe systems of record to customer-facing Web and mobile apps.

If your core systems of record aren’t agile, your other efforts can only deliver limited benefits. The performance of your business will ultimately be limited by the constraints of your mainframe environment.
