As the quantity of test data increases, machine learning could be the answer to sort through it all

Source: https://www.applause.com

Scaling test automation and maintaining it over time remains a challenge for DevOps teams. Development teams can utilize machine learning (ML) both in the test automation authoring and execution phases and in the post-execution analysis that examines trends, patterns and impact on the business.

Before diving deeper into how ML can help during both of these phases of the test automation process, it is important to understand the root causes of test automation's instability when ML technologies are not in use:

The testing stability of both mobile and web apps is often impacted by elements within them that are either dynamic by design (e.g., React Native apps) or that were changed by the developers.
Testing stability can also be impacted when changes are made to the data the test depends on or, more commonly, when changes are made directly to the app (e.g., new screens, buttons, user flows or user inputs are added).
Non-ML test scripts are static, so they cannot automatically adapt to and overcome the above changes. This inability to adapt results in test failures, flaky/brittle tests, build failures, inconsistent test data and more, as the sketch below illustrates.
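To make the brittleness concrete, here is a minimal sketch in Python with Selenium (the URL and element ID are hypothetical) of a static script that breaks the moment a developer renames a locator:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Hard-coded locator: if a developer renames "login-btn" to
# "signin-btn", this line raises NoSuchElementException and the
# whole suite fails, even though the app itself works fine.
driver.find_element(By.ID, "login-btn").click()

driver.quit()
```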
Let’s dig into a few specific ways that machine learning can be valuable for DevOps teams:

Make sense of extremely high quantities of test data
Organizations that implement continuous testing within Agile and DevOps execute a large variety of testing types multiple times a day. This includes unit, API, functional, accessibility, integration and other testing types.

With each test execution, the amount of test data being created grows significantly, making the decision-making process harder. From understanding where the key issues in the product are to visualizing the most unstable test cases and other areas to focus on, ML in test reporting and analysis makes life easier for executives.

With AI/ML systems, executives should be able to better slice and dice test data, understand trends and patterns, quantify business risks, and make decisions faster and continuously. For example, such systems can learn which CI jobs are the most valuable or the most time-consuming, or which platforms under test (mobile, web, desktop) are faultier than others.
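As a simple illustration, even basic aggregation over accumulated results can surface the faultiest platforms and the longest-running CI jobs. A minimal sketch in Python with pandas follows; the result file and column names are assumptions, not any specific product's schema:

```python
import pandas as pd

# Assumed schema: one row per test execution, exported from the CI system,
# with columns: job, platform, status, duration_sec.
results = pd.read_csv("test_results.csv")

# Failure rate per platform under test (mobile, web, desktop).
failure_rate = (
    results.assign(failed=results["status"].eq("failed"))
           .groupby("platform")["failed"]
           .mean()
           .sort_values(ascending=False)
)
print(failure_rate)

# Longest-running CI jobs: candidates for optimization or parallelization.
print(results.groupby("job")["duration_sec"].mean().nlargest(5))
```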

Without the help of AI or machine learning, this work is manual, error-prone and sometimes impossible. With AI/ML, practitioners of test data analysis have the opportunity to add features around the following (one of which is sketched in code after the list):

Test impact analysis
Security holes
Platform-specific defects
Test environment instabilities
Recurring patterns in test failures
Application element locators’ brittleness
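For instance, a common approach to the "recurring patterns in test failures" item is to cluster failure messages so that hundreds of log lines collapse into a handful of probable root causes. A minimal sketch in Python with scikit-learn (the sample messages are invented; real ones would come from test reports):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented failure messages standing in for real test-report output.
failures = [
    "NoSuchElementException: id=login-btn not found",
    "NoSuchElementException: id=signup-btn not found",
    "TimeoutException: page load exceeded 30s",
    "TimeoutException: page load exceeded 30s on checkout",
]

# Vectorize the messages and group similar ones together.
vectors = TfidfVectorizer().fit_transform(failures)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, msg in zip(labels, failures):
    print(label, msg)
```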
Make actionable decisions around quality for specific releases
With DevOps, feature teams or squads deliver new pieces of code and value to customers almost daily. Understanding the quality, usability and other characteristics of the code behind each feature is a huge benefit to developers.

By utilizing AI/ML to automatically scan new code, analyze security issues and identify test coverage gaps, teams can advance their maturity and deliver better code faster. As an example, Code Climate can review code changes upon a pull request, spot quality issues and help optimize the entire pipeline. In addition, many DevOps teams today leverage the feature flags technique to gradually expose new features and hide them when issues arise.
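Feature flags themselves need no ML, but the technique is simple to illustrate. Here is a minimal sketch in Python (the flag store and feature names are hypothetical; real teams typically use a flag service or config database so flags can be flipped without a redeploy):

```python
# Hypothetical in-memory flag store.
FLAGS = {"new_checkout_flow": False}  # set to True to roll the feature out

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def legacy_checkout(cart: list) -> str:
    return f"legacy checkout: {len(cart)} items"

def new_checkout(cart: list) -> str:
    return f"new checkout: {len(cart)} items"

def checkout(cart: list) -> str:
    # The new feature is exposed only when its flag is on, and can be
    # hidden again instantly if issues appear in production.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))
```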

Enhance test stability over time through self-healing and other test impact analysis (TIA) abilities
In traditional test automation projects, test engineers often struggle to continuously maintain the scripts each time a new build is delivered for testing or new functionality is added to the app under test.

In most cases, these events break the test automation scripts, either because an element ID was introduced or changed since the previous app version, or because a new platform-specific capability or popup interferes with the test execution flow. In the mobile landscape specifically, new OS versions typically change the UI and add new alerts or security popups on top of the app. These kinds of unexpected events would break a standard test automation script.

With AI/ML and self-healing abilities, a test automation framework can automatically identify a change made to an element locator (ID), or a screen/flow that was added between predefined test automation steps, and either quickly fix the script on the fly or alert the developers and suggest the quick fix. Obviously, with such capabilities, test scripts embedded into CI/CD schedulers will run much more smoothly and require less intervention by developers.
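A minimal sketch of the self-healing idea in Python with Selenium follows. The fallback strategy and locator values are assumptions; commercial tools use learned element fingerprints rather than a fixed list:

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """Try each known locator in order; report when a fallback 'heals' the step.

    `locators` is an ordered list of (By, value) pairs: the current
    locator first, previously learned alternatives after it.
    """
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != locators[0]:
                # Surface the healing so developers can update the script.
                print(f"Healed: primary locator failed, used {strategy}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage with hypothetical locator values recorded from earlier runs:
# button = find_with_healing(driver, [
#     (By.ID, "login-btn"),
#     (By.NAME, "login"),
#     (By.XPATH, "//button[text()='Log in']"),
# ])
```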

An additional benefit is the reduction of "noise" within the pipeline. Most of the above-mentioned brittleness in testing does not represent real defects, but rather interruptions to automation scripts. By eliminating these proactively through AI, teams get more time back to focus on real issues.

Conclusion
When thinking about ML within the DevOps pipeline, it is also critical to consider how ML can analyze and monitor ongoing CI builds and point out trends within build-acceptance testing, unit or API testing, and other testing areas. An ML algorithm can look into the entire CI pipeline and highlight builds that are consistently broken, lengthy or inefficient. In today's reality, CI builds are often flaky, repeatedly failing without receiving proper attention. With ML entering this process, the immediate value is a shorter cycle and more stable builds, which translates into faster feedback to developers and cost savings for the business.
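Even before full ML, a simple statistical pass over build history can flag the jobs that deserve attention. A sketch in Python (the build-history records are invented; in practice they would be pulled from the CI server's API, and the thresholds are arbitrary assumptions):

```python
from collections import defaultdict

# Invented build history records.
builds = [
    {"job": "api-tests", "passed": False, "minutes": 12},
    {"job": "api-tests", "passed": True,  "minutes": 11},
    {"job": "ui-tests",  "passed": False, "minutes": 48},
    {"job": "ui-tests",  "passed": False, "minutes": 52},
]

stats = defaultdict(lambda: {"runs": 0, "fails": 0, "minutes": 0})
for b in builds:
    s = stats[b["job"]]
    s["runs"] += 1
    s["fails"] += not b["passed"]
    s["minutes"] += b["minutes"]

for job, s in stats.items():
    fail_rate = s["fails"] / s["runs"]
    avg_len = s["minutes"] / s["runs"]
    # Consistently broken or lengthy builds deserve immediate attention.
    if fail_rate > 0.5 or avg_len > 30:
        print(f"{job}: fail rate {fail_rate:.0%}, avg {avg_len:.0f} min")
```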
