How autonomous software testing could change QA

Manual testing takes too much time, and test automation scripts need ongoing maintenance. Autonomous testing might provide an answer for teams unwilling to compromise on speed.

As enterprises increasingly embrace test automation, AI might enable an even more predictive form of testing that will move the industry forward.

AI and, more specifically, machine learning can take test automation to the next level in the form of autonomous software testing, a capability that speeds the development of new test cases. As software testers learn technical skills for test automation, particularly how to write scripts, autonomous testing can empower them to be more strategic in their efforts.

With conventional automation technology, testers have to invest considerable time into learning how to script each test scenario. On the other hand, with autonomous software testing, testers can spend more time training tools and contributing to QA management initiatives, said Theresa Lanowitz, co-founder and analyst at Voke. Autonomous testing frees testers to spend more time, for instance, helping the CIO or CEO tackle critical objectives around bringing AI into the organization to benefit the customer. And when autonomous tools mature, their capabilities will enable testers to spend more time exploring nonfunctional requirements of a project, such as performance and security.

Once these tools fulfill their promise and have a proven track record, many software quality engineers will ditch test tools with scripted interfaces.

"Capabilities [of traditional test tools] are going to be so far eclipsed by what these autonomous testing tools can do that you will leave that tool behind," she said.

Several tool vendors have already introduced autonomous or AI-based capabilities for test creation, impact analysis, test data management and UI analytics. While these tools are still early in their development, they could revolutionize the software quality industry, Lanowitz said.

Different levels of autonomy

Next-generation testing tools incorporate autonomous capabilities in a variety of ways. For one, autonomous software testing forces testers to stray from happy path testing, in which QA executes well-understood test cases that produce an expected result. These tools require testers to learn new skills, such as the identification of edge cases that complement autonomous testing. Lanowitz recommends that testers spend time with line-of-business experts to understand the edge cases most likely to matter for software products.

Although test automation using AI addresses many existing aspects of testing, machine learning tools currently struggle to retain the context of data. But, as tools improve, Lanowitz expects they will help QA engineers scan large code bases to understand this context in depth and identify critical areas for test coverage. Wider adoption of system simulations could also make it easier to test scenarios before the software product's infrastructure is even in place. These capabilities give testers access to components and services that are incomplete or unavailable at the time of testing. Service virtualization tools, for example, can already simulate APIs under test. Other kinds of simulations could include IoT infrastructure and components of blockchain applications.
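
Service virtualization products do this at much larger scale, but the core idea fits in a few lines of Python. The minimal sketch below assumes a hypothetical InventoryClient class and get_stock method, both invented for illustration, and uses the standard library's unittest.mock to stand in for a service that isn't deployed yet:

```python
from unittest.mock import Mock

# Hypothetical client for a service that does not exist yet. In a real
# project, this would wrap HTTP calls to the inventory API.
class InventoryClient:
    def get_stock(self, sku: str) -> int:
        raise NotImplementedError("service not deployed yet")

def can_fulfill_order(client: InventoryClient, sku: str, qty: int) -> bool:
    """Business logic under test; it only needs *an* inventory answer."""
    return client.get_stock(sku) >= qty

# Simulate the unavailable service so the logic can be tested today.
fake_inventory = Mock(spec=InventoryClient)
fake_inventory.get_stock.return_value = 7

assert can_fulfill_order(fake_inventory, "SKU-123", qty=5) is True
assert can_fulfill_order(fake_inventory, "SKU-123", qty=10) is False
```

Once the real service ships, the simulated client is swapped out and the same tests run against live infrastructure.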

Enterprise uses for autonomous software testing 

To start performing autonomous testing, identify test scenarios where rule-based test scripts fail to deliver the desired accuracy, said Torsten Volk, analyst at Enterprise Management Associates. Rule-based tests are often time-consuming to write and complex to maintain.

At this stage, the AI in these tools consists of one or more micro-models, each of which solves a challenge that is limited in scope. No single AI model analyzes whether a webpage complies with all of these needs: corporate branding requirements, ADA compliance and performance that matches the specification. Instead, the tester must chain traditional code functions with predictive models, one pairing for each of these checks.

AI models help answer simple questions that are difficult to address in standard code, such as:

  • Is a certain image sufficiently annotated for text-to-speech description?
  • Is this new font sufficiently readable for the visually impaired?
  • Is a certain nonstandard placement of the corporate logo still acceptable?

While these questions are small, they are fairly tricky and cause headaches for human testers, Volk noted.
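
The chaining Volk describes can be sketched generically in Python. Each micro-model below is a hypothetical predict callable that scores a screenshot; a real tool would load trained models, but the pattern of many narrow checks, rather than one page-wide model, is the same:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    predict: Callable[[bytes], float]  # hypothetical micro-model: screenshot -> confidence

def run_page_checks(screenshot: bytes, checks: list[Check],
                    threshold: float = 0.8) -> list[str]:
    """Run each narrow micro-model and collect the names of failing checks."""
    return [c.name for c in checks if c.predict(screenshot) < threshold]

# Stub predictors standing in for trained models, one per question.
checks = [
    Check("corporate branding", lambda img: 0.93),
    Check("font readability for low vision", lambda img: 0.61),
    Check("logo placement acceptable", lambda img: 0.88),
]

print(run_page_checks(b"<raw screenshot bytes>", checks))
# -> ['font readability for low vision']
```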

AI can also alert staff to gaps within a test suite, such as new features and capabilities left uncovered by tests. In a world of continuous releases, where, ideally, development teams ship new features each day, this sort of warning is critical for quality, Volk said.
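
Production tools use machine learning to map features to tests, but the shape of the alert can be shown with a deterministic simplification; the feature and test names below are invented for illustration:

```python
# Cross-reference the features shipped in a release against the
# features the test suite actually exercises.
shipped_features = {"checkout-v2", "saved-carts", "gift-wrap"}

test_coverage = {
    "test_checkout_happy_path": {"checkout-v2"},
    "test_saved_cart_roundtrip": {"saved-carts"},
}

covered = set().union(*test_coverage.values())
gaps = shipped_features - covered

if gaps:
    print(f"WARNING: features with no test coverage: {sorted(gaps)}")
    # -> WARNING: features with no test coverage: ['gift-wrap']
```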

Some tools not only alert software engineers to gaps, but even propose ways to address these gaps; such tools can learn from human choices and minimize the need for future interaction. A tool like Mabl, which falls into the test automation category with Selenium and Testim.io, provides a list of recommended fixes for a tester to approve, alter or reject -- and it learns from the process. These tools focus on assisting test engineers, instead of trying to replace them. "This means that there always needs to be a human reviewer present to sign off on changes or expansions in test cases," Volk said.
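
Mabl's internals aren't public, so the following is only a generic sketch of that approve-or-reject workflow, with a hypothetical ProposedFix record standing in for a real tool's suggestions. The decision log it produces is exactly the raw material such a tool would learn from:

```python
from dataclasses import dataclass

@dataclass
class ProposedFix:
    test_name: str
    description: str

def review(fixes: list[ProposedFix]) -> list[tuple[ProposedFix, str]]:
    """Present each tool-proposed fix to a human for sign-off."""
    decisions = []
    for fix in fixes:
        answer = input(f"{fix.test_name}: {fix.description} [a]pprove/[r]eject? ")
        decision = "approved" if answer.lower().startswith("a") else "rejected"
        decisions.append((fix, decision))
    return decisions

# A learning tool would train on this log: which kinds of proposals
# humans accept, and which they throw away.
log = review([ProposedFix("test_login", "update stale button locator to '#sign-in'")])
```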

How AI is infused into test tools

Although all major software testing vendors have test automation tooling, enterprises still struggle to keep pace with faster development cycles in their software teams. "We have not solved the problem of automation," Lanowitz said. "Our research shows that a whole lot of manual testing is still going on."

Over the last 15 years, she has seen a movement that pushes developers to deliver applications faster. But an obsessive focus on shipping code can come back to haunt enterprises; rushed releases are easily riddled with bugs, and bugs wear down customers' patience.

Autonomous testing tools aim to solve some of these problems. Vendors such as AutonomIQ, Functionize and Mabl focus on core software quality processes, like impact analysis, test creation, maintenance and test data management.

Other autonomous testing tools focus on one challenging aspect of the testing lifecycle. For example, Parasoft SOAtest uses AI to improve the automation of service testing. Applitools Eyes makes use of AI to automate visual analysis of applications that run on web and mobile devices.
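
As a rough sketch of what such a visual check looks like in practice, the snippet below pairs Selenium with the Applitools Python SDK (the eyes-selenium package). The calls reflect the SDK's documented API, but treat the exact method names as an assumption and verify them against current documentation:

```python
from selenium import webdriver
from applitools.selenium import Eyes

eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # placeholder credential
driver = webdriver.Chrome()

try:
    # Open a visual test, take an AI-compared snapshot, then close.
    eyes.open(driver, "Demo App", "Home page visual check")
    driver.get("https://example.com")
    eyes.check_window("Home")
    eyes.close()  # raises if visual differences were found
finally:
    eyes.abort()  # no-op when the test already closed cleanly
    driver.quit()
```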

Gannett, a media company whose publications include USA Today, uses Applitools to automate the visual UI inspection of its mobile apps, which helped reduce manual testing by 50%. "Visual testers are excellent at catching errors that the human eye can see, but the manual process is tedious and wastes time," said Greg Sypolt, director of quality engineering at Gannett. Not only did this shift reduce the company's release time by 75%, it also improved the quality of Gannett's mobile app.

There is no universal autonomous testing toolkit. Each vendor tackles different aspects of the software testing lifecycle. Evaluate how these capabilities apply to an existing testing project, and become familiar with each tool's strengths and limitations; a fair degree of skepticism about their purported abilities is healthy. "They will not be 100% effective in version one," Lanowitz said.

Start with something small, and identify concrete reasons to use autonomous testing, such as to improve test efficiency, reduce staffing needs or improve the customer experience.
