What would win an AI testing face-off? Applitools vs. Functionize
Expert Tom Nolle compares two AI testing tools that both interact with -- and rely on -- applications' GUIs for more comprehensive testing.
One of the most difficult, error-prone and consequential tasks in application development and ALM is testing. So, with the advent of AI capable of reducing both human effort and errors, why not apply it to testing?
Two vendors, Applitools and Functionize, represent different approaches to AI testing. If the two were to face off on AI testing, which one would stand out? That's what we're going to explore.
Applitools' GUI-centric approach to AI testing
Most development projects focus on user experience (UX) improvements for web users or mobile app users. In many cases, the only changes developers make are to the UX, so it makes sense to concentrate testing on the GUI's appearance. Applitools tests the UX via the GUI, using AI technology to enhance this outside-in perspective on testing.
Every application's GUI is a progression of screens, also called views, and the sequence of views and the contents of the fields displayed in each have a predictable relationship to inputs. Applitools' aim is to use AI to examine actual screens, compare them to the expected results and generate an index of discrepancies that the development team can then examine and use to guide its corrections. AI testing from Applitools mimics how users approach applications: Users notice small things that software tools have traditionally been unable to recognize.
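To make that checkpoint idea concrete, here is a minimal sketch of the compare-to-baseline flow using the Applitools Eyes Python SDK with Selenium. The URL, element IDs and test names are placeholders, and the exact SDK calls should be verified against current Applitools documentation.

```python
# Minimal visual checkpoint sketch using the Applitools Eyes Python SDK
# (eyes-selenium) with Selenium WebDriver. URLs, IDs and names are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # read from an env var in real use

try:
    # Open a test: screenshots captured at each checkpoint are compared
    # against the stored baseline; mismatches are reported as discrepancies.
    eyes.open(driver, "Demo App", "Login page visual test")
    driver.get("https://example.com/login")           # placeholder URL
    eyes.check("Login screen", Target.window())       # full-window checkpoint
    driver.find_element(By.ID, "username").send_keys("user")  # placeholder locator
    eyes.check("Login form filled", Target.window())
    eyes.close()    # reports any differences the AI comparison found
finally:
    eyes.abort()    # clean up if close() was never reached
    driver.quit()
```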
The Applitools approach supports continuous testing and monitoring of the GUI in a CI/CD process -- valuable not only because most projects tend to focus on the GUI, but also because the GUI is how users interact with any application, which makes it a critical hurdle for production deployment.
With Applitools, the tester builds a reference baseline in a browser, which can then extend across other browsers and bring testing to mobile devices. Applitools also supports summarization of test exceptions and integration with other popular testing tools.
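Extending a single baseline across browsers and mobile form factors is largely a configuration exercise. The sketch below assumes the Python SDK's Ultrafast Grid runner and configuration classes; treat the exact class and enum names as assumptions to check against the SDK documentation.

```python
# Sketch of running one set of checkpoints across browsers and a mobile
# viewport with the Applitools Ultrafast Grid (Python eyes-selenium SDK;
# class and enum names are assumptions to verify against the SDK docs).
from selenium import webdriver
from applitools.selenium import (
    Eyes, Target, VisualGridRunner, Configuration, BrowserType, DeviceName
)

runner = VisualGridRunner()        # renders each checkpoint in the cloud grid
eyes = Eyes(runner)

config = Configuration()
config.add_browser(1200, 800, BrowserType.CHROME)
config.add_browser(1200, 800, BrowserType.FIREFOX)
config.add_device_emulation(DeviceName.iPhone_X)   # mobile viewport
eyes.set_configuration(config)

driver = webdriver.Chrome()
try:
    eyes.open(driver, "Demo App", "Cross-browser login check")
    driver.get("https://example.com/login")          # placeholder URL
    eyes.check("Login screen", Target.window())
    eyes.close_async()
finally:
    driver.quit()
    print(runner.get_all_test_results())             # one result per browser/device
```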
How Functionize uses AI to create tests
Functionize takes a different approach to automated testing. This AI testing tool aims not to reinvent how you test GUIs, but to improve the more traditional test generation and execution tasks through use of the GUI and applied AI. This focus on existing techniques makes Functionize easy to teach to development and ALM teams; the broad flow of both activities isn't disrupted by adoption of the Functionize model.
Functionize uses AI and machine learning to create tests quickly and match tests to application development changes. The company aims to make testing autonomous, rather than require development teams to be extensively involved in creating comprehensive tests for new features and regression testing of features that should not have changed.
Functionize starts with test design, which means you can define a range of fields and how to generate data for those fields. The results of test design feed a test modeler that lets you interact with the application site and align test designs (fields and input ranges) with the actual input screens. What the company calls "adaptive event analysis" also enables a user to adapt the tests to changes in scripting or application logic, which simplifies the integration of the GUI and software logic tests. Functionize supports and integrates performance, load, cross-browser and mobile app testing.
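Functionize is delivered as a low-code SaaS product rather than a code-level SDK, but the test-design concept -- declare fields and their input ranges, generate data for them, then align the design with the actual screens -- can be illustrated with a generic, hypothetical Selenium sketch. None of the names below are Functionize APIs.

```python
# Hypothetical illustration of field-range test design (not Functionize's API):
# declare fields with value generators, fill the real input screen, and check
# that the expected view appears.
import random
from selenium import webdriver
from selenium.webdriver.common.by import By

# Test design: each field maps to a generator over its allowed input range.
field_design = {
    "quantity": lambda: str(random.randint(1, 100)),
    "zip_code": lambda: f"{random.randint(10000, 99999)}",
    "email":    lambda: f"user{random.randint(1, 999)}@example.com",
}

def run_designed_test(url: str) -> None:
    """Fill each designed field with generated data, submit, and verify the view."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        for field_id, generate in field_design.items():
            driver.find_element(By.ID, field_id).send_keys(generate())
        driver.find_element(By.ID, "submit").click()
        # Align the design with the actual screen: assert the expected view loads.
        assert "Confirmation" in driver.title
    finally:
        driver.quit()

run_designed_test("https://example.com/order")   # placeholder URL and element IDs
```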
In addition to its focus on autonomous tests, Functionize's support for load and performance testing defines its value proposition. Functionize doesn't ignore the GUI but rather frames traditional testing processes in a GUI-centric way. The result of a Functionize test is GUI-driven, and GUI processes are implicitly tested during both the test modeler and autonomous testing phases, with AI correlating the screen views with the expected results.
AI testing comparison
To compare these two AI testing tools, consider how each performs when applied to a typical set of development tasks. One task represents a GUI modernization associated with mobile worker support; the other is a traditional functional change to the application itself.
With a GUI modernization, a development team focuses on the presentation of information to users and the return of updated data. There is a sequence of steps developers enforce and a set of fields within each step. Information presentation is paramount, because the project does not include any functional changes to the application.
A more traditional application change is exactly the opposite. Here, the primary goal is to add functionality, which often requires additional information, meaning new data fields. Additional information leads to changes to the GUI, but in most cases, functional changes are designed to avoid massive shifts in the way users work with the application.
So there are two different models for AI testing and two different kinds of application changes to test. Is a truly GUI-centric view better than a tool that undertakes traditional testing from a GUI perspective? That's not an easy question to answer. Many current development projects are driven primarily by the goal of modernizing the UI, a trend that seems to favor the more GUI-centric Applitools approach. But because functional changes to applications generally require much more stringent testing than GUI refreshes and usually involve a complete ALM cycle, you might favor Functionize's approach of linking GUI design with traditional testing procedures.