4 vital UX testing methods for concept through development
Don't rely on a single UX test, and don't create an overloaded bottleneck that delays development. Instead, deploy these four UX assessments iteratively and often.
UX goals are inherently subjective; they're not a simple matter of whether an API works or a screen displays correctly. Therefore, UX testing must pertain to an application's intended business goals, and that's a challenge.
Business applications should improve worker productivity, the relationship a company has with its customers or suppliers, or decision-making in management. UX does not equate directly to user satisfaction with an interface or application. Instead, UX should both achieve business goals and work for users.
Architects and developers who focus on UX have a lot of tips and tricks, which one can group into four UX testing methods:
- moderated or collaborative approach, which relies on direct user input to guide interface designers;
- statistical experience analysis, which measures aspects of an application as it works and looks for evidence of improved UX;
- interaction analysis, which examines how users progress through a sequence of screens to accomplish a task; and
- satisfaction analysis via A/B testing, which collects retrospective subjective views of the UX from users.
Each of these four UX testing methods has appropriate implementations and benefits specific phases of a project. Peruse the key points of each approach, and plan to use more than one during UX design and testing.
Run UX testing like a focus group
Moderated UX testing, sometimes called a focus group approach, is built around an explicit dialogue between users and interface designers about the software or a specific feature -- preferably in the form of suggestions. The goal is to solicit what users like or dislike about a given experience so that interface designers and developers can propose enhancements or changes. Use this feedback-oriented UX testing process to create a new interface, which then becomes the subject of a new test.
Moderation provides the most direct UX testing method, but it's also resource-intensive, particularly with a large, geographically distributed user base. To make this model work in challenging circumstances, UX testers can sample user groups. Generally, don't mix too many distinct user communities in a collaboration because they'll confuse each other. Users report that this UX testing process requires between three and eight iterations to complete; it depends on the number of user communities involved and how many distinct activities the test subject supports.
Analyze UX statistically
Statistical experience analysis is a UX testing approach many interface designers like because it avoids extensive user interaction. The team collects and analyzes information on the rate of user interactions or transactions, along with indicators of errors or problems. Statistical analysis reveals changes in UX quality. Data collection can take place during live interactions, but it is commonly done with generated test data, which automates the process. Statistics depend on information collection, so this method works best in situations where designers and developers can insert statistical probes into apps to monitor user interactions.
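For example, a lightweight probe might wrap UI event handlers and tally interactions, errors and handling time per screen. The sketch below is a minimal, hypothetical illustration in Python of how such instrumentation could feed statistical analysis; the InteractionProbe class and its methods are assumptions for illustration, not part of any specific toolkit.

```python
import time
from collections import Counter

class InteractionProbe:
    """Minimal statistical probe: counts interactions, errors and handling time per screen."""

    def __init__(self):
        self.counts = Counter()   # interactions per screen
        self.errors = Counter()   # errors per screen
        self.latency = {}         # cumulative handling time per screen, in seconds

    def record(self, screen, handler, *args, **kwargs):
        """Wrap a UI handler call and record its outcome."""
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        except Exception:
            self.errors[screen] += 1
            raise
        finally:
            self.counts[screen] += 1
            self.latency[screen] = self.latency.get(screen, 0.0) + (time.monotonic() - start)

    def error_rate(self, screen):
        """Errors per interaction for one screen."""
        total = self.counts[screen]
        return self.errors[screen] / total if total else 0.0
```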
Statistical analysis of UX is most effective as an assessment of the effect changes have on UIs and interactions. Old interface data forms a baseline against which to compare the new data; the designer or developer can then evaluate whether a change improved or hampered the UX. Developers favor statistical analysis, as it helps them make decisions for a large and distributed user base. While it's beneficial at the end of UX assessment, statistical analysis shouldn't be a stand-alone process.
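As a simple illustration of that baseline comparison, the hedged sketch below applies a two-proportion z-test to error rates from the old and new interfaces. The function name and sample figures are hypothetical; the point is only that the old interface's data serves as the reference against which the change is judged.

```python
import math

def two_proportion_z(errors_a, n_a, errors_b, n_b):
    """Compare error rates between a baseline interface (a) and a revised one (b).

    Returns the z statistic; a large negative value suggests the new
    interface produces fewer errors per interaction than the baseline.
    """
    p_a = errors_a / n_a
    p_b = errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se if se else 0.0

# Example: 120 errors in 4,000 baseline interactions vs. 80 in 4,200 new ones
z = two_proportion_z(120, 4000, 80, 4200)
print(f"z = {z:.2f}")  # roughly -3.2, i.e. the drop is unlikely to be noise
```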
Watch the clock
Interaction analysis is a more refined statistical approach to UX testing than statistical experience analysis. Rather than collect overall statistics on user activity, UX testers track the distribution of time spent per screen or task. This information reveals whether users get stuck on specific screens or inputs and also whether there are specific points at which many users abandon their tasks or retrace their steps to redo something.
Interaction analysis usually requires both a software probe toolkit and some user/designer interaction to help visualize the possible sequence of user paths -- both normal and exceptions -- through the application UI.
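The sketch below illustrates one possible shape of that analysis: given probe events of session, screen and timestamp, it computes the median dwell time per screen and flags sessions that never reached a final confirmation screen. The data model and the analyze_paths function are assumptions for illustration, not the output of any particular toolkit.

```python
from collections import defaultdict
from statistics import median

def analyze_paths(events, final_screen="confirmation"):
    """Summarize time spent per screen and where sessions stop.

    `events` is a list of (session_id, screen, timestamp) tuples;
    `final_screen` marks a completed task.
    """
    durations = defaultdict(list)   # screen -> list of dwell times
    last_screen = {}                # session -> last screen reached

    sessions = defaultdict(list)
    for session, screen, ts in events:
        sessions[session].append((ts, screen))

    for session, visits in sessions.items():
        visits.sort()
        # Dwell time is the gap until the next screen; the final visit has no gap to measure.
        for (ts, screen), (next_ts, _) in zip(visits, visits[1:]):
            durations[screen].append(next_ts - ts)
        last_screen[session] = visits[-1][1]

    dwell = {screen: median(times) for screen, times in durations.items()}
    abandoned = [s for s, screen in last_screen.items() if screen != final_screen]
    return dwell, abandoned
```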
Determine satisfaction with UX A/B testing
UX testers can perform satisfaction analysis, often implemented through a form of A/B testing. They compare implementation options -- either two new designs or one current and one new -- based on user ratings. Users score each option either as a whole or for each step in the UX process. For A/B tests to work, users must actually drive the process. Testers cannot rely on automated data generation, so satisfaction analysis can be labor-intensive.
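As a rough illustration, the sketch below compares mean satisfaction scores for two variants and reports an approximate standard error, so a team can judge whether the gap outstrips rating noise. The function and sample scores are hypothetical.

```python
from statistics import mean, stdev
from math import sqrt

def compare_ratings(ratings_a, ratings_b):
    """Compare user satisfaction scores (e.g., 1-5) for variants A and B.

    Returns the mean difference and an approximate standard error.
    """
    diff = mean(ratings_b) - mean(ratings_a)
    se = sqrt(stdev(ratings_a) ** 2 / len(ratings_a) +
              stdev(ratings_b) ** 2 / len(ratings_b))
    return diff, se

# Example: per-user scores collected after completing the same task in each variant
a_scores = [3, 4, 3, 2, 4, 3, 3, 4]
b_scores = [4, 4, 5, 3, 4, 5, 4, 4]
diff, se = compare_ratings(a_scores, b_scores)
print(f"B - A = {diff:.2f} (±{se:.2f} SE)")
```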
User satisfaction with a given experience is only one metric that development teams can and should use to judge UX. While users' satisfaction with the app experience is important, most teams aim to meet business goals, and user satisfaction is just a part of that target. CIOs and senior executives might combine direct user scoring efforts with another UX testing mechanism that addresses the business value of the experience changes.
Choose the right UX testing methods
It's difficult to pick a single best approach to UX testing. How much of the application's interface the team has already developed influences which method to choose. If the UI or app is still conceptual, the collaborative approach is essential to bring user input into the design. However, to assess prototype interfaces, a statistical testing framework most effectively weeds out options based on simple performance metrics. To validate modifications, some form of satisfaction testing fits the scenario.
Expect UX testing to involve at least two of these four methods. Test with the process appropriate to the project and phase of development. Don't force all the UX measurement goals into a single-step test program, as it will inevitably result in disappointment at best and failure at worst.