Improving testing architecture when moving to microservices
Gamesys, the developer and operator of many online games, rethought its approach to testing during the move from a monolith. A pyramid made software tests more manageable.
Organizations that move to microservices for application development can take the opportunity to improve their testing procedures as well. This can lead to faster, more manageable assessments that ease debugging for developers.
Gamesys test architect Tomasz Borys and his team were able to create a better testing architecture during a transition to microservices. They refactored large, seven-hour tests to run in 15 minutes, Borys explained at the SauceCon 18 conference in San Francisco. His team broke large procedures into smaller, reusable testing components that run in parallel across a cluster of Selenium browser automation servers. Additionally, they adopted other effective practices, such as including QA earlier in the development process, aligning tests with microservices and weaving these components into tests that mimic common customer flows, or clickstreams, through the application.
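Borys didn't walk through Gamesys' harness code, but the fan-out pattern he described is a standard one: each test asks a Selenium Grid hub for its own browser session, and the hub distributes those sessions across its registered nodes. A minimal sketch in Java, assuming a hypothetical internal hub URL:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSessionFactory {
    // Hypothetical hub address -- point this at your own Selenium Grid.
    private static final String HUB_URL = "http://selenium-hub.internal:4444/wd/hub";

    // Each test calls this to get a fresh browser; the grid decides
    // which node actually runs it, so tests parallelize cluster-wide.
    public static WebDriver newSession() throws Exception {
        return new RemoteWebDriver(new URL(HUB_URL), new ChromeOptions());
    }
}
```

The parallelism itself comes from the test runner rather than the driver code -- for example, TestNG's parallel setting or JUnit 5's junit.jupiter.execution.parallel.enabled property.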
Team boundaries present testing challenges
Before Gamesys rethought its testing architecture, its software development was split across client-side and core development teams. The client team focused on front-end application development, while the core team focused on back-end services. However, these separate teams often managed the same swaths of code, which caused a variety of communication and testing challenges.
The back-end application was a monolith. If someone on the client team sought to add functionality, such updates would have to integrate with services on the monolith. Bugs sometimes required fixes on the back end.
"As the situation got more complex, the synchronization between the client and core teams would get sloppy," Borys said. The overlapping responsibility and complexity of a monolithic architecture led to release delays.
Adopt a testing pyramid
Gamesys shifted to a microservices approach, which made it easier to decouple application functionality across the entire stack. The organization broke its teams into different service-focused categories, such as payments, members and front end, which made it easier to operate and test independently during software releases.
This reorganization gave Gamesys a chance to reassess its testing architecture. "We did not want to make the same mistakes [as] with the client and core model," Borys said.
As part of this new strategy, Gamesys switched to a three-tier pyramid model to keep the size and complexity of its test suite in check. With this structure, Gamesys aims to cover small, medium and large tests:
- Large tests are 10% or less of the overall regimen and facilitate acceptance testing.
- Medium tests account for 20% to 40% and serve as integration tests.
- Small tests make up about 50% of the total regimen and correspond to basic unit tests.
"Try to analyze any cases where you can push tests down the pyramid," Borys recommended -- meaning, if a team pursues a similar setup, it should always strive to remake large assessments as medium-sized ones and medium tests small-sized.
In the past, when Gamesys' developers added a feature, they would throw it over the wall to QA, who would initiate an end-to-end test for the code. Now, before they develop a feature, developers collaborate with QA to identify how the addition will change testing so that QA can plan an efficient strategy. When QA identifies a bug within a large or medium assessment, the team analyzes the test to see if it can be rewritten into smaller components, which run more quickly and earlier in the development lifecycle.
Make tests smaller
A key part of Gamesys' strategy is for the teams to identify ways to break larger tests into smaller, reusable testing components. Before this effort, Gamesys performed an elaborate end-to-end regression test that took seven hours to execute. The procedure was so complex that it often wouldn't pass, and it was hard to identify whether the failure was caused by a flaky assessment or a software bug.
After a thorough analysis of the regression test, Gamesys built a library of the testing components that brought the highest value to the finished software product and that could execute in a maximum of 15 minutes. The component testing library grew out of two problems: how to assess the site as a whole and how to test the functionality of particular components. "We had to come up with a solution to satisfy these requirements and improve the architecture for testing," Borys said.
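Borys didn't show Gamesys' library on stage, but a testing component in this style typically wraps one slice of the UI behind a small, reusable API, much like a page object. A sketch with invented locators and class names:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical component object wrapping the login UI. The element IDs
// and method names are invented for illustration.
public class LoginComponent {
    private final WebDriver driver;

    public LoginComponent(WebDriver driver) {
        this.driver = driver;
    }

    // Callers exercise "log in" as one step without knowing the
    // internals -- if the login page changes, only this class changes.
    public void logIn(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}
```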
Additionally, in the past, Gamesys' QA engineers created test objects that were difficult to break apart. As a result, they created a new assessment for each customer flow, and each one had to be updated separately when the service changed.
Gamesys' new component-oriented strategy asks QA engineers, whenever possible, to create a library of test objects that mirror the functionality of microservices. The team can string these testing components together to construct higher-level assessments. Then, when the core functionality of a microservice changes, a tester only has to update the component associated with it, which automatically updates all of the higher-level tests built on that component in the library. This strategy enables Gamesys to mimic the common customer flows it identified for its user base.
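Composed together, such components read like the customer flow itself. In the sketch below, PaymentComponent and LobbyComponent are hypothetical siblings of the LoginComponent above, and the test reuses the grid session factory from the earlier sketch:

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical sibling components, built the same way as LoginComponent.
class PaymentComponent {
    private final WebDriver driver;
    PaymentComponent(WebDriver driver) { this.driver = driver; }

    void deposit(double amount) {
        driver.findElement(By.id("deposit-amount")).sendKeys(String.valueOf(amount));
        driver.findElement(By.id("deposit-submit")).click();
    }
}

class LobbyComponent {
    private final WebDriver driver;
    LobbyComponent(WebDriver driver) { this.driver = driver; }

    void openGame(String name) {
        driver.findElement(By.linkText(name)).click();
    }
}

class DepositAndPlayFlowTest {
    @Test
    void memberCanDepositAndOpenAGame() throws Exception {
        WebDriver driver = GridSessionFactory.newSession();
        try {
            // Each line is one reusable component; if the payments
            // service changes, only PaymentComponent needs an edit.
            new LoginComponent(driver).logIn("demo-user", "secret");
            new PaymentComponent(driver).deposit(20.00);
            new LobbyComponent(driver).openGame("Demo Game");
        } finally {
            driver.quit();
        }
    }
}
```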
"This approach reduces the fears around changing software applications," Borys said. "If you are passionate, you can push for something that makes life easier. With component objects, you can create whatever you want without knowing about the internals and then reuse them in different test suites."