Find the right automation test cases
One piece of software might be well-suited for test automation, while another might not be. Here are the factors to look for when you need to settle on your test strategy.
Michael Bolton and James Bach famously drew the distinction between testing and checking, and the two support Cem Kaner's argument that testing is by its nature investigative. They use the term checking for work that is more confirmatory, such as applying algorithmic decision rules to make specific observations. In other words, to automate a test, you need to turn it into an algorithm that can run on its own and produce a simple red or green light. Once automated, these checks are generally cheap, easy and relatively fast to run.
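To make the red-or-green idea concrete, here is a minimal sketch; the check function and its color strings are invented for illustration, and in practice an assertion framework plays this role:

```python
# The essence of a check: an algorithmic decision rule that runs
# unattended and yields a simple red or green light.
def check(observed, expected):
    return "green" if observed == expected else "red"

print(check(2 + 2, 4))  # -> "green"
print(check(2 + 2, 5))  # -> "red"
```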
But what makes it easy to create those automated checks? Certain factors can help us identify the automation test cases that truly change the game. Let's look at some of the factors that go into making checks easy to automate.
How to decide when to automate tests
Close examination of the following areas will help you determine whether and how to automate a test case.
Isolation. Software is much easier to test if we can examine, poke and prod just one piece -- be it a REST API, a piece of user interface or a code function. This isolation is a big benefit because it frees you from considering the state of other subsystems. Code that is accidentally coupled with other code is hard to test by itself. Take an addition example: If we isolate the GUI from the business logic, we can simply pass 2+2 to the business logic, get 4 back and have a check. Likewise, we can click 2+2 and make sure the GUI sends that message to the business logic and displays whatever is passed back. While the example is trivial, the challenge is to decide how much complexity a piece can hold before you break it down further.
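As a rough sketch of that split, assume a hypothetical Calculator (business logic) and CalculatorScreen (UI layer); neither name comes from a real framework, and a real GUI check would also need a driver, but the shape of the two isolated checks is the point:

```python
# A minimal sketch of isolation, assuming a hypothetical split between
# business logic (Calculator) and a thin UI layer (CalculatorScreen).
class Calculator:
    def evaluate(self, expression: str) -> int:
        left, right = expression.split("+")
        return int(left) + int(right)

class CalculatorScreen:
    def __init__(self, logic):
        self.logic = logic
        self.display = ""

    def press_equals(self, expression: str) -> None:
        # The screen only forwards input and shows whatever comes back.
        self.display = str(self.logic.evaluate(expression))

def test_logic_in_isolation():
    assert Calculator().evaluate("2+2") == 4

def test_screen_forwards_and_displays():
    class StubLogic:  # stand-in, so the UI check ignores the real math
        def evaluate(self, expression):
            return 4
    screen = CalculatorScreen(StubLogic())
    screen.press_equals("2+2")
    assert screen.display == "4"
```

Run under pytest, each check exercises exactly one piece: the first ignores the screen entirely, and the second ignores the real arithmetic.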
Complexity. It is possible to have code so simple that isolation is not worth pursuing, but you can also have the opposite problem. Imagine a completely isolated algorithm designed to simulate a nuclear reaction. The user enters the amounts of different inputs (plutonium, uranium, enriched substances and water) along with the temperature, and the computer graphs the impact over time. Because the number of combinations is incredibly high, predicting the right answer to even one scenario would take a great amount of time with a spreadsheet. A tester could spend a month creating test cases and only cover a handful of scenarios. If the app could be structured in a way that was less complex, like a Lego castle built of well-tested building blocks, then it would be easier to automate the checks. Put differently: If your module has high cyclomatic complexity, it will require more tests to cover. It is often possible to compose such a module out of smaller components that are easier to cover.
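Here is a sketch of that composition, with a reactor model and formulas invented purely for illustration (this is not real physics); each small block gets its own cheap check, and the complex behavior is just their combination:

```python
# Hypothetical reactor pieces: each block is small enough to check
# exhaustively, and the complex simulation composes them.
def decay_heat(mass_kg: float, step: int) -> float:
    return mass_kg * 0.95 ** step       # one tiny, checkable rule

def coolant_effect(water_kg: float) -> float:
    return -0.1 * water_kg              # another tiny, checkable rule

def temperature_delta(step: int, mass_kg: float, water_kg: float) -> float:
    # The "complex" calculation is a composition of checked blocks.
    return decay_heat(mass_kg, step) + coolant_effect(water_kg)

def test_no_mass_means_no_decay_heat():
    assert decay_heat(0.0, step=10) == 0.0

def test_coolant_always_cools():
    assert coolant_effect(5.0) < 0
```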
Setup. If the setup isn't automated, then the check will not be automated either. In addition, setup often has a multiplier effect on productivity. In our nuclear reactor example, we can add import and export commands that drive the system to a certain state and save it. Once that capability exists, you can save a series of changes to disk. Rechecking then becomes as easy as loading step one, taking a step, exporting the result and comparing it to yesterday's run. This is the essence of a check: It confirms that a set of predetermined things still act as expected, as opposed to investigating new things.
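Here is a minimal sketch of that recheck loop; advance, export_state and import_state are hypothetical stand-ins for the real system's commands, and pytest's tmp_path fixture substitutes for yesterday's saved export:

```python
import json
from pathlib import Path

def advance(state: dict) -> dict:
    # Stand-in for "take one step" in the system under test.
    return {"step": state["step"] + 1,
            "temperature": state["temperature"] * 0.99}

def export_state(state: dict, path: Path) -> None:
    path.write_text(json.dumps(state, sort_keys=True))

def import_state(path: Path) -> dict:
    return json.loads(path.read_text())

def test_recheck_against_saved_run(tmp_path):
    # "Yesterday's" run: drive the system one step forward and save it.
    baseline = tmp_path / "step_one.json"
    export_state(advance({"step": 0, "temperature": 500.0}), baseline)

    # Recheck: load step zero, take a step, compare with the export.
    rerun = advance({"step": 0, "temperature": 500.0})
    assert rerun == import_state(baseline)
```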
Surprise. The types of problems you look for can influence whether something makes a good automation test case. This would include situations where you hear yourself saying, "Oh, we never thought about what it should do under those conditions." In those cases, where the problem isn't one you ever imagined facing, it is hard to even formulate what the check should be.
Dependencies. Strictly speaking, it might be possible to mock an external API such as Facebook's, LinkedIn's or Twitter's. But if that API contains information that matters, such as a security certificate or special code -- especially if it changes over time -- then you have a host of problems. The mock could be hard to set up, the information could be complex, the results could be difficult to predict and the problems you are hunting for could be exactly the unwelcome surprises described above. Pay close attention to testability in design for external dependencies, even for things as simple as a database or file system.
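For contrast, here is how cheap a stub is when the dependency is simple; ProfileService and its social_api collaborator are hypothetical names, and Python's standard unittest.mock supplies the fake:

```python
from unittest.mock import Mock

class ProfileService:
    def __init__(self, social_api):
        self.social_api = social_api

    def display_name(self, user_id: str) -> str:
        profile = self.social_api.get_profile(user_id)
        return profile["name"].title()

def test_display_name_without_the_real_api():
    fake_api = Mock()
    fake_api.get_profile.return_value = {"name": "ada lovelace"}

    service = ProfileService(fake_api)

    assert service.display_name("u123") == "Ada Lovelace"
    fake_api.get_profile.assert_called_once_with("u123")
```

The trouble starts when the real responses carry certificates, tokens or payloads that drift over time; then the canned dictionary above becomes a fiction that is hard to keep honest.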
Testability: The great umbrella term
All the ideas here make the software more testable. Software testability, however, has a precise definition: the degree to which software supports testing.
Google's reCAPTCHA, for example, is not testable by that definition, because its entire purpose is to create images that a computer cannot recognize. A human, of course, would be able to test it. This definition of testability is more in line with Bolton and Bach's definition of checking. In Design Patterns Explained, Alan Shalloway claims that a system with poor cohesion, tight coupling, redundancy and lack of encapsulation is difficult to test. It's no coincidence that those qualities sound a lot like the examples above.
Changing the environment for testability usually means adding to old systems. Some people call that work paying down technical debt. My colleague Sherry Dood calls such changes enablers for velocity.
Once the testability work is done, someone must go back and add checks for the old functionality. That work is often tedious and unrewarding. Rather than retrofit old functionality, start off with testability. I have seen far too many lists like the one above turned into checklists that are then ignored. Other times, the list is applied mechanically, which can be a worse outcome because it slows down development.
Lisa Crispin, co-author of Agile Testing: A Practical Guide for Testers and Agile Teams, suggests a different approach. At the beginning of any project or change, simply ask: How can we test this more easily? This will lead to more testable designs. In my experience, features of testability, such as export/import state, tend to provide benefits for the team and the customer.
I do not suggest asking "How can we test this?" as a formal step in a process, nor do I suggest we add it to a checklist used in determining automation test cases. Instead, it is more of a continuous conversation -- a habit. The way to get there is through skill development and encouraging curiosity.
The author thanks Lisa Crispin and Klára Jánová for their peer review on this article.