
An overview of data-driven API testing

API testing -- including data-driven API testing -- differs from other software tests in its overall process and relevant metrics. Analytics and automation play a key role.

APIs are essential components of software interoperability. Properly designed and implemented APIs allow one program to securely access another program's data or operations.

The program that provides the data or service and the program that accesses it have no knowledge of, or dependency on, each other. As a result, API developers face unique challenges. Data-driven API testing arms developers with critical insights into API performance.

To improve software quality, learn the model behind data-driven API tests and their components, how test automation relates to test analytics, and the benefits this approach offers API development. Then, check out the tools that enable such tests.

What is data-driven API testing?

Developers can build application components to perform an array of tasks based on a sequence of commands and inputs. Then, software testers follow steps to emulate user behavior going through these various tasks. For example, a software test simulates a series of user inputs and then gauges the actual output against the expected output. The software passes the test if actual and expected results match.

APIs do not perform a series of tasks the way other software components do. They facilitate data transfer between requestors and providers, so API testing emphasizes data access rather than the logic behind a user's actions. This is the heart of data-driven API testing. Where tests of a process use an assortment of test logic to simulate user actions, data-driven testing relies on limited, fixed test logic and instead supplies a range of data test cases to exercise it. There must be enough varied data to verify that the software's underlying operational rules and boundary conditions work correctly.

Data access and handling, therefore, drive these kinds of API tests. They follow a request/response model and involve three key components (a short code sketch follows the list):

  1. a data set or source, such as files, spreadsheets, Java Database Connectivity (JDBC) sources, Open Database Connectivity (ODBC) sources and comma-delimited text files;
  2. test logic that exercises the API's function and drives additional steps, such as database queries, encryption and computational results; and
  3. a test script or framework that provides an overarching series of tests, along with comparative pass/fail checks of actual and expected results.
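
To make these components concrete, the following minimal Python sketch maps each one to code. The CSV file name, its columns and the discount rule are illustrative assumptions, not taken from any particular API:

    # Component 1: data source. An illustrative CSV file of test cases, e.g.
    #   order_total,customer_tier,expected_discount
    #   100.00,gold,10.00
    #   100.00,standard,0.00
    import csv

    def calculate_discount(order_total, customer_tier):
        # Component 2: test logic. The operation under test (assumed rule:
        # gold-tier customers receive a 10% discount, everyone else none).
        return order_total * 0.10 if customer_tier == "gold" else 0.0

    def run_cases(path="discount_cases.csv"):
        # Component 3: test framework. Loop over every data row, run the fixed
        # test logic and record a pass/fail comparison of actual vs. expected.
        results = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                actual = calculate_discount(float(row["order_total"]), row["customer_tier"])
                passed = abs(actual - float(row["expected_discount"])) < 0.01
                results.append({"case": dict(row), "passed": passed})
        return results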

For example, suppose a business provides analytical services using its proprietary data. The business develops an API for users to make analytical queries and requests from that data. In the test, the API requests include searching for selected data, transforming or normalizing different data sets, and making calculations. Data-driven API testing invokes a series of analytical requests through this API and then compares the actual and expected results of each request.
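
One way such a test might look, sketched with the pytest and requests libraries, appears below. The endpoint URL, query parameters and expected values are hypothetical stand-ins for the business's real analytics API:

    # Data-driven tests against a hypothetical analytics endpoint; every value
    # here is an illustrative assumption, not a real service or data set.
    import pytest
    import requests

    BASE_URL = "https://api.example.com/analytics/query"  # hypothetical endpoint

    # Each tuple is one data test case: a request payload and its expected result.
    CASES = [
        ({"metric": "revenue", "region": "emea", "op": "sum"}, 1250000),
        ({"metric": "revenue", "region": "apac", "op": "avg"}, 41300),
        ({"metric": "orders", "region": "all", "op": "count"}, 98413),
    ]

    @pytest.mark.parametrize("payload,expected", CASES)
    def test_analytics_query(payload, expected):
        # Fixed test logic: send the request, then compare actual and expected results.
        response = requests.post(BASE_URL, json=payload, timeout=10)
        assert response.status_code == 200
        assert response.json()["result"] == expected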

The role of data analytics in API testing

As continuous development paradigms accelerate software development cycles, the need for testing increases. An API can undergo frequent testing as developers add or update features, and software testers cannot realistically keep up with the required volume of mainline testing through manual effort alone.

Therefore, automation has become a key feature of software testing and the greater CI/CD toolchain. But automation alone isn't enough: developers, project managers and even executives need to understand what's happening beneath the automation layer.

In API and other software development, analytics provides these necessary insights when automation increases test velocity and volume. Analytics tools ingest and analyze large volumes of test results to provide details about the test cycle. Development teams can then review this information to gauge the outcome and identify failures to address.

The analytics used in API testing typically provide straightforward pass/fail results for each type of test. The exact tests depend on the API, its purpose and the test suite created for it. As an example, an API that supports online shopping might undergo test analysis for a range of user activities, including the following (the logon check is sketched after the list):

  • logon success or failure;
  • validation of security features, such as Secure Sockets Layer/Transport Layer Security;
  • access to and browsing of inventory;
  • interaction with a virtual shopping cart; and
  • ordering with address and payment data.
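
To illustrate the first item in that list, a data-driven logon check might iterate over valid and invalid credential sets and verify that each attempt behaves as expected. The endpoint, field names and status codes here are assumptions:

    # Sketch of a data-driven logon check for a hypothetical shopping API.
    import requests

    LOGIN_URL = "https://shop.example.com/api/login"  # hypothetical endpoint

    # Each case pairs a set of credentials with the HTTP status the API
    # is expected to return for it.
    LOGIN_CASES = [
        ({"user": "alice", "password": "correct-horse"}, 200),  # valid credentials
        ({"user": "alice", "password": "wrong"}, 401),          # bad password
        ({"user": "", "password": ""}, 400),                    # missing fields
    ]

    def run_login_checks():
        results = []
        for credentials, expected_status in LOGIN_CASES:
            response = requests.post(LOGIN_URL, json=credentials, timeout=10)
            results.append({
                "case": credentials["user"] or "<empty credentials>",
                "passed": response.status_code == expected_status,
            })
        return results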

As automation drives each test through varied scenarios and data sets, analytics tools assess and document each test's success or failure. Tools typically summarize and share results in human-readable reports, such as dashboards.
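
A rough picture of that summarization step, assuming each test produces a pass/fail record like those in the logon sketch above, is a small aggregation pass such as the following:

    from collections import Counter

    def summarize(results):
        # Roll raw pass/fail records up into the kind of totals a dashboard shows.
        counts = Counter("pass" if r["passed"] else "fail" for r in results)
        total = sum(counts.values())
        print(f"{counts['pass']}/{total} tests passed")
        # List the failing cases so developers know which inputs to investigate.
        for r in results:
            if not r["passed"]:
                print(f"  FAILED: {r['case']}")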

Data analytics in API testing can yield several benefits, including the following:

  • Faster testing. Teams can execute automated testing without human intervention, often during off-hours.
  • Fewer errors. Automation reduces test errors and oversights, ensuring that every test runs the same way in every test cycle.
  • Better test clarity. Automation and analytics document every test. Development teams can use analytics tools to understand the precise reason a given test failed.
  • Faster fixes. Broader testing and better test clarity surface problems more quickly than other testing methods. In turn, this can speed up code remediation for the next iteration. In the event of persistent or widespread test failures, managers should shift focus to training, coding techniques or other best practices for better overall code quality.

Tools for data-driven API testing

Data analytics requires tools, which are often added to the CI/CD toolchain. There are numerous API test automation and test data management tools for developers, including Curiosity Software, Datprof Runtime, Delphix, GenRocket and Loadmill. These tools' capabilities include automated test design, synthetic test data generation and data masking.

Software teams should research and evaluate any potential API testing tool for usability and interoperability before adding it to the CI/CD toolchain.

How to analyze failed tests

In production, IT teams monitor APIs to gather metrics such as call volume, uptime, response time and error rates. In development, however, testing focuses on pass/fail results. Every API does a different job, so tests depend on the specific API and the back-end functionality it exposes.

The point of testing is to drive calls to the API using varied input and then measure the success or failure of the results. For example, the tester can repeat a functional test 1,000 times with different data sets. How many of those API functional calls were successful? More importantly, what were the circumstances and criteria of the failed tests? This kind of insight helps developers understand and fix issues quickly.
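
Carrying that idea further, one simple way to mine failed cases for patterns is to group them by an input attribute, such as region, data type or payload size. The result-record fields in this sketch are assumptions:

    from collections import defaultdict

    def group_failures(results):
        # Group failed cases by an assumed 'criteria' attribute (for example,
        # a region or customer tier) to expose patterns in the failures.
        failures_by_criteria = defaultdict(list)
        for record in results:
            if not record["passed"]:
                failures_by_criteria[record.get("criteria", "unknown")].append(record)
        # Report the criteria with the most failures first.
        ranked = sorted(failures_by_criteria.items(), key=lambda kv: len(kv[1]), reverse=True)
        for criteria, records in ranked:
            print(f"{criteria}: {len(records)} failed cases")
        return failures_by_criteria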
