Testing for performance, part 1: Assess the problem space
In the first article of this series on testing for performance, Mike Kelly outlines ways for you to understand your context and the system and figure out where to start testing.
Imagine you're a competent programmer who has worked with a number of different clients in many different technologies and who has just been assigned to your first project as a performance tester. To complicate matters, the technology involved happens to be fairly complicated and unfamiliar to you; this is not a project similar to one in your portfolio.
Last year I received a phone call from that programmer -- a former co-worker from a project years before. He knew I had a passion for testing, he happened to be in the same city, and he figured he might be able to get some ideas from me about where to start.
This three-part series of articles on testing for performance is based on the conversation we had, where I outlined how I might approach the problem if I were him, working in that context. The series is broken into the following articles:
- Assess the problem space: Understand your context and the system, and figure out where to start
- Build out the test assets: Stage the environments, identify data, build out the scripts, and calibrate your tests
- Provide information: Run your tests, analyze results, make them meaningful to the team, and work through the tuning process
Depending on your project's context, you may find yourself doing everything at the same time. In other contexts, you might take more of a phased approach. Breaking the content out into those topics gave me a useful framework for talking to my developer friend. Throughout this series, we will look at possible activities that may take place, as well as possible artifacts that one might produce.
In this first article, we will look at assessing the initial problem space so that we can move forward with the first round of performance testing. This can include developing an understanding of the goals of your testing, understanding the system(s) from various perspectives, understanding how the system(s) will be used, and developing the initial scope for your tests.
Start a strategy
There is a difference between a test strategy and a strategy document. Regardless of the project, you're going to need a test strategy. Sometimes you're going to need a strategy document. If you think documenting your ideas will help you manage them, then by all means document them.
A good place to start for thinking about your strategy is Scott Barber's article on developing an approach to performance testing. In that article, Scott provides nine heuristics for thinking about the problem. The first three heuristics that Scott provides are context, criteria and design:
- Context: Project context is central to successful performance testing
- Criteria: Business, project, system and user success criteria
- Design: Identify system usage and key metrics; plan and design tests
Those first three heuristics capture the essence of what assessing the problem space is all about. In the article, Barber also provides some useful distinctions for different types of criteria: requirements, goals, objectives, targets and thresholds. My experience has been that gaining an upfront understanding of those criteria is the most important task in getting started on a test strategy. What are we trying to do and why are we trying to do it? Once we know that, we can answer the question of how.
As you think about your performance test strategy, another useful place to start is the Satisfice Heuristic Test Strategy Model (PDF) by James Bach. You can use this model to help provide structure to your thinking. Open a blank document and write down everything you think you know and all the questions you have. Once you can't think of anything else to write, open the Heuristic Test Strategy Model. As you look at each list contained in the model, write down any additional information you may know or new questions that occur to you. I often find getting started this way to be more effective than using a strategy template. It doesn't anchor me to thinking about the problem in a specific way.
Understand the system
As you work to figure out how you're going to test, you need to understand what you're testing. I'm a visual learner, so for me that means pictures -- and lots of them. If diagrams don't exist, I'll get someone to start drawing them on whiteboards and I'll take pictures. If I need to, I'll start drawing some of them myself. I find pictures to be the easiest way to identify misunderstandings and missing information.
There are three things I feel like I need to know before I'm comfortable with the application I'm testing:
- What are the various applications, data sources, services and protocols in the system?
- What do the network and hardware infrastructure look like?
- What do the use cases and business workflows look like?
Some common places to find this information include network diagrams, deployment diagrams, activity diagrams, dataflow diagrams, use case diagrams and business workflows. As I start collecting these views of the system, I'll annotate each one with the little details that I uncover. I'll also create a corresponding list of open questions.
Some initial questions to answer around the various applications, data sources, services and protocols in the system include the following:
- What applications and services comprise the system? Which are internal and which are external?
- Where is data stored and read from? In what formats?
- What software processes are involved? How often do they run?
- What are the differences between the production environment and the environment you're testing?
- What are the process names, queue names, Web service names and method calls that are used for the different pieces to communicate? What are the various protocols used?
- Are there licensing issues or data limitations that you need to be aware of?
Some initial questions to answer around the network and hardware infrastructure:
- What are the servers and appliances/devices, and how are they all connected? What are the various names, specifications and configurations for each?
- Is there any load balancing? Where is it and what kind?
- If the test environment is different from the production environment, do we know the differences between the two? Do we have this information for both?
Some initial questions to answer around what the system does and who uses it:
- Who uses this system and why?
- What types of transactions take place?
- Are there any calendar-based transactions (end of day, monthly, yearly, etc.)?
As you take in all this information, it's easy to get overwhelmed. As a performance tester, you don't need to be an expert in each of these areas, but you do need to understand the basics. Developing some foundational knowledge will be helpful, but mostly you just need to be willing to research new items and concepts as they come up.
As you work through the information, start capturing a few things:
- A list of assumptions (e.g., "currently XML is not cached; in the future it may be")
- Notes on the differences between your test environment and the production environment (e.g., different server programs co-located on the same machine)
- The key metrics you think will be helpful when it comes to tuning or debugging (e.g., memory/CPU usage, database calls, live sessions versus active sessions)
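If you want to start collecting some of those metrics early, even a small script can help. The sketch below is a minimal example, assuming you can run Python on or near the box under test and that operating-system-level CPU and memory figures are what you're after; the sample interval, duration and output file are arbitrary choices, and metrics such as database calls or live versus active sessions would come from application or database instrumentation instead.

```python
# Minimal metric-sampling sketch (illustrative only). Requires the psutil
# library; writes one CPU/memory sample per interval to a CSV file.
import csv
import time

import psutil

SAMPLE_SECONDS = 5        # how often to take a sample (assumption)
DURATION_SECONDS = 300    # how long to monitor (assumption)

with open("metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "cpu_percent", "memory_percent"])
    end = time.time() + DURATION_SECONDS
    while time.time() < end:
        # cpu_percent() blocks for the sample interval and returns utilization
        cpu = psutil.cpu_percent(interval=SAMPLE_SECONDS)
        mem = psutil.virtual_memory().percent
        writer.writerow([time.strftime("%H:%M:%S"), cpu, mem])
```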
Draft your usage models
In his article on the User Community Modeling Language (UCML), Barber shows a method for visually depicting complex workloads and performance scenarios. When applied to performance testing, UCML can "serve to represent the workload distributions, operational profiles, pivot tables, matrixes and Markov chains that performance testers often employ to determine what activities are to be included in a test and with what frequency they'll occur."
I've used UCML diagrams to help me plan my performance testing, to document the tests I executed, and to help elicit performance requirements. Models like this will enable you to create reasonably sensible performance test scenarios. The power behind a modeling approach like this is that it's intuitive to developers, users, managers and testers alike. That means faster communication, clearer requirements and better tests.
For example, the figure below shows a sample UCML diagram for an online bookstore (reused from Barber's publication with permission). This example has four types of users: new and existing users of the site, Web site administrators and vendors (people who sell their books on the Web site). Each user starts at the home page; from there, their paths vary depending on what functions they need to perform. Some functions are unique to certain roles, and others are shared between roles. The model shows us all the potential paths through the application, as well as the expected percentages of users for any given role or for any given path through the model.
[Figure: Sample UCML diagram for an online bookstore, showing the paths and usage percentages for each user type]
When performing this type of modeling I start with the end user in mind. What will the user do with the software? What types of transactions do they care about? What time of day will they do it? What needs to be set up in the systems for them to be successful? The list of questions goes on and on. For this type of modeling, having detailed Web analytics can be invaluable. If the system is already in production and you are working on a new release, get your hands on production usage data. If your system is new and you can't pull from production numbers, that's OK. It just means you have a bit more work in front of you.
When you start your modeling, think first about what actors or inputs there are to the system (such as users, administrators, customers, batch processes or existing scheduled tasks). These inputs then branch into the various activities available to them. (For example, an admin might create a new account or delete an existing account.) For each branch, assign the percentage of use to these activities. When coming up with your percentages, a sometimes useful approach is to model what you believe a "peak hour" in production will look like.
The final product of usage modeling is a diagram where each path (from input through endpoint) represents a possible scenario that you might want to include in your test. It might be that you will create a test script for each possible scenario (a script for every path), or you might narrow your selection based on other criteria. It's not necessary to be complete; the main focus should be on paths that are of particular business concern or technical concern.
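To make the modeling idea concrete, here is a minimal sketch of how a usage model like the bookstore example might be captured as plain data and used to pick paths for virtual users. The roles, activities and percentages are hypothetical placeholders, not figures from Barber's diagram.

```python
# Minimal usage-model sketch (illustrative only). Roles and activities are
# hypothetical; the weights are the "percentage of use" numbers from the model.
import random

USAGE_MODEL = {
    "existing_user": {
        "weight": 70,
        "activities": {"search_books": 50, "place_order": 30, "track_order": 20},
    },
    "new_user": {
        "weight": 20,
        "activities": {"create_account": 40, "search_books": 60},
    },
    "administrator": {
        "weight": 5,
        "activities": {"manage_accounts": 100},
    },
    "vendor": {
        "weight": 5,
        "activities": {"list_new_book": 60, "update_inventory": 40},
    },
}

def pick_scenario(model):
    """Pick one (role, activity) path using the model's percentages."""
    roles = list(model)
    role = random.choices(roles, weights=[model[r]["weight"] for r in roles])[0]
    activities = model[role]["activities"]
    activity = random.choices(list(activities), weights=list(activities.values()))[0]
    return role, activity

# Each virtual user in a test could call pick_scenario() to decide which
# path through the application to exercise.
print(pick_scenario(USAGE_MODEL))
```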
When modeling, remember that not all performance testing focuses on end-user response time; sometimes there are better measures. Many times you'll find service-level agreements that focus on resource utilization and throughput, specified in transactions per minute and percent usage. Be careful that you don't focus too much on the end users. That's one more reason why all those different views of the system you're testing are helpful.
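As a quick illustration of thinking in those terms, the snippet below checks a measured throughput against an SLA stated in transactions per minute. The transaction count, window length and 120 TPM threshold are made-up numbers.

```python
# Illustrative throughput check against a hypothetical SLA.
completed_transactions = 4500        # counted during the test run (assumption)
elapsed_minutes = 30                 # length of the measurement window
sla_transactions_per_minute = 120    # hypothetical service-level agreement

throughput = completed_transactions / elapsed_minutes
status = "meets" if throughput >= sla_transactions_per_minute else "misses"
print(f"Throughput: {throughput:.0f} transactions per minute ({status} the SLA)")
```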
Figure out what to test first
If you haven't guessed it by now, in the assessment phase you're going to be collecting more data than you'll know what to do with, and you'll be coming up with more tests than you'll have time to execute. That's by design. Performance testing is a dynamic, always-changing process that is highly iterative and collaborative. In order for it to be collaborative, you need data in the format and terminology of those you'll be working closely with. For it to be iterative, you'll need different sets of tests, focused on different risks, with different priorities.
I often develop three broad categories of tests. I'll sort my test ideas into those categories and focus my testing on each type of test as time allows. The categories that will be useful to you may be different from mine (I normally find myself testing financial services applications), and that's OK. My categories are change in load, change in workflow, and change in system.
When I focus on the change in load, I normally start with the assumption that my users are actually doing what my usage model states they are doing. I pretend that those scenarios capture all the right behaviors and percentages. We know they don't -- the model is an approximation. But for this step, we say they do. These tests then ask the question, "If we increase the number of users, and they continue to perform the same activities, how does that affect the performance of the system?" When executing these tests, I increase the number of users with each successive test, with the end goal of producing a graph of response time (y-axis) versus number of users (x-axis).
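A dedicated load tool (LoadRunner, JMeter and the like) would normally drive these tests, but the sketch below shows the stepping idea in miniature, assuming a hypothetical URL in the test environment: each step adds concurrent virtual users, and the average response time per step gives you the points for that response-time-versus-users graph.

```python
# Minimal stepped-load sketch (illustrative only). The URL, step sizes and
# request counts are placeholders; real tests would use a load tool.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://test-env.example.com/home"   # hypothetical endpoint
USER_STEPS = [10, 25, 50, 100]                    # virtual users per step
REQUESTS_PER_USER = 20

def one_user(_):
    """Simulate one virtual user making a series of requests."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        urllib.request.urlopen(TARGET_URL).read()
        timings.append(time.perf_counter() - start)
    return timings

for users in USER_STEPS:
    with ThreadPoolExecutor(max_workers=users) as pool:
        per_user = list(pool.map(one_user, range(users)))
    all_timings = [t for timings in per_user for t in timings]
    average = sum(all_timings) / len(all_timings)
    print(f"{users} users -> average response time {average:.3f}s")
```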
When I focus on the change in workflow, I normally start with the assumption that my number of users is constant but that what they do changes. Here I might take the anticipated peak load and vary the percentages in the usage model (or perhaps even add or remove paths from the model). We know our model is just an approximation of usage, so with this testing we are trying to identify any variants of the model that will come back to bite us if we are wrong. We are asking the question, "What happens if the users begin favoring some tasks more and some tasks less?" You might run a number of tests where you just increase and decrease the way the percentage-of-use numbers are allocated in the UCML diagram. This testing may more accurately represent potential outcomes when new features are added (e.g., a Web service begins to be used more).
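In practice this can be as simple as defining a handful of alternate activity mixes while keeping the user count fixed. A small sketch, with hypothetical activity names and percentages:

```python
# Illustrative workflow-mix variants: same load, different activity weights.
BASELINE_MIX = {"search_books": 50, "place_order": 30, "track_order": 20}

WORKFLOW_VARIANTS = {
    "order_heavy":  {"search_books": 30, "place_order": 55, "track_order": 15},
    "search_heavy": {"search_books": 70, "place_order": 20, "track_order": 10},
}

for name, mix in {"baseline": BASELINE_MIX, **WORKFLOW_VARIANTS}.items():
    # Each mix should still account for 100% of the fixed user load.
    assert sum(mix.values()) == 100, f"{name} percentages must total 100"
    print(name, mix)
```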
When I focus on the change in system, I normally start with the assumption that my user load and model are correct but that the deployment diagram may change. The question we are asking is, "What happens when the state of the system changes?" This can mean different things, from changing queue depth, to adding a new server, to changing our load-balancing algorithm, to increasing database size. With some types of changes, you wouldn't expect to see any difference. With other changes, you may just be trying to confirm predictions about scalability or failover/recovery. These tests look at the physical environment (often ignored, in my experience) and try to provide information about how changes there can affect your key metrics and your end-user response times.
At the end of this phase
At this point you should have an understanding of the system you're testing. You should have a strategy for approaching the problem, along with a rich set of test ideas to prioritize and work from. You may have more questions than answers, and that's OK. You don't need complete understanding, but you need enough to communicate effectively with your business and technical stakeholders and to begin modeling risk, so you know what to focus your testing on first.
Here is a possible summary of some of the work products from the assessment phase:
- Performance test strategy document
- Various diagrams of the application(s) and system(s) with accompanying details, assumptions and open questions
- Various usage models for the application(s) and system(s) with accompanying details, assumptions and open questions
- Test idea list(s)
If you're in a contract-heavy or highly formalized project environment, you're now ready to write up a test plan, isolate environments and start developing scripts. If you're in a more agile environment, you may have already run some initial exploratory tests to prove out some ideas and assumptions and are now ready to start scripting and executing your tests at full speed. The important point to remember is that this phase is about understanding. To some extent, you'll always be learning more about the system (a fundamental aspect of testing is learning). This phase just focused more on learning and less on execution and providing information.
Summary of references for the assessment phase
- Developing an approach to performance testing by Scott Barber: This article outlines nine heuristics for thinking about the performance testing problem.
- User Community Modeling Language (UCML) v1.1 for Performance Test Workloads (PDF) by Scott Barber: This article provides an overview of usage modeling with UCML.
- Satisfice Heuristic Test Strategy Model (PDF) by James Bach: This model provides a useful set of heuristics for thinking about the problem and laying out your initial test ideas.
-----------------------------------------
About the author: Mike Kelly is currently a software development manager for a Fortune 100 company. Mike also writes and speaks about topics in software testing. He is the president of the Association for Software Testing, a co-founder of the Indianapolis Workshops on Software Testing (a series of ongoing meetings on topics in software testing), and a co-host of the Workshop on Open Certification for Software Testers. You can find most of his articles and his blog on his Web site, www.MichaelDKelly.com.