
Thursday, July 10, 2008

Software performance testing: You can't test everything

Scott Barber

It is rarely, if ever, possible to simulate all of the possible ways that an application will be used, thus making the selection of scenarios to include in a performance test one of the most important elements of predicting or estimating performance in production. Over the years, I've collected a series of heuristics or triggers that have helped me to do this. For some reason, I've been getting a lot of questions about this topic recently, so I thought I'd share my thought process with you.
Contractual obligations

There are a few situations wherein using an application's contractual obligations as a heuristic is not useful, such as when performance testing for an internal development effort.
Be that as it may, I've still made it a habit to always explore contractual obligations first.
When there are contracts, the challenge is usually obtaining and interpreting them, since contracts are often viewed as proprietary and are not made easily available. When your access to contracts is impeded, it may be necessary to remind those obstructing your access that you cannot do your best work without reviewing the actual contract: the specific language and context of any performance-related statement is critical to determining which scenarios need to be included during performance testing. For example, if a contract states, "The system shall allow users to update their personal information and present that information in a responsive manner," it implies that the "update personal information" scenario needs to be included during performance testing -- even though "in a responsive manner" is ambiguous and requires clarification.
A note of caution: Marketing documents often contain performance-related statements that may be legally binding in the United States. I recommend treating marketing claims as contractual obligations.
Most common application usage

The two most common mistakes I witness when testers choose usage scenarios for performance simulations are considering only the most common scenarios and becoming so focused on exceptions that the most common scenarios are forgotten. Instead of doing either, determine a reasonable array of possible scenarios. After collecting a list of what you believe are all of the activities a user can perform, circulate the list to the team along with the question, "What else can a user of any type possibly do with this application that isn't on this list?" I use this list as an input for my other heuristics.
Using the list, I suggest ranking the usage scenarios according to their expected frequency. This need not be an exact ranking; in fact, breaking the scenarios into the following five groups in roughly equal proportions is probably sufficient:
Extremely common
Quite common
Common
Rare
Very rare
As always, I recommend soliciting commentary from the team members. Ask what they think is missing, what they think is out of place, and why.
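For illustration only, a minimal Python sketch of that five-way split might look like the following. It assumes you already have a list ranked from most to least frequent; the scenario names are invented, and the exact proportions are unimportant.

```python
# Sketch: split a frequency-ranked scenario list into five roughly equal groups.
# The scenario names below are hypothetical examples, not from the article.

FREQUENCY_GROUPS = ["Extremely common", "Quite common", "Common", "Rare", "Very rare"]

def group_by_frequency(ranked_scenarios):
    """Partition a list ranked from most to least frequent into five groups."""
    groups = {label: [] for label in FREQUENCY_GROUPS}
    if not ranked_scenarios:
        return groups
    size = -(-len(ranked_scenarios) // len(FREQUENCY_GROUPS))  # ceiling division
    for i, scenario in enumerate(ranked_scenarios):
        label = FREQUENCY_GROUPS[min(i // size, len(FREQUENCY_GROUPS) - 1)]
        groups[label].append(scenario)
    return groups

if __name__ == "__main__":
    ranked = ["log in", "search catalog", "view item", "add to cart",
              "purchase item", "update personal information",
              "create blog content", "export order history",
              "bulk import", "admin password reset"]
    for label, scenarios in group_by_frequency(ranked).items():
        print(f"{label}: {scenarios}")
```

The grouping does not need to be this rigid; the point is simply to get every scenario into one of the five buckets so the team has something concrete to react to.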
Business-critical scenario(s)

After ranking possible scenarios by expected likelihood, I like to rank scenarios by business importance. The key to accomplishing this ranking is to ensure that you thoroughly understand the business goals for the application. For example, some applications are designed primarily to generate revenue and others primarily to generate publicity. For a revenue-generating application, "purchase item" would probably rank highly, while for the publicity application, "create blog content" may be the top-ranking scenario. For business criticality, I frequently group by the following categories:
Critical to the success of the application or business
Important to the success of the application or business
Desirable for the success of the application or business
Incidental to the success of the application or business
In this case, after you have a prioritized or categorized list of scenarios, it is generally valuable to solicit feedback from executive champions, clients and the sales and/or marketing team.
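To make the point concrete, here is a small hypothetical sketch (all scenario names and group assignments are invented) of why this ranking is kept separate from the frequency ranking: cross-checking the two can surface scenarios that are rare but critical, which a frequency-only selection would quietly drop.

```python
# Sketch: flag scenarios that are business-critical but infrequent.
# All names and assignments are hypothetical illustrations.

frequency = {
    "search catalog": "Extremely common",
    "purchase item": "Common",
    "update personal information": "Common",
    "month-end settlement": "Very rare",
}

criticality = {
    "search catalog": "Important",
    "purchase item": "Critical",
    "update personal information": "Desirable",
    "month-end settlement": "Critical",
}

for scenario, crit in criticality.items():
    freq = frequency.get(scenario, "unknown")
    if crit == "Critical" and freq in ("Rare", "Very rare"):
        print(f"Keep despite low frequency: {scenario} ({freq}, {crit})")
```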
Performance-intensive activities

It is also valuable to rank the scenarios by the most performance-intensive activity each scenario contains. The key to this ranking is having an understanding of relevant technologies and their implementations, which can often be gained by interviewing developers, architects, etc. and paying attention to performance complaints. Here, I typically sort into three groups:
Obviously performance-intensive scenarios
Potentially performance-intensive scenarios
Unlikely to be performance-intensive scenarios
Architects, developers, administrators and anyone currently using or testing the system will have valuable feedback to help guide this sort of grouping.
Areas of technical concern

Next I usually rank the list of scenarios by the degree of technical risk or concern related to performance. In doing this, I am generally trying to identify scenarios that employ new technologies or technologies that have been the source of performance challenges in the past. I tend to use the following groups:
Scenarios employing new technologies
Scenarios employing technologies of historic performance concern
Scenarios with no known technological concern with regard to performance
Architects, developers and administrators tend to have the most valuable feedback about this grouping.
Areas of stakeholder concern

Finally, I like to rank the list according to the degree of risk or concern expressed by stakeholders. This actually involves no real decision-making on my part; I simply interview internal and external stakeholders and ask what their concerns are related to performance. There is no need to rank or group these concerns other than to annotate which scenarios are associated with the concerns.
Consolidate and prioritize scenarios

At this point, what I usually do is spread my lists out on a table with the "contractual obligation" list under my writing hand and ensure the top several scenarios from each of the other lists are represented. When I find a scenario that seems important to include, I write it down. When I feel I have a reasonable balance between what I think I can accomplish and what I wish I could accomplish, I review my draft with the whole team for feedback.
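If you prefer something more mechanical than lists spread across a table, one possible way to approximate this consolidation is sketched below. The weights, group scores, scenario names and the fixed "budget" are all invented for illustration; the only rule taken directly from the approach above is that contractually obligated scenarios are always represented.

```python
# Sketch: merge the per-heuristic group rankings into one shortlist.
# Lower score = higher-priority group within each heuristic (1 = top group).
# Weights, scores, and scenario names are hypothetical.

rankings = {
    "purchase item":               {"frequency": 2, "criticality": 1, "intensity": 2, "tech_risk": 3, "stakeholder": 1},
    "search catalog":              {"frequency": 1, "criticality": 2, "intensity": 1, "tech_risk": 2, "stakeholder": 2},
    "update personal information": {"frequency": 3, "criticality": 3, "intensity": 3, "tech_risk": 3, "stakeholder": 3},
    "create blog content":         {"frequency": 3, "criticality": 2, "intensity": 2, "tech_risk": 1, "stakeholder": 2},
}

# Scenarios tied to contract language go in regardless of score
# (e.g., the "responsive manner" clause mentioned earlier).
contractually_obligated = {"update personal information"}

weights = {"frequency": 1.0, "criticality": 1.5, "intensity": 1.0, "tech_risk": 1.0, "stakeholder": 1.0}

def combined_score(scores):
    """Weighted sum of group numbers; smaller means higher priority."""
    return sum(weights[k] * v for k, v in scores.items())

budget = 3  # how many scenarios the test effort can afford to simulate
shortlist = list(contractually_obligated)
remaining = sorted(
    (s for s in rankings if s not in contractually_obligated),
    key=lambda s: combined_score(rankings[s]),
)
shortlist += remaining[: max(0, budget - len(shortlist))]
print(shortlist)
```

Treat the output as a draft to review with the team, not a final answer; the judgment calls described above still matter more than the arithmetic.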
Review and update scenarios

Like many aspects of software development, the perfect arrangement of important usage scenarios to include in performance simulations is a moving target. The previous heuristics generate a tentative list of scenarios that can be expected to provide a reasonable degree of confidence in the accuracy of the performance test results. As long as that list continues to make sense when evaluated against evolving requirements, features, etc., I advise sticking with it and fleshing it out incrementally as you go. If the list stops making sense, recalibrate until it does. You are likely to iterate through this process many times before the project is completed.
