

Thursday, July 10, 2008

Performance testing in context

Q- Our company markets a solution that monitors production client devices from the actual end user's perspective, providing levels of QoS, performance, availability, and so on. My question is: Would it not make sense for QA to use this current and historical information to determine the level of QoS and whether efficiency increased as a result of a new release in the production environment? I see many testing products and procedures that emulate the production environment or create synthetic transactions, but they lack actual production data from end users' real experiences to accurately gauge whether a QA test of a new or modified application will ultimately meet user acceptance.

A- When I approach performance testing, I typically take an end user approach. In his article on the User Community Modeling Language (UCML), Scott Barber shows a method to visually depict complex workloads and performance scenarios. When applied to performance testing, UCML can "serve to represent the workload distributions, operational profiles, pivot tables, matrixes, and Markov chains that performance testers often employ to determine what activities are to be included in a test and with what frequency they'll occur." I've used UCML diagrams to help me plan my performance testing, to document the tests I executed, and to help elicit performance requirements.
Models like this enable me to create realistic (or reasonably realistic) performance test scenarios. Using a model like this, I find that I can create a series of performance tests to measure application performance in terms of a specific user type or an aggregate of all user types, overall response given a specific load, effects of load on any given type of user, and so forth. The power behind a modeling approach like this is that it's intuitive to developers, users, managers and testers alike. That means faster communication, clearer requirements and better tests.
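As a rough illustration of what such a model can translate into, here is a minimal sketch of a workload distribution driving a test scenario. This is not UCML itself, and the user types, shares, and activity frequencies are hypothetical numbers invented for the example.

```python
# Hypothetical workload model in the spirit of a UCML diagram: each user type
# gets a share of the total load and a set of activities with the relative
# frequency we expect them to occur.
import random

WORKLOAD = {
    "new_visitor":    {"share": 0.50, "activities": {"browse": 0.70, "search": 0.25, "register": 0.05}},
    "returning_user": {"share": 0.35, "activities": {"login": 0.20, "browse": 0.40, "purchase": 0.40}},
    "administrator":  {"share": 0.15, "activities": {"login": 0.10, "run_report": 0.90}},
}

def allocate_virtual_users(total_users):
    """Split the total virtual-user count across user types by their share."""
    return {name: round(total_users * profile["share"])
            for name, profile in WORKLOAD.items()}

def pick_activity(user_type):
    """Choose one activity for a virtual user, weighted by expected frequency."""
    activities = WORKLOAD[user_type]["activities"]
    return random.choices(list(activities), weights=activities.values())[0]

if __name__ == "__main__":
    print(allocate_virtual_users(200))   # e.g. {'new_visitor': 100, 'returning_user': 70, 'administrator': 30}
    print(pick_activity("returning_user"))
```

The same model can then be sliced either way: run only one user type to measure its response times in isolation, or run the full mix to see the aggregate effect of load.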
When performing this type of modeling I start with the end user in mind. What will the user do with the software? What types of transactions do they care about? What time of day will they do it? What needs to be set up in the system for them to be successful? The list of questions goes on and on. For this type of modeling, having the detailed Web analytics you refer to in your question is invaluable. It's a performance tester's dream. I always value real numbers and real data over calculations and guesses.
However, saying that is not without its problems. Let's look at just a handful of situations where Web analytics may not add a lot of value to our performance testing:
When the performance testing is focused on transactions and not on end user response time. Imagine my surprise when a fellow performance tester came up to me one day and burst my end-user-focused bubble. He didn't really care about end user response times. When he did his testing, he only cared about transactions and how they affected the system environment. After a long conversation about how our approaches could be so different, I came to understand that not everyone shares my context, and his concerns happened to be different from mine. His product had service level agreements and alerts in place that focused on resource utilization and throughput, specified in transactions per minute and percent usage. His company didn't get in trouble if the 95th-percentile user experienced a five-second response time; they got in trouble if the 101st transaction fell off the queue or failed to process in under 0.5 seconds. The end user was not his end goal. When your performance testing risk concerns contractual requirements, the value you can derive from Web analytics may be limited in terms of your testing. I'm certain that someone along the way could have benefited from that information when the contract was specified, but from a testing perspective, that type of feedback might be too late. That said, I'm by no means suggesting that information like what you describe won't be valuable in that context; it just may not be as valuable given the different goals of the testing.
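To make that difference in goals concrete, here is a small sketch contrasting an end-user, percentile-style check with a contractual, per-transaction check. The thresholds and sample timings are invented for illustration; a real SLA would supply its own numbers.

```python
# Two different pass/fail questions asked of the same kind of timing data.
# Thresholds and timings below are made up for the example.

def percentile(values, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

def end_user_check(response_times, pct=95, limit=5.0):
    """End-user view: did the 95th-percentile response stay under the limit?"""
    return percentile(response_times, pct) <= limit

def transaction_check(transaction_times, limit=0.5, required_per_minute=100):
    """Contractual view: did every transaction finish under the limit,
    and did enough of them complete in the minute?"""
    all_fast_enough = all(t <= limit for t in transaction_times)
    throughput_ok = len(transaction_times) >= required_per_minute
    return all_fast_enough and throughput_ok

if __name__ == "__main__":
    user_times = [1.2, 2.8, 3.1, 4.9, 6.2]        # seconds per page, per user
    txn_times = [0.2, 0.3, 0.45, 0.4] * 30        # seconds per transaction in one minute
    print("95th percentile OK:", end_user_check(user_times))
    print("Transaction SLA OK:", transaction_check(txn_times))
```

Both checks are cheap to compute; the point is that Web analytics about real user behavior informs the first far more than the second.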
When the performance testing is focused on new features and systems and not on enhancements to existing systems. An obvious context where detailed Web analytics might not be too helpful is when the performance testing is focused on new development. If users can't do something today, then you obviously can't get real information on how they use it. It seems obvious to say this, but it's an area where I've seen some teams struggle. I've been on projects where management wanted to take "real" numbers from a legacy system and apply them to a new system that had a different workflow, different screens, and sometimes even different data. Sometimes we become anchored to production numbers, even when we don't really have a compelling argument for using them.
When the test being modeled is not similar to current production usage. A third scenario where this information might not be overly useful is one where your test models something that doesn't happen in production today. Possibly the simplest example is a Super Bowl commercial for a company with an online product. If you've never run a Super Bowl ad before, and never had 50 million people navigate to your site in sixty seconds to see that monkey commercial again, then how do you know what they will really do when they get there? Well, the same way you would have figured it out without the Web analytics: you might hold focus groups, build in fixed navigation, scale down the features, or perhaps just guess. But odds are, current usage models won't offer much help.
All that said, I think real data like that is invaluable in many situations. It allows a performance tester to check assumptions and model more accurately, and it provides insight into potential future use if you trend the data over time. If you have access to information like that, and you are a performance tester, it's most likely in your best interest to at least review the data to see if there is a way you can use it to make your testing more accurate or more valuable.
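As a hedged sketch of what reviewing and trending that kind of data might look like, the following aggregates a hypothetical set of production analytics records by hour to find the peak load bucket. The record format and field names are assumptions for the example, not a real analytics export.

```python
# Sketch: derive peak hourly load from hypothetical production analytics records.
# Each record is assumed to be (timestamp, user_type, transaction); a real
# analytics export would have its own schema.
from collections import Counter
from datetime import datetime

def peak_hourly_load(records):
    """Count requests per (date, hour) bucket and return the busiest bucket."""
    buckets = Counter()
    for timestamp, _user_type, _transaction in records:
        dt = datetime.fromisoformat(timestamp)
        buckets[(dt.date(), dt.hour)] += 1
    return buckets.most_common(1)[0]

if __name__ == "__main__":
    sample = [
        ("2008-07-01T09:15:00", "returning_user", "login"),
        ("2008-07-01T09:45:00", "new_visitor", "browse"),
        ("2008-07-01T14:05:00", "returning_user", "purchase"),
    ]
    # Prints the busiest (date, hour) bucket and its request count.
    print(peak_hourly_load(sample))
```

Repeating that kind of aggregation month over month is one simple way to spot growth trends and feed more realistic numbers back into the workload model.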
