
Welcome to Machers Blog

Blogging the world of technology and testing to help people build their careers.

Thursday, July 10, 2008

An Explanation of Performance Testing on an Agile Team (Part 2 of 2)

By Scott Barber
Introduction
This is the second article in a two-part series describing activities that are central to successfully integrating application performance testing into an agile process. The first article discussed the following four topics:
Introduction to Integrated Performance Testing on an Agile Team
Understand the Project Vision and Context
Identify Reasons for Testing Performance
Identify the Value Performance Testing Adds to the Project
This article will go on to discuss the following topics:
Configure or Update Tools and Load Generation Environment
Identify and Coordinate Immediately Valuable Tactical Tasks
Execute Task(s)
Analyze Results and Report
Revisit Value and Criteria
Reprioritize Tasks
Additional Considerations
Additional Resources
Configure or Update Tools and Load Generation Environment
Overview: With a conceptual strategy, get the tools and resources prepared to execute the strategy as features and components become available for test.
Load generation and application monitoring tools are almost never as easy to get up and running as one expects. Whether the trouble comes from setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing, or version incompatibility between monitoring software and server operating systems, issues always seem to arise.
Also, load generation tools inevitably lag behind evolving technologies and practices. Tool creators can only build in support for the most prominent technologies, and even then, those technologies have to become prominent before the support can be built.
This often means that some of the biggest challenges a performance-testing project faces come right at the start: getting your first three-user, two-scenario, three-loop prototype test running with no script errors, variables parameterized, authentication and sessions handled correctly, data collected correctly, and users simulated in such a way that the application under test cannot legitimately tell the difference between real users and simulated ones. Plan for this and do not be surprised when it takes significantly longer than expected to get it all working smoothly.
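The shape of that first prototype can be sketched in plain Python. This is an illustration only, not a real load generation tool: the two scenario paths are hypothetical, and a local stub server stands in for the application under test so the sketch is self-contained.

```python
import http.server
import threading
import time
import urllib.request

# Local stub standing in for the application under test (hypothetical).
class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep output quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

USERS, LOOPS = 3, 3
SCENARIOS = ["/login", "/search"]   # two hypothetical user scenarios
results = []                        # (scenario, seconds, status) tuples
lock = threading.Lock()

def virtual_user():
    """One simulated user: loop three times over both scenarios, timing each request."""
    for _ in range(LOOPS):
        for path in SCENARIOS:
            start = time.perf_counter()
            with urllib.request.urlopen(base + path) as resp:
                status = resp.status
                resp.read()
            elapsed = time.perf_counter() - start
            with lock:
                results.append((path, elapsed, status))

threads = [threading.Thread(target=virtual_user) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()

errors = [r for r in results if r[2] != 200]
print(f"{len(results)} requests, {len(errors)} errors")
```

Even this toy version surfaces the real issues the article describes: shared state needs a lock, timings must be collected per request, and every response has to be checked rather than assumed correct.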
Additionally, plan to periodically reconfigure, update, add or otherwise enhance your load generation environment and associated tools throughout the project. Even if the application under test stays the same and the load generation tool is working properly, it is likely that the metrics you wish to collect will change. This frequently implies some degree of change to or addition of monitoring tools.
Identify and Coordinate Immediately Valuable Tactical Tasks
When it is time to start executing tests or conducting other performance test-related tactical tasks, the key is to determine what task will deliver the most value at this point in the project, rather than executing whatever test or task you thought was going to be next. This activity can be summarized as follows:
Together with the team, ask "How can I [the performance specialist] best leverage my skills and tools to add the most value to the project right now?"
Quickly collect several possible tasks.
Prioritize those tasks based on Benefit/Cost, Risk/Reward, or Time/Value.
Coordinate with the team to execute the highest-priority task (or perhaps the next day's worth of tasks).
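The prioritization step above is just a ranking exercise, and can be sketched in a few lines. The task names and scores here are invented for illustration; in practice the team assigns them together.

```python
# Hypothetical candidate tasks scored by the team: (name, benefit 1-5, cost 1-5).
tasks = [
    ("Baseline login response time", 4, 1),
    ("Tune DB connection pool with the DBA", 5, 3),
    ("Stress-test search to CPU saturation", 3, 2),
    ("Re-run full endurance test", 2, 5),
]

# Rank by benefit-to-cost ratio; break ties in favor of the cheaper task.
ranked = sorted(tasks, key=lambda t: (-t[1] / t[2], t[2]))
for name, benefit, cost in ranked:
    print(f"{benefit / cost:4.2f}  {name}")
```

The point is not the arithmetic but the discipline: making benefit and cost explicit forces the team to defend the ordering, instead of defaulting to whatever test was planned next.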

Tips for the Performance Specialist
Depending on how your team works, you might want to create performance test execution plans for each 1-2 days’ worth of performance-testing tasks. Most often, iterations occur at 1-2 week intervals. This allows for several performance-testing tasks to be completed during each iteration. Especially early in the development cycle, this is a necessity because performance-testing tasks will be focused on individual features and components, and more than one of these features/components will need testing during an iteration.
An execution plan is intended to communicate the details needed to complete or repeat a task. Performance test execution plans should be communicated using the same method(s) as the strategy. Depending on the pace and schedule of your project, there may be one execution plan per strategy or several. It is important to limit execution plans to 1-2 days of anticipated tasks for several reasons, such as the following:
Even though a task is planned to take 1-2 days, it is not uncommon for the actual execution to stretch to three or even four days on occasion. If your plans are for tasks longer than about two days and you get delayed, you are likely to have the next performance build before you complete any valuable testing on the previous build.
Especially on agile projects, timely feedback about performance is critical. Even with two-day tasks and a one-week performance build cycle, you could end up with approximately eight days between detecting an issue and getting a performance build that addresses it. With longer tasks and/or a longer period between performance builds, those eight days can quickly become sixteen.
If you use performance test execution plans, share them with the team far enough in advance for team members to make recommendations or improvements, and for necessary resource coordination to take place, but not further. Due to the specificity of the execution plan, preparing them well in advance almost always leads to significant reworking. In most cases, the team as a whole will prioritize the sequence of execution of tasks. It is up to the performance specialist to communicate with the team the relative value, risk, and expense of each task.
In general, the types of information that teams find valuable when discussing a performance test execution plan for a task include:
Work item execution method
What specific data will be collected
Specifically, how that data will be collected
Who will assist, how, and when
Sequence of work items by priority
Execute Tasks
Executing tasks does not always mean conducting a test - sometimes test preparation and assisting other team members can be considered tasks. The fact is that sometimes the next-most-valuable task you can conduct is going to the store to restock the caffeine in the team refrigerator. Some more common "not a performance test" tasks might include:
"Go to the computer store, get a better video card, install it, and see if that helps."
"Meet with the database administrator and help her optimize those queries."
Seeing how many users the current configuration can support before the CPU becomes the bottleneck.
Collaborating with administrators to tune application servers.
Reusing developers' unit tests to performance-test an API, component, or group of components.
Testing the SQL for stored procedures.
Testing different implementations of a particular API.
Helping developers write unit tests for load, throughput, or contention that are beyond the normal developer scope.
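Reusing an existing unit test for component-level performance work usually means little more than wrapping it in a timing loop. A minimal sketch, assuming a placeholder component and test (component_under_test and existing_unit_test are invented stand-ins for whatever the developers' test actually exercises):

```python
import statistics
import time

def component_under_test(n):
    # Placeholder for the code path a real unit test would exercise.
    return sum(i * i for i in range(n))

def existing_unit_test():
    # The developers' functional check, reused unchanged.
    assert component_under_test(10) == 285

def timed_runs(test, iterations=200):
    """Wrap a unit test in a timing loop to turn it into a performance probe."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        test()
        samples.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(samples),
        "p95_s": sorted(samples)[int(len(samples) * 0.95)],
        "calls_per_s": len(samples) / sum(samples),
    }

stats = timed_runs(existing_unit_test)
print(stats)
```

Run against each performance build, the same harness yields a per-component trend line for almost no extra effort, which is exactly why reusing unit tests is listed among the quick wins above.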

Tips for the Performance Specialist
The most important part of task execution is to remember to modify the task and subsequent strategies as results analysis leads to new priorities. At the conclusion of task execution, share your findings with the team and then reprioritize the remaining tasks, add new tasks, and/or remove planned tasks from execution plans and strategies based on the new questions and concerns raised by the team. When reprioritizing is complete, move on to the next-highest-priority task.
In general, the keys to task execution include:
Analyze results immediately and modify your plan accordingly.
Work closely with the team or team subset that is most relevant to the task.
Communicate frequently and openly across the team.
Record results and significant findings.
Record other data needed to repeat the test later.
Revisit performance testing priorities after no more than two days.
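The "record results" and "record other data needed to repeat the test" items above amount to capturing one structured record per task execution. A sketch of what such a record might contain, with every field name and value invented for illustration:

```python
import datetime
import json

# Hypothetical record of one task execution: enough detail to repeat the
# test later and to feed trend analysis across performance builds.
record = {
    "task": "login response time under 25 users",
    "build": "perf-build-014",             # which performance build was tested
    "executed": datetime.date(2008, 7, 10).isoformat(),
    "config": {"users": 25, "ramp_up_s": 60, "duration_s": 600},
    "environment": {"app_server": "staging-2", "db": "staging-db-1"},
    "metrics": {"median_ms": 410, "p95_ms": 980, "errors": 0},
    "findings": "p95 creeping up ~10% per build since perf-build-011",
}

serialized = json.dumps(record, indent=2, sort_keys=True)
restored = json.loads(serialized)
print(serialized)
```

The exact format matters far less than capturing the configuration and environment alongside the metrics: a number with no record of how it was produced cannot be compared against the next build.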
Analyze Results and Report
One of the biggest differences between most agile teams and most other types of teams is that on agile teams, data and preliminary results are shared continually. While continual reporting keeps the team informed, these are generally summaries and/or raw data, so it is still important to pause periodically in order to consolidate results, conduct trend analysis, create stakeholder reports, and conduct pair-wise analysis with developers, architects, and administrators.
Tips for the Performance Specialist
You might need to remind the team that one of the jobs of the performance specialist is to find trends and patterns in the data, and that it is not uncommon to need to repeat tests across several iterations to really understand the value of the data. This takes time. It also tends to lead to the desire to re-execute one or more tests to determine whether a pattern really exists or a particular test was flawed in some way. Teams are often tempted to skip this step. Do not succumb to that temptation: you might end up with more data more quickly, but unless you stop to look at the data collectively on a regular basis, you are unlikely to extract all of the useful findings from it until it is too late.
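One rough way to check whether a pattern really exists across repeated tests is to fit a simple trend to the same metric over successive builds. The numbers below are invented, and the 20 ms threshold is arbitrary, purely for illustration:

```python
# Hypothetical p95 response times (ms) for the same test across five builds.
builds = [1, 2, 3, 4, 5]
p95_ms = [820, 840, 905, 950, 1010]

# Least-squares slope: growth in p95 per build.
n = len(builds)
mean_x = sum(builds) / n
mean_y = sum(p95_ms) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(builds, p95_ms)) \
        / sum((x - mean_x) ** 2 for x in builds)

print(f"p95 trend: {slope:+.1f} ms per build")
if slope > 20:  # arbitrary threshold for illustration
    print("Worth re-running before reporting a regression.")
```

A steady upward slope across several builds is far more persuasive evidence than any single noisy measurement, which is why the pause for collective analysis is worth defending.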
Revisit Value and Criteria
Every day and/or after each task is completed, process the new information you have. Ensure that the performance-testing value proposition is still the same, and determine if there are new or additional performance criteria to integrate into your testing. Incorporate that information into team discussions to determine what task would be most valuable to complete next. Once the success criteria, strategies, and/or tasks are updated and prioritized, it is time to resume where you left off.
Tips for the Performance Specialist
Some agile teams conduct periodic "performance only" scrums or stand-ups when performance testing–related coordination, reporting, or analysis is too time-consuming to be handled in the existing update structure. Whether during a special "performance only" session or during the existing session, the team collectively makes most of the major adjustments to priorities, strategies, tasks, and success criteria. Ensure that enough time is allocated, frequently enough, for the team to make good performance-related decisions while changes are still easy to make.
Of course, the key to successfully implementing an agile performance-testing approach is continual communication among team members. As described in the previous steps, it is a good idea not only to communicate tasks and strategies with all team members and check back with one another frequently, but also to plan enough time into testing schedules to review and update tasks and priorities. The methods you use to communicate plans, strategies, priorities, and changes are completely irrelevant as long as you are able to adapt to changes without requiring significant rework, and as long as the team continues to progress toward achieving the current performance-testing success criteria.
Sometimes, no matter how hard you try to avoid it, there are simply no valuable performance testing tasks to conduct right now. This could be due to environment upgrades, mass re-architecting/re-factoring, significant detected performance issues that someone else needs time to fix, and so on. The good thing is that the performance specialist has possibly the broadest set of skills on the team - not necessarily the deepest in any particular area, but the widest variety of skills. What this means is that when a situation arises where continued performance testing or paired performance investigation with developers or administrators is not going to add value at this time, the performance specialist can be temporarily given another task - possibly automating smoke tests, optimizing HTML for better performance, pairing with a developer to assist with developing more comprehensive unit tests, and so on. The key is to never forget that the performance specialist’s first priority is performance testing and that these other tasks are additional responsibilities, not vice versa.
Reprioritize Tasks
Overview: Based on test results, new information, and the availability of features and components, reprioritize, add, or delete tasks from the strategy, then return to coordinating execution of the next task.
Additional Considerations
Keep the following additional considerations in mind when integrating application performance testing into an agile process:
The best advice is to remember to communicate with the team.
No matter how long or short the time is between performance builds, performance testing will always lag behind. Too many performance-testing tasks simply take too long to develop and execute to be able to keep up with real-time development. Keep this in mind when setting priorities for what to performance-test next. Choose wisely.
Remember that for the vast majority of the development cycle, performance testing is about collecting useful information in order to enhance the performance through design, architecture, and development as it happens. It is only for releases for customer review or production release candidates that comparisons against the end user–focused requirements and goals have meaning. The rest of the time, you are looking for trends and obvious problems, not pass/fail validation.
Make use of existing unit-testing code for performance testing at the component level. It is quick and easy, helps the developers see trends in performance, and can make a powerful smoke test.
Do not force a performance build just because one is on the schedule. If the current build is not appropriate for performance testing, stick with what you have until it is, or give the performance tester another task until something reasonable is ready.
Performance testing is one of the single biggest catalysts to significant changes in architecture, code, hardware, and environments. Use this to your advantage by making observed performance issues highly visible across the entire team. Simply reporting on performance every day or two is not enough. The team needs to read, understand, and react to the reports; otherwise the performance testing loses much of its value.
Use cross-functional teams for performance testing. Do not divide labor within teams – share responsibility. Testers can greatly assist in clarifying the specification, especially around the edge cases – they love breaking things. Similarly, developers can greatly assist in clarifying the root cause of a specific risk, especially around the deeply technical risks – they love solving problems.
Test early, test often; release early, and release often.
