Welcome to Machers Blog

Blogging the world of technology and testing to help people build their careers.

Monday, March 16, 2009

The role of a software test manager

By David W. Johnson

The role of the software test manager or test lead is to effectively lead the testing team. To fulfill this role, the lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles of a manager. What does that mean? The manager must manage and implement or maintain an effective testing process. That involves creating a test infrastructure that supports robust communication and a cost-effective testing framework.

What the test manager is responsible for:

Defining and implementing the role testing plays within the organization.
Defining the scope of testing within the context of each release/delivery.
Deploying and managing the appropriate testing framework to meet the testing mandate.
Implementing and evolving appropriate measurements and metrics:
    To be applied against the product under test.
    To be applied against the testing team.
Planning, deploying and managing the testing effort for any given engagement/release.
Managing and growing the testing assets required to meet the testing mandate:
    Team members
    Testing tools
    Testing processes
Retaining skilled testing personnel.
The test manager or lead must understand how testing fits into the organizational structure. In other words, the manager must clearly define testing's role within the organization. This is often accomplished by crafting a mission statement or a defined testing mandate. Example: "To prevent, detect, record and manage defects within the context of a defined release."

Now it becomes the test lead's job to communicate and implement effective managerial and testing techniques to support this "simple" mandate. The expectations of your team, your peers (development lead, deployment lead and other leads) and your superiors need to be set appropriately, given the timeframe of the release and the maturity of the development and testing teams. These expectations are usually defined in terms of functional areas deemed to be in scope or out of scope. Examples of in-scope areas include creating a new customer profile and updating a customer profile; out-of-scope areas might include security and backup and recovery.

The definition of scope will change as you move through the various stages of testing. The key thing is to make sure your testing team and the organization as a whole clearly understands what is being tested and what is not being tested for the current release.
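As a minimal sketch, the in-scope/out-of-scope definition could be captured in a simple structure that travels with the release, so the whole organization can answer "is this tested?" the same way. The release name and functional areas below are hypothetical, taken from the examples above:

```python
# Hypothetical scope definition for a single release; the functional-area
# names are illustrative, matching the examples in the text above.
release_scope = {
    "release": "2.1",
    "in_scope": [
        "Create new customer profile",
        "Update customer profile",
    ],
    "out_of_scope": [
        "Security",
        "Backup and recovery",
    ],
}

def is_in_scope(area: str) -> bool:
    """Answer the key scoping question: is this area tested in this release?"""
    return area in release_scope["in_scope"]

print(is_in_scope("Security"))                 # False
print(is_in_scope("Update customer profile"))  # True
```

Because the scope changes as you move through the stages of testing, a structure like this would be revised per stage, not written once.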

The test lead/manager must employ the appropriate testing framework or test architecture to meet the organization's testing needs. Although the testing framework requirements for any given organization are difficult to define, there are several questions the test lead/manager must ask. The answers to the questions and others will define the short- and long-term goals of the testing framework.

What is the relationship between product maturity and testing?
In the chart below, the first arrow leads to the product being ready for deployment. The second arrow leads to the product being ready to be tested as an integrated or whole system. The third arrow indicates functional testing can be performed against delivered components. The fourth arrow indicates the developer can test the code as an un-integrated unit. And the fifth arrow leads to the product concept being captured and reviewed.

Insert Image here

% Construction -- How much more construction is required to complete the product.
% Product -- How much of the product has been constructed.

How can the testing organization help prevent defects?
There are really two sides to testing: verification and validation. Unfortunately, those terms have been defined differently by several governing/regulatory bodies. Put more succinctly, there are tests that can be performed before the product is constructed or built, and tests that can be performed after the product has been constructed.

To prevent defects from occurring, you must test before the product is constructed. There are several methods for doing that; the most powerful and cost-effective is reviews, which can be either formal technical reviews or peer reviews. A formal product development life cycle will provide the testing team with useful materials/deliverables for the review process, and any effective development paradigm, properly implemented, should supply those deliverables. Examples of development models and the deliverables each offers the review process:

Cascade or waterfall
    Functional specifications
Agile or Extreme Programming
    High-level requirements
Testing needs to be included in this review process, and any defects found need to be recorded and managed.

How and when can the testing organization detect software defects?
The testing organization can detect software defects after the product or some operational segment of it has been delivered. The type of testing to be performed depends on the maturity of the product at the time. The classic hierarchy or sequence of testing is as follows:

Design review
Unit testing
Functional testing
System testing
User acceptance testing
The testing team should be involved in at least three of those phases: design review, functional testing and system testing.

Functional testing involves the design, implementation and execution of test cases against the functional specification and/or functional requirements for the product. This is where the testing team measures the functional implementation against the product intent using well-formulated test cases and notes any discrepancies as defects (faults). One example is testing to ensure the Web page allows the entry of a new forum member. In that case, you are testing to ensure the Web page functions as an interface.

System testing follows much the same course (design, implement, execute and record defects), but the intent or focus is very different. While functional testing focuses on discrete functional requirements, system testing focuses on the flow through the system and the connectivity between related systems. An example is testing to ensure the application allows the entry, activation and recovery of a new forum member; in that case, you are testing to ensure the system supports the business. There are several types of system tests; which are required for any given release should be determined by the scope.

What are the minimum set of measurements and metrics?
The single most important deliverable the testing team maintains is its defect records. Defects are arguably the only product the testing team produces that is seen and understood by the project as a whole. This is where faults against the system are recorded and tracked. At a minimum, each defect should contain the following:

Defect name/title
Defect description: what requirement is not being met?
Detailed instructions on how to replicate the defect
Defect severity
Impacted functional area
Defect author
Status (open, in progress, fixed, closed)
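A minimal defect record holding the fields above might be sketched as follows. The field names and severity scale are illustrative assumptions, not tied to any particular tracking tool:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    """Defect lifecycle states named in the text."""
    OPEN = "open"
    IN_PROGRESS = "in progress"
    FIXED = "fixed"
    CLOSED = "closed"

@dataclass
class Defect:
    title: str               # defect name/title
    description: str         # what requirement is not being met?
    steps_to_replicate: str  # detailed instructions on how to replicate
    severity: int            # assumed scale, e.g. 1 (critical) to 4 (cosmetic)
    functional_area: str     # impacted functional area
    author: str              # defect author
    status: Status = Status.OPEN
```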
This will then provide the data for a minimal set of metrics:

Number of defects raised
Distribution of defects in terms of severity
Distribution of defects in terms of functional area
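Given a collection of such defect records, the three baseline metrics above reduce to a count and two groupings. A sketch, with hypothetical severity and functional-area values:

```python
from collections import Counter

# Each defect is reduced here to a (severity, functional_area) pair;
# the sample values are purely illustrative.
defects = [
    ("critical", "customer profile"),
    ("major", "customer profile"),
    ("major", "billing"),
    ("minor", "billing"),
]

number_raised = len(defects)                          # defects raised
by_severity = Counter(sev for sev, _ in defects)      # distribution by severity
by_area = Counter(area for _, area in defects)        # distribution by area

print(number_raised)  # 4
print(by_severity)    # Counter({'major': 2, 'critical': 1, 'minor': 1})
print(by_area)        # Counter({'customer profile': 2, 'billing': 2})
```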
From this baseline the measurements and metrics a testing organization maintains are dependent on its maturity and mission statement. The Software Engineering Institute (SEI) Process Maturity Levels apply to testing as much as they do to any software engineering discipline:

Initial: (Anarchy) Unpredictable and poorly controlled.
Repeatable: (Folklore) Repeat previously mastered tasks.
Defined: (Standards) Process characterized, fairly well understood.
Managed: (Measurement) Process measured and controlled.
Optimizing: (Optimization) Focus on process improvement.
How disciplined the testing organization needs to become and what measurements and metrics are required depend on a cost/benefit analysis conducted by the test lead/manager. What makes sense in terms of the stated goals and previous performance of the testing organization?

How to grow and maintain a testing organization?
Managing or leading a testing team is probably one of the most challenging positions in IT. The team is usually understaffed and lacks appropriate tooling and financing. Deadlines don't move, but the testing phase is continually squeezed by product delays. Motivating and retaining key testing personnel under these conditions is critical. How do you accomplish this seemingly impossible task? I can only go by my personal experience, both as a lead and as a team member:

If the timelines are impacted, modify the test plan appropriately in terms of scope.
Clearly communicate the situation to the testing team and project management.
Keep clear lines of communication with development and project management.
Whenever possible, sell, sell, sell the importance and contributions of the testing team.
Ensure the testing organization has clearly defined roles for each member of the team and a well-defined career path.
Measure and communicate the testing team's return on investment. If the detected defect would have reached the field, what would have been the cost?
Explain testing expenditures in terms of investment (ROI) not cost.
Finally, never lose your cool.
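The ROI argument above can be made concrete with a back-of-the-envelope calculation. All of the cost figures below are illustrative assumptions, not industry data; the point is the shape of the argument, not the numbers:

```python
# Hypothetical per-defect costs: fixing a defect found in testing versus
# the cost had it escaped to the field (support, patching, reputation).
cost_found_in_test = 500        # per defect, assumed
cost_escaped_to_field = 5_000   # per defect, assumed
defects_caught = 120            # defects the team caught this release, assumed
testing_budget = 150_000        # total cost of the testing effort, assumed

# Savings = what the caught defects would have cost in the field,
# minus what they actually cost to handle in testing.
savings = defects_caught * (cost_escaped_to_field - cost_found_in_test)
roi = (savings - testing_budget) / testing_budget

print(f"Estimated savings: ${savings:,}")  # Estimated savings: $540,000
print(f"ROI: {roi:.0%}")                   # ROI: 260%
```

Framed this way, the same spreadsheet that tracks defects can also make the case that testing is an investment rather than a cost.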
