

Monday, September 1, 2008

SOFTWARE TESTING PROCESS IMPROVEMENTS

Objective
I have outlined here some of the flawed practices adopted during the testing phase of projects, proposed corrective measures for wrong or absent practices, and suggested improvements to existing practices.

I have traced these incorrect practices through the entire testing lifecycle to illustrate their impact and to show how the suggested improvements can streamline the testing phase and eliminate the flawed practices.

This document is based on case studies and observations from projects, focusing on the factors that impacted:

* The overall functioning of testing teams
* The quality of deliverables
* Deviations from the estimated delivery/roll-out plan (and their root causes)
* QA sign-off activities

Getting Started: Envisioning Phase

The typical envisioning-phase QA activities are:

* Creation of QA estimates
* Identification of QA planning/execution challenges
* Compatibility/feasibility study of the scope of automation
* Identification of project execution risks/dependencies
* Initial proposal of risk management/mitigation strategies

Identified wrong practices

The following improvement areas have been identified with respect to the envisioning-phase activities:

Creation of QA estimates

Identified problem area: No QA estimate for sanity testing

QA does not estimate for the sanity testing of builds deployed into QA. The fallout of this is that sanity testing is not identified as an essential QA activity, and no effort is dedicated to identifying a sanity test suite. Hence the QA team cannot establish a baseline on which to accept or reject builds.

Sanity testing, when performed at all, is an ad-hoc activity, and hence there is no traceability for it.

Recommendations:

Sanity testing of all builds expected to be delivered to QA has to be factored into the estimates, and corresponding activities have to be allocated in the test planning and execution phases.

Compatibility/feasibility study of the scope of automation

Identified problem area: No POC developed to validate compatibility/ feasibility of automation

QA teams do not build POCs for projects where they lack prior platform/technology automation expertise, so they cannot ascertain whether there are compatibility or feasibility issues with the automation tool or scripting language identified for use. The fallout of this is that the QA team cannot proactively identify the drawbacks, if any, of the automation framework they have envisioned. Because issues in the automation approach are not identified until the actual development of automation test scripts starts, no alternative tools, scripts, or approaches can be evaluated in time.

Recommendations:

POCs have to be undertaken to ascertain that the identified automation framework is compatible with the platform/technology area being addressed; a minimal sketch of such a POC is shown below.
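
One possible shape of such a POC is sketched below, assuming (purely for illustration) a web application under test and Selenium WebDriver as the candidate tool; the URL is a placeholder, not an actual project value.

```python
# Minimal automation-feasibility POC (sketch). Assumes Selenium WebDriver and a
# web UI under test; the URL below is a placeholder.
from selenium import webdriver
from selenium.webdriver.common.by import By


def run_poc(base_url: str) -> bool:
    """Return True if the candidate tool can drive the target application."""
    driver = webdriver.Chrome()          # candidate tool/driver under evaluation
    try:
        driver.get(base_url)             # can we reach and render the application?
        driver.find_element(By.TAG_NAME, "body")  # can we locate a page element?
        return True
    except Exception as exc:
        print(f"POC failed - possible compatibility issue: {exc}")
        return False
    finally:
        driver.quit()


if __name__ == "__main__":
    run_poc("https://qa.example.com")    # placeholder QA environment URL
```

If the POC fails, the team still has time to evaluate alternative tools or scripting approaches before automation development begins.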

“How are we going to do it?”: Planning Phase

The typical planning phase activities involved in testing are:

* Creation of Testing strategy document
* Design of Test plans for the modules under test
* Development of automation test scripts

Identified wrong practices

The following are some of the identified practices that are incorrect or need improvement, grouped by the planning-phase activity they belong to:

Creation of Testing strategy

Identified problem area: Lack of a QA build management strategy

QA testing strategy documents do not outline the criteria QA will use to accept or reject builds. Hence the mandatory "testability" parameters are not aligned with either the development team or the customer-side stakeholders. The fallout of this is that the process of accepting or rejecting builds in QA is not formalized and remains ad hoc. This is aggravated when numerous builds get deployed into QA to rectify deployment issues. The end result is that there is no traceability of build issues, since they have been rectified in an ad-hoc manner. There are no QA-side documents logging how many builds were rejected and why. Hence the project team has no collated document citing the major issues that need to be tackled during the final production builds prior to "go-live".

Recommendations

The Testing strategy document should outline the mandatory checkpoints that ensure the testability of builds. This should be coupled with a template document that is maintained and communicated when accepting or rejecting builds; a simple sketch of such an acceptance record is shown below.
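
One lightweight way to formalize the accept/reject record is sketched below. This is only an illustration; the field names and checkpoint labels are hypothetical and would need to be aligned with the project's own testability criteria.

```python
# Sketch of a build acceptance/rejection record (illustrative field names only).
from dataclasses import dataclass
from datetime import date


@dataclass
class BuildAcceptanceRecord:
    build_id: str
    deployed_on: date
    checkpoints: dict          # mandatory testability checks -> pass/fail
    accepted: bool = False
    rejection_reason: str = ""

    def evaluate(self) -> bool:
        """Accept the build only if every mandatory checkpoint passed."""
        self.accepted = all(self.checkpoints.values())
        if not self.accepted:
            failed = [name for name, ok in self.checkpoints.items() if not ok]
            self.rejection_reason = "Failed checkpoints: " + ", ".join(failed)
        return self.accepted


record = BuildAcceptanceRecord(
    build_id="QA-Build-042",                      # placeholder identifier
    deployed_on=date.today(),
    checkpoints={
        "deploys without errors": True,
        "all services start": True,
        "sanity suite passes": False,
    },
)
record.evaluate()
print(record.accepted, record.rejection_reason)
```

Keeping one such record per build gives the project team the collated log of rejected builds and their reasons that the current practice lacks.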

Identified problem area: The points of verification of each module are not explicitly mentioned.

The testing strategy for each module should also identify that module's points of verification. Out of the many modules within the system, there is usually a small set of points from which QA extracts the outputs of testing for verification. The relevance of this activity is that when a module is deployed into QA, the corresponding points of verification have to be in place for the testing team to start testing. The fallout of not adhering to this practice is that a module may be deployed in a healthy build, but if a point of verification is missing or not functional, the testability of the build is impacted.

Recommendations

The test strategy for each module should state its points of verification explicitly, so that they can serve as checkpoints before testing starts. The development team would also be aware of the necessity of these supporting modules and can ensure that they are in place; a small readiness check along these lines is sketched below.
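
A minimal readiness check is sketched below, assuming (purely for illustration) that a module's points of verification are an HTTP endpoint and a log file; the module name, URL, and path are placeholders.

```python
# Sketch of a points-of-verification readiness check.
# The module name, endpoint URL, and log path are placeholders, not real values.
import os
import urllib.request

VERIFICATION_POINTS = {
    "orders-module": {
        "report_endpoint": "https://qa.example.com/orders/health",  # placeholder
        "audit_log": "/var/log/app/orders_audit.log",               # placeholder
    },
}


def verification_points_ready(module: str) -> bool:
    """Return True only if every verification point for the module is reachable."""
    points = VERIFICATION_POINTS[module]
    try:
        with urllib.request.urlopen(points["report_endpoint"], timeout=5) as resp:
            endpoint_ok = resp.status == 200
    except OSError:
        endpoint_ok = False
    log_ok = os.path.exists(points["audit_log"])
    return endpoint_ok and log_ok


if __name__ == "__main__":
    print("orders-module ready:", verification_points_ready("orders-module"))
```

Running such a check when a build is deployed tells the team immediately whether testing can start or whether a supporting module is missing.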

Design of test plans

Identified problem area: No sanity test suite identified by QA.

Currently, as part of the test planning activities, QA does not identify a subset of sanity test cases for any module. The fallout of this absent practice is that there is no alignment with the customer-side technology/business stakeholders on the critical subset of sanity test cases that have to pass before QA starts testing. A formal sanity test suite would establish the grounds on which QA can reject a build as untestable. As it stands, sanity testing is ad hoc, and in many cases the members of the QA team are themselves unsure which highly critical test cases have to pass to deem a build testable. This has led to many person-hours of effort being wasted in identifying a module as untestable and communicating this to the development team.

Recommendations

For all modules planned for testing, the QA team should identify and maintain a suite of sanity test cases. Alignment has to be reached with the customer regarding the completeness of this suite. The status of these test cases can then be used to substantiate the acceptance of a QA build, which will greatly enhance customer confidence in the process adherence of QA personnel. One simple way to maintain such a suite is sketched below.
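
The sketch below shows one way to keep the sanity subset explicit, assuming pytest is the test runner; the test names and the fetch_status/create_order helpers are hypothetical stand-ins for real project code.

```python
# Sketch of maintaining a sanity suite with pytest markers (assumes pytest).
# fetch_status and create_order are stubs standing in for real project helpers.
import pytest


def fetch_status(path: str) -> int:
    """Stub for a real HTTP helper; always 'passes' here for illustration."""
    return 200


class Order:
    confirmed = True


def create_order(sku: str) -> Order:
    """Stub for a real order-creation helper."""
    return Order()


@pytest.mark.sanity
def test_login_page_loads():
    """Critical path: the application must at least serve its login page."""
    assert fetch_status("/login") == 200


@pytest.mark.sanity
def test_order_can_be_created():
    """Critical path: a basic order can be created end to end."""
    assert create_order(sku="TEST-SKU").confirmed


def test_order_discount_rules():
    """Detailed regression test; intentionally not part of the sanity gate."""
    ...
```

With the sanity marker registered in the project's pytest configuration, the agreed-upon subset can be run on its own (for example, `pytest -m sanity`), and its pass/fail status becomes the documented basis for accepting or rejecting a build.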

Identified problem area: “Seat of the pants” method of test automation.

This "methodology" is followed in many testing projects. In these cases, although the QA team has estimated for the automated testing of a module, there is no mapping from the manual test cases designed to the automation scripts developed for them. Hence the QA team is hazy about the extent of automation they have actually achieved in the project. Test coverage via automation is also not specific: there are only rough estimates of the extent of automation planned, and these estimates are never revisited during the course of the project to take stock of the extent and coverage of automation.

Recommendations

The QA team has to identify the subset of manual test cases that can be automated. This has to be documented and revised continually to track how much automation has been planned and how much has actually been achieved during project execution. This is an area where QA teams will have to conduct E-R gap analysis*.

* E-R gap analysis: Expectation – Reality gap.
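
A minimal form of this gap analysis is sketched below; the test-case IDs and script names are purely illustrative.

```python
# Sketch of an E-R (Expectation vs. Reality) gap analysis for automation coverage.
# Test-case IDs and script names are illustrative placeholders.
planned_for_automation = {"TC-001", "TC-002", "TC-003", "TC-010"}       # expectation
automated = {"TC-001": "login_smoke.py", "TC-003": "order_flow.py"}     # reality


def er_gap(planned: set, actual: dict) -> None:
    """Report how much of the planned automation has actually been delivered."""
    covered = planned & actual.keys()
    gap = planned - actual.keys()
    coverage = 100 * len(covered) / len(planned) if planned else 0
    print(f"Automation coverage: {coverage:.0f}% of planned cases")
    print("Not yet automated:", sorted(gap))


er_gap(planned_for_automation, automated)
```

Revisiting these two sets at each milestone keeps the expectation and the reality of automation coverage visible to the whole team.
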
“How are we doing it?”: Execution Phase

The following are the core activities undertaken by the QA team during the execution phase of projects.

* Execution of manual test cases.
* Execution of automated test scripts
* Defect reporting/ tracking
* Re-testing of defect fixes
* Regression testing of modules
* UAT support (if applicable)

Identified wrong practices

Execution of manual test cases

Identified problem area: Sanity testing of builds not performed

Since the QA team does not have a pre-defined test plan for sanity testing, little or no sanity testing is done once a build has been deployed into QA for testing. The initial testing of such modules is an ad-hoc process.

Recommendations

The sanity test plan designed as part of the planning-phase activities has to be executed before testing commences on any deployment or re-deployment in QA. This will ensure faster turnaround in identifying unhealthy builds; a sketch of such a gate is shown below.

The process of build rejection will be more formalized by adopting this practice.
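
The sketch below shows what such a gate could look like, assuming the pytest-marker approach described in the planning-phase recommendation; the build identifier is a placeholder.

```python
# Sketch of a sanity gate run on every QA deployment or re-deployment.
# Assumes the sanity suite is tagged with a pytest marker; build_id is a placeholder.
import subprocess
import sys


def sanity_gate(build_id: str) -> bool:
    """Run the sanity suite; a non-zero exit code means the build is rejected."""
    result = subprocess.run(["pytest", "-m", "sanity", "--tb=short"])
    accepted = result.returncode == 0
    print(f"Build {build_id}: {'ACCEPTED' if accepted else 'REJECTED'}")
    return accepted


if __name__ == "__main__":
    if not sanity_gate("QA-Build-042"):
        sys.exit(1)   # fail fast so detailed testing never starts on a sick build
```

Because the gate produces a recorded pass/fail result for every deployment, build rejections also become traceable rather than ad hoc.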

Conclusion

The procedural flaws described above have often led to sluggish turnaround times for QA. Adopting the recommendations outlined here will ensure that QA teams do not waste effort on unhealthy builds and that automation efforts are more traceable.
