

Monday, April 6, 2009

Why the quality assurance department should be involved in testing

By John Scarpino




In this sluggish economy, corporate software users are price-conscious, of course, but they also want the best product for the lowest price. So software companies are concentrating on making sure their products excel in performance, security and longevity. The advice and examples in this tip reflect some lessons I've learned about achieving excellence in those areas by beefing up quality assurance (QA).

In recessions or boom times, any organization purchasing and/or implementing a new tool should involve the quality assurance department in the testing process. Doing so improves the assessment and eliminates bias from any one group.

Tools purchased for the security and performance environments are usually licensed to a specific department rather than to the company at large (even though other departments within the corporation can use them). This is a technicality that can be abused, because a software license gives an entity all the information it needs to install and use the tool, plus the power to implement the software as it sees fit. Sometimes the licensed department operates independently of other departments that would normally be involved in the software development lifecycle (SDLC). When that happens, the QA department is officially out of the picture -- which does not bode well for the future of the product.

Believe it or not, I've witnessed a Web application group validating and testing its own product with software testing tools without informing QA that it was doing so. I've also seen an infrastructure group withhold performance information from QA and actually change the test results, because the sole validation and verification came from within the group.

Why is this a problem? There is no objectivity in the testing process, because it is all conducted internally by a biased group. The involvement of QA is absolutely imperative during the testing process, because only the QA department has the variety of resources needed to approach different situations effectively and evaluate them.

Really, the issue isn't so much that the Web or infrastructure groups conducted their own testing of security and performance; rather, it's that they were the only groups that conducted testing. Just because the Web and infrastructure groups are the only ones using the tool does not mean they also own the software and license, nor is it right for them to test against their own definition of "quality." This increases the chance of information being withheld from other groups that need it. It's important that departments do not become "information silos," because QA is ultimately responsible for the outcome of both process and product.

I believe that the best results come when a group tests its own product as a whole unit, and then the QA department tests it again to uphold the product's integrity through objectivity. Moreover, the QA team should use nontraditional testing approaches, such as testing "around" the product to check other functionality that may be affected by the product's implementation, just to cover all the bases. Then the verification and results can be centrally located with all of the other functional and nonfunctional tests for a given release or project.

A good way to centralize testing information is to create a data library for the results, so that every test during the SDLC is documented and accessible to everyone. It should support software requirements by including functional and nonfunctional test results, as well as security, performance and infrastructure implementation or installation updates.
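To make that concrete, here is a minimal sketch of what such a results library might look like, assuming a simple SQLite-backed store; the database, table and column names are my own illustration, not a particular product.

```python
import sqlite3

# Illustrative schema for a centralized test-results library.
# The database, table and column names are assumptions, not a specific tool.
conn = sqlite3.connect("test_results_library.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_results (
        id          INTEGER PRIMARY KEY,
        project     TEXT NOT NULL,   -- release or project name
        requirement TEXT,            -- software requirement the test supports
        test_type   TEXT NOT NULL,   -- functional, nonfunctional, security, performance
        phase       TEXT NOT NULL,   -- SDLC phase in which the test was run
        result      TEXT NOT NULL,   -- pass / fail / blocked
        notes       TEXT             -- e.g. infrastructure or installation updates
    )
""")

# Every test run during the SDLC gets recorded, so results are visible to all groups.
conn.execute(
    "INSERT INTO test_results (project, requirement, test_type, phase, result, notes) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Release 2.1", "REQ-117", "performance", "system test", "pass",
     "load test rerun after web server upgrade"),
)
conn.commit()
conn.close()
```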

Also, the principles of quality assurance must be woven throughout the project. I like to arrange the requirements and the test plans by capturing the high-level details of each document in a test management tool or shared network location, covering the functional, nonfunctional, security and performance requirements and test plans with both positive and negative approaches. Then I'll create another folder within the test management system or shared network location for verifications, which contains all phases of my testing along with each phase's results.
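As a rough illustration of that layout, the sketch below builds one possible folder structure with Python; the root path and folder names are assumptions of mine rather than anything a specific test management tool prescribes.

```python
from pathlib import Path

# One possible shared-location layout for a single release.
# The root path and every folder name here are illustrative assumptions.
root = Path("Release_2.1")   # in practice this would live on a shared network drive

areas = ["functional", "nonfunctional", "security", "performance"]
for area in areas:
    # High-level requirement details for each area
    (root / "requirements" / area).mkdir(parents=True, exist_ok=True)
    # Test plans for each area, with both positive and negative approaches
    (root / "test_plans" / area / "positive").mkdir(parents=True, exist_ok=True)
    (root / "test_plans" / area / "negative").mkdir(parents=True, exist_ok=True)

# A separate verifications folder holds every testing phase and that phase's results.
for phase in ["unit", "integration", "system", "regression", "acceptance"]:
    (root / "verifications" / phase / "results").mkdir(parents=True, exist_ok=True)
```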

Whenever a defect occurs, I note the phase during which the defect took place and which test plan was created as a result. By keeping security testing and performance testing information in the same place, everything is easy to find and navigate.
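Continuing the earlier sketch, a small defects table could capture that link between defect, phase and test plan; again, every name here is a hypothetical illustration.

```python
import sqlite3

# Hypothetical extension of the same results library: each defect is tied to the
# SDLC phase in which it occurred and to the test plan created because of it.
conn = sqlite3.connect("test_results_library.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS defects (
        id          INTEGER PRIMARY KEY,
        project     TEXT NOT NULL,
        phase_found TEXT NOT NULL,   -- phase during which the defect took place
        test_plan   TEXT,            -- test plan created as a result of this defect
        description TEXT
    )
""")
conn.execute(
    "INSERT INTO defects (project, phase_found, test_plan, description) "
    "VALUES (?, ?, ?, ?)",
    ("Release 2.1", "system test", "TP-SEC-042", "session token exposed in URL"),
)
conn.commit()
conn.close()
```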
