David W. Johnson
Several classes of testing tools are available today that make the testing process easier, more effective and more productive. When properly implemented, these tools can provide a test organization with substantial gains in testing efficiency.
However, test tools need to fit into the overall testing architecture and should be viewed as process enablers -- not as the "answer." Test organizations will often look to tools for reviews, test management, test design, test automation and defect tracking to facilitate the process. It is quite common for a testing tool or family of tools to address more than one of these needs, but for convenience they will be addressed here from a functional perspective, not a "package" perspective.
It is important to note that, as with any tool, improper or ad-hoc implementation of a testing tool can negatively impact the testing organization. Ensure a rigorous selection process is adhered to when selecting any tool, including such steps as a needs analysis, an on-site evaluation, and an assessment of return on investment (ROI).
Reviews and technical inspections are the most cost-effective way to detect and eliminate defects in any project. They are also one of the most underutilized testing techniques; consequently, there are very few tools available to meet this need. Any test organization that is beginning to realize the benefits of reviews and inspections but is encountering scheduling issues between participants should look to an online collaboration tool. There are several tools available for the review and update of documents, but only one that I'm aware of actually addresses the science and discipline of reviews -- ReviewPro by Software Development Technologies. I normally do not "plug" a particular tool, but when the landscape is sparse I believe some recognition is in order.
Test management encompasses a broad range of activities and deliverables. The test management aid or set of aids selected by an organization should integrate smoothly with any communication (email, network, etc.) and automation tools that will be applied during the testing effort. Generic management tools will often address the management of resources, schedules and testing milestones but not the activities and deliverables specific to the testing effort -- test requirements, test cases, test results and analysis.
Test requirements: Requirements or test requirements often become the responsibility of the testing organization. For any set of requirements to be useful to the testing organization, it must be maintainable, testable, consistent and traceable to the appropriate test cases. The requirements management tool must be able to fulfill those needs within the context of the testing team's operational environment.
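The traceability check described above can be sketched in a few lines. This is a hypothetical illustration -- the requirement and test case IDs are invented for the example, not drawn from any particular tool:

```python
# Hypothetical traceability check: flag requirements that no test
# case covers. Requirement and test case IDs are illustrative.

requirements = {"REQ-001", "REQ-002", "REQ-003"}

# each test case lists the requirements it exercises
test_cases = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-001", "REQ-002"},
}

covered = set().union(*test_cases.values())
untraced = sorted(requirements - covered)
print(untraced)  # requirements without a traceable test case
```

A requirements management tool performs this same mapping at scale, flagging untraced requirements before test execution begins.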
Test cases: The testing organization is responsible for authoring and maintaining test cases. A test case authoring and management tool should enable the test organization to catalogue, author, maintain and execute test cases, both manually and through automation. These test cases need to be traceable to the appropriate requirements, and the results recorded in a manner that supports coverage analysis. Key integration questions when evaluating test case management tools: Does it integrate with the test requirements aid? Does it integrate with the test automation tools being used? Will it support coverage analysis?
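As a minimal sketch of the kind of record such a tool maintains -- the field names here are assumptions for illustration, not any vendor's schema:

```python
# Minimal test case record supporting cataloguing, traceability to
# requirements, and result capture. Field names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    case_id: str
    title: str
    requirement_ids: List[str]   # traceability to requirements
    automated: bool = False      # manual vs. automated execution
    last_result: str = "NOT RUN"

    def record_result(self, passed: bool) -> None:
        self.last_result = "PASS" if passed else "FAIL"

tc = TestCase("TC-01", "Login succeeds", ["REQ-001"], automated=True)
tc.record_result(True)
print(tc.case_id, tc.last_result)
```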
Test results and analysis: A test management suite of tools and aids needs to be able to report on several aspects of the testing effort. There is an immediate requirement for test case results -- which test case steps passed and which failed? There will be periodic status reports that will address several aspects of the testing effort: test cases executed/not executed, test cases passed/failed, requirements tested/not tested, requirements verified/not verified, and coverage analysis. The reporting mechanism should also support the creation of custom or ad-hoc reports that are required by the test organization.
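The periodic status report described above reduces to a few counts over the result data. A hypothetical sketch, with invented result values:

```python
# Status-report sketch: summarise test case results into the
# executed/passed/failed counts described above. Data is invented.

results = {
    "TC-01": "PASS",
    "TC-02": "FAIL",
    "TC-03": None,  # not yet executed
}

executed = [r for r in results.values() if r is not None]
summary = {
    "executed": len(executed),
    "not_executed": len(results) - len(executed),
    "passed": executed.count("PASS"),
    "failed": executed.count("FAIL"),
}
print(summary)
```

A real test management suite adds the requirements dimension (tested/verified) and lets the organization define custom reports over the same data.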
Several test automation frameworks have been implemented over the years by commercial vendors and testing organizations: record and playback, extended record and playback, and load/performance.
Record and playback: Record and playback frameworks were the first commercially successful testing solutions. The tool simply records a series of steps or actions against the application and allows a playback of the recording to verify that the behavior of the application has not changed.
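The idea can be illustrated with a toy sketch. The "application" and recorded steps below are assumptions for the example; real tools record GUI events and screen state:

```python
# Toy record-and-playback sketch. run_app stands in for the
# application under test; the recording pairs each action with
# the screen that was observed when it was recorded.

def run_app(action, state):
    if action == "open":
        state["screen"] = "home"
    elif action == "click_login":
        state["screen"] = "login"
    return state["screen"]

recording = [("open", "home"), ("click_login", "login")]

def playback(steps):
    state = {}
    for action, expected_screen in steps:
        if run_app(action, state) != expected_screen:
            return False   # application behavior has changed
    return True

print(playback(recording))  # True while behavior is unchanged
```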
Extended record and playback: It quickly became apparent that a simple record and playback paradigm was not very effective and did not make test automation available to non-technical users. Vendors extended the record and playback framework to make the solution more robust and transparent. These extensions included data-driven, keyword-driven and component-based solutions.
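A keyword-driven extension, for example, separates a keyword vocabulary (maintained by technical staff) from test scripts authored as simple tables. A minimal sketch, with an invented application and keywords:

```python
# Keyword-driven sketch. The application state, keywords and test
# table are illustrative assumptions, not a vendor's actual API.

def do_login(app, user):
    app["user"] = user

def do_logout(app):
    app.pop("user", None)

# keyword vocabulary maintained by technical staff
KEYWORDS = {"login": do_login, "logout": do_logout}

# test "script" authored by non-technical testers as rows of
# (keyword, arguments)
script = [
    ("login", ["alice"]),
    ("logout", []),
    ("login", ["bob"]),
]

app_state = {}
for keyword, args in script:
    KEYWORDS[keyword](app_state, *args)

print(app_state)  # final application state after the scripted steps
```

Data-driven extensions work similarly, feeding one recorded script many rows of input data.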
Load/performance: Load/performance test frameworks provide a mechanism to simulate transactions against the application being tested and to measure the behavior of the application while it is under this simulated load. The load/performance tool enables the tester to load, measure and control the application.
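In miniature, the load-and-measure loop looks like the sketch below. `time.sleep` stands in for a real transaction against the application, and the worker and transaction counts are arbitrary assumptions:

```python
# Toy load-simulation sketch: fire concurrent "transactions" and
# measure latency. time.sleep stands in for a real application call.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(_):
    start = time.perf_counter()
    time.sleep(0.01)          # stand-in for a real request
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=5) as pool:
    latencies = list(pool.map(transaction, range(20)))

print(f"mean {statistics.mean(latencies) * 1000:.1f} ms, "
      f"max {max(latencies) * 1000:.1f} ms")
```

A real load/performance tool scales this to thousands of virtual users and correlates the latency measurements with server-side resource metrics.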
The primary purpose of testing is to detect defects in the application before it is released into production; furthermore, defects are arguably the only product the testing team produces that is seen by the project team. The defect management tool must enable the test organization to author, track and maintain defects found during the testing effort, and to trace them to the relevant test cases and test requirements. The tool also needs to support both scheduled and ad-hoc analysis and reporting on defects.
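The defect record such a tool maintains can be sketched as follows. Field names and IDs are illustrative assumptions, not any tracker's actual schema:

```python
# Minimal defect-record sketch with the traceability fields the
# text calls for. Field names and IDs are illustrative only.
from dataclasses import dataclass, field
from itertools import count

_next_id = count(1)

@dataclass
class Defect:
    summary: str
    test_case_id: str     # trace to the test case that found it
    requirement_id: str   # trace back to the requirement
    status: str = "OPEN"
    defect_id: int = field(default_factory=lambda: next(_next_id))

    def close(self) -> None:
        self.status = "CLOSED"

d = Defect("Login rejects a valid password", "TC-01", "REQ-001")
d.close()
print(d.defect_id, d.status)
```

The traceability fields are what make scheduled and ad-hoc reporting possible: defects can be rolled up by requirement, by test case, or by status.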