Welcome to Machers Blog

Blogging the world of technology and testing to help people build their careers.

Friday, January 16, 2009

How to evaluate testing software and tools

David W Johnson
Once a testing organization reaches a certain size, level of maturity or workload, the requirement to purchase or build testing software or aids becomes apparent. There are several classes of testing tools available today that make the testing process easier, more effective and more productive. Choosing the appropriate tool to meet the testing organization's long-term and short-term goals can be a challenging and frustrating process. But following a few simple guidelines and applying a common-sense approach to software acquisition and implementation will lead to a successful implementation of the appropriate tool and a real return on investment (ROI).
One of the simplest questions to ask when looking at testing software is "What is ROI?" The simplest answer is "Anything that reduces the resource (people) hours required to accomplish any given task." Testing tools should be brought into an organization to improve the efficiency of a proven testing process. The value of the actual process has already been established within the organization or within the industry.
Example: Test management
The organization has meticulously tracked test requirements and test cases using spreadsheets, but it finds this to be a cumbersome process as the test organization grows. It has been shown that this process has reduced the number of defects reaching the field, but the cost of maintaining the approach now impacts its effectiveness. Solution: Invest in a test management tool or suite of tools.
Example: Test automation
The organization has created a suite of manual test cases using a text editor, but it finds it difficult to maintain, use and execute these test cases efficiently as the test organization's role grows. The test cases have proven effective in detecting defects before they reach production, but the time required to manage and execute these test cases now impacts the ROI. Solution: Invest in a test automation tool or suite of tools.
Example: Defect Management
The test organization has implemented a defect tracking process using email and a relational database, but it now finds that defects are being duplicated and mishandled as the volume of defects grows. Solution: Upgrade the current in-house solution or invest in a defect management tool.
Needs analysis
The first thing an organization must accomplish is to catalogue the needs or requirements the testing software is expected to satisfy. For an organization that is new to the acquisition process, this can be an intimidating exercise. There are three categories or "points of view" that must be addressed: management/organization, test architecture and end user.
Needs analysis: Management/organization
Management or the test organization needs to clearly state what the objective is for purchasing testing software. It must state the mission or goal that will be met by acquiring the test software and the expected ROI in terms of person-hours once the tool has been fully implemented. That can be accomplished by creating a simple mission statement and a minimum acceptable ROI.
It should be noted that any ROI of less than three hours saved for every hour invested should be considered insufficient because of the impact of introducing a new business process into the testing organization. This should be a concise statement of the overall goal (one to three sentences), not a dissertation or catalogue of the product's capabilities.
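The rule of thumb above can be expressed as a quick calculation. All figures and names in this sketch are hypothetical, chosen only to illustrate the threshold check:

```python
# Sketch of the ROI rule of thumb: a tool pays off only if it saves
# at least three hours for every hour invested in adopting it.
# All figures below are hypothetical.

def roi_ratio(hours_saved, hours_invested):
    """Hours saved per hour invested in the tool."""
    return hours_saved / hours_invested

MIN_ACCEPTABLE_ROI = 3.0  # minimum suggested above

# Example: adoption costs 40 person-hours, saves 150 hours of manual effort
ratio = roi_ratio(hours_saved=150, hours_invested=40)
print(f"ROI ratio: {ratio:.2f}")
print("Sufficient" if ratio >= MIN_ACCEPTABLE_ROI else "Insufficient")
```

The same check works with the stricter four- or five-hour minimums used in the mission statements below; only the threshold constant changes.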
Example: Test management
The selected Test Management system shall enable end users to author and maintain requirements and test cases in a Web-enabled, shareable environment. Furthermore, the test management tool shall support test management "best practices" as defined by the Test Organization. The minimum acceptable ROI is four hours saved for every hour currently invested.
Example: Test automation
The selected Test Automaton tool shall enable end users to author, maintain and execute automated test cases in a Web-enabled, shareable environment. Furthermore, the test automation tool shall support test case design, automation and execution "best practices" as defined by the Test Organization. The minimum acceptable ROI is five hours saved for every hour currently invested.
Example: Defect management
The selected Defect Management tool shall enable end users to author, maintain and track/search defects in a Web- and email-enabled, shareable environment. Furthermore, the defect management tool shall support authoring, reporting and tracking "best practices" as defined by the Test Organization. The minimum acceptable ROI is four hours saved for every hour currently invested.
Needs analysis: Test architecture
Management has defined the immediate organizational goal, but the long-term architectural necessities must be defined by the testing organization. When first approaching the acquisition of testing software, test organizations usually have not invested much effort in defining an overall test architecture. Lack of an overall test architecture can lead to product choices that may be effective in the short term but lead to additional long-term costs or even replacement of a previously selected toolset.
If an architectural framework has been defined, then the architectural needs should already be clearly understood and documented. If not, then a general set of architectural guidelines can be applied. Example:
The selected testing software and tool vendor shall do the following:
1. Have a record of integrating successfully with all Tier 1 testing software vendors.
2. Have a history of operational success in the appropriate environments.
3. Have an established end-user community that is accessible to any end user.
4. Support enterprise-wide collaboration.
5. Support customization.
6. Support several (1 to n) simultaneous engagements/projects.
7. Provide a well-designed, friendly and intuitive user interface.
8. Provide a smooth migration/upgrade path from one version of the product to the next.
9. Provide a rich online help facility and effective training mechanisms (tutorials, courseware, etc.).
The general architectural requirements for any tool will include more objectives than the nine listed above, but it is important to note that any objective should be applied across the entire toolset.
Needs analysis: End user
The end user needs analysis should be a detailed dissertation or catalogue of the envisioned product capabilities as they apply to the testing process. (It would probably be a page or more of requirements itemized or tabulated in such a way as to expedite the selection process.) This is where the specific and perhaps unique product capabilities are listed. The most effective approach is to start from a set of general requirements and then extend into a catalogue of more specific/related requirements.
Example: Test management
The Test Management solution shall do the following:
1. Support the authoring of test requirements.
2. Support the maintenance of test requirements.
3. Support enterprise-wide, controlled access to test requirements (Web-enabled preferred).
4. Support discrete grouping or partitioning of test requirements.
5. Support traceability of requirements to test cases and defects.
6. Support "canned" and "user-defined" queries against test requirements.
7. Support "canned" and "user-defined" reports against test requirements.
8. Support coverage analysis of test requirements against test cases.
9. Support the integration of other toolsets via a published API or equivalent capability.
10. And so on…
The key here is to itemize the requirements to a sufficient level of detail and then apply these requirements against each candidate.
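Item 8 above, coverage analysis of test requirements against test cases, is the kind of report a test management tool should automate. As a minimal sketch of what that analysis amounts to (the requirement IDs and mappings are invented for illustration):

```python
# Minimal sketch of requirement-to-test-case coverage analysis.
# Keys are requirement IDs; values are the test cases that trace to them.
# All IDs below are hypothetical.

coverage = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],            # no test case yet -> a coverage gap
}

def coverage_report(coverage):
    """Return percent of requirements covered and the uncovered ones."""
    uncovered = [req for req, cases in coverage.items() if not cases]
    covered = len(coverage) - len(uncovered)
    percent = 100.0 * covered / len(coverage)
    return percent, uncovered

percent, uncovered = coverage_report(coverage)
print(f"Coverage: {percent:.0f}%, uncovered: {uncovered}")
```

A real tool adds traceability to defects, change history and reporting on top of this, but the core question it answers is the same: which requirements have no test case behind them.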
Example: Test automation
The Test Automation solution shall do the following:
1. Support the creation, implementation and execution of automated test cases.
2. Support enterprise-wide, controlled access to test automation (Web-enabled preferred).
3. Support data-driven automated test cases.
4. Support keyword-enabled test automation.
5. Integrate with all Tier 1 and 2 test management tools that support integration.
6. Integrate with all Tier 1 and 2 defect management tools that support integration.
7. Enable test case design within a non-technical framework.
8. Enable test automation and verification of Web, GUI, .NET, and Java applications.
9. Support the integration of other toolsets via a published API or equivalent capability.
10. And so on…
Once again, the key is to itemize the requirements to a sufficient level of detail. It is not necessary for all the requirements to be "realistic" in terms of what is available. Looking to the future can often lead to choosing a tool that eventually does provide the desired ability.
Example: Defect management
The Defect Management solution shall do the following:
1. Support the creation of defects.
2. Support the maintenance of defects.
3. Support the tracking of defects.
4. Support enterprise-wide, controlled access to defects (Web-enabled preferred).
5. Support integration with all Tier 1 and 2 test management tools that support integration.
6. Enable structured and ad hoc searches for existing defects.
7. Enable the categorization of defects.
8. Enable customization of defect content.
9. Support "canned" and customized reports.
10. And so on…
In all cases, your understanding of the basic needs will change as you proceed through the process of defining and selecting the appropriate testing software.
In addition, for each case make sure that no vendor redefines your initial goal for you. Becoming an educated consumer in any given product space will naturally refine the basic requirements; those refinements should be recognized and documented.
Identify candidates
Identifying a list of potential software candidates can be accomplished by investigating several obvious sources: generic Web search, online quality assurance and testing forums, QA and testing e-magazines and co-workers. Once you create a list of potential software candidates, an assessment of currently available reviews can be done. Remember to keep an eye open for obvious marketing ploys.
It is also important to note which products command the largest portion of the existing market and which product has the fastest growth rate. This relates to the availability of skilled end users and end-user communities. Review the gathered materials against the needs analysis and create a short list of candidates (three to five) for assessment.
Assess candidates
If you have been very careful and lucky, your first encounter with the vendor's sales force will occur at this time. This can be a frustrating experience if you are purchasing a relatively small number of licenses, or it can be an intimidating one if you are going to be placing an order for a large number of licenses. Being vague as to the eventual number of licenses can put you in the comfortable middle ground.
Assessments of any testing software should be accomplished onsite with a full demonstration version of the software. When installing any new testing software, install it on a typical end-user system; check for DLL conflicts, registry entry issues and other file conflicts; and ensure the software is operational. Record any issues discovered during the initial installation and seek clarification and resolution from the vendor.
Once the testing software has been installed, assess the software against the previous needs analysis -- first performing any available online tutorials and then applying the software to your real-world situation. Record any issues discovered during the assessment process and seek clarification and resolution from the vendor. Any additional needs discovered during an assessment should be recorded and applied to all candidates.
The assessment process will lead to the assessment team gaining skills in the product space. It is always wise to do one final pass of all candidates once the initial assessment is completed. Each software candidate can now be graded against the needs/requirements and a final selection can be made.
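The final grading step can be made concrete with a simple weighted scorecard: weight each itemized requirement by importance, score each candidate against it, and rank the totals. The tool names, weights and scores below are all hypothetical:

```python
# Sketch of grading candidates against the itemized requirements.
# Weights reflect importance; scores (0-5) come from the onsite
# assessment. All names and numbers are hypothetical.

requirements = {              # requirement -> weight
    "web-enabled access": 3,
    "traceability":       3,
    "custom reports":     2,
    "published API":      1,
}

candidates = {                # candidate -> requirement -> score 0..5
    "Tool A": {"web-enabled access": 5, "traceability": 4,
               "custom reports": 3, "published API": 5},
    "Tool B": {"web-enabled access": 4, "traceability": 5,
               "custom reports": 5, "published API": 2},
}

def weighted_score(scores, requirements):
    """Sum of (weight * score) across all itemized requirements."""
    return sum(weight * scores.get(req, 0)
               for req, weight in requirements.items())

ranked = sorted(candidates,
                key=lambda name: weighted_score(candidates[name], requirements),
                reverse=True)
for name in ranked:
    print(name, weighted_score(candidates[name], requirements))
```

Keeping the scorecard explicit also makes the final pass easier: when the assessment team's skills improve mid-process, re-scoring every candidate against the same sheet keeps the comparison fair.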
Implementation
Implementation obviously is not part of the selection process, but it is a common point of failure. Test organizations will often invest in testing software but not in the wherewithal to successfully use it. Investing hundreds of thousands of dollars in software but not investing capital in onsite training and consulting expertise is a recipe for disaster.
The software vendor should supply a minimum level of training for any large purchase and be able to supply or recommend onsite consultants/trainers who will ensure the test organization can take full advantage of the purchased software as quickly as possible. By bringing in the right mix of training, consulting and vendor expertise, the test organization can avoid much of the disruption any change in process brings and quickly gain the benefits that the software can provide.
