
Monday, March 9, 2009

Testing Guideline

Source: Extract from one stop software testing
Testing should reduce software development risk-Software Testing Guideline-1
IT executives develop an IT strategy, and strategic plans are converted into business initiatives. The planning cycle, which comprises the plan and do components of the plan-do-check-act (PDCA) cycle, is easy to understand. From a senior IT executive's perspective, however, the check component must address business risk.

Risk is the probability that undesirable events will occur. These undesirable events will prevent the organization from successfully implementing its business initiatives.

For example, there is the risk that the information used in making business decisions will be incorrect or late. If the risk turns into reality and the information is late or incorrect, an erroneous business decision may result in a failed initiative. Controls are the means an organization uses to minimize risk. Software testing is a control that contributes to eliminating or minimizing risks; thus, senior executives rely on controls such as software testing to assist them in fulfilling their business objectives.

The purpose of controls such as software testing is to provide information to management so that they can better react to risky situations. For example, testing may indicate that the system will be late or that there is a low probability that the information produced will be correct. With that information, management can make decisions to minimize the risk; knowing that the project may be late, for instance, they could assign additional personnel to speed up the software development effort.

Testers must understand that their role in a business is to evaluate risk and report the results to management. Viewed from this perspective, testers must first ensure they understand the business risk, and then develop test strategies focused on those risks. The highest business risk should receive the most test resources, whereas the lowest business risk should receive the fewest resources. This way, the testers are assured that they are focusing on what is important to their management.
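
To make the idea of risk-proportional testing concrete, the sketch below spreads a test budget according to each risk's exposure (probability times impact). The risk items, scores, and 200-hour budget are invented for this illustration and do not come from the source text.

# Hypothetical sketch: spread a fixed test budget in proportion to business risk.
# The risk items, scores, and 200-hour budget are made-up values for illustration.

risks = {
    "incorrect billing information": {"probability": 0.4, "impact": 9},
    "late management reports":       {"probability": 0.6, "impact": 5},
    "cosmetic report-layout issues": {"probability": 0.8, "impact": 1},
}

total_test_hours = 200

# Exposure = probability x impact; higher exposure earns more test resources.
exposure = {name: r["probability"] * r["impact"] for name, r in risks.items()}
total_exposure = sum(exposure.values())

for name, e in sorted(exposure.items(), key=lambda kv: kv[1], reverse=True):
    hours = total_test_hours * e / total_exposure
    print(f"{name:32s} exposure={e:4.1f}  test hours={hours:5.1f}")
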
Testing should be performed effectively-Software Testing Guideline-2
Effectiveness means getting the maximum benefit from minimum resources, and that requires a well-defined process. When the process is well-defined, there should be little variance in the cost of performing a task from tester to tester. If no well-defined process is in place, the cost of performing the same task can vary significantly between testers.
The objective of the test process, from an effectiveness viewpoint, is twofold. First, a defined process reduces variance by having each tester perform the work in a consistent manner. Second, the process reduces variance further through continuous process improvement. Once variance is minimized, testers can perform the tests they say they will perform, within the time frame and at the cost they say they can perform them.
Testing should uncover defects-Software Testing Guideline-3
All testing focuses on discovering and eliminating defects or variances from what is expected. There are two types of defects:

■■ Variance from specifications. A defect from the perspective of the builder of the product.

■■ Variance from what is desired. A defect from a user's (or customer's) perspective.

Testers need to identify both types of defects. Defects generally fall into one of the following three categories:

■■ Wrong. The specifications have been implemented incorrectly. This defect is a variance from what the customer/user specified.

■■ Missing. A specified or wanted requirement is not in the built product. This can be a variance from specification, an indication that the specification was not implemented, or a requirement of the customer identified during or after the product was built.

■■ Extra. A requirement incorporated into the product that was not specified. This is always a variance from the specifications, but it may be an attribute desired by the user of the product. However, it is still considered a defect.
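
As a rough, hypothetical sketch of how a test team might record the two perspectives and three categories above, the class and enum names below are invented for illustration rather than taken from the source.

from dataclasses import dataclass
from enum import Enum

class Perspective(Enum):
    VARIANCE_FROM_SPECIFICATION = "builder's view"
    VARIANCE_FROM_DESIRE = "user's or customer's view"

class Category(Enum):
    WRONG = "specification implemented incorrectly"
    MISSING = "specified or wanted requirement absent from the built product"
    EXTRA = "unspecified requirement incorporated into the product"

@dataclass
class Defect:
    summary: str
    perspective: Perspective
    category: Category

# Example: an unrequested feature is still recorded as a defect.
extra_feature = Defect(
    summary="Report prints an unrequested summary page",
    perspective=Perspective.VARIANCE_FROM_SPECIFICATION,
    category=Category.EXTRA,
)
print(extra_feature.category.value)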

Defects Versus Failures
A defect found in the system being tested can be classified as wrong, missing, or extra. The defect may be within the software or in the supporting documentation. While the defect is a flaw in the system, it has no negative impact until it affects the operational system.

A defect that causes an error in operation or negatively impacts a user/customer is called a failure. The main concern with defects is that they will turn into failures. It is the failure that damages the organization. Some defects never turn into failures. On the other hand, a single defect can cause millions of failures.

Why Are Defects Hard to Find?
Some defects are easy to spot, whereas others are more subtle. There are at least two reasons defects go undetected:
■■ Not looking. Tests often are not performed because a particular test condition is unknown. Also, some parts of a system go untested because developers assume software changes don’t affect them.
■■ Looking but not seeing. This is like losing your car keys only to discover they were in plain sight the entire time. Sometimes developers become so familiar with their system that they overlook details, which is why independent verification and validation should be used to provide a fresh viewpoint.
Defects typically found in software systems are the results of the following circumstances:
■■ IT improperly interprets requirements. Information technology (IT) staff misinterpret what the user wants but correctly implement what they believe the user wanted.
■■ The users specify the wrong requirements. The specifications given to IT staff are erroneous.
■■ The requirements are incorrectly recorded. Information technology staff fails to record the specifications properly.
■■ The design specifications are incorrect. The application system design does not achieve the system requirements, but the design as specified is implemented correctly.
■■ The program specifications are incorrect. The design specifications are incorrectly interpreted, making the program specifications inaccurate; however, the program can still be coded correctly according to those inaccurate specifications.
■■ There are errors in program coding. The program is not coded according to the program specifications.
■■ There are data entry errors. Data entry staff incorrectly enter information into the computers.
■■ There are testing errors. Tests either falsely detect an error or fail to detect one.
■■ There are mistakes in error correction. The implementation team makes errors in implementing your solutions.
■■ The corrected condition causes another defect. In the process of correcting a defect, the correction process itself institutes additional defects into the application system.

Usually, you can identify the test tactics for any test process easily; it is estimating the cost of the tests that is difficult. Testing costs depend heavily on when in the project life cycle testing occurs: the later in the life cycle it occurs, the higher the cost. The cost of a defect is twofold: you pay to identify the defect and then to correct it.
Testing should be performed using business logic-Software Testing Guideline-4
The cost of identifying and correcting defects increases exponentially as the project progresses. A defect is cheapest to fix when it is corrected in the same SDLC phase in which it was introduced. Assume a defect found and corrected during the design phase costs x to fix. If that same defect is corrected during the system test phase, it will cost 10x; if it is corrected after the system goes into production, it will cost 100x. Clearly, identifying and correcting defects early is the most cost-effective way to develop an error-free system.
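
To make the arithmetic visible, here is a minimal sketch that assumes a defect costing $500 to fix in the design phase; the dollar figure and phase list are assumptions, and only the x/10x/100x multipliers come from the text above.

# Hypothetical illustration of the 1x / 10x / 100x cost relationship.
# The $500 base cost and the phase list are assumed values for this sketch.
base_cost = 500  # cost to fix the defect in the design phase, where it was introduced

phase_multipliers = {
    "design phase (same phase as the defect)": 1,
    "system test phase": 10,
    "production": 100,
}

for phase, factor in phase_multipliers.items():
    print(f"Corrected during {phase}: ${base_cost * factor:,}")
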
Testing should occur throughout the development life cycle-Software Testing Guideline-5
Life-cycle testing involves continuous testing of the solution even after software plans are complete and the tested system is implemented. At several points during the development process, the test team should test the system to identify defects at the earliest possible point.

Life-cycle testing cannot occur until you formally develop your test process. For proper life-cycle testing to occur, IT must provide, and agree to, a strict schedule for completing the various phases of the test process. If IT does not determine the order in which completed pieces of software are delivered, appropriate tests cannot be scheduled or conducted.

Testing is best accomplished by forming a test team. The test team must use structured methodologies, and it should not test the system with the same methodology that was used to develop it; the effectiveness of the test team depends on developing the system under one methodology and testing it under another. The testing and implementation teams begin their work at the same time and with the same information: the development team defines and documents the requirements for implementation purposes, and the test team uses those requirements to test the system. At appropriate points during the development process, the test team runs the compliance process to uncover defects, using the structured testing techniques outlined in this book as a basis for evaluating the corrections.

As you’re testing the implementation, prepare a series of tests that your IT department can run periodically after your revised system goes live. Testing does not stop once you’ve completely implemented your system; it must continue until you replace or update it again.
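
Below is a minimal sketch of the kind of repeatable check the IT department could run periodically after go-live. It is written in pytest style, and calculate_invoice with its figures is a hypothetical stand-in rather than anything named in the source.

# Hypothetical post-implementation regression check, written in pytest style.
# calculate_invoice is a stand-in for a production routine; the figures are invented.

def calculate_invoice(quantity, unit_price, tax_rate):
    """Stand-in for the production routine under test."""
    return round(quantity * unit_price * (1 + tax_rate), 2)

def test_invoice_total_is_stable():
    # Re-run after every release to confirm the known-good answer still holds.
    assert calculate_invoice(3, 19.99, 0.07) == 64.17

Running such tests after each release gives a quick signal that a previously correct calculation has not regressed.
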
Testing should test both function and structure-Software Testing Guideline-6
When testers test your project team’s solution, they’ll perform functional or structural tests. Functional testing is sometimes called black box testing because no knowledge of the system’s internal logic is used to develop test cases. For example, if a certain function key should produce a specific result when pressed, a functional test would be to validate this expectation by pressing the function key and observing the result. When conducting functional tests, you’ll be using validation techniques almost exclusively.

Conversely, structural testing is sometimes called white box testing because knowledge of the system's internal logic is used to develop hypothetical test cases. Structural tests predominantly use verification techniques. If a software development team creates a block of code that will allow a system to process information in a certain way, a test team would verify this structurally by reading the code and, given the system's structure, judging whether the code could reasonably work. If they felt it could, they would plug the code into the system and run an application to validate the code structurally. Each method has its pros and cons, as follows (a small illustration of the two approaches appears after the list):

■■ Functional testing advantages:
■■ Simulates actual system usage
■■ Makes no system structure assumptions

■■ Functional testing disadvantages:
■■ Includes the potential to miss logical errors in software
■■ Offers the possibility of redundant testing

■■ Structural testing advantages:
■■ Enables you to test the software’s logic
■■ Enables you to test structural attributes, such as efficiency of code

■■ Structural testing disadvantages:
■■ Does not ensure that you’ve met user requirements
■■ May not mimic real-world situations
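
The contrast between the two approaches can be made concrete with a small, hypothetical example; the classify_discount function and its test values below are invented and not part of the source text.

# Hypothetical function under test; the discount rules are invented for illustration.
def classify_discount(order_total):
    if order_total >= 1000:
        return "gold"
    elif order_total >= 100:
        return "silver"
    return "none"

# Functional (black box) checks: only inputs and expected outputs are used;
# nothing is assumed about how classify_discount decides internally.
assert classify_discount(50) == "none"
assert classify_discount(500) == "silver"

# Structural (white box) checks: chosen by reading the code so that every
# branch, including the boundary values 100 and 1000, is exercised.
assert classify_discount(100) == "silver"
assert classify_discount(1000) == "gold"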
