
Monday, March 9, 2009

What's the difference between priority and severity of bugs in Software Testing?

Source: one stop software testing
Priority" is associated with scheduling, and "severity" is associated with standards.

"Priority" means something is afforded or deserves prior attention; a precedence
established by order of importance (or urgency).

"Severity" is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles, e.g. a severe code of behavior.

The words priority and severity both come up in bug tracking. A variety of commercial problem-tracking/management tools are available. These tools, with the detailed input of software test engineers, give the team complete information, so developers can understand the bug, get an idea of its 'severity', reproduce it, and fix it.

Bugs are fixed according to project 'priorities' and the 'severity' of the bugs. The 'severity' of a problem is defined according to the customer's risk assessment and recorded in the selected tracking tool. Buggy software can 'severely' affect schedules, which in turn can lead to a reassessment and renegotiation of 'priorities'.
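To make the distinction concrete, here is a minimal sketch in Python (the Bug class, the enum values, and the example bugs are invented for illustration, not taken from any particular tracking tool). Severity records impact against standards; priority drives the fix schedule, and the two can point in opposite directions:

from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):   # impact on the product, judged against standards
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

class Priority(IntEnum):   # urgency of the fix, judged against the schedule
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Bug:
    title: str
    severity: Severity
    priority: Priority

bugs = [
    # A crash in a rarely used legacy report: severe, but it can wait.
    Bug("Crash exporting legacy report", Severity.CRITICAL, Priority.LOW),
    # A typo in the company name on the home page: harmless, but urgent.
    Bug("Company name misspelled on home page", Severity.LOW, Priority.HIGH),
]

# Scheduling works off priority, not severity.
for bug in sorted(bugs, key=lambda b: b.priority, reverse=True):
    print(f"{bug.priority.name:>6}  {bug.severity.name:>8}  {bug.title}")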
How to write an effective bug report?
"The purpose of a bug report is to let the developers see their faults and failures of the application under test. The bug report explains the gap between the actual result and expected result, and the details of that how to reproduce the bug."

It often happens that when a bug report is ineffective or incomplete, programmers face many problems while fixing the bug.

A bad bug report can mean:
1. the bug is not reproducible by developers
2. the bug is fixed but with incorrect functionality
3. delays in bug fixes
and many more….

Sample of bad bug report:

Bug Title: Error message

When running the application, I get an "Internal Server Error" that says "See the .log file for more details".

Steps to Recreate:
Happens when "Document.create = null". It is not happening when changed to " Document.create".

Expected results:
this error message should not appear when status is “Document.create = null”

Observed results:
See above.
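For contrast, here is a sketch of how the same report might look when rewritten against the practices that follow. The environment, build number, and exact steps are placeholders, since the original report leaves them out:

Bug Title: "Internal Server Error" appears when a document is created with Document.create = null

Environment: <application URL>, build <build number>, <browser/OS>

Pre-requisites: logged in as <test user>

Steps to Recreate:
1. <Open the page or form where the document is created.>
2. <Leave the relevant field empty so that Document.create = null.>
3. Submit the form.

Expected results:
No error message appears when Document.create = null.

Observed results:
An "Internal Server Error" page appears with the text "See the .log file for more details". The relevant .log excerpt and a screenshot are attached.

Notes:
The error does not occur when Document.create is set.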

So how do you write effective bug reports? Below are some bug report best practices:

1. Once the bug is found:
Check the bug repository to see whether the bug already exists. If it exists, check whether its status is CLOSED or OPEN; if the status is CLOSED, REOPEN it.
If the bug is not in the repository and is a new bug, then you need to report it.

2. If the bug is reproducible, report it. Otherwise, avoid reporting non-reproducible bugs (a best practice).

3. Report a new bug: The "Bug description", also known as the "Short description" or "Bug Summary", should be a short statement that points briefly at the exact problem. Writing a one-line description is an ART. The Bug Summary helps everyone quickly review outstanding problems and is the most important part of the bug. It should describe only the problem, not the replication steps.
If it is not clear, managers might defer the bug by mistake, and it also affects the individual performance of the tester.

4. The language of the bug: Keep the language as simple and as direct as possible. Don't point at any developer through your words. Remember: the bug is nasty, not the programmer.

The bug report should be easily understandable by developers, fellow testers, managers, or in some cases, even the customers.

5. Steps to Reproduce:
- The steps should be in a logical flow. Don’t break the flow or skip any step.
- Mention the Pre-requisites clearly.
- Use attachments and screenshots of errors, and annotate the screenshots.
- The details must be elaborated like which buttons were pressed and in what order.
Note – Please don't write an essay. Be clear and precise; people do not like to read long paragraphs.

6. Give examples, either with actual data or a dummy scenario. This makes it easy for developers to recreate the bug.

7. Provide the Test Case ID, requirement ID, and Specs Reference.

8. Define the proper Severity and Priority.
The impact of the defect should be thoroughly analyzed before setting the severity of the bug report. If you think that your bug should be fixed with a high priority, justify it in the bug report.

This justification should go in the Description section of the bug report.
If the bug is the result of a regression from previous builds/versions, raise the alarm. The severity of such a bug may be low, but the priority should typically be high.


9. Read what you wrote. Read the report back to yourself and see whether it is clear. If you have listed a sequence of actions that should produce the failure, follow them yourself to see if you missed a step.

10. Mention the correct environment, application link, build number, and login/password details (if any).

11. Common issues: It often happens that the bug is not reproducible by developers (even though the bug report is good). Don't worry; arrange a meeting/walkthrough with them and help them recreate the bug. Sometimes a bug appears one day and is gone the next. In this case, the bug can be assigned back to you. Accept it and close the bug with an appropriate comment like
“It is working fine now, but previously this problem was appearing. So, will close this bug after verifying in next build.”
Of course, you need to close the bug only after verifying it in the next release/build/patch, because it is an intermittent bug.
Thus a good tester needs to be patient and should always build a defense mechanism, preserving test data, screenshots, etc., to justify his statements.

12. Don't assume the expected results. Write the expectations that are mentioned in the test case, requirements documents, FDD, or specification documents.

That's all. Practice makes perfect.
Code Coverage – A White Box Testing Technique
What is code coverage – An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

As per wiki – “Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing.”

Code coverage measurement simply determines which statements in a body of code have been executed through a test run and which have not. In general, a code coverage system collects information about the running program and then combines it with source information to generate a report on the test suite's code coverage.
Code coverage is part of a feedback loop in the development process. As tests are developed, code coverage highlights aspects of the code which may not be adequately tested and which require additional testing. This loop will continue until coverage meets some specified target.
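As a concrete sketch of that feedback loop, here is how it might look with pytest and the coverage.py tool (the module calc and its function are invented for the example):

# calc.py -- hypothetical code under test
def absolute(x):
    if x < 0:
        return -x
    return x

# test_calc.py -- the test suite so far
from calc import absolute

def test_positive():
    assert absolute(5) == 5

# Running the suite under coverage:
#   coverage run -m pytest
#   coverage report -m
# The report's "Missing" column flags the 'return -x' line as never
# executed, so the feedback loop says: add a test for negative input.
def test_negative():
    assert absolute(-5) == 5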

The main ideas behind coverage:
- Systematically create a list of tasks (the testing requirements)
- Check that each task is covered during the testing

Code coverage is divided into six types, as listed below:

• Segment coverage – Each segment of code between control structures is executed at least once.
• Branch Coverage or Node Testing – Each branch in the code is taken in each possible direction at least once. Branch coverage gives a measure of how many assembler branch instructions are associated with each line, along with the number of branches taken/not taken.
• Compound Condition Coverage – When there are multiple conditions, you must test not only each direction but also each possible combination of conditions, which is usually done by using a 'truth table' (see the sketch after the path coverage notes below).
• Basis Path Testing – Each independent path through the code is taken in a pre-determined order. This point is discussed further just below.

Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the tester to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
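As a worked sketch (the function is invented): the complexity measure is McCabe's cyclomatic complexity, V(G) = E - N + 2, which for structured code equals the number of decisions plus one, and that number is the size of the basis set.

# Two decisions => V(G) = 2 + 1 = 3, so the basis set holds three paths.
def classify(n):
    if n < 0:            # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# Derive the basis set from a baseline path by flipping one decision at a time.
assert classify(-2) == "negative even"      # baseline: decisions (True, True)
assert classify(2) == "non-negative even"   # flip decision 1: (False, True)
assert classify(-1) == "negative odd"       # flip decision 2: (True, False)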

• Data Flow Testing (DFT) – In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code, i.e., those based on each piece of code chosen to be tracked. Even though the paths are considered independent, dependencies across multiple paths are not really tested for by this approach. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. This approach tends to uncover bugs like variables used but not initialized, or declared but not used, and so on.
• Path Testing – Path testing is where all possible paths through the code are defined and covered. This testing is extremely laborious and time consuming.

Path coverage
- Goal is to ensure that all paths through the program are taken
- Too many paths
- Restrict to paths in a subroutine
- or to two consecutive branches
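Here is the truth-table sketch promised under compound condition coverage, contrasting it with plain branch coverage (the function and tests are invented):

# Hypothetical guard with a compound condition.
def can_ship(in_stock, paid):
    if in_stock and paid:
        return True
    return False

# Branch coverage needs only two tests, one per branch direction.
assert can_ship(True, True) is True      # condition true
assert can_ship(False, True) is False    # condition false

# Compound condition coverage works through the full truth table:
#   in_stock  paid   result
#   True      True   True
#   True      False  False
#   False     True   False
#   False     False  False
assert can_ship(True, False) is False
assert can_ship(False, False) is False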

• Loop Testing – In addition to the above measures, there are testing strategies based on loop testing. These strategies relate to testing single loops, concatenated loops, and nested loops. Loops are fairly simple to test unless dependencies exist among loops or between a loop and the code it contains.
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops:
The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:

1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
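A sketch of those tests against an invented loop, where the maximum number of allowable passes is n = 5:

# Hypothetical loop under test: sums at most n = 5 readings.
def total(readings, n=5):
    result = 0
    for value in readings[:n]:
        result += value
    return result

assert total([]) == 0          # 1. skip the loop entirely
assert total([3]) == 3         # 2. only one pass through the loop
assert total([1, 1, 1]) == 3   # 3. m passes, where m < n
assert total([1] * 4) == 4     # 4. n - 1 passes
assert total([1] * 5) == 5     #    n passes
assert total([1] * 6) == 5     #    n + 1 items: the loop stays capped at n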


Nested Loops:
The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.
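A sketch of that procedure on an invented pair of nested loops:

# Hypothetical nested loops: sums a grid of readings row by row.
def grid_total(grid):
    result = 0
    for row in grid:          # outer loop
        for value in row:     # inner loop
            result += value
    return result

# Steps 1-2: hold the outer loop at its minimum (one row) and run
# the simple loop tests on the inner loop.
assert grid_total([[]]) == 0          # inner loop skipped
assert grid_total([[7]]) == 7         # one inner pass
assert grid_total([[1, 2, 3]]) == 6   # several inner passes

# Step 3: work outward, exercising the outer loop while the inner
# loop holds typical values.
assert grid_total([[1, 2], [3, 4]]) == 10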

Concatenated Loops:
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops:
This type of loop should be redesigned, not tested!
