
Wednesday, July 9, 2008

Test Methods

What approach should be used for testing?
What are the Test Derivation Techniques?
How many different Test Types are there?
Why use Generic Test Objectives?
What are Quality Gates?
What Acceptance Criteria should be used?
Testing Metrics - Do you have examples?
Why use Test Scripts?
What tools are available for Test Support?
How-to Guides - What are they?
What are the 10 best steps for software testing?

1.
What approach should be used for testing?
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation.
Common quality attributes include reliability, stability, portability, maintainability and usability.
Regardless of the methods used or the level of formality involved, the desired result of testing is confidence that the software has an acceptable defect rate.
When changes are made to software, regression testing checks that those changes do not break functionality that already worked (a small automated example appears at the end of this section).
The role of highly skilled professionals in software development has never been more difficult - or more crucial - as organisations try to complete application development faster and more cost-effectively.
Test teams that use manual testing exclusively are struggling to keep up.
Because they cannot test all the code, they risk missing significant defects. At the same time, they cannot stop testing long enough to learn new skills.
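To make the point about automation and regression concrete, here is a minimal sketch using Python's unittest module; the discount_price function and its expected values are hypothetical stand-ins for real production code under change.

```python
import unittest

# Hypothetical function under test: a price calculation that was recently changed.
def discount_price(price, percent):
    """Return the price after applying a percentage discount, rounded to 2 dp."""
    return round(price * (1 - percent / 100.0), 2)

class RegressionTests(unittest.TestCase):
    """Re-run after every change to confirm existing behaviour still holds."""

    def test_existing_discount_still_correct(self):
        self.assertEqual(discount_price(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount_price(50.0, 0), 50.0)

if __name__ == "__main__":
    unittest.main()
```

Because a suite like this runs unattended, it can be executed after every build, which is exactly where purely manual test teams fall behind.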
2.
What are the Test Derivation Techniques?
• Equivalence partitioning
• Boundary value analysis
• State transition testing
• Cause-effect graphing
• Syntax testing
• Statement testing
• Branch / decision testing
• Data flow testing
• Branch condition testing
• Branch condition combination testing
• Modified condition decision testing
• Business process
• Requirements coverage
• Use case derivation
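To illustrate the first two techniques in the list above, here is a minimal sketch, assuming a hypothetical validate_age function that accepts ages 18 to 65 inclusive; the partitions and boundaries follow directly from that assumed rule.

```python
import unittest

# Hypothetical function under test: accepts applicants aged 18 to 65 inclusive.
def validate_age(age):
    return 18 <= age <= 65

class AgeValidationTests(unittest.TestCase):
    # Equivalence partitioning: one representative value from each partition.
    def test_below_valid_partition(self):
        self.assertFalse(validate_age(10))

    def test_inside_valid_partition(self):
        self.assertTrue(validate_age(40))

    def test_above_valid_partition(self):
        self.assertFalse(validate_age(80))

    # Boundary value analysis: values on and just outside each boundary.
    def test_lower_boundary(self):
        self.assertFalse(validate_age(17))
        self.assertTrue(validate_age(18))

    def test_upper_boundary(self):
        self.assertTrue(validate_age(65))
        self.assertFalse(validate_age(66))

if __name__ == "__main__":
    unittest.main()
```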
3.
How many different Test Types are there?
• Archive tests
• Clinical safety tests
• Compatibility and conversion tests
• Conformance tests
• Cutover tests
• Flood and volume tests
• Functional tests
• Installation and initialisation tests
• Interoperability tests
• Load and stress tests
• Performance tests
• Portability tests
• End-to-end thread testing
• Recovery and restart
• Documentation tests / manual procedure tests
• Reliability / Robustness tests
• Security tests
• Temporal tests
• Black box / White box tests
• User interface tests / W3C WAI Accessibility testing
4.
Why use Generic Test Objectives?
• Demonstrate component meets requirements
• Demonstrate component is ready to reuse in larger subsystems
• Demonstrate that integrated components are correctly assembled or combined and that they collaborate as intended
• Demonstrate system meets functional requirements
• Demonstrate system meets non-functional requirements
• Demonstrate system meets industry regulation requirements
• Demonstrate supplier meets contractual obligations
• Validate that system meets business or user requirements
• Demonstrate system, processes, and people meet business requirements
5.
What are Quality Gates?
• The Quality Gate process is a formal way of specifying and recording the transition between stages in the project lifecycle
• Each Quality Gate details the deliverables required, the actions to be completed, and the metrics associated with that gate
• All testing stages specify formal entry and exit criteria
• The Quality Gate review process verifies the specified acceptance criteria have been achieved
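As a rough sketch of how exit criteria for a Quality Gate might be checked automatically (the criteria and metric names below are illustrative assumptions, not taken from any particular project):

```python
# Hypothetical exit criteria for a system-test Quality Gate.
EXIT_CRITERIA = {
    "requirements_coverage_pct": 100,   # every requirement must have been tested
    "max_open_critical_defects": 0,     # no critical defects may remain open
    "min_pass_rate_pct": 95,            # at least 95% of executed tests must pass
}

def gate_passed(metrics):
    """Return True only if the measured metrics satisfy every exit criterion."""
    return (
        metrics["requirements_coverage_pct"] >= EXIT_CRITERIA["requirements_coverage_pct"]
        and metrics["open_critical_defects"] <= EXIT_CRITERIA["max_open_critical_defects"]
        and metrics["pass_rate_pct"] >= EXIT_CRITERIA["min_pass_rate_pct"]
    )

# Example gate review: these figures would come from the test management tool.
print(gate_passed({"requirements_coverage_pct": 100,
                   "open_critical_defects": 0,
                   "pass_rate_pct": 97}))   # True - the stage may exit
```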
6.
What Acceptance Criteria should be used?
In the context of the system to be released, "good enough" is achieved when all of the following apply:
• The release has sufficient benefits
• The release has no critical problems
• The benefits sufficiently outweigh the non-critical problems
• In the present situation, all things considered, delaying the release to potentially improve the system further would cause more harm than good
7.
Testing Metrics - Do you have examples?
• Number of test cases
• Number of tests executed
• Number of tests passed
• Number of tests failed
• Number of re-tests
• Number of Requirements tested
• Number of Defects per lines of software code or per function
• Number of Defects found per file type (e.g. java, aspx, xml, xslt, html, com, doc)
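A minimal sketch of how a few of these metrics can be derived from raw counts (the figures below are invented purely for illustration):

```python
# Hypothetical raw counts gathered at the end of a test cycle.
tests_executed = 240
tests_passed = 228
tests_failed = tests_executed - tests_passed    # 12
defects_found = 36
lines_of_code = 12000

pass_rate = 100.0 * tests_passed / tests_executed         # 95.0 %
defect_density = 1000.0 * defects_found / lines_of_code   # 3.0 defects per KLOC

print(f"Pass rate: {pass_rate:.1f}%  ({tests_failed} failures)")
print(f"Defect density: {defect_density:.1f} defects per 1000 lines of code")
```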
8.
Why use Test Scripts?
• Test scripts are necessary to execute repeatable tests
• Can be manually executed
• Can be automatically executed
• Can be based on re-usable building blocks
• Are a constructive component in the testing process
• Provide traceability and documentation
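As a sketch of the "re-usable building blocks" idea, here is a hypothetical login step shared by two scripted tests; the function names and checks are assumptions for illustration only.

```python
# Hypothetical reusable building block: one login step shared by many test scripts.
def login(session, username, password):
    """Common step: record the user in the session and report success."""
    session["user"] = username
    return bool(username and password)

def test_script_view_account():
    session = {}
    assert login(session, "alice", "secret")   # reused building block
    assert session["user"] == "alice"          # check specific to this script

def test_script_change_password():
    session = {}
    assert login(session, "alice", "secret")   # the same block, reused
    # ... further steps specific to this script would follow here

if __name__ == "__main__":
    test_script_view_account()
    test_script_change_password()
    print("Both scripted tests passed")
```

Because both scripts call the same login step, a change to the login procedure is made in one place only, which is what keeps the scripts repeatable and maintainable.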
9.
What tools are available for Test Support?
• Test Asset Management Tool
• Functional test tool
• Non-functional test tool
• Monitoring tools (for soak testing and live monitoring)
• Consistent, company-wide, Defect Management Process
• Repeatable Test Execution Processes
• Timely Reporting
• Use Cases Documentation
• Test Harnesses
• Common Nomenclature in use by all
• How-to Guides
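Of the items above, the test harness is the easiest to show in code; a minimal, purely illustrative sketch that discovers and runs functions named test_* might look like this.

```python
# Minimal test-harness sketch: find and run every function named test_*.
def test_addition():
    assert 2 + 2 == 4

def test_string_upper():
    assert "qa".upper() == "QA"

def run_harness(namespace):
    """Run every test_* function and print a simple pass/fail summary."""
    passed = failed = 0
    for name in sorted(namespace):
        func = namespace[name]
        if name.startswith("test_") and callable(func):
            try:
                func()
                passed += 1
            except AssertionError:
                failed += 1
                print(f"FAIL: {name}")
    print(f"{passed} passed, {failed} failed")

if __name__ == "__main__":
    run_harness(dict(globals()))
```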
10.
How-to Guides - What are they?
These are some of the possible How-to guides…
• How-to read Use Cases
• How-to scope each test
• How-to determine which test types are necessary
• How-to derive test conditions
• How-to prepare a test planner
• How-to write test cases
• How-to plan for Security testing
• How-to conduct WAI Accessibility testing
• How-to test Service Level Agreements
• How-to assess risks
• How-to raise, track and manage defects
• How-to create and maintain a regression test pack
• How-to setup and manage User Acceptance Testing
11.
What are the 10 best steps for software testing?
1. Establish the Test Methodology you wish to follow ... E.g. ISEB
2. Establish the Test Principle ... E.g. Fail fast
3. Define the Requirements ... If there are no requirements then there is nothing to test
4. Document the Requirements Traceability matrix ... This should work in both directions (a sketch follows this list)
5. Define the specific tests which apply in your situation
6. Document the test plan
7. Document the test cases
8. Define the start of testing
9. Conduct testing
10. Define the point at which testing can stop ... When the benefit of continuing testing is outweighed by the effort of continuing testing
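For step 4, a Requirements Traceability matrix can be as simple as a two-way mapping between requirements and test cases; the identifiers below are hypothetical.

```python
# Hypothetical forward traceability: requirement -> test cases that cover it.
requirement_to_tests = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                 # a gap: requirement with no test case yet
}

# Derive the reverse direction (test case -> requirements) so that the matrix
# works in both directions, as step 4 asks.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

untested = [req for req, tests in requirement_to_tests.items() if not tests]
print("Requirements with no tests:", untested)                  # ['REQ-003']
print("TC-01 traces back to:", test_to_requirements["TC-01"])   # ['REQ-001']
```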
