General Testing Principles

Principles-
A number of testing principles have been suggested over the past 40 years and offer general
guidelines common to all testing.

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
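To make the scale of the problem concrete, the rough back-of-the-envelope calculation below (written in Python) estimates how long it would take to exhaustively test a hypothetical function with three independent 32-bit integer parameters at an assumed rate of one billion tests per second; both the function and the rate are illustrative assumptions rather than measurements.

    # Illustrative only: a hypothetical function with three independent
    # 32-bit integer parameters, exercised at an assumed rate of one
    # billion tests per second.
    combinations = (2 ** 32) ** 3            # every possible input triple
    tests_per_second = 1_000_000_000         # optimistic execution rate
    seconds_per_year = 60 * 60 * 24 * 365

    years_needed = combinations / (tests_per_second * seconds_per_year)
    print(f"{combinations:.2e} combinations -> roughly {years_needed:.2e} years")

Even with such generous assumptions the run would take on the order of trillions of years, which is why risk analysis and priorities, not exhaustiveness, drive test selection.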

Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.

Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.


The Fundamental Test Process-

The fundamental test process comprises five activities: Planning, Specification, Execution, Recording, and Checking for Test Completion.

The test process always begins with Test Planning and ends with Checking for Test Completion. Any and all of the activities may be repeated (or at least revisited), since a number of iterations may be needed before the completion criteria defined during the Test Planning activity are met. One activity does not have to be finished before another is started; later activities for one test case may occur before earlier activities for another. Throughout this cycle of activities, progress must be monitored and controlled so that testing stays in line with the test plan.

Planning-

The basic philosophy is to plan well. All good testing is based upon good test planning.
There should already be an overall test strategy, and possibly a project test plan, in place. The Test Planning activity produces a test plan specific to a level of testing (e.g. system testing). These level-specific test plans should state how the test strategy and project test plan apply
to that level of testing, and note any exceptions to them. When producing a test plan, clearly define the scope of the testing and state all the assumptions being made. Identify any other software required before testing can commence (e.g. stubs and drivers, a word processor, a spreadsheet package or other third-party software) and state the completion criteria that will be used to determine when this level of testing is complete.
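As an aside on the stubs and drivers mentioned above, the sketch below (in Python, with entirely hypothetical names such as PaymentGatewayStub and checkout) shows the idea in miniature: a stub stands in for a dependency that is not yet available, and a small driver invokes the component under test directly.

    # Hypothetical illustration of a stub and a driver.
    class PaymentGatewayStub:
        """Stands in for a real payment gateway that is not yet available."""
        def charge(self, amount):
            # Return a canned response instead of calling a live service.
            return {"status": "approved", "amount": amount}

    def checkout(cart_total, gateway):
        """Component under test: adds a fixed shipping fee, then charges it."""
        result = gateway.charge(cart_total + 5)
        return result["status"] == "approved"

    if __name__ == "__main__":
        # Driver: exercises the component directly, with the stub injected.
        assert checkout(95, PaymentGatewayStub())
        print("checkout() behaves as expected against the stubbed gateway")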

Example completion criteria are listed below (some are better than others, and using a combination of criteria is usually better than relying on just one):

100% statement coverage (see the sketch after this list);
100% requirement coverage;
all screens / dialogue boxes / error messages seen;
100% of test cases have been run;
100% of high severity faults fixed;
80% of low & medium severity faults fixed;
maximum of 50 known faults remain;
maximum of 10 high severity faults predicted;
time has run out;
testing budget is used up.
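To show why coverage figures such as the first criterion above should not be relied on in isolation, here is a small hypothetical Python sketch (the discount function and its test are invented for illustration): one test run achieves 100% statement coverage, yet a boundary defect survives, echoing Principle 1.

    def discount(order_total):
        # Intended rule: 10% off for orders of 100 or more.
        # Defect: '>' excludes the boundary value 100 itself.
        if order_total > 100:
            return order_total * 0.9
        return order_total

    def test_discount():
        assert discount(150) == 135   # executes the 'if' branch
        assert discount(50) == 50     # executes the fall-through return
        # Every statement has now run (100% statement coverage),
        # yet discount(100) still returns 100 instead of the intended 90.

    if __name__ == "__main__":
        test_discount()
        print("Tests pass with 100% statement coverage, but the boundary defect remains")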



 
