This tutorial covers the basics of software testing, with emphasis on the white-box and black-box techniques that are mandatory knowledge for certification. Together with the full version of our testing dictionary, it gives good coverage of many of the certification topics.
Table of contents:
1. Specification-based software testing
2. Self-assessment triangle test from Glenford J. Myers, 1979
3. Results of the triangle test from Glenford J. Myers, 1979 and Robert V. Binder, 1999
4. Principles of software testing (on Web)
5. Structured walkthroughs
6. Equivalence partitioning
7. Boundary value analysis
8. Unit testing: big-bang method; incremental software testing; top-down software testing; bottom-up software testing
9. High-order software testing
10. Testing object-oriented software
11. A tester's guide to the UML
12. Software testing questions and answers
13. Progress check (on Web)

2. The following test has been found useful in judging a person's testing
abilities (Glenford J. Myers, 1979). The system you are asked to test
reads records from an input file called TRIANGLE.DAT. The following is a typical file:

File: TRIANGLE.DAT
Record 1: 1,2,3
Record 2: 4,5,6

Each record in the file is to contain three integers separated by commas. The
three values in each record represent the lengths of the three sides of a
triangle (for example, the file contents above describe a triangle whose
sides are 1, 2, and 3 units long and a triangle whose sides are 4, 5, and
6 units long). The program evaluates whether the triangle described by each
record is equilateral, scalene, or isosceles and displays the result. An
equilateral triangle is one whose three sides are all of equal length, a
scalene is one whose sides all have different lengths, and an isosceles
triangle is one that has exactly two equal sides. You are to specify a set
of test cases to test this program. The tutorial provides 20 test cases.
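Before designing test cases, it helps to have a concrete picture of the program under test. The following is a minimal sketch, not the tutorial's actual program; the function name and the handling of degenerate records are assumptions:

```python
# Hypothetical sketch of the triangle program under test. The name
# classify_triangle and the "not a triangle" result are assumptions,
# not part of the original tutorial.
def classify_triangle(a, b, c):
    """Classify the triangle described by three side lengths."""
    sides = sorted([a, b, c])
    # Reject degenerate input: sides must be positive, and the two
    # shorter sides must sum to more than the longest (triangle inequality).
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"
```

Note that the sample record 1,2,3 is degenerate (1 + 2 = 3), so a strict implementation would reject it; whether the program should accept such a record is exactly the kind of question a good set of test cases exposes.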
THE DEFINITION OF TESTING
THE IMPORTANCE OF A GOOD DEFINITION
The reverse would be true if the goal were to locate and correct errors. Test data would be selected with an eye toward providing the program with cases that would likely cause the program to fail. This would be a more desirable result. Why? We begin with the assumption that the system, like most systems, contains errors. The job of testing is to discover them before the user does. In that case, a good tester is one who is successful in crashing the system, or in causing it to perform in some way that is counter to the specification.
The mentality of the tester, then, is a destructive one, quite different from the constructive attitude of the programmer, the "creator". This is useful information for the analyst who is acting as project leader and is responsible for staffing: staff should be selected with the appropriate personality traits in mind.
Another effect of having a proper working definition of testing regards the way the project leader assesses the performance of the test team. Without a proper definition of testing, the leader might describe a successful test run as one which proves the program is error free and describe an unsuccessful test as one which found errors. As is the case with the testers themselves, this mind-set is actually counter-productive to the testing process.
GOALS OF TESTING
The first goal refers to specifications which were not satisfied by the program while the second goal refers to unwanted side-effects.
THE EIGHT BASIC PRINCIPLES OF TESTING
More often than not, the tester approaches a test case without a set of predefined and expected results. The danger in this lies in the tendency of the eye to see what it wants to see. Without knowing the expected result, erroneous output can easily be overlooked. This problem can be avoided by carefully pre-defining all expected results for each of the test cases. Sounds obvious? You'd be surprised how many people miss this point while doing the self-assessment test.
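The principle can be sketched with the triangle exercise from earlier in the tutorial. Each test case pairs an input with a result written down before the run; the function name and return values below are assumptions, not the original program:

```python
# Assumed stand-in for the program under test (not the tutorial's code).
def classify_triangle(a, b, c):
    sides = sorted([a, b, c])
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# (input, expected) pairs defined BEFORE execution, so erroneous
# output cannot be rationalized after the fact.
CASES = [
    ((3, 3, 3), "equilateral"),
    ((3, 4, 3), "isosceles"),
    ((4, 5, 6), "scalene"),
]

for args, expected in CASES:
    actual = classify_triangle(*args)
    # Compare against the pre-defined result instead of eyeballing output.
    assert actual == expected, f"{args}: got {actual!r}, expected {expected!r}"
```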
Programming is a constructive activity. To suddenly reverse constructive thinking and begin the destructive process of testing is a difficult task. The publishing business has been applying this idea for years. Writers do not edit their own material for the simple reason that the work is "their baby" and editing out pieces of their work can be a very depressing job.
The attitudinal problem is not the only consideration for this principle. System errors can be caused by an incomplete or faulty understanding of the original design specifications; it is likely that the programmer would carry these misunderstandings into the test phase.
As obvious as it sounds, this simple principle is often overlooked. In many testing efforts, an after-the-fact review of earlier test results shows that errors were present but overlooked because no one took the time to study the results.
Programs already in production often cause errors when used in some new or novel fashion. This stems from the natural tendency to concentrate on valid and expected input conditions during a testing cycle. Using invalid or unexpected input conditions significantly increases the error detection rate.
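For the triangle exercise, this means deliberately feeding the program records no reasonable user "should" supply. A minimal sketch, where the classifier name and its rejection behaviour are assumptions:

```python
# Assumed stand-in for the program under test (not the tutorial's code).
def classify_triangle(a, b, c):
    sides = sorted([a, b, c])
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Invalid or unexpected inputs that valid-only testing would never try.
invalid_cases = [
    (0, 0, 0),     # all sides zero
    (-3, 4, 5),    # a negative length
    (1, 2, 3),     # degenerate: fails the triangle inequality
    (1, 1, 10),    # the two short sides cannot meet the long one
]
for sides in invalid_cases:
    assert classify_triangle(*sides) == "not a triangle", sides
```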
It's not enough to check if the test produced the expected output. New systems, and especially new modifications, often produce unintended side effects such as unwanted disk files or destroyed records. A thorough examination of data structures, reports, and other output can often show that a program is doing what it is not supposed to do and therefore still contains errors.
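One way to sketch this check: snapshot the program's environment before and after a run and assert that nothing unexpected appeared. The function `save_report` below is a hypothetical program under test, not from the tutorial:

```python
import os
import tempfile

# Hypothetical program under test: it should write exactly one file.
def save_report(directory, name, text):
    path = os.path.join(directory, name)
    with open(path, "w") as f:
        f.write(text)
    return path

with tempfile.TemporaryDirectory() as workdir:
    before = set(os.listdir(workdir))
    save_report(workdir, "report.txt", "ok")
    after = set(os.listdir(workdir))
    # The expected output exists...
    assert "report.txt" in after
    # ...and nothing else appeared: no stray temp, log, or scratch files.
    assert after - before == {"report.txt"}
```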
Test cases should be documented so they can be reproduced. With a non-structured approach to testing, test cases are often created on-the-fly. The tester sits at a terminal, generates test input, and submits it to the program. The test data simply disappears when the test is complete.
Reproducible test cases become important later when a program is revised, due to the discovery of bugs or because the user requests new options. In such cases, the revised program can be put through the same extensive tests that were used for the original version. Without saved test cases, the temptation is strong to test only the logic handled by the modifications. This is unsatisfactory because changes which fix one problem often create a host of other apparently unrelated problems elsewhere in the system. As considerable time and effort are spent in creating meaningful tests, tests which are not documented or cannot be duplicated should be avoided.
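One way to keep test cases reproducible is to persist them to a file so the identical suite can be rerun against every revision. A sketch, in which the file name, the JSON format, and the classifier are all assumptions:

```python
import json
import os
import tempfile

# Assumed stand-in for the program under test (not the tutorial's code).
def classify_triangle(a, b, c):
    sides = sorted([a, b, c])
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

cases = [
    {"sides": [3, 3, 3], "expected": "equilateral"},
    {"sides": [3, 4, 3], "expected": "isosceles"},
    {"sides": [4, 5, 6], "expected": "scalene"},
]

with tempfile.TemporaryDirectory() as workdir:
    suite_path = os.path.join(workdir, "triangle_cases.json")
    with open(suite_path, "w") as f:
        json.dump(cases, f)        # the test data no longer disappears
    # Later, after a revision: reload the saved suite and rerun it unchanged.
    with open(suite_path) as f:
        for case in json.load(f):
            assert classify_triangle(*case["sides"]) == case["expected"]
```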
Testing should be viewed as a process that locates errors and not one that proves the program works correctly. The reasons for this were discussed earlier.
At first glance, this may seem surprising. However, it has been shown that if certain modules or sections of code contain a high number of errors, subsequent testing will discover more errors in that particular section than in other sections.
Consider a program that consists of two modules, A and B. If testing reveals five errors in module A and only one error in module B, module A will likely display more errors than module B in any subsequent tests.
Why is this so? There is no definitive explanation, but it is probably due to the fact that the error-prone module is inherently complex or was badly programmed. By identifying the most "bug-prone" modules, the tester can concentrate efforts there and achieve a higher rate of error detection than if all portions of the system were given equal attention.