Software Testing Dictionary


A free on-line vocabulary and thesaurus, searchable by word and topic, with definitions, synonyms, and quotations for over 600 terms associated with software testing and QA (quality assurance).


All of the following definitions are taken from accepted and identified sources.
This page is updated on a monthly basis.



End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach in which classes of inputs are categorized for product or function validation. This usually does not involve combinations of inputs, but rather a single representative value from each class. For example, a given function may have several classes of input that can be used for positive testing. If the function expects an integer and receives an integer as input, this is considered a positive test assertion. If, on the other hand, a character or any input class other than an integer is provided, this is considered a negative test assertion or condition.
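
As a minimal illustration, the sketch below (using a hypothetical set_age function and made-up partitions) exercises one representative value from the valid-integer class and one from each of two invalid classes.

import unittest

# Hypothetical function under test: accepts an integer age between 0 and 130.
def set_age(value):
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError("age must be an integer")
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

class EquivalencePartitionTests(unittest.TestCase):
    def test_valid_integer_partition(self):
        # Positive assertion: one representative from the valid class.
        self.assertEqual(set_age(30), 30)

    def test_out_of_range_partition(self):
        # Negative assertion: one representative from the out-of-range class.
        with self.assertRaises(ValueError):
            set_age(200)

    def test_non_integer_partition(self):
        # Negative assertion: one representative from the wrong-type class.
        with self.assertRaises(TypeError):
            set_age("thirty")

if __name__ == "__main__":
    unittest.main()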

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.[Robert M. Poston, 1996.]

Errors: The amount by which a result is incorrect. Mistakes are usually the result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverables. Once a fault is encountered, the end result is typically a program failure, whose impact may be high, medium, or low.

Error Guessing: Another common approach to black-box validation. In black-box testing, anything other than the source code may be used for testing; this is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. "Random" in this case includes values produced by a computerized random number generator, as well as ad hoc values or test conditions provided by the engineer.

Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specifically to expose them. [from BS7925-1]
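
As a minimal sketch of the random and ad hoc inputs described above (the parse_percent function and the guessed values are made up for the example):

import random

# Hypothetical function under test: parses a percentage string such as "42%".
def parse_percent(text):
    if not isinstance(text, str) or not text.endswith("%"):
        raise ValueError("expected a string like '42%'")
    value = int(text[:-1])
    if not 0 <= value <= 100:
        raise ValueError("percentage out of range")
    return value

# Ad hoc values an engineer might "guess" will cause trouble,
# plus a handful of randomly generated inputs.
guessed_inputs = ["", "%", "-1%", "101%", "abc%", None, 42, "42%%"]
random_inputs = [f"{random.randint(-1000, 1000)}%" for _ in range(20)]

for candidate in guessed_inputs + random_inputs:
    try:
        result = parse_percent(candidate)
        assert 0 <= result <= 100, f"out-of-range result for {candidate!r}"
    except ValueError:
        pass  # A documented, controlled rejection is acceptable behavior.
    # Any other exception (an unexpected crash) propagates and fails the run.

print("error-guessing pass completed without unexpected exceptions")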

Error seeding. The purposeful introduction of faults into a program to test the effectiveness of a test suite or other quality assurance program. [R. V. Binder, 1999]
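
One common use of seeded faults is to estimate how many genuine faults remain, on the assumption that real faults are detected at roughly the same rate as seeded ones. The arithmetic below is a sketch with made-up numbers:

# Illustrative fault-seeding arithmetic; all counts are invented for the example.
seeded_total = 20      # faults deliberately introduced
seeded_found = 15      # seeded faults the test suite detected
genuine_found = 45     # real (non-seeded) faults the test suite detected

# Detection ratio observed on the seeded faults.
detection_ratio = seeded_found / seeded_total              # 0.75

# If real faults are detected at roughly the same rate, estimate the total
# number of real faults and how many may remain after testing.
estimated_genuine_total = genuine_found / detection_ratio  # 60.0
estimated_remaining = estimated_genuine_total - genuine_found  # 15.0

print(f"Seeded-fault detection ratio: {detection_ratio:.0%}")
print(f"Estimated genuine faults in total: {estimated_genuine_total:.0f}")
print(f"Estimated genuine faults remaining: {estimated_remaining:.0f}")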

Exception Testing. Identify error messages and exception-handling processes, and the conditions that trigger them. [William E. Lewis, 2000]
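
A minimal sketch of exception testing with Python's unittest, assuming a hypothetical safe_ratio function documented to raise ZeroDivisionError on a zero divisor:

import unittest

# Hypothetical function under test: documented to raise ZeroDivisionError
# when the divisor is zero.
def safe_ratio(numerator, denominator):
    return numerator / denominator

class ExceptionTests(unittest.TestCase):
    def test_zero_divisor_raises(self):
        # The condition that triggers the exception (denominator == 0) ...
        with self.assertRaises(ZeroDivisionError):
            safe_ratio(10, 0)

    def test_error_message_is_meaningful(self):
        # ... and the error message that would be surfaced to the user.
        with self.assertRaises(ZeroDivisionError) as ctx:
            safe_ratio(10, 0)
        self.assertIn("division", str(ctx.exception))

if __name__ == "__main__":
    unittest.main()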

Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
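
For instance, a three-input boolean function can be tested exhaustively with only 2**3 = 8 cases; the sketch below uses a made-up majority function:

from itertools import product

# Hypothetical function under test: majority vote of three boolean inputs.
def majority(a, b, c):
    return (a and b) or (a and c) or (b and c)

# Exhaustive testing: every combination of values for every input variable.
# Feasible here only because each variable has two possible values (2**3 = 8 cases).
for a, b, c in product([False, True], repeat=3):
    expected = sum([a, b, c]) >= 2
    assert majority(a, b, c) == expected, f"failed for {(a, b, c)}"

print("All 8 combinations passed.")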

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test. [James Bach]



Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The causal trail: a person makes an error that causes a defect that causes a failure. [Robert M. Poston, 1996]

Fix testing. Rerunning of a test that previously found the bug in order to see if a supplied fix works. [Scott Loveland, 2005]

Follow-up testing. We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]

Forensic testing. Conducting forensic examinations of computers and other digital devices and media, and clearly documenting, controlling, preparing, and presenting the examination results.

Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Framework scenario. A test scenario definition that provides only enough high-level information to remind the tester of everything that needs to be covered for that scenario. The description captures the activity’s essence, but trusts the tester to work through the specific steps required.[Scott Loveland, 2005]

Free Form Testing. Ad hoc testing or brainstorming that uses intuition to define test cases. [William E. Lewis, 2000]

Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]

Functional testing. Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Function verification test (FVT). Testing of a complete, yet containable functional area or component within the overall software package. Normally occurs immediately after Unit test. Also known as Integration test. [Scott Loveland, 2005]


[Software Testing Dictionary Back to Top]

Gray box testing. Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of the view of the tester. [Cem Kaner]

Gray box testing. Tests designed based on knowledge of algorithms, internal states, architectures, or other high-level descriptions of the program behavior. [Doug Hoffman]

Gray box testing. Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:

  • A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
  • The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
    [Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]

Grooved Tests. Tests that simply repeat the same activity against a target product from cycle to cycle. [Scott Loveland, 2005]



Heuristic Testing: An approach to test design that employs heuristics to enable rapid development of test cases. [James Bach]

High-level tests. These tests involve testing whole, complete products. [Kit, 1995]

HTML validation testing. Specific to Web testing. This certifies that the HTML meets specifications and internal coding standards.
The W3C Markup Validation Service is a free service that checks Web documents in formats like HTML and XHTML for conformance to W3C Recommendations and other standards.
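
A hedged sketch of automating such a check, assuming the W3C Nu HTML Checker endpoint (https://validator.w3.org/nu/) accepts a document POSTed with out=json, and that the third-party requests package is available:

import requests  # third-party; pip install requests

html = "<!DOCTYPE html><html lang='en'><head><title>t</title></head><body><p>Hi</p></body></html>"

# Assumption: the Nu HTML Checker accepts a raw document via POST and can
# return its findings as JSON when out=json is requested.
response = requests.post(
    "https://validator.w3.org/nu/?out=json",
    data=html.encode("utf-8"),
    headers={"Content-Type": "text/html; charset=utf-8"},
    timeout=30,
)
response.raise_for_status()

messages = response.json().get("messages", [])
errors = [m for m in messages if m.get("type") == "error"]
print(f"{len(errors)} validation error(s) found")
for err in errors:
    print("-", err.get("message"))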



    "ility" testing security, maintainability, interoperability, compatibility, reliability, and installability software testing. [p 223, Agile testing by Lisa Crispin, 2009]

In-parameter-order (IPO) testing. A combinatorial testing technique for generating test cases in which, for minimum coverage, every pair of values of any two input parameters of the system under test must appear together in at least one test case.
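
The IPO algorithm itself builds a test set parameter by parameter; the sketch below illustrates only the pairwise coverage goal, checking a hand-picked suite against all required value pairs (the parameters and values are made up):

from itertools import combinations, product

# Hypothetical configuration parameters for the system under test.
parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux"],
    "locale": ["en", "fr"],
}

# Every pair of values from any two different parameters must appear
# together in at least one test case for minimum (pairwise) coverage.
required_pairs = set()
for (p1, v1s), (p2, v2s) in combinations(parameters.items(), 2):
    for v1, v2 in product(v1s, v2s):
        required_pairs.add(((p1, v1), (p2, v2)))

# A candidate suite: far fewer than the 2*2*2 = 8 exhaustive combinations.
suite = [
    {"browser": "Chrome", "os": "Windows", "locale": "en"},
    {"browser": "Chrome", "os": "Linux", "locale": "fr"},
    {"browser": "Firefox", "os": "Windows", "locale": "fr"},
    {"browser": "Firefox", "os": "Linux", "locale": "en"},
]

covered = set()
for case in suite:
    for p1, p2 in combinations(parameters, 2):
        covered.add(((p1, case[p1]), (p2, case[p2])))

missing = required_pairs - covered
print("pairwise coverage complete" if not missing else f"missing pairs: {missing}")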

Incremental integration testing. Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Inspection. A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration. The process of combining software components, hardware components, or both into an overall system.

Integration testing. Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.
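
A minimal sketch of integration testing across a module boundary, using a hypothetical OrderService integrated against a stubbed payment gateway (all names are made up); the assertions target the interface between the two modules rather than either module alone:

import unittest

# Hypothetical lower-level module that is not yet available (or not yet trusted),
# so it is replaced by a stub during top-down integration.
class PaymentGatewayStub:
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return "APPROVED"  # canned response standing in for the real component

# Hypothetical upper-level module whose interaction with the gateway is under test.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, items):
        total = sum(price for _, price in items)
        status = self.gateway.charge(total)
        return {"total": total, "payment": status}

class OrderIntegrationTest(unittest.TestCase):
    def test_order_charges_gateway_with_correct_total(self):
        gateway = PaymentGatewayStub()
        service = OrderService(gateway)
        result = service.place_order([("book", 12.50), ("pen", 2.50)])
        # The fault being hunted lives in the interface: did the right amount
        # cross the module boundary, and was the response handled correctly?
        self.assertEqual(gateway.charges, [15.0])
        self.assertEqual(result["payment"], "APPROVED")

if __name__ == "__main__":
    unittest.main()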

Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control; therefore, simulation can provide the characteristics or behaviors for a specific function.

Internationalization testing (I18N). Testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper- and lower-case handling, and so forth. [Clinton De Young, 2003]
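
A minimal sketch of a few such checks, assuming a hypothetical format_date helper and made-up expected formats; it deliberately avoids relying on OS locales being installed:

import unittest
from datetime import date

# Hypothetical helper under test: formats a date the way the application does
# for a given language tag. Only two formats are sketched here.
def format_date(d, language):
    if language == "en-US":
        return d.strftime("%m/%d/%Y")
    if language == "de-DE":
        return d.strftime("%d.%m.%Y")
    raise ValueError(f"unsupported language: {language}")

class I18NTests(unittest.TestCase):
    def test_date_formats_per_language(self):
        d = date(2024, 7, 4)
        self.assertEqual(format_date(d, "en-US"), "07/04/2024")
        self.assertEqual(format_date(d, "de-DE"), "04.07.2024")

    def test_non_ascii_text_survives_round_trip(self):
        # Importing/exporting text: non-ASCII characters must not be mangled.
        original = "Grüße, 你好, Привет"
        self.assertEqual(original.encode("utf-8").decode("utf-8"), original)

    def test_case_handling_beyond_ascii(self):
        # Upper/lower case handling is not ASCII-only: German ß upper-cases to SS.
        self.assertEqual("straße".upper(), "STRASSE")

if __name__ == "__main__":
    unittest.main()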

Interoperability Testing. Measures the ability of your software to communicate across the network with multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Inter-operability Testing. True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn't be done because it can't be done. [from Quality Is Not The Goal, by Boris Beizer, Ph.D.]

Install/uninstall testing. Testing of full, partial, or upgrade install/uninstall processes.

Investigative testing. Investigative testing efforts reveal problems that developers missed, long before those problems become too expensive to address. These defects might pertain to usability or integration problems; sometimes they pertain to requirements that we missed or simply haven't implemented yet, and sometimes they pertain to things we simply didn't think to test for. [Scott W. Ambler]






This Internet software testing encyclopedia can be useful to students and for other educational purposes, as well as serving as reference material and a glossary for technical support.




© 2004-2008 Alex Samurin geocities.com/xtremetesting/   2009 www.extremesoftwaretesting.com