Software Testing Dictionary




A free on-line vocabulary and thesaurus, searchable by word and topic, with definitions, synonyms, and quotations for over 600 terms associated with Software Testing and QA (Quality Assurance).


All of the following definitions are taken from accepted and identified sources.
This page is updated on a monthly basis.



Capability Maturity Model (CMM). A description of the stages through which software organizations evolve as they define, implement, measure, control and improve their software processes. The model is a guide for selecting process improvement strategies by facilitating the determination of current process capabilities and the identification of the issues most critical to software quality and process improvement. [SEI/CMU-93-TR-25]

Capture-replay tools. Tools that give testers the ability to move some GUI testing away from manual execution by 'capturing' mouse clicks and keyboard strokes into scripts, and then 'replaying' those scripts to re-create the same sequence of inputs and responses in subsequent tests. [Scott Loveland, 2005]
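
As a minimal sketch of the 'replay' half of such a tool, the script below reads recorded events from a JSON file and re-injects them into the GUI. It assumes the third-party pyautogui library is installed; the event file name and its format are hypothetical, since real capture-replay tools define their own script formats.

```python
import json

import pyautogui  # third-party GUI automation library, assumed installed

def replay(script_path):
    """Re-create a captured sequence of mouse clicks and keystrokes."""
    with open(script_path) as f:
        events = json.load(f)  # hypothetical format: [{"action": "click", "x": 10, "y": 20}, ...]
    for event in events:
        if event["action"] == "click":
            pyautogui.click(event["x"], event["y"])
        elif event["action"] == "type":
            pyautogui.write(event["text"])

replay("login_scenario.json")  # hypothetical recording of a login test
```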

Category Partition testing. A testing methodology that divides the functional specification into independent functional units that can be tested separately.
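
To illustrate, a minimal sketch in Python: each category of a hypothetical "file search" specification is partitioned into choices, and the independent units are combined into candidate test frames. The categories and choices are invented for illustration.

```python
from itertools import product

# Hypothetical functional units for a "file search" feature, each
# partitioned into choices that can be tested separately.
categories = {
    "pattern": ["empty", "literal", "wildcard"],
    "file_size": ["zero", "small", "huge"],
    "location": ["current_dir", "subdirectory"],
}

# Every combination of one choice per category is a candidate test frame.
frames = [dict(zip(categories, combo)) for combo in product(*categories.values())]
print(len(frames), "test frames; first:", frames[0])
```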

Cause Effect Graphing. (1) A test data selection technique: the input and output domains are partitioned into classes, and analysis is performed to determine which input classes cause which effects. A minimal set of inputs is chosen which will cover the entire effect set. [NBS] (2) A systematic method of generating test cases representing combinations of conditions. See: functional testing. [G. Myers]
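
A toy illustration of the idea, with an invented login specification: all cause combinations are enumerated, and an input is kept only while it still covers a new effect, yielding a small input set that covers the entire effect set.

```python
from itertools import product

def effects(valid_user, valid_password):
    """Hypothetical specification mapping causes to effects."""
    if valid_user and valid_password:
        return {"login_ok"}
    if valid_user:
        return {"password_error"}
    return {"user_error"}

uncovered = {"login_ok", "password_error", "user_error"}
chosen = []
for causes in product([True, False], repeat=2):
    produced = effects(*causes)
    if produced & uncovered:  # keep only inputs that cover a new effect
        chosen.append(causes)
        uncovered -= produced
print(chosen)  # three inputs cover the whole effect set
```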

Classification-tree testing. An approach to (black-box) partition testing that uses a descriptive tree-like notation and is especially suited for automation, using and improving ideas from the category-partition method defined by Ostrand and Balcer. [Matthias Grochtmann, STAR ’94, May 1994]

Clean test. A test whose primary purpose is validation; that is, a test designed to demonstrate the software's correct working. (syn. positive test) [B. Beizer, 1995]

Clear-box testing. See White-box testing.

Code audit. An independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. (IEEE)

Code Inspection. A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection. [G.Myers/NBS]

Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.[G.Myers/NBS]

Coexistence Testing. Coexistence isn't enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem. [from 'Quality Is Not The Goal' by Boris Beizer, Ph.D.]

Comparison testing. Comparing software strengths and weaknesses to those of competing products.

Compatibility bug. A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code. [R. V. Binder, 1999]

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Component testing. White-box testing in which a software component is tested in isolation from other components. [a definition occasionally used by some authors]

Composability testing. Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, 'Easy' and other lies, eWEEK, April 28, 2003]

Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]
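
A minimal sketch with an invented two-condition decision: across the three tests, each atomic condition evaluates to both true and false at least once.

```python
def can_withdraw(balance, amount, overdraft_ok):
    # One decision containing two conditions joined by "or".
    return balance >= amount or overdraft_ok

# Condition coverage: each condition takes both outcomes across the suite.
assert can_withdraw(100, 50, False)      # balance >= amount: True;  overdraft_ok: False
assert can_withdraw(10, 50, True)        # balance >= amount: False; overdraft_ok: True
assert not can_withdraw(10, 50, False)   # both conditions False
```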

Configuration. The functional and/or physical characteristics of hardware/software as set forth in technical documentation and achieved in a product. (MIL-STD-973)

Configuration control. An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. (IEEE)

Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]

Confirmation testing. Testing done on a new build to verify that defects reported in a previous build have been fixed in this build.

Cookbook scenario. A test scenario description that provides complete, step-by-step details about how the scenario should be performed. It leaves nothing to chance. [Scott Loveland, 2005]

Coverage analysis. Determining and assessing measures associated with the invocation of program structural elements in order to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing the relevant information support this analysis. (NIST)
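
In Python, for example, the third-party coverage.py package collects this kind of data; a minimal sketch, assuming the package is installed and using a hypothetical module under test.

```python
import coverage

cov = coverage.Coverage()
cov.start()

import my_module  # hypothetical module under test
my_module.main()  # exercise the code while measurement is active

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file summary, listing unexecuted lines
```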

CRUD Testing. Build a CRUD matrix and test all object creations, reads, updates, and deletions. [William E. Lewis, 2000]
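
A minimal sketch using Python's built-in sqlite3 module, exercising each cell of a one-object CRUD matrix; the table and its fields are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway database for the example
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# Create
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada')")
# Read
assert conn.execute("SELECT name FROM customer WHERE id = 1").fetchone() == ("Ada",)
# Update
conn.execute("UPDATE customer SET name = 'Grace' WHERE id = 1")
assert conn.execute("SELECT name FROM customer WHERE id = 1").fetchone() == ("Grace",)
# Delete
conn.execute("DELETE FROM customer WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM customer").fetchone() == (0,)
```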



Data-Driven testing. An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script. [Daniel J. Mosley, 2002]
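
A minimal sketch of the approach: the control logic is written once, and the test data lives in an external CSV file. The login function, the file name, and its columns are all invented for illustration.

```python
import csv

def login(username, password):
    """Stand-in for the real system under test."""
    return username == "admin" and password == "secret"

# The script only navigates and verifies; the data varies externally.
with open("login_cases.csv", newline="") as f:  # hypothetical data file
    for row in csv.DictReader(f):               # columns: username,password,expected
        actual = login(row["username"], row["password"])
        assert actual == (row["expected"] == "pass"), row
```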

Data flow testing. Testing in which test cases are designed based on variable usage within the code.[BCS]
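
As a small illustration, consider a variable with two definitions and one use; a def-use-oriented suite drives each definition through to the use. The function is invented.

```python
def final_price(price, is_member):
    if is_member:
        discount = 0.25   # definition 1 of `discount`
    else:
        discount = 0.0    # definition 2 of `discount`
    return price * (1 - discount)  # use of `discount`

# One test per def-use pair, so each definition reaches the use.
assert final_price(100.0, True) == 75.0    # definition 1 -> use
assert final_price(100.0, False) == 100.0  # definition 2 -> use
```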

Database testing. Check the integrity of database field values. [William E. Lewis, 2000]
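
A minimal sketch of field-value integrity checks, using Python's built-in sqlite3 module; the table, fields, and business rules are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real database
conn.execute("CREATE TABLE employee (id INTEGER, age INTEGER, email TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, 34, "a@example.com"), (2, 29, "b@example.com")])

# Field-value integrity: required fields present and values within range.
bad_age = conn.execute(
    "SELECT COUNT(*) FROM employee WHERE age NOT BETWEEN 16 AND 99").fetchone()[0]
null_email = conn.execute(
    "SELECT COUNT(*) FROM employee WHERE email IS NULL").fetchone()[0]
assert bad_age == 0 and null_email == 0
```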

Defect. The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.

Defect. Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.[Robert M. Poston, 1996.]

Defect. A flaw in the software with potential to cause a failure. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Density. A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of product quality. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
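
The calculation itself is a simple ratio; a sketch with invented numbers:

```python
defects_found = 42   # hypothetical defect count for one release
size_kloc = 17.5     # size in thousands of lines of code

defect_density = defects_found / size_kloc
print(f"{defect_density:.2f} defects per KLOC")  # -> 2.40 defects per KLOC
```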

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect leakage. Defects that were not found during system, regression, or UAT testing but are later found in production are called defect leakage. Defect leakage is sometimes also called a bug leak.

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
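
A common way to compute it: defects removed by the activity, divided by that number plus the defects that escaped and were found later. The figures below are invented.

```python
found_in_testing = 90  # defects discovered and removed by the test activity
found_later = 10       # escaped defects discovered afterwards (e.g., in production)

dre = found_in_testing / (found_in_testing + found_later)
print(f"DRE = {dre:.0%}")  # -> DRE = 90%
```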

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
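
The usual estimate assumes testing finds seeded and real defects at the same rate; a sketch with invented figures:

```python
seeded_planted = 20  # known defects intentionally inserted
seeded_found = 15    # of those, how many testing rediscovered
real_found = 60      # genuine (non-seeded) defects found by the same testing

# If testing found 15/20 of the seeded defects, assume it also found
# 75% of the real ones, and estimate what remains.
estimated_real_total = real_found * seeded_planted / seeded_found
print(estimated_real_total - real_found)  # -> 20.0 defects estimated remaining
```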

Defect Masked. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Depth test. A test case that exercises some part of a system to a significant level of detail. [Dorothy Graham, 1999]

Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]
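
A minimal sketch with an invented single-decision function: two tests force the decision to both outcomes. Note that for a compound decision such as `a or b`, decision coverage can be reached without every individual condition taking both values, which is why condition coverage (above) is a distinct criterion.

```python
def grade(score):
    if score >= 60:  # the decision under test
        return "pass"
    return "fail"

# Decision (branch) coverage: the decision evaluates both true and false.
assert grade(75) == "pass"  # decision true
assert grade(40) == "fail"  # decision false
```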

Design-based testing. Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms). [BCS]

Diagnostic tests. Tests designed to verify that the hardware components (or modules) of the system are functioning correctly, without failure.

Dirty testing. Negative testing. [B. Beizer]

Distributed testing. Testing of a system from multiple locations.

Dynamic testing. Testing, based on specific test cases, by execution of the test object or the running of programs. [Tim Koomen, 1999]






This on-line software testing encyclopedia can be useful for students and for other educational purposes, as well as serving as reference material and as a glossary for technical support.




© 2004-2008 Alex Samurin geocities.com/xtremetesting/   2009 www.extremesoftwaretesting.com