A free on-line vocabulary and thesaurus, searchable by word and topic, with definitions, synonyms, and quotations for over 600 terms associated with Software Testing and QA (Quality Assurance)
Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.
Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.
Quality Assurance (QA) Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).
Quality Control (QC) Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.
Our definition of Quality: achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.
Quicktest. A quicktest (or an attack) is a cheap test that has some value but requires little preparation, knowledge, or time to perform. [Cem Kaner, Exploratory Test Automation, 2009]
[Software Testing Dictionary Back to Top]
Race condition defect. Many concurrency defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
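The definition above can be illustrated with a minimal Python sketch (the counter and function names are hypothetical): two groups of threads increment shared variables, one with no synchronization and one guarded by a lock. The unsynchronized read-modify-write is exactly the "two accesses, at least one a write, no prevention mechanism" pattern.

```python
import threading

counter = 0        # shared variable, accessed with no lock (data race)
safe_counter = 0   # shared variable, accesses guarded by a lock
lock = threading.Lock()

def racy_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write is not atomic: updates can be lost

def safe_increment(n):
    global safe_counter
    for _ in range(n):
        with lock:            # the lock prevents simultaneous access
            safe_counter += 1

N = 100_000
threads = [threading.Thread(target=racy_increment, args=(N,)) for _ in range(4)]
threads += [threading.Thread(target=safe_increment, args=(N,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(safe_counter)   # always 400000
print(counter)        # may be below 400000 if interleaved updates were lost
```

Whether the racy counter actually loses updates on a given run is timing-dependent, which is why such defects are notoriously hard to reproduce in testing.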
Railroading testing technique – a testing technique whose strategy is to continue execution of the test suite in the next testing cycle.
Random-input testing. The process of testing a program by randomly selecting a subset of all possible input values. [Glenford J. Myers, 2004]
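A minimal sketch of random-input testing in Python (the `absolute` function under test is hypothetical): a random subset of the input domain is drawn, and each result is checked against a trusted oracle.

```python
import random

def absolute(x):
    """Hypothetical function under test."""
    return -x if x < 0 else x

random.seed(42)  # fixed seed so the randomly selected subset is reproducible
inputs = [random.randint(-10**6, 10**6) for _ in range(1000)]

for x in inputs:
    result = absolute(x)
    assert result >= 0              # property that must hold for every input
    assert result == abs(x)         # oracle: compare against a trusted implementation
print("all random-input checks passed")
```

The weakness Myers notes for this approach is the oracle problem: without a trusted reference (here, the built-in `abs`), deciding whether each random outcome is correct can be expensive.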
Reactive testing – the tester creates the test details on the fly, in reaction to the reality presented by the system under test. [Rex Black, 2007]
Recovery testing. Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.
Regression Testing. - testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program [Glenford J.Myers, 1979]
Regulatory testing - Testing designed to ensure that the system meets the requirements of all applicable government/ministry/standard regulations.
Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).
Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]
Reliability testing. Verify the probability of failure free operation of a computer program in a specified environment for a specified time.
Reliability of an object is defined as the probability that it will not fail under specified conditions over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t), a function of time t: the probability that the object will not fail within time t.
Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break; rather, it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly. [Professor Dick Hamlet, Ph.D.]
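To make R(t) concrete, here is a small sketch assuming a constant failure rate λ (the exponential model, one common choice; the text above defines R(t) generically, so the model and parameter value here are illustrative assumptions only):

```python
import math

def reliability(t, failure_rate):
    """R(t) = e^(-λt): probability of surviving to time t under a
    constant failure rate λ (exponential model, assumed for illustration)."""
    return math.exp(-failure_rate * t)

lam = 0.001  # hypothetical: one failure per 1000 hours on average
print(round(reliability(0, lam), 3))     # 1.0 -> certain not to have failed at t=0
print(round(reliability(1000, lam), 3))  # e^-1, about 0.368
```

Note the two properties any R(t) must satisfy: R(0) = 1 and R(t) decreasing toward 0 as t grows.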
Range Testing. For each input, identifies the range over which the system behavior should be the same. [William E. Lewis, 2000]
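A sketch of range testing in Python (the `shipping_fee` function and its price bands are hypothetical): once the ranges over which behavior should be identical are identified, one representative value per range plus the boundaries suffices.

```python
def shipping_fee(weight_kg):
    """Hypothetical system under test with three behavior ranges."""
    if weight_kg <= 0:
        raise ValueError("invalid weight")
    if weight_kg <= 5:
        return 4.99
    if weight_kg <= 20:
        return 9.99
    return 19.99

# For each range: (representative and boundary values, expected behavior).
ranges = [
    ((0.1, 2.5, 5.0), 4.99),    # 0 < w <= 5
    ((5.1, 12.0, 20.0), 9.99),  # 5 < w <= 20
    ((20.1, 50.0), 19.99),      # w > 20
]
for values, expected in ranges:
    for w in values:
        assert shipping_fee(w) == expected
print("all range checks passed")
```

The boundary values (5.0, 5.1, 20.0, 20.1) matter most, since off-by-one errors in range limits are a classic defect.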
Risk-Based Testing: Any testing organized to explore specific product risks.[James Bach website]
Risk management. An organized process to identify what can go wrong, to quantify and assess associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.
Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]
Robustness testing. Also known as negative testing [Occasionally used by some authors in a dictionary]
Sanity Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling ]
Scenario-Based Testing. Scenario-based testing is one way to document the software specifications and requirements for a project. Scenario-based testing takes each user scenario and develops tests that verify that a given scenario works. Scenarios focus on the main goals and requirements. If the scenario is able to flow from the beginning to the end, then it passes.[Lydia Ash, 2003]
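A minimal Python sketch of a scenario-based test (the `Shop` class is a hypothetical in-memory stand-in for the real application): one user scenario is exercised from beginning to end, and the test passes if the whole flow completes.

```python
class Shop:
    """Hypothetical application under test."""
    def __init__(self):
        self.users, self.carts, self.orders = {}, {}, []
    def register(self, name):
        self.users[name] = True
        self.carts[name] = []
    def add_to_cart(self, name, item):
        self.carts[name].append(item)
    def checkout(self, name):
        order = list(self.carts[name])
        self.orders.append(order)
        self.carts[name] = []
        return order

# Scenario: a new customer registers, adds two items, and checks out.
shop = Shop()
shop.register("alice")
shop.add_to_cart("alice", "book")
shop.add_to_cart("alice", "pen")
order = shop.checkout("alice")

assert order == ["book", "pen"]     # the order reflects the cart
assert shop.carts["alice"] == []    # the cart is emptied after checkout
print("scenario completed end to end")
```

Unlike a unit test of `checkout` alone, the scenario verifies the main user goal across the steps in sequence.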
(SDLC) System Development Life Cycle - the phases used to develop, maintain, and replace information systems. Typical phases in the SDLC are: Initiation Phase, Planning Phase, Functional Design Phase, System Design Phase, Development Phase, Integration and Testing Phase, Installation and Acceptance Phase, and Maintenance Phase.
The V-model talks about SDLC (System Development Life Cycle) phases and maps them to various test levels.
Read about Bug Management through SDLC
Security Audit. An examination (often by third parties) of a server's security controls and possibly its disaster recovery mechanisms.
Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]
Server log testing. Examining the server logs after particular actions or at regular intervals to determine if there are problems or errors generated or if the server is entering a faulty state.
Service test. Test software fixes, both individually and bundled together, for software that is already in use by customers. [Scott Loveland, 2005]
Shotgunning testing technique - a testing technique that distributes test scripts randomly during testing in each test cycle.
Skim Testing A testing technique used to determine the fitness of a new build or release of an AUT to undergo further, more thorough testing. In essence, a "pretest" activity that could form one of the acceptance criteria for receiving the AUT for testing [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]
Smoke test describes an initial set of tests that determine if a new version of application performs well enough for further testing.[Louise Tamres, 2002]
Sniff test. A quick check to see if any major abnormalities are evident in the software.[Scott Loveland, 2005 ]
Soak testing involves significantly loading a system for an extended period of time and assessing its behavior under an increasing load. (Soak testing is also referred to as load testing.)[Elfriede Dustin, 2009]
Specification-based test. A test whose inputs are derived from a specification.
Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden, sharp increase in load; it should be considered a type of load test. [Load Testing Terminology by Scott Stirling]
Standards. This page lists many standards that can be related to software testing.
STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.
Stability testing. Testing the ability of the software to continue to function, over time and over its full range of use, without failing or causing failure. (see also Reliability testing)
State-based testing Testing with test cases developed by modeling the system under test as a state machine [R. V. Binder, 1999]
State Transition Testing. Technique in which the states of a system are first identified, and then test cases are written to exercise the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
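A sketch of state transition testing in Python, using a hypothetical coin-operated turnstile as the system under test: the states and triggers are identified first, then one test case is written per transition.

```python
# Hypothetical turnstile modeled as a state machine:
# states LOCKED/UNLOCKED; triggers "coin" and "push".
TRANSITIONS = {
    ("LOCKED", "coin"): "UNLOCKED",
    ("LOCKED", "push"): "LOCKED",
    ("UNLOCKED", "push"): "LOCKED",
    ("UNLOCKED", "coin"): "UNLOCKED",
}

def next_state(state, trigger):
    """System under test: apply a trigger in the given state."""
    return TRANSITIONS[(state, trigger)]

# One test case per transition: (start state, trigger, expected end state).
cases = [
    ("LOCKED", "coin", "UNLOCKED"),
    ("LOCKED", "push", "LOCKED"),
    ("UNLOCKED", "push", "LOCKED"),
    ("UNLOCKED", "coin", "UNLOCKED"),
]
for start, trigger, expected in cases:
    assert next_state(start, trigger) == expected
print("all transitions verified")
```

Covering every (state, trigger) pair, including the "no-op" transitions such as pushing a locked turnstile, is what distinguishes this from ad hoc testing of the happy path.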
Static testing. Source code analysis. Analysis of source code to expose potential defects.
Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]
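A sketch of the input-distribution idea in Python (the operational profile figures are hypothetical): test inputs are drawn so that their mix matches the distribution expected in production, rather than uniformly.

```python
import random

# Hypothetical operational profile: in production, 70% of requests are
# reads, 25% writes, 5% deletes. Test inputs are drawn to match.
random.seed(7)  # fixed seed so the generated test set is reproducible
profile = {"read": 0.70, "write": 0.25, "delete": 0.05}
operations = random.choices(list(profile), weights=list(profile.values()), k=1000)

counts = {op: operations.count(op) for op in profile}
print(counts)  # counts roughly proportional to the profile
```

Because the test mix mirrors real usage, failure data gathered from such tests can feed directly into reliability estimates like the R(t) defined earlier in this glossary.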
Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]
Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p55]
Story test – A story test defines expected behaviour for the code to be delivered by the story. [Agile testing by Lisa Crispin, 2009]
Streamable Test cases. Test cases which are able to run together as part of a large group. [Scott Loveland, 2005]
Stress / Load / Volume test. Tests that provide a high degree of activity, for example by using boundary conditions as inputs or by running multiple copies of a program in parallel.
Stress Test. A stress test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored. A stress test helps determine, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down.[Load Testing by S. Asbock]
Structural Testing. (1)(IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.
Syntax testing. A black-box testing technique used to design a test case for testing software applications based on the syntax of the input.
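A sketch of syntax testing in Python, assuming a hypothetical input syntax of `YYYY-MM-DD` dates: test cases are derived from the grammar, with valid strings plus invalid strings that each violate one syntax rule.

```python
import re

# Hypothetical input syntax: dates of the form YYYY-MM-DD.
DATE_SYNTAX = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_valid_date(text):
    """System under test: accept or reject input by its syntax."""
    return DATE_SYNTAX.fullmatch(text) is not None

valid = ["2024-01-31", "1999-12-01"]
invalid = [
    "2024-13-01",   # month out of range
    "2024-1-01",    # missing leading zero
    "24-01-01",     # year too short
    "2024/01/01",   # wrong separator
]
for d in valid:
    assert is_valid_date(d)
for d in invalid:
    assert not is_valid_date(d)
print("syntax cases passed")
```

Mutating one element of the grammar at a time, as in the invalid cases above, is the core of the technique: each rejection pinpoints which syntax rule the application actually enforces.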
System testing. Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
System verification test. (SVT). Testing of an entire software package for the first time, with all components working together to deliver the project's intended purpose on supported hardware platforms. [Scott Loveland, 2005]
This Internet Software Testing Computer Encyclopedia can be useful for students and other educational purposes, as well as serving as reference material and a glossary for technical support.