A test strategy document must answer the essential questions: "what", "who", "why", and "how".
'PRODUCT'
Software Test Strategy Document Example
Version 1.0 (Initial Draft)
November --, 2000
Revision History

Date                 Author    Description of revisions    Version #
November --, 2000              Initial Draft               1.0
                                                           1.1
Table of Contents
REVISION HISTORY
1. INTRODUCTION
1.1 PURPOSE
1.2 SOFTWARE FUNCTIONAL OVERVIEW
1.3 CRITICAL SUCCESS FACTORS
1.4 SOFTWARE TESTING SCOPE (TBD)
Inclusions
Exclusions
1.5 SOFTWARE TEST COMPLETION CRITERIA
2. TIMEFRAME
3. RESOURCES
3.1 SOFTWARE TESTING TEAM
3.2 HARDWARE REQUIREMENTS
3.3 SOFTWARE REQUIREMENTS
4. APPLICATION TESTING RISKS PROFILE
5. SOFTWARE TEST APPROACH
5.1 STRATEGIES
5.2 GENERAL TEST OBJECTIVES
5.3 APPLICATION FUNCTIONALITY
5.4 APPLICATION INTERFACES
5.5 SOFTWARE TESTING TYPES
5.5.1 Stability
5.5.2 System
5.5.3 Software Regression Testing
5.5.4 Installation
5.5.5 Recovery
5.5.6 Configuration
5.5.7 Security
6. BUSINESS AREAS FOR SYSTEM TEST
7. SOFTWARE TEST PREPARATION
7.1 SOFTWARE TEST CASE DEVELOPMENT
7.2 TEST DATA SETUP
7.3 TEST ENVIRONMENT
7.3.1 Database Restoration Strategies
8. SOFTWARE TEST EXECUTION
8.1 SOFTWARE TEST EXECUTION PLANNING
8.2 SOFTWARE TEST EXECUTION DOCUMENTATION
8.3 PROBLEM REPORTING
9. STATUS REPORTING
9.1 SOFTWARE TEST EXECUTION PROCESS
9.2 PROBLEM STATUS
10. HANDOVER TO THE USER ACCEPTANCE TEST TEAM
11. DELIVERABLES
12. APPROVALS
13. APPENDIXES
13.1 APPENDIX A (BUSINESS PROCESS RISK ASSESSMENT)
13.2 APPENDIX B (SOFTWARE TEST DATA SETUP)
13.3 APPENDIX C (SOFTWARE TEST CASE TEMPLATE)
13.4 APPENDIX D (PROBLEM TRACKING PROCESS)
1. Introduction
1.1 Purpose
This document describes the software test strategy for the 'PRODUCT' application and is intended to support the
following objectives:
Identify the existing project information and the software components that should be tested
Identify types of software testing to be done
Recommend and describe the software testing strategy to be employed
Identify the required resources and provide an estimate of the test effort
List the deliverables of the test project
1.2 Software Functional Overview
With the implementation of the 'PRODUCT' system, the user community will be able to manage
sales contacts, turn sales contacts into sales opportunities, assign sales opportunities to
sales team members, generate reports, forecast sales, etc.
The 'PRODUCT' application is a client/server system with an MS Access database
(soon moving to SQL Server). It consists of the following:
1. Graphical User Interface (GUI) screens, running under Windows or NT/2000 client and
master machines in the MS Outlook 2000 environment;
2. Reports, produced using MS Excel and MS Word;
3. E-mails ... (??);
4. Interfaces to MS Outlook 2000 and flat files for data import.
1.3 Critical Success Factors
To support delivery of an application that meets its success criteria, the critical success
factors for testing are:
Correctness - Assurance that the data entered, processed, and output by the application
system are accurate and complete. Accuracy and completeness are achieved through control over
transactions and data elements, which should commence when a transaction is originated and
conclude when the transaction data has been used for its intended purpose.
File Integrity - Assurance that the data entered into the application system will be returned
unaltered. The file integrity procedure ensures that the right file is used and that the data
on the file, and the sequence in which the data is stored and retrieved, are correct.
Access Control - Assurance that the program prevents unauthorized access and prevents
unauthorized users from destabilizing the work environment.
Scalability - Assurance that the application can handle the scaling criteria within the
constraints of the performance criteria.
1.4 Software Testing Scope (TBD)
This plan covers the software testing scope: it describes the activities that cover the
functions and interfaces in the 'PRODUCT' application.
The following lists the specific items that are included in or excluded from the testing scope.
Inclusions
- Opportunity Contact
- Opportunities
- Opportunity Journal
- Opportunity Comments
- Sales Setup
Exclusions
- Outlook 2000 or other MS functionality
- Software testing under unsupported hardware/software configurations
1.5 Software Test Completion Criteria
Software testing for a given release will be considered complete when the following conditions
have been met:

Criterion: Signoff of test cases
All test cases defined for the release have been reviewed by the appropriate stakeholders
and signed off.

Criterion: Execution of the test
All test transactions have been executed successfully at least once.

Criterion: Closure of outstanding problems
All problems found during the testing process have been reviewed and either closed or
deferred by management agreement.
2. Timeframe
The critical dates are as follows:
3. Resources
3.1 Software Testing Team

Name    Position         Start Date    End Date    Days of Effort
        Test
        Tech Support
        Sales
        DBA
The test team staffing is based upon the following assumptions:
Testing of the coming release is planned to be complete in ...
System testing is planned to start after coding is complete
The 'PRODUCT' version promoted into the system test environment will have been properly unit
and integration tested by the development team. Testers will supply a unit testing checklist
to the development team.
3.2 Hardware Requirements
Personal computer with Pentium 233 MHz or higher - 2 clients
RAM for Windows 95/98/WinMe: 32 MB of RAM for the operating system, plus an additional 8 MB for
MS Outlook 2000 and higher
RAM for Windows NT Workstation, Windows 2000: 48 MB of RAM for the operating system, plus an
additional 8 MB for MS Outlook 2000 and higher
20 MB of available disk space for 'PRODUCT'
The machine for the database ... . A separate machine should not be needed; the space allocated
for the test environment, PV (port verification), and backups should be sufficient.
3.3 Software Requirements
Windows 95/98/WinMe, Windows NT version 4.0 Service Pack 3 or later, or Windows 2000
Microsoft Outlook® 2000
Microsoft Excel® 2000 and Word® 2000 for 'PRODUCT' reports
Access 2000
Bug tracking system (TBD)
NOTE: 'PRODUCT' for Workgroups requires CDO 1.21 to be installed. This is on the Microsoft
Office® 2000 CD.
4. Application Testing Risks Profile
Different aspects and features of 'PRODUCT' present various levels of risk, which can be used
to determine the relative level of testing rigor required for each feature. In this high-level
risk analysis, the likelihood of defects is determined mainly by the complexity of the feature.
Impact is determined by the critical success factors for the organization, such as dollar
value and reputation.
Business Process Impact Ranking Criteria

1. High
Dollar Value Impact: Direct or indirect (due to the loss of opportunity) loss of revenue;
typically up to $ millions (or thousands) per month (or per year). Examples:
Reputation Impact: High impact related to the loss of a client. Example:

2. Medium
Dollar Value Impact: Typically up to $ millions (or thousands) per month (or per year). Examples:
Reputation Impact: Major inconvenience to the customer. Example:

3. Low
Dollar Value Impact: Typically up to $ millions (or thousands) per month (or per year). Examples:
Reputation Impact: Minor inconvenience or no visible impact to a client. Example:

The "Business Process Risk Assessment" is provided in Appendix A.
Based on the assessment, the following processes will receive high priority during testing:
1.
Business Process Likelihood Ranking Criteria

Low: Feature set specific to a particular company
Medium: Feature set used by a particular user group
High: Core functionality used by all user groups

The "Business Process Risk Assessment" is provided in Appendix A.
Based on the likelihood, the following processes will receive high priority during testing:
1.
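To make the combined ranking concrete, the following Python sketch folds the impact and likelihood rankings above into a single testing-priority score. The process names and rankings in it are hypothetical placeholders, not values from the Appendix A assessment.

    # Illustrative sketch: combine the impact and likelihood rankings into a
    # single testing-priority score. Process names and rankings below are
    # hypothetical placeholders, not values from Appendix A.

    IMPACT = {"High": 3, "Medium": 2, "Low": 1}
    LIKELIHOOD = {"High": 3, "Medium": 2, "Low": 1}

    def risk_score(impact: str, likelihood: str) -> int:
        """Higher score = more testing rigor required."""
        return IMPACT[impact] * LIKELIHOOD[likelihood]

    # Hypothetical business processes with assumed rankings.
    processes = [
        ("Managing Contacts and Opportunities", "High", "High"),
        ("Reporting", "Medium", "Medium"),
        ("User Tips", "Low", "Low"),
    ]

    for name, impact, likelihood in sorted(
            processes, key=lambda p: risk_score(p[1], p[2]), reverse=True):
        print(f"{risk_score(impact, likelihood):2d}  {name}")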
5. Software Test Approach
5.1 Strategies
Several strategies are employed in this plan in order to manage risk and get maximum
value from the time available for test preparation and execution.
5.2 General Test Objectives
To find any bugs that have not been found in the unit and integration testing performed
by the development team
To ensure that all requirements have been met
Positive test cases designed to test for correct functions will be supplemented with
negative test cases designed to find problems and to test for correct error and exception
handling.
5.3 Application Functionality
All areas of the application will be tested for correct results, as documented in the
project requirements document(s), supplemented with the application interfaces described
in the next section.
5.4 Application Interfaces
The following interfaces are included in the 'PRODUCT' test plan:
- Internal interface with Outlook 2000
- Reporting capabilities with MS Word and MS Excel
- Internal interface with MS Access to verify correct storage and retrieval of data
- Text files to verify data importing capabilities
5.5 Software Testing Types
5.5.1 Stability
The purpose of stability testing (smoke testing or sanity checks) is to verify new promotions
before they enter the test environment, so that they do not destabilize it.
Software Test Objective:
Verify the stability of new builds before accepting them into the test environment.
Technique:
Manually validate the new build by running a few simple tests in a separate environment.
Stability testing usually runs for one or two hours.
Completion Criteria:
The new build will not produce major delays for the testing group when it is ported
into the test environment.
Special Considerations:
A few questions should be asked with regard to stability testing:
Should we prepare a special environment, such as PV (port verification), or run it in the
development environment?
What is the procedure if the new build is not accepted?
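As an illustration of how such a check might be kept small and repeatable, the sketch below runs a fixed list of quick checks against a new build and rejects the build if any of them fail. The individual checks (application start, database reachable, create a contact) are hypothetical placeholders for real 'PRODUCT' steps.

    # Minimal smoke-test sketch for accepting a new build into the test
    # environment. Each check is a hypothetical placeholder; a real list
    # would cover login, opening the main screens, and one simple
    # end-to-end transaction.

    def check_application_starts() -> bool:
        return True  # placeholder: launch 'PRODUCT', confirm the main window

    def check_database_reachable() -> bool:
        return True  # placeholder: open the MS Access database, read one row

    def check_create_contact() -> bool:
        return True  # placeholder: create and then delete one sales contact

    SMOKE_CHECKS = [check_application_starts, check_database_reachable,
                    check_create_contact]

    def run_smoke_tests() -> bool:
        """Return True only if every check passes; otherwise the build is
        rejected before it reaches the test environment."""
        failures = [c.__name__ for c in SMOKE_CHECKS if not c()]
        for name in failures:
            print(f"FAILED: {name}")
        return not failures

    if __name__ == "__main__":
        print("Build accepted" if run_smoke_tests() else "Build rejected")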
5.5.2 System
Testing of the application should focus on any target requirements that can be traced
directly to use cases (or business functions), and business rules. The goals of these
tests are to verify proper data acceptance, processing, and retrieval, and the appropriate
implementation of the business rules. This type of testing is based upon black box
techniques, that is, verifying the application (and its internal processes) by interacting
with the application via the GUI and analyzing the output (results). Identified below is
an outline of the testing recommended for each application:
Test Objective:
Ensure proper application navigation, data entry, processing, and retrieval.
Technique:
Execute each use case, use case flow, or function, using valid and invalid data, to verify
the following:
The expected results occur when valid data is used.
The appropriate error / warning messages are displayed when invalid data is used.
Each business rule is properly applied.
Completion Criteria:
All planned tests have been executed.
All identified defects have been addressed.
Special Considerations: (TBD)
[Identify / describe those items or issues (internal or external) that impact the
implementation and execution of System test]
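One possible shape for the valid / invalid data technique is sketched below as pytest-style parametrized cases. The validate_contact rule is a hypothetical stand-in for a 'PRODUCT' business rule, not actual application code.

    # Sketch of the valid / invalid data technique with pytest-style
    # parametrized cases. validate_contact is a hypothetical stand-in for
    # a 'PRODUCT' business rule.
    import pytest

    def validate_contact(name: str, email: str) -> str:
        """Hypothetical rule: name required, e-mail must contain '@'."""
        if not name:
            raise ValueError("name is required")
        if "@" not in email:
            raise ValueError("invalid e-mail address")
        return "accepted"

    @pytest.mark.parametrize("name,email", [
        ("Jane Doe", "jane@example.com"),   # valid data: expect acceptance
        ("J. Smith", "smith@example.com"),
    ])
    def test_valid_data_is_accepted(name, email):
        assert validate_contact(name, email) == "accepted"

    @pytest.mark.parametrize("name,email,message", [
        ("", "jane@example.com", "name is required"),  # mandatory field
        ("Jane Doe", "not-an-address", "invalid e-mail address"),
    ])
    def test_invalid_data_raises_the_right_error(name, email, message):
        with pytest.raises(ValueError, match=message):
            validate_contact(name, email)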
5.5.3 Software Regression Testing
Software regression testing has three purposes. The first is to ensure that a reported problem
has been properly corrected. The second is to verify that the corrective actions did not
introduce any additional problems. The third is to verify that new functionality promoted
into the test environment did not break any previously working parts of the software.
This usually means repeating a number of the tests in which the problems were originally
found and running a few tests to verify the surrounding functionality.
Test Objective:
Verify that the reported problems were fixed properly and no additional problems were
introduced during the fix.
Technique:
Manually repeat, or develop automated scripts to repeat, the tests in which the problems
were originally discovered.
Run a few tests to verify the surrounding functionality.
Completion Criteria:
'PRODUCT' transactions execute successfully without failure.
Special Considerations:
What is the extent of verification of the surrounding functionality?
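The selection of regression tests can also be sketched: re-run the test that originally found the problem, then a few tests covering the surrounding functionality. The test registry and business-area names below are hypothetical.

    # Sketch of regression test selection: the originally failing test
    # first, then the remaining tests of its business area as the
    # "surrounding functionality". Names are hypothetical.

    TESTS_BY_AREA = {
        "Opportunities": ["test_create_opportunity",
                          "test_assign_opportunity"],
        "Opportunity Journal": ["test_add_journal_entry"],
        "Reporting": ["test_forecast_report"],
    }

    def select_regression_tests(failing_test: str, area: str) -> list[str]:
        surrounding = [t for t in TESTS_BY_AREA.get(area, [])
                       if t != failing_test]
        return [failing_test] + surrounding

    print(select_regression_tests("test_create_opportunity",
                                  "Opportunities"))
    # ['test_create_opportunity', 'test_assign_opportunity']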
5.5.4 Installation
Installation testing has two purposes. The first is to ensure that the software can be
installed under all possible configurations, such as a new installation, an upgrade, and a
complete or custom installation, and under normal and abnormal conditions.
Abnormal conditions include insufficient disk space, lack of privileges to create
directories, etc. The second purpose is to verify that, once installed, the software operates
correctly. This usually means running a number of tests that were developed for function testing.
Test Objective:
Verify and validate that the 'PRODUCT' client software properly installs onto each client
under the following conditions:
New installation: a new machine, on which 'PRODUCT' has never been installed
Update: a machine on which the same version of 'PRODUCT' was previously installed
Update: a machine on which an older version of 'PRODUCT' was previously installed
Technique:
Manually, or with automated scripts, validate the condition of the target machine
(new - 'PRODUCT' never installed; 'PRODUCT' same version or older version already installed).
Launch / perform the installation.
Using a predetermined subset of integration or system test scripts, run the transactions.
Completion Criteria:
'PRODUCT' transactions execute successfully without failure.
Special Considerations:
Which 'PRODUCT' transactions should be selected to comprise a confidence test that the
'PRODUCT' application has been successfully installed and no major software components
are missing?
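A minimal sketch of validating the condition of the target machine before installation, and of a confidence check afterwards, is shown below. The install path and version-file layout are assumptions for illustration only.

    # Sketch: classify the target machine, then confirm the install.
    # The install directory and version file are hypothetical.
    from pathlib import Path

    INSTALL_DIR = Path(r"C:\Program Files\PRODUCT")  # assumed location
    VERSION_FILE = INSTALL_DIR / "version.txt"       # assumed layout

    def machine_condition() -> str:
        """Classify the target: new, same version, or older version."""
        if not INSTALL_DIR.exists():
            return "new - 'PRODUCT' never installed"
        if VERSION_FILE.exists():
            return ("previously installed, version "
                    + VERSION_FILE.read_text().strip())
        return "previously installed, version unknown"

    def verify_installation(expected_version: str) -> bool:
        """Confidence check: files present and version as expected."""
        return (VERSION_FILE.exists()
                and VERSION_FILE.read_text().strip() == expected_version)

    print("Before install:", machine_condition())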
5.5.5 Recovery
Failover / recovery testing ensures that an application or entire system can successfully
recover from a variety of hardware, software, or network malfunctions without undue loss of
data or data integrity.
Failover testing ensures that, for those systems that must be kept running, when a failover
condition occurs, the alternate or backup systems properly "take over" for the failed system
without loss of data or transactions.
Recovery testing is an antagonistic test process in which the application or system is exposed
to extreme conditions (or simulated conditions) such as device I/O failures or invalid
database pointers / keys. Recovery processes are invoked and the application / system
is monitored and / or inspected to verify proper application / system / and data recovery
has been achieved.
Test Objective:
Verify that recovery processes (manual or automated) properly restore the database,
applications, and system to a desired, known, state. The following types of conditions
are to be included in the testing:
Power interruption to the client
Power interruption to the server
Communication interruption via network server(s)
Interruption, communication, or power loss to DASD and/or DASD controller(s)
Incomplete cycles (data filter processes interrupted, data synchronization processes
interrupted).
Invalid database pointer / keys
Invalid / corrupted data element in database
Technique:
Tests created for System testing should be used to create a series of transactions.
Once the desired starting test point is reached, the following actions should be performed
(or simulated) individually:
Power interruption to the client: power the PC down
Power interruption to the server: simulate or initiate power down procedures for the server
Interruption via network servers: simulate or initiate communication loss with the network
(physically disconnect communication wires or power down network server(s) / routers).
Interruption, communication, or power loss to DASD and/or DASD controller(s): simulate or
physically eliminate communication with one or more DASD controllers or devices.
Once the above conditions / simulated conditions are achieved, additional transactions
should be executed, and upon reaching this second test point state, recovery procedures
should be invoked.
Testing for incomplete cycles utilizes the same technique as described above except that
the database processes themselves should be aborted or prematurely terminated.
Testing for invalid database pointers / keys and corrupted data elements requires that a
known database state be achieved. Several database fields, pointers, and keys should be
corrupted manually and directly within the database (via database tools). Additional
transactions should be executed using the tests from system testing, and full cycles executed.
Completion Criteria:
In all cases above, the application, database, and system should, upon completion of
recovery procedures, return to a known, desirable state. This state includes data
corruption limited to the known corrupted fields, pointers / keys, and reports indicating
the processes or transactions that were not completed due to interruptions.
Special Considerations:
Recovery testing is highly intrusive. Procedures to disconnect cabling (simulating
power or communication loss) may not be desirable or feasible. Alternative methods,
such as diagnostic software tools, may be required.
Resources from the Systems (or Computer Operations), Database, and Networking groups
are required.
These tests should be run after hours or on isolated machines. This may call for
a separate test server.
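Where physically disconnecting power or cabling is not feasible, the interrupted-cycle case can be approximated in software. The sketch below kills a writer process mid-run and then checks database integrity; SQLite is used purely as a stand-in for the MS Access test database.

    # Sketch: simulate abrupt client power loss by killing a writer
    # process mid-transaction, then verify database integrity.
    # SQLite stands in here for the MS Access test database.
    import sqlite3, subprocess, sys, textwrap, time

    DB = "recovery_test.db"
    WRITER = textwrap.dedent(f"""
        import sqlite3, itertools
        con = sqlite3.connect({DB!r})
        con.execute("CREATE TABLE IF NOT EXISTS t (n INTEGER)")
        for i in itertools.count():      # keep writing until killed
            con.execute("INSERT INTO t VALUES (?)", (i,))
            con.commit()
    """)

    proc = subprocess.Popen([sys.executable, "-c", WRITER])
    time.sleep(2)    # let some transactions complete
    proc.kill()      # simulate abrupt power loss on the client
    proc.wait()

    con = sqlite3.connect(DB)
    status = con.execute("PRAGMA integrity_check").fetchone()[0]
    rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    print(f"integrity: {status}, committed rows survived: {rows}")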
5.5.6 Configuration
Configuration testing verifies operation of the software on different software and
hardware configurations. In most production environments, the particular hardware
specifications for the client workstations, network connections and database servers vary.
Client workstations may have different software loaded (e.g. applications, drivers, etc.)
and at any one time many different combinations may be active and using different resources.
Test Objective:
Validate and verify that the 'PRODUCT' client application functions properly on the
prescribed client workstations.
Technique:
Use Software Integration and System Test scripts
Open / close various Microsoft applications, such as Excel and Word, either as part
of the test or prior to the start of the test.
Execute selected transactions to simulate user activities into and out of 'PRODUCT'
and the Microsoft applications.
Repeat the above process, minimizing the available conventional memory on the client.
Completion Criteria:
For each combination of 'PRODUCT' and Microsoft application, 'PRODUCT' transactions are
successfully completed without failure.
Special Considerations:
Which Microsoft applications are available and accessible on the clients?
Which applications are typically used?
What data are the applications handling (e.g., a large spreadsheet open in Excel, a
100-page document in Word)?
The entire system configuration (network software, network servers, databases, etc.) should
also be documented as part of this test.
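One pass of this technique could be scripted roughly as shown below, opening Microsoft applications around the 'PRODUCT' transaction under test. This assumes the pywin32 package on a Windows client; run_product_transaction is a hypothetical placeholder for a scripted 'PRODUCT' test step.

    # Sketch of one configuration-test pass: exercise a 'PRODUCT'
    # transaction while Excel and Word are open. Assumes pywin32.
    import win32com.client

    def run_product_transaction() -> None:
        pass  # placeholder: drive one 'PRODUCT' transaction via its GUI

    excel = win32com.client.Dispatch("Excel.Application")
    word = win32com.client.Dispatch("Word.Application")
    try:
        excel.Workbooks.Add()      # simulate a user working in Excel
        word.Documents.Add()       # ...and in Word
        run_product_transaction()  # exercise 'PRODUCT' with both open
    finally:
        excel.Quit()
        word.Quit()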
5.5.7 Security
Security and access control testing focuses on two key areas of security:
- Application security, including access to the data or business functions, and
- System security, including logging into / remote access to the system.
Application security ensures that, based upon the desired security, users are restricted
to specific functions or are limited in the data that is available to them. For example,
everyone may be permitted to enter data and create new accounts, but only managers may
delete them. If there is security at the data level, testing ensures that a user of "type
one" can see all customer information, including financial data, while a user of "type two"
sees only the demographic data for the same client.
System security ensures that only those users granted access to the system are capable
of accessing the applications and only through the appropriate gateways.
Test Objective:
Function / Data Security: Verify that users can access only those functions / data for
which their user type has been granted permissions.
System Security: Verify that only those users with access to the system and application(s)
are permitted to access them.
Technique:
Function / Data Security: Identify and list each user type and the functions / data
each type has permissions for.
Create tests for each user type and verify each permission by creating transactions
specific to each user type.
Modify the user type and re-run the tests for the same users. In each case, verify that the
additional functions / data are correctly made available or denied.
System Access (see special considerations below)
Completion Criteria:
For each known user type, the appropriate functions / data are available, and all
transactions function as expected, as they did in prior system tests.
Special Considerations:
Access to the system must be reviewed / discussed with the appropriate network or
systems administrator. This testing may not be required, as it may be a function of
network or systems administration. Remote access control requires special consideration.
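The function / data security technique amounts to checking a permission matrix, as sketched below. The user types and functions are hypothetical examples in the spirit of the account-deletion example above, not the actual 'PRODUCT' permission scheme.

    # Sketch of the function / data security technique: a permission
    # matrix per user type, verified by attempting every function as
    # every user type. User types and functions are hypothetical.

    PERMISSIONS = {
        "manager": {"enter_data", "create_account", "delete_account"},
        "sales":   {"enter_data", "create_account"},
    }
    ALL_FUNCTIONS = {"enter_data", "create_account", "delete_account"}

    def attempt(user_type: str, function: str) -> bool:
        """Stand-in for driving the application as the given user type."""
        return function in PERMISSIONS[user_type]

    def test_permission_matrix():
        for user_type, allowed in PERMISSIONS.items():
            for function in ALL_FUNCTIONS:
                expected = function in allowed
                assert attempt(user_type, function) == expected, (
                    f"{user_type} / {function}: expected "
                    f"{'allowed' if expected else 'denied'}")

    test_permission_matrix()
    print("permission matrix verified")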
Performance testing (synchronization issues): TBD
6. Business Areas for System Test
For testing purposes, the system will be divided into the following areas:
1. Sales Setup
2. Creating Databases
3. Getting Started - User
4. Managing Contacts and Opportunities
5. Managing the database
6. Reporting
7. Views
8. Features
9. User Tips
7. Software Test Preparation
7.1 Software Test Case Development
Test cases will be developed based on the following:
- Online help
- Changes to the PST.doc
- Company Setup.doc
- Defining Views.doc
- Importing Contact.doc
- Installation Manual.doc
- Linking Activities.doc
- Quick Guide.doc
- Uninstalling.doc
Rather than developing detailed test cases to verify the appearance and mechanics of
the GUI during unit testing, we will develop a standard checklist to be used by developers.
If the timeframe does not permit the development of detailed test scripts with precise
inputs and outputs for an upcoming version of 'PRODUCT', we will instead develop test
cases, along with checklists, elaborated to a level that allows a tester to understand
the objectives of each test.
7.2 Test Data Setup
The test data setup and test data dependencies are described in Appendix B.
(To be discussed with the DBA.) Test data setup may not be an issue; however, the data
dependencies (what data, and from where) should be identified.
7.3 Test Environment
Tests will only be executed using known, controlled databases, in a secure testing environment.
Stability testing of all new promotions will be executed in a separate environment,
in order not to destabilize the test environment.
7.3.1 Database Restoration Strategies
The database will be backed up daily. Backups are to be kept for two weeks, so it should
be possible to drop back to a clean version of the database if a corruption problem occurs
during testing. (This will be more work if the database definition has changed in the
interim. When the database is moved from MS Access to SQL Server, a data conversion may
need to be run if the test database holds a lot of data.)
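A sketch of the daily backup / two-week retention strategy described above follows; the database file location and backup naming are assumptions for illustration.

    # Sketch of the daily backup / two-week retention strategy.
    # The test database location is a hypothetical MS Access file.
    import shutil
    from datetime import date, timedelta
    from pathlib import Path

    DB_FILE = Path("product_test.mdb")  # assumed test database location
    BACKUP_DIR = Path("backups")
    RETENTION = timedelta(days=14)      # keep backups for two weeks

    def daily_backup(today: date | None = None) -> Path:
        today = today or date.today()
        BACKUP_DIR.mkdir(exist_ok=True)
        name = f"{DB_FILE.stem}_{today.isoformat()}{DB_FILE.suffix}"
        target = BACKUP_DIR / name
        shutil.copy2(DB_FILE, target)   # snapshot today's database
        # Drop any backup older than the retention window.
        for old in BACKUP_DIR.glob(f"{DB_FILE.stem}_*{DB_FILE.suffix}"):
            stamp = date.fromisoformat(old.stem.rsplit("_", 1)[1])
            if today - stamp > RETENTION:
                old.unlink()
        return target

    def restore(backup: Path) -> None:
        """Drop back to a clean database after a corruption problem."""
        shutil.copy2(backup, DB_FILE)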
8. Software Test Execution
8.1 Software Test Execution Planning
The testing types (see section 5.5) will be scheduled as follows.
Stage 1 will include unit and integration testing, to be done by the development team.
The GUI checklist will be supplied by the test team and reviewed by the appropriate stakeholders.
Stage 2 will include the stability testing, to be done by the test team lead.
Stage 3 will include the system testing, to be done by the test team and supporting
personnel. The system testing will mostly be approached from the business rules angle,
because the unit testing will be functional.
Stage 4 will include the installation, configuration, security, and other testing types
described in section 5.5 of this document. These types of testing will be done by
the testing team and supporting personnel.
Note: Usability testing will be done throughout the whole testing cycle and will
concentrate on user-friendliness issues.
8.2 Software Test Execution Documentation
Testers will check off each successful step on the test sheets with the execution date,
then sign and date the completed sheets. Any printouts used to verify results will be
annotated with the step number and attached. This documentation will be retained for
inclusion in the package for handover to the UAT team at the end of the testing cycle.
For test steps that find problems, testers will note the test step number in the problem logs,
and also annotate the test sheet with the problem log numbers. Once the problem has been fixed
and successfully retested, the tester will update the problem log to reflect this.
The test case template is described in Appendix C.
8.3 Problem Reporting
Problem reporting will be handled using an automated bug tracking system (Name).
Summary reports of outstanding problems will be produced daily and circulated as required.
Five problem priority and severity levels will be used (see Appendix D). Screen prints,
printouts of database queries, reports, tables, etc. demonstrating the problem will be
attached to a hard copy of each problem log, as appropriate. The Test Lead will hand all
problem logs over to the appropriate stakeholder.
A specific procedure will be developed for capturing, recording, fixing, and closing
problems found during the testing process. The procedure will depend on the problem
priority and severity levels, and the appropriate actions will be designed based on the
problem status at a given point in time.
These are described in Appendix D.
9. Status Reporting
9.1 Software Test Execution Process
Each business area will be further subdivided into sub-business processes, down to the
smallest business execution unit. The number of test cases will be calculated for each
sub-business process, and the percentage of executed test cases will be tracked continuously.
9.2 Problem Status
The following metrics, in the form of graphs and reports, will be used to provide
the required problem status information:
Weekly problem detection rates
Weekly problem detection rates, by week (diagram)
Priority 1-2 problems vs. Total problems discovered ratio
Re-Open / Fixed problem ratio
(TBD)
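As an illustration, the sketch below computes these metrics from a problem log. The sample records are hypothetical; a real report would read them from the bug tracking system.

    # Sketch: compute the problem-status metrics listed above from a
    # problem log. Sample records are hypothetical.
    from collections import Counter

    # One record per problem: (week found, priority, current status)
    problems = [
        (1, 1, "Fixed"), (1, 3, "Closed"), (2, 2, "Re-open"),
        (2, 4, "Fixed"), (2, 1, "Fixed"),
    ]

    weekly_detection = Counter(week for week, _, _ in problems)
    high = sum(1 for _, p, _ in problems if p <= 2)
    reopened = sum(1 for _, _, s in problems if s == "Re-open")
    fixed = sum(1 for _, _, s in problems if s == "Fixed")

    print("Weekly problem detection rates:", dict(weekly_detection))
    print(f"Priority 1-2 vs. total problems ratio: {high}/{len(problems)}")
    print(f"Re-open / Fixed problem ratio: {reopened}/{fixed}")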
10. Handover to the User Acceptance Test (UAT) Team
On system test completion, the test lead will hand over the tested system and all
accompanying test documentation to the (stakeholder). Specific handover criteria will
be developed and agreed upon.
11. Deliverables
The following documents, tools, and reports will be created during the testing process:

Deliverables        By Whom    To Whom    When
1. Test Strategy
2. Test Plan
3. Test Results
12. Approvals
The Test Strategy document must be approved by the appropriate stakeholders.

Title    Name    Signature    Date
1.
2.
3.
13. Appendixes
13.1 Appendix A (Business Process Risk Assessment)
13.2 Appendix B (Software Test Data Setup)

##    Function    Data Required    Data Source
1

13.3 Appendix C (Software Test Case Template)
Business Area: 01.
Process Name: 01.01
Test Case: 01.01.01
Test Case Prerequisites:

Tester    Sign off    Date    Version

Step    Action    Date    Results    Expected Results    Pass/Log#    Retest
.01
1.1
13.4 Appendix D (Problem Tracking Process)
This document describes the bug tracking process for the 'PRODUCT' program.
All problems found during testing will be logged in the XXX bug tracking system,
using a single database for all participating systems. Everyone who will be testing,
fixing bugs, communicating with clients, or managing teams doing any of these activities
will be given "write" access to the database.
Several different kinds of reports and graphs can be produced in XXX using its
standard reports, or using the included report writer to create custom reports.
Either of these can be combined with XXX's filtering capabilities to report on a
selection of the data.
During testing, all promotions to the test environment must be associated with a
problem log and agreed with the testing team. To avoid destabilizing the test
environment unnecessarily, promotions may be scheduled to bundle several changes,
except for problems classed as high priority (urgent), because they will hold up the testing.
The following Severity Strategy will be used:
Severity 1 -
Severity 2 -
Severity 3 -
Severity 4 -
Severity 5 -
The following Priority Strategy will be used:
Priority 1 - Complete crash of a system
Priority 2 - An important function does not work and there is no workaround
Priority 3 - The function does not work, but there is a work around
Priority 4 - Partial function deficiency
Priority 5 - Nuisance
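The life cycle described in the bug tracking steps that follow can be summarized as a set of allowed status transitions. The sketch below is one reading of steps 1 through 7, for illustration only.

    # Sketch: the problem statuses from the process below, modelled as
    # allowed transitions. This is one reading of steps 1-7.
    ALLOWED = {
        "Open":     {"Assigned", "Pending", "Void", "Deferred"},
        "Assigned": {"Fixed"},
        "Fixed":    {"Promoted"},
        "Promoted": {"Closed", "Re-open"},
        "Re-open":  {"Assigned"},
        "Pending":  {"Assigned", "Closed"},
        "Void":     {"Closed"},
    }

    def move(status: str, new_status: str) -> str:
        if new_status not in ALLOWED.get(status, set()):
            raise ValueError(f"illegal transition {status} -> {new_status}")
        return new_status

    status = move("Open", "Assigned")  # step 2.1: assigned to a developer
    status = move(status, "Fixed")     # step 3:   fixed and unit tested
    status = move(status, "Promoted")  # step 6:   promoted to test env
    status = move(status, "Closed")    # step 7.1: retested successfully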
Regular Bug Tracking Process
(Each step lists the responsible stakeholder, the actions taken, and the resulting bug
status and problem type.)

Step 1. Log problem
Responsible stakeholder: Problem Originator (Tester, Technical Support, or Sales)
Actions:
- Open a new log in the XXX system
- Try to reproduce the problem; if the problem is not reproducible, note this in the problem log
- Verify whether any duplicates of the same problem are already in the system
- Enter a full description of the problem
- If necessary, print the log and attach any supporting printouts
- Assign the Priority and Severity of the problem
- Assign the Owner as the person responsible for following up the problem resolution (TBD)
Bug Status: Open. Problem Type: Bug.

Step 2. Evaluate the problem and initiate the problem resolution
Responsible stakeholder: Development Leader
Actions:
- Review the problem
- Review the problem Priority and Severity; if in disagreement, change them and specify the
reason why the Severity and/or Priority were changed

Step 2.1. If this is a bug, assign the problem to a developer for correction.
Bug Status: Assigned. Problem Type: Bug.

Step 2.2. If this is a bug, but it will not be corrected at this time due to a low
Priority/Severity rating or to time or resource limitations:
- Escalate for decision/agreement
- Set the problem type as appropriate
- Annotate the log with the recommended solution
- Set the status to Pending
The Development Leader remains the problem owner until the problem is re-assigned for
resolution, corrected, sent to training, or closed by management decision with a Pending
status assigned.
Bug Status: Pending. Problem Type: Bug.

Step 2.3. If this is an environmental issue, initiate the environment correction process
by assigning it to the appropriate person.
Bug Status: Assigned. Problem Type: Environment Setup.

Step 2.4. If this is an Originator's error:
- Annotate the problem log with an explanation
- Change the problem type to Non-Problem, Duplicate, or Change Request
- Get the Problem Originator's agreement
- Set the status to Void
Bug Status: Void. Problem Type: Non-Problem, Duplicate, or Change Request.

Step 2.5. If the problem will not be corrected, but it was reported by Technical Support
or Sales as a client complaint:
- Change the problem type to Training
- Annotate the problem log with an explanation of the workaround
- Get the Problem Originator's agreement
- Notify sales and the technical writer about the new training issue (TBD with sales
and the technical writer on how they want to proceed from there)
- Set the status to Deferred
- Consider the problem correction for the next release
Bug Status: Deferred. Problem Type: Training.

Step 3. Fix the problem
Responsible stakeholder: Developer
Actions:
- Fix and unit test the corrected problem
- Update the problem log with resolution information
- Set the status to Fixed
- Pass back to the problem Originator for verification
Bug Status: Fixed. Problem Type: Bug.

Step 4. Fix setup problem
Responsible stakeholder: ? (could be a network administrator)
Actions:
- Fix and test the setup in the reported environment
- If required, notify sales and the technical writer about the possible setup problem
and the setup correction solution
- Update the problem log with resolution information
- Set the status to Fixed and redirect ownership to the problem Originator
Bug Status: Fixed. Problem Type: Setup.

Step 5. Originator agrees with Non-Problem or Duplicate
Responsible stakeholder: Originator
Actions:
- Close the problem log: change the problem status to Closed
- Update other log fields if required
The Originator remains the problem owner after he/she closes the problem log.
Bug Status: Closed. Problem Type: Non-Problem, Duplicate, or Change Request.

Step 6. Promote the fix to the test environment
Responsible stakeholder: Development Leader
Actions:
- Verify the fix
- Promote any modified programs to the next release, and update the problem status
to Promoted
Bug Status: Promoted. Problem Type: Bug.

Step 7. Verify the fix
Responsible stakeholder: Originator
Actions:
- Retest the fix
- Update the problem log
- Change the status to Closed or Re-open
- Annotate the test execution history with retest information

Step 7.1. If the problem is fixed:
- Change the problem status to Closed
The Originator remains the problem owner after he/she closes the problem.
Bug Status: Closed. Problem Type: Bug.

Step 7.2. If the problem is not fixed, or other problems were created as a result of the fix:
- Change the status of the problem to Re-open
- Annotate the test execution history with retest information
- Redirect ownership to the Development Team Leader
Bug Status: Re-open. Problem Type: Bug.
NOTE: Priority 1 problems will require immediate attention, because they will hold up
the testing. If a Priority 1 problem is discovered, the problem resolution will follow
this process:
- The problem is logged into the XXX system
- A notification requesting immediate problem resolution is sent to the Development
Team Leader
- The problem fix is made in the test environment and, if necessary, promoted into
development; after the fix, a retest is done in the test environment
Software Test Strategy Document Example END.
© 2000 Alex Samurin geocities.com/xtremetesting/