<Project Name>

Test Strategy

Version <1.0>




[Note: The following template is provided for use with the Rational Unified Process (RUP), and is designed for use in conjunction with the detailed guidance provided within RUP. As with most of the templates provided with RUP, this template should be customized to suit the context of the specific project it will be used on.]

[Note: Text such as this that you are currently reading, enclosed in square brackets and displayed in blue italics (style=InfoBlue), is included to provide guidance to the author and should be deleted before publishing the document. A paragraph entered following this style will automatically be set to normal (style=Body Text).]

 

Revision History

Date

Version

Description

Author

<dd/mmm/yy>

<x.x>

<details>

<name>



Table of Contents

1. Introduction

1.1 Purpose

1.2 Scope

1.3 Intended Audience

1.4 Document Terminology and Acronyms

1.5 References

1.6 Document Structure

2. Governing Evaluation Mission

2.1 Project Context and Background

2.2 Evaluation Mission

2.3 Test Motivators

3. Test Approach

3.1 Measuring the Extent of Testing

3.2 Identifying and Justifying Tests

3.3 Conducting Tests

3.3.1 Technique 1

3.3.2 Technique n+1

4. Environmental Needs

4.1 Base System Hardware

4.2 Base Software Elements in the Test Environment

4.3 Productivity and Support Tools

4.4 Test Environment Configurations

5. Responsibilities, Staffing, and Training Needs

5.1 People and Roles

5.2 Staffing and Training Needs

6. Risks, Dependencies, Assumptions, and Constraints

7. Management Process and Procedures

7.1 Problem Reporting, Escalation, and Issue Resolution

7.2 Traceability Strategies

7.3 Approval and Signoff


Test Strategy

1.     Introduction

1.1     Purpose

The purpose of the Test Strategy for the <complete lifecycle, specific-phase> of the <Project Name> is to:

-         Provide a central artifact to govern the strategic approach to the test effort. It defines the general approach that will be employed to test the software and to evaluate the results of that testing, and it is the artifact to which detailed test planning artifacts will refer for governance of the testing work.

-         Provide visibility to stakeholders in the testing effort that adequate consideration has been given to the various aspects of governing the testing effort and, where appropriate, to have those stakeholders approve the strategy.

This Test Strategy also supports the following specific objectives:

- [Identifies the items that should be targeted by the tests.

- Identifies the motivation for and ideas behind the test areas to be covered.

- Outlines the testing approach that will be used.

- Identifies the required resources and provides an estimate of the test effort.

- Lists the deliverable elements of the test project.]

1.2     Scope

[Defines the types of testing (such as Functionality, Usability, Reliability, Performance, and Supportability) and, if necessary, the levels of testing (for example, Integration or System) that will be addressed by this Test Strategy. It is also important to provide a general indication of significant elements that will be excluded from scope, especially where the intended audience might otherwise reasonably assume the inclusion of those elements.

Note: Be careful to avoid repeating detail here that you will define later in the document, such as in Section 2.3, Test Motivators, and Section 3, Test Approach.]

1.3     Intended Audience

[Provide a brief description of the audience for whom you are writing the Test Strategy. This helps readers of your document identify whether it is a document intended for their use, and helps prevent the document from being used inappropriately.

Note: Document style and content often alter in relation to the intended audience.

This section should only be about three to five paragraphs in length.]

1.4     Document Terminology and Acronyms

[This subsection provides the definitions of any terms, acronyms, and abbreviations required to properly interpret the Test Strategy. Avoid listing items that are generally applicable to the project as a whole and that are already defined in the project's Glossary. Include a reference to the project's Glossary in the References section.]

1.5    References

[This subsection provides a list of the documents referenced elsewhere within the Test Strategy. Identify each document by title, version (or report number if applicable), date, and publishing organization or original author. Avoid listing documents that are influential but not directly referenced. Specify the sources from which the "official versions" of the references can be obtained, such as intranet UNC names or document reference codes. This information may be provided by reference to an appendix or to another document.]

1.6     Document Structure

[This subsection outlines what the rest of the Test Strategy contains and gives an introduction to how the rest of the document is organized. This section may be eliminated if a Table of Contents is used.]

2.     Governing Evaluation Mission

[Provide an overview of the mission(s) that will govern the detailed testing within the iterations.]

2.1     Project Context and Background

[Provide a brief description of the background surrounding the project, with specific reference or focus on important implications for the test effort. Include information such as the key problem being solved, the major benefits of the solution, the planned architecture of the solution, and a brief history of the project. Where this information is defined sufficiently in other documents, you might simply include a reference to those documents; however, it may save readers of the Test Strategy time and effort if a limited amount of information is duplicated here, so use your judgement. As a general rule, this section should only be about three to five paragraphs in length.]

2.2     Evaluation Mission

[Provide a brief statement that defines the mission(s) for the test and evaluation effort over the scope of the plan. The governing mission statement(s) might incorporate one or more concerns including:

-         find as many bugs as possible

-         find important problems, assess perceived quality risks

-         advise about perceived project risks

-         certify to a standard

-         verify a specification (requirements, design or claims)

-         advise about product quality, satisfy stakeholders

-         advise about testing

-         fulfill process mandates

-         and so forth

Each mission provides a different context to the test effort and changes the way in which testing should be approached.]

2.3     Test Motivators

[Provide an outline of the key elements that will motivate the testing effort in this iteration. Testing will be motivated by many things: quality risks, technical risks, project risks, use cases, functional requirements, non-functional requirements, design elements, suspected failures or faults, change requests, and so forth.]

3.     Test Approach

[Note: It is important to remember that, as a general rule, an appropriate test approach is specific to the context of the individual project. As such, the elements defined in the approach will differ from project to project, depending on the evaluation mission and other project-specific factors.]

[The Test Approach presents the recommended strategy for analyzing, designing, implementing and executing the required tests. The specific Test Plans identify what items will be targeted for testing and what types of tests would be performed. This section of the test strategy describes how the tests will be realized.]

[One of the main aspects of the test approach is the selection of the techniques that will be used. This Test Strategy should include an outline of how each technique can be designed, implemented, and executed, and the criteria for knowing that the technique is both useful and successful. For each technique, provide a description of the technique and define why it is an important part of the test approach by briefly outlining how it helps achieve the Evaluation Mission(s).

As you define each aspect of the approach, you should consider the impact it will have on resources such as staff, tools, and testing hardware, and note that impact accordingly.]

3.1     Measuring the Extent of Testing

[Describe the strategy you will use for measuring the progress of the testing effort. When deciding on a measurement strategy, it is important to consider the following advice from Cem Kaner (2000): "Bug count metrics reflect only a small part of the work and progress of the testing group. Many alternatives look more closely at what has to be done and what has been done. These will often be more useful and less prone to side effects than bug count metrics."

A good measurement strategy will report on multiple dimensions. Consider the following dimensions, and select a subset that is appropriate for your project context: coverage (against the product and/or against the plan), effort, results, obstacles, risks (in product quality and/or testing quality), and historical trend (across iterations and/or across projects).]
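
[For illustration only, the following sketch shows one way progress might be reported across several of these dimensions rather than by bug counts alone. The language (Python), the class name TestProgressReport, and the field names are assumptions made for this example; they are not part of RUP or of any particular measurement tool.]

```python
# Hypothetical sketch of a multi-dimensional test-progress report.
# All names and figures are illustrative assumptions, not RUP artifacts.
from dataclasses import dataclass


@dataclass
class TestProgressReport:
    planned_tests: int     # coverage against the plan
    executed_tests: int
    passed_tests: int      # results
    open_defects: int
    blocked_tests: int     # obstacles

    def coverage_against_plan(self) -> float:
        """Fraction of planned tests that have been executed so far."""
        return self.executed_tests / self.planned_tests if self.planned_tests else 0.0

    def pass_rate(self) -> float:
        """Fraction of executed tests that passed."""
        return self.passed_tests / self.executed_tests if self.executed_tests else 0.0


# Example: a report part-way through an iteration.
report = TestProgressReport(planned_tests=120, executed_tests=80,
                            passed_tests=72, open_defects=14, blocked_tests=6)
print(f"Coverage vs. plan: {report.coverage_against_plan():.0%}")  # 67%
print(f"Pass rate: {report.pass_rate():.0%}")                      # 90%
```

[Reporting coverage against the plan alongside pass rate, open defects, and blocked tests gives a broader picture than a bug count on its own; historical trend can be obtained by comparing such reports across iterations.]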

3.2     Identifying and Justifying Tests

[Describe how tests will be identified and considered for inclusion in the scope of the test effort covered by this strategy. Provide a listing of the resources that will be used to stimulate and drive the identification and selection of the specific tests to be conducted, such as initial Test-Idea Catalogs, requirements documents, user documentation, and other reference sources. Examples of Test-Idea Catalogs can be found in the process components shipped with RUP.]

3.3     Conducting Tests

3.3.1     Technique 1

[Provide a brief one-paragraph introduction to the technique, covering the basis for or theory behind the technique and the general quality risks it addresses.]

Technique Objective:

[Explain the focus and goal of the technique in relation to the quality risks it addresses (e.g. FURPS+).]

Technique:

[Outline the high-level procedure for the technique, possibly as bulleted steps at the overview level.]

Oracles:

[Outline one or more strategies that can be used with the technique to accurately observe the outcomes of the test. The oracle combines elements of both the method by which the observation can be made and the characteristics of the specific outcome that indicate probable success or failure. Ideally, oracles will be self-verifying, allowing automated tests to make an initial assessment of test pass or failure; however, you should be careful to mitigate the risks inherent in automated results determination.]
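
[For illustration only, the following sketch shows a self-verifying oracle in an automated unit test: the expected outcome is encoded in the test itself, so the automation can make an initial pass or fail assessment. The function under test (calculate_order_total), the framework (Python's unittest), and the expected values are hypothetical examples rather than a prescribed choice.]

```python
# Illustrative sketch of a self-verifying oracle in an automated test.
import unittest


def calculate_order_total(unit_price: float, quantity: int, tax_rate: float) -> float:
    """Hypothetical function under test: price * quantity plus tax, rounded to cents."""
    return round(unit_price * quantity * (1 + tax_rate), 2)


class OrderTotalOracleTest(unittest.TestCase):
    def test_total_includes_tax(self):
        # Oracle: an independently calculated expected value (10.00 * 3 * 1.08 = 32.40).
        # A mismatch is flagged automatically, but the oracle itself should also be
        # reviewed, since an error in the expected value gives a false result.
        self.assertEqual(calculate_order_total(10.00, 3, 0.08), 32.40)


if __name__ == "__main__":
    unittest.main()
```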

Required Tools:

-         [Provide a simple list of the specific tools, or a brief outline of each type of tool, that the technique will require.]

Success Criteria:

[Explain how the technique will be judged as successful, giving specific criteria that can and will be measured.]

Special Considerations:

-         [Provide a list of any assumptions, constraints, dependencies or other considerations that will have an impact on the technique, such as tester skills or test resource requirements.]

3.3.2     Technique n+1

[Provide a brief one-paragraph introduction to the technique, covering the basis for or theory behind the technique and the general quality risks it addresses.]

Technique Objective:

[Explain the focus and goal of the technique in relation to the quality risks it addresses (e.g. FURPS+).]

Technique:

[Outline the high-level procedure for the technique, possibly as bulleted steps at the overview level.]

Oracles:

[Outline one or more strategies that can be used with the technique to accurately observe the outcomes of the test. The oracle combines elements of both the method by which the observation can be made and the characteristics of the specific outcome that indicate probable success or failure. Ideally, oracles will be self-verifying, allowing automated tests to make an initial assessment of test pass or failure; however, you should be careful to mitigate the risks inherent in automated results determination.]

Required Tools:

-         [Provide a simple list of the specific tools, or a brief outline of each type of tool, that the technique will require.]

Success Criteria:

[Explain how the technique will be judged as successful, giving specific criteria that can and will be measured.]

Special Considerations:

-         [Provide a list of any assumptions, constraints, dependencies or other considerations that will have an impact on the technique, such as tester skills or test resource requirements.]

 

4.     Environmental Needs

[This section presents the non-human resources required for the Test Strategy.]

4.1     Base System Hardware

The following table sets forth the system resources for the test effort presented in this Test Strategy.

[The specific elements of the test system may not be fully understood in early iterations, so expect this section to be completed over time. We recommend that the test system simulate the production environment, scaled down in terms of concurrent access, database size, and so forth, if and where appropriate.]

[Note: Add or delete items as appropriate.]

System Resources

Resource

Quantity

Name and Type


4.2     Base Software Elements in the Test Environment

The following base software elements are required in the test environment for this Test Strategy.

[Note: Add or delete items as appropriate.]

Software Element Name

Version

Type and Other Notes


 

4.3     Productivity and Support Tools

The following tools will be employed to support the test process for this Test Strategy.

[Note: Add or delete items as appropriate.]

Tool Category or Type

Tool Brand Name

Vendor or In-house

Version


 

4.4     Test Environment Configurations

The following Test Environment Configurations need to be provided and supported for this project.

Configuration Name

Description

Implemented in Physical Configuration


 

5.     Responsibilities, Staffing, and Training Needs

[This section presents the resources required to address the test effort outlined in the Test Strategy, their main responsibilities, and the knowledge or skill sets required of those resources.]

5.1     People and Roles

This table shows the staffing assumptions for the test effort.

[Note: Add or delete items as appropriate.]

 

Human Resources

Role

Minimum Resources Recommended

(number of full-time roles allocated)

Specific Responsibilities or Comments

Test Manager

 

Provides management oversight.

Responsibilities include:

-         plan the test effort and coordinate logistics

-         agree on the mission

-         identify motivators

-         acquire appropriate resources

-         present management reporting

-         advocate the interests of test

-         evaluate effectiveness of test effort

Test Analyst

 

 

Identifies and defines the specific tests to be conducted.

Responsibilities include:

-         identify test ideas

-         define test details

-         determine test results

-         document change requests

-         evaluate product quality

Test Designer

 

 

Defines the technical approach to the implementation of the test effort.

Responsibilities include:

-         define test approach

-         define test automation architecture

-         verify test techniques

-         define testability elements

-         structure test implementation

Tester

 

Implements and executes the tests.

Responsibilities include:

-         implement tests and test suites

-         execute test suites

-         log results

-         analyze and recover from test failures

-         document incidents

Test System Administrator

 

Ensures test environment and assets are managed and maintained.

Responsibilities include:

-         administer test management system

-         install and support access to, and recovery of, test environment configurations and test labs

Database Administrator, Database Manager

 

Ensures test data (database) environment and assets are managed and maintained.

Responsibilities include:

-         support the administration of test data and test beds (database).

Designer

 

Identifies and defines the operations, attributes, and associations of the test classes.

Responsibilities include:

-         define the test classes required to support testability requirements as defined by the test team

Implementer

 

Implements and unit tests the test classes and test packages.

Responsibilities include:

-         create the test components required to support testability requirements as defined by the designer

 

5.2     Staffing and Training Needs

This section outlines how to approach staffing and training the test roles for the project.

[The way to approach staffing and training will vary from project to project. If this section is part of a Test Strategy, you should indicate at what points in the project lifecycle different skills and numbers of staff are needed. In the Iteration Test Plans, you should focus mainly on where and what training might occur during the Iteration.

Give thought to your training needs, and schedule training based on a Just-In-Time (JIT) approach; there is often a temptation to attend training too far in advance of its use, when the test team has apparent slack. Doing this introduces the risk of the training being forgotten by the time it's needed.

Look for opportunities to combine the purchase of productivity tools with training on those tools, and arrange with the vendor to delay delivery of the training until just before you need it. If you have enough headcount, consider having training delivered in a customized manner for you, possibly at your own site.

The test team often requires the support and skills of other team members who are not directly part of the test team. Make sure your strategy arranges for appropriate availability of the support staff required to enable the test effort: System Administrators, Database Administrators, and Developers.]

6.     Risks, Dependencies, Assumptions, and Constraints

[List any risks that may affect the successful execution of this Test Strategy, and identify mitigation and contingency strategies for each risk. Also indicate a relative ranking for both the likelihood of occurrence and the impact if the risk is realized.]

Risk

Mitigation Strategy

Contingency (Risk is realized)


[List any dependencies identified during the development of this Test Strategy that may affect its successful execution if those dependencies are not honored. Typically these dependencies relate to activities on the critical path that are prerequisites or post-requisites to one or more other activities. You should consider responsibilities you are relying on teams or staff members external to the test effort to complete, the timing and dependencies of other planned tasks, and the reliance on certain work products being produced.]

Dependency between

Potential Impact of Dependency

Owners


[List any assumptions made during the development of this Test Strategy that may affect its successful execution if those assumptions are proven incorrect. Assumptions might relate to work you assume other teams are doing, expectations that certain aspects of the product or environment are stable, and so forth.]

Assumption to be proven

Impact of Assumption being incorrect

Owners


[List any constraints placed on the test effort that have had a negative effect on the way in which this Test Strategy has been approached.]

Constraint on

Impact Constraint has on test effort

Owners


7.     Management Process and Procedures

[Outline what processes and procedures are to be used when issues arise with the Test Strategy and its enactment.]

7.1     Problem Reporting, Escalation, and Issue Resolution

[Define how process problems will be reported and escalated, and the process to be followed to achieve resolution.]

7.2     Traceability Strategies

[Consider appropriate traceability strategies for:

-         Coverage of Testing against Specifications: enables measurement of the extent of testing

-         Motivations for Testing: enables assessment of the relevance of tests, to help determine whether to maintain or retire them

-         Software Design Elements: enables tracking of subsequent design changes that would necessitate rerunning or retiring tests

-         Resulting Change Requests: enables the tests that discovered the need for a change to be identified and re-run to verify that the change request has been completed successfully

A minimal illustrative sketch of such traceability links follows this list.]
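
[For illustration only, the following sketch shows a minimal way to record traceability links from individual tests to specifications, motivators, design elements, and change requests, and to query those links for impact analysis. The structure (a Python dictionary) and all identifiers (TC-*, REQ-*, CR-*) are hypothetical examples, not a required format or tool.]

```python
# Illustrative traceability records linking tests to the items that motivate them.
traceability = {
    "TC-LOGIN-001": {
        "requirements": ["REQ-AUTH-01"],          # coverage against specifications
        "motivators": ["quality risk: security"],
        "design_elements": ["LoginController"],   # rerun if this element changes
        "change_requests": ["CR-0042"],           # re-run to verify the change
    },
    "TC-REPORT-003": {
        "requirements": ["REQ-RPT-07"],
        "motivators": ["use case: Generate Monthly Report"],
        "design_elements": ["ReportGenerator"],
        "change_requests": [],
    },
}

# Example impact query: which tests should be re-run when REQ-AUTH-01 changes?
impacted = [test_id for test_id, links in traceability.items()
            if "REQ-AUTH-01" in links["requirements"]]
print(impacted)  # ['TC-LOGIN-001']
```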

7.3     Approval and Signoff

[Outline the approval process and list the job titles (and names of current incumbents) that initially must approve the strategy, and sign off on the satisfactory execution of the strategy.]