<Project Name>

Iteration Test Plan

 

Version <1.0>

 

 

[Note: The following template is provided for use with the Rational Unified Process (RUP), and is designed for use in conjunction with the detailed guidance provided within RUP. As with most of the templates provided with RUP, this template should be customized to suit the context of the specific project it will be used on.]

[Note: Text such as this you are currently reading, enclosed in square brackets and displayed in blue italics (style=InfoBlue), is included to provide guidance to the author and should be deleted before publishing the document. A paragraph entered following this style will automatically be set to normal (style=Body Text).]

 


Revision History

Date          Version   Description   Author
<dd/mmm/yy>   <x.x>     <details>     <name>


Table of Contents

1. Introduction

1.1 Purpose

1.2 Scope

1.3 Intended Audience

1.4 Document Terminology and Acronyms

1.5 References

1.6 Document Structure

2. Evaluation Mission and Test Motivation

2.1 Evaluation Mission

2.2 Test Motivators

3. Target Test Items

4. Outline of Planned Tests

4.1 Outline of Test Inclusions

4.2 Outline of Other Candidates for Potential Inclusion

4.3 Outline of Test Exclusions

5. Test Approach

5.1 Measuring the Extent of Testing

5.2 Identifying and Justifying Tests

5.3 Conducting Tests

6. Entry and Exit Criteria

6.1 Iteration Test Plan

6.1.1 Iteration Test Plan Entry Criteria

6.1.2 Iteration Test Plan Exit Criteria

6.1.3 Suspension and Resumption Criteria

6.2 Test Cycles

6.2.1 Test Cycle Entry Criteria

6.2.2 Test Cycle Exit Criteria

6.2.3 Test Cycle Abnormal Termination

7. Deliverables

7.1 Test Evaluation Summaries

7.2 Reporting on Test Coverage

7.3 Perceived Quality Reports

7.4 Incident Logs and Change Requests

7.5 Smoke Test Suite and Supporting Test Scripts

8. Testing Workflow

9. Environmental Needs

9.1 Base System Hardware

9.2 Base Software Elements in the Test Environment

9.3 Productivity and Support Tools

9.4 Test Environment Configurations

10. Responsibilities, Staffing, and Training Needs

10.1 People and Roles

10.2 Staffing and Training Needs

11. Key Iteration Milestones

12. Iteration Plan Risks, Dependencies, Assumptions, and Constraints

13. Management Process and Procedures

13.1 Approval and Signoff


Iteration Test Plan

1.     Introduction

1.1     Purpose

The purpose of the Iteration Test Plan for the <complete lifecycle, specific-phase> of the <Project Name> is to:

-         Provide a central artifact to govern the planning and control of the test effort. It defines the general approach that will be employed to test the software and to evaluate the results of that testing, and is the top-level plan that will be used by managers to govern and direct the detailed testing work.

-         Provide visibility to stakeholders in the testing effort that adequate consideration has been given to various aspects of governing the testing effort, and where appropriate to have those stakeholders approve the plan.

This Iteration Test Plan also supports the following specific objectives:

- [Identifies the items that should be targeted by the tests.

- Identifies the motivation for and ideas behind the test areas to be covered.

- Outlines the testing approach that will be used.

- Identifies the required resources and provides an estimate of the test effort.

- Lists the deliverable elements of the test project.]

1.2     Scope

[Defines the types of testing (such as Functionality, Usability, Reliability, Performance, and Supportability) and, if necessary, the levels of testing (for example, Integration or System) that will be addressed by this Iteration Test Plan. It is also important to provide a general indication of significant elements that will be excluded from scope, especially where the intended audience might otherwise reasonably assume the inclusion of those elements.

Note: Be careful to avoid repeating detail here that you will define in sections 3, Target Test Items, and 4, Outline of Planned Tests.]

1.3     Intended Audience

[Provide a brief description of the audience for whom you are writing the Iteration Test Plan. This helps readers of your document identify whether it is a document intended for their use, and helps prevent the document from being used inappropriately.

Note: The document style and content usually alter in relation to the intended audience.

This section should only be about three to five paragraphs in length.]

1.4     Document Terminology and Acronyms

[This subsection provides the definitions of any terms, acronyms, and abbreviations required to properly interpret the Iteration Test Plan. Avoid listing items that are generally applicable to the project as a whole and that are already defined in the project's Glossary. Include a reference to the project's Glossary in the References section.]

1.5     References

[This subsection provides a list of the documents referenced elsewhere within the Iteration Test Plan. Identify each document by title, version (or report number if applicable), date, and publishing organization or original author. Avoid listing documents that are influential but not directly referenced. Specify the sources from which the "official versions" of the references can be obtained, such as intranet UNC names or document reference codes. This information may be provided by reference to an appendix or to another document.]

1.6     Document Structure

[This subsection outlines what the rest of the Iteration Test Plan contains and gives an introduction to how the rest of the document is organized. This section may be eliminated if a Table of Contents is used.]

2.     Evaluation Mission and Test Motivation

[Provide an overview of the mission and motivation for the testing that will be conducted in this iteration.]

2.1     Evaluation Mission

[Provide a brief statement that defines the mission(s) for the test and evaluation effort over the scope of the plan. The governing mission statement(s) might incorporate one or more concerns including:

-         find as many bugs as possible

-         find important problems, assess perceived quality risks

-         advise about perceived project risks

-         certify to a standard

-         verify a specification (requirements, design or claims)

-         advise about product quality, satisfy stakeholders

-         advise about testing

-         fulfill process mandates

-         and so forth

Each mission provides a different context to the test effort and changes the way in which testing should be approached.]

2.2     Test Motivators

[Provide an outline of the key items that will motivate the testing effort in this iteration. Testing will be motivated by many things: quality risks, technical risks, project risks, use cases, functional requirements, non-functional requirements, design elements, suspected failures or types of faults (fault models), change requests, and so forth. List the specific items from each applicable category that will motivate the testing in this iteration and on which reporting will focus.]

3.     Target Test Items

The listing below identifies the test items (software, hardware, and supporting product elements) that are targets for testing. This list represents what will be tested.

[Provide a list of the major target test items. This list should include both items produced directly by the project development team, and items that those products rely on; for example, basic processor hardware, peripheral devices, operating systems, third-party products or components, and so forth. Consider grouping the list by category and assigning relative importance to each specific test motivator.]

4.     Outline of Planned Tests

[This section provides an outline of the testing that will be performed for the Iteration. This outline represents the intersection between targets and the test types or quality risks. As such, it can often be represented in a tabular or spreadsheet format.

The outline in this section represents an overview of both the tests that will be performed and those that will specifically be excluded.]

4.1     Outline of Test Inclusions

[Provide an outline of the major testing planned for the current iteration. Note what will be included in the plan and record what will explicitly not be included in the section titled Outline of Test Exclusions.]

4.2     Outline of Other Candidates for Potential Inclusion

[Separately outline test areas you suspect might be useful to investigate and evaluate, but that have not been sufficiently researched to know if they are important to pursue.]

4.3     Outline of Test Exclusions

[Provide an outline of the potential tests that might otherwise have been conducted but that have been explicitly excluded from this plan. If a type of test will not be implemented and executed, indicate this in a sentence stating the test will not be implemented or executed and stating the justification, such as:

-         "These tests do not help achieve the evaluation mission."

-         "There are insufficient resources to conduct these tests."

-         "These tests are unnecessary due to the testing conducted by xxxx."

As a heuristic, if you think it would be reasonable for one of your audience members to expect a certain aspect of testing to be included that you will not or cannot address, you should note its exclusion; if the team agrees the exclusion is obvious, you probably don't need to list it.]

5.     Test Approach

[The Test Approach presents an overview of the recommended strategy for analyzing, designing, implementing, and executing the required tests. Sections 3, Target Test Items, and 4, Outline of Planned Tests, identified what items will be tested and what types of tests would be performed. This section describes how the tests will be realized.

As you identify each aspect of the approach, you should update Section 10, Responsibilities, Staffing, and Training Needs, to document the test environment configuration and other resources that will be needed to implement each aspect.

In some cases the strategy you use will be common across the life of the project. As such, it can be documented in one or more separate Test Strategy artifacts or in a Master Test Plan and reused across multiple Iterations. Where that is done, this section can simply reference the artifacts that contain the strategy to be used, either under this main section heading or under subheadings as appropriate.]

5.1     Measuring the Extent of Testing

[Describe what strategy you will use for measuring the progress of the testing effort. When deciding on a measurement strategy, it is important to consider the following advice from Cem Kaner (2000): "Bug count metrics reflect only a small part of the work and progress of the testing group. Many alternatives look more closely at what has to be done and what has been done. These will often be more useful and less prone to side effects than bug count metrics."

A good measurement strategy will report on multiple dimensions. Consider the following dimensions, and select a subset that is appropriate for your project context: coverage (against the product and/or against the plan), effort, results, obstacles, risks (in product quality and/or testing quality), and historical trend (across iterations and/or across projects).]
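[Note: The following sketch is illustrative only and not part of the template. It shows, in Python, one way a test team might tally a few of these dimensions from hypothetical test-cycle records; the record fields and the sample numbers are assumptions invented for the example.]

    # Illustrative sketch: compute several progress dimensions from
    # hypothetical test-cycle records. All field names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class CycleRecord:
        planned: int       # tests planned for the cycle
        executed: int      # tests actually executed
        passed: int        # tests that passed
        defects_open: int  # incidents still open at cycle end

    def progress_report(cycles: list[CycleRecord]) -> dict[str, float]:
        planned = sum(c.planned for c in cycles)
        executed = sum(c.executed for c in cycles)
        passed = sum(c.passed for c in cycles)
        return {
            "coverage_vs_plan": executed / planned if planned else 0.0,
            "pass_rate": passed / executed if executed else 0.0,
            "open_defects": float(sum(c.defects_open for c in cycles)),
        }

    print(progress_report([CycleRecord(50, 45, 40, 3), CycleRecord(30, 30, 28, 1)]))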

5.2     Identifying and Justifying Tests

[Describe how specific tests will be identified and considered for inclusion in the scope of the test effort covered by this strategy. This provides insight to the stakeholders in this plan, because the plan itself doesn't usually list all of the detailed tests: those are provided in other test artifacts.

Provide a listing of the resources that will be used to stimulate and drive the identification and selection of the specific tests to be conducted, such as Initial Test-Idea Catalogs, requirements documents, user documentation, and/or other reference sources. Examples of Test-Idea Catalogs can be found in the process components shipped with RUP.]

5.3     Conducting Tests

[One of the main aspects of the test approach is an explanation of how the testing will be conducted, covering the selection of quality-risk areas or test types that will be addressed and the associated techniques that will be used. If you are maintaining a separate test strategy artifact that covers this, simply list the test types or quality-risk areas that will be addressed by the plan, and refer to the test strategy artifact for the details.

If there is no separate test strategy artifact, you should provide an outline here of how testing will be conducted for each technique: how design, implementation, and execution of the tests will be done, and the criteria for knowing that the technique is both useful and successful. For each technique, provide a description of the technique and define why it is an important part of the test approach by briefly outlining how it helps achieve the Evaluation Mission(s).]
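[Note: The following sketch is illustrative only and not part of the template. It shows one widely used technique, boundary-value analysis, expressed as an executable Python unittest; the function under test and its 1..100 validity rule are assumptions invented for the example.]

    # Illustrative sketch: boundary-value analysis as an executable test.
    import unittest

    def accept_quantity(qty: int) -> bool:
        """Hypothetical rule under test: quantities 1..100 are valid."""
        return 1 <= qty <= 100

    class BoundaryValueTests(unittest.TestCase):
        def test_lower_boundary(self):
            self.assertFalse(accept_quantity(0))    # just below the boundary
            self.assertTrue(accept_quantity(1))     # on the boundary

        def test_upper_boundary(self):
            self.assertTrue(accept_quantity(100))   # on the boundary
            self.assertFalse(accept_quantity(101))  # just above the boundary

    if __name__ == "__main__":
        unittest.main()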

 

6.     Entry and Exit Criteria

6.1     Iteration Test Plan

6.1.1     Iteration Test Plan Entry Criteria

[Specify the criteria that will be used to determine whether the execution of the Iteration Test Plan can begin.]

6.1.2     Iteration Test Plan Exit Criteria

[Specify the criteria that will be used to determine whether the execution of the Iteration Test Plan is complete or that continued execution provides no further benefit.]

6.1.3     Suspension and Resumption Criteria

[Specify the criteria that will be used to determine whether testing should be prematurely suspended or ended before the plan has been completely executed, and under what criteria testing can be resumed.]

6.2     Test Cycles

6.2.1     Test Cycle Entry Criteria

[Specify the criteria to be used to determine whether the test effort for the next Test Cycle of this Iteration Test Plan can begin.]

6.2.2     Test Cycle Exit Criteria

[Specify the criteria that will be used to determine whether the test effort for the current Test Cycle of this Iteration Test Plan is deemed sufficient.]

6.2.3     Test Cycle Abnormal Termination

[Specify the criteria that will be used to determine whether testing should be prematurely suspended or ended for the current test cycle, or whether the intended build candidate to be tested must be altered.]
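[Note: The following sketch is illustrative only and not part of the template. Entry and exit criteria are often easier to apply consistently when codified as an automated check; the thresholds below are assumptions for the example, not values prescribed by RUP.]

    # Illustrative sketch: a test cycle exit-criteria check.
    # The thresholds are example assumptions, not prescribed values.
    def cycle_exit_met(pass_rate: float, open_sev1: int, open_sev2: int) -> bool:
        """Example criteria: at least 95% of executed tests passed, no open
        Severity 1 incidents, and at most 5 open Severity 2 incidents."""
        return pass_rate >= 0.95 and open_sev1 == 0 and open_sev2 <= 5

    print(cycle_exit_met(pass_rate=0.97, open_sev1=0, open_sev2=2))  # True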

7.     Deliverables

[In this section, list the various artifacts that will be created by the test effort that are useful deliverables to the various stakeholders of the test effort. Don't list all work products; only list those that give direct, tangible benefit to a stakeholder and those by which you want the success of the test effort to be measured.

Note: This section may be delegated in whole or part to the Test Strategy or Master Test Plan artifacts, in which case this section can simply note any adjustments or be deleted.]

7.1     Test Evaluation Summaries

[Provide a brief outline of both the form and content of the test evaluation summaries, and indicate how frequently they will be produced.]

7.2     Reporting on Test Coverage

[Provide a brief outline of both the form and content of the reports used to measure the extent of testing, and indicate how frequently they will be produced. Give an indication as to the method and tools used to record, measure, and report on the extent of testing.]
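[Note: The following sketch is illustrative only and not part of the template. It shows one simple way to compute requirements-based coverage from a hypothetical mapping of test cases to the requirements they exercise; all identifiers are invented for the example.]

    # Illustrative sketch: requirements-based coverage from a hypothetical
    # test-case-to-requirement traceability mapping.
    tests_to_requirements = {
        "TC-001": {"REQ-1", "REQ-2"},
        "TC-002": {"REQ-2"},
        "TC-003": {"REQ-4"},
    }
    all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

    covered = set().union(*tests_to_requirements.values())
    uncovered = all_requirements - covered

    print(f"Coverage: {len(covered)}/{len(all_requirements)} "
          f"({100 * len(covered) / len(all_requirements):.0f}%)")
    print("Not yet covered:", sorted(uncovered))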

7.3     Perceived Quality Reports

[Provide a brief outline of both the form and content of the reports used to measure the perceived quality of the product, and indicate how frequently they will be produced. Give an indication as to the method and tools used to record, measure, and report on the perceived product quality. You might include some analysis of Incidents and Change Requests relative to Test Coverage.]

7.4     Incident Logs and Change Requests

[Provide a brief outline of both the method and tools used to record, track, and manage test incidents, associated change requests, and their status.]

7.5     Smoke Test Suite and Supporting Test Scripts

[Provide a brief outline of the test assets that will be delivered to allow ongoing regression testing of subsequent product builds to help detect regressions in the product quality.]
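[Note: The following sketch is illustrative only and not part of the template. One common way to package a smoke suite is with pytest markers, so the same tests can be rerun against each new build; the test bodies here are placeholders, and the smoke marker must be registered in the project's pytest configuration.]

    # Illustrative smoke test sketch using pytest markers.
    # Run against each new build with:  pytest -m smoke
    import pytest

    @pytest.mark.smoke
    def test_application_starts():
        # Placeholder: replace with a real launch/health check for the product.
        assert True

    @pytest.mark.smoke
    def test_core_transaction_round_trip():
        # Placeholder: replace with the product's most critical end-to-end path.
        assert True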

8.     Testing Workflow

[Provide an outline of the workflow to be followed by the Test team in the development and execution of this Iteration Test Plan.

For Iteration Test Plans, we recommend simply using this section for exceptions, noting any deviations or changes from the workflow outlined in the master planning artifacts.

Note: Where process and detailed planning information is recorded centrally and separately from this Iteration Test Plan, you will have to manage the issues that will arise from having duplicate copies of the same information. To avoid team members referencing out-of-date information, we suggest that in this situation you place the minimum amount of process and planning information within the Iteration Test Plan to make ongoing maintenance easier and simply reference the "Master" source material.]

9.     Environmental Needs

[This section presents the non-human resources required for the Iteration Test Plan.

Note: This section may be delegated in whole or part to the Test Strategy artifact, in which case this section can simply note any adjustments or be deleted.]

9.1     Base System Hardware

The following table sets forth the system resources for the test effort presented in this Iteration Test Plan.

[The specific elements of the test system may not be fully understood in early iterations, so expect this section to be completed over time. We recommend that the test system simulate the production environment, scaling down concurrent access, database size, and so forth, if and where appropriate.]

[Note: Add or delete items as appropriate.]

System Resources

Resource       Quantity   Name and Type
<resource>     <n>        <name and type>

9.2     Base Software Elements in the Test Environment

The following base software elements are required in the test environment for this Iteration Test Plan.

[Note: Add or delete items as appropriate.]

Software Element Name   Version   Type and Other Notes
<software element>      <x.x>     <type, notes>

 

9.3     Productivity and Support Tools

The following tools will be employed to support the test process for this Iteration Test Plan.

[Note: Add or delete items as appropriate.]

Tool Category or Type   Tool Brand Name   Vendor or In-house   Version
<category or type>      <brand name>      <vendor>             <x.x>

 

9.4     Test Environment Configurations

The following test environment configurations need to be provided and supported for this project.

Configuration Name   Description     Implemented in Physical Configuration
<name>               <description>   <physical configuration>

 
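[Note: The following sketch is illustrative only and not part of the template. Recording the planned configurations as data allows a simple scripted check that every named configuration has been assigned a physical implementation; the configuration names and values are assumptions for the example.]

    # Illustrative sketch: the configurations table as checkable data.
    configurations = {
        "Minimal client": {"description": "Lowest supported client spec",
                           "physical": "Lab PC group A"},
        "Typical server": {"description": "Production-like server, scaled down",
                           "physical": "Lab rack 2"},
    }

    for name, cfg in configurations.items():
        assert cfg.get("physical"), f"{name}: no physical configuration assigned"
    print(f"{len(configurations)} configurations verified")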

10.     Responsibilities, Staffing, and Training Needs

[This section presents the resources required to address the test effort outlined in this Iteration Test Plan, the main responsibilities, and the knowledge or skill sets required of those resources.

Note: This section may be delegated in whole or part to the Test Strategy artifact, in which case this section can simply note any adjustments or be deleted.]

10.1     People and Roles

The following shows the staffing assumptions for the test effort.

Human Resources

[For each role, record the minimum resources recommended (the number of full-time roles allocated) in place of <n>, together with any specific responsibilities or comments.]

Test Manager (full-time roles allocated: <n>)

Provides management oversight. Responsibilities include:

-         planning and logistics

-         agree on the mission

-         identify motivators

-         acquire appropriate resources

-         present management reporting

-         advocate the interests of test

-         evaluate effectiveness of test effort

Test Analyst (full-time roles allocated: <n>)

Identifies and defines the specific tests to be conducted. Responsibilities include:

-         identify test ideas

-         define test details

-         determine test results

-         document change requests

-         evaluate product quality

Test Designer (full-time roles allocated: <n>)

Defines the technical approach to the implementation of the test effort. Responsibilities include:

-         define test approach

-         define test automation architecture

-         verify test techniques

-         define testability elements

-         structure test implementation

Tester (full-time roles allocated: <n>)

Implements and executes the tests. Responsibilities include:

-         implement tests and test suites

-         execute test suites

-         log results

-         analyze and recover from test failures

-         document incidents

Test System Administrator (full-time roles allocated: <n>)

Ensures the test environment and assets are managed and maintained. Responsibilities include:

-         administer test management system

-         install and support access to, and recovery of, test environment configurations and test labs

Database Administrator / Database Manager (full-time roles allocated: <n>)

Ensures the test data (database) environment and assets are managed and maintained. Responsibilities include:

-         support the administration of test data and test beds (database)

Designer (full-time roles allocated: <n>)

Identifies and defines the operations, attributes, and associations of the test classes. Responsibilities include:

-         define the test classes required to support testability requirements as defined by the test team

Implementer (full-time roles allocated: <n>)

Implements and unit tests the test classes and test packages. Responsibilities include:

-         create the test components required to support testability requirements as defined by the designer

10.2     Staffing and Training Needs

This section outlines how to approach staffing and training the test roles for the project.

[The way to approach staffing and training will vary from project to project. If this section is part of a Master Test Plan, you should indicate at what points in the project lifecycle different skills and numbers of staff are needed. If this is an Iteration Test Plan, you should focus mainly on where and what training might occur during the iteration.

Give thought to your training needs, and plan to schedule the training based on a Just-In-Time (JIT) approach; there is often a temptation to attend training too far in advance of its use, when the test team has apparent slack. Doing this introduces the risk of the training being forgotten by the time it's needed.

Look for opportunities to combine the purchase of productivity tools with training on those tools, and arrange with the vendor to delay delivery of the training until just before you need it. If you have enough headcount, consider having training delivered in a customized manner for you, possibly at your own site.

The test team often requires the support and skills of other team members not directly part of the test team. Make sure you arrange in your plan for appropriate availability of System Administrators, Database Administrators, and Developers who are required to enable the test effort.]

11.     Key Iteration Milestones

[Identify the key schedule milestones that set the context for the Testing effort. Avoid repeating too much detail that is documented elsewhere in plans that address the entire project.]

 

Milestone                                        Planned Start Date   Actual Start Date   Planned End Date   Actual End Date

Iteration starts

Iteration Test Plan agreed

Test Approach Verified

First Build delivered to test

First Build BVT passed and accepted into test

First Build test cycle finishes

[Build Two will not be tested]

Third Build delivered to test

Third Build BVT passed and accepted into test

Third Build test cycle finishes

Fourth Build delivered to test

Fourth Build BVT passed and accepted into test

Iteration Assessment review

Iteration ends

12.     Iteration Plan Risks, Dependencies, Assumptions, and Constraints

[List any risks that may affect the successful execution of this Iteration Test Plan, and identify mitigation and contingency strategies for each risk. Also indicate a relative ranking for both the likelihood of occurrence and the impact if the risk is realized.]

Risk       Mitigation Strategy       Contingency (risk is realized)
<risk>     <mitigation strategy>     <contingency>

 

[List any dependencies identified during the development of this Iteration Test Plan that may affect its successful execution if those dependencies are not honored. Typically these dependencies relate to activities on the critical path that are prerequisites or post-requisites to one or more preceding (or subsequent) activities. You should consider responsibilities you are relying on other teams or staff members external to the test effort to complete, the timing and dependencies of other planned tasks, and the reliance on certain work products being produced.]

Dependency Between   Potential Impact of Dependency   Owners
<dependency>         <potential impact>               <owners>

 

 

[List any assumptions made during the development of this Iteration Test Plan that may affect its successful execution if those assumptions are proven incorrect. Assumptions might relate to work you assume other teams are doing, expectations that certain aspects of the product or environment are stable, and so forth.]

Assumption to Be Proven   Impact of Assumption Being Incorrect   Owners
<assumption>              <impact>                               <owners>

 

[List any constraints placed on the test effort that have had a negative effect on the way in which this Iteration Test Plan has been approached.]

Constraint On   Impact Constraint Has on Test Effort   Owners
<constraint>    <impact>                               <owners>

 

13.     Management Process and Procedures

[Outline any refinements to the processes and procedures that were defined in the Master Test Plan to be used when issues arise with the test effort. If there is no Master Test Plan or general development plan that covers these procedures, define what you need here in the Iteration Test Plan.]

13.1     Approval and Signoff

[Outline the approval process and list the job titles (and names of current incumbents) that initially must approve the plan and sign off on the plan's satisfactory execution.]