Activity:
Role: Test Manager
Frequency: This activity is typically conducted multiple times per iteration.
Steps:
Purpose: To gain an initial understanding of the specific objectives behind the iteration plan.
Examine the iteration plan and identify the specific items that will govern it, along with the key deliverables by which the execution of the plan will be measured. Key elements you should examine include: Risk Lists, Change Request lists, Requirements sets, Use-Case lists, UML models, and so on.
It's useful to supplement this examination by attending iteration kickoff meetings. If these aren't already planned, organize one for the test team, inviting key management and software development resources (e.g., the project manager, software architect, and development team leads).
Purpose: To gain a more detailed understanding of the scope and specific deliverables of the iteration plan.
Having examined the iteration plan, look initially for tangible and clearly defined elements that would be good candidates for assessment. Examine the details behind the work to be done, including both "new work" and Change Requests. Study the risks that will be addressed by the plan so that you clearly understand the potential impact of each risk and what must be done to address it (mitigate, transfer, eliminate, and so on).
Purpose: To outline the test motivators that are candidates for this iteration.
Using the understanding you've gained of the iteration plan, identify potential sources of things that will motivate the test effort. Motivation may come from any number of sources: an individual artifact, a set of artifacts, an event or activity, or the absence of any of these things. Sources might include the Risk List, Change Requests, the Requirements Set, Use Cases, UML models, and so on.
For each source, examine the detail for potential motivators. If you cannot find much detail about a motivation source, or you are unfamiliar with it, it may be useful to discuss the items with analyst and management staff, usually starting with the project manager or lead system analysts.
As you examine the information and discuss it with the relevant staff, enumerate a list of candidate test motivators.
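The following is a minimal sketch, not part of the RUP guidance itself, of how such a candidate list might be captured as structured data so it can be reviewed and refined in later steps. The CandidateMotivator fields and the example entries are illustrative assumptions, not prescribed artifacts.

    # Illustrative sketch: capturing candidate test motivators as structured data.
    # Field names and example entries are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class CandidateMotivator:
        name: str          # short label for the motivator
        source: str        # e.g. "Risk List", "Change Request", "Use Case"
        description: str   # why this item might motivate the test effort

    candidates = [
        CandidateMotivator(
            name="Unproven persistence framework",
            source="Risk List",
            description="Architectural risk scheduled for mitigation this iteration.",
        ),
        CandidateMotivator(
            name="CR-042: revised login workflow",
            source="Change Request",
            description="Changed behavior in a previously tested area.",
        ),
    ]

    for c in candidates:
        print(f"{c.source}: {c.name} -- {c.description}")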
Purpose: To determine what quality risks are most relevant to this iteration.
Using the list of candidate test motivators, consider each motivator in terms of its potential for quality risks. This will help you better understand the relative importance of each candidate, and may expose other candidate motivators that are missing from the list.
There are many different dimensions of quality risk, and it's possible that a single motivator may highlight the potential for risk in multiple categories. Note the potential quality risks against each candidate motivator, and indicate both the likelihood of each risk being encountered and its impact should it eventuate.
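One simple way to record likelihood and impact per candidate and derive a rough exposure score is sketched below. The 1-5 scales and the multiplication are common risk-assessment conventions, assumed here for illustration; they are not mandated by the process.

    # Illustrative sketch: recording likelihood and impact per candidate motivator
    # and computing a simple exposure score (likelihood x impact). Scales and
    # values are assumptions, not prescribed by RUP.
    candidate_risks = {
        "Unproven persistence framework": {"likelihood": 4, "impact": 5},
        "CR-042: revised login workflow": {"likelihood": 3, "impact": 3},
    }

    for motivator, risk in candidate_risks.items():
        exposure = risk["likelihood"] * risk["impact"]
        print(f"{motivator}: likelihood={risk['likelihood']}, "
              f"impact={risk['impact']}, exposure={exposure}")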
Purpose: To define the specific test motivators that will be the focus for this iteration.
Using the list of candidate motivators and their quality risk information, determine the relative importance of the motivators. Determine which motivators can be addressed in the current iteration (you may want to retain the list of remaining candidates for subsequent iterations).
Define the motivator list, documenting it as appropriate. This may be as part of the iteration test plan, in a database or spreadsheet, or as a list contained within some other artifact. It is useful to briefly describe why each motivator is important and what aspects of quality risk it will help to address.
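If the spreadsheet option is chosen, a hedged sketch of documenting the selected motivators as a simple CSV file is shown below. The file name, column names, and rows are assumptions for illustration only.

    # Illustrative sketch: writing the selected motivators to a CSV file
    # (one possible "spreadsheet" form of documentation). Names are made up.
    import csv

    selected_motivators = [
        {"motivator": "Unproven persistence framework",
         "why_important": "High architectural risk addressed this iteration",
         "quality_risk_addressed": "Reliability, performance"},
        {"motivator": "CR-042: revised login workflow",
         "why_important": "Regression risk in a core user path",
         "quality_risk_addressed": "Functionality, usability"},
    ]

    with open("iteration_test_motivators.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=selected_motivators[0].keys())
        writer.writeheader()
        writer.writerows(selected_motivators)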
Purpose: To enable impact analysis and assessment reporting to be performed on the traced items.
Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as required.
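As a sketch only, the mapping below shows one simple representation of traceability from each selected motivator back to its source item and forward to planned test artifacts. In practice these relationships would live in whatever traceability tool or format the Test Plan specifies; the identifiers here are hypothetical.

    # Illustrative sketch: a simple traceability mapping per motivator.
    # "traces_from" points back to source items, "traces_to" to downstream
    # test items; all identifiers are invented for illustration.
    traceability = {
        "Unproven persistence framework": {
            "traces_from": ["RISK-07"],
            "traces_to": ["TEST-IDEA-12", "TEST-IDEA-13"],
        },
        "CR-042: revised login workflow": {
            "traces_from": ["CR-042", "UC-Login"],
            "traces_to": ["TEST-IDEA-21"],
        },
    }

    for motivator, links in traceability.items():
        print(f"{motivator}: from {links['traces_from']} to {links['traces_to']}")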
Purpose: To verify that the activity has been completed appropriately and that the resulting artifacts are acceptable.
Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".
Have the people performing the downstream activities that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input artifact review your work on this basis.
Try to remember that RUP is an iterative process and that in many cases artifacts evolve over time. As such, it is not usually necessary (and is often counterproductive) to fully form an artifact that will only be partially used, or will not be used at all, in immediately subsequent work. This is because there is a high probability that the situation surrounding the artifact will change, and the assumptions made when the artifact was created will be proven incorrect, before the artifact is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.