Relationship to Other Plans

The Requirements Management Plan contains information which may be covered to a greater or lesser extent by other plans.

See Artifact: Requirements Management Plan, Tailoring for tailoring guidance.

Organization, Responsibility, and Interfaces

As described in the White Paper: Applying Requirements Management with Use Cases, requirements management is important to ensuring project success.  The most commonly cited causes of project failure include:

  • Lack of user input
  • Incomplete requirements
  • Changing requirements

Requirements errors are also likely to be the most common class of error, and are the most expensive to fix.

Having the right relationships with stakeholders can help with these problems.  The stakeholders are a key source of input for defining requirements and understanding their priorities.  Many stakeholders, however, lack insight into the cost and schedule impacts of requested features, so the development organization must manage stakeholder expectations.

Establishing stakeholder relationships includes defining:

  • Responsibilities of the stakeholders: Will staff be available on site for consulting? At prearranged times?
  • Visibility of stakeholders into project artifacts: Open visibility to all artifacts? Visibility only at scheduled milestones?

Identifying Traceability Items

Describe traceability items, and define how they are to be named, marked, and numbered. See Concepts: Requirement Types, and Concepts: Traceability.

The most important traceability items are listed in Activity: Develop Requirements Management Plan.

Specifying Traceability

A typical traceability, with a limited set of essential artifacts, is described in Activity: Develop Requirements Management Plan.

In addition to identifying the traceability links, you should specify the cardinality of the links. Some common constraints are:

  • Each approved product feature must be linked to one or more supplemental requirements, or one or more use cases, or both.
  • Each supplemental requirement and each use case section must be linked to one or more test cases.
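Cardinality constraints like these can be checked mechanically. The sketch below shows one possible check over a simple in-memory traceability model; the item identifiers and the link representation are illustrative assumptions, not taken from any particular requirements tool.

```python
def check_cardinality(features, requirements, links):
    """Return a list of cardinality-constraint violations.

    features     -- set of approved product feature ids
    requirements -- set of supplemental requirement / use case section ids
    links        -- dict mapping a source id to the set of ids it traces to
    """
    violations = []
    for feat in features:
        # Each approved feature must trace to one or more supplemental
        # requirements and/or use cases.
        if not links.get(feat):
            violations.append(f"{feat}: not linked to any requirement or use case")
    for req in requirements:
        # Each supplemental requirement and use case section must trace
        # to one or more test cases.
        if not links.get(req):
            violations.append(f"{req}: not linked to any test case")
    return violations

# Illustrative data: FEAT-2 and SUPP-1 each violate a constraint.
features = {"FEAT-1", "FEAT-2"}
requirements = {"UC-1", "SUPP-1"}
links = {
    "FEAT-1": {"UC-1"},
    "UC-1": {"TC-1", "TC-2"},
    "SUPP-1": set(),
}

for v in sorted(check_cardinality(features, requirements, links)):
    print(v)
```

A report of such violations is exactly the kind of incomplete-traceability check discussed under Reports and Measures below.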

A more detailed discussion of traceability is provided in the white paper Traceability Strategies for Managing Requirements With Use Cases.

Sample Attributes

The following are some example attributes which you may wish to select from, organized using the requirements types identified in Activity: Develop Requirements Management Plan.

Stakeholder Need

Source: The stakeholder originating the requirement. (This may also be implemented as a traceability link to a "Stakeholder" traceability item.)

Contribution: Indicates this need's contribution to the overall business opportunity or problem being addressed by the project. Expressed as a percentage (0 to 100%); all contributions should sum to no more than 100%. Below is an example Pareto diagram showing the contribution of each of several Stakeholder Needs.

[Figure: Pareto diagram showing the relative impact of five problems on the overall business opportunity]
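The Contribution attribute can be validated and ordered for such a Pareto view with a short script. In the sketch below, the need names and percentages are invented for illustration:

```python
# Contribution of each Stakeholder Need to the overall business problem,
# as a percentage. These values are purely illustrative.
needs = {
    "Reduce order entry errors": 40,
    "Shorten billing cycle": 25,
    "Improve report turnaround": 20,
    "Simplify user training": 10,
    "Support remote access": 5,
}

# Validate the attribute: contributions must sum to no more than 100%.
total = sum(needs.values())
assert total <= 100, f"Contributions sum to {total}%, which exceeds 100%"

# Pareto ordering: largest contribution first.
for name, pct in sorted(needs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pct:3d}%  {name}")
```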

Features, Supplementary Requirements, and Use Cases

Status: Indicates whether the requirement has been reviewed and accepted by the "official channel". Example values are Proposed, Rejected, Approved.

This may be a contractual status, or a status set by a working group capable of making binding decisions.

Benefit: The importance of the requirement from the stakeholders' viewpoint.

  • Critical (or primary). These have to do with the main tasks of the system, its basic function, the functions for which it is being developed. If they are missing, the system fails to fulfill its primary mission. They drive the architectural design and tend to be the most frequently exercised use cases.
  • Important (or secondary). These have to do with the support of the system's functions, such as statistical data compilation, report generation, supervision, and function testing. If they are missing, the system can still (for a while) fulfill its fundamental mission, but with degraded service quality. In modeling, less importance will be attached to them than to critical use cases.
  • Useful (nice to have). These are "comfort" features, not linked to the system's primary mission, but that help in its use or market positioning.

Effort: Estimated effort, in days, to implement the requirement.

For example, this could be expressed as categories such as Low (< 1 day), Medium (1-20 days), and High (> 20 days).

In defining Effort, it should be clearly indicated which overheads (management effort, test effort, requirements effort, etc.) are included in the estimate.

Size: Estimated non-comment source lines of code (SLOC), excluding any test code.

You may wish to distinguish between new and reused SLOC, in order to better compute cost estimates.

Risk: % likelihood that implementation of the requirement will encounter significant undesirable events such as schedule slippage, cost overrun, or cancellation.

For example, this could be expressed as categories such as Low (< 10%), Medium (10-50%), and High (> 50%).

Another option for Risk is separately tracking Technology Risk - % likelihood of running into serious difficulty implementing the requirement because of lack of experience in the domain and/or required technologies.  Then overall risk can be computed as a weighted sum based on other attributes, including size, effort, stability, technology risk, architectural impact, and organizational complexity.
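The weighted-sum approach mentioned above can be sketched as follows. The attribute weights and the 0-to-1 scoring of each attribute are assumptions chosen for illustration; a real project would calibrate its own.

```python
# Illustrative weights for combining other attributes into an overall risk
# score. They must sum to 1.0 so the result stays in [0, 1].
WEIGHTS = {
    "size": 0.15,
    "effort": 0.15,
    "stability": 0.25,            # unstable requirements carry more risk
    "technology_risk": 0.30,
    "architectural_impact": 0.10,
    "organizational_complexity": 0.05,
}

def overall_risk(scores):
    """scores: dict of attribute name -> normalized score in [0, 1]."""
    return sum(WEIGHTS[attr] * scores.get(attr, 0.0) for attr in WEIGHTS)

# Example scores for one requirement (invented values).
req_scores = {
    "size": 0.6,
    "effort": 0.4,
    "stability": 0.8,
    "technology_risk": 0.9,
    "architectural_impact": 0.5,
    "organizational_complexity": 0.2,
}

risk = overall_risk(req_scores)
# Map to the Low/Medium/High bands defined for the Risk attribute above.
band = "High" if risk > 0.5 else "Medium" if risk >= 0.1 else "Low"
print(f"Overall risk: {risk:.2f} ({band})")
```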

Organizational Complexity: Categorization of control over the organization developing the requirement.

  • Internal: In-house development at one site
  • Geographic: Geographically distributed team
  • External: External organization within the company.  
  • Vendor: Subcontract or purchase of externally developed software.

Architectural Impact: Indicates how this requirement will impact the software architecture.

  • None: Does not affect the existing architecture.
  • Extends: Requires extending the existing architecture.
  • Modifies: The existing architecture must be changed to accommodate the requirement.  

Stability: Likelihood that this requirement will change, or that the development team's understanding of the requirement will change. (Low = <10%, Medium = 10-50%, High = >50%)

Target Release: The intended product release in which the requirement will be met. (Release1, Release1.1, Release2, ...)

Hazard Level / Criticality: The potential to affect health or welfare, or to cause economic consequences, typically as a result of the software failing to perform as required.

  • Negligible: Cannot result in significant personnel injury or equipment damage.
  • Marginal: Can be controlled without personnel injury or major system damage.
  • Critical: Can cause personnel injury or major system damage, or will require immediate corrective action for personnel or system survival.
  • Catastrophic: Can cause serious injury or death, or complete system loss.

Hazards may also be identified as separate requirements types, and linked to associated use cases.  You may also wish to track hazard probability, corrective actions and/or preventative measures.

Interpretation: In some cases where the requirements form a formal contract, it may be difficult and costly to change the wording of the requirements.  As the development organization gains a better understanding of a requirement, it may be necessary to attach interpretation text, rather than simply change the official wording of the requirement.

Use Case

In addition to the above, it is also useful to track the following use case attribute:

%Detailed: Degree to which the Use Case has been elaborated:

  • 10%: Basic description is provided.
  • 50%: Main flows documented.
  • 80%: Completed but not reviewed. All preconditions and postconditions fully specified.
  • 100%: Reviewed and approved.

Test Case

Status: Tracks progress during test development.

  • No Activity: No work has been accomplished in developing this test case.
  • Manual: A manual script has been created and validated as capable of verifying the associated requirements.
  • Automated: An automated script has been created and validated as capable of verifying the associated requirements.

General Attributes

Some other requirement attributes which have general applicability are:

  • Planned Iteration
  • Actual Iteration
  • Responsible Party

Selecting Attributes

Attributes are used to track information associated with a traceability item, typically for status and reporting purposes.  Each organization may require tracking information specific to its own needs.  Before assigning an attribute, you should consider:

  • Who will supply this information?
  • Who will use this information, and why is it useful?
  • Is the cost of tracking this information worth the benefit?

The essential attributes to track are Risk, Benefit, Effort, Stability and Architectural Impact, in order to permit prioritizing requirements for scope management and to assign requirements to iterations. These should be tracked initially on Features, and later on all Use Cases and Supplemental Requirements.

Consider Derived Information

In addition to directly using requirements attributes, you may wish to derive information from these requirements attributes via traceability to other requirements types. Some typical patterns of derivation are:

  • Deriving Downwards - Given the traceability above, suppose a Product Feature has an attribute "Target Release". One can derive that each Use Case Section traced to by this Product Feature must also be available at or before the specified Target Release.
  • Deriving Upwards - Given the traceability above, suppose a Use Case Section has an attribute "Estimated Effort". The cost of a Product Feature can be estimated by summing the Estimated Effort for the Use Case Sections that it traces to. This must be used with caution, as several Product Features could map to the same Use Case Section.
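Both derivation patterns can be sketched over a small traceability model. The feature and use-case-section identifiers, release names, and effort values below are invented for illustration:

```python
# traces_to maps each Product Feature to the Use Case Sections it traces to.
traces_to = {
    "FEAT-1": ["UCS-1", "UCS-2"],
    "FEAT-2": ["UCS-2", "UCS-3"],   # UCS-2 is shared with FEAT-1
}
target_release = {"FEAT-1": "Release1", "FEAT-2": "Release2"}
estimated_effort = {"UCS-1": 5, "UCS-2": 12, "UCS-3": 3}   # days

# Deriving downwards: collect the Target Releases of the features tracing
# to each section; the section must be available by the earliest of them.
section_releases = {}
for feat, sections in traces_to.items():
    for s in sections:
        section_releases.setdefault(s, []).append(target_release[feat])

# Deriving upwards: estimate each feature's cost by summing the Estimated
# Effort of the sections it traces to. Caution: UCS-2 is counted under
# both features, so adding the per-feature costs would overstate the
# total project effort.
feature_cost = {
    feat: sum(estimated_effort[s] for s in sections)
    for feat, sections in traces_to.items()
}
print(feature_cost)
```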

Thus, in order to assign requirements attributes to requirements types, you should consider:

  • What derived information / reports do we wish to generate from this attribute?
  • At what level in the traceability hierarchy should we track this attribute?

Dependency of Attributes

Some attributes may only be applicable to a certain level of development. For example, an estimated effort attribute for a use case may be replaced by effort estimates on the design elements once the use case is fully represented in the design.

Reports and Measures

The following are examples of requirement-related reports and measures. By selecting the required/desired set of reports and measures for your project, you can derive the necessary attributes for the Requirements Management Plan.

  • Development Priority of Use Cases: A list of Use Cases sorted by Risk, Benefit, Effort, Stability, and Architectural Impact. This may be separately sorted lists, or a single list sorted by a weighted combination of these attributes. Used for Activity: Prioritize Use Cases.
  • Percent of Features in each Status Category: Tracks progress during definition of the project baseline. Classified by Target Release, it tracks progress on a per-release basis; weighted by Effort, it provides a more precise measure of progress.
  • Features sorted by Risk: Identifies risky features; useful for scope management and for assigning features to iterations. Classified by Target Release, with Risk summed for each Target Release, this is useful for assessing whether risky features have been scheduled early or late in the project.
  • Use Case Sections sorted by Stability: Used for deciding which use case sections need to be stabilized. Weighted or sorted by Architectural Impact, this is useful for prioritizing architecturally significant and/or high-effort use cases to be stabilized first.
  • Requirements with Undefined Attributes: When requirements are first proposed, you may not immediately assign all the attributes (e.g. by using a default "Undefined" value). The Checkpoints: Software Requirements Specification uses this report to check for such undefined attributes.
  • Traceability Items with Incomplete Traceability Links: A report of incorrect or incomplete traceability links, used for the Checkpoints: Software Requirements Specification.
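A development-priority report sorted by a weighted combination of attributes can be sketched as below. The attribute scales (1-3), the weights, and the use case names are illustrative assumptions:

```python
# Each tuple: (name, risk, benefit, effort, stability, architectural impact),
# each attribute scored 1 (low) to 3 (high). Values are invented.
use_cases = [
    ("Place Order",     3, 3, 2, 1, 3),
    ("Print Report",    1, 2, 1, 3, 1),
    ("Manage Accounts", 2, 3, 3, 2, 2),
]

# Illustrative weights for risk, benefit, effort, stability, arch. impact.
weights = (0.3, 0.3, 0.1, 0.1, 0.2)

def priority(uc):
    """Weighted combination of the attribute scores; higher = earlier."""
    return sum(w * v for w, v in zip(weights, uc[1:]))

# Development priority: highest combined score first.
for uc in sorted(use_cases, key=priority, reverse=True):
    print(f"{priority(uc):.1f}  {uc[0]}")
```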

Requirements Change Management

Change is inevitable, and should be planned for. Changes occur because:

  • There was a change to the problem to be solved. This may be because of new regulations, economic pressures, technology changes, etc.
  • The stakeholders changed their minds or perceptions of what they wanted the system to do. This may be due to a variety of causes, including changes in responsible staff, a deeper understanding of the issues, etc.
  • Failure to include all stakeholders, or to ask all the right questions, when defining the original requirements.

Strategies for managing changing requirements include:

  • Baseline the Requirements
  • Establish a Single Channel to Control Change
  • Maintain a Change History

Baseline the Requirements

Toward the end of the elaboration phase, the System Analyst should baseline all known requirements. This typically is performed by ensuring there is version control on the requirements artifacts, and identifying the set of artifacts and their versions that form the baseline.

The purpose of the baseline is not to freeze the requirements. Rather it is to enable new and modified requirements to be identified, communicated, estimated, and controlled.

Also see Tool Mentor: Baselining a Rational RequisitePro Project.

Establish a Single Channel to Control Change

A stakeholder's wish for a change cannot be assumed to officially change the budget and schedule. Typically a negotiation or budget reconciliation process must be initiated before a change can be approved. Often changes must be balanced against one another.

It is crucial that every change go through a single channel, the Change Control Board (CCB), to determine its impact on the system and to undergo official approval. The mechanism for proposing a change is to submit a Change Request, which is reviewed by the CCB.

For additional information, see Activity: Establish Change Control Process.

Maintain a Change History

It is beneficial to maintain an audit trail of changes to individual requirements. This change history allows you to view all prior changes to the requirement as well as changes to attribute values, and the rationale for the change. This can be useful in assessing actual stability of requirements, and identifying cases where the change control process may not be working (e.g. identifying requirements changes that were not properly reviewed and approved).
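Such an audit trail can be sketched as a simple data structure. The field names, identifiers, and workflow below are illustrative assumptions, not drawn from any particular requirements tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    """One entry in a requirement's change history."""
    requirement_id: str
    changed_on: date
    changed_by: str
    change_request: str      # id of the approved Change Request
    rationale: str
    old_text: str
    new_text: str

@dataclass
class Requirement:
    req_id: str
    text: str
    history: list = field(default_factory=list)

    def revise(self, new_text, who, cr, why, when=None):
        """Record the change before applying it, preserving the audit trail."""
        self.history.append(ChangeRecord(
            self.req_id, when or date.today(), who, cr, why,
            self.text, new_text))
        self.text = new_text

# Illustrative use: revise a requirement under an approved Change Request.
req = Requirement("SUPP-7", "Response time shall be under 5 seconds.")
req.revise("Response time shall be under 2 seconds.",
           who="analyst1", cr="CR-42", why="Stakeholder performance review")
print(len(req.history), req.history[0].old_text)
```

Keeping the prior text and rationale alongside each change is what supports the stability assessment and change-control auditing described above.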



Rational Unified Process   2003.06.13