Purpose

The Test discipline acts as a service provider to the other disciplines in many respects. Testing focuses primarily on evaluating or assessing product quality, which is realized through these core practices:

  • Find and document defects that affect software quality.
  • Advise on the perceived software quality.
  • Prove the validity of the assumptions made in the design and requirement specifications through concrete demonstration.
  • Validate that the software product works as designed.
  • Validate that the requirements are implemented appropriately.
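
For example, the requirement-validation practices above can be realized as small automated tests that demonstrate the specified behavior concretely. The sketch below is illustrative only: it uses JUnit, and the requirement, the OrderCalculator class, and its totalWithTax method are hypothetical stand-ins for a project's own requirements and design elements.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test, standing in for a real design element.
    class OrderCalculator {
        double totalWithTax(double subtotal) {
            if (subtotal < 0) {
                throw new IllegalArgumentException("subtotal must not be negative");
            }
            return subtotal * 1.10;  // hypothetical requirement: add 10 percent tax
        }
    }

    public class OrderCalculatorTest {
        // Concrete demonstration that the hypothetical requirement
        // "the system shall add 10 percent tax to the order subtotal"
        // is implemented appropriately.
        @Test
        public void totalIncludesTenPercentTax() {
            OrderCalculator calculator = new OrderCalculator();
            assertEquals(110.0, calculator.totalWithTax(100.0), 0.001);
        }
    }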

A notable difference exists between Test and the other disciplines in RUP: essentially, Test is tasked with finding and exposing weaknesses in the software product. To get the greatest benefit, this requires a general philosophy different from that used in the Requirements, Analysis & Design, and Implementation disciplines. The difference is somewhat subtle: those three disciplines focus on completeness, whereas Test focuses on incompleteness.

A good test effort is driven by questions such as:

  • How could this software break?
  • In what possible situations could this software fail to work predictably?
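
Tests driven by these questions deliberately probe situations in which the software could fail, rather than confirming only the expected path. As a minimal sketch, again assuming the hypothetical OrderCalculator introduced earlier, the following test exercises a boundary case (a negative subtotal) and asserts that the software rejects it explicitly instead of behaving unpredictably:

    import org.junit.Test;

    public class OrderCalculatorNegativeTest {
        // Probe a situation in which the software could fail to work
        // predictably: a negative subtotal must be rejected explicitly,
        // not silently turned into a negative total.
        @Test(expected = IllegalArgumentException.class)
        public void rejectsNegativeSubtotal() {
            new OrderCalculator().totalWithTax(-50.0);
        }
    }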

Test challenges the assumptions, risks, and uncertainty inherent in the work of other disciplines, and addresses those concerns using concrete demonstration and impartial evaluation. You want to avoid two potential extremes:

  • an approach that does not suitably or effectively challenge the software and expose its inherent problems or weaknesses
  • an approach that is inappropriately negative or destructive - with such an approach, you may find it impossible to ever consider the software product of acceptable quality, and you risk alienating the Test effort from the other disciplines

Various surveys and essays indicate that software testing accounts for 30 to 50 percent of total software development costs. It is therefore somewhat surprising that most people believe computer software is not well tested before it is delivered. This contradiction is rooted in a few key issues:

  • Testing software is very difficult. How do you quantify the different ways in which a given program can behave?
  • Testing is typically done without a clear methodology, so results vary from project to project and from organization to organization. Success is primarily a function of the quality and skills of the individuals involved.
  • Productivity tools are used insufficiently, which makes the laborious aspects of testing unmanageable. Beyond the lack of automated test execution, many test efforts are conducted without tools that let you effectively manage extensive Test Data and Test Results.

Flexibility of use and complexity of software make complete testing an impossible goal. However, using a well-conceived methodology and state-of-the-art tools can improve both the productivity and the effectiveness of software testing.
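
As an illustration of the kind of tool support meant here, a data-driven (parameterized) test lets a single automated test exercise many rows of Test Data, with a separate Test Result reported for each row. The sketch below uses JUnit's Parameterized runner and the same hypothetical OrderCalculator; in a real test effort the data table would typically be drawn from a managed Test Data store rather than being hard-coded.

    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;
    import static org.junit.Assert.assertEquals;

    // One automated test, many rows of Test Data; the runner reports a
    // separate result for each row.
    @RunWith(Parameterized.class)
    public class TaxTableTest {

        @Parameters
        public static Collection<Object[]> data() {
            // In practice, this table would come from a managed Test Data source.
            return Arrays.asList(new Object[][] {
                {   0.0,   0.0 },
                { 100.0, 110.0 },
                { 250.0, 275.0 },
            });
        }

        private final double subtotal;
        private final double expectedTotal;

        public TaxTableTest(double subtotal, double expectedTotal) {
            this.subtotal = subtotal;
            this.expectedTotal = expectedTotal;
        }

        @Test
        public void totalMatchesTaxTable() {
            assertEquals(expectedTotal, new OrderCalculator().totalWithTax(subtotal), 0.001);
        }
    }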

High-quality software is essential to the success of safety-critical systems - such as air-traffic control, missile guidance, or medical delivery systems - where a failure can harm people. The criticality of a typical MIS system may not be as immediately obvious, but a defect could easily cost the business using the software considerable expense in lost revenue and possibly legal costs. In this information age, with increasing demand for electronically delivered services over the Internet, many MIS systems are now considered mission-critical; that is, when failures occur, companies cannot fulfill their functions and experience massive losses.

A continuous approach to quality, initiated early in the software lifecycle, can significantly lower the cost of completing and maintaining your software, and greatly reduces the risk associated with deploying poor-quality software.

Relation to Other Disciplines

The Test discipline is related to other disciplines, as follows:

  • The Requirements discipline captures requirements for the software product, which is one of the primary inputs for identifying what tests to perform.
  • The Analysis & Design discipline determines the appropriate design for the software product, which is another important input for identifying what tests to perform.
  • The Implementation discipline produces builds of the software product that are validated by the Test discipline. Within an iteration, multiple builds will be tested - typically one per test cycle.
  • The Deployment discipline delivers the completed software product to the end-user. While the software is validated by the Test discipline before this occurs, beta testing and acceptance testing are often conducted as part of Deployment.
  • The Environment discipline develops and maintains supporting artifacts that are used during Test, such as the Test Guidelines and Test Environment.
  • The Project Management discipline plans the project and the work needed in each iteration, which is described in an Iteration Plan. The Iteration Plan is an important input when you define the appropriate evaluation mission for the test effort.
  • The Configuration & Change Management discipline controls change within the project team. The test effort verifies that each change has been completed appropriately.

Further Reading

We recommend reading Kaner, Bach & Pettichord's Lessons Learned in Software Testing [KAN01], which contains an excellent collection of important concerns for test teams.


