Purpose
  • To implement one or more tests that enable the validation of the individual software components through physical execution
  • To develop tests that can be executed in conjunction with other tests as part of a larger test infrastructure
Role:  Implementer 
Frequency: Typically once for each corresponding activity that develops implementation elements
Steps
  • Refine the Scope and Identify the Tests
  • Select Appropriate Implementation Technique
  • Implement the Test
  • Establish External Data Sets
  • Verify the Test Implementation
  • Maintain Traceability Relationships

Workflow Details:   

Refine the Scope and Identify the Tests

Purpose:  To identify the Component under Test and define a set of tests that are of most benefit in the current iteration

In a formal environment, the components and the tests that need to be developed are specified in the Test Design artifact, making this step optional. On other occasions the developer tests are driven by Change Requests, bug fixes, implementation decisions that need to be validated, or subsystem testing with only the Design Model as input. For each of these cases:

  • define the goal: subsystem/component interface validation, implementation validation, or reproducing a defect
  • define the scope: a subsystem, a component, or a group of components
  • define the test type and details: black-box or white-box; pre-conditions, post-conditions, and invariants; input/output and execution conditions; observation/control points; clean-up actions
  • decide the life span of the test; for example, a test built specifically for fixing a defect might be a throw-away, but one that exercises the external interfaces will have the same lifecycle as the component under test (the sketch after this list shows how these decisions can be recorded in the test skeleton)
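
As a minimal illustration, these decisions can be recorded directly in the test's skeleton. The sketch below is hypothetical (the OrderValidator component and its record format are invented for the example) and uses JUnit only as one possible harness:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Hypothetical component under test, inlined so the sketch is self-contained.
    class OrderValidator {
        boolean validate(String record) {
            // A well-formed record has exactly three ';'-separated fields.
            return record != null && record.split(";").length == 3;
        }
    }

    // Goal: implementation validation.  Scope: a single component.
    // Type: black-box, against the public interface only.
    // Life span: kept as long as OrderValidator itself lives.
    public class OrderValidatorTest {

        @Test
        public void acceptsWellFormedOrder() {
            // Pre-condition: a freshly constructed validator, no shared state.
            OrderValidator validator = new OrderValidator();

            // Input/output and execution conditions.
            boolean accepted = validator.validate("ORDER-0001;2;19.99");

            // Post-condition (observation point): a well-formed order passes.
            assertTrue(accepted);

            // Clean-up: none needed; the validator holds no external resources.
        }
    }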

Select Appropriate Implementation Technique

Purpose:  To determine the appropriate technique to implement the test

There are various techniques available for implementing a test, but they fall into two general categories: manual and automated testing. Most developer tests are implemented using automated techniques:

  • programmed tests, written using either the same programming techniques and environment as the component under test, or less complex languages and tools (e.g., scripting languages such as Tcl or shell scripts)
  • recorded or captured tests, built with test automation tools that capture the interactions between the component under test and the rest of the system and produce the basic tests
  • generated tests: some aspects of the test, either the procedure or the test data, can be generated automatically using more complex test automation tools
Although the most popular approach is the programmed test, in some cases (GUI-related testing, for example) the more efficient way to conduct a test is manually, following a sequence of instructions captured in a textual description.
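
As one hypothetical sketch of the first technique, a programmed test can be written in the component's own language without any test framework at all, reporting its result through the process exit code:

    // Hypothetical framework-free programmed test.
    public class CurrencyConverterCheck {

        // Component under test, inlined so the sketch is self-contained.
        static long toCents(double dollars) {
            return Math.round(dollars * 100.0);
        }

        public static void main(String[] args) {
            // The check: 19.99 dollars must convert to exactly 1999 cents.
            if (toCents(19.99) != 1999) {
                System.err.println("FAIL: toCents(19.99) = " + toCents(19.99));
                System.exit(1); // non-zero exit code signals failure to the caller
            }
            System.out.println("PASS");
        }
    }

Because such a test is an ordinary program, it slots easily into a larger test infrastructure that simply runs executables and inspects their exit codes.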

Implement the Test

Purpose:  To implement the tests identified in the definition step/activity

Implement all the elements defined in the first step. Clearly specify the test environment pre-conditions and the steps required to bring the component under test to the state in which the test(s) can be conducted. Identify the clean-up steps to be followed in order to restore the environment to its original state. Pay special attention to the implementation of the observation/control points, as these aspects might need special support that has to be implemented in the component under test.
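
A hypothetical sketch of these concerns, again using JUnit as one possible harness: environment pre-conditions go in a set-up method, clean-up in a tear-down method, and the observation point is an assertion against state the component makes visible. The LogScanner component and the log format are invented for the example.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical component under test, inlined so the sketch is self-contained.
    class LogScanner {
        static int countErrors(File file) throws Exception {
            int errors = 0;
            BufferedReader in = new BufferedReader(new FileReader(file));
            for (String line = in.readLine(); line != null; line = in.readLine()) {
                if (line.startsWith("ERROR")) errors++;
            }
            in.close();
            return errors;
        }
    }

    public class LogScannerTest {

        private File logFile;

        @Before
        public void setUp() throws Exception {
            // Environment pre-condition: a log file the component can read.
            logFile = File.createTempFile("scanner", ".log");
            FileWriter out = new FileWriter(logFile);
            out.write("INFO start\nERROR disk full\nINFO stop\n");
            out.close();
        }

        @Test
        public void countsErrorLines() throws Exception {
            // Observation point: the component exposes the error count.
            assertEquals(1, LogScanner.countErrors(logFile));
        }

        @After
        public void tearDown() {
            // Clean-up: restore the environment to its original state.
            logFile.delete();
        }
    }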

Establish External Data Sets

Purpose:  To create and maintain data, stored externally to the test, that are used by the test during execution

In most cases, decoupling the Test Data from the Test leads to a more maintainable solution. If the test's life span is very short, hardcoding the data within the test might be more efficient, but if many test execution cycles are needed with different data sets, the simplest approach is to store the data externally. Decoupling the Test Data from the Test has further advantages (see the sketch after this list):

  • more than one test can use the same data set
  • the data are easy to modify and/or multiply
  • the data can be used to control the conditional branching logic within the Test
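
The hypothetical sketch below reads input/expected pairs from an external file (parser-cases.txt, a name invented for this example) instead of hardcoding them, so the same test can be re-run against different data sets and the same file can serve several tests:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ParserDataDrivenTest {

        // Hypothetical component under test, inlined so the sketch is self-contained.
        static int parseQuantity(String record) {
            return Integer.parseInt(record.split(";")[1]);
        }

        @Test
        public void matchesExternalDataSet() throws Exception {
            // Each line of parser-cases.txt: <input record>|<expected quantity>
            BufferedReader data = new BufferedReader(new FileReader("parser-cases.txt"));
            for (String line = data.readLine(); line != null; line = data.readLine()) {
                String[] parts = line.split("\\|");
                assertEquals("input: " + parts[0],
                             Integer.parseInt(parts[1]), parseQuantity(parts[0]));
            }
            data.close();
        }
    }

A line in the (invented) data file might read: ORDER-0001;2;19.99|2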

Verify the Test Implementation

Purpose:  To verify that the Test works correctly

Test the Test. Check the environment set-up and clean-up instructions. Run the Test, observe its behavior, and fix the test's defects. If the test will be long-lived, ask a person with less inside knowledge to run it and check whether there is enough supporting information. Review it with other members of the development team and with other interested parties.
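
One practical way to test the Test is to run its checking logic against a deliberately broken variant of the component and confirm that it fails; a test that still passes against a seeded defect is not observing anything. A hypothetical sketch, with all names invented for the example:

    public class VerifyTheTest {

        // The component's contract, as the test sees it.
        interface Rounder {
            long toCents(double dollars);
        }

        // The test's own check, extracted so it can be aimed at any implementation.
        static boolean testPasses(Rounder rounder) {
            return rounder.toCents(19.99) == 1999;
        }

        public static void main(String[] args) {
            Rounder good = d -> Math.round(d * 100.0);
            Rounder bad  = d -> (long) (d * 100.0); // seeded defect: truncation

            System.out.println("against good implementation (expect true):  " + testPasses(good));
            System.out.println("against seeded defect       (expect false): " + testPasses(bad));
        }
    }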

Maintain Traceability Relationships

Purpose:  To enable impact analysis and assessment reporting to be performed on the traced item

Depending on the level of formality, you may or may not need to maintain traceability relationships. If you do, use the traceability requirements outlined in the Test Plan to update the traceability relationships as required.
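
Where the traceability relationships are kept close to the code, one lightweight convention (hypothetical; not a mechanism mandated by the process) is to tag each test with the identifier of the item it traces to, so that impact analysis can find the affected tests with a simple search:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    // Hypothetical marker annotation naming the traced item: a requirement,
    // a design element, or a Change Request identifier.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Traces {
        String value();
    }

    public class TracedTests {

        @Traces("CR-1234") // hypothetical Change Request identifier
        public void reproducesReportedDefect() {
            // ... test body ...
        }
    }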

