Concepts: Test Automation and Tools
Test automation tools are increasingly being brought to market to automate
test activities. Many automation tools exist, but it's unlikely that a single
tool can automate all test activities. Most tools focus on a specific activity
or group of activities, and some address only one aspect of an activity.
When evaluating different tools for test automation, it's important to be aware
of the type of tool you are evaluating, the limitations of the tool, and what
activities the tool addresses and automates. Test tools are often evaluated
and acquired based on the categories described below.
Test tools may be categorized by the functions they perform. Typical function
designations include:
- Data acquisition tools that acquire data to be used in the test activities.
The data may be acquired through conversion, extraction, transformation, or
capture of existing data, or generated from the use cases or supplemental
specifications.
- Static measurement tools that analyze information contained in the design
models, source code, or other fixed sources. The analysis yields information
on the logic flow, data flow, or quality metrics such as complexity,
maintainability, or lines of code (a minimal example appears after this list).
- Dynamic measurement tools that perform an analysis during the execution of
the code. The measurements cover the run-time behavior of the code, such as
memory usage, error detection, and performance.
- Simulators or drivers that perform activities that, for reasons of timing,
expense, or safety, are not otherwise available for testing purposes.
- Test management tools that assist in planning, designing,
implementing, executing, evaluating, and managing the test activities or artifacts.
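As a rough illustration of the static measurement idea (not any particular
tool), the sketch below parses Python source text and reports lines of code
plus a decision-point count as a crude stand-in for cyclomatic complexity; the
metric names and the sample function are illustrative assumptions.

```python
import ast

def static_metrics(source: str) -> dict:
    """Derive simple quality metrics from source text without executing it."""
    tree = ast.parse(source)
    # Count branching constructs as a rough proxy for cyclomatic complexity.
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp, ast.Try))
        for node in ast.walk(tree)
    )
    non_blank = [line for line in source.splitlines() if line.strip()]
    return {
        "lines_of_code": len(non_blank),
        "decision_points": decisions,
        "complexity_estimate": decisions + 1,  # McCabe-style approximation
    }

if __name__ == "__main__":
    sample = "def absolute(x):\n    if x < 0:\n        return -x\n    return x\n"
    print(static_metrics(sample))
    # e.g. {'lines_of_code': 4, 'decision_points': 1, 'complexity_estimate': 2}
```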
Test tools are also often characterized as either white-box or black-box, based
upon the manner in which they are used, or the technology and knowledge needed
to use them.
- White-box tools rely upon knowledge of the code, design models, or
other source material to implement and execute the tests.
- Black-box tools rely only upon the use cases or functional description
of the target-of-test.
Whereas white-box tools have knowledge of how the target-of-test processes
the request, black-box tools rely upon the input and output conditions to
evaluate the test.
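To make the distinction concrete, here is a minimal sketch using Python's
unittest module; the apply_discount function and its internal rate cache are
hypothetical stand-ins for a target-of-test, not part of the source material.

```python
import unittest

_rate_cache = {}  # hypothetical internal state of the target-of-test

def apply_discount(price, code):
    """Hypothetical target-of-test: applies a 10% discount for code 'SAVE10'."""
    if code not in _rate_cache:
        _rate_cache[code] = 0.10 if code == "SAVE10" else 0.0
    return price * (1 - _rate_cache[code])

class BlackBoxTest(unittest.TestCase):
    """Relies only on the functional description: given inputs, check outputs."""
    def test_discount_applied(self):
        self.assertAlmostEqual(apply_discount(100.0, "SAVE10"), 90.0)

class WhiteBoxTest(unittest.TestCase):
    """Relies on knowledge of how the target-of-test processes the request."""
    def test_rate_is_cached_internally(self):
        apply_discount(100.0, "SAVE10")
        self.assertIn("SAVE10", _rate_cache)  # asserts on internal state, not output

if __name__ == "__main__":
    unittest.main()
```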
In addition to the broad classifications of tools previously presented, tools
may also be classified by specialization.
- Record and Playback tools combine data acquisition with dynamic measurement.
Test data is acquired during the recording of events (known as test implementation).
Later, during test execution, the data is used to play back the test script,
which evaluates the execution of the target-of-test.
- Quality metrics tools are static measurement tools that perform a
static analysis of the design models or source code to establish a set of
parameters that describe the target-of-test's quality. The parameters may
indicate reliability, complexity, maintainability, or other measures of quality.
- Coverage monitoring tools indicate the completeness of testing by identifying
how much of the target-of-test was covered, in some dimension, during testing.
Typical classes of coverage include use cases (requirements-based), logic branch
or node (code-based), data state, and function points (a minimal line-coverage
sketch appears after this list).
- Test case generators automate the generation of test data. Test case
generators use either a formal specification of the target-of-test's data
inputs, or the design models and source code, to produce test data that
exercises the nominal inputs, error inputs, and limit and boundary cases (see
the boundary-value sketch after this list).
- Comparator tools compare test results with reference results and identify
differences. Comparators differ in their specificity to particular data formats.
For example, comparators may be pixel-based to compare bitmap images or
object-based to compare object properties or data (an object-based example
appears after this list).
- Data extractors provide inputs for test cases from existing
sources, including databases, data streams in a communication system, reports,
or design models and source code.
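As a minimal sketch of code-based coverage monitoring (not any particular
tool), the function below uses Python's sys.settrace hook to record which
lines of a function under test actually execute; the function names are
illustrative assumptions.

```python
import sys

def run_with_line_coverage(target, *args, **kwargs):
    """Execute `target` while recording which of its source lines run."""
    executed = set()

    def tracer(frame, event, arg):
        # Record 'line' events only for the code object of the function under test.
        if event == "line" and frame.f_code is target.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        target(*args, **kwargs)
    finally:
        sys.settrace(None)
    return executed

def absolute(x):
    if x < 0:
        return -x
    return x

# Executing only the negative branch leaves the final `return x` line uncovered.
print(run_with_line_coverage(absolute, -3))
```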
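For test case generators, a minimal sketch of boundary-value generation from a
formal description of an integer input range follows; the parameter names and
the specific cases chosen are assumptions for illustration.

```python
def boundary_cases(lo: int, hi: int) -> dict:
    """Generate nominal, limit, and error inputs for a value constrained to [lo, hi]."""
    return {
        "nominal": (lo + hi) // 2,   # a representative valid input
        "lower_limit": lo,           # on-boundary cases
        "upper_limit": hi,
        "below_lower": lo - 1,       # error inputs just outside the valid range
        "above_upper": hi + 1,
    }

# Example: test data for an "age" field specified as 0..130.
print(boundary_cases(0, 130))
```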
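Finally, as a sketch of an object-based comparator (as opposed to a
pixel-based one), the helper below compares selected properties of an actual
result against a reference result and reports the differences; the Invoice
type and its field names are hypothetical.

```python
from dataclasses import dataclass

def compare_objects(actual, expected, fields):
    """Return {field: (actual_value, expected_value)} for every field that differs."""
    return {
        f: (getattr(actual, f), getattr(expected, f))
        for f in fields
        if getattr(actual, f) != getattr(expected, f)
    }

@dataclass
class Invoice:
    total: float
    currency: str
    line_count: int

reference = Invoice(total=120.0, currency="EUR", line_count=3)
observed = Invoice(total=118.0, currency="EUR", line_count=3)

# Only differing properties are reported: {'total': (118.0, 120.0)}
print(compare_objects(observed, reference, ["total", "currency", "line_count"]))
```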