Course Registration System
Test Plan
Version 1.0
Revision History
Date | Version | Description | Author
27/March/1999 | 1.0 | Test Plan for Release 1 and 2 | K. Stone
Table of Contents
- Objectives
- Scope
- References
- Test Requirements
- Test Strategy
- Testing Types
- Data and Database Integrity Testing
- System Testing
- Business Cycle Testing
- User Interface Testing
- Performance Testing
- Load Testing
- Stress Testing
- Volume Testing
- Security and Access Control Testing
- Failover / Recovery Testing
- Configuration Testing
- Installation Testing
- Tools
- Resources
- Workers
- System
- Project Milestones
- Deliverables
- Test Suite
- Test Logs
- Defect Reports
- Project Tasks
Test Plan
1. Objectives
This document describes the plan for testing the C-Registration System.
This Test Plan document supports the following objectives:
- Identify existing project information and the software components that
should be tested.
- List the recommended test requirements (high level).
- Recommend and describe the testing strategies to be employed.
- Identify the required resources and provide an estimate of the test
efforts.
- List the deliverable elements of the test activities.
2. Scope
This Test Plan applies to the integration and system tests that will be
conducted on the C-Registration System Releases 1 and 2. Note that a separate
Test Plan [17] exists describing the test strategy for the Architectural
Prototype.
It is assumed that unit testing has already provided thorough white box
testing through extensive coverage of source code and testing of all module
interfaces.
This Test Plan applies to testing all requirements of the C-Registration
System as defined in the Vision Document [3], Use Case Specifications [5-12],
and Supplementary Specification [13].
3. References
Applicable references are:
1. Course Billing Interface Specification, WC93332, 1985, Wylie College Press.
2. Course Catalog Database Specification, WC93422, 1985, Wylie College Press.
3. Course Registration System Vision Document, WyIT387, V1.0, 1998, Wylie College IT.
4. Course Registration System Glossary, WyIT406, V2.0, 1999, Wylie College IT.
5. Course Registration System Use Case Spec - Close Registration, WyIT403, V2.0, 1999, Wylie College IT.
6. Course Registration System Use Case Spec - Login, WyIT401, V2.0, 1999, Wylie College IT.
7. Course Registration System Use Case Spec - Maintain Professor Info, WyIT407, V2.0, 1999, Wylie College IT.
8. Course Registration System Use Case Spec - Register for Courses, WyIT402, V2.0, 1999, Wylie College IT.
9. Course Registration System Use Case Spec - Select Courses to Teach, WyIT405, V2.0, 1999, Wylie College IT.
10. Course Registration System Use Case Spec - Maintain Student Info, WyIT408, V2.0, 1999, Wylie College IT.
11. Course Registration System Use Case Spec - Submit Grades, WyIT409, V2.0, 1999, Wylie College IT.
12. Course Registration System Use Case Spec - View Report Card, WyIT410, V2.0, 1999, Wylie College IT.
13. Course Registration System Supplementary Specification, WyIT400, V1.0, 1999, Wylie College IT.
14. Course Registration System Software Development Plan, WyIT418, V2.0, 1999, Wylie College IT.
15. Course Registration System Software Architecture Document, WyIT431, V1.0, 1999, Wylie College IT.
16. Course Registration System Requirements Attributes Guidelines, WyIT404, V1.0, 1999, Wylie College IT.
17. Course Registration System Test Plan for the Architectural Prototype, WyIT432, V1.0, 1999, Wylie College IT.
4. Test Requirements
The listing below identifies those items (use cases, functional requirements,
non-functional requirements) that have been identified as targets for
testing. This list represents what will be tested. Details on
each test will be determined later as Test Cases are identified and
Test Scripts developed.
(Note: Future release of this Test Plan may use Rational RequisitePro for
linking directly to the requirements in the Vision Document, Use Case
Documents and Supplementary Specification.)
Data and Database Integrity Testing
Verify access to Course Catalog Database.
Verify simultaneous record read accesses.
Verify lockout during Course Catalog updates.
Verify correct retrieval and update of database data.
System Testing (i.e. functional testing)
Verify Login Use Case [6]
Verify Close Registration Use Case [5]
Verify Maintain Student Information Use Case [10]
Verify Maintain Professor Information Use Case [7]
Verify Submit Grades Use Case [11]
Verify View Report Card Use Case [12]
Verify Register for Courses Use Case [8]
Verify Select Courses to Teach Use Case [9]
Supplementary Specification, Section 4.1: "All system errors shall
be logged. Fatal system errors shall result in an orderly shutdown of the
system."
Supplementary Specification, Section 4.1: "The system error
messages shall include a text description of the error, the operating system
error code (if applicable), the module detecting the error condition, a date
stamp, and a time stamp. All system errors shall be retained in the Error
Log Database."
Vision Document, Section 12.2: "The system shall interface with the
existing Course Catalog Database System. C-Registration shall support the
data format as defined in [2]."
Vision Document, Section 12.2: "The system shall interface with the
existing Billing System and shall support the data format as defined in
[1]."
Vision Document, Section 12.2: "The server component of the system
shall operate on the College Campus Server and shall run under the UNIX
Operating System."
Supplementary Specification, Section 9.3: "The server component of
the system shall operate on the Wylie College UNIX Server."
Vision Document, Section 12.2: "The client component of the system
shall operate on any personal computer with a 486 Microprocessor or
better."
Supplementary Specification, Section 9.3: "The client component of
the system shall operate on any personal computer with a 486 Microprocessor
or greater."
Supplementary Specification, Section 9.1: "The system shall
integrate with existing legacy system (course catalog database) which
operates on the College DEC VAX MainFrame."
Supplementary Specification, Section 9.2: "The system shall
integrate with the existing Course Billing System which operates on the
College DEC VAX MainFrame."
Business Cycle Testing
Verify operation following download of a new course catalog.
Verify operation across multiple semesters and multiple years.
Verify correct operation when semester spans year rollover.
User Interface Testing
Verify ease of navigation through a sample set of screens.
Verify sample screens conform to GUI standards.
Vision Document, Section 10: "The System shall be easy-to-use and
shall be appropriate for the target market of computer-literate students and
professors."
Vision Document, Section 12.1: "The desktop user-interface shall be
Windows 95/98 compliant."
Supplementary Specification, Section 5.1: "The desktop
user-interface shall be Windows 95/98 compliant."
Supplementary Specification, Section 5.2: "The user interface of the
C-Registration System shall be designed for ease-of-use and shall be
appropriate for a computer-literate user community with no additional
training on the System."
Supplementary Specification, Section 5.3: "Each feature of the
C-Registration System shall have built-in online help for the user. Online
Help shall include step by step instructions on using the System. Online
Help shall include definitions for terms and acronyms."
Performance Testing
Verify response time to access external Finance system.
Verify response time to access external Course Catalog subsystem.
Verify response time for remote login.
Verify response time for remote submittal of course registration.
Vision Document, Section 12.3: "The system shall provide access to
the legacy Course Catalog Database with no more than a 10 second
latency."
Supplementary Specification, Section 7.2: "The system shall provide
access to the legacy Course Catalog Database with no more than a 10 second
latency."
Load Testing
Verify system response when loaded with 200 logged on students.
Verify system response to 50 simultaneous student accesses to the
Course Catalog.
Supplementary Specification, Section 7.1: "The system shall support
2000 simultaneous users against the central database at any given time, and
up to 500 simultaneous users against the local servers at any one
time."
Stress Testing
Verify system response during prime time use of the UNIX Server.
Verify system response during maximum student logins.
Volume Testing
Verify system response when Course Catalog Database at 90% capacity.
Security and Access Control Testing
Verify Logon from a local PC.
Verify Logon from a remote PC.
Verify Logon security through user name and password mechanisms.
Supplementary Specification, Section 4.2: "All functionality shall
be available remotely through an internet connection."
Failover / Recovery Testing
Supplementary Specification, Section 6.1: "The C-Registration System
shall be available 24 hours a day, 7 days a week. There shall be no more
than 4% down time."
Supplementary Specification, Section 6.2: "Mean Time Between
Failures shall exceed 300 hours."
Configuration Testing
Vision Document, Section 12.2: "The client component of the system
shall run on Windows 95, Windows 98, and Microsoft Windows NT."
Supplementary Specification, Section 9.4: "The web-based interface
for the C-Registration System shall run in Netscape 4.04 and Internet
Explorer 4.0 browsers."
Supplementary Specification, Section 9.5: "The web-based interface
shall be compatible with the Java 1.1 VM runtime environment."
Installation Testing
Supplementary Specification, Section 8.1: "Upgrades to the PC client
portion of the C-Registration shall be downloadable from the UNIX Server
over the internet."
Verify installation of server portion.
Verify installation of client portion.
5. Test Strategy
The Test Strategy presents the recommended approach to testing the software
applications. The previous section on Test Requirements described what will
be tested; this section describes how it will be tested.
The main considerations for the test strategy are the techniques to be used
and the criteria for knowing when testing is complete.
In addition to the considerations provided for each test below, testing
should only be executed using known, controlled databases, in secured
environments.
The following test strategy is generic in nature and is meant to apply to
the requirements listed in Section 4 of this document.
5.1 Testing Types
1. Data and Database Integrity Testing
The databases and the database processes should be tested as separate
systems. These systems should be tested without the applications (as the
interface to the data). Additional research into the DBMS needs to be
performed to identify the tools / techniques that may exist to support the
testing identified below.
Test Objective: |
Ensure Database access methods
and processes function properly and without data corruption. |
Technique: |
- Invoke each database access method and process, seeding each with
valid and invalid data (or requests for data).
- Inspect the database to ensure the data has been populated as
intended, all database events occurred properly, or review the returned
data to ensure that the correct data was retrieved (for the correct
reasons)
|
Completion Criteria: |
All database access methods and
processes function as designed and without any data corruption. |
Special Considerations: |
- Testing may require a DBMS development environment or drivers to enter
or modify data directly in the databases.
- Processes should be invoked manually.
- Small or minimally sized databases (limited number of records) should
be used to increase the visibility of any non-acceptable events.
|
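The plan does not mandate tooling for these checks. As an illustration only,
the following Python sketch shows the seed-and-inspect technique, using the
standard sqlite3 module as a stand-in for the Course Catalog DBMS; the table,
columns, and constraint are hypothetical.

    import sqlite3

    # Stand-in for the Course Catalog Database; the schema is hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE course ("
        " id INTEGER PRIMARY KEY,"
        " name TEXT NOT NULL,"
        " credits INTEGER CHECK (credits BETWEEN 1 AND 6))"
    )

    def seed(row):
        """Invoke the insert access method with one row; True on success."""
        try:
            with conn:
                conn.execute("INSERT INTO course VALUES (?, ?, ?)", row)
            return True
        except sqlite3.Error:
            return False

    # Seed with valid and invalid data.
    assert seed((101, "Algorithms", 3)) is True   # valid row accepted
    assert seed((102, None, 3)) is False          # NULL name rejected
    assert seed((103, "Databases", 99)) is False  # credits out of range rejected

    # Inspect the database: only the valid row should have been populated.
    rows = conn.execute("SELECT * FROM course ORDER BY id").fetchall()
    assert rows == [(101, "Algorithms", 3)], rows
    print("database integrity checks passed")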
2. System Testing
Testing of the application should focus on any target requirements that can be
traced directly to use cases (or business functions), and business rules.
The goals of these tests are to verify proper data acceptance, processing,
and retrieval, and the appropriate implementation of the business rules.
This type of testing is based upon black box techniques, that is,
verifying the application (and its internal processes) by interacting with
the application via the GUI and analyzing the output (results). Identified
below is an outline of the testing recommended for each application:
Test Objective: |
Ensure proper application
navigation, data entry, processing, and retrieval. |
Technique: |
- Execute each use case, use case flow, or function, using valid and
invalid data, to verify the following:
- The expected results occur when valid data is used.
- The appropriate error / warning messages are displayed when invalid
data is used.
- Each business rule is properly applied.
|
Completion Criteria: |
- All planned tests have been executed.
- All identified defects have been addressed.
|
Special Considerations: |
- Access to the Wylie College UNIX Server and the existing Course
Catalog System and Billing System is required.
|
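As an illustration of this data-driven, black box technique, the sketch below
pairs Login use case inputs with expected outcomes and collects mismatches for
the defect log. The login() call and the sample credentials are hypothetical
placeholders; in practice the steps would be driven through the GUI (for
example, with Rational Robot).

    # Each case pairs an input with the expected outcome: "ok" for the valid
    # flow, or the specific error / warning message for an invalid flow.
    LOGIN_CASES = [
        ({"user": "jsmith", "password": "secret"}, "ok"),
        ({"user": "jsmith", "password": "wrong"},  "invalid password"),
        ({"user": "",       "password": "secret"}, "user name required"),
    ]

    def login(user: str, password: str) -> str:
        """Placeholder for driving the real client; returns 'ok' or an error."""
        raise NotImplementedError("drive the C-Registration client here")

    def run_login_cases(do_login=login):
        failures = []
        for inputs, expected in LOGIN_CASES:
            actual = do_login(**inputs)
            if actual != expected:
                failures.append((inputs, expected, actual))
        return failures

    if __name__ == "__main__":
        # With a deliberately incomplete fake client, the harness reports the
        # one mismatch so it can be written up as a defect.
        fake = lambda user, password: (
            "ok" if (user, password) == ("jsmith", "secret") else "invalid password"
        )
        print(run_login_cases(fake))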
3. Business Cycle Testing
Business Cycle Testing should emulate the activities performed on the
system over time. A period should be identified, such as one year, and
transactions and activities that would occur during a year's period should
be executed. This includes all daily, weekly, and monthly cycles, as well as
events that are date sensitive, such as ticklers.
Test Objective: |
Ensure proper application and
background processes function according to required business models and
schedules. |
Technique: |
- Testing will simulate several business cycles by performing the
following:
- The tests used for application function testing will be modified /
enhanced to increase the number of times each function is executed to
simulate several different users over a specified period.
- All time or date sensitive functions will be executed using valid and
invalid dates or time periods.
- All functions that occur on a periodic schedule will be executed /
launched at the appropriate time.
- Testing will include using valid and invalid data, to verify the
following:
- The expected results occur when valid data is used.
- The appropriate error / warning messages are displayed when invalid
data is used.
- Each business rule is properly applied.
|
Completion Criteria: |
- All planned tests have been executed.
- All identified defects have been addressed.
|
Special Considerations: |
- System dates and events may require special support activities.
- A business model is required to identify appropriate test requirements
and procedures.
|
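The sketch below illustrates the date-sensitive part of this technique: the
same function is exercised at checkpoints spanning multiple semesters and the
year rollover. The semester boundaries shown are hypothetical; the real
calendar comes from the Course Catalog.

    from datetime import date

    def semester_for(d: date) -> str:
        """Hypothetical semester calendar used only to drive the cycle test."""
        if date(d.year, 1, 10) <= d <= date(d.year, 5, 20):
            return f"Spring {d.year}"
        if date(d.year, 9, 1) <= d <= date(d.year, 12, 20):
            return f"Fall {d.year}"
        return f"Break {d.year}"

    # Checkpoints spanning two semesters and the year rollover.
    checkpoints = [date(1999, 9, 15), date(1999, 12, 15),
                   date(2000, 1, 5), date(2000, 2, 1)]
    for d in checkpoints:
        print(d.isoformat(), "->", semester_for(d))
    # Expected: Fall 1999, Fall 1999, Break 2000, Spring 2000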
4. User Interface Testing
User Interface testing verifies a user's interaction with the software. The
goal of UI Testing is to ensure that the User Interface provides the user
with the appropriate access and navigation through the functions of the
applications. In addition, UI Testing ensures that the objects within the UI
function as expected and conform to corporate or industry standards.
Test Objective: |
Verify the following:
- Navigation through the application properly reflects business
functions and requirements, including window to window, field to field,
and use of access methods (tab keys, mouse movements, accelerator keys)
- Window objects and characteristics, such as menus, size, position,
state, and focus conform to standards.
|
Technique: |
- Create / modify tests for each window to verify proper navigation and
object states for each application window and objects.
|
Completion Criteria: |
Each window is successfully
verified to remain consistent with the benchmark version or within the
acceptable standard. |
Special Considerations: |
- Not all properties for custom and third party objects can be accessed.
|
5. Performance Testing
Performance testing measures response times, transaction rates, and other
time sensitive requirements. The goal of Performance testing is to verify
and validate that the performance requirements have been achieved. Performance
testing is usually executed several times, each using a different
"background load" on the system. The initial test should be
performed with a "nominal" load, similar to the normal load
experienced (or anticipated) on the target system. A second performance test
is run using a peak load.
Additionally, Performance tests can be used to profile and tune a system's
performance as a function of conditions such as workload or hardware
configurations.
NOTE: Transactions below refer to "logical business
transactions." These transactions are defined as specific functions
that an end user of the system is expected to perform using the application,
such as add or modify a given contract.
Test Objective: |
Validate System Response time
for designated transactions or business functions under the following two
conditions:
- normal anticipated volume
- anticipated worst-case volume |
Technique: |
- Use Test Scripts developed for Business Model Testing (System
Testing).
- Modify data files (to increase the number of transactions) or modify
scripts to increase the number of iterations each transaction occurs.
- Scripts should be run on one machine (best case to benchmark single
user, single transaction) and be repeated with multiple clients (virtual
or actual, see special considerations below).
|
Completion Criteria: |
- Single Transaction / single user: Successful completion of the test
scripts without any failures and within the expected / required time
allocation (per transaction)
- Multiple transactions / multiple users: Successful completion of the
test scripts without any failures and within acceptable time allocation.
|
Special Considerations: |
- Comprehensive performance testing includes having a
"background" load on the server. There are several methods
that can be used to perform this, including:
- "Drive transactions" directly to the server, usually in
the form of SQL calls.
- Create "virtual" user load to simulate many (usually
several hundred) clients. Remote Terminal Emulation tools are used to
accomplish this load. This technique can also be used to load the
network with "traffic."
- Use multiple physical clients, each running test scripts to place a
load on the system.
- Performance testing should be performed on a dedicated machine or at a
dedicated time. This permits full control and accurate measurement.
- The databases used for Performance testing should be either actual
size, or scaled equally.
|
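A minimal sketch of the single-user, single-transaction timing step follows,
assuming only the Python standard library; catalog_lookup() is a placeholder
for a real Course Catalog transaction, and the 10 second limit is the latency
requirement quoted in Section 4.

    import statistics
    import time

    LATENCY_LIMIT_S = 10.0   # Vision Document 12.3 / Supplementary Spec 7.2

    def time_transaction(run_once, repetitions=20):
        """Time one logical business transaction several times and summarize."""
        samples = []
        for _ in range(repetitions):
            start = time.perf_counter()
            run_once()            # e.g. query the Course Catalog via the client
            samples.append(time.perf_counter() - start)
        return {"mean": statistics.mean(samples), "max": max(samples)}

    def catalog_lookup():
        """Placeholder transaction; replace with a real client call."""
        time.sleep(0.05)

    if __name__ == "__main__":
        result = time_transaction(catalog_lookup)
        print(result)
        assert result["max"] <= LATENCY_LIMIT_S, "latency requirement exceeded"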
6. Load Testing
Load testing subjects the system under test to varying workloads to evaluate
the system's ability to continue to function properly under these different
workloads. The goal of load testing is to determine and ensure that the
system functions properly beyond the expected maximum workload. Additionally,
load testing evaluates the performance characteristics (response times,
transaction rates, and other time sensitive issues).
NOTE: Transactions below refer to "logical business
transactions." These transactions are defined as specific functions
that an end user of the system is expected to perform using the application,
such as add or modify a given contract.
Test Objective: |
Verify System Response time for
designated transactions or business cases under varying workload conditions. |
Technique: |
- Use tests developed for Business Cycle Testing.
- Modify data files (to increase the number of transactions) or the
tests to increase the number of times each transaction occurs.
|
Completion Criteria: |
- Multiple transactions / multiple users: Successful completion of the
tests without any failures and within acceptable time allocation.
|
Special Considerations: |
- Load testing should be performed on a dedicated machine or at a
dedicated time. This permits full control and accurate measurement.
- The databases used for load testing should be either actual size, or
scaled equally.
|
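The following sketch illustrates stepping up the number of simulated users
with a thread pool and summarizing response times at each level. The
simulated_user() body is a placeholder; real runs would log in and submit
registration transactions against the test server, and the client counts echo
the load targets listed in Section 4.

    import concurrent.futures
    import time

    def simulated_user(user_id: int) -> float:
        """One simulated student session; returns its response time in seconds."""
        start = time.perf_counter()
        time.sleep(0.01)          # stand-in for the remote transaction
        return time.perf_counter() - start

    def run_load(n_users: int) -> dict:
        with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
            times = list(pool.map(simulated_user, range(n_users)))
        return {"users": n_users, "worst": max(times), "mean": sum(times) / n_users}

    if __name__ == "__main__":
        # Step toward the planned levels (200 logged-on students, 50
        # simultaneous catalog accesses); real runs would use the load simulator.
        for n in (10, 50, 200):
            print(run_load(n))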
7. Stress Testing
Stress testing is intended to find errors due to
low resources or competition for resources. Low memory or disk space may
reveal defects in the software that aren't apparent under normal conditions.
Other defects might result from competition for shared resources such as
database locks or network bandwidth. Stress testing identifies the peak load
the system can handle.
NOTE: References to transactions below refer to logical business
transactions.
Test Objective: |
Verify that the system and
software function properly and without error under the following stress
conditions:
- little or no memory available on the server (RAM and DASD)
- maximum (actual or physically capable) number of clients connected (or
simulated)
- multiple users performing the same transactions against the same data
/ accounts
- worst case transaction volume / mix (see performance testing above).
NOTE: Stress testing's goal might also be stated as identifying and
documenting the conditions under which the system FAILS to continue
functioning properly. |
Technique: |
- Use tests developed for Performance Testing.
- To test limited resources, tests should be run on a single machine, and
RAM and DASD on the server should be reduced (or limited).
- For remaining stress tests, multiple clients should be used, either
running the same tests or complementary tests to produce the worst case
transaction volume / mix.
|
Completion Criteria: |
All planned tests are executed
and specified system limits are reached / exceeded without the software or
hardware failing (or the conditions under which system failure occurs are
outside of the specified conditions). |
Special Considerations: |
- Stressing the network may require network tools to load the network
with messages / packets.
- The DASD used for the system should temporarily be reduced to restrict
the available space for the database to grow.
- Simultaneous clients accessing the same records / data accounts must be
synchronized.
|
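As a sketch of the "increase the load until failure" aspect of stress testing,
the code below ramps the number of concurrent clients until a step fails and
reports the last passing level as the peak load. The transaction body, the
step sizes, and the response limit are placeholders.

    import concurrent.futures
    import time

    RESPONSE_LIMIT_S = 10.0

    def transaction(_) -> float:
        time.sleep(0.01)          # replace with a real client transaction
        return 0.01

    def load_step(n_clients: int) -> bool:
        """Run n_clients concurrent transactions; True if all finish in time."""
        try:
            with concurrent.futures.ThreadPoolExecutor(max_workers=n_clients) as pool:
                return all(t <= RESPONSE_LIMIT_S
                           for t in pool.map(transaction, range(n_clients)))
        except Exception:
            return False

    def find_peak_load(start=50, step=50, ceiling=500) -> int:
        """Ramp the client count until a step fails; return the last passing level."""
        peak = 0
        for n in range(start, ceiling + 1, step):
            if not load_step(n):
                break
            peak = n
        return peak

    if __name__ == "__main__":
        print("peak load (clients):", find_peak_load())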
8. Volume Testing
Volume Testing subjects the software to large amounts of data to
determine if limits are reached that cause the software to fail. Volume
testing also identifies the continuous maximum load or volume the system can
handle for a given period. For example, if the software is processing a set
of database records to generate a report, a Volume Test would use a large
test database and check that the software behaved normally and produced the
correct report.
Test Objective: |
Verify that the application /
system successfully functions under the following high volume scenarios:
- maximum (actual or physically capable) number of clients connected (or
simulated) all performing the same, worst case (performance) business
function for an extended period.
- maximum database size has been reached (actual or scaled) and multiple
queries / report transactions are executed simultaneously.
|
Technique: |
- Use tests developed for Performance Testing.
- Multiple clients should be used, either running the same tests or
complementary tests to produce the worst case transaction volume / mix
(see stress test above) for an extended period.
- Maximum database size is created (actual, scaled, or filled with
representative data) and multiple clients used to run queries / report
transactions simultaneously for extended periods.
|
Completion Criteria: |
All planned tests have been
executed and specified system limits are reached / exceeded without the
software or hardware failing. |
Special Considerations: |
- What period of time would be considered an acceptable time for high
volume conditions (as noted above)?
|
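The sketch below illustrates the second scenario: sqlite3 stands in for the
Course Catalog DBMS, a fixed row count stands in for "90% capacity", and a
report query is checked for completeness and timed.

    import sqlite3
    import time

    ROWS = 500_000   # stand-in for a near-capacity database

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE enrollment (student_id INTEGER, course_id INTEGER, grade TEXT)"
    )
    conn.executemany(
        "INSERT INTO enrollment VALUES (?, ?, ?)",
        ((i % 2000, i % 400, "A") for i in range(ROWS)),
    )

    start = time.perf_counter()
    report = conn.execute(
        "SELECT course_id, COUNT(*) FROM enrollment GROUP BY course_id"
    ).fetchall()
    elapsed = time.perf_counter() - start

    # The software should still behave normally and the report should be complete.
    assert len(report) == 400
    assert sum(count for _, count in report) == ROWS
    print(f"report over {ROWS} rows completed in {elapsed:.2f}s")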
9. Security and Access Control Testing
Security and Access Control Testing focuses on two key areas of security:
- Application security, including access to the Data or Business
Functions, and
- System Security, including logging into / remote access to the system.
Application security ensures that, based upon the desired security, users
are restricted to specific functions or are limited in the data that is
available to them. For example, everyone may be permitted to enter data and
create new accounts, but only managers can delete them. If there is security
at the data level, testing ensures that one user "type" can see all customer
information, including financial data, while a second user type sees only the
demographic data for the same client.
System security ensures that only those users granted access to the
system are capable of accessing the applications and only through the
appropriate gateways.
Test Objective: |
Function / Data Security: Verify
that user can access only those functions / data for which their user type
is provided permissions.
System Security: Verify that only those users with access to the system
and application(s) are permitted to access them. |
Technique: |
- Function / Data Security: Identify and list each user type and the
functions / data each type has permissions for.
- Create tests for each user type and verify permission by creating
transactions specific to each user type.
- Modify user type and re-run tests for same users. In each case verify
those additional functions / data are correctly available or denied.
- System Access (see special considerations below)
|
Completion Criteria: |
For each known user type, the
appropriate functions / data are available, and all transactions function as
expected and as exercised in prior Application Function tests. |
Special Considerations: |
- Access to the system must be reviewed / discussed with the appropriate
network or systems administrator. This testing may not be required, as it
may be a function of network or systems administration.
|
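The following sketch illustrates the function / data security check as a
permission matrix walked for every user type and function. The matrix contents
and the attempt() call are hypothetical; the real permissions come from the
use-case specifications, and the attempts would be made through the client.

    # Hypothetical permission matrix: user type -> functions permitted.
    PERMISSIONS = {
        "student":   {"register_for_courses", "view_report_card"},
        "professor": {"select_courses_to_teach", "submit_grades"},
        "registrar": {"maintain_student_info", "maintain_professor_info",
                      "close_registration"},
    }

    def attempt(user_type: str, function: str) -> bool:
        """Placeholder for invoking a function as a given user type."""
        return function in PERMISSIONS[user_type]   # replace with a client call

    def verify_access_matrix():
        all_functions = set().union(*PERMISSIONS.values())
        failures = []
        for user_type, allowed in PERMISSIONS.items():
            for function in sorted(all_functions):
                expected = function in allowed
                if attempt(user_type, function) != expected:
                    failures.append((user_type, function, expected))
        return failures

    if __name__ == "__main__":
        print("violations:", verify_access_matrix())   # expect an empty list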
10. Failover / Recovery Testing
Failover / Recovery testing ensures that an application or entire system can
successfully fail over and recover from a variety of hardware, software, or
network malfunctions without undue loss of data or data integrity.
Failover testing ensures that, for those systems that must be kept
running, when a failover condition occurs, the alternate or backup systems
properly "take over" for the failed system without loss of data or
transactions.
Recovery testing is an antagonistic test process in which the application
or system is exposed to extreme conditions (or simulated conditions) such as
device I/O failures or invalid database pointers / keys. Recovery processes
are invoked and the application / system is monitored and / or inspected to
verify proper application / system / and data recovery has been achieved.
Test Objective: |
Verify that recovery processes
(manual or automated) properly restore the database, applications, and
system to a desired, known, state. The following types of conditions are to
be included in the testing:
- Power interruption to the client
- Power interruption to the server
- Communication interruption via network server(s)
- Interruption, communication, or power loss to DASD and or DASD
controller(s)
- Incomplete cycles (data filter processes interrupted, data
synchronization processes interrupted).
- Invalid database pointer / keys
- Invalid / corrupted data element in database
|
Technique: |
Tests created for Application
Function and Business Cycle testing should be used to create a series of
transactions. Once the desired starting test point is reached, the following
actions should be performed (or simulated) individually:
- Power interruption to the client: power the PC down
- Power interruption to the server: simulate or initiate power down
procedures for the server
- Interruption via network servers: simulate or initiate communication
loss with the network (physically disconnect communication wires or
power down network server(s) / routers).
- Interruption, communication, or power loss to DASD and or DASD
controller(s): simulate or physically eliminate communication with one
or more DASD controllers or devices.
Once the above conditions / simulated conditions are achieved, additional
transactions should be executed, and upon reaching this second test point,
recovery procedures should be invoked.
Testing for incomplete cycles utilizes the same technique as described
above except that the database processes themselves should be aborted or
prematurely terminated.
Testing for the last two conditions (invalid database pointers / keys and
corrupted data elements) requires that a known database state be achieved.
Several database fields, pointers, and keys should be corrupted
manually and directly within the database (via database tools). Additional
transactions should be executed using the tests from Application Function
and Business Cycle Testing and full cycles executed. |
Completion Criteria: |
In all cases above, the
application, database, and system should, upon completion of recovery
procedures, return to a known, desirable state. This state includes data
corruption limited to the known corrupted fields, pointers / keys, and
reports indicating the processes or transactions that were not completed due
to interruptions. |
Special Considerations: |
- Recovery testing is highly intrusive. Procedures to disconnect cabling
(simulating power or communication loss) may not be desirable or
feasible. Alternative methods, such as diagnostic software tools may be
required.
- Resources from the Systems (or Computer Operations), Database, and
Networking groups are required.
- These tests should be run after hours or on an isolated machine(s).
|
11. Configuration Testing
Configuration testing verifies operation of the software on different
software and hardware configurations. In most production environments, the
particular hardware specifications for the client workstations, network
connections and database servers vary. Client workstations may have
different software loaded (e.g. applications, drivers, etc.) and at any one
time many different combinations may be active and using different
resources.
Test Objective: |
Validate and verify that the
client Applications function properly on the prescribed client workstations. |
Technique: |
- Use Integration and System Test scripts
- Open / close various PC applications, either as part of the test or
prior to the start of the test.
- Execute selected transactions to simulate user activities into and out
of various PC applications.
- Repeat the above process, minimizing the available conventional memory
on the client.
|
Completion Criteria: |
For each combination, transactions are
completed successfully without failure. |
Special Considerations: |
- What PC Applications are available, accessible on the clients?
- What applications are typically used?
- What data are the applications running (e.g. a large spreadsheet opened
in Excel, a 100-page document in Word)?
- The entire system, network servers, databases, etc. should also be
documented as part of this test.
|
12. Installation Testing
Installation testing has two purposes. The first is to ensure that the
software can be installed on all possible configurations, such as a new
installation, an upgrade, and a complete or custom installation, and under
normal and abnormal conditions. Abnormal conditions include insufficient disk
space, lack of privilege to create directories, etc. The second purpose is to
verify that, once installed, the software operates correctly. This usually
means running a number of the tests that were developed for Function testing.
Test Objective: |
Verify and validate that the
client software properly installs onto each client under the following
conditions:
- New installation: a new machine, never installed.
- Update: machine previously installed with the same version.
- Update: machine previously installed with an older version.
|
Technique: |
- Manually validate, or develop automated scripts to validate, the condition
of the target machine (new and never installed; same version already
installed; or older version already installed).
- Launch / perform installation.
- Using a predetermined sub-set of Integration or System test scripts,
run the transactions.
|
Completion Criteria: |
Transactions execute
successfully without failure. |
Special Considerations: |
- What transactions should be selected to comprise a confidence test
that the application has been successfully installed and no major
software components are missing?
|
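As an illustration of a post-installation confidence check, the sketch below
verifies that expected client components are present and then runs a
placeholder smoke-test hook. The file list and paths are hypothetical; the
real list comes from the client install package, and the smoke transactions
are the predetermined sub-set of Integration or System test scripts.

    from pathlib import Path

    # Hypothetical component list for the installed PC client.
    EXPECTED_FILES = [
        Path("C:/Program Files/CRegistration/cregistration.exe"),
        Path("C:/Program Files/CRegistration/help/index.html"),
    ]

    def missing_components() -> list:
        """Return any expected component that is absent after installation."""
        return [p for p in EXPECTED_FILES if not p.exists()]

    def smoke_transactions() -> bool:
        """Run the predetermined sub-set of test transactions (placeholder)."""
        return True

    if __name__ == "__main__":
        missing = missing_components()
        if missing:
            print("installation incomplete, missing:", missing)
        elif not smoke_transactions():
            print("components present but confidence transactions failed")
        else:
            print("installation confidence test passed")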
5.2 Tools
The following tools will be employed for testing the system:
Purpose | Tool | Version
Test Management | Rational RequisitePro, Rational Unified Process | TBD
Test Design | Rational Rose | TBD
Defect Tracking | Rational ClearQuest | TBD
Functional Testing | Rational Robot | TBD
Performance Testing | Rational Visual Quantify | TBD
Test Coverage Monitor or Profiler | Rational Visual PureCoverage | TBD
Other Test Tools | Rational Purify, Rational TestFactory | TBD
Project Management | Microsoft Project, Microsoft Word, Microsoft Excel | TBD
DBMS tools | TBD | TBD
6. Resources
This section presents the recommended resources for testing the C-Registration
System, their main responsibilities, and their knowledge or skill set.
6.1 Workers
This table shows the staffing assumptions for the test activities.
Human Resources
Worker |
Minimum Resources Recommended (number of workers allocated full-time) |
Specific Responsibilities/Comments |
Test Manager |
1 - Kerry Stone |
Provides management oversight
Responsibilities:
- Provide technical direction
- Acquire appropriate resources
- Management reporting
|
Test Designer |
Margaret Cox
Carol Smith
Sophie King
|
Identifies, prioritizes, and
implements test cases
Responsibilities:
- Generate test plan
- Generate Test Suite
- Evaluate effectiveness of test effort
|
System Tester |
Carol Smith
Sophie King
Adrian Harmsen
|
Executes the tests
Responsibilities:
- Execute tests
- Log results
- Recover from errors
- Document defects
|
Test System Administrator |
Simon Jones |
Ensures test environment and
assets are managed and maintained.
Responsibilities:
- Administer test management system
- Install / manage worker access to test systems
|
Database Administration / Database
Manager |
Margaret Cox |
Ensures test data (database)
environment and assets are managed and maintained.
Responsibilities:
- Administer test data (database)
|
Designer |
Margaret Cox |
Identifies and defines the
operations, attributes, and associations of the test classes
Responsibilities:
- Identifies and defines the test class(es)
- Identifies and defines the test packages
|
Implementer |
Margaret Cox
Adrian Harmsen
|
Implements and unit tests the
test classes and test packages
Responsibilities:
- Creates the test classes and packages implemented in the Test
Suite.
|
6.2 System
The following table sets forth the system resources for testing the
C-Registration System.
System Resources |
Resource |
Name / Type / Serial No. |
Wylie College Server |
Serial No: X179773562b |
Course Catalog Database |
Version Id: CCDB-080885 |
Billing System |
Version Id: BSSS-88335 |
Client Test PCs |
|
10 Remote PCs (with internet access) |
Serial No: A8339223
Serial No: B9334022
Serial No: B9332544
<7 TBD> |
6 Local PCs (connected via LAN) |
Serial No: R3322411 (Registrar's)
Serial No: A8832234 (IT Lab)
Serial No: W4592233 (IT Lab)
Serial No: X3333411 (Faculty Office)
Serial No: A987344 (Science Lab)
Serial No: X9834000 (Student Union) |
Test Repository |
|
Wylie College Server |
Serial No: X179773562b |
Test Development PCs - 6 |
Serial No: A8888222
Serial No: R3322435
Serial No: I88323423
Serial No: B0980988
Serial No: R3333223
Serial No: Y7289732 |
Load Simulator |
Serial No: ABC-123 |
7. Project Milestones
The test activities and milestones are very much dependent
upon the development iterations. The Construction Phase will be split into 3
iterations. Each iteration contains a full test cycle of test planning,
designing, development, execution, and evaluation.
Refer to the Software Development Plan [14] and the Iteration Plans for the
master schedule and Construction Phase plan that shows the development
iterations.
The following table shows the Test Milestones. Effort, start date, and end
date can be completed as the iteration content is planned.
Milestone Task |
Effort (person-days) |
Start Date |
End Date |
Iteration C1: Beta Release
Test Planning
Test Design
Test Development
Test Execution
Test Evaluation |
TBD |
March 15 |
April 12 |
Iteration C2: R1.0 Release
Test Planning
Test Design
Test Development
Test Execution
Test Evaluation |
TBD |
April 13 |
May 14 |
Iteration C3: R2.0 Release
Test Planning
Test Design
Test Development
Test Execution
Test Evaluation |
TBD |
May 15 |
June 17 |
8. Deliverables
The deliverables of the test activities as defined in this Test Plan are
outlined in the table below.
Note that some of these deliverables are produced multiple times; once for
each test cycle or iteration. Other deliverables, such as the Test Plan, are
updated each development iteration.
Deliverable |
Owner |
Review / Distribution |
Due Date |
Test Plan |
K. Stone |
Senior Project Mgmt
Team |
March 28 |
Test Environment |
S. Jones |
- |
March 28 |
Test Suite |
C. Smith and M. Cox |
Internal Peer Review |
March 28 |
Test Data Sets |
M. Cox |
Internal Peer Review |
March 31 |
Test Scripts |
M. Cox |
- |
April 2 |
Test Stubs, Drivers |
M. Cox |
- |
April 4 |
Test Defect Reports
(for each iteration)
|
C. Smith |
Senior Project Mgmt
Team |
TBD |
Test Results
(for each iteration)
|
C. Smith |
Test Manager |
TBD |
Test Evaluation Report
(for each iteration)
|
C. Smith |
Senior Project Mgmt
Team |
TBD |
8.1 Test Suite
The Test Suite will define all test cases and the test scripts associated
with each test case.
8.2 Test Logs
It is planned to use RequisitePro to identify the test cases and to track
the status of each test case. The test results will be summarized in
RequisitePro as untested, passed, conditional pass, or failed. In summary,
RequisitePro will be set up to support the following attributes for each test
case, as defined in the Requirements Attributes Guidelines [16]:
- Test status
- Build Number
- Tested By
- Date Tested
- Test Notes
It will be the responsibility of the System Tester to update the test
status in RequisitePro.
Test results will be retained under Configuration Control.
Rational ClearQuest will be used for logging and tracking individual
defects.
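For illustration, the sketch below shows one test-case log record carrying the
attributes listed above; in this plan the records live in RequisitePro rather
than in code, and the sample values are hypothetical.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TestCaseLog:
        test_case_id: str
        test_status: str      # untested, passed, conditional pass, or failed
        build_number: str
        tested_by: str
        date_tested: date
        test_notes: str = ""

    log = TestCaseLog(
        test_case_id="TC-LOGIN-001",     # hypothetical identifier
        test_status="passed",
        build_number="C1-Beta-017",      # hypothetical build number
        tested_by="C. Smith",
        date_tested=date(1999, 4, 2),
        test_notes="Verified against Use Case Spec - Login [6].",
    )
    print(log)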
9. Project Tasks
Below are the test-related tasks for the C-Registration System:
Plan Test |
Identify Requirements for Test
|
Assess Risk
|
Develop Test Strategy
|
Identify Test Resources
|
Create Schedule
|
Generate Test Plan
|
Design Test |
Workload Analysis
|
Develop Test Suite
|
Identify and Describe Test Cases
|
Identify and Structure Test Scripts
|
Review and Assess Test Coverage
|
Implement Test |
Setup Test Environment
|
Record or Program Test Scripts
|
Develop Test Stubs and Drivers
|
Identify Test-Specific functionality in the design and implementation
model
|
Establish External Data sets
|
Execute Test |
Execute Test Script
|
Evaluate Execution of Test
|
Recover from Halted Test
|
Verify the results
|
Investigate Unexpected Results
|
Log Defects
|
Evaluate Test |
Evaluate Test-Case Coverage
|
Evaluate Code Coverage
|
Analyze Defects
|
Determine if Test Completion Criteria and Success Criteria have been
achieved
|
Create Test Evaluation Report
|
|