After a system has been designed, built and implemented, it must be tested to ensure that it meets the requirements and works reliably. Effective testing is achieved through well‑structured test designs, clear test strategies and a comprehensive test plan.
Test Designs
Test design is the process of creating test cases that are likely to uncover defects. The most common design approaches are listed below, with a short worked example after the list:
Black‑box testing – focuses on inputs and expected outputs without considering internal code.
White‑box (structural) testing – examines the internal logic, paths and conditions of the code.
Boundary value analysis – tests values at the edges of input ranges.
Equivalence partitioning – groups inputs into classes that are expected to behave similarly.
Decision table testing – uses a table of conditions and actions to generate test cases.
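For example, boundary value analysis and equivalence partitioning can both be applied to an input that must lie in the range 0–100 (as in test case TC03 below). The following is a minimal Python sketch; the `validate_score` function is a hypothetical component used only for illustration.

```python
# Minimal sketch: deriving test values for an input that must be an
# integer in the range 0-100. The validate_score function is hypothetical.

def validate_score(value: int) -> bool:
    """Accept only integers from 0 to 100 inclusive."""
    return isinstance(value, int) and 0 <= value <= 100

# Boundary value analysis: values at and just beyond each edge of the range.
boundary_cases = {
    -1: False,   # just below the lower boundary
    0: True,     # lower boundary
    1: True,     # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # upper boundary
    101: False,  # just above the upper boundary
}

# Equivalence partitioning: one representative value per class of inputs.
equivalence_cases = {
    -50: False,  # class: below the valid range
    50: True,    # class: within the valid range
    150: False,  # class: above the valid range
}

for value, expected in {**boundary_cases, **equivalence_cases}.items():
    actual = validate_score(value)
    verdict = "Pass" if actual == expected else "Fail"
    print(f"input={value:>4}  expected={expected}  actual={actual}  {verdict}")
```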
Test Strategies
A test strategy defines the overall approach and the levels of testing that will be performed (a brief illustration follows the list):
Unit testing – tests individual components or modules in isolation.
Integration testing – checks the interaction between combined modules.
System testing – validates the complete, integrated system against requirements.
Acceptance testing – performed by end‑users to confirm the system is ready for deployment.
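A minimal sketch of how unit and integration testing differ in practice, assuming two hypothetical components: a `hash_password` function tested in isolation, and an `authenticate` function that combines it with a user store.

```python
import hashlib
import unittest

# Hypothetical components under test, used only for illustration.
def hash_password(password: str) -> str:
    """Hash a password with SHA-256."""
    return hashlib.sha256(password.encode()).hexdigest()

USERS = {"admin": hash_password("1234")}

def authenticate(username: str, password: str) -> bool:
    """Combine hash_password with the user store."""
    stored = USERS.get(username)
    return stored is not None and stored == hash_password(password)

class UnitTests(unittest.TestCase):
    def test_hash_password_is_deterministic(self):
        # Unit test: one component exercised in isolation.
        self.assertEqual(hash_password("1234"), hash_password("1234"))

class IntegrationTests(unittest.TestCase):
    def test_login_combines_hashing_and_user_store(self):
        # Integration test: hashing and the user store working together.
        self.assertTrue(authenticate("admin", "1234"))
        self.assertFalse(authenticate("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()
```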
Test Plan
The test plan records all the information needed to carry out testing and to evaluate the results. The essential elements are listed below (a short sketch of a single entry follows):
Test data – the specific inputs used for each test case.
Expected outcome – the result that should be produced, denoted as $E$.
Actual outcome – the result observed during testing, denoted as $A$.
Remedial action – the steps taken if $A \neq E$ (e.g., bug fixing, retesting).
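A minimal sketch of how a single test-plan entry could be recorded and its result derived by comparing $A$ with $E$; the field names are illustrative only and mirror the table below.

```python
from dataclasses import dataclass

# Minimal sketch of one test-plan entry; field names are illustrative.
@dataclass
class TestPlanEntry:
    test_case_id: str
    test_data: str
    expected_outcome: str  # E
    actual_outcome: str    # A
    remedial_action: str = "None"

    @property
    def result(self) -> str:
        # Pass when the actual outcome A matches the expected outcome E.
        return "Pass" if self.actual_outcome == self.expected_outcome else "Fail"

entry = TestPlanEntry(
    test_case_id="TC02",
    test_data="Username: admin, Password: wrong",
    expected_outcome='Error message "Invalid password"',
    actual_outcome="No error displayed - system logs in",
    remedial_action="Investigate authentication routine, fix, retest",
)
print(entry.test_case_id, entry.result)  # TC02 Fail
```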
Sample Test Plan Table
| Test Case ID | Test Data | Expected Outcome ($E$) | Actual Outcome ($A$) | Result | Remedial Action |
| --- | --- | --- | --- | --- | --- |
| TC01 | Username: admin, Password: 1234 | Login successful, dashboard displayed | Login successful, dashboard displayed | Pass | None |
| TC02 | Username: admin, Password: wrong | Error message “Invalid password” | No error displayed – system logs in | Fail | Investigate authentication routine, fix, retest |
| TC03 | Input value: 0 (boundary of range 0–100) | Accepted, value stored as 0 | Accepted, value stored as 0 | Pass | None |
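The manual table above can also be expressed as automated checks. The sketch below runs the two login rows (TC01 and TC02) against a hypothetical `login` stub; because the stub behaves correctly, both cases print Pass here, whereas the table records a Fail for TC02 on the real system.

```python
# Hypothetical login routine used only to show how the table's login
# rows (TC01 and TC02) could be run automatically.
VALID_CREDENTIALS = {"admin": "1234"}

def login(username: str, password: str) -> str:
    if VALID_CREDENTIALS.get(username) == password:
        return "Login successful, dashboard displayed"
    return 'Error message "Invalid password"'

# (test case ID, username, password, expected outcome E)
rows = [
    ("TC01", "admin", "1234", "Login successful, dashboard displayed"),
    ("TC02", "admin", "wrong", 'Error message "Invalid password"'),
]

for case_id, username, password, expected in rows:
    actual = login(username, password)                 # actual outcome A
    result = "Pass" if actual == expected else "Fail"  # compare A with E
    print(f"{case_id}: {result}")
```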
Documenting Test Results
For each test case, record the following (a short logging sketch follows the list):
Test case identifier.
Date and tester’s name.
Test environment (hardware, OS, software versions).
Detailed observations, including any error messages.
Conclusion – Pass or Fail.
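One simple way to keep such records is to append each result to a CSV log. The sketch below uses Python's `csv` module; the field names and sample values are illustrative only.

```python
import csv
from datetime import date

# Illustrative field names; adapt them to the organisation's own template.
fieldnames = ["test_case_id", "date", "tester", "environment",
              "observations", "conclusion"]

record = {
    "test_case_id": "TC02",
    "date": date.today().isoformat(),
    "tester": "J. Smith",                               # sample value
    "environment": "Windows 11, Chrome 126, build 1.4.2",  # sample value
    "observations": "No error displayed; system logged in with a wrong password.",
    "conclusion": "Fail",
}

with open("test_results.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    if f.tell() == 0:  # write the header row only for a new, empty file
        writer.writeheader()
    writer.writerow(record)
```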
Remedial Actions and Re‑testing
If a test fails, the typical remedial cycle is as follows (see the sketch after the list):
Log the defect with a clear description.
Assign the defect to a developer for correction.
Developer implements a fix and updates the code.
Tester re‑runs the original test case (and any related cases).
Update the test plan with the new actual outcome and result.
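A minimal sketch of a defect record moving through this cycle; the statuses and fields are illustrative and do not correspond to any particular defect-tracking tool.

```python
from dataclasses import dataclass, field

# Illustrative statuses matching the remedial cycle described above.
STATUSES = ["Logged", "Assigned", "Fixed", "Retested", "Closed"]

@dataclass
class Defect:
    defect_id: str
    description: str
    status: str = "Logged"
    history: list = field(default_factory=list)

    def advance(self, new_status: str, note: str = "") -> None:
        assert new_status in STATUSES
        self.history.append((self.status, new_status, note))
        self.status = new_status

defect = Defect("D-017", "TC02: wrong password still logs the user in")
defect.advance("Assigned", "Allocated to a developer")
defect.advance("Fixed", "Authentication routine corrected")
defect.advance("Retested", "TC02 re-run: error message now shown")
defect.advance("Closed", "Test plan updated with new actual outcome")
print(defect.status, len(defect.history))
```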
Link to the Systems Life Cycle
Testing occurs after the Implementation stage and before Maintenance. Successful testing validates that the system is ready to move into operational use.
Suggested diagram: Systems Life Cycle showing the Testing phase positioned between Implementation and Maintenance.