Know and understand the need to test the system before implementation

7. The Systems Life Cycle – Testing Before Implementation

Objective

Explain why a system must be tested before it is implemented, describe the required test design and documentation, and evaluate when testing is sufficient to hand the system over for deployment (AO3 – Analyse & evaluate).

1. Why Test a System?

  • Detect and correct errors before the system is used in a live environment.
  • Confirm that the system meets the original requirements and specifications.
  • Reduce the cost, time and disruption caused by post‑implementation faults.
  • Increase user confidence and acceptance of the new system.
  • Meet legal, safety, security and data‑protection standards (e‑Safety).

2. Testing – What the Cambridge IGCSE 0417 Syllabus Requires

The syllabus lists four specific test types. The table links each type to the exact wording used in the syllabus.

| Syllabus wording | Test type (as used in the notes) | Purpose (linked to the syllabus) |
|---|---|---|
| Unit (Component) Testing | Unit Testing | Test individual modules or functions in isolation to show they work correctly. |
| Integration Testing | Integration Testing | Test that combined modules interact correctly and data passes between them. |
| System Testing | System Testing | Evaluate the complete, integrated system against functional and non‑functional requirements. |
| User Acceptance Testing (UAT) | User Acceptance Testing | End‑users verify that the system performs as expected in real‑world scenarios. |

3. Test Design & Documentation (Syllabus wording: “test‑design and documentation”)

  • Test Design artefact – a detailed Test‑Case Specification that lists each test case, test data, expected result and pass/fail criteria.
  • Test Documentation artefact – a Test Report summarising execution results, defects logged, remedial actions taken and the final sign‑off decision.

4. Test Strategies (What will be tested and how)

  • Unit testing – test each module in isolation (unit and integration testing are sketched after this list).
  • Integration testing – test each function that links modules.
  • System testing – test the whole system against the specification.
  • User Acceptance testing – test the system with real users and real tasks.
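
The first two strategies can be illustrated with a short, self‑contained sketch in Python. The `calculate_discount` and `format_receipt` functions are invented for this example only; they stand in for two modules of a real system.

```python
import unittest

# Hypothetical modules under test (invented for this sketch).
def calculate_discount(total):
    """Give a 10% discount on orders of 100 or more, otherwise no discount."""
    return round(total * 0.10, 2) if total >= 100 else 0.0

def format_receipt(total):
    """Second module: uses calculate_discount and formats the final receipt line."""
    discount = calculate_discount(total)
    return f"Total: {total - discount:.2f} (discount {discount:.2f})"

class UnitTests(unittest.TestCase):
    # Unit testing: each module is checked in isolation.
    def test_discount_applied_at_threshold(self):
        self.assertEqual(calculate_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)

class IntegrationTests(unittest.TestCase):
    # Integration testing: the two modules working together.
    def test_receipt_uses_discount(self):
        self.assertEqual(format_receipt(200), "Total: 180.00 (discount 20.00)")

if __name__ == "__main__":
    unittest.main()
```

Running the file executes both the unit tests (each module alone) and the integration test (the modules combined).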

5. Test Data – Normal, Abnormal, Extreme & Data Protection

When designing test cases, students must prepare three categories of data and consider e‑Safety.

  • Normal data – valid inputs the system will handle on a day‑to‑day basis.
  • Abnormal data – invalid inputs that the system should reject with a clear error message (e.g., text in a numeric field, an empty mandatory field, an out‑of‑range value).
  • Extreme data – valid inputs at the boundaries of what the system accepts (e.g., maximum field length, the largest or smallest permitted number) – see the sketch after this list.
  • Data protection & e‑Safety – ensure any live data used in testing is anonymised, stored securely and accessed only by authorised testers.
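
A minimal sketch of how the three data categories translate into concrete test values, assuming a made‑up validation rule that a mark must be a whole number from 0 to 100:

```python
def is_valid_mark(value):
    """Made-up validation rule: a mark must be a whole number from 0 to 100."""
    return isinstance(value, int) and 0 <= value <= 100

# Normal data: everyday valid values the system should accept.
assert is_valid_mark(67)

# Extreme data: valid values at the boundaries of the accepted range.
assert is_valid_mark(0) and is_valid_mark(100)

# Abnormal data: invalid values the system should reject.
assert not is_valid_mark(-5)        # out of range (too small)
assert not is_valid_mark(101)       # out of range (too large)
assert not is_valid_mark("sixty")   # wrong data type
```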

6. Test Approaches

  • Black‑Box Testing – based only on input and expected output; internal code is unknown (see the sketch after this list).
  • White‑Box (Structural) Testing – tests are designed using knowledge of internal logic and code paths.
  • Grey‑Box Testing – a combination of black‑box and white‑box techniques.
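
The difference between the first two approaches can be shown with a small example. The `classify_grade` function and its grade boundaries are invented for illustration; the black‑box cases come only from the stated specification, while the white‑box cases are chosen after reading the code so that every branch and boundary is exercised.

```python
def classify_grade(mark):
    """Invented function under test: convert a mark into a grade band."""
    if mark >= 80:
        return "Distinction"
    elif mark >= 50:
        return "Pass"
    else:
        return "Fail"

# Black-box tests: chosen purely from the specification (input -> expected output),
# with no knowledge of how classify_grade is written internally.
black_box_cases = [(95, "Distinction"), (65, "Pass"), (20, "Fail")]

# White-box tests: chosen by reading the code so that every branch and both
# boundaries (>= 80 and >= 50) are executed at least once.
white_box_cases = [(80, "Distinction"), (79, "Pass"), (50, "Pass"), (49, "Fail")]

for mark, expected in black_box_cases + white_box_cases:
    actual = classify_grade(mark)
    outcome = "PASS" if actual == expected else "FAIL"
    print(f"mark={mark:3d}  expected={expected:<11}  actual={actual:<11}  {outcome}")
```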

7. Test Planning & Documentation

A systematic test plan must be produced before any test is executed. The plan should contain the following (a brief skeleton is sketched after the list):

  • Test objectives and scope.
  • Resources required (people, hardware, software).
  • Test schedule aligned with the project timeline.
  • Test strategy (unit → integration → system → acceptance).
  • Test data categories (normal, abnormal, extreme) and data‑protection measures.
  • Test‑case specification (see sample table below).
  • Pass/fail criteria and severity levels for defects.
  • Procedures for logging defects, assigning remedial actions and tracking resolution.
  • Criteria for sign‑off before implementation (hand‑over).
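
The plan itself is a written document, but its contents can also be summarised in a simple machine‑readable skeleton. The sketch below is illustrative only – every field name, date, role and the 5 % tolerance value are placeholders rather than syllabus requirements.

```python
# Illustrative test-plan skeleton; every value is a placeholder.
test_plan = {
    "objectives": "Verify the system meets the agreed specification before hand-over",
    "scope": ["login", "data entry", "reports"],
    "resources": {"testers": 2,
                  "hardware": "standard workstation",
                  "software": "release candidate 1.0"},
    "schedule": {"start": "2025-05-01", "end": "2025-05-14"},
    "strategy": ["unit", "integration", "system", "acceptance"],
    "data_categories": ["normal", "abnormal", "extreme"],
    "data_protection": "anonymised copy of live data, access restricted to named testers",
    "pass_fail_tolerance": 0.05,   # at most 5% of cases may fail at sign-off
    "sign_off_roles": ["test manager", "project sponsor", "key end-users"],
}

print("Planned strategy:", " -> ".join(test_plan["strategy"]))
```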

Sample Test‑Case Specification

| Test Case ID | Description | Test Data (type) | Expected Result | Actual Result | Status (Pass/Fail) | Remarks / Remedial Action |
|---|---|---|---|---|---|---|
| TC01 | Login with valid credentials | Normal – user1 / Pass@123 | System grants access to dashboard | | | |
| TC02 | Login with invalid password | Abnormal – user1 / wrong | System displays “Invalid login” error | | | |
| TC03 | Enter 255‑character comment | Extreme – max‑length string | System accepts comment and stores it correctly | | | |

Defect‑Logging (Remedial Action) Table

| Defect ID | Test Case ID | Description | Severity | Assigned To | Status | Resolution / Comments |
|---|---|---|---|---|---|---|
| D001 | TC02 | Error message misspelt (“Invlid login”) | Low | Developer A | Open | |
| D002 | TC03 | System crashes when 255‑character comment is saved | High | Developer B | Open | |

8. Test Execution

  1. Set up the test environment (see Section 9).
  2. Execute test cases in the order defined by the test strategy (steps 2–6 are sketched after this list).
  3. Record the actual result for each case.
  4. Mark the case as Pass if the actual result matches the expected result; otherwise mark as Fail and log a defect.
  5. Prioritise defects by severity and assign remedial actions.
  6. Retest any failed cases after defects have been corrected.
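
Steps 2–6 can be sketched as a short program. The `run_system` function below is a made‑up stand‑in for the real system under test, and its canned responses are chosen so that TC02 and TC03 fail in the same way as the defect‑logging table in Section 7; nothing here is a real defect tracker.

```python
# Made-up stand-in for the system under test (canned responses only).
def run_system(case_id):
    responses = {"TC01": "dashboard", "TC02": "Invlid login", "TC03": "crash"}
    return responses[case_id]

test_cases = [
    {"id": "TC01", "expected": "dashboard",     "severity_if_failed": "High"},
    {"id": "TC02", "expected": "Invalid login", "severity_if_failed": "Low"},
    {"id": "TC03", "expected": "stored",        "severity_if_failed": "High"},
]

defect_log = []

# Steps 2-5: execute each case, record the actual result, mark Pass/Fail,
# and log a defect (with a severity) for every failure.
for case in test_cases:
    case["actual"] = run_system(case["id"])
    case["status"] = "Pass" if case["actual"] == case["expected"] else "Fail"
    if case["status"] == "Fail":
        defect_log.append({
            "defect_id": f"D{len(defect_log) + 1:03d}",
            "test_case": case["id"],
            "description": f"expected '{case['expected']}', got '{case['actual']}'",
            "severity": case["severity_if_failed"],
            "status": "Open",
        })

# Step 6: failed cases would be retested once their defects are marked resolved.
for defect in defect_log:
    print(defect["defect_id"], defect["test_case"], defect["severity"], defect["status"])
```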

9. Evaluating Test Success (AO3 – Analyse & Evaluate)

Testing is considered complete when all of the following criteria are satisfied:

  • All test cases have been executed and results recorded.
  • All high‑severity defects are resolved and re‑tested.
  • Any remaining low‑severity defects have an agreed‑upon work‑around or are scheduled for a later release.
  • The total number of failed test cases is within the agreed tolerance (e.g., < 5 % of total cases) – see the sketch after this list.
  • Stakeholders (including end‑users) have signed off the User Acceptance Test report.
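
A minimal sketch of how the measurable criteria above could be checked, assuming results are recorded in the same shape as the execution sketch in Section 8; the 5 % tolerance and the field names are illustrative, and the work‑around criterion is left as a human judgement.

```python
def ready_for_handover(test_cases, defect_log, uat_signed_off, tolerance=0.05):
    """Apply the measurable sign-off criteria to recorded results (illustrative only)."""
    all_executed = all("status" in case for case in test_cases)
    high_defect_open = any(d["severity"] == "High" and d["status"] != "Resolved"
                           for d in defect_log)
    failed = sum(1 for case in test_cases if case.get("status") == "Fail")
    within_tolerance = failed / len(test_cases) < tolerance
    return all_executed and not high_defect_open and within_tolerance and uat_signed_off

# Example: 2 failures out of 100 executed cases, the only high-severity defect
# has been resolved, and the UAT report is signed off -> ready for hand-over.
cases = [{"status": "Fail"}] * 2 + [{"status": "Pass"}] * 98
defects = [{"severity": "Low", "status": "Open"},
           {"severity": "High", "status": "Resolved"}]
print(ready_for_handover(cases, defects, uat_signed_off=True))  # True (2% < 5%)
```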

Reflective question (AO3): “What could be improved in the test strategy and why?” – students should suggest at least one improvement (e.g., adding security‑focused test cases) and justify its benefit.

10. Test Environment

The test environment should mirror the production environment as closely as possible to minimise environment‑related failures after implementation; a short comparison sketch follows the list below.

  • Hardware specifications (CPU, RAM, storage).
  • Operating system and middleware versions.
  • Network configuration and security settings.
  • Data volume and structure (including a suitably anonymised subset of live data where permitted).
  • Security controls to protect test data (encryption, access logs, disposal procedures).
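
A short sketch of how the “mirror production” requirement might be checked before testing starts, by comparing a hypothetical test‑environment specification with the production specification and flagging any mismatch; all of the values are invented.

```python
# Invented environment specifications, for illustration only.
production = {"os": "Windows Server 2022", "ram_gb": 32,
              "database": "MySQL 8.0", "tls_version": "1.3"}
test_env   = {"os": "Windows Server 2022", "ram_gb": 16,
              "database": "MySQL 8.0", "tls_version": "1.3"}

# Flag every setting where the test environment does not mirror production.
for setting, prod_value in production.items():
    test_value = test_env.get(setting, "missing")
    if test_value != prod_value:
        print(f"Mismatch: {setting} is {test_value} in test but {prod_value} in production")
```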

11. Relationship to Other Life‑Cycle Phases

Testing sits between the Design and Implementation phases. Results from testing feed back into earlier stages:

  • Defects that reveal missing or ambiguous requirements may trigger a brief re‑analysis or redesign before final implementation.
  • Performance or usability issues identified during system testing can lead to minor design refinements.

12. Implementation – Hand‑over Criteria

Only after the testing success criteria are met should the project move to the implementation (deployment) phase.

  • Hand‑over checklist – sign‑off from the test manager, project sponsor and key end‑users.
  • Implementation methods (as required by the syllabus):

    • Direct change‑over – switch off the old system and switch on the new one at a single point in time. Suitable when the old system can be safely retired.
    • Parallel running – run the old and new systems side‑by‑side for a period. Used when the risk of failure is high and data integrity must be verified.
    • Pilot running – introduce the new system to a limited user group or department first. Allows real‑world testing before full roll‑out.
    • Phased implementation – roll the new system out in stages (e.g., module‑by‑module). Useful for large, complex systems.

13. Documentation – Technical & User

Beyond the test artefacts, the syllabus expects students to be able to produce the following documents:

  • System Specification (functional & non‑functional requirements).
  • Technical Manual (installation, configuration, maintenance procedures).
  • User Guide / User Manual (how end‑users operate the system).
  • Test Plan (already covered).
  • Test Report (already covered).

Simple User‑Guide Template (example: “How to run a test case”)

  1. Open the Test‑Case Specification document.
  2. Locate the required Test Case ID (e.g., TC03).
  3. Prepare the test data as described (type, format, source).
  4. Enter the data into the system exactly as shown.
  5. Observe the system’s response and compare it with the Expected Result column.
  6. Record the Actual Result in the table.
  7. If the result matches, mark the status as “Pass”. If not, mark “Fail” and note the discrepancy in the Remarks column.
  8. Submit any failures to the defect‑logging system.

14. Benefits of Thorough Testing

  • Lower maintenance and support costs.
  • Faster user adoption and reduced training time.
  • Improved system reliability, security and performance.
  • Compliance with quality standards (e.g., ISO 9001) and data‑protection legislation.

15. Risks of Skipping or Inadequate Testing

  • Critical bugs reaching users – possible data loss or security breaches.
  • Extended downtime while post‑implementation fixes are applied.
  • Loss of stakeholder confidence and potential financial penalties.
  • Re‑work that may exceed the original project budget.

16. Position of Testing in the Systems Life Cycle

Testing is a controlled checkpoint after the design stage and before deployment. A typical flow is:

  1. Analysis →
  2. Design →
  3. Testing (unit → integration → system → acceptance) →
  4. Implementation (deployment) →
  5. Maintenance.

Suggested diagram: Systems Life Cycle flowchart with the “Testing” stage highlighted between “Design” and “Implementation”.

17. Summary Checklist for Testing Before Implementation

  1. Develop a comprehensive test plan (objectives, strategy, data categories, schedule, resources).
  2. Create a detailed test‑case specification covering normal, abnormal, extreme data and data‑protection measures.
  3. Set up a test environment that replicates production, including security controls.
  4. Execute unit, integration, system and acceptance tests, recording actual results.
  5. Log defects, assign remedial actions, and retest until all high‑severity defects are fixed.
  6. Evaluate test success against the AO3 criteria and obtain stakeholder sign‑off.
  7. Produce the final test report and hand the system over using the agreed implementation method.

Key Takeaway

Testing is not an optional add‑on; it is an essential, syllabus‑mandated phase that records, analyses and evaluates system performance against requirements, ensures data security and e‑Safety, and provides the evidence needed for a safe hand‑over to implementation.