Know and understand test designs, test strategies and test plans (test data, expected outcomes, actual outcomes, and remedial action following testing).

7. The Systems Life Cycle – Testing Phase

Purpose of testing – to verify that the system meets the specified requirements, works reliably under expected conditions and that all validation routines (data‑type checks, range checks, format checks, mandatory‑field checks, etc.) function correctly before the system is released for operational use.
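
A minimal sketch (in Python) of what such validation routines might look like; the function names, the 0–100 score range and the three-letters-plus-three-digits ID format are assumptions made only for illustration.

    import re

    def presence_check(value):
        # Mandatory-field check: the field must not be left empty
        return value.strip() != ""

    def type_check(value):
        # Data-type check: the value must be a whole number
        return value.isdigit()

    def range_check(value, low=0, high=100):
        # Range check: the number must lie between low and high inclusive
        return low <= int(value) <= high

    def format_check(value):
        # Format check: e.g. a student ID written as three capital letters then three digits
        return re.fullmatch(r"[A-Z]{3}[0-9]{3}", value) is not None

    entry = "87"   # a 'Score' field entered as text
    print(presence_check(entry) and type_check(entry) and range_check(entry))   # True
    print(format_check("ABC123"))                                               # True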

7.1 Test Designs – How test cases are created

Test designs are systematic ways of selecting inputs so that defects are most likely to be revealed. Each design below is illustrated with an example based on the validation routines covered in the IGCSE ICT syllabus.

  • Black‑box testing – focuses on inputs and expected outputs without looking at the code.

    Example: For a student‑record data‑entry form, enter a valid student name and ID and check that the record is saved; then enter a non‑numeric ID and verify that the system displays an “Invalid ID” error.

  • White‑box (structural) testing – examines the internal logic, paths and conditions of the program (e.g., using code inspection or coverage metrics).

    Example: Review the routine that calculates a student’s average grade to ensure every possible branch (division by zero, missing marks) is executed at least once.

  • Boundary‑value analysis – tests values at the extreme ends of an input range (minimum, maximum and just‑outside the range).

    Example: If the “Score” field accepts 0‑100, test with −1, 0, 1, 99, 100 and 101 to confirm correct handling of values at and just outside the lower and upper limits (a runnable sketch follows this list).

  • Equivalence partitioning – groups inputs that are expected to behave the same way; one representative from each class is tested.

    Example: For a “Year of Birth” field, create three partitions: (a) valid years (1900‑2025), (b) years too early (e.g., 1800), (c) years in the future (e.g., 2030). Test one value from each partition.

  • Decision‑table testing – creates a table of conditions and actions; each row becomes a test case. Useful for complex validation logic such as “if‑then‑else” rules.

    Example: A login routine that checks (i) username entered, (ii) password entered, (iii) account not locked. The decision table lists all combinations (e.g., username missing + password entered → error “Username required”).
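
The boundary‑value and equivalence‑partitioning examples above can be expressed as a small runnable sketch; validate_score, validate_year_of_birth and the chosen representative values are illustrative assumptions, not prescribed routines.

    def validate_score(score):
        # Range check for the hypothetical 'Score' field (0-100 inclusive)
        return 0 <= score <= 100

    # Boundary-value analysis: 0, 1, 99, 100, 101 from the example, plus -1 just below the minimum
    for value in [-1, 0, 1, 99, 100, 101]:
        print(f"Score {value}: {'accepted' if validate_score(value) else 'rejected'}")

    def validate_year_of_birth(year):
        # Range check for the 'Year of Birth' field (1900-2025 inclusive)
        return 1900 <= year <= 2025

    # Equivalence partitioning: one representative value from each partition
    partitions = {"valid year": 1985, "too early": 1800, "in the future": 2030}
    for name, year in partitions.items():
        print(f"{name} ({year}): {'accepted' if validate_year_of_birth(year) else 'rejected'}")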
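
The login decision table can be enumerated in the same way: every combination of the three conditions becomes one test case. The check_login routine and its error messages below are hypothetical, sketched only to show how the table drives the tests.

    from itertools import product

    def check_login(username_entered, password_entered, account_locked):
        # Hypothetical login routine implementing the decision-table rules
        if not username_entered:
            return "Error: Username required"
        if not password_entered:
            return "Error: Password required"
        if account_locked:
            return "Error: Account locked"
        return "Login allowed"

    # Decision-table testing: every combination of the three conditions is one test case (2 x 2 x 2 = 8 rows)
    for username, password, locked in product([True, False], repeat=3):
        print(f"username entered={username}, password entered={password}, locked={locked} -> "
              f"{check_login(username, password, locked)}")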

7.2 Test Strategies – What levels of the system are tested

A test strategy describes the overall approach and the sequence of testing activities. The table links each strategy to the relevant Cambridge assessment objective (AO).

Test Strategy       | What is tested                                                                                                | Related AO
Unit testing        | Individual modules, functions or procedures in isolation (e.g., the routine that validates a student ID)     | AO2 – apply skills to develop and test
Integration testing | Interaction between combined modules (e.g., data passed from the entry form to the database update routine)  | AO2 – apply skills to develop and test
System testing      | Complete, integrated system against the full set of specifications, including overall input‑output validation | AO3 – evaluate the solution
Acceptance testing  | Performed by end‑users or clients to confirm the system is ready for deployment and meets business needs     | AO3 – evaluate the solution
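
As an illustration of unit testing (the first row of the table), the sketch below uses Python's built‑in unittest module to test a hypothetical validate_student_id routine in isolation; the routine and its four‑digit rule are assumptions made for the example.

    import unittest

    def validate_student_id(student_id):
        # Hypothetical routine under test: a student ID must be exactly four digits
        return student_id.isdigit() and len(student_id) == 4

    class TestValidateStudentId(unittest.TestCase):
        def test_valid_id_is_accepted(self):
            self.assertTrue(validate_student_id("1234"))

        def test_non_numeric_id_is_rejected(self):
            self.assertFalse(validate_student_id("12AB"))

        def test_wrong_length_is_rejected(self):
            self.assertFalse(validate_student_id("12345"))

    if __name__ == "__main__":
        unittest.main()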

7.3 Test Plan – The master document that drives testing

The test plan records every piece of information required to carry out the tests and to evaluate the results. It is a living document that is updated after each test cycle.

Test Case ID | Test Data (Inputs)                        | Expected Outcome (E)                        | Actual Outcome (A)                     | Result (Pass/Fail) | Justification                                      | Remedial Action
TC01         | Username: admin, Password: 1234           | Login successful; dashboard displayed       | Login successful; dashboard displayed  | Pass               | System behaved as specified                        | None
TC02         | Username: admin, Password: wrong          | Error message “Invalid password” displayed  | No error displayed – system logged in  | Fail               | Authentication routine did not enforce validation  | Log defect, fix routine, retest
TC03         | Input value: 0 (boundary of range 0‑100)  | Value accepted and stored as 0              | Value accepted and stored as 0         | Pass               | Boundary‑value check succeeded                     | None

*The columns “Expected Outcome (E)” and “Actual Outcome (A)” use the syllabus terminology; results are evaluated against the AO3 criteria listed in section 7.4.*

Elements of the Test Plan

  • Test data (Inputs) – the specific values for each case (including valid, invalid and boundary values).
  • Expected outcome (E) – the result defined by the specification or validation rule.
  • Actual outcome (A) – the result observed during execution.
  • Result – “Pass” if A = E, otherwise “Fail”.
  • Justification – a brief note explaining why the result is Pass or Fail.
  • Remedial action – steps to be taken when a test fails (defect log, fix, retest).
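
One simple way to keep such a plan is as structured records that can be checked automatically: the Result is “Pass” exactly when the actual outcome matches the expected outcome. The sketch below mirrors the columns above using the illustrative cases TC01–TC03; the field names are assumptions, not a prescribed layout.

    # Test plan held as records; 'actual' is filled in after each test run
    test_plan = [
        {"id": "TC01", "data": "admin / 1234",  "expected": "Login successful", "actual": "Login successful"},
        {"id": "TC02", "data": "admin / wrong", "expected": "Invalid password", "actual": "Logged in"},
        {"id": "TC03", "data": "Score = 0",     "expected": "Value stored as 0", "actual": "Value stored as 0"},
    ]

    for case in test_plan:
        result = "Pass" if case["actual"] == case["expected"] else "Fail"             # Pass exactly when A = E
        remedial = "None" if result == "Pass" else "Log defect, fix routine, retest"  # remedial action on failure
        print(f"{case['id']}: {result} - remedial action: {remedial}")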

7.4 Recording Results & Evaluation

For each test case, the following information must be recorded:

  1. Test case identifier.
  2. Date and tester’s name.
  3. Test environment (hardware, OS, software versions, database state).
  4. Detailed observations (error messages, screen captures, log entries).
  5. Result and justification.
  6. Evaluation against success criteria (e.g., “All critical validation routines passed; ≥ 95 % of test cases passed”).
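
The success criterion in point 6 can be checked with simple arithmetic. A minimal sketch, assuming a list of recorded results and the 95 % example threshold quoted above:

    # Evaluating the test log against a success criterion (95 % is the example threshold from the notes)
    results = ["Pass", "Fail", "Pass", "Pass", "Pass"]   # results recorded for each test case

    pass_rate = results.count("Pass") / len(results) * 100
    threshold = 95

    print(f"Pass rate: {pass_rate:.0f}%")
    print("Acceptance threshold met" if pass_rate >= threshold else "Acceptance threshold not met")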

AO3 Evaluation Checklist (8 % of the IGCSE ICT assessment)

  • Does the fix introduce new defects in other modules? (Regression testing)
  • Is system performance still within acceptable limits?
  • Are all validation routines (type, range, format, mandatory fields) now working?
  • Has the overall pass‑rate met the project’s acceptance threshold?
  • Is the documentation (test log, defect report, version number) up‑to‑date?

Note: because AO3 carries this 8 % weighting in the IGCSE ICT exam, answers to evaluation questions should address at least three items from the checklist to demonstrate thorough evaluation.

7.5 Remedial Actions & Re‑testing Cycle

  1. Log the defect with a clear description, severity and steps to reproduce.
  2. Assign the defect to a developer or responsible team.
  3. Developer implements a fix and updates the code version.
  4. Tester re‑runs the original test case and any related cases (regression).
  5. Update the test plan with the new actual outcome, result, justification and any further remedial actions.
  6. Re‑evaluate against the success criteria before moving to the next test level.
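
A defect log entry only needs a few fields to support this cycle. The sketch below shows one possible structure; the field names and values are illustrative assumptions, not a prescribed format.

    # A minimal defect log entry supporting the re-testing cycle
    defect = {
        "id": "DEF-001",
        "test_case": "TC02",
        "description": "Login accepted an incorrect password",
        "severity": "High",
        "steps_to_reproduce": ["Open login screen", "Enter admin / wrong", "Press Log in"],
        "assigned_to": "Development team",
        "status": "Open",
    }

    # After the fix is implemented, TC02 and related cases are re-run (regression); if they pass:
    defect["status"] = "Closed"
    defect["resolution"] = "Password comparison corrected; TC02 and regression cases re-run: Pass"
    print(defect["id"], "-", defect["status"])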

7.6 Testing Documentation – What students must know

  • Test plan – the master document shown above.
  • Test log / test report – chronological record of tests performed, outcomes and any issues.
  • Defect (bug) report – includes ID, description, severity, status and resolution.
  • Version‑control notes – indicate which software version was tested.
  • Tools – simple spreadsheets for test logs, basic test‑case generators, or lightweight test‑management software (e.g., Trello, Excel).

7.7 Position of Testing in the Systems Life Cycle

Analysis → Design → Implementation → Testing → Maintenance
(feedback loops run from Testing back to Design and back to Analysis)

Testing sits between Implementation and Maintenance. The forward arrows show the normal sequence of stages, while the feedback loops return to earlier stages (Design and Analysis) when defects are found.