Software is never truly finished. After release it operates in a changing environment, so continuous maintenance is required to keep it correct, secure and fit for purpose.
Neglecting maintenance can cause system crashes, security breaches, rising repair costs and loss of user confidence.
| Maintenance type | Purpose | Typical activities | When it is triggered |
|---|---|---|---|
| Corrective | Fix faults discovered after deployment. | Debugging, issuing patches and hot‑fixes. | When users or monitoring tools report errors. |
| Adaptive | Modify the system to work in a changed environment. | Porting to a new OS, updating calls to changed APIs or standards. | When external dependencies, standards or platforms evolve. |
| Perfective | Enhance functionality, performance or usability. | Adding features, optimising code, improving the interface. | When stakeholders request improvements or competitive pressure arises. |
| Preventive | Reduce the risk of future failures. | Refactoring, updating documentation, replacing ageing components. | Proactively, often as part of a scheduled maintenance cycle. |
| Fault type | Description | Typical example |
|---|---|---|
| Syntax fault | Violates the language grammar; the program will not compile or run. | Missing semicolon in Java: `int x = 5` instead of `int x = 5;` |
| Logic fault | Program compiles but produces incorrect results because the algorithm is wrong. | Off‑by‑one error in a loop that processes an array of 10 elements but iterates only to index 8. |
| Run‑time fault | Occurs while the program is executing, often causing a crash or abnormal termination. | Division by zero or a NullPointerException when an object reference is not initialised. |
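The logic-fault row can be demonstrated with a short Java sketch (the class and method names are illustrative, not part of the syllabus):

```java
public class FaultDemo {
    // Logic fault: the condition i < marks.length - 1 stops one element
    // early, so the last mark is never added to the total.
    static int buggySum(int[] marks) {
        int total = 0;
        for (int i = 0; i < marks.length - 1; i++) {
            total += marks[i];
        }
        return total;
    }

    // Corrected version: i < marks.length visits every index.
    static int correctSum(int[] marks) {
        int total = 0;
        for (int i = 0; i < marks.length; i++) {
            total += marks[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] marks = {80, 90, 70};
        System.out.println(buggySum(marks));   // 170 - last element missed
        System.out.println(correctSum(marks)); // 240
    }
}
```

The program compiles and runs without error, which is exactly why logic faults are harder to detect than syntax faults: only testing against a known expected result reveals them.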
| Method | Purpose | Typical A‑Level use |
|---|---|---|
| Dry‑run (desk‑checking) | Manual trace of the algorithm on paper. | Checking the logic of a sorting routine before any code is written. |
| Walkthrough / Peer review | Identify logical errors and improve design through discussion. | Group review of pseudocode for a ticket‑booking system. |
| White‑box (structural) testing | Test internal paths, branches and conditions. | Unit tests covering every branch of a discount‑calculation function. |
| Black‑box (functional) testing | Validate external behaviour against specifications. | Input‑output tests for a calculator app without looking at its code. |
| Integration testing | Check that combined modules interact correctly. | Testing the interface between a GUI front‑end and a database back‑end. |
| Alpha testing | In‑house testing by developers or a dedicated test team. | Early release of a school‑management system to staff for feedback. |
| Beta testing | External testing by real users in a real environment. | Public download of a mobile app for a limited group of students. |
| Acceptance testing | Formal verification that the system meets the client’s requirements. | Final sign‑off by the school board before the timetable system goes live. |
| Stub testing | Replace undeveloped modules with simple placeholders. | Using a stub to simulate a payment gateway while the real API is not yet available. |
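The stub-testing row can be illustrated in Java. This is a minimal sketch: the interface, class names and return policy are all illustrative assumptions, standing in for a real payment API that is not yet available.

```java
// Interface the real payment gateway will eventually implement.
interface PaymentGateway {
    boolean charge(String studentId, double amount);
}

// Stub: a simple placeholder that approves any positive amount,
// so the booking module can be tested before the real API exists.
class PaymentGatewayStub implements PaymentGateway {
    public boolean charge(String studentId, double amount) {
        return amount > 0;
    }
}

public class StubDemo {
    // Module under test: depends only on the interface, not the real API.
    static String bookTicket(PaymentGateway gateway, String studentId, double price) {
        return gateway.charge(studentId, price) ? "Booked" : "Payment declined";
    }

    public static void main(String[] args) {
        PaymentGateway stub = new PaymentGatewayStub();
        System.out.println(bookTicket(stub, "S101", 5.00));  // Booked
        System.out.println(bookTicket(stub, "S102", -1.00)); // Payment declined
    }
}
```

When the real gateway is delivered, it implements the same interface and replaces the stub with no change to `bookTicket`.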
A test plan makes testing systematic, repeatable and auditable. The Cambridge syllabus expects students to be able to describe its likely contents and to produce a simple plan.
| Test Plan – Project title | |
|---|---|
| 1. Test objective | What is to be verified (e.g., “All user‑login scenarios must succeed or fail correctly”). |
| 2. Scope | Modules/features to be tested; items explicitly out of scope. |
| 3. Resources | People, hardware, software tools, time allocation. |
| 4. Test environment | OS, browsers, network configuration, database version. |
| 5. Test data selection | Representative, boundary and invalid data sets. |
| 6. Test cases | Table of ID, input, expected output, pass/fail criteria. |
| 7. Schedule & milestones | When each test phase (unit, integration, system, acceptance) will be executed. |
| 8. Risk & contingency | Known high‑risk areas and fallback actions. |
| Test Plan – Student‑Record Manager | |
|---|---|
| 1. Test objective | Verify that the program correctly adds, searches and deletes student records and that input validation works. |
| 2. Scope | Modules: AddStudent, SearchStudent, DeleteStudent. Out of scope: file‑export feature (not yet implemented). |
| 3. Resources | 2 students (testers), 1 laptop with JDK 17, Eclipse IDE, 2 hours. |
| 4. Test environment | Windows 10, Java console, no network connection. |
| 5. Test data selection | Representative values (e.g., grade 85), boundary values (grades 0 and 100) and invalid values (negative grade, non‑numeric ID). |
| 6. Test cases | Table of TC‑ID, input, expected output and pass/fail criteria. |
| 7. Schedule & milestones | Unit testing – 30 min; Integration testing – 45 min; System testing – 30 min; Review – 15 min. |
| 8. Risk & contingency | Risk: Invalid input causing program crash. Contingency: Add try‑catch blocks and re‑run failed tests. |
Scenario: A student‑record manager program (see the test plan above) fails when the user tries to add a record with a grade of 100.
```
PROCEDURE AddStudent(id, name, grade)
    IF NOT IsNumeric(id) THEN
        PRINT "ID must be numeric"
        RETURN
    END IF
    IF grade < 0 OR grade > 99 THEN
        PRINT "Grade must be between 0 and 99"
        RETURN
    END IF
    STORE (id, name, grade) IN database
    PRINT "Student added"
END PROCEDURE
```
Trace of the failing call `AddStudent(102, "Bob", 100)`:

1. The first IF passes (the ID is numeric).
2. The second IF evaluates `grade > 99` as true, so the error message “Grade must be between 0 and 99” is printed and the procedure returns without storing the record.

The fault is the upper bound of the validation check, which rejects the legitimate maximum grade of 100. Corrected check:

```
IF grade < 0 OR grade > 100 THEN   // corrected upper bound
    PRINT "Grade must be between 0 and 100"
    RETURN
END IF
```
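The corrected validation can also be expressed in Java. This is a sketch only: the class design, in-memory storage and method signature are illustrative assumptions, not the program from the test plan.

```java
import java.util.HashMap;
import java.util.Map;

public class StudentRecords {
    private final Map<Integer, String> records = new HashMap<>();

    // Corrected bounds: grades 0-100 inclusive are accepted.
    public String addStudent(String id, String name, int grade) {
        if (!id.matches("\\d+")) {
            return "ID must be numeric";
        }
        if (grade < 0 || grade > 100) {
            return "Grade must be between 0 and 100";
        }
        records.put(Integer.parseInt(id), name + "," + grade);
        return "Student added";
    }
}
```

Returning the message rather than printing it makes the method easy to unit-test, since each test case can compare the returned string against the expected output.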
| TC‑ID | Input | Expected output | Pass/Fail criteria |
|---|---|---|---|
| TC‑01 (revised) | Add 101, “Alice”, 85 | Record stored; confirmation message | Message shown within 1 s |
| TC‑05 | Add 103, “Charlie”, 100 | Record stored; confirmation message | Message shown within 1 s, record searchable |
Which testing technique is best suited for verifying that a new “export‑to‑CSV” button correctly writes a file when the program is run on Windows 10 and macOS?
Answer: black‑box functional testing – it validates the external behaviour (file creation) on the two operating systems without inspecting the code.
During a project a new government regulation requires that all student ages be stored as a four‑digit year of birth instead of a two‑digit age field. Identify the type of maintenance required and give two specific activities that would be performed.
Answer (example): Adaptive maintenance. Activities: (1) modify the database schema to replace the age column with yearOfBirth; (2) update all input‑validation routines and existing data‑migration scripts to handle four‑digit years.
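Activity (2) might include a conversion routine like the following Java sketch. The helper name and the assumption that each stored age was accurate in the migration year are illustrative, not part of the question.

```java
import java.time.Year;

public class AgeMigration {
    // Hypothetical helper: converts a stored two-digit age into a
    // four-digit year of birth, assuming the age is current in the
    // year the migration script runs.
    static int ageToYearOfBirth(int age, int currentYear) {
        return currentYear - age;
    }

    public static void main(String[] args) {
        int currentYear = Year.now().getValue();
        System.out.println(ageToYearOfBirth(17, currentYear));
    }
}
```

Such a one-off migration script would be run once against the existing database, after which the input-validation routines accept only four-digit years.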
You are given the following fragment of pseudocode for a function that calculates the average of a list of marks.
```
PROCEDURE AvgMarks(list)
    total ← 0
    FOR i ← 1 TO LENGTH(list) - 1
        total ← total + list[i]
    END FOR
    RETURN total / LENGTH(list)
END PROCEDURE
```
During testing the data set list = {80, 90, 70} is used. The expected average is 80, but the function returns an incorrect value. Perform the following tasks:
Sample answer:
The loop runs from 1 to LENGTH(list) - 1, omitting the last element (70). Consequently total = 80 + 90 = 170, and 170 / 3 ≈ 56.7 (or 56 if integer division is used) instead of the correct average of 80. Corrected pseudocode:

```
PROCEDURE AvgMarks(list)
    total ← 0
    FOR i ← 1 TO LENGTH(list)   // include the last element
        total ← total + list[i]
    END FOR
    RETURN total / LENGTH(list)
END PROCEDURE
```
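A Java equivalent of the corrected routine (the class and method names are illustrative):

```java
public class AvgMarks {
    // The loop visits every element, so the average covers the whole list.
    static double average(int[] marks) {
        int total = 0;
        for (int mark : marks) {
            total += mark;
        }
        return (double) total / marks.length; // cast avoids integer division
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{80, 90, 70})); // 80.0
    }
}
```

The cast to `double` matters: `240 / 3` happens to divide exactly, but for a list like {80, 85} integer division would silently truncate 82.5 to 82, a further logic fault.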
| TC‑ID | Input | Expected output | Pass/Fail criteria |
|---|---|---|---|
| TC‑A1 | list = {80, 90, 70} | 80 | Returned value equals 80 (±0.01) |
Assume a released system contains 5 000 lines of code (LOC) and the measured defect density is 0.8 defects per KLOC. The expected number of defects to be fixed during corrective maintenance is:
Defects = (5 000 / 1 000) × 0.8 = 4 defects
Such metrics help students estimate effort, plan resources and justify the need for corrective maintenance.
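The calculation can be captured in a one-line helper (class and method names are illustrative):

```java
public class DefectEstimate {
    // Expected defects = (LOC / 1000) * defect density per KLOC.
    static double expectedDefects(int linesOfCode, double defectsPerKloc) {
        return (linesOfCode / 1000.0) * defectsPerKloc;
    }

    public static void main(String[] args) {
        System.out.println(expectedDefects(5000, 0.8)); // 4.0
    }
}
```

Dividing by `1000.0` rather than `1000` keeps the arithmetic in floating point, so a system of, say, 2 500 LOC gives 2.0 rather than truncating the intermediate result.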