Show understanding of the need for continuing maintenance of a system and the differences between each type of maintenance

12.3 Program Testing and Maintenance

1. Why Ongoing Maintenance Is Essential

Software is never truly finished. Once released, it operates in a changing environment, so continuing maintenance is needed throughout its life. Four kinds of maintenance address different needs:

  • Corrective maintenance – fix faults that escaped testing.
  • Adaptive maintenance – adapt the system to new hardware, operating systems, standards or third‑party APIs.
  • Perfective maintenance – improve performance, usability or add new functionality in response to user feedback or market pressure.
  • Preventive maintenance – reduce the risk of future failures by refactoring code, updating documentation and extending the test suite.

Neglecting maintenance can cause system crashes, security breaches, rising repair costs and loss of user confidence.

2. Types of Maintenance (IEEE Classification)

Corrective maintenance
  Purpose: Fix faults discovered after deployment.
  Typical activities:
    • Bug identification and debugging
    • Patch creation and distribution
    • Regression testing
  Triggered: When users or monitoring tools report errors.

Adaptive maintenance
  Purpose: Modify the system to work in a changed environment.
  Typical activities:
    • Porting to a new OS or hardware platform
    • Updating third‑party libraries or APIs
    • Changing configuration files for new network settings
  Triggered: When external dependencies, standards or platforms evolve.

Perfective maintenance
  Purpose: Enhance functionality, performance or usability.
  Typical activities:
    • Adding new features requested by users
    • Optimising algorithms for speed or memory
    • Improving UI design
  Triggered: When stakeholders request improvements or competitive pressure arises.

Preventive maintenance
  Purpose: Reduce the risk of future failures.
  Typical activities:
    • Refactoring code to improve structure
    • Updating documentation and comments
    • Adding test cases and static‑analysis checks
  Triggered: Proactively, often as part of a scheduled maintenance cycle.

3. Fault Taxonomy – What Can Go Wrong?

  • Syntax fault – violates the language grammar; the program will not compile or run. Example: a missing semicolon in Java, such as int x = 5 instead of int x = 5;
  • Logic fault – the program compiles but produces incorrect results because the algorithm is wrong. Example: an off‑by‑one error in a loop that processes an array of 10 elements but iterates only to index 8.
  • Run‑time fault – occurs while the program is executing, often causing a crash or abnormal termination. Example: division by zero, or a NullPointerException when an object reference is not initialised.
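
The three fault types can be seen side by side in the short Java sketch below; the class and method names are invented for this illustration.

// Syntax fault: the line below would not compile because the semicolon is missing.
// int x = 5

public class FaultExamples {

    // Logic fault: compiles and runs, but the loop stops one element early,
    // so the last mark is never added to the total (an off-by-one error).
    static int sumOfMarks(int[] marks) {
        int total = 0;
        for (int i = 0; i < marks.length - 1; i++) {   // should be i < marks.length
            total += marks[i];
        }
        return total;
    }

    // Run-time fault: compiles, but calling it with name == null
    // throws a NullPointerException while the program is executing.
    static int nameLength(String name) {
        return name.length();
    }

    public static void main(String[] args) {
        System.out.println(sumOfMarks(new int[] {80, 90, 70})); // prints 170, not 240
        System.out.println(nameLength(null));                   // crashes at run time
    }
}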

4. Testing Methods – How Do We Find Faults?

  • Dry‑run (desk‑checking) – manual trace of the algorithm on paper. A‑Level use: checking the logic of a sorting routine before any code is written.
  • Walkthrough / peer review – identify logical errors and improve design through discussion. A‑Level use: group review of pseudocode for a ticket‑booking system.
  • White‑box (structural) testing – test internal paths, branches and conditions. A‑Level use: unit tests covering every branch of a discount‑calculation function.
  • Black‑box (functional) testing – validate external behaviour against specifications. A‑Level use: input‑output tests for a calculator app without looking at its code.
  • Integration testing – check that combined modules interact correctly. A‑Level use: testing the interface between a GUI front‑end and a database back‑end.
  • Alpha testing – in‑house testing by developers or a dedicated test team. A‑Level use: early release of a school‑management system to staff for feedback.
  • Beta testing – external testing by real users in a real environment. A‑Level use: public download of a mobile app for a limited group of students.
  • Acceptance testing – formal verification that the system meets the client’s requirements. A‑Level use: final sign‑off by the school board before the timetable system goes live.
  • Stub testing – replace undeveloped modules with simple placeholders. A‑Level use: using a stub to simulate a payment gateway while the real API is not yet available (see the sketch below).
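
The stub‑testing entry above can be illustrated in Java. This is a minimal sketch, assuming a hypothetical PaymentGateway interface and Checkout class; it only shows how a placeholder object can stand in for a module that has not yet been written.

// The real payment module is not yet available, so the calling code is
// written against an interface ...
interface PaymentGateway {
    boolean charge(String studentId, double amount);
}

// ... and a stub provides a fixed, predictable answer so that the rest of
// the system can be tested now.
class PaymentGatewayStub implements PaymentGateway {
    @Override
    public boolean charge(String studentId, double amount) {
        System.out.println("STUB: pretending to charge " + amount + " for " + studentId);
        return true;   // always succeeds – enough to test the calling code
    }
}

class Checkout {
    private final PaymentGateway gateway;

    Checkout(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String payFees(String studentId, double amount) {
        return gateway.charge(studentId, amount) ? "Payment accepted" : "Payment failed";
    }
}

// Example use during testing:
//   Checkout checkout = new Checkout(new PaymentGatewayStub());
//   System.out.println(checkout.payFees("101", 25.0));   // "Payment accepted"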

5. Test Strategy & Test Plan – What Should Be Documented?

A test plan makes testing systematic, repeatable and auditable. The Cambridge syllabus expects students to be able to describe its likely contents and to produce a simple plan.

5.1 Test‑Plan Template (Syllabus‑aligned)

Test Plan – Project title
1. Test objective – what is to be verified (e.g., “All user‑login scenarios must succeed or fail correctly”).
2. Scope – modules/features to be tested; items explicitly out of scope.
3. Resources – people, hardware, software tools, time allocation.
4. Test environment – OS, browsers, network configuration, database version.
5. Test data selection – representative, boundary and invalid data sets.
6. Test cases – table of ID, input, expected output, pass/fail criteria.
7. Schedule & milestones – when each test phase (unit, integration, system, acceptance) will be executed.
8. Risk & contingency – known high‑risk areas and fallback actions.

5.2 Sample Test Plan – “Student‑Record Manager” (simple console program)

Test Plan – Student‑Record Manager
1. Test objective – verify that the program correctly adds, searches and deletes student records and that input validation works.
2. Scope – modules: AddStudent, SearchStudent, DeleteStudent. Out of scope: file‑export feature (not yet implemented).
3. Resources – 2 students (testers), 1 laptop with JDK 17, Eclipse IDE, 2 hours.
4. Test environment – Windows 10, Java console, no network connection.
5. Test data selection:
  • Valid record: ID = 101, Name = “Alice”, Grade = 85
  • Boundary ID: 0 and 9999
  • Invalid data: non‑numeric ID, empty name, grade = ‑5

6. Test cases

TC‑ID | Input | Expected output | Pass/Fail criteria
TC‑01 | Add 101, “Alice”, 85 | Record stored; confirmation message displayed | Message shown within 1 s
TC‑02 | Search ID = 101 | Displays “Alice – 85” | Exact match shown, no extra data
TC‑03 | Delete ID = 101 | Record removed; “Deleted” message | Subsequent search for 101 returns “Not found”
TC‑04 | Add ID = “ABC” | Error “ID must be numeric” | Error displayed, record not added

7. Schedule & milestones – unit testing 30 min; integration testing 45 min; system testing 30 min; review 15 min.
8. Risk & contingency – Risk: invalid input causing a program crash. Contingency: add try‑catch blocks and re‑run failed tests.
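
Test cases such as TC‑01 and TC‑04 could also be automated rather than run by hand. The sketch below is an illustration only: it assumes JUnit 5 is available and invents a minimal in‑memory StudentRecordManager so that the tests compile; the real program’s class and method names may differ.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Minimal in-memory implementation, invented so the tests below are self-contained.
class StudentRecordManager {
    private final java.util.Map<String, String> records = new java.util.HashMap<>();

    boolean addStudent(String id, String name, int grade) {
        if (!id.matches("\\d+")) {        // TC-04: non-numeric IDs are rejected
            return false;
        }
        records.put(id, name + " – " + grade);
        return true;
    }

    String search(String id) {
        return records.getOrDefault(id, "Not found");
    }
}

class StudentRecordManagerTest {

    // TC-01: a valid record is stored and can be found again.
    @Test
    void addValidRecordStoresIt() {
        StudentRecordManager manager = new StudentRecordManager();
        assertTrue(manager.addStudent("101", "Alice", 85));
        assertEquals("Alice – 85", manager.search("101"));
    }

    // TC-04: a non-numeric ID is rejected and nothing is stored.
    @Test
    void addNonNumericIdIsRejected() {
        StudentRecordManager manager = new StudentRecordManager();
        assertFalse(manager.addStudent("ABC", "Bob", 70));
        assertEquals("Not found", manager.search("ABC"));
    }
}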

6. Techniques for Exposing and Avoiding Faults

  • Debugging (systematic)

    1. Reproduce the fault.
    2. Isolate the code segment (breakpoints, print‑statement tracing, IDE debugger).
    3. Identify the cause (incorrect variable, logic error, etc.).
    4. Apply a fix and re‑run relevant tests.

  • Static analysis – tools that examine source code without execution (linters, PMD, SonarQube) to detect syntax errors, unused variables and potential security issues.
  • Code reviews / pair programming – another pair of eyes can spot logical mistakes, improve readability and enforce coding standards.
  • Unit testing – write small, automated tests for individual functions; run them after every change to catch regressions early.
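
As a concrete illustration of the last point, the sketch below unit‑tests a small, invented applyDiscount function with JUnit 5 (an assumption; the syllabus does not prescribe a test framework). One test exercises each branch, so it also serves as a simple white‑box example.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountTest {

    // Function under test – invented for this illustration:
    // 10% discount when 20 or more items are bought.
    static double applyDiscount(double price, int quantity) {
        return (quantity >= 20) ? price * quantity * 0.9 : price * quantity;
    }

    // One test per branch, so both paths of the condition are exercised.
    @Test
    void bulkOrderGetsDiscount() {
        assertEquals(180.0, applyDiscount(10.0, 20), 0.001);   // 20 × 10 × 0.9 = 180
    }

    @Test
    void smallOrderPaysFullPrice() {
        assertEquals(50.0, applyDiscount(10.0, 5), 0.001);     // 5 × 10 = 50
    }
}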

7. Worked Example – From Fault to Fix and Test‑Plan Update (AO3)

Scenario: A student‑record manager program (see the test plan above) fails when the user tries to add a record with a grade of 100.

7.1 Pseudocode (with the fault)

PROCEDURE AddStudent(id, name, grade)
    IF NOT IsNumeric(id) THEN
        PRINT "ID must be numeric"
        RETURN
    END IF
    IF grade < 0 OR grade > 99 THEN
        PRINT "Grade must be between 0 and 99"
        RETURN
    END IF
    STORE (id, name, grade) IN database
    PRINT "Student added"
END PROCEDURE

7.2 Dry‑run (desk‑checking)

  1. Call AddStudent(102, "Bob", 100).
  2. First IF passes (ID is numeric).
  3. Second IF evaluates: grade > 99 is true, so the error message “Grade must be between 0 and 99” is printed and the function returns.
  4. Result: a perfectly valid grade of 100 is rejected – a logic fault (incorrect boundary condition).

7.3 Fixing the Fault

IF grade < 0 OR grade > 100 THEN    // corrected upper bound
    PRINT "Grade must be between 0 and 100"
    RETURN
END IF
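
For comparison, a minimal Java sketch of the corrected validation; the surrounding AddStudent logic and storage are omitted and the class and method names are chosen for the example.

class GradeValidation {
    // Corrected boundary: a grade of 100 is now accepted.
    static boolean isValidGrade(int grade) {
        return grade >= 0 && grade <= 100;
    }

    public static void main(String[] args) {
        System.out.println(isValidGrade(100));   // true  – previously rejected
        System.out.println(isValidGrade(101));   // false – still invalid
        System.out.println(isValidGrade(-5));    // false – still invalid
    }
}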

7.4 Updating the Test Plan

  • Re‑run TC‑01 to confirm that a normal valid record is still accepted (regression check).
  • Add a new test case, TC‑05, to verify that a grade of 100 is now accepted at the corrected boundary.

TC‑ID | Input | Expected output | Pass/Fail criteria
TC‑01 (re‑run) | Add 101, “Alice”, 85 | Record stored; confirmation message | Message shown within 1 s
TC‑05 | Add 103, “Charlie”, 100 | Record stored; confirmation message | Message shown within 1 s, record searchable

7.5 Mapping to Syllabus Requirements

  • Locate and identify the fault – logic fault, off‑by‑one boundary error (AO3).
  • Correct the error – change the comparison operator (AO3).
  • Update the test plan with new/modified test cases (AO2, AO3).
  • Run regression tests to ensure no other functionality broke (AO2).

8. Relationship Between Testing and Maintenance

  1. Regression testing – after corrective changes, verify that existing functionality still works.
  2. Compatibility testing – after adaptive changes, confirm the system runs on the new platform or with updated APIs.
  3. Performance testing – after perfective enhancements, measure speed or resource usage.
  4. Static analysis & code reviews – identify candidates for preventive maintenance.

9. Exam‑Style Questions

9.1 Multiple‑Choice (choose the most appropriate testing method)

Which testing technique is best suited for verifying that a new “export‑to‑CSV” button correctly writes a file when the program is run on Windows 10 and macOS?

  A. Dry‑run
  B. Black‑box functional testing
  C. White‑box structural testing
  D. Stub testing

Answer: B – black‑box functional testing validates the external behaviour (file creation) on the two operating systems.

9.2 Short answer (maintenance type justification)

During a project a new government regulation requires that all student ages be stored as a four‑digit year of birth instead of a two‑digit age field. Identify the type of maintenance required and give two specific activities that would be performed.

Answer (example): Adaptive maintenance. Activities: (1) modify the database schema to replace the age column with a yearOfBirth column; (2) update all input‑validation routines and write a data‑migration script that converts the existing data to four‑digit years.
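
The data‑conversion activity could be sketched in Java as below. The class and method names, and the assumption that each age was recorded in the current year, are illustrative only; a real migration would use the capture date stored with each record.

import java.time.Year;

class AgeToYearOfBirthMigration {
    // Converts a legacy two-digit age into a four-digit year of birth.
    // Assumption for this sketch: the age was recorded in the current year.
    static int yearOfBirthFromAge(int age) {
        return Year.now().getValue() - age;
    }

    public static void main(String[] args) {
        System.out.println(yearOfBirthFromAge(17));   // e.g. 2008 if run in 2025
    }
}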

9.3 Extended problem (AO3 – locate, fix, update test plan)

You are given the following fragment of pseudocode for a function that calculates the average of a list of marks.

PROCEDURE AvgMarks(list)
    total ← 0
    FOR i ← 1 TO LENGTH(list) - 1
        total ← total + list[i]
    END FOR
    RETURN total / LENGTH(list)
END PROCEDURE

During testing the following data set is used:

  • list = {80, 90, 70}

The expected average is 80, but the function returns approximately 56.7 (or 56 if integer division is used). Perform the following tasks:

  1. Identify the fault type and explain why the result is wrong.
  2. Show the corrected pseudocode.
  3. Write a new test case (ID, input, expected output, pass/fail criteria) that would expose this fault.

Sample answer:

  1. Fault type: Logic fault – the loop iterates only to LENGTH(list) - 1, so the last element (70) is never added. The total is therefore 80 + 90 = 170, and the function returns 170 / 3 ≈ 56.7 (or 56 with integer division) instead of the expected 80.
  2. Corrected pseudocode:

    PROCEDURE AvgMarks(list)
        total ← 0
        FOR i ← 1 TO LENGTH(list)    // include the last element
            total ← total + list[i]
        END FOR
        RETURN total / LENGTH(list)
    END PROCEDURE

  3. Test case:

    TC‑ID | Input | Expected output | Pass/Fail criteria
    TC‑A1 | list = {80, 90, 70} | 80 | Returned value equals 80 (±0.01)
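
For reference, the corrected algorithm and test case TC‑A1 translate directly into Java. This sketch uses zero‑based array indexing instead of the pseudocode’s 1‑based indexing; the class and method names are chosen for the example.

class AvgMarks {
    // Corrected average: every element of the list is included in the total.
    static double avgMarks(int[] list) {
        int total = 0;
        for (int i = 0; i < list.length; i++) {   // zero-based: covers all elements
            total += list[i];
        }
        return (double) total / list.length;
    }

    public static void main(String[] args) {
        // TC-A1: {80, 90, 70} should give exactly 80.
        System.out.println(avgMarks(new int[] {80, 90, 70}));   // 80.0
    }
}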

10. Key Points to Remember

  • Maintenance is a continuous activity that extends the useful life of software.
  • The four IEEE maintenance types have distinct focuses but often overlap in real projects.
  • Effective testing (unit, integration, regression, compatibility, performance) underpins every maintenance activity.
  • Preventive maintenance, although less visible, can dramatically lower future corrective costs.
  • Documenting test strategies, test plans and defect metrics is essential for efficient maintenance and for answering exam questions.

11. Example Calculation – Defect Metrics

Assume a released system contains 5 000 lines of code (LOC) and the measured defect density is 0.8 defects per KLOC. The expected number of defects to be fixed during corrective maintenance is:

Defects = (5 000 / 1 000) × 0.8 = 4 defects

Such metrics help students estimate effort, plan resources and justify the need for corrective maintenance.
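
The same formula can be expressed as a small Java sketch so it can be reused with other values of LOC and defect density; the class and method names are illustrative.

class DefectEstimate {
    // Expected defects = (lines of code / 1000) × defects per KLOC
    static double expectedDefects(int linesOfCode, double defectsPerKloc) {
        return (linesOfCode / 1000.0) * defectsPerKloc;
    }

    public static void main(String[] args) {
        System.out.println(expectedDefects(5000, 0.8));   // 4.0
    }
}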