Analyse an existing program and make amendments to enhance functionality while ensuring reliability, maintainability and compliance with the Cambridge International AS & A Level Computer Science (9618) syllabus.
Why Testing and Maintenance Matter
Detect and correct faults before deployment.
Verify that the program satisfies the specified requirements.
Reduce long‑term costs by preventing error‑propagation.
Facilitate future enhancements and adaptations.
Key Terminology (AO1 – factual knowledge)
Test case – a set of inputs, expected outputs and conditions used to verify a piece of software.
Regression testing – re‑executing existing test cases after a change to ensure no previously working functionality has broken.
Coverage – a measure of how much of the code is exercised by the test suite (e.g., statement coverage or branch coverage).
Stub – a simplified implementation of a module used during testing of another module.
Black‑box testing – focuses on inputs/outputs without looking at the code.
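Two of these terms are easiest to see in code. Below is a minimal sketch, assuming a hypothetical absolute() function, of why full statement coverage is weaker than full branch coverage, followed by an equally hypothetical stub standing in for an unwritten database module:

def absolute(x):
    result = x
    if x < 0:
        result = -x
    return result

# absolute(-5) executes every statement (100% statement coverage),
# yet the False branch of the if is never taken; a second test such
# as absolute(3) is needed for full branch coverage.

def get_score_stub(student_id):
    """Stub: returns a fixed, predictable value so that the module
    which calls it can be tested before the real module exists."""
    return 50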
Choosing a Testing Method – Decision Guide

| Situation | Method | Rationale |
|---|---|---|
| Input domain is numeric with clear limits | Boundary‑value analysis | Edges are where many faults occur. |
| Large set of valid inputs that can be grouped | Equivalence partitioning | Reduces the number of test cases while keeping coverage. |
| Dependent module not yet implemented | Stub testing | Isolates the module under test. |
| Final product ready for real users | Beta testing | Gathers feedback from the intended audience. |
| Early prototype, internal team only | Alpha testing | Quick feedback before wider release. |
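The first two rows of the guide translate directly into executable tests. A minimal pytest sketch, assuming a hypothetical percentage_valid() function that accepts marks from 0 to 100 inclusive (the function name and limits are illustrative):

import pytest

def percentage_valid(mark):
    """Accept integer marks in the range 0 to 100 inclusive."""
    return 0 <= mark <= 100

# Boundary-value analysis: test at, and on either side of, each edge.
@pytest.mark.parametrize("mark, expected",
                         [(-1, False), (0, True), (1, True),
                          (99, True), (100, True), (101, False)])
def test_boundaries(mark, expected):
    assert percentage_valid(mark) == expected

# Equivalence partitioning: one representative value per partition
# (below range, in range, above range) instead of all possible marks.
@pytest.mark.parametrize("mark, expected",
                         [(-20, False), (55, True), (150, False)])
def test_partitions(mark, expected):
    assert percentage_valid(mark) == expected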
Typical Test‑Case Format

| Test ID | Purpose | Input(s) | Expected Output | Result | Comments |
|---|---|---|---|---|---|
| TC01 | Validate lower boundary | n = 0 | "Invalid input" | Pass | |
| TC02 | Typical case | n = 5 | Result = 120 | Fail | Off‑by‑one error in loop |
Fault Types
Syntax errors – violations of language grammar detected at compile/interpret time (e.g., missing colon, unmatched parentheses).
Logical errors – algorithmic mistakes that produce incorrect results despite syntactically correct code (e.g., off‑by‑one loops, wrong operator).
Run‑time errors – faults that cause the program to fail during execution (e.g., division by zero, null reference, infinite loop, uncaught exceptions, resource leaks).
Design weaknesses – missing or inadequate input validation, and hard‑coded values that hinder reuse.
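A short illustrative sketch contrasting the three fault types (the values are arbitrary):

# Syntax error – rejected before the program runs:
#     if x > 0        <- missing colon
#         print(x)

# Logical error – the program runs, but the answer is wrong:
total = 0
for i in range(1, 10):   # intended 1..10, but range(1, 10) stops at 9
    total += i
print(total)             # prints 45, not the intended 55

# Run-time error – the program fails during execution:
try:
    average = total / 0  # raises ZeroDivisionError
except ZeroDivisionError:
    print("Run-time fault caught: division by zero")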
Test Strategy / Test Plan – Core Components
A test strategy describes the overall approach; a test plan records the concrete details. Both are required by the syllabus.
Purpose and objectives
Scope – modules/features to be tested and those excluded.
Resources – personnel, hardware, software, and tools required.
Test environment – hardware/software configuration, data sets, stubs or simulators.
Entry and exit criteria – conditions for starting testing and for declaring it complete.
Risk assessment & mitigation – identification of high‑risk areas and contingency plans.
Sample Test‑Plan Template (one‑page)
| Item | Details (example for a "Record‑search" module) |
|---|---|
| Purpose | Verify that the search function returns the correct record for a given ID and handles missing IDs gracefully. |
| Scope | Unit tests for search_record(); integration tests with file I/O module. |
| Resources | 2 students, Python 3.11, VS Code, pytest, sample data file "students.txt". |
| Test environment | Windows 10, 8 GB RAM, no internet connection, stub for database connection. |
| Entry criteria | Code compiled without syntax errors; test data file prepared. |
| Exit criteria | All test cases pass; no high‑severity defects remain. |
| Risks & mitigation | Risk: large data file causing performance issues → use a reduced data set for testing. |
Analysing an Existing Program
1. Read the specification and list the functional requirements.
2. Perform a dry‑run or walkthrough with representative data (see the trace table after this list).
3. Identify violations of coding standards, design principles or Cambridge terminology.
4. Execute the existing test suite (if supplied) and record failures.
5. Document each fault using the test‑case format.
6. Classify the fault (syntax, logic, run‑time) and decide which maintenance category applies.
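As an illustration of step 2, a dry run of the original factorial function from Example 1 below (called with n = 3) exposes its off‑by‑one fault before any code is executed. The trace uses the faulty loop for i in range(1, n):

| i | result |
|---|---|
| (start) | 1 |
| 1 | 1 × 1 = 1 |
| 2 | 1 × 2 = 2 |

The loop ends because range(1, 3) excludes 3, so the function returns 2, although 3! = 6 – exactly the kind of logic fault recorded against TC02.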
Maintenance Concepts (Cambridge categories)
Corrective maintenance – fixing faults discovered after deployment.
Adaptive maintenance – modifying the system to work in a changed environment (e.g., new OS version).
Perfective maintenance – enhancing performance or adding features.
Preventive maintenance – refactoring or adding documentation to reduce future fault incidence.
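A minimal before/after sketch of preventive maintenance, assuming a hypothetical pass‑mark check (names and values are illustrative):

# Before – a hard-coded value that invites future faults:
def passed(score):
    return score >= 50

# After – a named constant and a docstring; refactoring now reduces
# the chance of faults when the pass mark changes (preventive maintenance):
PASS_MARK = 50   # single point of change

def passed(score):
    """Return True if score meets the current pass mark."""
    return score >= PASS_MARK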
Making Amendments – A Structured Approach
1. Identify the required enhancement – state the new functionality, performance goal or defect to be corrected.
2. Design the change – assess the impact on existing modules, data structures, interfaces and on the test plan.
3. Update the code – implement the amendment, following good coding practice (meaningful names, comments, modular design, Cambridge terminology).
4. Extend the test suite – add test cases for the new behaviour and regression tests for unchanged parts (see the pytest sketches in the examples below).
5. Run all tests – verify that no existing functionality is broken (regression testing).
6. Document the change – update comments, design diagrams, the maintenance log and the test plan.
7. Peer review (if possible) – a walkthrough with a classmate or teacher to catch overlooked issues.
Example 1 – Refactoring a Factorial Function (Iterative & Recursive)
Original code (Python‑style pseudocode):
def factorial(n):
    result = 1
    for i in range(1, n):
        result = result * i
    return result
Issues identified:
Loop stops at n‑1 – off‑by‑one error (logic fault).
No input validation for negative numbers – the loop silently returns 1 for negative n (robustness fault).
Hard‑coded start value limits reuse for other product calculations.
Amended iterative version (Cambridge‑compatible comments):
def factorial(n):
    """Return n! for non-negative integers (Cambridge: correct algorithm)."""
    # ---- Input validation (preventive maintenance) ----
    if n < 0:
        raise ValueError("n must be non-negative")
    # ---- Core algorithm (corrected off-by-one) ----
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
Recursive alternative (demonstrates recursion as a perfective enhancement):
def factorial_rec(n):
    """Recursive factorial – illustrates recursion (perfective maintenance)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    if n == 0 or n == 1:
        return 1
    return n * factorial_rec(n - 1)
New test cases (added to the table above) cover n = 0, n = 1, a typical value (e.g., 5), and a negative input.
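These cases translate directly into pytest. A minimal sketch, assuming the amended functions are saved in a module named factorial.py (as in the maintenance log later in this section):

import pytest
from factorial import factorial

def test_lower_boundary():
    assert factorial(0) == 1          # boundary value n = 0

def test_boundary_one():
    assert factorial(1) == 1

def test_typical_case():
    assert factorial(5) == 120        # the previously failing TC02

def test_negative_input_rejected():
    with pytest.raises(ValueError):   # new requirement: reject negative n
        factorial(-3)

Re-running the whole file with pytest -q after each amendment makes the old typical-case test double as a regression test.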
Example 2 – File‑Based Record Search (Arrays, I/O, Exception Handling)
Scenario: A program stores student records in a text file (students.txt). Each line contains ID,Name,Score. The function search_record(id) should return the matching line or raise RecordNotFound.
Original code:
def search_record(id):
    f = open('students.txt', 'r')
    for line in f:
        fields = line.split(',')
        if fields[0] == id:
            f.close()
            return line
    f.close()
    return "Not found"
Issues identified:
No handling of FileNotFoundError – run‑time fault.
fields[0] is always a string, so comparing it with an integer ID always fails – logic fault.
Returns a generic string instead of raising an exception – reduces maintainability.
File is opened without a with statement – risk of resource leak.
Amended version (with ADT StudentRecord):
class RecordNotFound(Exception):
    pass

class StudentRecord:
    """Simple ADT to hold a student record."""

    def __init__(self, sid, name, score):
        self.sid = int(sid)
        self.name = name
        self.score = float(score)

    def __str__(self):
        return f"{self.sid},{self.name},{self.score}"

def search_record(sid):
    """Return a StudentRecord for the given ID or raise RecordNotFound."""
    try:
        with open('students.txt', 'r') as f:
            for line in f:
                fields = line.strip().split(',')
                if int(fields[0]) == int(sid):
                    return StudentRecord(*fields)
        raise RecordNotFound(f"ID {sid} not found")
    except FileNotFoundError:
        raise FileNotFoundError("Data file 'students.txt' is missing")
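A possible calling pattern for the amended function (the ID and record values are illustrative):

try:
    record = search_record(123)
    print(record)                # e.g. 123,Brian,82.0 via __str__
except RecordNotFound as err:
    print(err)                   # e.g. ID 123 not found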
Additional test cases (illustrating black‑box and exception handling):
| Test ID | Purpose | Input(s) | Expected Output | Result | Comments |
|---|---|---|---|---|---|
| TC10 | Valid ID exists | sid = 123 | StudentRecord with ID 123 | Pass | |
| TC11 | Non‑existent ID | sid = 999 | RecordNotFound exception | Pass | |
| TC12 | Missing data file | sid = 123 (file removed) | FileNotFoundError exception | Pass | |
| TC13 | Boundary ID (first line) | sid = "001" | StudentRecord with ID 1 | Pass | |
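A minimal pytest sketch of TC10–TC13, assuming the amended code lives in search_record.py (as in the maintenance log below) and using illustrative sample data. Because search_record() opens 'students.txt' in the current directory, each test runs from a temporary folder:

import pytest
from search_record import search_record, RecordNotFound

SAMPLE = "001,Amina,67.5\n123,Brian,82.0\n"   # illustrative records

@pytest.fixture
def data_dir(tmp_path, monkeypatch):
    (tmp_path / "students.txt").write_text(SAMPLE)
    monkeypatch.chdir(tmp_path)    # run the test from the temp folder
    return tmp_path

def test_valid_id_exists(data_dir):                   # TC10
    assert search_record(123).sid == 123

def test_nonexistent_id(data_dir):                    # TC11
    with pytest.raises(RecordNotFound):
        search_record(999)

def test_missing_data_file(tmp_path, monkeypatch):    # TC12
    monkeypatch.chdir(tmp_path)    # empty folder: no students.txt
    with pytest.raises(FileNotFoundError):
        search_record(123)

def test_leading_zero_boundary_id(data_dir):          # TC13
    assert search_record("001").sid == 1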
Documentation and Maintenance Log
Date: 2025‑11‑22
Module: factorial.py
Change: Added input validation, corrected loop bounds, added recursive version.
Reason: Failed test case TC02 (off‑by‑one) and new requirement for negative‑input handling.
Tester: A. Smith
Result: All test cases now pass; regression testing confirmed no breakage.

Date: 2025‑12‑03
Module: search_record.py
Change: Implemented with‑statement, added RecordNotFound exception, introduced StudentRecord ADT, added file‑not‑found handling.
Reason: Multiple run‑time faults identified during integration testing (TC10‑TC13).
Tester: B. Lee
Result: All new test cases pass; existing system tests unchanged.
Checklist for a Successful Amendment
Requirement clearly understood and recorded.
Design impact assessed (dependencies, interfaces, test plan).
Code follows naming conventions, includes comments and uses Cambridge terminology.
All relevant test cases (old and new) executed – regression testing passed.
Documentation updated (code comments, design diagrams, maintenance log, test plan).
Peer review or walkthrough performed where possible.
Suggested Diagram
Testing cycle flowchart – Write test → Run test → Record result → Fix fault → Re‑run test → Pass.
Summary
Effective testing and maintenance require a systematic approach:
Know the terminology (AO1) and coverage measures.
Select the appropriate testing method using the decision‑guide.
Produce a concise test strategy and a filled‑in test‑plan template.
Analyse the existing code, classify faults and choose the correct maintenance category.
Apply the 7‑step amendment workflow, extend the test suite and perform regression testing.
Document every change in a maintenance log and, where possible, obtain a peer review.
Following this structured process enables students to meet the Cambridge 12.3 learning outcomes and to excel in both AO2 (analysis) and AO3 (design, implementation and evaluation) assessment tasks.