
12.3 Program Testing and Maintenance

Learning Objective

Analyse an existing program and make amendments to enhance functionality while ensuring reliability, maintainability and compliance with the Cambridge International AS & A Level Computer Science (9618) syllabus.

Why Testing and Maintenance Matter

  • Detect and correct faults before deployment.
  • Verify that the program satisfies the specified requirements.
  • Reduce long‑term costs by preventing error‑propagation.
  • Facilitate future enhancements and adaptations.

Key Terminology (AO1 – factual knowledge)

  • Test case – a set of inputs, expected outputs and conditions used to verify a piece of software.
  • Regression testing – re‑executing existing test cases after a change to ensure no previously working functionality has broken.
  • Coverage – a measure of how much of the code is exercised by the test suite (see “Coverage Measures” below).
  • Stub – a simplified implementation of a module used during testing of another module.
  • Alpha / Beta testing – pre‑release testing phases; alpha is internal, beta involves external users.
  • Dry‑run – a manual trace of the algorithm on paper before any code is executed.
  • Walk‑through – a peer review of design or code, often performed step‑by‑step.
  • White‑box (structural) testing – testing based on knowledge of the internal code (statement, branch, path, condition coverage).
  • Black‑box (functional) testing – testing based solely on inputs and expected outputs.
  • Boundary‑value analysis – testing at the edges of input domains.
  • Equivalence partitioning – dividing inputs into classes that should be processed similarly.

Coverage Measures

  • Statement coverage – each executable statement is executed at least once.
  • Branch coverage – each possible branch (true/false) of every decision point is taken.
  • Path coverage – every possible path through the program is exercised (often impractical for large programs).
  • Condition coverage – each Boolean sub‑condition within a decision evaluates to true and false.

In the Cambridge examinations, students are expected to name at least two of the above measures and explain when each is appropriate.
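To make these measures concrete, the short sketch below uses a hypothetical classify() function (not one of the syllabus examples) and shows which test inputs achieve statement and branch coverage.

def classify(mark):
    """Return 'pass' or 'fail' for a mark out of 100."""
    if mark >= 50:
        result = "pass"
    else:
        result = "fail"
    return result

# Statement coverage: the two calls below, taken together, execute every statement.
# Branch coverage: they also take both the true and the false branch of the decision.
assert classify(75) == "pass"   # true branch
assert classify(30) == "fail"   # false branch

# Condition coverage would matter if the decision contained several sub-conditions,
# e.g. if mark >= 50 and attendance >= 80: each sub-condition must be seen true and false.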

Types of Testing (Cambridge terminology)

  1. Unit testing – test individual modules or functions in isolation.
  2. Integration testing – verify interactions between combined modules.
  3. System testing – evaluate the complete integrated system against the specification.
  4. Acceptance testing – confirm the system satisfies the end‑user’s needs (often called “user acceptance testing”).
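The difference between the first two levels can be sketched in code; the functions validate_score and add_student below are hypothetical and exist only to illustrate the idea.

def validate_score(score):
    """Unit under test: accept scores from 0 to 100 inclusive."""
    return 0 <= score <= 100

def add_student(register, name, score):
    """Depends on validate_score – exercised at integration level."""
    if not validate_score(score):
        raise ValueError("score out of range")
    register[name] = score
    return register

# Unit test: validate_score in isolation.
assert validate_score(100) is True
assert validate_score(101) is False

# Integration test: add_student and validate_score working together.
register = add_student({}, "Ada", 72)
assert register == {"Ada": 72}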

Testing Methods and When to Use Them

  • Dry‑run – before any code is compiled; useful for early algorithm design.
  • Walk‑through – peer review of design or code; catches logical omissions early.
  • White‑box testing – when the internal structure is known (e.g., to achieve statement or branch coverage).
  • Black‑box testing – when only the specification is available (e.g., for a third‑party library).
  • Boundary‑value analysis – for numeric ranges, string lengths, array indices.
  • Equivalence partitioning – when a large input domain can be split into a few representative classes.
  • Stub testing – when a called module is not yet available or is expensive to use (see the sketch after this list).
  • Alpha testing – internal testing by developers or the teaching team.
  • Beta testing – external testing by real users or another class.
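As an illustration of stub testing, the sketch below stands in for a database module that has not yet been written; the names get_student_record_stub and calculate_grade are hypothetical.

def get_student_record_stub(student_id):
    """Stub: returns fixed data instead of querying the real database."""
    return {"id": student_id, "name": "Test Student", "score": 64}

def calculate_grade(student_id, fetch=get_student_record_stub):
    """Module under test – the real fetch function can be supplied later."""
    record = fetch(student_id)
    return "pass" if record["score"] >= 50 else "fail"

# The module can be unit-tested before the database module exists.
assert calculate_grade(1) == "pass"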

Decision‑Guide for Selecting a Testing Method

For each problem characteristic, the recommended method(s) and reason:

  • Algorithmic logic is known and internal paths must be verified → white‑box testing (statement/branch/path coverage): ensures every part of the code is exercised.
  • Only the external behaviour is documented → black‑box testing (equivalence partitioning, boundary‑value analysis): focuses on inputs and outputs without looking at the code.
  • Input domain is numeric with clear limits → boundary‑value analysis: edges are where many faults occur.
  • Large set of valid inputs that can be grouped → equivalence partitioning: reduces the number of test cases while keeping coverage.
  • Dependent module not yet implemented → stub testing: isolates the module under test.
  • Final product ready for real users → beta testing: gathers feedback from the intended audience.
  • Early prototype, internal team only → alpha testing: quick feedback before wider release.
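For example, applying boundary‑value analysis and equivalence partitioning to a hypothetical mark‑validation rule (valid marks 0–100) might yield test values such as these:

def is_valid_mark(mark):
    """Valid marks lie in the range 0 to 100 inclusive."""
    return 0 <= mark <= 100

# Equivalence partitions: below range, within range, above range.
# One representative value is tested from each partition.
assert is_valid_mark(-20) is False   # partition: below range
assert is_valid_mark(55) is True     # partition: within range
assert is_valid_mark(250) is False   # partition: above range

# Boundary values: just outside, on, and just inside each edge of the range.
for mark, expected in [(-1, False), (0, True), (1, True),
                       (99, True), (100, True), (101, False)]:
    assert is_valid_mark(mark) is expected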

Typical Test‑Case Format

Test ID | Purpose | Input(s) | Expected Output | Result | Comments
TC01 | Validate lower boundary (0! = 1) | n = 0 | 1 | Pass | –
TC02 | Typical case | n = 5 | 120 | Fail | Off‑by‑one error in loop
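A table like this maps directly onto automated tests. The sketch below shows how TC01 and TC02 might look in pytest, assuming the function under test is the factorial routine analysed later in this section and that it lives in a module named factorial (an assumed name):

from factorial import factorial   # assumed module name

def test_tc01_lower_boundary():
    # TC01 – lower boundary: 0! is defined as 1.
    assert factorial(0) == 1

def test_tc02_typical_case():
    # TC02 – typical case: 5! = 120 (fails on the original code because of the off‑by‑one loop).
    assert factorial(5) == 120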

Fault Types

  • Syntax errors – violations of language grammar detected at compile/interpret time (e.g., missing colon, unmatched parentheses).
  • Logical errors – algorithmic mistakes that produce incorrect results despite syntactically correct code (e.g., off‑by‑one loops, wrong operator).
  • Run‑time errors – faults that cause the program to fail during execution (e.g., division by zero, null reference, infinite loop, uncaught exceptions, resource leaks).
  • Missing or inadequate input validation.
  • Hard‑coded values that hinder re‑use.

Test Strategy / Test Plan – Core Components

A test strategy describes the overall approach; a test plan records the concrete details. Both are required by the syllabus.

  1. Purpose and objectives
  2. Scope – modules/features to be tested and those excluded.
  3. Resources – personnel, hardware, software, and tools required.
  4. Test environment – hardware/software configuration, data sets, stubs or simulators.
  5. Entry and exit criteria – conditions for starting testing and for declaring it complete.
  6. Risk assessment & mitigation – identification of high‑risk areas and contingency plans.

Sample Test‑Plan Template (one‑page)

Example entries for a “Record‑search” module:

  • Purpose – Verify that the search function returns the correct record for a given ID and handles missing IDs gracefully.
  • Scope – Unit tests for search_record(); integration tests with the file I/O module.
  • Resources – 2 students, Python 3.11, VS Code, pytest, sample data file “students.txt”.
  • Test environment – Windows 10, 8 GB RAM, no internet connection, stub for the database connection.
  • Entry criteria – Code compiles without syntax errors; test data file prepared.
  • Exit criteria – All test cases pass; no high‑severity defects remain.
  • Risks & mitigation – Risk: a large data file could cause performance issues → mitigation: use a reduced data set for testing.

Analysing an Existing Program

  1. Read the specification and list the functional requirements.
  2. Perform a dry‑run or walkthrough with representative data.
  3. Identify violations of coding standards, design principles or Cambridge terminology.
  4. Execute the existing test suite (if supplied) and record failures.
  5. Document each fault using the test‑case format.
  6. Classify the fault (syntax, logic, run‑time) and decide which maintenance category applies.

Maintenance Concepts (Cambridge categories)

  • Corrective maintenance – fixing faults discovered after deployment.
  • Adaptive maintenance – modifying the system to work in a changed environment (e.g., new OS version).
  • Perfective maintenance – enhancing performance or adding features.
  • Preventive maintenance – refactoring or adding documentation to reduce future fault incidence.
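The categories can be illustrated with small code changes; the before/after fragments below are hypothetical examples, not taken from the case studies later in this section.

# Corrective – fix a fault: the pass mark was compared with > instead of >=.
#   before: if mark > 50:          after: if mark >= 50:
# Adaptive – the data file moved when the school changed operating systems.
#   before: open('C:\\data\\students.txt')    after: open('/srv/data/students.txt')
# Perfective – add a feature, e.g. report a grade letter as well as pass/fail.
# Preventive – replace a magic number with a named constant to reduce future faults:

PASS_MARK = 50   # named constant instead of a hard-coded 50 (preventive maintenance)

def classify(mark):
    return "pass" if mark >= PASS_MARK else "fail"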

Making Amendments – A Structured Approach

  1. Identify the required enhancement – state the new functionality, performance goal or defect to be corrected.
  2. Design the change – assess impact on existing modules, data structures, interfaces and on the test plan.
  3. Update the code – implement the amendment, following good coding practice (meaningful names, comments, modular design, Cambridge terminology).
  4. Extend the test suite – add test cases for the new behaviour and regression tests for unchanged parts.
  5. Run all tests – verify that no existing functionality is broken (regression testing).
  6. Document the change – update comments, design diagrams, the maintenance log and the test plan.
  7. Peer review (if possible) – a walkthrough with a classmate or teacher to catch overlooked issues.

Example 1 – Refactoring a Factorial Function (Iterative & Recursive)

Original code (Python):

def factorial(n):
    result = 1
    for i in range(1, n):
        result = result * i
    return result

Issues identified:

  • Loop stops at n‑1 – off‑by‑one error (logic fault).
  • No input validation for negative numbers – a negative n silently returns 1 instead of being rejected (missing validation).
  • Hard‑coded start value limits reuse for other product calculations.

Amended iterative version (with explanatory comments):

def factorial(n):
    """Return n! for non‑negative integers (Cambridge: correct algorithm)."""
    # ---- Input validation (preventive maintenance) ----
    if n < 0:
        raise ValueError("n must be non‑negative")
    # ---- Core algorithm (corrected off‑by‑one) ----
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

Recursive alternative (demonstrates recursion as an alternative implementation):

def factorial_rec(n):
    """Recursive factorial – illustrates recursion (perfective maintenance)."""
    if n < 0:
        raise ValueError("n must be non‑negative")
    if n == 0 or n == 1:
        return 1
    return n * factorial_rec(n - 1)

New test cases (added to the table above) cover n = 0, n = 1, a typical value (e.g., 5), and a negative input.
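Expressed in pytest (assuming the amended functions are saved in a module named factorial – an assumed name), those cases might look like this:

import pytest
from factorial import factorial, factorial_rec   # assumed module name

@pytest.mark.parametrize("func", [factorial, factorial_rec])
def test_boundaries_and_typical_value(func):
    assert func(0) == 1     # lower boundary: 0! = 1
    assert func(1) == 1     # boundary: 1! = 1
    assert func(5) == 120   # typical case

@pytest.mark.parametrize("func", [factorial, factorial_rec])
def test_negative_input_rejected(func):
    with pytest.raises(ValueError):
        func(-3)            # invalid input must raise ValueError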

Example 2 – File‑Based Record Search (Arrays, I/O, Exception Handling)

Scenario: A program stores student records in a text file (students.txt). Each line contains ID,Name,Score. The function search_record(id) should return the matching line or raise RecordNotFound.

Original code:

def search_record(id):
    f = open('students.txt', 'r')
    for line in f:
        fields = line.split(',')
        if fields[0] == id:
            f.close()
            return line
    f.close()
    return "Not found"

Issues identified:

  • No handling of FileNotFoundError – run‑time fault.
  • Comparison uses string id vs. integer – logic fault.
  • Returns a generic string instead of raising an exception – reduces maintainability.
  • File is opened without a with statement – risk of resource leak.

Amended version (with ADT StudentRecord):

class RecordNotFound(Exception):
    pass

class StudentRecord:
    """Simple ADT to hold a student record."""
    def __init__(self, sid, name, score):
        self.sid   = int(sid)
        self.name  = name
        self.score = float(score)

    def __str__(self):
        return f"{self.sid},{self.name},{self.score}"

def search_record(sid):
    """Return a StudentRecord for the given ID or raise RecordNotFound."""
    try:
        with open('students.txt', 'r') as f:
            for line in f:
                fields = line.strip().split(',')
                if int(fields[0]) == int(sid):
                    return StudentRecord(*fields)
        raise RecordNotFound(f"ID {sid} not found")
    except FileNotFoundError:
        raise FileNotFoundError("Data file 'students.txt' is missing")

Additional test cases (illustrating black‑box and exception handling):

Test ID | Purpose | Input(s) | Expected Output | Result | Comments
TC10 | Valid ID exists | sid = 123 | StudentRecord with ID 123 | Pass | –
TC11 | Non‑existent ID | sid = 999 | RecordNotFound exception | Pass | –
TC12 | Missing data file | sid = 123 (file removed) | FileNotFoundError exception | Pass | –
TC13 | Boundary ID (first line) | sid = 001 | StudentRecord with ID 1 | Pass | –
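These cases could be automated with pytest roughly as follows; the sketch assumes the amended code is saved as search_record.py (an assumed name) and uses pytest's tmp_path and monkeypatch fixtures to control the presence and contents of students.txt.

import pytest
from search_record import search_record, RecordNotFound   # assumed module name

SAMPLE = "001,Amy,67\n123,Ben,82\n"

def make_data_file(tmp_path, monkeypatch, contents):
    """Write a temporary students.txt and run the test from that directory."""
    (tmp_path / "students.txt").write_text(contents)
    monkeypatch.chdir(tmp_path)

def test_tc10_valid_id(tmp_path, monkeypatch):
    make_data_file(tmp_path, monkeypatch, SAMPLE)
    assert search_record(123).name == "Ben"

def test_tc11_missing_id(tmp_path, monkeypatch):
    make_data_file(tmp_path, monkeypatch, SAMPLE)
    with pytest.raises(RecordNotFound):
        search_record(999)

def test_tc12_missing_file(tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)            # no students.txt created
    with pytest.raises(FileNotFoundError):
        search_record(123)

def test_tc13_boundary_first_line(tmp_path, monkeypatch):
    make_data_file(tmp_path, monkeypatch, SAMPLE)
    assert search_record("001").sid == 1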

Documentation and Maintenance Log

Date: 2025‑11‑22
Module: factorial.py
Change: Added input validation, corrected loop bounds, added recursive version.
Reason: Failed test case TC02 (off‑by‑one) and new requirement for negative‑input handling.
Tester: A. Smith
Result: All test cases now pass; regression testing confirmed no breakage.

Date: 2025‑12‑03
Module: search_record.py
Change: Implemented with‑statement, added RecordNotFound exception, introduced StudentRecord ADT, added file‑not‑found handling.
Reason: Multiple run‑time faults identified during integration testing (TC10‑TC13).
Tester: B. Lee
Result: All new test cases pass; existing system tests unchanged.

Checklist for a Successful Amendment

  • Requirement clearly understood and recorded.
  • Design impact assessed (dependencies, interfaces, test plan).
  • Code follows naming conventions, includes comments and uses Cambridge terminology.
  • All relevant test cases (old and new) executed – regression testing passed.
  • Documentation updated (code comments, design diagrams, maintenance log, test plan).
  • Peer review or walkthrough performed where possible.

Suggested Diagram

Testing cycle flowchart – Write test → Run test → Record result → Fix fault → Re‑run test → Pass.

Summary

Effective testing and maintenance require a systematic approach:

  • Know the terminology (AO1) and coverage measures.
  • Select the appropriate testing method using the decision‑guide.
  • Produce a concise test strategy and a filled‑in test‑plan template.
  • Analyse the existing code, classify faults and choose the correct maintenance category.
  • Apply the 7‑step amendment workflow, extend the test suite and perform regression testing.
  • Document every change in a maintenance log and, where possible, obtain a peer review.

Following this structured process enables students to meet the Cambridge 12.3 learning outcomes and to excel in both AO2 (analysis) and AO3 (design, implementation and evaluation) assessment tasks.
