Program Testing and Maintenance
Learning Objective
Show understanding of ways of exposing and avoiding faults in programs (Cambridge syllabus 12.3).
Key Terminology (Cambridge wording)
- Fault (defect): A flaw in the design or source code that may cause a failure when the program is executed.
- Error: A human mistake (e.g., a logic mistake in an algorithm) that introduces a fault.
- Failure: The observable incorrect behaviour of a program caused by a fault that is executed.
Types of Faults – AO Mapping
| Fault type | Typical cause | Example | Relevant AO(s) |
|---|---|---|---|
| Syntax fault | Incorrect use of the programming‑language grammar. | Missing semicolon, mismatched brackets. | AO1 – recognise the fault; AO2 – analyse why the compiler rejects it. |
| Logic fault | Wrong algorithmic reasoning or misplaced condition. | Using “>” instead of “>=” in a range check. | AO2 – analyse the algorithm; AO3 – propose a correction. |
| Run‑time fault | An operation or resource is handled incorrectly during execution. | Division by zero, null‑pointer dereference. | AO2 – analyse the failure; AO3 – design a fix (e.g., input validation). |
Fault‑identification checklist (mirrors syllabus wording)
- Identify the observed problem (failure).
- Locate the part of the program where the problem occurs (e.g., line number, module).
- Classify the fault (syntax, logic, run‑time).
- Explain why the fault causes the failure (analysis).
- Propose a concrete correction and, where appropriate, a test to confirm the fix.
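As an illustration, the sketch below applies the checklist to the logic fault from the table above. The pass‑mark routine and the value 50 are hypothetical; the point is the sequence of classify, explain, correct and confirm with a test.

```python
# Hypothetical pass-mark check illustrating a logic fault:
# a score of exactly 50 should pass, but ">" excludes the boundary value.
def is_pass(score):
    return score > 50        # fault: should be >= 50


def is_pass_corrected(score):
    return score >= 50       # correction


# Confirming test: the boundary value exposes the fault and verifies the fix.
assert is_pass(50) is False            # failure observed with the faulty version
assert is_pass_corrected(50) is True   # fix confirmed
```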
Testing Overview
Static vs Dynamic Testing
- Static testing: Examine artefacts (source code, design documents, test specifications) without executing the program.
- Techniques: code reviews, walkthroughs, inspections, static‑analysis tools.
- Primarily AO1 – recognise potential faults.
- Dynamic testing: Run the program with selected inputs and observe the output or behaviour.
- Can be performed manually or with automated test harnesses.
- Addresses AO2 (analyse) and AO3 (evaluate/correct).
Automated vs Manual Testing (syllabus note)
- Automated testing: Test scripts or frameworks execute test cases repeatedly (e.g., JUnit, pytest). Benefits: repeatability, regression testing, coverage measurement.
- Manual testing: Human tester selects inputs, observes results, and records outcomes. Useful for exploratory, usability or ad‑hoc testing.
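For illustration, the sketch below is a minimal pytest‑style automated test. The apply_discount function, its test values and the file name test_discount.py are assumptions made for the example, not part of the syllabus.

```python
# test_discount.py -- a minimal pytest sketch (function and values are illustrative).
import pytest


def apply_discount(price, percent):
    """Hypothetical routine under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 0, 100.0),      # boundary: no discount
    (100.0, 100, 0.0),      # boundary: full discount
    (80.0, 25, 60.0),       # typical value
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 101)
```

Running `pytest` re-executes every case automatically, which is what makes regression testing cheap to repeat.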
White‑box, Black‑box and Grey‑box Testing
| Technique | What guides test‑case selection? | Typical AO(s) addressed |
|---|---|---|
| White‑box (structural) | Knowledge of internal code – statements, branches, paths. | AO2 – analyse computational terms; AO3 – evaluate solutions. |
| Black‑box (functional) | Specification, user requirements, functional description. | AO2 – analyse problems; AO3 – evaluate against requirements. |
| Grey‑box | Limited internal knowledge (e.g., module interfaces) combined with functional specs. | Both AO2 & AO3 – analysis plus evaluation. |
Test Levels – Matrix with AO Mapping
| Test level | Purpose | Typical scope | Key AO(s) |
|---|---|---|---|
| Unit testing | Verify a single routine, method or class works correctly. | One function or class in isolation. | AO2 – analyse the unit; AO3 – evaluate its correctness. |
| Integration testing | Check that two or more units interact correctly. | Combined modules (e.g., file‑I/O + data‑processing). | AO2 – analyse interfaces; AO3 – evaluate combined behaviour. |
| System testing | Validate the complete, integrated system against the specification. | Whole application in a controlled environment. | AO3 – evaluate the whole solution. |
| Acceptance testing | Confirm the system meets the user’s real‑world needs before delivery. | Real‑world scenarios, often performed by end‑users. | AO3 – evaluate suitability for intended users. |
Designing Effective Test Cases
- Equivalence Partitioning: Divide the input domain into classes that are expected to behave alike; select one representative value from each class.
- Boundary Value Analysis: Test values at, just inside and just outside the limits of each equivalence class.
- Decision‑Table Testing – worked example:
| Rule | Username valid? | Password valid? |
|---|---|---|
| C1 | Yes | Yes |
| C2 | Yes | No |
| C3 | No | Yes |
| C4 | No | No |
Actions derived from the table:
- C1 → grant access
- C2 or C3 → reject – “invalid credentials”
- C4 → reject – “username and password required”
Four test cases cover all possible condition combinations (a code sketch follows this list).
- State‑Transition Testing: Model the system as a finite‑state machine and create test cases that traverse every transition (e.g., login → logged‑in → logout → logged‑out).
- Use‑Case Testing: Derive test cases from typical user scenarios described in the use‑case diagram or narrative.
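A minimal sketch of the decision‑table example above, assuming a hypothetical check_login routine; each assertion corresponds to one rule (C1–C4) and checks the action derived from the table.

```python
# Hypothetical login check derived from the decision table above.
def check_login(username_valid, password_valid):
    if username_valid and password_valid:
        return "grant access"                      # C1
    if not username_valid and not password_valid:
        return "username and password required"    # C4
    return "invalid credentials"                   # C2 or C3


# Four test cases -- one per rule of the decision table.
assert check_login(True, True) == "grant access"
assert check_login(True, False) == "invalid credentials"
assert check_login(False, True) == "invalid credentials"
assert check_login(False, False) == "username and password required"
```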
Mini‑Exercise – Applying EP and BVA
Given the pseudo‑code below, produce test cases using equivalence partitioning and boundary analysis.
function grade(score):
    if score < 0 or score > 100:
        return "invalid"
    else if score >= 70:
        return "A"
    else if score >= 50:
        return "B"
    else:
        return "C"
Expected answer:
- Equivalence classes: invalid‑low (score < 0), invalid‑high (score > 100), A (70‑100), B (50‑69), C (0‑49).
- Boundary values: 0, 49, 50, 69, 70, 100, 101 (and –1 for the low side).
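One possible answer expressed in code: a Python translation of the pseudo‑code (an assumption about the intended behaviour), with one representative per equivalence class and the boundary values written as assertions.

```python
# Python translation of the grade() pseudo-code from the mini-exercise.
def grade(score):
    if score < 0 or score > 100:
        return "invalid"
    elif score >= 70:
        return "A"
    elif score >= 50:
        return "B"
    else:
        return "C"


# One representative per equivalence class...
assert grade(-1) == "invalid"    # invalid-low
assert grade(101) == "invalid"   # invalid-high
assert grade(85) == "A"
assert grade(60) == "B"
assert grade(25) == "C"

# ...and the boundary values 0, 49, 50, 69, 70, 100.
assert [grade(s) for s in (0, 49, 50, 69, 70, 100)] == ["C", "C", "B", "B", "A", "A"]
```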
Test Strategy & Test Plan – What to Include
A concise test plan demonstrates that the student can select appropriate test data, describe the testing approach and anticipate the evidence needed. The plan should contain:
- Scope: Which modules, features or requirements are covered.
- Objectives: What faults the plan aims to expose (e.g., boundary errors, interface mismatches).
- Test levels & techniques: Specify unit, integration, system and/or acceptance testing and whether white‑box, black‑box or grey‑box methods are used.
- Test data selection: Explain the use of equivalence partitions, boundary values, decision tables, etc.
- Test environment: Hardware, OS, compiler/interpreter, any required libraries or tools (including automated test frameworks).
- Pass/fail criteria: Expected output or behaviour for each test case.
- Schedule & resources: Approximate time, personnel and tools required.
- Risk & mitigation: Identify high‑risk areas and additional testing that will be performed.
Metrics for Test Coverage
Coverage quantifies how much of the program has been exercised by the test suite.
Statement coverage:
$C_{stmt} = \frac{\text{Number of statements executed}}{\text{Total number of statements}} \times 100\%$
Branch (decision) coverage:
$C_{branch} = \frac{\text{Number of decision outcomes exercised}}{\text{Total number of decision outcomes}} \times 100\%$
Sample Calculation
Consider the fragment:
1  if (x > 0)      // decision D1
2      y = 10;
3  else
4      y = -10;
5  if (y > 5)      // decision D2
6      z = y * 2;
7  else
8      z = y / 2;
Test suite:
- Test A: x = 1 → executes 1‑2‑5‑6 (D1‑true, D2‑true).
- Test B: x = -1 → executes 1‑3‑4‑5‑7‑8 (D1‑false, D2‑false).
Results:
- All 8 statements executed → 100 % statement coverage.
- Both outcomes of D1 and D2 exercised → 100 % branch coverage.
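The same fragment rendered as a Python function (an illustrative translation, not from the syllabus). Running Test A and Test B executes every statement and both outcomes of D1 and D2; a tool such as coverage.py would report the percentages directly.

```python
# Python rendering of the fragment; the two tests give 100% statement
# and 100% branch coverage.
def fragment(x):
    if x > 0:          # decision D1
        y = 10
    else:
        y = -10
    if y > 5:          # decision D2
        z = y * 2
    else:
        z = y / 2
    return z


assert fragment(1) == 20    # Test A: D1-true, D2-true
assert fragment(-1) == -5   # Test B: D1-false, D2-false
```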
Exposing Faults
- Code reviews & walkthroughs: Peer inspection of design and source code to spot faults early (static testing).
- Static analysis tools: Automated scanners that flag common defects (unused variables, possible null dereference, style violations).
- Debugging: Use break‑points, watch variables and step‑through execution to locate the source of a failure.
- Profiling: Measure runtime characteristics (time, memory) to uncover performance‑related faults.
- Automated test suites & regression testing: Re‑run the full set of tests after any change to catch newly introduced faults.
Avoiding Faults – “Why It Matters”
- Good design practices: Clear modular structure and well‑defined interfaces limit the impact of a change, reducing logic faults.
- Structured programming: Using only sequence, selection and iteration constructs keeps code readable and easier to reason about.
- Coding standards: Consistent naming, indentation and commenting avoid misunderstandings that can lead to syntax or logic faults.
- Defensive programming: Validate all inputs and handle exceptional conditions to prevent run‑time faults such as crashes or security breaches (see the sketch after this list).
- Code reuse & libraries: Re‑using well‑tested components lowers the probability of introducing new faults.
- Comprehensive documentation: Up‑to‑date design and API documents ensure developers understand intended behaviour.
- Version control: Tracks changes, enables rollback and supports collaborative development, helping to locate the introduction of a fault quickly.
- Test‑driven development (TDD): Writing tests before code clarifies requirements, forces small, testable units and catches faults at the earliest stage.
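The sketch below illustrates the defensive‑programming point: validate inputs before use so that a bad value is rejected cleanly instead of causing a run‑time failure later. The record_grade function and its messages are illustrative assumptions.

```python
# Minimal defensive-programming sketch: check inputs before storing them.
def record_grade(grades, student_id, grade):
    if not isinstance(grade, int):
        raise TypeError("grade must be an integer")
    if grade < 0 or grade > 100:
        raise ValueError("grade must be in the range 0-100")
    grades[student_id] = grade


grades = {}
record_grade(grades, "S001", 72)      # valid input accepted
try:
    record_grade(grades, "S002", -5)  # invalid input rejected immediately
except ValueError as err:
    print("Rejected:", err)
```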
Maintenance Types – Illustrated Case Study
Scenario: A simple student‑record system has been in use for three years.
- Corrective maintenance – A bug: entering a negative grade crashes the program.
- Fix: add a check: if grade < 0 then reject (a code sketch follows this case study).
- Testing: re‑test the grade‑entry module and run the full regression suite.
- Adaptive maintenance – The school upgrades to a new operating system.
- Fix: re‑compile with the new compiler, replace deprecated library calls.
- Testing: integration testing of the file‑I/O module on the new OS.
- Perfective maintenance – Users request faster search of student records.
- Fix: replace linear search with a binary‑search algorithm.
- Testing: add performance test cases; verify correctness with existing unit tests.
- Preventive maintenance – Code review reveals duplicated validation code.
- Fix: refactor validation into a single helper routine and update documentation.
- Testing: run the full regression suite; update the design diagram.
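A sketch of the corrective‑maintenance fix and its testing. The enter_grade function is hypothetical, but it shows the added check, the re‑test of the originally failing input, and a regression check that valid input still works.

```python
# Corrective-maintenance sketch for the case study: reject negative grades.
def enter_grade(record, grade):
    if grade < 0:                       # the added check
        raise ValueError("grade cannot be negative")
    record["grade"] = grade
    return record


# Re-test: the input that originally crashed the program now fails cleanly...
try:
    enter_grade({}, -3)
except ValueError:
    print("negative grade rejected")

# ...and a regression check confirms previously working behaviour is unchanged.
assert enter_grade({}, 65) == {"grade": 65}
```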
Testing During Maintenance
- Regression testing: Re‑execute the full set of existing test cases after any change to ensure no previously‑working functionality has broken.
- Re‑testing: Run the specific test case that originally failed to confirm the fault is now fixed.
- Impact analysis: Identify which modules, functions or data structures may be affected by a change and concentrate testing effort on those areas.
Summary Checklist (Student Revision)
- Identify potential fault sources (design, code, requirements) and use the fault‑identification checklist.
- Choose the appropriate test level (unit → integration → system → acceptance) and technique (static, dynamic, white‑box, black‑box, grey‑box).
- Design test cases using equivalence partitioning, boundary analysis, decision tables, state‑transition or use‑case techniques.
- Calculate statement and branch coverage; aim for high percentages and record the results.
- Carry out static testing (reviews, analysis) before dynamic testing.
- Apply defensive programming, coding standards, structured design, version control and, where appropriate, TDD to avoid faults.
- Maintain up‑to‑date documentation and use version control for traceability.
- During maintenance, perform impact analysis, regression testing and update test suites as required.