12.3 Program Testing and Maintenance

Learning Objective (linked to Assessment Objectives)

Show understanding of the methods of testing available and select appropriate data for a given method.

  • AO1 – Knowledge & Understanding: define test levels, testing approaches and data‑selection techniques.
  • AO2 – Application: analyse a specification and decide which testing level, approach and data‑selection technique is most suitable.
  • AO3 – Evaluation: design test artefacts (test cases, test plan) and justify the choices made.

1. Why Test?

  • Detect faults before the software is released.
  • Confirm the program meets its specification.
  • Give confidence that future changes will not introduce new errors.
  • Testing is continuous – it starts during development and continues throughout maintenance.

2. Fault‑Avoidance vs Fault‑Exposure

Fault‑avoidance techniques are applied during design and coding; fault‑exposure techniques are applied during testing.

| Fault‑Avoidance (Design / Coding) | Fault‑Exposure (Testing) |
| --- | --- |
| Static analysis & code reviews | Unit testing – test each component in isolation. |
| Defensive programming (input validation, assertions, clear error messages) | Integration testing – check interactions between components. |
| Use of standard libraries & design patterns | System testing – validate the whole system against requirements. |
| Pair programming, peer reviews | Regression testing – re‑run previously successful tests after any change. |
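
To make the defensive‑programming entry above concrete, here is a minimal sketch (the Account class and its withdraw method are hypothetical, not taken from any syllabus specification): inputs are validated with clear error messages, and an assertion records an internal assumption that should never be violated.

```java
// Hypothetical example of defensive programming: validate inputs and state
// assumptions explicitly instead of letting bad data propagate.
public class Account {
    private double balance;

    public Account(double openingBalance) {
        if (openingBalance < 0) {                       // input validation
            throw new IllegalArgumentException("Opening balance cannot be negative");
        }
        this.balance = openingBalance;
    }

    public void withdraw(double amount) {
        if (amount <= 0) {                              // reject invalid input with a clear message
            throw new IllegalArgumentException("Withdrawal amount must be positive");
        }
        if (amount > balance) {
            throw new IllegalStateException("Insufficient funds");
        }
        balance -= amount;
        assert balance >= 0 : "balance must never go negative";  // internal assumption (enable with -ea)
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account acc = new Account(100.0);
        acc.withdraw(30.0);
        System.out.println(acc.getBalance());           // 70.0
        acc.withdraw(500.0);                            // throws IllegalStateException
    }
}
```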

3. Types of Errors

| Error Type | Description | Typical Example |
| --- | --- | --- |
| Syntax error | Violates the language grammar; the program will not compile/run. | Missing semicolon in Java: int x = 5 instead of int x = 5; |
| Logic error | Program runs but produces incorrect results. | Using < instead of <= in a range check. |
| Run‑time error | Occurs while the program is executing (exceptions, crashes). | Division by zero, ArrayIndexOutOfBoundsException in Java. |
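
A syntax error cannot be demonstrated in code that compiles, but the hypothetical fragment below illustrates the other two error types from the table: the logic error compiles and runs yet returns the wrong answer, while the run‑time error only appears when the faulty statement executes.

```java
public class ErrorTypes {
    // Logic error: marks from 0 to 100 inclusive should be valid,
    // but "< 100" wrongly rejects a mark of exactly 100.
    static boolean isValidMark(int mark) {
        return mark >= 0 && mark < 100;   // should be mark <= 100
    }

    public static void main(String[] args) {
        System.out.println(isValidMark(100));   // prints false: wrong result, no crash

        int[] marks = {55, 72, 90};
        System.out.println(marks[3]);           // run-time error: ArrayIndexOutOfBoundsException
    }
}
```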

4. Debugging Checklist (AO3 – Evaluation)

  1. Read the error message or stack trace carefully.
  2. Reproduce the fault with the smallest possible test case.
  3. Dry‑run the algorithm on paper (trace values step‑by‑step).
  4. Insert temporary print / log statements to display variable values (see the sketch after this list).
  5. Use an IDE debugger: set break‑points, step through code, inspect watches.
  6. Check assumptions (input ranges, initialisation, pre‑conditions).
  7. After fixing, re‑run all relevant tests (regression).
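
A minimal sketch of step 4, assuming a hypothetical average method suspected of an off‑by‑one fault; the temporary DEBUG output makes the wrong loop bound visible.

```java
public class DebugDemo {
    // Suspected faulty method: should average all the values in the array.
    static double average(int[] values) {
        int total = 0;
        for (int i = 0; i < values.length - 1; i++) {   // bug: last element is skipped
            total += values[i];
            // Temporary debug output (remove once the fault is fixed)
            System.out.println("DEBUG: i=" + i + " values[i]=" + values[i] + " total=" + total);
        }
        return (double) total / values.length;
    }

    public static void main(String[] args) {
        System.out.println(average(new int[]{10, 20, 30}));  // prints 10.0, expected 20.0
    }
}
```

The trace shows the loop stopping at i = 1, so the last element is never added; once the bound is corrected, the debug lines are removed and the relevant tests are re‑run (step 7).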

5. Levels of Testing (AO2 – Application)

  • Unit (Component) Testing – tests a single module or function in isolation (see the sketch after this list).
  • Integration Testing – checks interactions between combined units.
  • System Testing – validates the complete, integrated system against the specification.
  • Acceptance Testing – performed by the client/end‑user to confirm fitness for purpose.
  • Regression Testing – re‑runs previously successful tests after any change.
  • Maintenance Testing – testing carried out during corrective, adaptive, perfective or preventive maintenance (see §9).
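
To show what "in isolation" means at the unit level, the sketch below (all names hypothetical) tests a DiscountCalculator on its own by substituting a hard‑coded stub for the price‑lookup component it normally depends on; integration testing would later repeat the check using the real component.

```java
// Hypothetical unit-test sketch: the unit under test is isolated from its
// real dependency by a stub, so a failure points at DiscountCalculator itself.
interface PriceLookup {
    double priceOf(String productCode);
}

class DiscountCalculator {
    private final PriceLookup prices;

    DiscountCalculator(PriceLookup prices) {
        this.prices = prices;
    }

    // 10% discount on orders of 5 or more items.
    double totalFor(String productCode, int quantity) {
        double total = prices.priceOf(productCode) * quantity;
        return (quantity >= 5) ? total * 0.9 : total;
    }
}

public class UnitTestDemo {
    public static void main(String[] args) {
        PriceLookup stub = code -> 2.00;                 // stub replaces the real price database
        DiscountCalculator calc = new DiscountCalculator(stub);

        check("no discount below 5 items", calc.totalFor("A1", 4), 8.00);
        check("discount applied at 5 items", calc.totalFor("A1", 5), 9.00);
    }

    static void check(String name, double actual, double expected) {
        System.out.println((Math.abs(actual - expected) < 1e-9 ? "PASS: " : "FAIL: ") + name);
    }
}
```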

6. Testing Approaches

| Approach | When to Use | Key Techniques |
| --- | --- | --- |
| Black‑Box (Functional) | No knowledge of internal code – early development, acceptance testing. | Equivalence Partitioning, Boundary‑Value Analysis, Decision Tables, State‑Transition, Use‑Case/Scenario. |
| White‑Box (Structural) | Source code is available – aim for thorough structural coverage. | Statement, Branch, Path coverage; control‑flow graphs; condition coverage. |
| Grey‑Box | Partial knowledge (e.g., module interfaces) – a balance of depth and efficiency. | Combination of black‑box test design plus targeted white‑box checks. |

Decision‑Tree for Choosing an Approach (exam‑friendly)

  1. Do you have access to the source code?

    • No → Black‑Box
    • Yes → go to 2

  2. Is the aim to prove structural correctness (e.g., 100 % statement coverage) or to find hidden logic errors?

    • Structural correctness → White‑Box
    • Hidden logic errors or limited time → Grey‑Box

7. Data‑Selection Techniques (AO2 – Application)

| Technique | Purpose | Typical Data Chosen |
| --- | --- | --- |
| Equivalence Partitioning | Group inputs that are expected to behave the same. | One representative from each valid and invalid class. |
| Boundary‑Value Analysis | Target the edges of each equivalence class, where errors are common. | Minimum, just‑above minimum, nominal, just‑below maximum, maximum. |
| Decision Table Testing | Model complex business rules as combinations of conditions and actions. | Rows for each unique combination of condition outcomes. |
| State‑Transition Testing | Validate behaviour of systems that change state in response to events. | Sequences that cause transitions through all states and back to the start. |
| Use‑Case / Scenario Testing | Check that typical user interactions produce correct results. | End‑to‑end sequences drawn from real‑world usage scenarios. |
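
As a concrete illustration of Boundary‑Value Analysis, the helper below (hypothetical, not required by the syllabus) generates the usual boundary candidates for an inclusive numeric range; for the 0–100 mark range of §13 it produces -1, 0, 1, 50, 99, 100 and 101.

```java
import java.util.ArrayList;
import java.util.List;

public class BoundaryValues {
    // Returns the usual boundary-value candidates for an inclusive range [min, max]:
    // the invalid values just outside, the boundaries themselves, the values just
    // inside, and a nominal mid-range value.
    static List<Integer> forRange(int min, int max) {
        List<Integer> values = new ArrayList<>();
        values.add(min - 1);          // invalid, just below the range
        values.add(min);              // lower boundary
        values.add(min + 1);          // just above the lower boundary
        values.add((min + max) / 2);  // nominal value
        values.add(max - 1);          // just below the upper boundary
        values.add(max);              // upper boundary
        values.add(max + 1);          // invalid, just above the range
        return values;
    }

    public static void main(String[] args) {
        // For the 0-100 mark range: [-1, 0, 1, 50, 99, 100, 101]
        System.out.println(forRange(0, 100));
    }
}
```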

8. Mapping Test Levels to Data‑Selection Techniques

| Testing Level | Recommended Data‑Selection Technique(s) |
| --- | --- |
| Unit Testing | Equivalence Partitioning, Boundary‑Value Analysis, White‑Box coverage (statement/branch). |
| Integration Testing | Decision Tables (for interface contracts), State‑Transition (for protocol handling). |
| System Testing | Use‑Case / Scenario testing, Decision Tables for end‑to‑end business rules. |
| Acceptance Testing | Use‑Case / Scenario testing (real user stories). |
| Regression Testing | Reuse test cases from the level(s) being protected; focus on previously failed areas. |
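
State‑Transition testing appears above for integration testing. The hypothetical order‑status machine below shows the kind of test data this technique selects: an event sequence that drives the system through every state, plus an invalid event that must be rejected.

```java
public class OrderStateMachine {
    enum State { NEW, PAID, SHIPPED, DELIVERED }

    private State state = State.NEW;

    State getState() { return state; }

    // Applies an event; events not allowed in the current state are rejected.
    void apply(String event) {
        switch (state) {
            case NEW:
                if (event.equals("pay")) { state = State.PAID; return; }
                break;
            case PAID:
                if (event.equals("ship")) { state = State.SHIPPED; return; }
                break;
            case SHIPPED:
                if (event.equals("deliver")) { state = State.DELIVERED; return; }
                break;
            default:
                break;
        }
        throw new IllegalStateException("Event '" + event + "' not allowed in state " + state);
    }

    public static void main(String[] args) {
        // Test data 1: an event sequence that visits every state in order.
        OrderStateMachine order = new OrderStateMachine();
        order.apply("pay");
        order.apply("ship");
        order.apply("deliver");
        System.out.println("Final state: " + order.getState());   // DELIVERED

        // Test data 2: an invalid transition, which should be rejected.
        try {
            new OrderStateMachine().apply("ship");                // ship before pay
            System.out.println("FAIL: invalid transition was accepted");
        } catch (IllegalStateException expected) {
            System.out.println("PASS: invalid transition rejected");
        }
    }
}
```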

9. Coverage Metrics (White‑Box) – AO3

Metrics show how much of the program’s structure has been exercised.

  • Statement Coverage – % of executable statements run.

    Statement Coverage = (Executed statements ÷ Total statements) × 100 %

  • Branch (Decision) Coverage – % of true/false outcomes taken.

    Branch Coverage = (Executed decision outcomes ÷ Total decision outcomes) × 100 %

  • Path Coverage – % of possible execution paths traversed (usually impractical for large programs).
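
The difference between statement and branch coverage is easiest to see on a small, hypothetical fragment. Running only the first test below executes every statement (statement coverage = 3 ÷ 3 × 100 % = 100 %) but takes only the true outcome of the decision (branch coverage = 1 ÷ 2 × 100 % = 50 %); the second test is needed to cover the false outcome.

```java
public class CoverageDemo {
    static String classify(int mark) {
        String result = "valid";       // statement 1
        if (mark < 0) {                // decision with two outcomes (true / false)
            result = "invalid";        // statement 2
        }
        return result;                 // statement 3
    }

    public static void main(String[] args) {
        // Test 1 alone: all three statements run -> 100% statement coverage,
        // but only the true branch of the if is taken -> 50% branch coverage.
        System.out.println(classify(-1));

        // Adding Test 2 takes the false branch as well -> 100% branch coverage.
        System.out.println(classify(60));
    }
}
```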

Generating coverage in the exam languages:

  • Java – coverage tools such as JaCoCo or Cobertura (available as IDE or build‑tool plugins).
  • Python – coverage.py (run coverage run … then coverage report).
  • VB.NET – Visual Studio’s built‑in Code Coverage tool (Test → Analyze Code Coverage).

10. Maintenance Types and Associated Testing (AO2)

| Maintenance Type | Purpose | Typical Testing Activities |
| --- | --- | --- |
| Corrective | Fix faults discovered after delivery. | Targeted unit tests for the corrected component + full regression suite. |
| Adaptive | Modify software to work in a changed environment (OS, hardware, regulations). | Integration testing of affected interfaces; system testing of the whole application in the new environment. |
| Perfective | Enhance performance, add new features, improve usability. | Use‑case / scenario tests for new features; regression tests for unchanged functionality. |
| Preventive | Improve maintainability (refactoring, code clean‑up). | Extensive regression testing; increase white‑box coverage (statement/branch) to ensure behaviour is unchanged. |

11. Test Strategy & Test Plan (AO3 – Design & Evaluation)

11.1 Test‑Strategy Elements

  • Objectives – what we aim to prove (fault detection, compliance, performance).
  • Scope – which components, features and interfaces are in‑scope / out‑of‑scope.
  • Approach – black‑box, white‑box or grey‑box; manual vs automated.
  • Risk Assessment – high‑risk areas receive deeper testing.

11.2 Test‑Plan Template

| Section | Content to Include |
| --- | --- |
| 1. Introduction | Purpose, reference documents, definitions. |
| 2. Test Items | List of modules, features, or user stories to be tested. |
| 3. Test Scope | In‑scope items, out‑of‑scope items, assumptions. |
| 4. Test Approach | Levels (unit, integration, system, acceptance, regression, maintenance), approaches (black/white/grey), data‑selection techniques. |
| 5. Test Resources | Personnel, hardware, software tools (IDE, coverage tools), test data. |
| 6. Schedule | Milestones, start/end dates for each testing level. |
| 7. Entry / Exit Criteria | Conditions to start testing (e.g., code compiled) and to stop (e.g., 95 % statement coverage, all critical defects fixed). |
| 8. Deliverables | Test cases, test scripts, test reports, defect logs, coverage reports. |

12. Selecting an Appropriate Testing Method – Step‑by‑Step (AO2)

  1. Identify the required testing level (unit, integration, system, acceptance, regression, maintenance).
  2. Determine knowledge of the code:

    • No internal knowledge → Black‑Box.
    • Source code available → White‑Box (or Grey‑Box if only parts are known).

  3. Match the nature of the input domain to a data‑selection technique (see §7).
  4. Consider cost‑benefit and risk:

    • Safety‑critical modules → aim for high statement and branch coverage.
    • Low‑risk modules → representative black‑box tests may be sufficient.

  5. Document the test plan and ensure entry/exit criteria are met before moving to the next level.

13. Worked Example – Choosing Test Data for String grade(int mark)

Specification

  • Valid range: 0 ≤ mark ≤ 100
  • Grades:

    • 0–39 → “F”
    • 40–49 → “D”
    • 50–59 → “C”
    • 60–69 → “B”
    • 70–100 → “A”
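
One possible Java implementation of this specification (a sketch only, shown so that the test data chosen in §13.1 and §13.2 has something concrete to run against):

```java
public class Grading {
    // Returns the grade letter for a mark, or "Invalid" for marks outside 0-100.
    static String grade(int mark) {
        if (mark < 0 || mark > 100) return "Invalid";
        if (mark <= 39) return "F";
        if (mark <= 49) return "D";
        if (mark <= 59) return "C";
        if (mark <= 69) return "B";
        return "A";                    // 70-100
    }
}
```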

13.1 Technique 1 – Equivalence Partitioning + Boundary‑Value Analysis (Black‑Box)

| Partition | Test Values (Boundary + Typical) |
| --- | --- |
| Invalid low | -1 |
| Invalid high | 101 |
| F (0–39) | 0, 1, 39 |
| D (40–49) | 40, 45, 49 |
| C (50–59) | 50, 55, 59 |
| B (60–69) | 60, 65, 69 |
| A (70–100) | 70, 85, 100 |
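
A table‑driven test that encodes the values above, assuming the Grading sketch shown earlier in this section; a plain main method is used so the example is self‑contained, although in practice a framework such as JUnit would normally hold these cases.

```java
public class GradeTests {
    public static void main(String[] args) {
        // Test data from the equivalence partitioning + boundary-value table above:
        // {input mark, expected grade}
        Object[][] cases = {
            {-1, "Invalid"}, {101, "Invalid"},
            {0, "F"}, {1, "F"}, {39, "F"},
            {40, "D"}, {45, "D"}, {49, "D"},
            {50, "C"}, {55, "C"}, {59, "C"},
            {60, "B"}, {65, "B"}, {69, "B"},
            {70, "A"}, {85, "A"}, {100, "A"}
        };

        for (Object[] c : cases) {
            int mark = (Integer) c[0];
            String expected = (String) c[1];
            String actual = Grading.grade(mark);
            System.out.printf("mark=%4d expected=%-7s actual=%-7s %s%n",
                    mark, expected, actual, expected.equals(actual) ? "PASS" : "FAIL");
        }
    }
}
```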

13.2 Technique 2 – Decision Table (Black‑Box, useful for complex rules)

Conditions: mark < 0, 0 ≤ mark ≤ 39, 40 ≤ mark ≤ 49, 50 ≤ mark ≤ 59, 60 ≤ mark ≤ 69, 70 ≤ mark ≤ 100, mark > 100. Actions: return “Invalid”, “F”, “D”, “C”, “B”, “A”.

| Rule | mark < 0 | 0–39 | 40–49 | 50–59 | 60–69 | 70–100 | mark > 100 | Action |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Y |  |  |  |  |  |  | Invalid |
| 2 |  | Y |  |  |  |  |  | F |
| 3 |  |  | Y |  |  |  |  | D |
| 4 |  |  |  | Y |  |  |  | C |
| 5 |  |  |  |  | Y |  |  | B |
| 6 |  |  |  |  |  | Y |  | A |
| 7 |  |  |  |  |  |  | Y | Invalid |

From the decision table we select at least one representative value for each rule – e.g., -5 (Rule 1), 20 (Rule 2), 45 (Rule 3), 55 (Rule 4), 65 (Rule 5), 85 (Rule 6), 150 (Rule 7).

13.3 What the Example Demonstrates

  • How to choose two different techniques for the same specification (a requirement of the syllabus).
  • How each technique leads to a clear set of test cases that can be recorded in a test case table (AO3).
  • How the chosen technique matches the testing level – here unit testing with black‑box data selection.

14. Summary Checklist (AO3)

  • Identify the testing level → choose the matching data‑selection technique(s).
  • Decide on the testing approach (black/white/grey) using the decision‑tree.
  • Generate test cases (include at least two techniques where required).
  • Apply appropriate coverage metrics for white‑box testing.
  • When code changes, run regression tests and update the test plan.
  • Document each test case: description, input data, expected result, actual result, pass/fail.
  • During maintenance, map the maintenance type to its typical testing activities (see §10).

Suggested diagram: Testing lifecycle – unit → integration → system → acceptance, with regression loops after each maintenance activity.