Cambridge A-Level Computer Science 9618 – 12.3 Program Testing and Maintenance

Learning Objective

Show understanding of the methods of testing available and select appropriate data for a given method.

1. Why Test?

Testing aims to discover faults, verify that the program meets its specifications, and provide confidence that future changes will not introduce new errors. It is a crucial part of the software development life‑cycle and continues into the maintenance phase.

2. Types of Testing

  • Unit (Component) Testing – tests individual modules or functions in isolation (see the sketch after this list).
  • Integration Testing – checks interactions between combined units.
  • System Testing – validates the complete, integrated system against requirements.
  • Acceptance Testing – performed by the client or end‑user to confirm the system is fit for purpose.
  • Regression Testing – re‑runs previously successful tests after changes to ensure no new faults appear.
  • Maintenance Testing – testing performed during corrective, adaptive, perfective or preventive maintenance.
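
As a concrete illustration of unit testing, the sketch below checks one function in isolation against expected outputs. It uses plain Java with simple pass/fail reporting rather than a particular test framework, and the isEven function is a hypothetical example, not part of the syllabus.

```java
// Minimal unit-test sketch: exercise one function in isolation.
public class UnitTestSketch {
    // Unit under test (hypothetical example function).
    static boolean isEven(int n) {
        return n % 2 == 0;
    }

    public static void main(String[] args) {
        // Each test pairs an input with its expected output.
        check(isEven(4), true,  "4 is even");
        check(isEven(7), false, "7 is odd");
        check(isEven(0), true,  "0 is even");
    }

    static void check(boolean actual, boolean expected, String name) {
        System.out.println((actual == expected ? "PASS: " : "FAIL: ") + name);
    }
}
```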

3. Testing Approaches

  • Black‑Box (Functional) Testing

    • Focuses on inputs and expected outputs.
    • Test design based on specifications, not on internal code.

  • White‑Box (Structural) Testing

    • Uses knowledge of the program’s internal structure.
    • Common techniques: statement coverage, branch coverage, path coverage.

  • Grey‑Box Testing

    • Combines black‑box and white‑box techniques.
    • Testers have limited knowledge of internal design.

4. Test Data Selection Techniques

| Technique | Purpose | Typical Data Chosen |
| --- | --- | --- |
| Equivalence Partitioning | Divide the input domain into classes that are expected to behave similarly. | One representative value from each valid and invalid class. |
| Boundary Value Analysis | Test values at the edges of equivalence classes, where errors are most likely. | Minimum, just‑above minimum, nominal, just‑below maximum, maximum. |
| Decision Table Testing | Model complex business rules as combinations of conditions and actions. | Rows representing each unique combination of condition outcomes (see the sketch below). |
| State Transition Testing | Validate behaviour of systems that change state in response to events. | Sequences of inputs that cause transitions through all states. |
| Use‑Case / Scenario Testing | Check that typical user interactions produce correct results. | End‑to‑end sequences drawn from real‑world usage scenarios. |
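
As an illustration of Decision Table Testing, the hypothetical sketch below turns each row of a small decision table into one test case: every condition is fixed and the expected action is asserted. The canRent rule and its conditions are invented for this example.

```java
// Decision-table sketch: a hypothetical rule with two conditions.
// Row | age >= 21 | hasLicence | canRent
//  1  |   true    |   true     |  true
//  2  |   true    |   false    |  false
//  3  |   false   |   true     |  false
//  4  |   false   |   false    |  false
public class DecisionTableSketch {
    static boolean canRent(int age, boolean hasLicence) {
        return age >= 21 && hasLicence;
    }

    public static void main(String[] args) {
        // One test per unique combination of condition outcomes.
        test(25, true,  true);   // row 1
        test(25, false, false);  // row 2
        test(18, true,  false);  // row 3
        test(18, false, false);  // row 4
    }

    static void test(int age, boolean licence, boolean expected) {
        boolean actual = canRent(age, licence);
        System.out.printf("age=%d licence=%b -> %b (expected %b) %s%n",
                age, licence, actual, expected, actual == expected ? "PASS" : "FAIL");
    }
}
```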

5. Selecting an Appropriate Testing Method

  1. Identify the testing level required (unit, integration, system, acceptance).
  2. Determine the knowledge available about the code:

    • If no internal knowledge – use black‑box.
    • If source code is accessible – consider white‑box for thorough structural coverage.
    • If partial knowledge exists – grey‑box may be most efficient.

  3. Match the nature of the input domain to a data selection technique:

    • Numeric ranges → Equivalence Partitioning + Boundary Value Analysis.
    • Complex business rules → Decision Table Testing.
    • Finite state machines → State Transition Testing (see the sketch after this list).
    • Typical user workflows → Use‑Case Testing.

  4. Consider the cost‑benefit of test depth:

    • For safety‑critical code, aim for high statement and branch coverage.
    • For less critical modules, a representative set of black‑box tests may suffice.
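
To illustrate State Transition Testing (step 3 above), the hypothetical sketch below models a simple two-state turnstile and drives it through every state and transition with a single input sequence. The machine and its events are invented for this example.

```java
// State-transition sketch: a hypothetical turnstile with two states.
// coin: LOCKED -> UNLOCKED, push: UNLOCKED -> LOCKED;
// any other event/state pair leaves the state unchanged.
public class TurnstileTest {
    enum State { LOCKED, UNLOCKED }

    static State next(State s, String event) {
        if (s == State.LOCKED   && event.equals("coin")) return State.UNLOCKED;
        if (s == State.UNLOCKED && event.equals("push")) return State.LOCKED;
        return s; // all other transitions are self-loops
    }

    public static void main(String[] args) {
        // One input sequence that visits every state and exercises
        // all four transitions (coin and push from each state).
        String[] events   = {"push", "coin", "coin", "push"};
        State[]  expected = {State.LOCKED, State.UNLOCKED, State.UNLOCKED, State.LOCKED};
        State s = State.LOCKED;
        for (int i = 0; i < events.length; i++) {
            s = next(s, events[i]);
            System.out.printf("%s -> %s (expected %s) %s%n",
                    events[i], s, expected[i], s == expected[i] ? "PASS" : "FAIL");
        }
    }
}
```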

6. Coverage Metrics (White‑Box)

Coverage metrics quantify how much of the program’s structure has been exercised by the test suite.

  • Statement Coverage – proportion of executable statements executed.

    $$\text{Statement Coverage} = \frac{\text{Number of statements executed}}{\text{Total number of statements}} \times 100\%$$

  • Branch (Decision) Coverage – proportion of decision outcomes (true/false) taken.

    $$\text{Branch Coverage} = \frac{\text{Number of decision outcomes executed}}{\text{Total decision outcomes}} \times 100\%$$

  • Path Coverage – proportion of possible execution paths traversed (often impractical for large programs).
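
The difference between statement and branch coverage is easiest to see on a small function. In the hypothetical sketch below, the single test absolute(-5) executes every statement (100% statement coverage) but takes only the true outcome of the decision (50% branch coverage); a second test with a non-negative value is needed to cover the false outcome.

```java
// Coverage sketch: one test can give full statement coverage
// but only partial branch coverage.
public class CoverageSketch {
    static int absolute(int x) {
        int result = x;
        if (x < 0) {          // decision: two outcomes (true/false)
            result = -x;
        }
        return result;
    }

    public static void main(String[] args) {
        // Test 1: x = -5 executes every statement, including the if
        // body -> 100% statement coverage, but only the TRUE outcome
        // of the decision -> 50% branch coverage.
        System.out.println(absolute(-5)); // 5

        // Test 2: x = 5 takes the FALSE outcome, bringing branch
        // coverage to 100% as well.
        System.out.println(absolute(5));  // 5
    }
}
```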

7. Maintenance Types and Associated Testing

| Maintenance Type | Purpose | Typical Testing Required |
| --- | --- | --- |
| Corrective | Fix faults discovered after delivery. | Regression testing plus targeted unit tests for the corrected component. |
| Adaptive | Modify software to work in a changed environment. | Integration and system testing to verify compatibility. |
| Perfective | Enhance performance or add features. | Use‑case testing for new features and regression testing for existing functionality. |
| Preventive | Improve maintainability, e.g. refactoring. | Extensive regression suite and possibly increased white‑box coverage. |

8. Example: Choosing Test Data for a Simple Function

Consider a function grade(mark) that takes an integer mark and returns a letter grade based on the following rules:

  • 0 ≤ mark ≤ 100
  • 0–39 → “F”, 40–49 → “D”, 50–59 → “C”, 60–69 → “B”, 70–100 → “A”.

Appropriate test data using Equivalence Partitioning and Boundary Value Analysis (exercised by the test harness sketched after this list):

  1. Invalid partitions: -1, 101.
  2. Valid partitions and boundaries:

    • F: 0, 39, 20.
    • D: 40, 49, 45.
    • C: 50, 59, 55.
    • B: 60, 69, 65.
    • A: 70, 100, 85.
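
A possible implementation and test harness for this example is sketched below in Java. The return type is String so the letter grades can be returned, and invalid marks return "X" – an assumed convention, since the specification does not define the error behaviour.

```java
// Sketch: implementation of grade plus the partition/boundary tests
// listed above. Invalid marks return "X" (an assumed convention).
public class GradeTest {
    static String grade(int mark) {
        if (mark < 0 || mark > 100) return "X"; // invalid partitions
        if (mark <= 39) return "F";
        if (mark <= 49) return "D";
        if (mark <= 59) return "C";
        if (mark <= 69) return "B";
        return "A";
    }

    public static void main(String[] args) {
        // Boundary and nominal values for each partition, plus the
        // two invalid values just outside the valid range.
        int[]    marks    = {-1, 101, 0, 39, 20, 40, 49, 45,
                             50, 59, 55, 60, 69, 65, 70, 100, 85};
        String[] expected = {"X", "X", "F", "F", "F", "D", "D", "D",
                             "C", "C", "C", "B", "B", "B", "A", "A", "A"};
        for (int i = 0; i < marks.length; i++) {
            String actual = grade(marks[i]);
            System.out.printf("grade(%d) = %s (expected %s) %s%n",
                    marks[i], actual, expected[i],
                    actual.equals(expected[i]) ? "PASS" : "FAIL");
        }
    }
}
```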

9. Summary Checklist

  • Identify the testing level and appropriate approach (black‑box, white‑box, grey‑box).
  • Select test data using a suitable technique (equivalence, boundary, decision table, etc.).
  • Apply coverage metrics to gauge thoroughness of white‑box tests.
  • Plan regression tests whenever code is changed during maintenance.
  • Document test cases, expected results, and actual outcomes for traceability.

Suggested diagram: Testing lifecycle showing unit → integration → system → acceptance, with regression loops after each maintenance activity.