Program Testing and Maintenance

Published by Patrick Mutisya

Learning Objective

Show understanding of ways of exposing and avoiding faults in programs.

Key Terminology

  • Fault (defect): A flaw in the program code or design that may cause a failure when the faulty code is executed.
  • Error: A human mistake (for example, misreading a requirement) that leads to a fault being introduced.
  • Failure: The observable incorrect behaviour produced when a fault is executed (the chain is illustrated in the sketch below).

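To make the chain concrete, here is a minimal sketch in Python; the is_pass function and the pass mark of 50 are hypothetical. The programmer's error introduces a fault in the code, and the fault only produces a failure when the faulty comparison is executed with a boundary input.

```python
def is_pass(mark):
    """Return True if the mark is a passing mark (pass mark is 50, inclusive)."""
    # Fault: the programmer's error was writing > instead of >=,
    # so a mark of exactly 50 is classified incorrectly.
    return mark > 50

print(is_pass(75))   # True  -> correct result even though the faulty line runs: no failure
print(is_pass(50))   # False -> incorrect result: a failure caused by the fault (expected True)
```
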
Testing Overview

Testing is the systematic process of checking a program, by inspection or by execution, with the aim of exposing faults before they cause failures in use. It can be classified in several ways.

Static vs Dynamic Testing

  • Static testing: Examination of the program without execution (e.g., code reviews, walkthroughs, static analysis tools).
  • Dynamic testing: Execution of the program with selected inputs to observe behaviour.

White‑box, Black‑box and Grey‑box Testing

  • White‑box (structural) testing: Test cases are derived from the internal structure of the code (e.g., statement coverage, branch coverage).
  • Black‑box (functional) testing: Test cases are based on specifications and user requirements, without regard to internal code.
  • Grey‑box testing: A hybrid approach using limited knowledge of the internal design to guide test case selection.

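To show the difference in how test cases are derived, here is a small sketch assuming a hypothetical discount function: the black-box tests come only from the stated rule, while the white-box tests are chosen so that every branch of the implementation is exercised.

```python
def discount(total):
    """Spec: whole-pound orders of 100 or more get 10% off; smaller orders are unchanged."""
    if total >= 100:
        return total - total // 10
    return total

# Black-box (functional) tests: derived from the specification alone.
assert discount(200) == 180    # large order is discounted
assert discount(40) == 40      # small order is unchanged

# White-box (structural) tests: chosen so both branches of the if statement run.
assert discount(100) == 90     # True branch, on the boundary
assert discount(99) == 99      # False branch, just below the boundary
print("all tests passed")
```
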
Test Levels

Level | Purpose | Typical Scope
Unit testing | Verify individual modules or functions work correctly. | Single routine, class, or method.
Integration testing | Check interactions between combined units. | Two or more modules working together.
System testing | Validate the complete, integrated system against requirements. | Whole application in a test environment.
Acceptance testing | Confirm the system meets user needs before delivery. | Real‑world scenarios, often performed by end‑users.

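As an illustration of the lowest level, here is a minimal unit-testing sketch using Python's built-in unittest module; the add_vat function and its expected values are hypothetical, standing in for any single routine under test.

```python
import unittest

def add_vat(price, rate=0.2):
    """Unit under test: return the price plus VAT at the given rate."""
    return round(price * (1 + rate), 2)

class TestAddVat(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(add_vat(100), 120.0)

    def test_zero_price(self):
        self.assertEqual(add_vat(0), 0.0)

if __name__ == "__main__":
    unittest.main()
```

Integration testing would then exercise add_vat together with the modules that call it, and system testing would run the whole application end to end.
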
Designing Effective Test Cases

  1. Equivalence Partitioning: Divide input domain into classes that are expected to behave similarly. Choose one representative value from each class.
  2. Boundary Value Analysis: Test values at the edges of equivalence classes, including just‑inside, on, and just‑outside the boundaries (this technique and equivalence partitioning are sketched together in the example after this list).
  3. Decision Table Testing: Use a table to represent combinations of conditions and actions. Example:

    A simple decision table for a login system (conditions: username valid, password valid; actions: grant access, reject):

    Username valid | Password valid | Action
    Yes | Yes | Grant access
    Yes | No | Reject
    No | Yes | Reject
    No | No | Reject

  4. State Transition Testing: Model the system as a finite state machine and create test cases that cover all transitions.
  5. Use‑Case Testing: Derive test cases from typical user interactions described in use‑case scenarios.

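The sketch below applies equivalence partitioning and boundary value analysis to a hypothetical grade_band function that accepts marks from 0 to 100 with a pass mark of 50.

```python
def grade_band(mark):
    """Hypothetical unit under test: marks 0-100 are valid; 50 or more is a pass."""
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 50 else "fail"

# Equivalence partitioning: one representative value per class.
# Classes: invalid (too low), valid fail, valid pass, invalid (too high).
representatives = [-20, 30, 70, 150]

# Boundary value analysis: just outside, on, and just inside each boundary.
boundary_values = [-1, 0, 1, 49, 50, 51, 99, 100, 101]

for mark in representatives + boundary_values:
    try:
        print(mark, grade_band(mark))
    except ValueError as exc:
        print(mark, "rejected:", exc)
```
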
Metrics for Test Coverage

Coverage helps quantify how much of the program has been exercised.

Statement coverage: $C_{stmt} = \frac{\text{Number of statements executed}}{\text{Total number of statements}} \times 100\%$

Branch coverage: $C_{branch} = \frac{\text{Number of decision outcomes exercised}}{\text{Total number of decision outcomes}} \times 100\%$

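A worked example using a small hypothetical function shows how the two measures can differ for the same set of test cases.

```python
def classify(n):
    result = "non-negative"     # statement 1
    if n < 0:                   # statement 2: one decision, two outcomes
        result = "negative"     # statement 3
    return result               # statement 4

# The test set {classify(5)} executes statements 1, 2 and 4:
#   C_stmt   = 3 / 4 x 100% = 75%
# and exercises only the False outcome of the decision:
#   C_branch = 1 / 2 x 100% = 50%
# Adding classify(-3) executes statement 3 and the True outcome,
# raising both statement and branch coverage to 100%.
print(classify(5), classify(-3))
```
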
Exposing Faults

  • Code reviews and walkthroughs: Systematic inspection of source code by peers.
  • Static analysis tools: Automated detection of common coding errors (e.g., unused variables, potential null dereferences).
  • Debugging: Use of breakpoints, watch variables, and step‑through execution to locate the source of a failure (see the sketch after this list).
  • Profiling: Measuring runtime characteristics to uncover performance‑related faults.
  • Automated test suites: Repeated execution of regression tests to catch newly introduced faults.

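As a small illustration of the debugging bullet above, assertions and Python's built-in breakpoint() hook can help locate where a failure originates; the average function here is hypothetical.

```python
def average(values):
    """Hypothetical routine being investigated after a reported failure."""
    # The assertion exposes the real cause early: without it, an empty list
    # would surface later as a less helpful ZeroDivisionError.
    assert len(values) > 0, "average() called with an empty list"
    total = 0
    for v in values:
        total += v
    # breakpoint()  # uncomment to step through in the built-in debugger (pdb)
    return total / len(values)

print(average([4, 8, 9]))   # 7.0
```
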
Avoiding Faults

  • Good design practices: Use modular design, clear interfaces, and well‑defined responsibilities.
  • Structured programming: Apply control structures (sequence, selection, iteration) that reduce complexity.
  • Adherence to coding standards: Consistent naming, indentation, and commenting reduce misunderstandings.
  • Defensive programming: Validate inputs, handle exceptional conditions, and use assertions (a short sketch follows this list).
  • Code reuse and libraries: Prefer well‑tested components over reinventing functionality.
  • Comprehensive documentation: Keep design documents, API specifications, and user manuals up‑to‑date.
  • Version control: Track changes, enable rollback, and support collaborative development.
  • Test‑driven development (TDD): Write tests before code to clarify requirements and drive correct implementation.

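A brief defensive-programming sketch, using a hypothetical withdraw function: inputs are validated before use, exceptional conditions raise clear errors, and an assertion documents an internal condition that should always hold.

```python
def withdraw(balance, amount):
    """Return the new balance after withdrawing amount, with defensive checks."""
    # Validate inputs before doing any work.
    if amount <= 0:
        raise ValueError("withdrawal amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    new_balance = balance - amount
    # The assertion documents an invariant, not a check on expected bad input.
    assert new_balance >= 0, "balance must never go negative"
    return new_balance

print(withdraw(100, 30))     # 70
# withdraw(100, 250)         # would raise ValueError: insufficient funds
```
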
Maintenance Types

Maintenance Type | Goal | Typical Activities
Corrective | Fix faults discovered after delivery. | Bug fixing, regression testing.
Adaptive | Modify software to operate in a changed environment. | Porting to new OS, updating libraries.
Perfective | Enhance performance or add new features. | Optimisation, UI improvements.
Preventive | Improve future maintainability. | Refactoring, code clean‑up, documentation updates.

Testing During Maintenance

  • Regression testing: Re‑run existing test suites to ensure changes have not introduced new faults (sketched, together with re‑testing, after this list).
  • Re‑testing: Execute test cases that previously failed to confirm the fault is now fixed.
  • Impact analysis: Identify which parts of the system may be affected by a change and target testing accordingly.

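One common practice is to turn every fixed fault into a permanent test case, so re-testing and regression testing happen automatically on each change; the normalise_name function below is hypothetical.

```python
def normalise_name(name):
    """Return the name trimmed and in title case; blank input returns ''."""
    if not name or not name.strip():
        return ""                    # behaviour added when the original fault was fixed
    return name.strip().title()

# Re-testing: this case failed before the fix, so it stays in the suite
# and is re-run after every later change (regression testing).
def test_blank_name_previously_failed():
    assert normalise_name("   ") == ""

def test_normal_name_still_works():
    assert normalise_name("  ada LOVELACE ") == "Ada Lovelace"

test_blank_name_previously_failed()
test_normal_name_still_works()
print("regression suite passed")
```
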
Summary Checklist

  1. Identify potential fault sources (design, code, requirements).
  2. Select appropriate testing level and technique.
  3. Design test cases using equivalence partitioning, boundary analysis, etc.
  4. Measure coverage and aim for high statement/branch coverage.
  5. Conduct static testing (reviews, analysis) before dynamic testing.
  6. Apply defensive programming and coding standards to avoid faults.
  7. Maintain thorough documentation and use version control.
  8. During maintenance, perform regression testing and update test suites.