12.3 Program Testing and Maintenance

Objectives

  • Explain why a test plan and a test strategy are required.
  • Identify the main components of a test plan and show how to complete each one.
  • Classify the three principal fault types (syntax, logic, run‑time) with code examples.
  • Choose appropriate test data for a given test plan using recognised techniques.
  • Understand the need for ongoing maintenance and the four maintenance categories.

1. Test plan vs. Test strategy – quick definitions

Test plan: a documented, project‑wide description of what will be tested, how it will be tested, who will do the testing, when it will be done and what the success criteria are. It is derived directly from the software specification.

Test strategy: the high‑level approach chosen to achieve the objectives of the test plan (e.g., black‑box vs. white‑box, risk‑based, regression, acceptance). The strategy is a subsection of the test plan.

Diagram (textual):

Specification → Test Plan → Test Strategy → Test Cases / Test Data → Execution → Results

2. Main components of a test plan (with a filled example)

Each component is listed as: component – what to include – example for the Student‑Grade program.

  • Objective & Scope – Clear statement of the feature(s) to be tested and anything excluded. Example: test the calculation of a letter grade from a numeric mark (0‑100); UI layout testing is excluded.
  • Test Strategy – Overall approach: black‑box, white‑box, regression, acceptance, smoke, risk‑based. Example: black‑box testing using EP, BVA, Decision‑Table and Error Guessing.
  • Test Environment – Hardware, OS, language version, libraries, any required data files. Example: PC, Windows 10, Python 3.11, no external libraries, no input files required.
  • Resources & Schedule – People, time allocation, milestones. Example: 1 student, 2 hours: 30 min design, 45 min test‑case creation, 45 min execution, 30 min reporting.
  • Test Cases / Test Data – Table of test ID, purpose, input, expected output, pass/fail, comments. Example: see Section 8, “Sample Test‑Plan excerpt”.
  • Traceability Matrix – Links each requirement to the test case(s) that verify it. Example: R1 (grade calculation) → T01‑T08; R2 (invalid input handling) → T03‑T05.
  • Risk & Prioritisation – Identify high‑risk features and assign a priority (high/medium/low). Example: high – boundary values (80, 0, 100); medium – non‑numeric input; low – extreme high values (150).
  • Pass/Fail Criteria – Explicit rule for deciding whether a test passes. Example: a test passes if the program’s output exactly matches the “Expected Output” column.

3. Fault types with short pseudo‑code examples

For each fault type: typical causes, then a short pseudo‑code example (Python‑style).

Syntax fault – misspelled keyword, missing punctuation, wrong indentation (where relevant).

if mark >= 80   # syntax fault: missing colon
    grade = 'A'

Logic fault – incorrect condition, off‑by‑one error, wrong formula, misplaced loop limits.

if mark > 80 and mark <= 100:
    grade = 'A'          # logic fault: condition should be mark >= 80
elif mark >= 70 and mark <= 79:
    grade = 'B'

Run‑time fault – null reference, array index out of bounds, division by zero, file not found, resource leak.

total = 0
for i in range(0, len(marks)):    # marks may be empty
    total += marks[i]
average = total / len(marks)      # run‑time fault: division by zero if marks is empty
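
A minimal guarded rewrite of the averaging code above (one possible fix; the function name average_marks is illustrative only):

def average_marks(marks):
    # Guard the empty-list case before dividing.
    if len(marks) == 0:
        return None              # the caller decides how to report "no data"
    total = 0
    for m in marks:
        total += m
    return total / len(marks)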

4. Test‑data selection techniques

4.1 Equivalence Partitioning (EP)

Divide the input domain into classes that are expected to behave the same. One representative value from each class is sufficient.
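
A minimal sketch of EP for the Student‑Grade mark, using the partitions from requirement R1 in Section 5 (the variable names are illustrative only):

# One representative value per equivalence class is enough under EP.
valid_classes = {
    "0-49 (F)": 25,
    "50-59 (D)": 55,
    "60-69 (C)": 65,
    "70-79 (B)": 75,
    "80-100 (A)": 90,
}
invalid_classes = {
    "below 0": -5,
    "above 100": 150,
    "non-numeric": "A",
}
ep_test_data = list(valid_classes.values()) + list(invalid_classes.values())
print(ep_test_data)   # [25, 55, 65, 75, 90, -5, 150, 'A']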

4.2 Boundary Value Analysis (BVA)

For each numeric (or length) partition, test:

  • Minimum value
  • Just above minimum
  • Maximum value
  • Just below maximum
  • One typical interior value
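
A tiny helper can generate these five values for any partition; the function name bva_values is illustrative, not part of the note:

def bva_values(minimum, maximum, typical):
    # minimum, just above minimum, just below maximum, maximum, one interior value
    return [minimum, minimum + 1, maximum - 1, maximum, typical]

print(bva_values(80, 100, 85))   # [80, 81, 99, 100, 85] – matches the A-grade row in Section 5
print(bva_values(0, 49, 25))     # [0, 1, 48, 49, 25]    – matches the F-grade row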

4.3 Decision‑Table Testing

When output depends on several independent conditions, list all possible combinations in a table. Each column becomes a test case.
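
A sketch of such a table for the Student‑Grade program combined with a hypothetical Late flag; the one‑grade penalty, and F staying F, are assumptions consistent with test case T04:

# Each (mark, late) combination is one column of the decision table = one test case.
decision_table = [
    # mark, late,  expected grade
    (85,  False, "A"),
    (85,  True,  "B"),
    (75,  False, "B"),
    (75,  True,  "C"),   # this row is test case T04
    (65,  False, "C"),
    (65,  True,  "D"),
    (55,  False, "D"),
    (55,  True,  "F"),
    (25,  False, "F"),
    (25,  True,  "F"),   # F cannot be lowered further (assumed rule)
]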

4.4 State‑Transition Testing

Identify distinct states, the events that cause transitions, and the expected result of each transition. Test every transition, especially those that lead to error states.
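
As a sketch, the CSV‑reader from Section 7 can be written as a transition table; the state and event names below are illustrative:

# (current state, event) -> next state; every arrow should get at least one test.
transitions = {
    ("Start",     "open marks.csv OK"):      "Reading",
    ("Start",     "file not found"):         "Error",      # test T06
    ("Reading",   "line parsed OK"):         "Reading",
    ("Reading",   "malformed line"):         "Reading",    # line skipped, test T08
    ("Reading",   "end of file, no data"):   "Error",      # test T07
    ("Reading",   "end of file, data read"): "Reporting",
    ("Reporting", "report.txt written"):     "End",
}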

4.5 Error Guessing

Use experience to anticipate common programmer mistakes (off‑by‑one, null pointer, missing file‑close, etc.) and create test data that might trigger them.
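
A sketch of error‑guessing inputs for the Student‑Grade/CSV examples; these values are educated guesses rather than values derived from the specification:

# "Gotcha" inputs an experienced tester would try first.
error_guessing_data = [
    "",          # empty string instead of a number
    " 75 ",      # leading/trailing whitespace
    "75.5",      # decimal mark where an integer is expected
    "0080",      # leading zeros
    None,        # missing value (null)
    "79,",       # trailing comma from a malformed CSV line
]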

4.6 Quick reference – which technique finds which fault type?

Technique – fault types it primarily detects:

  • EP – logic/validation faults
  • BVA – logic faults (range errors, off‑by‑one)
  • Decision‑Table – logic faults involving multiple conditions
  • State‑Transition – run‑time faults and incorrect sequencing
  • Error Guessing – any fault type, especially those not covered by the systematic techniques

5. Step‑by‑step walk‑through: applying EP & BVA to a single requirement

Requirement R1: “The program shall return a letter grade based on a numeric mark (0‑100). The grade boundaries are 80‑100 → A, 70‑79 → B, 60‑69 → C, 50‑59 → D, 0‑49 → F. Marks outside 0‑100 are reported as ‘Invalid’.”

  1. Identify inputs and outputs – Input: mark (integer). Output: grade (string) or “Invalid”.
  2. Equivalence Partitioning
    • Valid partitions: 0‑49, 50‑59, 60‑69, 70‑79, 80‑100.
    • Invalid partitions: <0, >100, non‑numeric.
  3. Boundary Value Analysis – For each numeric partition pick the five values described in the technique.
    Partition – BVA test values:
    • 0‑49: 0, 1, 48, 49, 25
    • 50‑59: 50, 51, 58, 59, 55
    • 60‑69: 60, 61, 68, 69, 65
    • 70‑79: 70, 71, 78, 79, 75
    • 80‑100: 80, 81, 99, 100, 85
    • Invalid (< 0): -1, -10
    • Invalid (> 100): 101, 150
    • Invalid (non‑numeric): "A", null
  4. Decision‑Table (optional extension) – If a “late‑submission” flag is added, the table would combine Mark range and Late? conditions (see Example 1‑2).
  5. Record the test cases – Populate the test‑case table (see Section 8).
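
The EP and BVA values above translate directly into executable checks. A minimal pytest sketch follows; the grade function here is only a stand‑in so the example runs, and in practice the real program under test would be imported instead:

import pytest

def grade(mark):
    # Stand-in implementation of R1, included only so this sketch is runnable.
    if not isinstance(mark, int) or mark < 0 or mark > 100:
        return "Invalid"
    for lower, letter in [(80, "A"), (70, "B"), (60, "C"), (50, "D"), (0, "F")]:
        if mark >= lower:
            return letter

@pytest.mark.parametrize("mark, expected", [
    (0, "F"), (1, "F"), (48, "F"), (49, "F"), (25, "F"),       # 0-49 partition (EP + BVA)
    (80, "A"), (81, "A"), (99, "A"), (100, "A"), (85, "A"),    # 80-100 partition (EP + BVA)
    (-1, "Invalid"), (101, "Invalid"), ("A", "Invalid"),       # invalid partitions (EP)
])
def test_grade_r1(mark, expected):
    assert grade(mark) == expected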

6. Mapping techniques to the “Student‑Grade” test cases

Each entry: test ID – purpose – technique(s) used.

  • T01 – Lower boundary of A grade (80) – BVA (lower boundary of 80‑100)
  • T02 – Upper boundary of F grade (49) – BVA (upper boundary of 0‑49)
  • T03 – Negative input handling – EP (invalid < 0) + Error Guessing
  • T04 – Late‑submission penalty for B grade – Decision‑Table (mark range + Late flag)
  • T05 – Non‑numeric input handling – EP (invalid non‑numeric) + Error Guessing
  • T06 – File missing (CSV reader) – State‑Transition (OpenFile → End) + Error Guessing
  • T07 – Empty file (division‑by‑zero guard) – State‑Transition (Process → End) + Error Guessing
  • T08 – Malformed line (missing comma) – Decision‑Table (valid/invalid line) + Error Guessing

7. Mini case study – test plan for a CSV‑reader (2‑hour project)

Component – filled‑in example:

  • Objective & Scope – Verify that read_marks.py correctly reads marks.csv, calculates the average mark, and writes report.txt. Excludes GUI design and performance benchmarking.
  • Test Strategy – Black‑box testing using EP, BVA, State‑Transition and Error Guessing. A small regression suite will be re‑run after any code change.
  • Test Environment – PC, Windows 10, Python 3.11, no external libraries; test data files stored in test_data/.
  • Resources & Schedule – 1 student, 2 hours: 30 min design, 45 min test‑case creation, 30 min execution, 15 min defect logging, 15 min review.
  • Risk & Prioritisation – High risk: file‑not‑found, empty file, malformed line. Medium risk: large file (10 000 lines). Low risk: extra fields, whitespace variations.
  • Pass/Fail Criteria – The program must produce the exact text shown in the “Expected Output” column and must not crash (i.e., it must not terminate with an unhandled exception or a non‑zero exit status).
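
A pytest sketch for the two high‑risk rows above (T06 and T07), assuming read_marks.py exposes a function read_marks(path) that returns the message or report text; that entry point and return convention are assumptions, not stated in the note:

from read_marks import read_marks   # assumed entry point: read_marks(path) -> message string

def test_missing_file(tmp_path):
    # T06: no marks.csv present -> "File not found" and graceful termination (no unhandled exception)
    assert read_marks(tmp_path / "marks.csv") == "File not found"

def test_empty_file(tmp_path):
    # T07: zero-byte marks.csv -> "No data to process" (exercises the division-by-zero guard)
    csv_file = tmp_path / "marks.csv"
    csv_file.write_text("")
    assert read_marks(csv_file) == "No data to process"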

8. Sample Test‑Plan excerpt (common format)

Each row records: test ID – purpose – input data – expected output. The Pass/Fail and Comments columns are completed during execution.

  • T01 – Lower boundary of A grade – Mark = 80 – Grade = “A”
  • T02 – Upper boundary of F grade – Mark = 49 – Grade = “F”
  • T03 – Negative input handling – Mark = -5 – Output = “Invalid”
  • T04 – Late‑submission penalty for B grade – Mark = 75, Late = true – Grade = “C” (one grade lower)
  • T05 – Non‑numeric input handling – Mark = “A” – Output = “Invalid”
  • T06 – File missing (CSV reader) – no marks.csv present – message “File not found” and graceful termination
  • T07 – Empty file (division‑by‑zero guard) – marks.csv is zero‑byte – message “No data to process” (no crash)
  • T08 – Malformed line (missing comma) – line “12345 78” – message “Invalid line format” and the line is skipped
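
The malformed‑line case (T08) can also be expressed as a small validation check; the parse_line name and its return convention are illustrative only:

def parse_line(line):
    # Expected format: "student_id,mark" – anything else is reported and skipped.
    parts = line.strip().split(",")
    if len(parts) != 2 or not parts[1].strip().isdigit():
        return None, "Invalid line format"
    return int(parts[1]), None

mark, error = parse_line("12345 78")     # T08: missing comma
assert mark is None and error == "Invalid line format"
mark, error = parse_line("12345,78")     # well-formed line
assert mark == 78 and error is None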

9. Testing methods (brief reminder for the syllabus)

  • White‑box testing: uses knowledge of the internal structure (e.g., statement coverage, path coverage). Usually applied in unit testing.
  • Black‑box testing: treats the program as a “black box”, focusing on inputs and expected outputs. EP, BVA, Decision‑Table, State‑Transition and Error Guessing belong here.
  • Regression testing: re‑run a subset of test cases after a change to ensure existing functionality is unchanged.
  • Acceptance testing: verifies that the system meets the user’s requirements; often a high‑level black‑box test.
  • Smoke testing: a quick set of basic tests to confirm that the build is stable enough for further testing.
  • Risk‑based testing: prioritises test cases according to the probability and impact of failure.
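
A small white‑box illustration (a sketch, not part of the syllabus list above): statement coverage of a two‑branch function needs at least one input per branch, which is why white‑box techniques look inside the code rather than only at the specification:

def pass_or_fail(mark):
    if mark >= 50:
        return "Pass"     # covered by e.g. mark = 75
    else:
        return "Fail"     # covered by e.g. mark = 25

# Full statement coverage here needs both branches, so the minimal white-box set is {75, 25}.
assert pass_or_fail(75) == "Pass"
assert pass_or_fail(25) == "Fail"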

10. Summary checklist for “Choose appropriate test data for a given test plan”

  1. Read the specification and list all inputs, outputs and state changes.
  2. Apply Equivalence Partitioning to create valid/invalid classes.
  3. For each numeric class, apply Boundary Value Analysis.
  4. If several conditions interact, build a Decision Table.
  5. Identify distinct states and transitions; create a State‑Transition diagram and derive tests.
  6. Add any obvious “gotchas” using Error Guessing.
  7. Map each test case to the technique(s) used (helps in the traceability matrix).
  8. Prioritise according to risk and record the test data in the test‑case table of the test plan.
