
12.3 Program Testing and Maintenance – Locate and Identify Errors

Objective

Students will be able to:

  • Identify the different categories of errors that can occur in a program.
  • Explain how each error type is detected during the software development life‑cycle.
  • Design a systematic test strategy and a detailed test‑plan that satisfies the Cambridge International AS & A Level Computer Science (9618) requirements (AO1, AO2, AO3).

1. Classification of Errors

Errors are grouped into three broad categories. Recognising the category helps you choose the most appropriate testing technique.

  • Syntax (Compile‑time) Errors – violations of the language grammar; caught by the compiler or interpreter.
  • Runtime Errors – problems that cause abnormal termination while the program is executing (exceptions, crashes, resource failures).
  • Logical (Semantic) Errors – the program runs without crashing but produces incorrect results.

2. Detailed Error Types

Each entry below gives a description, the typical detection stage, and an example in Java-like syntax.

  • Syntax Error – incorrect use of language constructs; the compiler cannot translate the source code. Detection stage: compilation. Example: int x = 5 (missing semicolon) instead of int x = 5;
  • Type Mismatch – assigning a value of one data type to a variable of an incompatible type. Detection stage: compilation. Example: int n = "hello";
  • Undeclared Identifier – using a variable or function name that has not been declared. Detection stage: compilation. Example: total = calculateTotal(); // calculateTotal not defined
  • Division by Zero – attempting to divide a number by zero; integer division by zero raises an exception, while floating-point division typically yields infinity or NaN instead. Detection stage: runtime (exception). Example: int r = a / b; // b = 0
  • Null Reference – dereferencing a variable that holds no object reference. Detection stage: runtime (exception). Example: String s = null; int l = s.length();
  • Array Index Out of Bounds – accessing an array element with an index outside its declared range. Detection stage: runtime (exception). Example: int[] a = new int[5]; a[5] = 10; // valid indices 0–4
  • Logical Error – algorithmic mistake that yields wrong output despite successful compilation and execution. Detection stage: testing (system / integration). Example: if (score > 50) … instead of if (score >= 50) … when the pass mark is 50.
  • Off-by-One Error – loop iterates one time too many or too few. Detection stage: testing (unit). Example: for (i = 0; i <= n; i++) when only n items should be processed.
  • Resource Leak – failure to release system resources such as files, sockets or memory. Detection stage: testing (system) and maintenance. Example: opening a file with new FileReader("data.txt") but never calling close().
  • Concurrency Error – incorrect handling of shared data in multi-threaded programs (e.g., race conditions, deadlock). Detection stage: testing (stress) and maintenance. Example: two threads increment the same counter without synchronization.
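
The short Java sketch below is an illustrative example (not taken from the syllabus): it shows how three of the runtime errors above surface as exceptions that can be caught, and the corrected form of the off-by-one loop.

    public class ErrorDemo {
        public static void main(String[] args) {
            // Runtime error 1: integer division by zero raises ArithmeticException
            try {
                int a = 10, b = 0;
                int r = a / b;
            } catch (ArithmeticException e) {
                System.out.println("Division by zero: " + e.getMessage());
            }

            // Runtime error 2: dereferencing a null reference
            try {
                String s = null;
                int len = s.length();
            } catch (NullPointerException e) {
                System.out.println("Null reference caught");
            }

            // Runtime error 3: index outside the declared range 0-4
            try {
                int[] arr = new int[5];
                arr[5] = 10;
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Index out of bounds: " + e.getMessage());
            }

            // Logical error: writing i <= n would make the loop run n + 1 times
            int n = 5, count = 0;
            for (int i = 0; i < n; i++) {   // corrected condition: i < n
                count++;
            }
            System.out.println("Loop executed " + count + " times");
        }
    }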

3. Testing Techniques Required by the Syllabus (AO3)

  • White‑box (structural) testing – examine the internal logic; e.g., statement‑coverage, branch‑coverage, path‑coverage.
  • Black‑box (functional) testing – test against specifications without looking at the code; e.g., equivalence partitioning, boundary‑value analysis.
  • Integration testing – verify interactions between modules or components.
  • System testing – test the complete, integrated system against functional and non‑functional requirements.
  • Acceptance testing – final check that the system meets the client’s needs (often performed by the client).
  • Alpha testing – internal trial with a limited user group before release.
  • Beta testing – external trial with real users in a real environment.
  • Walk‑through / dry‑run – manual inspection of code or pseudocode to spot logical errors before execution.
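
To make the white-box idea concrete, here is a minimal sketch (the checkPass routine and the pass mark of 50 are assumptions made for this example) in which one test case is chosen for each branch of an if statement, giving full branch coverage:

    public class WhiteBoxDemo {
        // Routine under test: pass mark assumed to be 50 (inclusive)
        static String checkPass(int score) {
            if (score >= 50) {
                return "PASS";
            } else {
                return "FAIL";
            }
        }

        public static void main(String[] args) {
            // Branch coverage: one test case per branch of the if statement
            // (run with "java -ea WhiteBoxDemo" so assertions are enabled)
            assert checkPass(50).equals("PASS") : "boundary score 50 should pass";
            assert checkPass(49).equals("FAIL") : "score 49 should fail";
            System.out.println("Both branches exercised");
        }
    }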

4. Designing Test Data

Choosing effective test data is as important as choosing the right testing technique.

4.1 Equivalence Partitioning

  • Divide the input domain into classes that are expected to behave similarly.
  • Select at least one representative value from each class.
  • Example: For an input field that accepts ages 1‑120, use three classes – valid (e.g., 25), below range (e.g., 0), above range (e.g., 130).
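
A minimal sketch of equivalence partitioning, assuming a hypothetical isValidAge routine that implements the 1–120 rule above:

    public class PartitionDemo {
        // Hypothetical validation routine: accepts ages 1-120 inclusive
        static boolean isValidAge(int age) {
            return age >= 1 && age <= 120;
        }

        public static void main(String[] args) {
            // One representative value from each equivalence class
            System.out.println(isValidAge(25));   // valid class       -> true
            System.out.println(isValidAge(0));    // below-range class -> false
            System.out.println(isValidAge(130));  // above-range class -> false
        }
    }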

4.2 Boundary‑Value Analysis

  • Test values at the edges of each equivalence class and just beyond them.
  • Typical test points: min‑1, min, min+1, max‑1, max, max+1.
  • Example: For the same age field, test 0, 1, 2, 119, 120, 121.
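
The same hypothetical isValidAge routine can be exercised with all six boundary values in one loop; expected results are listed alongside so an incorrect verdict is reported immediately:

    public class BoundaryDemo {
        // Same hypothetical validation rule: ages 1-120 inclusive are valid
        static boolean isValidAge(int age) {
            return age >= 1 && age <= 120;
        }

        public static void main(String[] args) {
            int[] testValues   = {0, 1, 2, 119, 120, 121};            // min-1 .. max+1
            boolean[] expected = {false, true, true, true, true, false};

            for (int i = 0; i < testValues.length; i++) {
                boolean actual = isValidAge(testValues[i]);
                String verdict = (actual == expected[i]) ? "PASS" : "FAIL";
                System.out.println("age " + testValues[i] + " -> " + actual + " (" + verdict + ")");
            }
        }
    }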

4.3 Special Cases

  • Null or empty inputs.
  • Maximum string length, special characters, and illegal formats.
  • Very large or very small numeric values (to expose overflow or underflow).
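
The sketch below illustrates two of these special cases: Java's Math.addExact reports integer overflow by throwing an exception instead of silently wrapping around, and a simple guard rejects null or empty input (the validation rule itself is an assumption made for this example).

    public class SpecialCaseDemo {
        public static void main(String[] args) {
            // Very large values: addExact throws rather than wrapping around silently
            try {
                int result = Math.addExact(Integer.MAX_VALUE, 1);
                System.out.println(result);
            } catch (ArithmeticException e) {
                System.out.println("Integer overflow detected: " + e.getMessage());
            }

            // Null and empty inputs should be rejected explicitly
            // (checking for null first avoids a NullPointerException)
            String input = null;
            if (input == null || input.isEmpty()) {
                System.out.println("Rejected: null or empty input");
            }
        }
    }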

5. How Errors Are Detected – Mapping to Testing Levels

  1. Compilation – Syntax, type‑mismatch, undeclared identifier errors reported by the compiler.
  2. Static Analysis – Linters, IDE warnings, and formal tools flag potential runtime problems such as unused variables, possible null dereferences, or dead code.
  3. Unit (White‑box) Testing – Isolates individual functions/procedures; catches logical errors, off‑by‑one mistakes, and many runtime exceptions.
  4. Integration (Black‑box) Testing – Checks interfaces between modules; reveals mismatched data types, resource‑leak scenarios, and incorrect API usage.
  5. System / Acceptance Testing – Executes the whole application under realistic conditions; surfaces concurrency errors, performance bottlenecks, and user‑interface defects.
  6. Runtime Monitoring – Logging, assertions, and exception handling help locate errors that only appear in production.
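
As a brief illustration of runtime monitoring (item 6), the sketch below combines exception handling, an assertion and the standard java.util.logging package; the withdraw routine and its rules are assumptions made for this example:

    import java.util.logging.Logger;

    public class MonitorDemo {
        private static final Logger LOG = Logger.getLogger(MonitorDemo.class.getName());

        // Hypothetical routine: withdraw an amount from a balance
        static double withdraw(double balance, double amount) {
            if (amount <= 0 || amount > balance) {
                // Exception handling makes the failure visible instead of silent
                throw new IllegalArgumentException("invalid amount: " + amount);
            }
            double newBalance = balance - amount;
            assert newBalance >= 0 : "balance must never go negative";   // enabled with -ea
            return newBalance;
        }

        public static void main(String[] args) {
            try {
                LOG.info("new balance = " + withdraw(100.0, 250.0));
            } catch (IllegalArgumentException e) {
                // Logging records the error for later diagnosis in production
                LOG.warning("withdrawal rejected: " + e.getMessage());
            }
        }
    }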

6. Test‑Strategy Checklist (AO3)

  • Identify the testing objectives (e.g., verify arithmetic correctness, ensure no crashes).
  • Choose the testing level(s) required (unit, integration, system, acceptance).
  • Select appropriate testing techniques (white‑box, black‑box, walkthrough, alpha/beta).
  • Decide on manual vs. automated execution.
  • Define test‑data design (equivalence partitions, boundary values, special cases).
  • Set exit criteria (e.g., 100 % statement coverage, all high‑priority bugs fixed).
  • Allocate responsibilities and resources (who writes/executes the tests, tools to be used).
  • Plan for regression testing after each corrective change.

7. Sample Test‑Plan Template

Each entry records the test ID, objective, test case / description, input data, expected result, a Pass/Fail column (completed when the test is executed) and the person responsible.

  • T01 – Verify age validation. Test case: boundary-value test for the age field. Input data: 0, 1, 2, 119, 120, 121. Expected result: accept 1–120, reject all others. Responsible: Student.
  • T02 – Check division routine. Test case: black-box test of divide(a, b). Input data: (10, 2), (10, 0). Expected result: 5 for the first pair; runtime exception for the second. Responsible: Student.
  • T03 – Detect off-by-one in loop. Test case: white-box walk-through of for (i = 0; i <= n; i++). Input data: n = 5. Expected result: loop executes 5 times, not 6. Responsible: Student.
  • T04 – Resource-leak check. Test case: system test, opening and closing a file repeatedly. Input data: 1000 iterations. Expected result: no "Too many open files" error. Responsible: Student.
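
As an illustration of how one row of the plan can be automated, the sketch below implements test T02 for a hypothetical divide(a, b) routine, checking both the normal case (10, 2) and the exceptional case (10, 0):

    public class TestT02 {
        // Routine under test (signature assumed from the test plan)
        static int divide(int a, int b) {
            return a / b;   // integer division; throws ArithmeticException when b = 0
        }

        public static void main(String[] args) {
            // Case 1: (10, 2) should return 5
            System.out.println(divide(10, 2) == 5 ? "T02a PASS" : "T02a FAIL");

            // Case 2: (10, 0) should raise a runtime exception
            try {
                divide(10, 0);
                System.out.println("T02b FAIL - no exception raised");
            } catch (ArithmeticException e) {
                System.out.println("T02b PASS - exception raised as expected");
            }
        }
    }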

8. Link to Maintenance Activities (Unit 12)

  • Corrective Maintenance – fixing bugs discovered after deployment (e.g., correcting a logical error found in testing).
  • Adaptive Maintenance – modifying the program to work in a changed environment (new OS, new hardware, updated libraries).
  • Perfective Maintenance – improving performance, adding minor enhancements, or refactoring to improve readability and maintainability.
  • Preventive Maintenance – restructuring code, adding comments, or introducing static‑analysis tools to reduce future error likelihood.

9. Actionable Review of the Lecture Notes (relative to the 2026 Cambridge International AS & A Level Computer Science 9618 syllabus)

For each review area below, the entry lists what the syllabus demands, the typical gaps observed in the notes, and the suggested actions.

1. Coverage of Required Topics (Unit 12)

  What the syllabus demands:
  • Testing techniques: white-box, black-box, integration, system, acceptance, alpha, beta, walkthrough.
  • Error categories: syntax, runtime, logical, plus sub-categories such as off-by-one, resource leak, concurrency.
  • Maintenance types: corrective, adaptive, perfective, preventive.

  Typical gaps observed in the notes:
  • Only a brief mention of "walk-through / dry-run"; no detailed discussion of test-case design, test automation, or regression testing.
  • No explicit coverage of stress testing for concurrency errors or performance testing for resource-leak detection.

  Suggested actions:
  • Add subsections on test-case design techniques (equivalence partitioning, boundary-value analysis, decision tables).
  • Include a short paragraph on regression testing and the role of automated test suites.
  • Expand the "Testing Techniques Overview" to list stress and performance testing with examples.

2. Depth and Accuracy (AO1 & AO2)

  What the syllabus demands:
  • Explain why each error type occurs (e.g., why division by zero raises an exception).
  • Show detection mechanisms (compiler messages, stack traces, log files).
  • Provide at least one pseudocode or assembly example illustrating each error.

  Typical gaps observed in the notes:
  • Examples are limited to high-level Java-like code; there are no pseudocode or low-level (assembly) illustrations.
  • Overflow detection and two's-complement arithmetic are absent, yet they are part of the "runtime errors" sub-topic in the syllabus.

  Suggested actions:
  • Insert a small pseudocode fragment that causes an off-by-one error and show the corrected version.
  • Provide an assembly-language example (e.g., MIPS) that demonstrates a null-reference-like error via an invalid address load.
  • Explain overflow detection using a flag register and give a binary-addition example that overflows.

3. Alignment with Syllabus Numbering

  What the syllabus demands:
  • Each sub-topic should be labelled with its official syllabus code (e.g., 12.1 Testing techniques, 12.2 Error types, 12.3 Maintenance).

  Typical gaps observed in the notes:
  • The current note uses generic headings; no explicit syllabus codes are visible.

  Suggested actions:
  • Rename headings to include the syllabus numbers (e.g., "12.1 Testing Techniques").
  • Insert a "Syllabus Mapping" box at the start of the document that cross-references each heading with its code.

4. Use of Visual Aids

  What the syllabus demands:
  • Diagrams such as a testing lifecycle flowchart, decision-table examples, and a simple state-transition diagram for error handling are required for AO2.

  Typical gaps observed in the notes:
  • Only a placeholder figure ("Suggested diagram") is present.

  Suggested actions:
  • Insert a clear flowchart showing the progression Unit Test → Integration Test → System Test → Acceptance Test, with colour-coded arrows indicating the error types most likely to be uncovered at each stage.
  • Add a decision-table example for input validation (the age field) to illustrate black-box test design.

5. Assessment-Focused Presentation

  What the syllabus demands:
  • Students must be able to produce a concise test plan (AO3) and explain error-detection methods (AO2) in exam questions.

  Typical gaps observed in the notes:
  • The test-plan template is present but lacks guidance on marking criteria and typical exam-style wording.

  Suggested actions:
  • Provide a short "Exam Tip" box after the template that lists the key points examiners look for (e.g., clear objective, complete input-output mapping, appropriate level of detail).

10. Suggested Diagram (Insert after Section 9)

[Figure placeholder] Testing lifecycle flowchart: Unit → Integration → System → Acceptance. The colour of each arrow indicates the error type most commonly detected at that stage (e.g., syntax – red, runtime – orange, logical – green).

11. Quick Reference Summary (AO1)

  • Syntax (compile-time) – typical errors: missing semicolon, type mismatch, undeclared identifier. Detection method: compiler / interpreter messages.
  • Runtime – typical errors: division by zero, null reference, array index out of bounds, resource leak, concurrency error. Detection method: exception handling, runtime monitoring, static-analysis tools.
  • Logical (semantic) – typical errors: off-by-one, incorrect condition, wrong formula, misplaced loop. Detection method: unit/white-box testing, walkthroughs, system testing.

12. References (Cambridge 2026 Syllabus)

  • Cambridge International AS & A Level Computer Science (9618) – 2026 Specification, Units 1‑20.
  • “Software Development Life‑Cycle” – Cambridge Teaching Resources.
  • J. B. Miller, Software Testing Techniques, 3rd ed., 2024.
