Show understanding of the need for a test strategy and a test plan, and describe their likely contents.

A test strategy is a high‑level document that sets the overall direction and quality goals for testing across a project. Its likely contents are summarised below.
| Item | What It Covers (AO1) |
|---|---|
| Scope | Modules, features and interfaces to be tested; items explicitly out of scope. |
| Testing Levels | Unit, integration, system and acceptance testing. |
| Testing Types | Functional, performance, security, usability, reliability. |
| Test‑design Techniques | Black‑box (equivalence partitioning, boundary‑value analysis, decision tables, state‑transition); White‑box (statement/branch coverage, path testing). |
| Test Environments | Hardware, OS, network, simulators, stubs. |
| Resources | People, tools (test management, defect tracker, automation), training. |
| Schedule & Milestones | When each testing level starts/finishes, key review dates. |
| Risk Assessment | Identify high‑risk components, estimate impact, outline mitigation (AO2). |
| Entry & Exit Criteria | Conditions before a test level can begin and before it can be considered complete. |
| Metrics & Reporting | Defect density, test‑case coverage, pass/fail rates, progress reports (AO3); a worked metric sketch follows this table. |
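The metrics named in the last row are simple ratios. Below is a minimal sketch of how they are typically computed (the class and method names are hypothetical, not part of the syllabus); defect density is expressed per 100 lines of code to match the exit‑criteria example later in this note.

```java
// Minimal sketch: common test metrics as simple ratios.
// All names here are hypothetical illustrations.
public class TestMetrics {

    // Defect density per 100 lines of code.
    static double defectDensityPer100Lines(int defects, int linesOfCode) {
        return 100.0 * defects / linesOfCode;
    }

    // Pass rate as a percentage of executed test cases.
    static double passRate(int passed, int executed) {
        return 100.0 * passed / executed;
    }

    public static void main(String[] args) {
        // 3 defects found in 4,000 lines -> 0.075 defects per 100 lines.
        System.out.println(defectDensityPer100Lines(3, 4000)); // 0.075
        // 57 of 60 executed test cases passed -> 95.0 % pass rate.
        System.out.println(passRate(57, 60)); // 95.0
    }
}
```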
The test plan translates the high‑level strategy into a concrete, actionable document for the testing team. It sets out how, when and by whom the testing will be carried out; its typical sections are:
| Section | Details (AO1‑AO3) |
|---|---|
| 1. Introduction | Purpose, scope, objectives, reference to the test strategy. |
| 2. Test Items | List of software components, modules, interfaces, third‑party packages. |
| 3. Test Approach | Chosen test levels, types and design techniques (e.g., BVA for numeric inputs, white‑box for critical algorithms). |
| 4. Test Environment | Hardware, OS, network, databases, simulators, stubs and configuration details. |
| 5. Test Schedule | Timeline with start/end dates for each testing phase and major milestones. |
| 6. Test Resources | Roles and responsibilities, skill requirements, tools and training. |
| 7. Test Cases | Reference to detailed test‑case documents; optional summary matrix (feature vs. test‑case ID). |
| 8. Test Data | Specification of valid, invalid and edge‑case data sets; data‑generation method (e.g., equivalence partitioning); a data‑set sketch follows this table. |
| 9. Entry/Exit Criteria | Specific conditions for each test level (e.g., “All unit tests passed with ≥ 80 % statement coverage”). |
| 10. Risk & Mitigation | Testing risks (resource shortage, environment instability) and contingency actions. |
| 11. Defect Life‑Cycle & Metrics | Defect reporting, fixing, re‑testing, closure; metrics such as defect density, mean time to repair. |
| 12. Reporting & Approval | Frequency and format of test reports; sign‑off from project manager, test lead and client. |
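Row 8 (Test Data) is easiest to see with a concrete input. A minimal sketch, assuming a hypothetical field that accepts integers in the range 1–100:

```java
import java.util.List;

// Minimal sketch: equivalence-partitioning data sets for a hypothetical
// input field that must be an integer in the range 1..100.
public class TestDataSets {
    // Valid class: one representative from inside the range is enough.
    static final List<Integer> VALID = List.of(50);

    // Invalid classes: one value below the range, one above it.
    static final List<Integer> INVALID = List.of(-5, 150);

    // Edge cases from boundary-value analysis:
    // just outside, on, and just inside each boundary.
    static final List<Integer> EDGES = List.of(0, 1, 2, 99, 100, 101);
}
```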
Common test‑design techniques and when to use each:

| Technique | When to Use |
|---|---|
| Equivalence Partitioning | Divide input domain into valid and invalid classes. |
| Boundary‑Value Analysis | Test values at the edges of each equivalence class (a worked sketch follows this table). |
| Decision Tables | Complex business rules with multiple conditions. |
| State‑Transition | Systems with distinct states (e.g., login/logout). |
| Statement / Branch Coverage | White‑box testing to ensure each line/branch is executed. |
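A minimal sketch of boundary‑value analysis in practice, assuming a hypothetical validator `accept(int mark)` that should accept marks from 0 to 100 inclusive:

```java
// Minimal sketch: boundary-value analysis exercises the values on and
// either side of each boundary of the valid range 0..100.
public class BoundaryValueExample {

    // Hypothetical code under test: accepts marks in 0..100.
    static boolean accept(int mark) {
        return mark >= 0 && mark <= 100;
    }

    public static void main(String[] args) {
        int[]     inputs   = {   -1,    0,    1,   99,  100,   101};
        boolean[] expected = {false, true, true, true, true, false};

        for (int i = 0; i < inputs.length; i++) {
            boolean actual = accept(inputs[i]);
            System.out.printf("accept(%d) = %b, expected %b -> %s%n",
                    inputs[i], actual, expected[i],
                    actual == expected[i] ? "Pass" : "Fail");
        }
    }
}
```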
Test Strategy answers what and why – it sets the overall direction and quality goals. Test Plan answers how, when and by whom – it provides the detailed roadmap for execution. Alignment of the two ensures testing is both comprehensive and efficient.
Worked example (excerpt): outline of a unit‑test plan for a BubbleSort module.

| Section | Content (excerpt) |
|---|---|
| Test Items | Module BubbleSort – function `sort(int[] a)` |
| Test Approach | … |
| Test Cases (summary) | … |
| Entry/Exit Criteria | Entry: unit‑test framework installed, code compiled. Exit: all test cases executed, ≥ 95 % pass rate, defect density ≤ 0.1 per 100 lines. |
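To make the excerpt concrete, here is a minimal unit‑test sketch for the `sort(int[] a)` function named above. The bubble‑sort body and the specific test cases are illustrative assumptions; the excerpt itself leaves them elided.

```java
import java.util.Arrays;

// Minimal sketch: unit tests for the BubbleSort example.
// The sort implementation is an assumed illustration.
public class BubbleSortTest {

    // In-place bubble sort of the whole array.
    static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
            }
        }
    }

    // Runs one test case and reports Pass/Fail.
    static void check(String id, int[] input, int[] expected) {
        sort(input);
        System.out.println(id + ": " + (Arrays.equals(input, expected) ? "Pass" : "Fail"));
    }

    public static void main(String[] args) {
        check("TC-01 empty array",    new int[]{},        new int[]{});
        check("TC-02 single element", new int[]{7},       new int[]{7});
        check("TC-03 already sorted", new int[]{1, 2, 3}, new int[]{1, 2, 3});
        check("TC-04 reverse order",  new int[]{3, 2, 1}, new int[]{1, 2, 3});
        check("TC-05 duplicates",     new int[]{2, 1, 2}, new int[]{1, 2, 2});
    }
}
```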
Each individual test case is recorded using a standard set of fields:

| Field | Description (AO1) |
|---|---|
| Test Case ID | Unique identifier (e.g., TC‑01) |
| Title | Brief description of the scenario |
| Pre‑conditions | State of the system before execution |
| Test Steps | Exact actions to be performed |
| Expected Result | What should happen if the software works correctly |
| Actual Result | Outcome observed during execution |
| Status | Pass / Fail |
| Comments | Notes on defects, variations or observations |
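The record above maps directly onto a simple data structure. A minimal sketch, assuming a hypothetical `TestCase` class (the syllabus does not prescribe any particular representation):

```java
// Minimal sketch: one field per row of the test-case record above.
public class TestCase {
    String   id;              // unique identifier, e.g. "TC-01"
    String   title;           // brief description of the scenario
    String   preConditions;   // state of the system before execution
    String[] steps;           // exact actions to be performed
    String   expectedResult;  // what should happen if the software is correct
    String   actualResult;    // outcome observed during execution
    String   status;          // "Pass" or "Fail"
    String   comments;        // defects, variations or observations
}
```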

How this topic maps to the assessment objectives:

| AO | What the student demonstrates |
|---|---|
| AO1 – Knowledge | Define “test strategy”, “test plan”, entry/exit criteria, risk, metrics, defect life‑cycle, regression testing, static analysis. |
| AO2 – Analysis | Identify appropriate testing levels, types and design techniques for a given program; assess risks and select suitable test data. |
| AO3 – Design & Evaluation | Construct a coherent test plan from a supplied strategy; evaluate test coverage, propose improvements, and interpret test metrics. |
Note: This note focuses on the testing unit (12.3) of the Cambridge AS & A‑Level Computer Science (9618) syllabus. Other units (e.g., Data Representation, Networks, Security, AI, Databases) are covered in separate lecture notes.