
12.3 Program Testing and Maintenance

Learning Objective

Show understanding of the need for a test strategy and a test plan, and describe their likely contents.

1. Why a Test Strategy Is Needed

  • Provides a high‑level, project‑wide description of how testing will achieve the quality objectives (AO1).
  • Ensures testing is:
    • Systematic and repeatable
    • Aligned with time, cost and resource constraints
    • Focused on the most critical risks (AO2)
    • Consistent across all development teams
  • Prevents ad‑hoc testing that can miss defects, waste effort and reduce confidence in the final product.

2. Typical Contents of a Test Strategy (high‑level)

  • Scope – Modules, features and interfaces to be tested; items explicitly out of scope.
  • Testing Levels – Unit, integration, system and acceptance testing.
  • Testing Types – Functional, performance, security, usability, reliability.
  • Test‑design Techniques – Black‑box (equivalence partitioning, boundary‑value analysis, decision tables, state‑transition); white‑box (statement/branch coverage, path testing).
  • Test Environments – Hardware, OS, network, simulators, stubs.
  • Resources – People, tools (test management, defect tracker, automation), training.
  • Schedule & Milestones – When each testing level starts and finishes; key review dates.
  • Risk Assessment – Identify high‑risk components, estimate impact, outline mitigation (AO2).
  • Entry & Exit Criteria – Conditions before a test level can begin and before it can be considered complete.
  • Metrics & Reporting – Defect density, test‑case coverage, pass/fail rates, progress reports (AO3).

3. Why a Test Plan Is Needed

The test plan translates the high‑level strategy into a concrete, actionable document for the testing team. It specifies how, when and by whom testing will be carried out, ensuring that:

  • All required test cases are created, reviewed and executed.
  • Test data, environments and tools are prepared in advance.
  • Progress can be measured against defined milestones.
  • Stakeholders have clear visibility of testing activities and outcomes.

4. Typical Contents of a Test Plan (detail for implementation)

  1. Introduction – Purpose, scope, objectives, reference to the test strategy.
  2. Test Items – List of software components, modules, interfaces, third‑party packages.
  3. Test Approach – Chosen test levels, types and design techniques (e.g., BVA for numeric inputs, white‑box for critical algorithms).
  4. Test Environment – Hardware, OS, network, databases, simulators, stubs and configuration details.
  5. Test Schedule – Timeline with start/end dates for each testing phase and major milestones.
  6. Test Resources – Roles and responsibilities, skill requirements, tools and training.
  7. Test Cases – Reference to detailed test‑case documents; optional summary matrix (feature vs. test‑case ID).
  8. Test Data – Specification of valid, invalid and edge‑case data sets; data‑generation method (e.g., equivalence partitioning).
  9. Entry/Exit Criteria – Specific conditions for each test level (e.g., “All unit tests passed with ≥ 80 % statement coverage”).
  10. Risk & Mitigation – Testing risks (resource shortage, environment instability) and contingency actions.
  11. Defect Life‑Cycle & Metrics – Defect reporting, fixing, re‑testing, closure; metrics such as defect density and mean time to repair.
  12. Reporting & Approval – Frequency and format of test reports; sign‑off from project manager, test lead and client.

5. Key Testing Concepts Required by the Cambridge Syllabus

5.1 Types of Testing (AO2)

  • Unit testing – tests individual functions or methods.
  • Integration testing – tests interaction between modules.
  • System testing – tests the complete, integrated system.
  • Acceptance testing – verifies that the system meets user requirements.
  • Regression testing – re‑runs selected tests after changes to ensure no new defects.
  • Static analysis – reviews code without execution (e.g., linting, code walkthroughs).
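A unit test exercises one routine in isolation. The sketch below uses Python's built‑in unittest framework on a hypothetical discount function; the function and its rules are illustrative, not taken from the syllabus.

```python
import unittest

def discount(price: float, rate: float) -> float:
    """Apply a percentage discount; hypothetical function under test."""
    if not 0 <= rate <= 100:
        raise ValueError("rate must be between 0 and 100")
    return round(price * (1 - rate / 100), 2)

class TestDiscount(unittest.TestCase):
    # Unit test: checks one function in isolation, with no other modules.
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

# Run with: python -m unittest <this file>
```

The same test class, re‑run after every change to `discount`, doubles as a regression test.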

5.2 Test‑design Techniques (AO2)

  • Equivalence Partitioning – Divide the input domain into valid and invalid classes.
  • Boundary‑Value Analysis – Test values at the edges of each equivalence class.
  • Decision Tables – Complex business rules with multiple conditions.
  • State‑Transition – Systems with distinct states (e.g., login/logout).
  • Statement / Branch Coverage – White‑box testing to ensure each line/branch is executed.
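The first two techniques can be shown concretely. The validator and its 1–100 input range below are a made‑up example, chosen only to illustrate how partitions and boundaries yield test values.

```python
# Equivalence partitioning and boundary-value analysis for a
# hypothetical input field that accepts integers from 1 to 100.
LOW, HIGH = 1, 100

def is_valid(n: int) -> bool:
    """Validator under test: accepts only values in [LOW, HIGH]."""
    return LOW <= n <= HIGH

# Equivalence partitioning: one representative per class --
# invalid-low (< 1), valid (1..100), invalid-high (> 100).
partitions = {-5: False, 50: True, 200: False}

# Boundary-value analysis: values at each edge of the valid class.
boundaries = {0: False, 1: True, 100: True, 101: False}

for value, expected in {**partitions, **boundaries}.items():
    assert is_valid(value) == expected
```

Seven test values cover every class and every boundary, instead of testing all 100 valid inputs.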

5.3 Test‑Driven Development (TDD) (AO3)

  • Write a failing test case before any code.
  • Write the minimum code to pass the test.
  • Refactor the code while keeping the test green.
  • Repeat for each new feature – this encourages early detection of defects.
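The red–green–refactor cycle above can be sketched in a few lines; the leap‑year function is a hypothetical example, not part of the syllabus.

```python
# One TDD cycle for a hypothetical is_leap_year function.

# Step 1 (red): the tests are written first. At this point they
# would fail, because is_leap_year does not exist yet.
def test_century_not_leap():
    assert is_leap_year(1900) is False

def test_divisible_by_400_is_leap():
    assert is_leap_year(2000) is True

# Step 2 (green): write the minimum code that makes the tests pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3 (refactor): tidy the code while the tests stay green.
test_century_not_leap()
test_divisible_by_400_is_leap()
```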

5.4 Defect Life‑Cycle (AO3)

  1. Defect identified (during testing).
  2. Defect recorded in a defect tracker.
  3. Defect classified (severity, priority).
  4. Defect assigned to a developer.
  5. Defect fixed and code checked in.
  6. Defect re‑tested (regression test).
  7. Defect closed if the fix is successful.
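The seven steps can be modelled as a small state machine. The state names below simply mirror the list above; the extra `retested → assigned` transition covers a fix that fails re‑testing.

```python
# Defect life-cycle as a state machine: each state maps to the
# set of states it may legally move to.
ALLOWED = {
    "identified": {"recorded"},
    "recorded": {"classified"},
    "classified": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"retested"},
    "retested": {"closed", "assigned"},  # failed re-test returns to a developer
    "closed": set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a defect to new_state, rejecting illegal jumps."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state

# A defect whose fix passes re-testing travels the whole chain:
state = "identified"
for step in ["recorded", "classified", "assigned", "fixed", "retested", "closed"]:
    state = advance(state, step)
assert state == "closed"
```

A defect tracker enforces exactly this kind of rule, so a defect cannot be closed without being re‑tested first.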

6. Relationship Between Strategy and Plan

The test strategy answers what and why – it sets the overall direction and quality goals. The test plan answers how, when and by whom – it provides the detailed roadmap for execution. Keeping the two aligned ensures testing is both comprehensive and efficient.

7. Example: Testing a Simple Sorting Routine

7.1 Strategy Excerpt

  • Scope: Sort routine and its public interface.
  • Testing Levels: Unit (sorting function), Integration (sorting module with I/O handling).
  • Testing Types: Functional (correct ordering) and non‑functional (performance on large arrays).
  • Design Techniques: Black‑box (equivalence partitioning of input size), White‑box (100 % statement coverage).
  • Risk: Incorrect handling of duplicate values, empty array, and boundary conditions.

7.2 Plan Excerpt (selected sections)

  • Test Items – Module BubbleSort, function sort(int[] a).
  • Test Approach:
    • Black‑box: equivalence classes – empty array, single element, already sorted, reverse‑sorted, random order, arrays with duplicates.
    • White‑box: achieve 100 % statement coverage using path testing.
  • Test Cases (summary):
    • TC‑01: Input [] → Expected []
    • TC‑02: Input [5] → Expected [5]
    • TC‑03: Input [9,1,4,7] → Expected [1,4,7,9]
    • TC‑04: Input [3,3,1,2] → Expected [1,2,3,3]
    • TC‑05: Large array (10 000 random ints) – execution time ≤ 0.5 s.
  • Entry Criteria – Unit‑test framework installed, code compiled.
  • Exit Criteria – All test cases executed, ≥ 95 % pass rate, defect density ≤ 0.1 per 100 lines.
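Assuming the sort routine is written in Python (the plan excerpt shows a Java‑style signature, so this is a translation), test cases TC‑01 to TC‑04 can be executed directly:

```python
def bubble_sort(a):
    """Straightforward bubble sort returning a new sorted list."""
    a = list(a)  # leave the caller's list untouched
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # early exit once a pass makes no swaps
            break
    return a

# Test cases TC-01 to TC-04 from the plan excerpt:
assert bubble_sort([]) == []                        # TC-01: empty array
assert bubble_sort([5]) == [5]                      # TC-02: single element
assert bubble_sort([9, 1, 4, 7]) == [1, 4, 7, 9]    # TC-03: random order
assert bubble_sort([3, 3, 1, 2]) == [1, 2, 3, 3]    # TC-04: duplicates
```

TC‑05 (the performance case) would be run separately with a timer, since it checks a non‑functional requirement rather than correctness.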

8. Test‑Case Structure (for reference)

  • Test Case ID – Unique identifier (e.g., TC‑01)
  • Title – Brief description of the scenario
  • Pre‑conditions – State of the system before execution
  • Test Steps – Exact actions to be performed
  • Expected Result – What should happen if the software works correctly
  • Actual Result – Outcome observed during execution
  • Status – Pass / Fail
  • Comments – Notes on defects, variations or observations
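In code, one such record might be a simple data class. The field names below mirror the list above and are illustrative; real test‑management tools define their own schemas.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test-case record with the fields listed above."""
    case_id: str
    title: str
    preconditions: str
    steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""          # filled in during execution
    status: str = "Not run"          # becomes Pass / Fail once executed
    comments: str = ""

tc = TestCase(
    case_id="TC-03",
    title="Sort an unsorted four-element array",
    preconditions="Sort module compiled and loaded",
    steps=["Call sort with [9, 1, 4, 7]"],
    expected_result="[1, 4, 7, 9]",
)

# After execution, record the outcome and derive the status:
tc.actual_result = "[1, 4, 7, 9]"
tc.status = "Pass" if tc.actual_result == tc.expected_result else "Fail"
assert tc.status == "Pass"
```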

9. Suggested Diagram

Flowchart showing interaction between Test Strategy, Test Plan, Test Cases, Execution and Reporting

10. Mapping to Cambridge Assessment Objectives

  • AO1 – Knowledge: define “test strategy”, “test plan”, entry/exit criteria, risk, metrics, the defect life‑cycle, regression testing and static analysis.
  • AO2 – Analysis: identify appropriate testing levels, types and design techniques for a given program; assess risks and select suitable test data.
  • AO3 – Design & Evaluation: construct a coherent test plan from a supplied strategy; evaluate test coverage, propose improvements and interpret test metrics.

11. Key Take‑aways

  • Test strategy = overarching vision (what & why); test plan = detailed roadmap (how, when, who).
  • Both documents must be reviewed and updated as the project evolves.
  • Clear entry and exit criteria prevent premature testing or endless cycles.
  • Metrics gathered from the test plan help assess quality, inform decisions and guide future improvements.
  • Linking strategy, plan and concrete test cases (as in the sorting‑algorithm example) demonstrates the full testing cycle required by the Cambridge syllabus.

Note: This note focuses on the testing unit (12.3) of the Cambridge AS & A‑Level Computer Science (9618) syllabus. Other units (e.g., Data Representation, Networks, Security, AI, Databases) are covered in separate lecture notes.
