Know and understand how to: compare the solution with the original task requirements; identify any limitations and necessary improvements to the system; and evaluate users' responses to the results of testing the system.

Systems Life Cycle – Evaluation Phase (ICT 0417)

1. Why the Evaluation Phase is Required

  • Checks that the finished system truly satisfies the original task requirements.
  • Confirms that both functional and non‑functional (efficiency, usability, security, etc.) specifications have been met.
  • Identifies any limitations and proposes improvements before the system is handed over or entered into maintenance.
  • Provides documented evidence for stakeholders that the system is fit for purpose.

2. Linking Evaluation to Earlier Documentation

Evaluation must refer back to the artefacts produced in the earlier stages of the project. The table below shows the key documents and the purpose they serve during evaluation.

Document | Why it is needed in evaluation
Task Requirements Document | Baseline against which every requirement is checked.
Technical Design & User Documentation | Shows how the system was intended to work and how users should interact with it.
Test Plans, Validation Routines & Test Data | Provide the evidence that the system behaved as expected during testing.
Risk & Safety Assessment (Sections 8.2–8.3) | Ensures compliance with data-protection, e-safety and other legal requirements.

Quick reference diagram – the image maps the four documents to the main evaluation checklist items (requirements, performance, usability, security, compatibility).

[Figure: Mapping of documentation to the evaluation checklist.]

3. Comparing the Solution with the Original Task Requirements

  1. List every functional and non‑functional requirement from the brief.
  2. For each requirement, record its status (a simple tracking sketch follows this list):

    • ✔ Met
    • ✖ Not met
    • ➖ Partially met

  3. Note any deviations, omissions or enhancements and reference the supporting test evidence.
  4. For non‑functional requirements explicitly comment on:

    • Efficiency – resource use, response time, scalability.
    • Ease of Use – intuitiveness of the UI, accessibility.
    • Overall Appropriateness – does the solution meet the needs of the target users and context?
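This record is usually kept in a spreadsheet, but the same idea can be sketched in a few lines of code. The example below is a minimal, illustrative Python sketch – the requirement names and statuses are invented for demonstration – that stores each requirement with its status and prints a simple coverage summary.

```python
# Minimal requirement-coverage tracker (illustrative data only).
requirements = [
    {"id": "R1", "text": "Search bookings by date", "type": "functional",     "status": "met"},
    {"id": "R2", "text": "Response time <= 2 s",    "type": "non-functional", "status": "not met"},
    {"id": "R3", "text": "Export report as PDF",    "type": "functional",     "status": "partial"},
]

symbols = {"met": "✔", "not met": "✖", "partial": "➖"}

# One line per requirement, then a coverage summary.
for r in requirements:
    print(f'{r["id"]} {symbols[r["status"]]} {r["text"]} ({r["type"]})')

met = sum(1 for r in requirements if r["status"] == "met")
print(f"Coverage: {met} of {len(requirements)} requirements fully met")
```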

Example of a justified judgement (AO3)

“The average response time recorded during load testing was 3.4 s, exceeding the 2 s limit set in the task brief. Because the system is intended for real‑time ticket booking, this delay would cause a poor user experience and could lead to lost sales. Therefore, the system does not meet the efficiency requirement and must be optimised before release.”

4. Identifying Limitations and Planning Improvements

Use the table below to record each limitation, suggest an improvement, and give a rough effort estimate (Low / Medium / High). An Efficiency row is included separately because the syllabus requires efficiency to be commented on explicitly.

Area | Typical Limitations | Suggested Improvement | Effort
Functionality | Missing features; incorrect calculations. | Add required modules; revise logic; retest. | Medium
Performance | Slow response, high CPU/memory usage, poor scalability. | Optimise queries; introduce caching; review algorithms; load-test. | High
Efficiency | Response time > required limit; excessive bandwidth use. | Refactor code for faster execution; compress assets; implement lazy loading. | Medium
Usability | Confusing navigation, inadequate error messages, accessibility gaps. | Redesign menu; add tooltips; apply WCAG 2.1 guidelines. | Low
Security & Risk | Weak authentication, XSS/SQL-injection risks, missing audit trails, non-compliance with GDPR/e-safety. | Implement input sanitisation; enforce strong password policy; add logging; produce GDPR privacy notice and age-verification workflow. | High
Compatibility | Fails on certain browsers, OS or devices. | Update CSS/JS for older browsers; test on all target platforms. | Low
Ethical & Legal | Non-compliance with data-protection laws, missing parental consent for under-13 users. | Review GDPR/Data Protection Impact Assessment; add consent workflow; display clear privacy notices. | Medium

5. Evaluating Users’ Responses to Testing Results

Both qualitative and quantitative feedback should be collected and analysed.

  • Qualitative data – interview excerpts, open‑ended questionnaire comments, observation notes.
  • Quantitative data – error rates, task‑completion times, satisfaction scores (e.g., Likert 1‑5), number of support tickets.

Analysis Steps

  1. Enter all data into a single spreadsheet or simple database.
  2. Identify trends (e.g., “navigation dead‑ends reported by 8 % of users”).
  3. Prioritise issues using an Impact × Frequency matrix (see the sketch after this list).
  4. Link each issue back to a specific requirement or performance metric.
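As an illustration of step 3, the minimal Python sketch below (issue names and 1–5 ratings are invented for demonstration) gives each issue an impact and a frequency rating, computes a priority score as impact × frequency, and sorts the issues so the highest-priority ones appear first.

```python
# Illustrative Impact × Frequency prioritisation (invented data, 1 = low, 5 = high).
issues = [
    {"issue": "Navigation dead-ends",      "impact": 3, "frequency": 2},
    {"issue": "Page freezes after Submit", "impact": 5, "frequency": 4},
    {"issue": "Low colour contrast",       "impact": 2, "frequency": 1},
]

# Priority score = impact × frequency.
for i in issues:
    i["priority"] = i["impact"] * i["frequency"]

# Highest priority first – these should be fixed first.
for i in sorted(issues, key=lambda x: x["priority"], reverse=True):
    print(f'{i["priority"]:>2}  {i["issue"]}')
```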

Sample user‑feedback excerpt (qualitative)

“I liked the colour scheme, but after clicking ‘Submit’ the page froze for several seconds. I wasn’t sure whether my data had been saved, so I tried again and ended up creating duplicate entries.” – Test user, age 16

Sample quantitative result (quantitative)

  • Average task‑completion time: 42 seconds (target ≤ 30 seconds) – Impact: High
  • Satisfaction score (5‑point Likert): 3.2/5 – Impact: Medium
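A quantitative result only becomes a finding once it is compared with its target. The short sketch below shows one way to flag metrics that miss their targets; the completion-time figures are the sample values above, while the satisfaction target of 4.0 is an assumed value used purely for illustration.

```python
# Compare measured results against targets (sample figures; satisfaction target assumed).
metrics = [
    # (name, measured, target, rule) – "max" means lower is better, "min" means higher is better
    ("Task-completion time (s)",  42.0, 30.0, "max"),
    ("Satisfaction score (of 5)",  3.2,  4.0, "min"),
]

for name, measured, target, rule in metrics:
    ok = measured <= target if rule == "max" else measured >= target
    comparator = "<=" if rule == "max" else ">="
    print(f"{name}: {measured} (target {comparator} {target}) -> {'met' if ok else 'NOT met'}")
```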

6. Sample Evaluation Checklist

Evaluation Area | Criteria (What to check) | Findings (Evidence) | Recommended Action
Requirement Coverage | All functional & non-functional requirements met? | 5 of 12 functional requirements only partially met. | Develop missing modules; retest against requirements.
Performance & Efficiency | Response ≤ 2 s; CPU ≤ 40 %; scalable to 200 users. | Avg. response = 3.4 s; CPU = 55 % under peak load; scalability failed at 150 users. | Optimise DB queries; introduce caching; repeat scalability test.
Usability (Ease of Use) | User satisfaction ≥ 80 %; error-free navigation; WCAG compliance. | Satisfaction = 72 %; 8 % reported dead-ends; minor colour-contrast issues. | Redesign navigation; add contextual help; fix contrast.
Security & Risk (Legal & e-Safety) | No critical vulnerabilities; GDPR & data-protection compliance. | Two medium-risk XSS issues; missing GDPR privacy notice; no consent record for under-13 users. | Implement sanitisation; add privacy notice; create age-verification & parental-consent workflow.
Compatibility | Works on Windows, macOS, Android, iOS (all supported versions). | Fails on iOS 13 Safari – layout broken. | Update CSS for older WebKit; retest on iOS 13.
Ethical & Legal | Data handling complies with GDPR/e-safety guidelines. | No consent record for users under 13. | Introduce age-verification & parental-consent workflow.

7. Structure of an Evaluation Report (Template)

Students should produce a concise, well‑organised report that follows the syllabus expectations. The suggested word‑count is 350‑500 words (excluding appendices).

  1. Executive Summary – brief overview of key findings and top‑level recommendations.
  2. Introduction

    • Purpose of the evaluation.
    • Reference to the original task brief.

  3. Methodology

    • Documents consulted (requirements, design, test plans, risk assessment).
    • Data‑collection methods (user questionnaires, performance monitoring tools, security scans).
    • Validation routines and test data used.

  4. Findings

    • Requirement coverage table (✔/✖/➖).
    • Performance & efficiency results.
    • Usability/ease‑of‑use scores.
    • Security, legal and ethical compliance.
    • Summary of user feedback – include at least one quoted comment and key quantitative scores.

  5. Risk & Compliance Assessment – identify any safety, security or legal risks and rate their severity.
  6. Recommendations

    • Improvement actions (linked to the limitation table).
    • Effort estimate (Low/Medium/High).
    • Prioritisation (based on Impact × Frequency).

  7. Conclusion – overall judgement on whether the system meets the brief and next steps (maintenance, further development, or sign‑off).
  8. Appendices – raw data, test scripts, screenshots, user‑feedback extracts, etc.

8. Connecting Evaluation to the Next Life‑Cycle Stage

  • Evaluation feeds directly into the Maintenance / Implementation of Improvements stage (Section 7.6 of the syllabus).
  • Approved recommendations are turned into new project tasks, timelines and resource allocations.
  • Re‑testing of the revised components is carried out before final sign‑off.
  • The system then moves to:

    • Bug‑fixing and feature enhancement.
    • Final user training and hand‑over.
    • Ongoing maintenance and future evaluation cycles.

9. Key Points to Remember for the IGCSE Exam

  • Always start the evaluation by referencing the original task requirements and the four key documents (requirements, design, test plans, risk assessment).
  • Explicitly comment on efficiency, ease of use and overall appropriateness of the solution.
  • Separate limitations (what is wrong) from improvements (what will be done).
  • Use both qualitative (quotes) and quantitative (scores, timings) data when analysing user feedback.
  • Include a clear security & legal component – GDPR, data‑protection and e‑safety compliance.
  • Present findings in organised tables or checklists; examiners look for tidy, easy‑to‑read answers.
  • Show how the evaluation informs the maintenance phase and any further development decisions.
  • Keep the report within the suggested word‑count (350‑500 words) and label any appendices clearly.