Know and understand the use of live data

7. The Systems Life Cycle – Use of Live Data

1. Introduction – Core ICT Concepts (Syllabus 1‑5)

Before considering live (real‑time) data, students must understand the basic hardware, software and network components that make a live‑data system possible.

  • Computer hardware – CPU, RAM, ROM, storage (SSD, HDD, cloud), motherboard, power supply.
  • Input devices for live data – sensors (temperature, pressure, motion), RFID readers, barcode scanners, cameras, GPS receivers, microphones.
  • Output devices – monitors, dashboards, printers, speakers, actuators (e.g., motor controllers).
  • Storage media – local files, databases, data warehouses, cloud storage; why streaming often bypasses permanent storage until it is required for analysis.
  • Networks – LAN, WLAN, WAN, Bluetooth, Wi‑Fi, cellular (4G/5G); protocols relevant to live data (HTTPS, WebSocket, MQTT, UDP). Include a brief note on firewalls, routers and the need for secure connections (a minimal protocol sketch follows this list).
  • Effects of IT – social, economic and environmental impacts; how live data can improve efficiency, safety and decision‑making but also raise privacy concerns.
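
As a concrete illustration of one of these protocols, the fragment below is a minimal sketch of subscribing to live readings over MQTT with the paho‑mqtt library; the broker address and topic name are hypothetical placeholders.

```python
# Minimal sketch: receiving live sensor readings over MQTT (paho-mqtt).
# "broker.example.com" and "sensors/temperature" are hypothetical placeholders.
import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # Each message carries one live reading as a UTF-8 payload.
    print(f"{message.topic}: {message.payload.decode()}")

# Note: paho-mqtt 2.x requires mqtt.Client(mqtt.CallbackAPIVersion.VERSION2).
client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)   # default unencrypted MQTT port
client.subscribe("sensors/temperature")
client.loop_forever()                         # process incoming messages until stopped
```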

2. What Is Live Data?

Live (real‑time) data is information that is generated, captured and updated continuously as events occur, allowing it to be processed, analysed or displayed immediately with little or no perceptible delay.

3. Why Live Data Matters

  • Supports timely decision‑making (e.g., stock‑level alerts).
  • Enables monitoring of dynamic processes (sensor readings, GPS locations).
  • Improves user experience with up‑to‑date information.
  • Allows automated responses (fraud detection, predictive maintenance).

4. Safety & Security (Syllabus 8‑9)

Live‑data systems must address both physical safety and e‑safety.

  • Physical safety – safe installation of sensors, cable management, avoiding exposure to moving parts or high voltages.
  • E‑safety – password hygiene, use of anti‑malware, awareness of phishing and social engineering.
  • Data‑protection legislation – GDPR / Data Protection Act basics: consent, purpose limitation, secure storage.
  • Threats to live streams – interception, spoofing, denial‑of‑service, ransomware.
  • Mitigation – TLS/SSL encryption, API keys, firewalls, regular patching, access‑control lists, audit logs.
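
To make these mitigations concrete, here is a minimal sketch of polling a live feed over HTTPS with an API key using the requests library, which verifies the server's TLS certificate by default; the URL and header name are hypothetical placeholders.

```python
# Minimal sketch: securely polling a live data feed over HTTPS with an API key.
# The URL and the X-API-Key header are hypothetical placeholders.
import requests

API_KEY = "replace-with-a-real-key"           # never hard-code keys in production code
URL = "https://api.example.com/live/temperature"

response = requests.get(
    URL,
    headers={"X-API-Key": API_KEY},           # simple API-key authentication
    timeout=5,                                # fail fast if the feed is unreachable
)
response.raise_for_status()                   # surface HTTP errors (e.g. 401, 503)
reading = response.json()                     # e.g. {"temp": 22.5, "unit": "C"}
print(reading)
```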

5. Systems Life Cycle – Live‑Data Focus (Syllabus 7)

Each stage of the life cycle must explicitly consider live‑data requirements. The phases below follow the Cambridge terminology; each lists its live‑data activities, key considerations and the Assessment Objectives (AO) addressed.

Planning
  • Identify the need for live information (e.g., “system must show current weather”).
  • Research possible data sources (APIs, IoT sensors, web feeds) and estimate cost.
  • Preliminary risk assessment – latency, reliability, security, physical safety.
Key considerations: source availability, cost, update frequency, risk register. Assessment objectives: AO1 – knowledge of concepts; AO2 – analysis of needs.

Analysis of the current system
  • Gather requirements using research methods:
    • Observation – watch operators on the shop floor.
    • Interview – ask users what live information they need.
    • Questionnaire – collect requirements from a larger group.
    • Document review – examine logs, sensor specs, API docs.
  • Define functional requirements (update frequency, latency tolerance, accuracy).
  • Define non‑functional requirements (security, scalability, reliability).
  • Produce a simple data‑flow diagram: Source → Processing → Output (a minimal sketch follows this phase).
Key considerations: latency tolerance, data volume, security level, stakeholder expectations. Assessment objectives: AO1 – identify terminology; AO2 – analyse requirements; AO3 – propose solutions.

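A minimal sketch of that Source → Processing → Output flow, assuming a simulated sensor as the source; all names and the acceptable range are illustrative.

```python
# Minimal sketch of the Source → Processing → Output data flow.
# The "sensor" is simulated with random values; in a real system the source
# would be an API, a socket or an IoT device.
import random
import time
from datetime import datetime, timezone

def read_sensor():
    # Source: one time-stamped reading per call (simulated).
    return {"timestamp": datetime.now(timezone.utc).isoformat(),
            "sensorID": "T1",
            "value": round(random.uniform(18.0, 25.0), 1)}

def process(reading):
    # Processing: flag readings outside an acceptable range (illustrative limits).
    reading["alert"] = not (0.0 <= reading["value"] <= 50.0)
    return reading

for _ in range(5):                 # Output: display five live readings
    print(process(read_sensor()))
    time.sleep(1)                  # wait for the next update
```
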
Design of file/data structures
  • Select technologies (WebSocket, MQTT, HTTPS, streaming services).
  • Design data formats:
    • Time‑stamped records: {timestamp, sensorID, value}
    • Choose CSV, JSON, XML or binary depending on volume.
  • Specify input & output formats (e.g., JSON API returns {"temp":22.5,"unit":"C"}).
  • Validation routines (sketched after this phase):
    • Type check, range check, format check (ISO‑8601), missing‑value handling.
  • Plan error handling, fallback sources and data‑integrity checks.
Key considerations: scalability, fault tolerance, standardised data formats. Assessment objectives: AO2 – design appropriate structures; AO3 – justify design choices.

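A minimal sketch of those validation routines for the {timestamp, sensorID, value} record above; the accepted range of −40 to 60 is an illustrative assumption.

```python
# Minimal sketch of validation routines for a {timestamp, sensorID, value} record.
# The accepted range (-40 to 60) is an illustrative assumption.
from datetime import datetime

def validate(record):
    errors = []
    # Missing-value check: every expected field must be present and non-empty.
    for field in ("timestamp", "sensorID", "value"):
        if field not in record or record[field] in (None, ""):
            errors.append(f"missing field: {field}")
    # Type and range checks on the reading itself.
    value = record.get("value")
    if not isinstance(value, (int, float)):
        errors.append("value is not numeric")
    elif not -40.0 <= value <= 60.0:
        errors.append("value out of range")
    # Format check: timestamp must be ISO-8601.
    try:
        datetime.fromisoformat(str(record.get("timestamp")))
    except ValueError:
        errors.append("timestamp is not ISO-8601")
    return errors                      # an empty list means the record is valid

print(validate({"timestamp": "2024-05-01T12:00:00", "sensorID": "T1", "value": 22.5}))
```
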
Development
  • Code modules to retrieve, process and display live data; integrate authentication and encryption (a retrieval sketch follows this phase).
  • Version control (e.g., Git) and regular builds.
Key considerations: maintainability, coding standards, secure coding practices. Assessment objectives: AO3 – implement a solution.

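A minimal sketch of a retrieval module that reads live messages over an encrypted WebSocket using the websockets library; the endpoint URL is a hypothetical placeholder.

```python
# Minimal sketch: retrieving live messages over a WebSocket (websockets library).
# "wss://feed.example.com/live" is a hypothetical placeholder; wss:// means the
# stream is encrypted with TLS.
import asyncio
import websockets

async def receive_live_feed():
    async with websockets.connect("wss://feed.example.com/live") as ws:
        while True:
            message = await ws.recv()      # wait for the next live update
            print("received:", message)    # hand over to processing/display here

asyncio.run(receive_live_feed())
```
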
Testing
  • Create a test plan covering the following (worked cases appear after this phase):
    • Normal test data – typical traffic rates.
    • Abnormal test data – corrupted packets, out‑of‑range values.
    • Extreme test data – burst traffic, maximum‑size messages.
  • Simulate live streams with a test server or recorded logs.
  • Measure latency, data loss, duplication; verify validation routines.
  • Document expected vs. actual results.
Key considerations: error handling, test coverage, performance metrics. Assessment objectives: AO3 – evaluate the solution.

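A minimal worked example of normal, abnormal and extreme test data, assuming the validate() function sketched in the design phase is available in the same file; the expected result is recorded next to each case so actual results can be compared.

```python
# Minimal sketch: normal, abnormal and extreme test data for a validation routine.
# Assumes validate() from the design-phase sketch is defined in the same file.
test_cases = [
    # (description, record, expect_valid)
    ("normal reading", {"timestamp": "2024-05-01T12:00:00", "sensorID": "T1", "value": 22.5}, True),
    ("abnormal: out of range", {"timestamp": "2024-05-01T12:00:01", "sensorID": "T1", "value": 999}, False),
    ("abnormal: bad timestamp", {"timestamp": "yesterday", "sensorID": "T1", "value": 20.0}, False),
    ("extreme: boundary value", {"timestamp": "2024-05-01T12:00:02", "sensorID": "T1", "value": 60.0}, True),
    ("extreme: missing field", {"timestamp": "2024-05-01T12:00:03", "value": 20.0}, False),
]

for description, record, expect_valid in test_cases:
    actual_valid = validate(record) == []          # expected vs. actual result
    outcome = "PASS" if actual_valid == expect_valid else "FAIL"
    print(f"{outcome}: {description}")
```
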
Implementation
  • Choose an implementation method (Cambridge requirement):
    • Direct change‑over – switch instantly (high risk).
    • Parallel – run old and new systems together.
    • Pilot – limited rollout (recommended for live feeds).
    • Phased – introduce functionality step‑by‑step.
  • Deploy connections to production data sources; configure monitoring dashboards and alerts (a simple watchdog sketch follows this phase).
  • Provide user training and a rollback plan.
Key considerations: rollback plan, monitoring tools, user acceptance. Assessment objectives: AO3 – implement and manage change.

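One simple way to support the monitoring and alerting point above is a watchdog that flags a silent feed; the 30‑second threshold below is an illustrative assumption.

```python
# Minimal sketch: alert when a live feed has gone silent for too long.
# The 30-second threshold is an illustrative assumption.
import time

MAX_SILENCE_SECONDS = 30
last_update = time.monotonic()        # refreshed whenever a new reading arrives

def record_update():
    global last_update
    last_update = time.monotonic()

def check_feed():
    silence = time.monotonic() - last_update
    if silence > MAX_SILENCE_SECONDS:
        print(f"ALERT: no live update for {silence:.0f} s")   # raise a dashboard alert here

record_update()
time.sleep(1)
check_feed()                          # no alert yet: the feed updated 1 s ago
```
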
Documentation
  • Technical documentation:
    • System specification (including live‑data requirements).
    • Data‑flow diagram and API reference.
    • Database schema, validation rules, file formats.
  • User documentation:
    • User guide – interpreting live dashboards.
    • Troubleshooting guide – “no update”, error messages.
  • Change‑log for any updates to data sources or protocols.
Key considerations: clarity, completeness, version control. Assessment objectives: AO1 – present information; AO2 – organise documentation.

Evaluation
  • Measure performance against the original specification (a metrics sketch follows this phase):
    • Latency < 200 ms for 95 % of updates.
    • Uptime ≥ 99 % over a month.
    • Data‑accuracy ≥ 99.5 % (validated against a trusted source).
    • User satisfaction ≥ 80 % (survey).
  • Assess ease of use, efficiency and appropriateness of the live‑data solution.
  • Identify improvements – additional sources, tighter security, better visualisation.
Key considerations: objective criteria, feedback loops, cost‑benefit analysis. Assessment objectives: AO3 – evaluate the solution.

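A minimal sketch of checking two of these criteria (95th‑percentile latency and monthly uptime) from logged measurements; the sample figures are illustrative.

```python
# Minimal sketch: evaluating latency and uptime against the criteria above.
# The logged figures are illustrative sample data.
latencies_ms = [120, 95, 180, 210, 140, 90, 160, 110, 130, 150]   # one value per update
minutes_up, minutes_total = 43050, 43200                          # availability log for one month

sorted_latencies = sorted(latencies_ms)
index = int(0.95 * len(sorted_latencies))                         # position of the 95th percentile
p95 = sorted_latencies[min(index, len(sorted_latencies) - 1)]
uptime = 100 * minutes_up / minutes_total

print(f"95th-percentile latency: {p95} ms (target < 200 ms)")
print(f"Uptime: {uptime:.2f} % (target >= 99 %)")
```
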
Maintenance
  • Continuously monitor source reliability and latency.
  • Update APIs, certificates or protocol versions when providers change.
  • Archive historic live data; optimise storage (compression, partitioning) – an archiving sketch follows this phase.
  • Review validation rules when new sensor types are added.
Key considerations: change management, security patches, documentation updates. Assessment objectives: AO3 – maintain and improve the system.
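
A minimal sketch of the archiving step: compressing a day's plain‑text log with gzip so historic live data takes less storage; the file name is an illustrative placeholder.

```python
# Minimal sketch: archive a day's live-data log by compressing it with gzip.
# File names are illustrative placeholders.
import gzip
import shutil

source = "live_readings_2024-05-01.csv"
archive = source + ".gz"

with open(source, "rb") as plain, gzip.open(archive, "wb") as compressed:
    shutil.copyfileobj(plain, compressed)     # stream the file into the compressed archive

print(f"archived {source} -> {archive}")
```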

Checklist for Students (AO1‑AO3)

  • AO1 – Knowledge: define live data, list relevant hardware, protocols and safety concepts.
  • AO2 – Application: analyse requirements, design appropriate data structures, choose suitable validation and error‑handling methods.
  • AO3 – Evaluation: develop, test, implement, document and critically evaluate a live‑data system against measurable criteria.

6. Benefits of Using Live Data

  • Immediate visibility of critical information.
  • Enhanced automation (alerts, control actions).
  • More accurate reports and dashboards.
  • Competitive advantage through faster response.

7. Challenges & Mitigation Strategies

  • Latency – use efficient protocols (WebSocket, UDP), minimise network hops.
  • Data volume – buffer, compress, or sample selectively (a minimal sketch follows this list).
  • Reliability – provide redundant sources and fallback mechanisms.
  • Security – encrypt streams (TLS), enforce authentication and access controls.
  • Data quality – implement validation routines and handle missing/corrupt values.
  • Physical safety – secure sensor installations, avoid exposed wiring.
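
A minimal sketch of two of the data‑volume mitigations above: a bounded buffer that keeps only the most recent readings and selective sampling that keeps every tenth value; both sizes are illustrative.

```python
# Minimal sketch: bounded buffering and selective sampling to control data volume.
# Buffer size and sampling rate are illustrative assumptions.
from collections import deque

recent = deque(maxlen=100)     # bounded buffer: oldest readings are discarded automatically
sampled = []

for i in range(1000):          # stand-in for a stream of live readings
    recent.append(i)
    if i % 10 == 0:            # selective sampling: keep every 10th reading
        sampled.append(i)

print(len(recent), len(sampled))   # 100 buffered, 100 sampled
```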

8. Real‑World Examples (Contextual Applications – Syllabus 6)

  • Banking – live transaction feeds for fraud detection (ICT in finance).
  • Manufacturing – sensor data for predictive maintenance (control systems).
  • Retail – real‑time inventory displayed on e‑commerce sites (e‑business).
  • Transport – live GPS tracking of buses and trains (communication & navigation).
  • Healthcare – live patient monitoring (computers in medicine).
  • Weather services – continuously updated temperature and precipitation data (information systems).

9. Cross‑Curricular Links (Sections 11‑21)

Live‑data applications in other ICT syllabus areas:

  • File management (Section 11) – storing time‑stamped CSV or JSON logs for later analysis; using file‑compression utilities (a minimal logging sketch follows this list).
  • Images & graphics (Sections 12‑13) – dynamic icons that change colour based on sensor status; overlaying live data on maps.
  • Charts & graphs (Section 16) – real‑time line chart of temperature, stock levels or traffic flow.
  • Spreadsheets (Section 20) – importing a live CSV feed into a spreadsheet for ad‑hoc calculations and what‑if analysis.
  • Website authoring (Section 21) – embedding a live data widget (e.g., weather, news ticker) using JavaScript and APIs.
  • Databases (Section 19) – storing live sensor readings in a relational database; using triggers to raise alerts.
  • Presentations (Section 18) – linking a presentation to a live chart that updates during a meeting.
  • Audience & communication (Section 10) – designing dashboards for different users: managers need summaries, technicians need raw values.
  • Document production (Section 15) – generating automated reports that pull the latest data at the moment of printing.
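
A minimal sketch of the file‑management application above: appending a time‑stamped reading to a CSV log with Python's csv module; the file name is an illustrative placeholder.

```python
# Minimal sketch: appending a time-stamped live reading to a CSV log file.
# The file name is an illustrative placeholder.
import csv
from datetime import datetime, timezone

reading = {"timestamp": datetime.now(timezone.utc).isoformat(),
           "sensorID": "T1",
           "value": 22.5}

with open("live_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "sensorID", "value"])
    if f.tell() == 0:                 # write the header only when the file is new/empty
        writer.writeheader()
    writer.writerow(reading)
```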

10. Summary – Live‑Data Activities Across the Life Cycle

  • Planning – identify need, source options, risk assessment. Key considerations: source availability, cost, update frequency, safety risks.
  • Analysis of the current system – gather requirements (observation, interview, questionnaire, document review); define functional & non‑functional specs. Key considerations: latency tolerance, data volume, security level, stakeholder expectations.
  • Design of file/data structures – select protocols; design data formats; specify validation & error‑handling. Key considerations: scalability, fault tolerance, standards compliance.
  • Development – code retrieval, processing and display modules; integrate authentication/encryption. Key considerations: maintainability, secure coding, version control.
  • Testing – create a test plan (normal, abnormal, extreme); simulate streams; measure latency, loss, duplication. Key considerations: coverage, performance metrics, documentation of results.
  • Implementation – choose an implementation method (pilot, parallel, phased, direct); connect to live feeds; train users; prepare rollback. Key considerations: rollback plan, monitoring tools, user acceptance.
  • Documentation – produce technical spec, data‑flow diagram, API guide, user & troubleshooting manuals; maintain change‑log. Key considerations: clarity, completeness, version control.
  • Evaluation – measure latency, uptime, accuracy, user satisfaction; recommend improvements. Key considerations: objective criteria, feedback loops, cost‑benefit.
  • Maintenance – monitor performance; update APIs/certificates; archive data; revise validation rules. Key considerations: change management, security patches, documentation updates.

11. Key Points to Remember

  • Live data is dynamic – it must be considered at every life‑cycle stage.
  • Analysis requires explicit research methods to capture real‑time requirements.
  • Design must include validation routines to ensure data quality and integrity.
  • Testing uses normal, abnormal and extreme data sets; results are recorded in a test plan.
  • Implementation can follow direct, parallel, pilot or phased methods; a pilot is often safest for live feeds.
  • Comprehensive technical and user documentation is mandatory.
  • Evaluation uses measurable criteria (latency, uptime, accuracy, user satisfaction).
  • Safety & security cover both physical hazards and e‑safety/legal obligations.
  • Live‑data systems link directly to file management, graphics, charts, spreadsheets, databases, presentations and web authoring – reinforcing other ICT syllabus sections.
