Lesson Plan

Grade: Date: 17/01/2026
Subject: Computer Science
Lesson Topic: Show understanding of the characteristics of massively parallel computers
Learning Objective/s:
  • Describe the key characteristics of massively parallel computers, including concurrency, fine‑grained parallelism, interconnection networks, distributed memory, low power per PE, and fault tolerance.
  • Explain the performance metrics (speedup, efficiency, scalability, throughput) and how they are calculated.
  • Compare SIMD, MIMD, and hybrid architectures and identify real‑world examples such as GPUs, clusters, and TPUs.
  • Apply a parallel programming model (e.g., MPI or CUDA) to outline task distribution across processing elements.
  • Evaluate the advantages and challenges of massive parallelism in practical scenarios.
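The metrics objective can be supported with a short worked example. The sketch below is an illustrative Python snippet (not from the lesson materials) that encodes the standard definitions of speedup, efficiency, and throughput; the timing values are assumed for the example.

```python
# Illustrative sketch of the performance metrics named in the objectives.
# T1 = serial runtime, Tp = runtime on p processing elements.

def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E = S / p, the fraction of ideal speedup achieved."""
    return speedup(t_serial, t_parallel) / p

def throughput(tasks_completed, elapsed_seconds):
    """Throughput = tasks completed per unit time."""
    return tasks_completed / elapsed_seconds

# Example (assumed values): a job taking 100 s serially finishes in 5 s on 32 PEs.
s = speedup(100.0, 5.0)         # 20x faster
e = efficiency(100.0, 5.0, 32)  # 0.625, i.e. 62.5% of ideal
```

Working the example by hand on the board (S = 100/5 = 20, E = 20/32 = 0.625) reinforces why efficiency typically falls as the PE count grows.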
Materials Needed:
  • Projector or interactive whiteboard
  • Slide deck covering characteristics, metrics, and architectures
  • Handout with a 2‑D mesh interconnection diagram
  • Laptop with internet access for live demo of GPU/TPU specifications
  • Printed code snippets for MPI and CUDA examples
  • Worksheet with short questions on metrics and architecture
Introduction:
Imagine a computer that contains millions of tiny processors working together at the same time. Begin by recalling how a single-core processor handles tasks sequentially, then ask students to predict what happens when many cores operate concurrently. Explain that today they will explore the defining features of massively parallel computers, and that success will be measured by their ability to describe characteristics, compare architectures, and apply a parallel programming model.
Lesson Structure:
  1. Do‑now (5') – Quick quiz on serial vs. parallel execution; discuss answers.
  2. Mini‑lecture (15') – Present key characteristics and performance metrics using slides.
  3. Diagram activity (10') – Students label a 2‑D mesh network on the handout and identify local memory and routing links.
  4. Architecture comparison (10') – Small groups examine SIMD, MIMD, and hybrid examples; each group shares one advantage.
  5. Programming model showcase (15') – Live demo of a simple MPI send/receive and a CUDA kernel; highlight load‑balancing considerations.
  6. Guided practice (10') – Worksheet questions on metrics, fault tolerance, and real‑world systems; teacher circulates for formative feedback.
  7. Recap & exit ticket (5') – Students write one advantage and one challenge of massive parallelism on a sticky note; collect for review.
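For the programming-model showcase (step 5), the task-distribution idea can be previewed without an MPI or CUDA installation. The sketch below is a plain-Python analogue, assumed for illustration only, of how a message-passing program splits N items across p ranks using a block decomposition; the function name `block_decompose` is hypothetical.

```python
# Hedged sketch: block decomposition of work across processing elements,
# as an MPI-style program might assign array chunks to ranks.

def block_decompose(n_items, n_ranks):
    """Return (start, stop) index pairs, one per rank.

    The first `extra` ranks each receive one additional item, so no rank's
    share differs from another's by more than one (simple load balancing).
    """
    base, extra = divmod(n_items, n_ranks)
    bounds = []
    start = 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

# Example: 10 items across 3 ranks -> [(0, 4), (4, 7), (7, 10)]
```

Students can check that the chunks cover every index exactly once, which connects directly to the load-balancing discussion in the demo.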
Conclusion:
Briefly recap the five learning objectives, confirming that students can now describe characteristics, explain metrics, and compare architectures. Use the exit-ticket responses to highlight common misconceptions and celebrate correct insights. For homework, ask learners to research a current massively parallel system (e.g., a new GPU or supercomputer) and write a short summary of its architecture, primary use, and one performance metric.