Lesson Plan
Grade:
Date: 17/01/2026
Subject: Computer Science
Lesson Topic: Show understanding of back‑propagation of errors and regression methods in machine learning
Learning Objectives:
  • Describe the back‑propagation algorithm and its role in training neural networks.
  • Explain how gradient descent updates weights using error gradients.
  • Compare linear, multiple, and logistic regression methods and their loss functions (see the loss‑function sketch after these objectives).
  • Apply back‑propagation to a simple network and relate the process to linear regression.
  • Evaluate the impact of learning rate and regularisation on model training.
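As a quick reference for the loss functions named in the objectives, the sketch below defines the two losses in plain NumPy: SSE for linear and multiple regression, and log loss (binary cross‑entropy) for logistic regression. The numeric values at the end are made up purely for illustration.

```python
# Loss functions referenced in the objectives: SSE (linear/multiple regression)
# versus log loss (logistic regression). Values below are illustrative only.
import numpy as np

def sse(y_true, y_pred):
    """Sum of squared errors, minimised by linear and multiple regression."""
    return 0.5 * np.sum((y_pred - y_true) ** 2)

def log_loss(y_true, p_pred):
    """Binary cross-entropy, minimised by logistic regression (y_true is 0/1)."""
    return -np.sum(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

print(sse(np.array([3.0, 5.0]), np.array([2.5, 5.5])))    # 0.25
print(log_loss(np.array([1, 0]), np.array([0.8, 0.3])))   # about 0.58
```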
Materials Needed:
  • Projector and screen for slides/diagrams
  • Whiteboard and markers
  • Laptop with Python (Jupyter Notebook) and NumPy/matplotlib
  • Printed handout of back‑propagation steps and regression formulas
  • Sample dataset (e.g., house‑price data) for demonstration
  • Worksheets for guided practice
Introduction:

Begin with a quick poll: “What everyday technologies rely on machines that learn from data?” Connect this to students’ prior knowledge of supervised learning and ask them to recall the regression techniques they have already met, such as linear and logistic regression. Explain that by the end of the lesson they will be able to trace how a neural network learns and to see how this learning ties back to classic regression.

Lesson Structure:
  1. Do‑now (5’): Short quiz on supervised vs. unsupervised learning and basic regression concepts.
  2. Mini‑lecture (10’): Walk through the back‑propagation steps with a three‑layer diagram; highlight the loss function and weight‑update formula.
  3. Guided demo (15’): Use a Jupyter Notebook to run a forward pass and a single back‑propagation update on a tiny network; show how the output layer behaves like linear regression when using a linear activation and SSE loss (a starter notebook sketch follows this list).
  4. Group activity (10’): Students complete a worksheet calculating error terms and weight updates manually, then compare the result to an ordinary‑least‑squares solution for the same data (the same sketch prints the OLS answer for checking).
  5. Check for understanding (5’): Exit‑ticket question: “In one sentence, describe how back‑propagation generalises linear regression.” Collect responses.
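For the mini‑lecture and guided demo, the weight‑update rule highlighted in step 2 can be written as w_new = w_old − η · ∂E/∂w, where E is the SSE loss and η the learning rate. Below is a minimal notebook sketch for steps 3 and 4, assuming a small made‑up dataset, a 2‑3‑1 network (sigmoid hidden layer, linear output) and a learning rate of 0.01; all of these values are placeholders and can be swapped for the sample house‑price data.

```python
# Guided-demo sketch (steps 3 and 4): one forward pass and one back-propagation
# update on a tiny 2-3-1 network (sigmoid hidden layer, linear output, SSE loss),
# followed by the ordinary-least-squares answer for the same data.
# The dataset, layer sizes, and learning rate are illustrative, not prescribed.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic dataset: 6 examples, 2 input features, 1 target value each.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0],
              [4.0, 2.0], [5.0, 4.0], [6.0, 3.0]])
y = np.array([[3.1], [3.9], [6.2], [7.8], [10.1], [11.9]])

eta = 0.01                                    # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights: input->hidden (2x3) and hidden->output (3x1), plus biases.
W1, b1 = rng.normal(scale=0.1, size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(scale=0.1, size=(3, 1)), np.zeros((1, 1))

# Forward pass
H = sigmoid(X @ W1 + b1)                      # hidden activations
y_hat = H @ W2 + b2                           # linear output (no activation)
print("SSE before update:", 0.5 * np.sum((y_hat - y) ** 2))

# Back-propagation of the error
delta_out = y_hat - y                            # dE/dy_hat for SSE, linear output
grad_W2 = H.T @ delta_out                        # same form as a linear-regression
grad_b2 = delta_out.sum(axis=0, keepdims=True)   # gradient on the hidden features
delta_hidden = (delta_out @ W2.T) * H * (1 - H)  # propagate through the sigmoid
grad_W1 = X.T @ delta_hidden
grad_b1 = delta_hidden.sum(axis=0, keepdims=True)

# Single gradient-descent update: w <- w - eta * dE/dw
W2 -= eta * grad_W2; b2 -= eta * grad_b2
W1 -= eta * grad_W1; b1 -= eta * grad_b1

H = sigmoid(X @ W1 + b1)
print("SSE after one update:", 0.5 * np.sum((H @ W2 + b2 - y) ** 2))

# Worksheet comparison (step 4): ordinary least squares on the raw inputs
X_aug = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
theta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
print("OLS weights:", theta[:2].ravel(), "OLS bias:", theta[2, 0])
```

Printing the SSE before and after the single update lets students confirm that back‑propagation moves the network towards the data, and the OLS print‑out gives them the closed‑form answer to check their hand calculations against.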
Conclusion:

Recap the key link: back‑propagation optimises weights by gradient descent, reducing to ordinary linear regression when the network’s output layer is linear and uses SSE. Students submit their exit tickets, and for homework they are asked to implement a simple back‑propagation routine on a new dataset and reflect on the effect of different learning rates.
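For the homework, a minimal starter sketch along the following lines could be shared, assuming a one‑feature synthetic dataset and two illustrative learning rates (0.001 and 0.01); students replace the data with their own and experiment with other values.

```python
# Homework starter sketch: the same gradient-descent loop run with two different
# learning rates, so students can see how the choice affects convergence.
# The dataset and the learning rates (0.001 and 0.01) are illustrative only.
import numpy as np

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([[2.9], [5.1], [7.2], [8.8], [11.1], [13.0]])
X_aug = np.hstack([X, np.ones((len(X), 1))])          # weight and bias together

def train(eta, epochs=200):
    """Gradient descent on SSE for a single linear unit (i.e. linear regression)."""
    w = np.zeros((2, 1))
    for _ in range(epochs):
        error = X_aug @ w - y                         # forward pass and error
        w -= eta * (X_aug.T @ error)                  # w <- w - eta * dE/dw
    return 0.5 * np.sum((X_aug @ w - y) ** 2)

for eta in (0.001, 0.01):
    print(f"eta={eta}: final SSE = {train(eta):.4f}")
```

With this data the smaller rate converges noticeably more slowly over the same number of epochs, while rates that are too large typically make the loss grow instead of shrink; this is the behaviour students should comment on in their reflection.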