15.1 Processors, Parallel Processing and Virtual Machines
Learning objective
Show understanding of the four basic computer architectures (SISD, SIMD, MISD and MIMD) and the related concepts required by the Cambridge International AS & A Level Computer Science (9618) syllabus. These notes also touch briefly on the following related syllabus areas:
Communication & Internet technologies: OSI model, TCP/IP stack, common protocols (HTTP, FTP, SMTP, DNS), packet structure.
Hardware basics: CPU, ALU, registers, cache, main memory, I/O devices, bus structures.
System software: operating‑system purposes, process & thread management, virtual memory, compilation pipeline.
Security & ethics: symmetric vs. asymmetric encryption, SSL/TLS handshake, digital certificates, ethical considerations.
Algorithms & data structures: linear & binary search, insertion & bubble sort, time‑ and space‑complexity analysis.
Programming concepts: procedural, object‑oriented, functional paradigms; recursion; exception handling.
1. Flynn’s Taxonomy – The Four Basic Architectures
Flynn classifies computer architectures by the number of concurrent instruction streams and data streams they can handle.
| Architecture | Instruction streams | Data streams | Typical examples |
|---|---|---|---|
| SISD | 1 | 1 | Early single-core CPUs (e.g., Intel 8086), most microcontrollers |
| SIMD | 1 | Multiple | GPU vector units, SIMD extensions such as SSE, AVX, NEON |
| MISD | Multiple | 1 | Fault-tolerant pipelines, some specialised signal-processing hardware (rare in practice) |
| MIMD | Multiple | Multiple | Multi-core CPUs, clusters, distributed systems |
Key characteristics
SISD – one instruction operates on one datum at a time; no built‑in parallelism.
SIMD – a single instruction is applied simultaneously to many data items; ideal for data-parallel tasks such as image or signal processing (see the sketch after this list).
MISD – several instructions act on the same data stream; mainly theoretical, used in specialised fault‑tolerant designs.
MIMD – independent processors execute different instructions on different data; supports task‑parallelism and forms the basis of modern multi‑core and distributed systems.
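To make the SISD/SIMD contrast concrete, here is a minimal Python sketch (an illustration added to these notes, not part of the syllabus): the explicit loop processes one data item per step in SISD fashion, while the single NumPy expression applies the same operation across the whole array, which NumPy can dispatch to SIMD lanes such as SSE/AVX/NEON where the hardware supports them.

```python
import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# SISD style: one instruction stream operating on one data item per iteration.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] + b[i]

# SIMD style: one (vectorised) operation applied to many data items at once.
c_vector = a + b

assert np.array_equal(c_scalar, c_vector)
```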
2. RISC vs CISC Instruction Sets
Understanding the two dominant design philosophies helps explain why some architectures are easier to pipeline than others.
| Aspect | RISC (Reduced Instruction Set Computing) | CISC (Complex Instruction Set Computing) |
|---|---|---|
| Instruction length | Fixed (usually 32 bits) | Variable (1–15 bytes) |
| Number of instructions | Few, simple (≈ 50–100) | Many, complex (≈ 200–500) |
| Typical operations per instruction | One simple operation (load, add, branch) | Multiple micro-operations (e.g., string copy) |
| Pipeline friendliness | Highly amenable – regular format reduces decode time and hazards | Harder – variable length and complex decoding can cause stalls |
| Examples | ARM, RISC-V, MIPS | x86, IBM System/360 |
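The "operations per instruction" row is easiest to see with a concrete, simplified example. In the sketch below the mnemonics are hypothetical and do not follow any real instruction set exactly; it shows the statement x = x + y written as one CISC-style memory-to-memory instruction versus an equivalent RISC-style load/operate/store sequence.

```python
# Hypothetical instruction listings for x = x + y (operand addresses are symbolic).
cisc_program = [
    "ADD [x], [y]",       # one instruction: fetch both operands from memory,
                          # add them, and write the result back to memory
]

risc_program = [
    "LOAD  R1, [x]",      # bring x into a register
    "LOAD  R2, [y]",      # bring y into a register
    "ADD   R1, R1, R2",   # register-to-register arithmetic only
    "STORE R1, [x]",      # write the result back to memory
]

print(f"CISC: {len(cisc_program)} instruction, RISC: {len(risc_program)} instructions")
```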
3. Pipelining & the Register File
A pipeline breaks the execution of an instruction into several stages (for example fetch, decode, execute, memory access and write-back) that can operate concurrently, increasing instruction throughput. Hazards are situations that prevent the next instruction from starting in its expected cycle; they fall into three categories:
Structural – two instructions need the same hardware resource at the same time.
Data – a later instruction depends on the result of an earlier one that has not yet completed.
Control – a branch changes the program flow, so instructions already fetched may have to be discarded.
Pipeline registers hold each instruction's intermediate results between stages, and a large general-purpose register file reduces the need to access slower main memory; together they help keep the pipeline flowing with fewer stalls.
RISC advantage: Fixed‑length, single‑operation instructions make the decode stage simple, keeping the pipeline full with minimal stalls.
CISC challenge: Variable‑length instructions and complex addressing modes can cause decode bottlenecks; modern CISC CPUs (e.g., x86) mitigate this by internally translating complex instructions into RISC‑like micro‑ops.
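The benefit of pipelining can be quantified with a simple cycle count. The sketch below is a minimal model, assuming every stage takes one clock cycle and lumping all hazard penalties into a single stall count; with no stalls, n instructions on a k-stage pipeline finish in k + (n - 1) cycles instead of n × k.

```python
def pipeline_cycles(n_instructions: int, n_stages: int, stall_cycles: int = 0) -> int:
    """Clock cycles for n instructions on a k-stage pipeline (one cycle per stage).

    stall_cycles is the total number of bubbles inserted because of
    structural, data or control hazards.
    """
    return n_stages + (n_instructions - 1) + stall_cycles


n, k = 100, 5
unpipelined = n * k                    # each instruction passes through all stages alone
ideal = pipeline_cycles(n, k)          # no hazards
realistic = pipeline_cycles(n, k, 30)  # e.g., 30 bubble cycles caused by hazards

print(f"Unpipelined: {unpipelined} cycles")
print(f"Pipelined (ideal): {ideal} cycles, speedup {unpipelined / ideal:.1f}x")
print(f"Pipelined (with stalls): {realistic} cycles, speedup {unpipelined / realistic:.1f}x")
```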
4. Massively‑Parallel Computers
These systems contain thousands to millions of processing elements and rely heavily on SIMD-style execution within each node. Typical examples:
GPU-based compute clusters – e.g., NVIDIA DGX stations or cloud GPU farms: thousands of small cores, wide SIMD lanes, very high memory bandwidth.
IBM Blue Gene/Q – a supercomputer with 16-core nodes; each core supports SIMD operations and is designed for energy-efficient large-scale parallelism.
In every case, a network-on-chip or other high-speed interconnect carries the communication between the distributed processing elements.
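A central idea in massively parallel hardware is that every processing element works out for itself which data item it owns. The Python sketch below mimics the CUDA-style global-index calculation (block index × block size + thread index); the names block_dim, block_idx and thread_idx are illustrative only, not a real GPU API, and the loops stand in for threads that would actually run simultaneously.

```python
def global_index(block_idx: int, block_dim: int, thread_idx: int) -> int:
    # Each "thread" derives the element it is responsible for from its coordinates.
    return block_idx * block_dim + thread_idx


data = list(range(20))
out = [0] * len(data)
block_dim = 8                                 # threads per block (illustrative)
n_blocks = (len(data) + block_dim - 1) // block_dim

for block_idx in range(n_blocks):             # on a GPU these would run in parallel
    for thread_idx in range(block_dim):
        i = global_index(block_idx, block_dim, thread_idx)
        if i < len(data):                     # guard the ragged final block
            out[i] = data[i] * 2

assert out == [x * 2 for x in data]
```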
5. Parallel‑Processing Models
Shared‑Memory Parallelism – multiple processors access a single address space. Common in MIMD multi‑core CPUs. Synchronisation is achieved with locks, semaphores, atomic instructions, or higher‑level constructs such as OpenMP.
Distributed‑Memory Parallelism – each processor has its own private memory; communication occurs via explicit message passing (e.g., MPI). Typical for clusters and massively‑parallel systems.
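As a minimal illustration of shared-memory synchronisation, the Python sketch below uses a lock so that two threads updating the same counter cannot interleave their read-modify-write sequences; in a distributed-memory system the workers would instead hold private copies of the data and exchange results by message passing (e.g., with MPI).

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:          # only one thread at a time may update the shared value
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)              # 200000 – without the lock, some updates could be lost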
6. Virtual Machines (VMs)
A virtual machine provides an abstract execution environment that isolates software from the underlying hardware.
System VMs – emulate an entire physical computer, allowing several guest operating systems to run concurrently (e.g., VMware Workstation, VirtualBox).
Process VMs – provide a runtime environment for a single program; they translate platform‑independent bytecode into native instructions at execution time (e.g., Java Virtual Machine, .NET Common Language Runtime).
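To make the process-VM idea concrete, here is a toy stack-based bytecode interpreter in Python (a deliberately simplified sketch; real VMs such as the JVM also verify bytecode and compile hot paths to native code). The same bytecode list produces the same result on any machine that can run the interpreter, which is the essence of platform independence.

```python
def run(bytecode):
    """Execute a tiny stack-based bytecode program and return the final result."""
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()


# (2 + 3) * 4 expressed as platform-independent bytecode
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(program))   # 20
```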
Benefits of virtualisation
Isolation of workloads – faults or security breaches in one VM do not affect others.
Dynamic resource allocation – CPU, memory and storage can be re‑assigned without rebooting.
Platform independence – the same bytecode runs on any hardware that hosts the appropriate VM.
7. Additional A‑Level Topics Required by the Syllabus
7.1 Data Representation
Binary, hexadecimal and BCD encodings.
Two’s complement for signed integers (a worked sketch follows this list).
IEEE‑754 single‑ and double‑precision floating‑point format.
File‑organisation methods: sequential, random (direct), indexed, and hashing.
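As a quick worked example of two’s complement, the Python sketch below converts a signed integer to an 8-bit two’s-complement bit pattern and back; the helper functions are an illustration added to these notes rather than part of the syllabus.

```python
def to_twos_complement(value: int, bits: int = 8) -> str:
    """Return the bits-wide two's-complement bit pattern of a signed integer."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    return format(value & ((1 << bits) - 1), f"0{bits}b")


def from_twos_complement(pattern: str) -> int:
    """Interpret a bit string as a signed two's-complement integer."""
    value = int(pattern, 2)
    return value - (1 << len(pattern)) if pattern[0] == "1" else value


print(to_twos_complement(-42))            # 11010110
print(from_twos_complement("11010110"))   # -42
```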
7.2 Communication & Internet Technologies
OSI model (7 layers) and the simplified TCP/IP model (4 layers).
The table below compares how well the main architectures scale and how power-efficient they are:

| Criterion | SISD (single core) | SIMD | MIMD |
|---|---|---|---|
| Scalability | Limited; a single instruction stream cannot exploit parallelism. | Scales with vector length and number of SIMD units (e.g., multiple GPU SMs). | Scales with core count and inter-node communication efficiency. |
| Power efficiency | Generally lower per operation. | Very high for data-parallel workloads because many operations share control logic. | Varies; modern CPUs balance performance and power using dynamic frequency scaling. |
10. Suggested diagram
Visual comparison of SISD, SIMD, MISD and MIMD architectures – instruction streams (horizontal arrows) and data streams (vertical arrows) are shown for each model.
Summary
The four basic architectures—SISD, SIMD, MISD and MIMD—form the conceptual foundation for modern processor design. RISC and CISC philosophies explain why some CPUs are easier to pipeline, while pipelining and a rich register file are essential for high instruction throughput. Massively‑parallel computers extend SIMD ideas to thousands of cores, enabling today’s supercomputers and GPU clusters. Parallel‑processing models (shared vs. distributed memory) and virtual machines illustrate how software exploits these hardware capabilities, and they demonstrate the close relationship between architecture and operating‑system concepts such as scheduling, virtual memory, and security. The additional A‑level sections on data representation, networking, system software, security, AI, algorithm analysis, and programming paradigms complete the coverage required by the Cambridge International AS & A Level Computer Science syllabus.