Show understanding of how various factors contribute to the overall performance of a computer system.
| Register | Purpose (one‑line) |
|---|---|
| Program Counter (PC) | Holds the address of the next instruction to fetch. |
| Instruction Register (IR) | Holds the currently fetched instruction. |
| Memory Address Register (MAR) | Provides the address for a memory read or write. |
| Memory Data Register (MDR) | Temporarily stores data transferred to/from memory. |
| Accumulator (ACC) | Primary arithmetic result register. |
| General‑purpose registers (R0‑R7, etc.) | Hold operands and intermediate results. |
| Index Register (IX) | Used for address calculation in indexed addressing modes. |
| Stack Pointer (SP) | Points to the top of the runtime stack. |
| Status / Flag Register | Contains condition flags (zero, carry, overflow, etc.). |
| Pipeline Registers (IF/ID, ID/EX, EX/MEM, MEM/WB) | Separate stages of a pipeline, allowing overlapping execution. |
Register‑Transfer Notation (RTN) example (simplified)
```
PC → MAR → MEM → IR      ; fetch: next instruction loaded into IR
IR[opcode] → CU          ; decode: opcode sent to the control unit
IR[operand] → R1         ; operand fetched into a register
R1, R2 → ALU → ACC       ; execute: ALU result into the accumulator
ACC → R3 (or MEM)        ; write back the result
PC ← PC + 1              ; increment the program counter
```
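The cycle above can be sketched in code. This is a minimal toy simulation, not a real ISA: the `ADD` instruction, the memory layout, and the starting values are all illustrative assumptions, while the variable names mirror the registers in the table.

```python
# Toy one-instruction fetch-decode-execute cycle (illustrative only).
mem = {0: ("ADD", 1), 1: 7}   # address 0: an instruction; address 1: its operand
PC, ACC = 0, 5                # program counter and accumulator

MAR = PC                      # PC -> MAR
MDR = mem[MAR]                # MEM -> MDR (memory read)
IR = MDR                      # MDR -> IR
PC = PC + 1                   # PC <- PC + 1
opcode, operand = IR          # IR[opcode] decoded, IR[operand] extracted
if opcode == "ADD":
    ACC = ACC + mem[operand]  # operand, ACC -> ALU -> ACC
# ACC now holds 5 + 7 = 12
```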
The fundamental performance equation is:
$$\text{CPU Time} = \frac{\text{Instruction Count} \times \text{CPI}}{\text{Clock Rate}}$$
| Factor | Effect on Performance | Typical Design / Mitigation |
|---|---|---|
| Clock Speed | Higher frequency shortens each cycle → lower CPU time. | Smaller transistor geometries, dynamic frequency scaling (Turbo Boost), improved cooling. |
| CPI (Cycles per Instruction) | Lower CPI → fewer cycles per instruction → faster execution. | Pipelining, superscalar issue, micro‑op fusion, out‑of‑order execution, reducing cache‑miss penalties. |
| Instruction Count | Fewer instructions → less total work. | Optimised compilers, algorithmic improvements, choice of ISA (CISC vs. RISC). |
| Cache Hierarchy (L1/L2/L3) | Effective caching cuts memory‑access latency → lower CPI for memory‑bound code. | Multi‑level caches, larger cache lines, write‑back policies, hardware prefetching, coherence protocols. |
| Pipelining | Overlaps instruction stages, effectively reducing CPI. | Deep pipelines, hazard detection, forwarding, branch prediction. |
| Branch Prediction | Accurate prediction avoids pipeline stalls caused by control hazards. | Dynamic two‑level predictors, hybrid schemes, return‑address stacks. |
| Parallelism – Multi‑core | Independent threads run simultaneously → reduced overall program time. | Multiple cores, symmetric multiprocessing (SMP), thread‑level scheduling. |
| Parallelism – SIMD / Vector Units | One instruction processes many data elements → large speed‑up for data‑parallel workloads. | SIMD extensions (SSE, AVX), GPU off‑loading, vectorised libraries. |
| Instruction‑set design (RISC vs. CISC) | RISC: simple instructions, lower CPI; CISC: complex instructions, fewer total instructions. | ISA choice influences compiler optimisation, decoder complexity, and the IC/CPI trade‑off. |
Worked example. Given:
- Program: 1.2 × 10⁹ instructions
- Clock rate: 2.5 GHz
- Baseline CPI: 1.8
$$\text{CPU Time}_{\text{baseline}} = \frac{1.2\times10^{9}\times1.8}{2.5\times10^{9}} = 0.864\ \text{s}$$
If cache optimisation lowers the average CPI to 1.4:
$$\text{CPU Time}_{\text{optimised}} = \frac{1.2\times10^{9}\times1.4}{2.5\times10^{9}} = 0.672\ \text{s}$$
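The worked example can be checked directly from the performance equation; the function name `cpu_time` is just a convenient label for the formula above.

```python
def cpu_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = (instruction count x CPI) / clock rate."""
    return instruction_count * cpi / clock_rate_hz

baseline = cpu_time(1.2e9, 1.8, 2.5e9)   # 0.864 s
optimised = cpu_time(1.2e9, 1.4, 2.5e9)  # 0.672 s
```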
Assume 60 % of the work can be perfectly parallelised across 4 cores.
Speed‑up for the parallel portion: \(S_{p}=4\).
Overall speed‑up (Amdahl's law):
$$S = \frac{1}{(1-P) + \frac{P}{S_{p}}} = \frac{1}{0.40 + \frac{0.60}{4}} = \frac{1}{0.55} \approx 1.82$$
New execution time:
$$\text{CPU Time}_{\text{parallel}} = \frac{0.864\ \text{s}}{1.82} \approx 0.475\ \text{s}$$
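Amdahl's law and the resulting execution time can be verified numerically; `amdahl_speedup` is a helper name chosen here, not a library function.

```python
def amdahl_speedup(p, s_p):
    """Overall speed-up when a fraction p of the work is sped up by s_p."""
    return 1.0 / ((1.0 - p) + p / s_p)

speedup = amdahl_speedup(0.60, 4)   # 1 / 0.55, approximately 1.82
new_time = 0.864 / speedup          # approximately 0.475 s
```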
This illustrates that, when a substantial portion of a program can be parallelised, adding cores yields a larger performance gain than a modest CPI improvement.