Explain the differences between Static RAM (SRAM) and Dynamic RAM (DRAM)

3.1 Computers and Their Components – RAM Types

1. Memory hierarchy – why we need both SRAM and DRAM

To give the CPU fast data access while keeping the overall cost reasonable, modern computers use a hierarchy of volatile storage:

| Level | Typical technology | Typical latency | Typical capacity |
|---|---|---|---|
| CPU registers | Static registers (hard‑wired) | < 1 ns | a few bytes |
| CPU cache (L1, L2, L3) | SRAM (4–6‑MOSFET flip‑flop) | ≈ 1–10 ns (a few CPU cycles) | tens of KB to tens of MB |
| Main system memory | DRAM (1 MOSFET + capacitor) | ≈ 50–100 ns | several GB |
| Secondary storage (SSD/HDD) | Non‑volatile flash or magnetic media | µs–ms | TB scale |

SRAM’s very low access time makes it ideal for cache, which sits directly on or next to the processor and is accessed on every instruction fetch. DRAM, although slower, can be packed at a much higher density, providing the large amount of volatile storage required for programs and data.

2. Static RAM (SRAM)

  • Cell structure: 4–6 MOSFETs forming a bistable flip‑flop (two cross‑coupled inverters).
  • Data retention: As long as power is supplied – no refresh required.
  • Typical access time: 10–20 ns for discrete SRAM chips; on‑die caches respond in a few CPU cycles (≈ 1 ns).
  • Power consumption: Higher static power per bit because several transistors are constantly biased.
  • Density: Low – each bit occupies a relatively large silicon area.
  • Typical uses: L1/L2/L3 caches, register files, small high‑speed buffers.

3. Dynamic RAM (DRAM)

  • Cell structure: 1 MOSFET + 1 tiny capacitor per bit.
  • Data retention: Charge on the capacitor leaks; each cell must be refreshed about every 64 ms.
  • Typical access time: 50–100 ns (including sense‑amplification and refresh overhead).
  • Power consumption: Lower static power per bit, but extra dynamic power for periodic refresh cycles.
  • Density: High – enables large main‑memory capacities at a reasonable cost.
  • Typical uses: Main system memory (DDR4, DDR5, LPDDR), graphics memory (with modifications), specialised buffers.

4. Direct comparison – SRAM vs DRAM

| Feature | SRAM | DRAM |
|---|---|---|
| Cell structure | 4–6 transistors (flip‑flop) | 1 transistor + 1 capacitor |
| Data retention | No refresh needed | Refresh every ≈ 64 ms |
| Access time | ≈ 10–20 ns (fast) | ≈ 50–100 ns (slower) |
| Density (bits / mm²) | Low | High |
| Power consumption | Higher static power per bit | Lower static power, extra refresh power |
| Cost per bit | More expensive | Cheaper |
| Typical location in hierarchy | CPU caches (L1–L3) | Main system memory |

5. Information representation – linking RAM to the syllabus

All data stored in SRAM or DRAM is ultimately a sequence of binary digits. Understanding the representation helps students see why memory size matters.

  • Binary & hexadecimal: One byte = 8 bits = 2 hex digits. RAM addresses are usually shown in hex for brevity.
  • Two’s‑complement integers: Used for signed arithmetic. Example: the 8‑bit pattern 1111 1010 represents –6.
  • ASCII & Unicode: Character codes are stored in RAM as bytes (ASCII) or as 2‑/4‑byte code units (UTF‑16/UTF‑32). A string “Hi” occupies 2 bytes in ASCII, 4 bytes in UTF‑16.
  • Floating‑point (IEEE‑754): 32‑bit single‑precision numbers need 4 bytes; 64‑bit double‑precision need 8 bytes.
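These representations can be checked with a short C sketch; the helper names below are illustrative, not part of the syllabus:

```c
#include <stdint.h>
#include <stddef.h>

/* Reinterpret an 8-bit pattern as a two's-complement signed value
   (illustrative helper): 1111 1010 (0xFA) -> -6. */
int twos_complement_8(uint8_t pattern) {
    return (int8_t)pattern;   /* values 128..255 wrap to -128..-1 */
}

/* Bytes needed for n characters in fixed-width encodings. */
size_t ascii_bytes(size_t n) { return n; }      /* 1 byte per character      */
size_t utf16_bytes(size_t n) { return 2 * n; }  /* 2 bytes per BMP code unit */
```

On typical platforms `sizeof(float)` is 4 and `sizeof(double)` is 8, matching the IEEE‑754 sizes above.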

6. Multimedia data in RAM (syllabus link)

Images, audio and video are stored in RAM while they are being processed or displayed. The size of these data blocks explains why large DRAM capacities are essential.

| Data type | Typical uncompressed size | Effect on RAM usage |
|---|---|---|
| Bitmap (24‑bit colour, 800 × 600) | ≈ 1.4 MB | Fits easily in modern DRAM but not in cache. |
| Vector graphic (SVG) | Variable – usually a few KB | Small enough for cache if frequently accessed. |
| CD‑quality audio (44.1 kHz, 16‑bit stereo, 1 min) | ≈ 10 MB | Requires many DRAM pages; streaming buffers sit in cache. |
| HD video (1920 × 1080, 8‑bit YUV 4:2:0, one frame) | ≈ 3 MB | Large buffers; cache can hold only a few frames. |
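The sizes follow directly from the bit depths; a small C sketch (helper names are illustrative, assuming 24‑bit RGB, 16‑bit PCM samples, and YUV 4:2:0 at 1.5 bytes per pixel):

```c
/* Uncompressed multimedia sizes, in bytes (illustrative helpers). */
unsigned long bitmap_bytes(unsigned long w, unsigned long h) {
    return w * h * 3;                       /* 3 bytes per 24-bit pixel   */
}
unsigned long pcm_bytes(unsigned long rate, unsigned long channels,
                        unsigned long seconds) {
    return rate * channels * 2 * seconds;   /* 2 bytes per 16-bit sample  */
}
unsigned long yuv420_frame_bytes(unsigned long w, unsigned long h) {
    return w * h * 3 / 2;                   /* 12 bits per pixel in 4:2:0 */
}
```

For example, one minute of CD‑quality stereo is 44 100 × 2 × 2 × 60 ≈ 10.6 million bytes.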

Compression (JPEG, MP3, H.264) reduces the amount that must be held in RAM at any one time, which is why the syllabus mentions multimedia storage.

7. RAM in communication & networking (syllabus link)

Network stacks use RAM for:

  • Receive and transmit buffers (FIFO queues).
  • Packet re‑assembly and fragmentation tables.
  • Routing tables and socket descriptors.

These structures reside in DRAM because they can be many megabytes in size, but the most frequently accessed control registers (e.g., NIC status) are cached in SRAM for low‑latency access.
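A receive or transmit buffer of the FIFO kind listed above is usually implemented as a ring buffer; a minimal sketch (the size and names are illustrative – real NIC rings hold thousands of entries):

```c
#include <stddef.h>

#define BUF_SIZE 8   /* illustrative capacity */

typedef struct {
    int    data[BUF_SIZE];
    size_t head, tail, count;
} Fifo;

/* Enqueue one item; returns 0 when full (a real NIC would drop the packet). */
int fifo_push(Fifo *f, int v) {
    if (f->count == BUF_SIZE) return 0;
    f->data[f->tail] = v;
    f->tail = (f->tail + 1) % BUF_SIZE;   /* wrap around the ring */
    f->count++;
    return 1;
}

/* Dequeue the oldest item; returns 0 when empty. */
int fifo_pop(Fifo *f, int *v) {
    if (f->count == 0) return 0;
    *v = f->data[f->head];
    f->head = (f->head + 1) % BUF_SIZE;
    f->count--;
    return 1;
}
```

Items leave in the order they arrived, which is exactly the queueing behaviour packets need.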

8. Hardware beyond memory – quick refresher (required for 3.2)

To place RAM in the broader context of the Cambridge 9618 syllabus, the following components are highlighted.

  • CPU registers – the fastest storage, directly accessed by the ALU.
  • ALU (Arithmetic‑Logic Unit) – performs logical and arithmetic operations on register contents.
  • Control unit – generates control signals for fetching, decoding and executing instructions.
  • Bus architecture – address bus, data bus, and control bus connect CPU, cache, DRAM and I/O devices.
  • Logic‑gate refresher – NOT, AND, OR, NAND, NOR, XOR and their truth tables.
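The gate refresher maps directly onto C's bitwise and logical operators; a one‑bit sketch (inputs restricted to 0 or 1):

```c
/* One-bit logic gates built from C operators (illustrative helpers). */
int gate_not (int a)        { return !a; }
int gate_and (int a, int b) { return a & b; }
int gate_or  (int a, int b) { return a | b; }
int gate_nand(int a, int b) { return !(a & b); }   /* NOT of AND */
int gate_nor (int a, int b) { return !(a | b); }   /* NOT of OR  */
int gate_xor (int a, int b) { return a ^ b; }
```

Evaluating each function over all four input pairs reproduces the familiar truth tables.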

9. Processor fundamentals – fetch‑execute cycle & cache interaction

  1. Fetch: The PC (program counter) places the address of the next instruction on the address bus.
  2. Cache lookup: The cache controller checks the SRAM tags. If the line is present (a *hit*), the instruction is returned within a few cycles.
  3. Miss handling: On a *miss*, the address is sent to the memory controller, which activates the appropriate DRAM row, reads the line, and writes it into the cache.
  4. Decode & Execute: The instruction is decoded by the control unit and executed by the ALU or other functional units.
  5. Write‑back / Write‑through: Results that modify memory are first written to the cache (SRAM) and later propagated to DRAM according to the chosen policy.
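Steps 2–3 (tag lookup and miss handling) can be sketched as a toy direct‑mapped cache; the sizes and structure below are illustrative and far simpler than real hardware:

```c
#include <stdint.h>

#define LINES      16   /* illustrative: 16 lines        */
#define LINE_BYTES 64   /* of 64 bytes each (1 KB total) */

typedef struct { int valid; uint32_t tag; } CacheLine;
typedef struct { CacheLine line[LINES]; } Cache;

/* Returns 1 on a hit; on a miss, installs the line (step 3) and returns 0. */
int cache_access(Cache *c, uint32_t addr) {
    uint32_t index = (addr / LINE_BYTES) % LINES;   /* which cache line       */
    uint32_t tag   = addr / (LINE_BYTES * LINES);   /* identifies DRAM block  */
    CacheLine *l = &c->line[index];
    if (l->valid && l->tag == tag) return 1;        /* hit: served from SRAM  */
    l->valid = 1;                                   /* miss: fetch from DRAM  */
    l->tag   = tag;                                 /* and fill the line      */
    return 0;
}
```

A second access to the same address – or any address in the same 64‑byte line – hits, which is why repeated instruction fetches are so cheap.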

Interrupts cause the CPU to suspend the current fetch‑execute sequence, push the current PC onto a stack (usually in SRAM‑based cache), and jump to an interrupt‑service routine stored in DRAM or ROM.

10. Assembly language & bit‑manipulation (syllabus 4.2 & 4.3)

Below is a tiny example that shows both an assembly instruction and the equivalent C‑style bit‑mask operation used to control a memory‑mapped I/O register.

// -------------------------------------------------
// C‑style bit‑mask access to a memory‑mapped register
// -------------------------------------------------
#define CONTROL_REG   (*(volatile unsigned char *)0xFF00)
#define ENABLE_REFRESH   0x04   // 0000 0100
#define POWER_DOWN       0x02   // 0000 0010

void enableRefresh(void) {
    unsigned char reg = CONTROL_REG;   // read
    reg |= ENABLE_REFRESH;              // set bit 2
    reg &= ~POWER_DOWN;                // clear bit 1
    CONTROL_REG = reg;                  // write back
}
// -------------------------------------------------
// Equivalent pseudo‑assembly for a generic 8‑bit CPU
// -------------------------------------------------
        LDA   0xFF00          ; Load CONTROL_REG into accumulator
        ORA   #0x04           ; Set ENABLE_REFRESH bit
        AND   #0xFB           ; Clear POWER_DOWN bit (0xFB = 1111 1011)
        STA   0xFF00          ; Store back to CONTROL_REG
        RTS                    ; Return from subroutine

Key points for students:

  • Use OR to set bits, AND with the complement to clear bits.
  • Registers in the CPU act as temporary storage for the mask operations.
  • When the same address is accessed repeatedly, the cache (SRAM) reduces the effective latency.

11. System software – how the OS uses RAM

  • Memory management: The OS maintains a page table that maps virtual addresses to physical DRAM frames. Page‑fault handling brings required pages from secondary storage into DRAM.
  • Virtual memory: Allows programs to use more memory than physically present; the OS swaps pages between DRAM and disk.
  • Process scheduling: Each process receives its own region of virtual memory; the OS uses DRAM to store the process’s code, data, stack, and heap.
  • Cache control: Modern OSes can give hints (e.g., prefetch, cache‑flush) to optimise SRAM cache usage.
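The virtual‑to‑physical mapping in the first bullet can be sketched with a toy single‑level page table (sizes and names are illustrative – real page tables are multi‑level and hardware‑walked):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u   /* 4 KB pages, a common choice   */
#define NUM_PAGES 16      /* toy space: 16 virtual pages   */

/* Entry i holds the physical frame number for virtual page i,
   or -1 if the page is not resident (a page fault in a real OS). */
int page_table[NUM_PAGES];

/* Translate a virtual address to physical; returns -1 on a "page fault". */
long translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* which virtual page      */
    uint32_t offset = vaddr % PAGE_SIZE;   /* position inside the page */
    if (page >= NUM_PAGES || page_table[page] < 0) return -1;
    return (long)page_table[page] * PAGE_SIZE + offset;
}
```

The offset is untouched by translation; only the page number is swapped for a frame number, which is why page sizes are powers of two.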

12. Key points to remember

  1. SRAM uses a 4–6‑transistor flip‑flop → very fast, low density, high cost; used for CPU caches and register files.
  2. DRAM stores charge on a capacitor → needs periodic refresh, slower, high density, low cost; used for main system memory.
  3. All data in RAM is binary; understanding two’s‑complement, ASCII/Unicode and IEEE‑754 helps explain memory‑size requirements.
  4. Multimedia and networking data occupy large DRAM buffers; compression and caching reduce the memory footprint.
  5. The fetch‑execute cycle relies on SRAM cache to keep instruction latency to a few nanoseconds.
  6. Bit‑mask techniques and simple assembly instructions are essential for low‑level control of memory‑mapped hardware.
  7. The operating system manages virtual‑to‑physical translation, paging and cache policies to make the most efficient use of both SRAM and DRAM.

Suggested diagram: cross‑section of an SRAM cell (flip‑flop) versus a DRAM cell (transistor + capacitor), illustrating the need for refresh in DRAM and the larger area occupied by an SRAM bit.
