16.1 Operating‑System Purposes & Management Tasks

Objective

Show understanding of how an operating system (OS) can maximise the use of hardware resources.

1. Why an OS Is Needed – High‑level Overview

  • The OS provides a common abstract interface between user programmes and the underlying hardware, hiding the complexity of devices such as the CPU, memory, storage and I/O peripherals.
  • It coordinates and shares resources so that many programmes can run at the same time without interfering with each other.
  • It ensures system stability, security and efficiency by protecting memory, controlling access to devices and handling errors.
  • It offers a user interface (graphical or command‑line) that lets users start, control and monitor programmes.

2. Core Functions of an OS (Cambridge 5.1)

  1. Process Management
  2. Memory Management
  3. File‑System Management
  4. Device Management
  5. Security & Protection
  6. User Interface

3. Process Management – Maximising CPU Utilisation

3.1 Process Life‑Cycle

  • New – being created.
  • Ready – waiting for a CPU slice.
  • Running – currently executing.
  • Blocked (Waiting) – waiting for I/O or an event.
  • Terminated – execution finished.

3.2 Process‑State Diagram

Insert diagram here – a standard state‑transition diagram showing the five states above and the events (admit, dispatch, timeout, I/O request, I/O completion, exit) that cause transitions.

3.3 Scheduling Algorithms

| Algorithm | Key Idea | Typical Use‑Case |
|---|---|---|
| Round‑Robin (RR) | Equal time‑slice (quantum) for each ready process. | Interactive systems – guarantees responsiveness. |
| Shortest Job First (SJF) / Shortest Remaining Time (SRT) | Process with the smallest CPU burst runs first. | Batch workloads – maximises throughput. |
| Priority Scheduling | Higher‑priority processes are selected before lower‑priority ones. | Mixed workloads – real‑time or critical tasks get preference. |
| Multilevel Queue & Multilevel Feedback Queue | Separate queues for different priority classes; feedback moves processes between queues. | Complex environments – balances interactivity and background work. |

3.4 Worked Example – Choosing a Scheduler

Three processes arrive at time 0:

  • P1 – CPU burst 8 ms, interactive.
  • P2 – CPU burst 4 ms, interactive.
  • P3 – CPU burst 12 ms, background.

If the OS uses Round‑Robin (quantum = 4 ms), the order of execution is:

0‑4 ms: P1 (first quantum)

4‑8 ms: P2 (finishes)

8‑12 ms: P3 (first quantum)

12‑16 ms: P1 (finishes)

16‑20 ms: P3 (second quantum)

20‑24 ms: P3 (finishes)

All processes complete by 24 ms (the sum of the three bursts), and the average turnaround time is (16 + 8 + 24) ÷ 3 = 16 ms. Both interactive processes receive CPU time within the first two quanta, keeping the UI responsive.

Using SJF would run P2, then P1, then P3, giving a lower average turnaround time ((4 + 12 + 24) ÷ 3 ≈ 13.3 ms), but the longest job (P3) is always served last, and in general a long interactive process can be kept waiting behind shorter jobs, hurting responsiveness. Hence, for interactive environments the OS prefers RR.
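The Round‑Robin dispatch loop can be reproduced with a short simulation. This is a minimal sketch using the bursts and quantum from the example above (all processes arrive at time 0; context‑switch overhead is ignored, as in the textbook model):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round-Robin scheduling; bursts maps name -> CPU burst (ms).
    Returns (execution log, completion time per process)."""
    remaining = dict(bursts)
    queue = deque(bursts)            # all processes arrive at time 0
    time, log, finish = 0, [], {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        log.append((time, time + run, name))
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time      # process terminated
        else:
            queue.append(name)       # unfinished: back of the ready queue
    return log, finish

log, finish = round_robin({"P1": 8, "P2": 4, "P3": 12}, quantum=4)
for start, end, name in log:
    print(f"{start}-{end} ms: {name}")
avg_turnaround = sum(finish.values()) / len(finish)   # arrivals at t = 0
print("Average turnaround:", avg_turnaround, "ms")    # 16.0 ms
```

Swapping the quantum or burst times into the call shows how the schedule and average turnaround change; the same loop structure underlies real pre-emptive schedulers.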

3.5 Quantitative CPU‑Utilisation Example

Assume three processes use the CPU for 30 ms, 20 ms and 10 ms during a 100 ms observation interval.

\[
U_{CPU}= \frac{30+20+10}{100}\times100\% = 60\%
\]

Techniques to raise the 60 % figure include:

  • Reducing the time‑slice for interactive tasks.
  • Dynamic priority adjustment (aging).
  • Load‑balancing across multiple cores (symmetric multiprocessing, SMP).

3.6 Synchronisation & Inter‑Process Communication (IPC)

  • Mutexes, semaphores and monitors prevent race conditions.
  • Pipes, message queues and shared memory allow safe data exchange.
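As a concrete illustration of mutual exclusion, the sketch below protects a shared counter with Python's `threading.Lock`. Without the lock, the read‑modify‑write on `counter` could interleave between threads and lose updates; with it, the final total is deterministic:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    """Increment the shared counter `times` times inside a critical section."""
    global counter
    for _ in range(times):
        with lock:           # only one thread may hold the lock at a time
            counter += 1     # read-modify-write is now safe from interleaving

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000 – no updates lost
```

The `with lock:` block is the critical section; semaphores and monitors generalise the same idea to counting resources and condition-based waiting.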

3.7 Interrupt Handling (OS Kernel Role)

  • A device raises an interrupt request (IRQ) to signal the CPU that it needs attention.
  • The OS kernel saves the current context, executes the appropriate interrupt service routine (ISR), then restores the context.
  • Interrupt‑driven I/O eliminates wasteful polling and keeps the CPU busy with useful work.

4. Memory Management – Maximising Memory Utilisation

4.1 Virtual Memory

  • Gives each process the illusion of a large, contiguous address space.
  • Implemented with paging (fixed‑size pages) and/or segmentation (logical sections such as code, data, stack).

4.2 Paging vs. Segmentation

| Aspect | Paging | Segmentation |
|---|---|---|
| Unit | Pages (e.g., 4 KB) | Segments (code, data, stack) |
| Address Translation | Page # + offset → frame # + offset | Segment # + offset → base address + offset |
| Fragmentation | Internal only | External possible |
| Sharing | Easy – share whole pages | Natural – share code or data segments |
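The paged address translation in the table can be sketched as a function that splits a virtual address into page number and offset, then maps the page to a frame. The page size matches the 4 KB example above; the page‑table contents are invented for illustration:

```python
PAGE_SIZE = 4096                     # 4 KB pages, as in the table above
page_table = {0: 5, 1: 9, 2: 1}      # page number -> frame number (hypothetical)

def translate(virtual_address):
    """Translate a virtual address to a physical address via the page table.
    An unmapped page triggers a (simulated) page fault."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 9 -> 0x9234
```

Because the offset is simply carried across, only the page number needs looking up – which is why fixed-size pages make the hardware translation (and internal-only fragmentation) so simple.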

4.3 Swapping, Paging & Thrashing

  • Swapping moves entire processes between RAM and secondary storage to free memory for active tasks.
  • Paging brings in only the pages actually referenced (demand paging).
  • Thrashing occurs when the system spends more time paging than executing. The OS combats it by:

    • Reducing the degree of multiprogramming.
    • Using working‑set or page‑fault‑frequency algorithms to keep frequently used pages resident.

4.4 Memory‑Utilisation Formula & Example

\[
U_{MEM}= \frac{\text{Allocated memory in use}}{\text{Total physical memory}}\times100\%
\]

Example: A machine has 8 GB RAM. Currently 5 GB is occupied by active pages.

\[
U_{MEM}= \frac{5}{8}\times100\% = 62.5\%
\]

Techniques to raise this figure:

  • Demand paging (load pages only when needed).
  • Effective page‑replacement policies (LRU, Clock).
  • Segmentation to share read‑only code, avoiding duplicate copies.
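To illustrate one of the page‑replacement policies above, here is a minimal LRU sketch that counts page faults for a reference string. The reference string and frame count are invented for illustration:

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement with a fixed number of frames."""
    frames = OrderedDict()                  # insertion order = recency order
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # miss: page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], num_frames=3))   # 10
```

Re-running with more frames shows the fault count falling – the same working-set effect the OS exploits when it decides how many frames each process should keep resident.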

5. File‑System Management – Efficient Use of Storage

  • Hierarchical directory structure (folders within folders) for logical organisation.
  • Metadata stored for every file:

    • Size, creation/modification timestamps.
    • Owner & group.
    • Permission bits (read/write/execute) and optional Access‑Control Lists (ACLs) for finer‑grained rights.

  • File caching – recently accessed blocks are kept in RAM, reducing disk reads.
  • Write‑back buffers – small writes are combined before being flushed to disk, lowering I/O overhead.
  • Support for security (ACLs, permission bits) and integrity (journalling in modern file systems).

6. Device Management – Keeping I/O Devices Productive

6.1 Device Drivers

Drivers translate generic OS requests into hardware‑specific commands, allowing the OS to control many different devices through a uniform interface.

6.2 I/O Scheduling & Buffering

  • Interrupt‑driven I/O – the device signals the CPU when it is ready, avoiding wasteful polling.
  • Direct Memory Access (DMA) – data moves directly between a device and RAM without CPU involvement, freeing the CPU for other work.
  • Buffering – temporary RAM storage smooths speed differences between fast CPU and slower devices.
  • Common disk‑scheduling algorithms:

    • Elevator (SCAN) – moves the disk arm in one direction servicing requests, then reverses; reduces average seek time.
    • Shortest Seek Time First (SSTF) – selects the request closest to the current head position.
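The elevator idea can be sketched as a function that services every request in the upward direction before reversing. Strictly, reversing at the last request (rather than the physical edge of the disk) is the LOOK variant of SCAN; the request queue and head position below are invented for illustration:

```python
def scan_up(requests, head):
    """Elevator-style ordering: service requests at or above the head in
    ascending cylinder order, then the remaining requests descending.
    (Reversing at the last request, not the disk edge, is the LOOK variant.)"""
    ascending = sorted(r for r in requests if r >= head)
    descending = sorted((r for r in requests if r < head), reverse=True)
    return ascending + descending

def total_seek(order, head):
    """Total head movement (in cylinders) to service the given order."""
    distance = 0
    for target in order:
        distance += abs(target - head)
        head = target
    return distance

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical request queue
order = scan_up(queue, head=53)
print(order)                                  # [65, 67, 98, 122, 124, 183, 37, 14]
print(total_seek(order, 53), "cylinders")     # 299 cylinders of head movement
```

Comparing `total_seek` for this order against a first-come-first-served order of the same queue shows why sweeping in one direction reduces average seek time.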

7. Security & Protection – Safe Resource Sharing

  • Authentication – verifies user identity (passwords, biometrics, smart cards).
  • Access Control – decides who may read, write or execute a resource.

    • Permission bits (r/w/x) for owner, group and others.
    • Access‑Control Lists (ACLs) for more detailed rights.

  • Process Isolation – each process runs in its own protected address space, preventing accidental or malicious interference.
  • Sandboxing – restricts a programme’s ability to access system resources, useful for untrusted applications.

8. Language Translators – Compilers, Interpreters & IDEs (Cambridge 5.2)

  • Assembler – translates symbolic machine language (assembly) into object code.
  • Compiler – translates a whole high‑level program into executable machine code before execution (e.g., C, C++).
  • Interpreter – reads and executes source code line‑by‑line at run‑time (e.g., Python, BASIC).
  • Integrated Development Environment (IDE) – combines editor, compiler/interpreter, debugger, syntax‑checking and code‑completion tools to aid programme development.

9. Quantitative Performance Metrics

| Metric | Formula | What It Measures |
|---|---|---|
| CPU Utilisation \(U_{CPU}\) | \(U_{CPU}= \dfrac{\text{CPU time on processes}}{\text{Total elapsed time}}\times100\%\) | Proportion of processor capacity that is actively used. |
| Memory Utilisation \(U_{MEM}\) | \(U_{MEM}= \dfrac{\text{Allocated memory in use}}{\text{Total physical memory}}\times100\%\) | Degree to which RAM is occupied by useful data. |
| I/O Throughput \(T_{IO}\) | \(T_{IO}= \dfrac{\text{Number of I/O operations completed}}{\text{Unit time}}\) | Rate at which the system can service I/O requests. |
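All three metrics are direct ratios, so they can be computed from sample measurements in a few lines. The CPU and memory figures below reuse the worked examples from sections 3.5 and 4.4; the I/O figures are invented for illustration:

```python
def cpu_utilisation(busy_ms, elapsed_ms):
    """Percentage of the observation interval spent executing processes."""
    return busy_ms / elapsed_ms * 100

def mem_utilisation(in_use_gb, total_gb):
    """Percentage of physical RAM occupied by pages in use."""
    return in_use_gb / total_gb * 100

def io_throughput(ops_completed, seconds):
    """I/O operations serviced per unit time."""
    return ops_completed / seconds

print(cpu_utilisation(30 + 20 + 10, 100))   # 60.0 % – section 3.5 example
print(mem_utilisation(5, 8))                # 62.5 % – section 4.4 example
print(io_throughput(1200, 10))              # 120.0 ops per second (hypothetical)
```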

10. Example Scenario – Balancing Resources in a Modern OS

A user runs a web browser, a music player, and a background backup utility simultaneously.

  1. Scheduling – The OS assigns higher priority (or a shorter RR quantum) to the browser and music player, keeping the UI responsive, while the backup runs at a lower priority.
  2. Virtual Memory – The backup’s large buffers are kept in RAM only while needed; less‑used pages are swapped out, freeing memory for the browser’s cache.
  3. File Caching – Frequently visited web pages and music files remain in the file‑system cache, reducing disk reads.
  4. DMA & Interrupts – Audio data is transferred from the sound card to RAM via DMA; the sound driver receives an interrupt when a buffer is empty and refills it without CPU polling.
  5. Device Drivers & I/O Scheduling – The disk driver uses the SCAN algorithm: the backup’s large sequential writes are serviced efficiently, while random reads from the browser are still served quickly.
  6. Security – Each programme runs under the user’s account; the OS checks permission bits/ACLs before allowing the backup to write to the external drive.
  7. Process Isolation – The browser, music player and backup each have separate address spaces, preventing a crash in one from affecting the others.

11. Summary

  • The OS abstracts hardware, providing a stable platform and a user interface.
  • Through detailed management of processes, memory, files, devices and security, it maximises CPU, RAM, storage and I/O utilisation.
  • Key techniques – scheduling, demand paging, caching, DMA, interrupt‑driven I/O, and robust access control – keep resources busy, minimise idle time and improve overall system throughput.

Suggested diagram: Flowchart showing interaction between user programmes, the OS kernel (process, memory, file‑system, device, security modules), and hardware components (CPU, RAM, storage, I/O devices).