Show understanding of different number systems

1.1 Data Representation – Understanding Number Systems

Why Different Number Systems?

  • Electronic circuits have two stable states → binary (base‑2) is the natural language of a computer.
  • Humans read and write most numbers in decimal (base‑10).
  • For compactness and for grouping of binary bits we also use:
    • Octal (base‑8) – groups of three binary bits.
    • Hexadecimal (base‑16) – groups of four binary bits.
  • Binary prefixes (kibi, mebi, gibi…) distinguish powers of 2 from the decimal prefixes (kilo, mega, giga); the syllabus requires both.

Binary vs Decimal Prefixes

Prefix  Symbol  2ⁿ (binary)             10ⁿ (decimal)
kibi    Ki      2¹⁰ = 1 024             k = 10³ = 1 000
mebi    Mi      2²⁰ = 1 048 576         M = 10⁶ = 1 000 000
gibi    Gi      2³⁰ = 1 073 741 824     G = 10⁹ = 1 000 000 000

Positional Value

In any base‑b system the digit dᵢ at position i (counting from the right, starting at 0) contributes

dᵢ × bⁱ

Example (binary):

1011₂ = 1·2³ + 0·2² + 1·2¹ + 1·2⁰ = 8 + 0 + 2 + 1 = 11₁₀
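The expansion above can be checked in a couple of lines of Python (a sketch; `int` with an explicit base performs the same conversion):

```python
# Sum each bit times its power of 2, counting positions from the right.
value = sum(int(bit) * 2**i for i, bit in enumerate(reversed("1011")))
print(value)           # 11
print(int("1011", 2))  # Python's built-in base-2 parsing agrees: 11
```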

Common Number Systems

System       Base  Digits
Binary       2     0, 1
Octal        8     0 – 7
Decimal      10    0 – 9
Hexadecimal  16    0 – 9, A – F (A = 10 … F = 15)

Conversion Between Number Systems

Binary ↔ Decimal

  1. Binary → Decimal: Multiply each bit by the corresponding power of 2 and sum.
  2. Decimal → Binary: Repeatedly divide by 2, recording the remainders; read the remainders backwards.

Example – Convert 27₁₀ to binary:

27 ÷ 2 = 13  r 1
13 ÷ 2 = 6   r 1
6  ÷ 2 = 3   r 0
3  ÷ 2 = 1   r 1
1  ÷ 2 = 0   r 1   → 11011₂
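The repeated-division procedure can be sketched in Python (the function name `to_binary` is my own):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by
    repeated division by 2, reading the remainders backwards."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)  # quotient and remainder in one step
        remainders.append(str(r))
    return "".join(reversed(remainders))

print(to_binary(27))  # 11011
```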

Decimal ↔ Octal (direct)

  1. Divide the decimal number by 8, record the remainder (0‑7).
  2. Continue dividing the quotient by 8 until it is 0.
  3. Read the remainders backwards.

Example – 156₁₀ to octal:

156 ÷ 8 = 19  r 4
19 ÷ 8 = 2   r 3
2 ÷ 8 = 0    r 2   → 234₈

Binary ↔ Octal

  • Group binary digits in sets of three, starting from the right (add leading 0’s if needed).
  • Replace each group with its octal equivalent (000→0 … 111→7).

Example – 11010110₂ → 011 010 110 → 3 2 6 → 326₈.

Binary ↔ Hexadecimal

  • Group binary digits in sets of four, starting from the right (add leading 0’s if needed).
  • Replace each group with its hexadecimal equivalent (0000→0 … 1111→F).

Example – 10111001₂ → 1011 1001 → B9₁₆.
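The grouping rule translates directly into code; a minimal sketch (the helper name `bin_to_hex` is my own):

```python
def bin_to_hex(bits: str) -> str:
    """Group a binary string into fours from the right and map
    each group to one hexadecimal digit."""
    bits = bits.zfill(-(-len(bits) // 4) * 4)  # pad left to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)

print(bin_to_hex("10111001"))  # B9
```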

Decimal ↔ Hexadecimal (direct)

  1. Divide the decimal number by 16, record the remainder (0‑9 or A‑F).
  2. Continue dividing the quotient by 16 until it is 0.
  3. Read the remainders backwards.

Example – 254₁₀ → 254 ÷ 16 = 15 r 14 (E); 15 ÷ 16 = 0 r 15 (F) → FE₁₆.
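The same divide-and-collect-remainders algorithm works for any base; a sketch covering both the octal and hexadecimal cases (names are my own):

```python
DIGITS = "0123456789ABCDEF"

def to_base(n: int, base: int) -> str:
    """Repeated division by the base; remainders read backwards."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(254, 16))  # FE
print(to_base(156, 8))   # 234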

Binary Arithmetic – Addition, Subtraction & Overflow

Binary Addition

Bits added       Sum  Carry
0 + 0            0    0
0 + 1            1    0
1 + 0            1    0
1 + 1            0    1
1 + 1 + carry 1  1    1

Example – Add 1011₂ and 0110₂ (4‑bit addition)

   1 0 1 1
 + 0 1 1 0
 ----------
  1 0 0 0 1   (carry‑out = 1 → overflow for a 4‑bit unsigned result; as signed numbers −5 + 6 = +1, so the 4‑bit answer 0001 is correct)
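The addition above can be sketched in Python, together with the unsigned-carry and signed-overflow checks (the helper name `add_4bit` is my own):

```python
def add_4bit(a: int, b: int):
    """Add two 4-bit values; report carry-out and signed overflow."""
    total = a + b
    result = total & 0b1111                  # keep the low four bits
    carry_out = (total >> 4) & 1             # carry out of bit 3 (sign bit)
    carry_into_sign = (((a & 0b0111) + (b & 0b0111)) >> 3) & 1
    signed_overflow = carry_into_sign != carry_out
    return result, carry_out, signed_overflow

result, carry, overflow = add_4bit(0b1011, 0b0110)
print(format(result, "04b"), carry, overflow)  # 0001 1 False
```

Note that signed overflow occurs exactly when the carry into the sign bit differs from the carry out of it; here both are 1, so the signed result is valid.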

Binary Subtraction using Two’s Complement

  1. Form the two’s‑complement of the subtrahend.
  2. Add it to the minuend.
  3. Discard any final carry‑out; if there is none, the result is negative and is already in two’s‑complement form.

Example – 13₁₀ – 21₁₀ (8‑bit)

13 → 00001101₂
Two’s‑complement of 21:
 21 → 00010101₂
 invert → 11101010₂
 add 1 → 11101011₂
Add:
 00001101
+11101011
-----------
 11111000  (no carry‑out) → negative result
Interpret 11111000₂ as two’s‑complement:
 invert → 00000111
 add 1 → 00001000₂ = 8₁₀
Result = –8₁₀ (i.e. 13 – 21 = –8)
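The three steps above can be sketched in Python for an 8-bit word (the function name `subtract_8bit` is my own):

```python
def subtract_8bit(minuend: int, subtrahend: int) -> int:
    neg = (~subtrahend + 1) & 0xFF  # two's complement of the subtrahend
    total = (minuend + neg) & 0xFF  # add, discarding any carry-out
    # A set sign bit means the pattern encodes a negative number.
    return total - 256 if total & 0x80 else total

print(subtract_8bit(13, 21))  # -8
```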

Overflow Rules (Signed Numbers)

  • For an n-bit two’s‑complement representation the range is –2ⁿ⁻¹ → 2ⁿ⁻¹ – 1.
    • 8‑bit: –128 → 127
    • 16‑bit: –32 768 → 32 767
  • Overflow occurs when the carry into the sign bit differs from the carry out of the sign bit.

Signed Number Representation – Two’s Complement

  1. Write the binary magnitude of the absolute value.
  2. Invert every bit (one’s complement).
  3. Add 1 to the result.

Example – Represent –13 in an 8‑bit system

13₁₀ = 00001101₂
One’s complement      = 11110010₂
Add 1                 = 11110011₂   → –13₁₀

Range Limits (8‑bit example)

01111111₂ = 127₁₀   (largest positive)
10000000₂ = –128₁₀  (most negative)
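Both the representation rule and the range limits can be verified with a one-line masking trick in Python (a sketch; the helper name is my own):

```python
def twos_complement(n: int, bits: int = 8) -> str:
    """Bit pattern of n in a word that is `bits` wide."""
    return format(n & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(-13))   # 11110011
print(twos_complement(127))   # 01111111  (largest positive)
print(twos_complement(-128))  # 10000000  (most negative)
```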

Binary‑Coded Decimal (BCD)

  • Each decimal digit is stored as a separate 4‑bit binary nibble.
  • Allows exact decimal representation (important for financial calculations).

Conversion Algorithm – Decimal → BCD

  1. Write the decimal number as individual digits.
  2. Convert each digit to its 4‑bit binary equivalent.
  3. Concatenate the nibbles (most‑significant digit on the left).

Example – 259₁₀ → BCD:

2 → 0010
5 → 0101
9 → 1001
BCD = 0010 0101 1001₂
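The digit-by-digit conversion can be sketched in Python (the function name `to_bcd` is my own):

```python
def to_bcd(n: int) -> str:
    """One 4-bit nibble per decimal digit, most significant first."""
    return " ".join(format(int(digit), "04b") for digit in str(n))

print(to_bcd(259))  # 0010 0101 1001
```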

Character Encodings

Computers store text as numbers. The syllabus requires knowledge of the binary form of characters for the most common encodings.

ASCII (7‑bit)

Char   Decimal  Hex  Binary (7‑bit)
Space  32       20   010 0000
A      65       41   100 0001
a      97       61   110 0001
0      48       30   011 0000
9      57       39   011 1001

In practice ASCII characters are stored in an 8‑bit byte, the most‑significant bit being 0.

Extended ASCII (ISO‑8859‑1, 8‑bit)

Values 0‑127 are identical to ASCII; 128‑255 add characters for Western European languages. Example: “é” = 233₁₀ = E9₁₆ = 1110 1001₂.

Unicode (UTF‑8)

  • Variable‑length encoding (1–4 bytes) that can represent over a million code points.
  • For the first 128 characters UTF‑8 is identical to ASCII, which simplifies text handling.
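Python's standard codecs can confirm the code points and byte values quoted above:

```python
print(ord("A"), hex(ord("A")))      # 65 0x41 — ASCII code point
print("é".encode("latin-1").hex())  # e9 — one byte in ISO-8859-1
print("é".encode("utf-8").hex())    # c3a9 — two bytes in UTF-8
print("A".encode("utf-8"))          # b'A' — ASCII range is unchanged
```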

Floating‑Point Representation (IEEE 754)

IEEE 754 defines how real numbers are stored in binary. Two formats are required for the syllabus.

Single‑Precision (32 bits)

  • 1 bit – sign (S)
  • 8 bits – exponent (E) with a bias of 127
  • 23 bits – mantissa (M) – the fraction after the leading 1 (implicit).

Value = (−1)^S × 1.M × 2^(E − 127)

Double‑Precision (64 bits)

  • 1‑bit sign, 11‑bit exponent (bias = 1023), 52‑bit mantissa.

Key Concepts

  • Normalization: The binary mantissa is always written as 1.xxxx…; the leading 1 is not stored (saving one bit).
  • Bias lets the exponent represent both positive and negative powers.
  • Rounding modes (nearest‑even, toward 0, etc.) determine how a value that cannot be represented exactly is approximated.
  • Precision loss: Only a finite number of mantissa bits are stored, so many decimal fractions are only approximated (e.g., 0.1₁₀ ≈ 0.0001100110011…₂).
  • Overflow / Underflow:
    • Overflow – exponent exceeds the maximum (result is ±∞).
    • Underflow – exponent is below the minimum (result is a sub‑normal number or 0).

Worked Example – Convert 5.75₁₀ to Single‑Precision

  1. Integer part 5 → 101₂.
  2. Fractional part 0.75:
    • 0.75 × 2 = 1.5 → 1
    • 0.5 × 2 = 1.0 → 1
    → 0.11₂.
  3. Combine → 101.11₂.
  4. Normalize → 1.0111₂ × 2².
  5. Sign = 0 (positive).
  6. Exponent = 2 + 127 = 129 → 1000 0001₂.
  7. Mantissa = 0111 0000 0000 0000 0000 000 (23 bits, pad with zeros).
  8. Result (binary): 0 10000001 01110000000000000000000
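The result can be checked against Python's `struct` module, which packs floats in IEEE 754 format:

```python
import struct

# Pack 5.75 as a big-endian IEEE 754 single and inspect the 32 bits.
bits = struct.unpack(">I", struct.pack(">f", 5.75))[0]
pattern = format(bits, "032b")
print(pattern[0], pattern[1:9], pattern[9:])
# 0 10000001 01110000000000000000000
```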

Rounding Illustration

Convert 0.1₁₀ to single‑precision:

  • Exact binary = 0.000110011001100…₂ (repeating).
  • Only 23 mantissa bits are kept → 0.00011001100110011001101₂.
  • Rounded (nearest‑even) gives the stored value ≈ 0.10000000149011612₁₀, showing a small error.
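The rounding error is visible by round-tripping 0.1 through a 32-bit float with `struct`:

```python
import struct

# Round-trip 0.1 through a single-precision float to see the stored value.
stored = struct.unpack(">f", struct.pack(">f", 0.1))[0]
print(stored)        # 0.10000000149011612
print(stored - 0.1)  # the small representation error
```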

Data Compression

Compression reduces storage or transmission requirements. The syllabus expects knowledge of both lossless and lossy methods, plus simple calculations.

Lossless Compression – Run‑Length Encoding (RLE)

  • Consecutive identical symbols are replaced by a count and the symbol.
  • Effective for data with long runs of the same value (e.g., simple graphics, binary masks).

Example – Encode the bitmap row AAAAAAABBBCCDDDD:

7A 3B 2C 4D   (7 × A, 3 × B, 2 × C, 4 × D)

If each character originally required 8 bits, the uncompressed size is 16 × 8 = 128 bits. The RLE representation needs 4 counts + 4 symbols = 8 bytes = 64 bits → 50 % reduction.
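A minimal RLE encoder in Python (a sketch; the function name is my own):

```python
def rle_encode(data: str) -> str:
    """Replace each run of identical symbols with count + symbol."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        out.append(f"{j - i}{data[i]}")
        i = j
    return "".join(out)

print(rle_encode("AAAAAAABBBCCDDDD"))  # 7A3B2C4D
```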

Other Lossless Techniques (brief)

  • Huffman coding – variable‑length codes based on symbol frequencies.
  • LZW – dictionary‑based method used in GIF and PNG.

Lossy Compression

  • Some information is permanently discarded; the reconstructed data is an approximation.
  • Typical algorithms:
    • JPEG – transforms and quantises image blocks.
    • MP3/AAC – psychoacoustic models discard inaudible frequencies.
    • MPEG‑4 – motion‑compensation for video.
  • Trade‑off: higher compression → lower perceived quality.

Multimedia Data Representation

Bitmap (Raster) Graphics

  • Image stored as a grid of pixels.
  • File size = width × height × colour‑depth (bits per pixel) ÷ 8 (bytes).

Example – 640 × 480 image, 24‑bit colour:

640 × 480 × 24 = 7 372 800 bits = 921 600 bytes ≈ 0.88 MiB
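The file-size formula as a small Python helper (the name is my own):

```python
def bitmap_size_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """File size of an uncompressed bitmap: pixels x depth, in bytes."""
    return width * height * bits_per_pixel // 8

size = bitmap_size_bytes(640, 480, 24)
print(size, round(size / 2**20, 2))  # 921600 bytes ≈ 0.88 MiB
```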

Vector Graphics

  • Described by geometric primitives (lines, curves, shapes) rather than pixels.
  • File size depends on the number of objects, not the resolution.

Illustrative calculation – a simple SVG containing 5 lines and 3 circles, each described by 4 numbers (coordinates/radius). Assuming each number is stored as a 32‑bit float:

Objects = 8
Numbers = 8 × 4 = 32
Bits = 32 × 32 = 1 024 bits = 128 bytes

Even a complex diagram remains far smaller than a comparable bitmap of the same visual size.

Sound Sampling

  • Analog signal → discrete samples taken at a fixed rate.
  • Data rate = sample‑rate × bit‑depth × number of channels (bits / second).

Example – CD‑quality audio:

44 100 Hz × 16 bits × 2 channels = 1 411 200 bits/s ≈ 176.4 KB/s
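The data-rate formula as a small Python helper (the name is my own):

```python
def audio_bit_rate(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Uncompressed audio data rate in bits per second."""
    return sample_rate_hz * bit_depth * channels

rate = audio_bit_rate(44_100, 16, 2)
print(rate, rate / 8 / 1000)  # 1411200 bits/s → 176.4 KB/s
```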

Summary of Key Skills Required for the Syllabus

  • Convert numbers between binary, octal, decimal and hexadecimal (both direct and via binary).
  • Perform binary addition and subtraction using two’s‑complement, recognising overflow.
  • Represent signed integers in two’s‑complement and state the range for a given word length.
  • Convert decimal numbers to BCD and explain why BCD is used.
  • Write the binary representation of characters for ASCII, extended ASCII and Unicode (UTF‑8).
  • Explain IEEE 754 single‑ and double‑precision formats, normalisation, bias, rounding errors and overflow/underflow.
  • Apply a simple lossless compression method (e.g., RLE) and calculate the resulting size reduction.
  • Calculate storage requirements for bitmap images, vector graphics and sampled sound.
