Show how it is possible for ADTs to be implemented from another ADT

19.1 Algorithms – Implementing an ADT from Another ADT

1. Goal of the Topic

  • Recall the definition of an Abstract Data Type (ADT).
  • Show how a new ADT can be built by re‑using an existing ADT.
  • Analyse the resulting implementation in terms of correctness, time‑complexity and memory‑usage.
  • Apply the same ideas to the searching and sorting algorithms required by the Cambridge A‑Level syllabus.

2. What is an ADT?

An Abstract Data Type (ADT) describes what a data structure does, not how it does it.

  • Values – the set of items that may be stored.
  • Operations – the public procedures that can be applied (e.g. push, pop, insert, delete).
  • Behaviour – pre‑conditions, post‑conditions and side‑effects of each operation.

The ADT specification is independent of the concrete representation (array, linked list, tree, …).

Typical ADTs in the Cambridge (9618) syllabus:

  • Stack
  • Queue
  • List
  • Priority Queue
  • Dictionary (Map)

3. Why Implement One ADT Using Another?

  • Code reuse – a proven implementation can be wrapped rather than rewritten.
  • Correctness by construction – if the underlying ADT is correct, the derived ADT is correct provided the wrapper respects the specification.
  • Performance optimisation – choose an underlying ADT whose operation costs best match the required ADT.
  • Modularity – changes to the underlying ADT do not affect client code that uses the higher‑level ADT.

4. Step‑wise Refinement for ADT‑to‑ADT Implementation

  1. State the abstract specification of the target ADT (list of operations and their contracts).
  2. Select a suitable underlying ADT that can provide the needed behaviour.
  3. Map each target operation to one or more operations of the underlying ADT.
  4. Write wrapper algorithms (pseudocode) that perform the translation.
  5. Analyse complexity (worst‑case, amortised) and extra memory required.
  6. Sketch a correctness argument – show that the wrapper preserves the abstract contracts.
  7. Handle error conditions (empty structure, invalid key, …) using exceptions or special return values.

5. Example Implementations

5.1 Stack using a Dynamic List (array‑based)

Target ADT: push(x), pop(), top()

push(x):
    L.append(x)                     // add at the end of the list

pop():
    if L.isEmpty() then
        raise EmptyStructureError
    return L.removeLast()           // remove and return last element

top():
    if L.isEmpty() then
        raise EmptyStructureError
    return L.get(L.size()‑1)        // read without removing
  • Time complexity: append is O(1) amortised (the underlying array occasionally resizes); removeLast is O(1).
  • Memory usage: O(n) for the list, where n = number of stored elements.
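The wrapper above can be sketched in Python, with the built‑in list playing the role of the dynamic list. Class and exception names (Stack, EmptyStructureError) are carried over from the pseudocode, not from any standard library:

```python
class EmptyStructureError(Exception):
    """Raised when an operation is applied to an empty structure."""

class Stack:
    """Stack ADT implemented by wrapping a dynamic list."""
    def __init__(self):
        self._items = []            # underlying dynamic list

    def push(self, x):
        self._items.append(x)       # add at the end: O(1) amortised

    def pop(self):
        if not self._items:
            raise EmptyStructureError("pop from empty stack")
        return self._items.pop()    # remove and return last element

    def top(self):
        if not self._items:
            raise EmptyStructureError("top of empty stack")
        return self._items[-1]      # read without removing
```

Because every operation touches only the end of the list, the list's last element is always the stack's top, which is exactly the correctness argument given in section 8.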

5.2 Queue using Two Stacks

Target ADT: enqueue(x), dequeue(), front()

enqueue(x):
    S1.push(x)

dequeue():
    if S2.isEmpty() then
        while not S1.isEmpty() do
            S2.push(S1.pop())       // transfer and reverse order
    if S2.isEmpty() then
        raise EmptyStructureError
    return S2.pop()

front():
    if S2.isEmpty() then
        while not S1.isEmpty() do
            S2.push(S1.pop())
    if S2.isEmpty() then
        raise EmptyStructureError
    return S2.top()
  • Amortised time: O(1) for both enqueue and dequeue – each element is pushed and popped at most twice in total, so the occasional O(n) transfer is spread over many operations.
  • Memory usage: O(n) – the two stacks together hold exactly the elements of the queue.
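A Python sketch of the two‑stack queue, using two lists as the stacks S1 and S2 (the class name Queue and the helper _shift are illustrative choices, not part of the pseudocode):

```python
class EmptyStructureError(Exception):
    """Raised when an operation is applied to an empty structure."""

class Queue:
    """Queue ADT built from two stacks (lists used LIFO)."""
    def __init__(self):
        self._in = []     # S1: receives enqueued items
        self._out = []    # S2: holds items in dequeue order

    def _shift(self):
        # Transfer S1 -> S2 (reversing order) only when S2 is empty.
        if not self._out:
            while self._in:
                self._out.append(self._in.pop())

    def enqueue(self, x):
        self._in.append(x)

    def dequeue(self):
        self._shift()
        if not self._out:
            raise EmptyStructureError("dequeue from empty queue")
        return self._out.pop()

    def front(self):
        self._shift()
        if not self._out:
            raise EmptyStructureError("front of empty queue")
        return self._out[-1]
```

The guard in _shift is what makes the amortised analysis work: elements already sitting in S2 are never transferred back, so each element crosses over at most once.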

5.3 Priority Queue using a Binary Heap (array‑based)

Target ADT: insert(x), deleteMin(), peekMin()

insert(x):
    heap.append(x)                  // place at next free position
    heapifyUp(heap.size()‑1)        // restore heap order

deleteMin():
    if heap.isEmpty() then
        raise EmptyStructureError
    min ← heap[0]
    heap[0] ← heap.removeLast()     // move last element to root
    heapifyDown(0)                  // restore heap order
    return min

peekMin():
    if heap.isEmpty() then
        raise EmptyStructureError
    return heap[0]
  • Time complexity: O(log n) for insert and deleteMin; O(1) for peekMin.
  • Memory usage: O(n) for the underlying array.
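The heap‑backed priority queue can be sketched in Python as follows; heapifyUp/heapifyDown are written out in full so the O(log n) bubbling is visible (method names are snake_case translations of the pseudocode):

```python
class EmptyStructureError(Exception):
    """Raised when an operation is applied to an empty structure."""

class MinHeap:
    """Priority queue ADT backed by an array-based binary min-heap."""
    def __init__(self):
        self._h = []                       # heap stored as a flat array

    def insert(self, x):
        self._h.append(x)                  # place at next free position
        self._heapify_up(len(self._h) - 1)

    def peek_min(self):
        if not self._h:
            raise EmptyStructureError("peek on empty priority queue")
        return self._h[0]

    def delete_min(self):
        if not self._h:
            raise EmptyStructureError("delete on empty priority queue")
        smallest = self._h[0]
        last = self._h.pop()               # take the last element
        if self._h:                        # if anything remains...
            self._h[0] = last              # ...move it to the root
            self._heapify_down(0)          # and restore heap order
        return smallest

    def _heapify_up(self, i):
        # Swap upwards while the node is smaller than its parent.
        while i > 0:
            parent = (i - 1) // 2
            if self._h[i] >= self._h[parent]:
                break
            self._h[i], self._h[parent] = self._h[parent], self._h[i]
            i = parent

    def _heapify_down(self, i):
        # Swap downwards while a child is smaller than the node.
        n = len(self._h)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self._h[left] < self._h[smallest]:
                smallest = left
            if right < n and self._h[right] < self._h[smallest]:
                smallest = right
            if smallest == i:
                break
            self._h[i], self._h[smallest] = self._h[smallest], self._h[i]
            i = smallest
```

In practice Python's standard heapq module provides the same array‑based min‑heap operations, but spelling them out matches the pseudocode above.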

5.4 Dictionary (Map) using a Hash Table

Target ADT: put(key, value), get(key), remove(key)

hash(key):
    return (some deterministic function of key) mod m   // m = table size

put(key, value):
    i ← hash(key)
    while table[i] ≠ empty and table[i].key ≠ key do
        i ← (i + 1) mod m               // linear probing
    table[i] ← (key, value)             // assumes the table never becomes full

get(key):
    i ← hash(key)
    while table[i] ≠ empty do
        if table[i].key = key then
            return table[i].value
        i ← (i + 1) mod m
    raise KeyError

remove(key):
    i ← hash(key)
    while table[i] ≠ empty do
        if table[i].key = key then
            table[i] ← deletedMarker
            return
        i ← (i + 1) mod m
    raise KeyError
  • Average‑case time: O(1) for all three operations (assuming a low load factor n/m).
  • Worst‑case time: O(n) if many collisions occur.
  • Memory usage: O(m) for the table, where m ≥ n.
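A Python sketch of the open‑addressing dictionary; Python's built‑in hash stands in for the unspecified deterministic hash function, and a sentinel object plays the role of deletedMarker (the class name HashDict and fixed table size m are illustrative assumptions):

```python
_DELETED = object()   # tombstone marker for removed slots

class HashDict:
    """Dictionary ADT backed by a fixed-size hash table with
    linear probing.  Simplified sketch: like the pseudocode, it
    assumes the table never fills completely."""
    def __init__(self, m=16):
        self._m = m
        self._table = [None] * m   # slot: None, _DELETED, or (key, value)

    def _hash(self, key):
        return hash(key) % self._m   # deterministic function of key, mod m

    def put(self, key, value):
        i = self._hash(key)
        # Probe past occupied and tombstoned slots until an empty
        # slot or the matching key is found.
        while self._table[i] is not None and (
                self._table[i] is _DELETED or self._table[i][0] != key):
            i = (i + 1) % self._m    # linear probing
        self._table[i] = (key, value)

    def get(self, key):
        i = self._hash(key)
        while self._table[i] is not None:
            if self._table[i] is not _DELETED and self._table[i][0] == key:
                return self._table[i][1]
            i = (i + 1) % self._m
        raise KeyError(key)

    def remove(self, key):
        i = self._hash(key)
        while self._table[i] is not None:
            if self._table[i] is not _DELETED and self._table[i][0] == key:
                self._table[i] = _DELETED   # tombstone, don't break the chain
                return
            i = (i + 1) % self._m
        raise KeyError(key)
```

The tombstone is essential: overwriting a removed slot with "empty" would break probe chains for keys stored beyond it, which is why get and remove keep probing past _DELETED slots.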

6. Searching and Sorting Algorithms (Syllabus Requirement 19.1)

6.1 Linear Search (Iterative & Recursive)

// iterative version
linearSearchIter(arr, key):
    for i ← 0 to arr.length‑1 do
        if arr[i] = key then
            return i
    return -1          // not found

// recursive version
linearSearchRec(arr, key, i):
    if i = arr.length then return -1
    if arr[i] = key then return i
    return linearSearchRec(arr, key, i+1)
  • Worst‑case time: O(n)
  • Average‑case time: O(n)
  • Memory: O(1) iterative, O(n) recursive (call stack)
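Both versions translate directly into Python (function names are snake_case renderings of the pseudocode):

```python
def linear_search_iter(arr, key):
    """Iterative linear search: index of key, or -1 if absent."""
    for i in range(len(arr)):
        if arr[i] == key:
            return i
    return -1

def linear_search_rec(arr, key, i=0):
    """Recursive linear search: O(n) call-stack depth at worst."""
    if i == len(arr):
        return -1
    if arr[i] == key:
        return i
    return linear_search_rec(arr, key, i + 1)
```

The recursive version illustrates the memory difference in the bullet above: each pending call occupies a stack frame until the key is found or the array is exhausted.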

6.2 Binary Search (requires a sorted array)

binarySearch(arr, key):
    low ← 0; high ← arr.length‑1
    while low ≤ high do
        mid ← (low + high) ÷ 2
        if arr[mid] = key then return mid
        else if arr[mid] < key then low ← mid + 1
        else high ← mid - 1
    return -1
  • Worst‑case time: O(log n)
  • Average‑case time: O(log n)
  • Memory: O(1) iterative, O(log n) recursive
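The same algorithm in Python; // is Python's integer division, matching the ÷ in the pseudocode:

```python
def binary_search(arr, key):
    """Iterative binary search on a sorted array: index or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # integer division
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            low = mid + 1            # discard the lower half
        else:
            high = mid - 1           # discard the upper half
    return -1
```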

6.3 Insertion Sort

insertionSort(arr):
    for i ← 1 to arr.length‑1 do
        key ← arr[i]
        j ← i‑1
        while j ≥ 0 and arr[j] > key do
            arr[j+1] ← arr[j]
            j ← j‑1
        arr[j+1] ← key
  • Worst‑case time: O(n²) (reverse‑ordered input)
  • Average‑case time: O(n²)
  • Best‑case time: O(n) (already sorted)
  • Memory: O(1) – in‑place
  • Stable and adaptive.
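A direct Python rendering (returning the list is a convenience for testing; the pseudocode sorts purely in place):

```python
def insertion_sort(arr):
    """In-place insertion sort: maintains a sorted prefix arr[0..i-1]."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]   # shift larger elements one place right
            j -= 1
        arr[j + 1] = key          # drop key into its slot
    return arr
```

The inner while loop is what makes the algorithm adaptive: on already‑sorted input it exits immediately, giving the O(n) best case.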

6.4 Bubble Sort

bubbleSort(arr):
    repeat
        swapped ← false
        for i ← 0 to arr.length‑2 do
            if arr[i] > arr[i+1] then
                swap(arr[i], arr[i+1])
                swapped ← true
    until not swapped
  • Worst‑case time: O(n²)
  • Average‑case time: O(n²)
  • Best‑case time: O(n) (optimised version with swapped flag)
  • Memory: O(1) – in‑place
  • Stable but rarely used in practice.
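The optimised bubble sort in Python; a while loop stands in for the pseudocode's repeat…until (again, returning the list is only for testing convenience):

```python
def bubble_sort(arr):
    """In-place bubble sort with a 'swapped' flag: if a full pass
    makes no swaps, the array is sorted and the loop stops early."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(arr) - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
    return arr
```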

7. Comparing Algorithms – Criteria Required by the Syllabus (AO2)

When comparing any two algorithms (search, sort, or ADT‑based), discuss:

  • Time taken – Big‑O notation for worst‑case, average‑case and, where relevant, best‑case.
  • Extra memory – space beyond the input data (recursion stack, auxiliary arrays, etc.).
  • Stability (sorting only) – does the algorithm preserve the relative order of equal keys?
  • Adaptability – does the algorithm run faster on partially ordered data?

7.1 Search & Sort – Summary Table

Algorithm | Worst‑case | Average‑case | Best‑case | Extra Memory | Notes
--- | --- | --- | --- | --- | ---
Linear Search | O(n) | O(n) | O(1) (first element) | O(1) iterative / O(n) recursive | Works on unsorted data.
Binary Search | O(log n) | O(log n) | O(log n) | O(1) iterative / O(log n) recursive | Array must be sorted.
Insertion Sort | O(n²) | O(n²) | O(n) | O(1) | Stable, adaptive.
Bubble Sort | O(n²) | O(n²) | O(n) | O(1) | Stable; “swapped” flag gives O(n) best case.

7.2 ADT‑based Implementations – Summary Table

ADT Implemented | Underlying ADT | Operation | Worst‑case Time | Amortised Time | Extra Memory
--- | --- | --- | --- | --- | ---
Stack | Dynamic List (array) | push | O(n) (occasional resize) | O(1) | O(n)
Stack | Dynamic List (array) | pop | O(1) | O(1) | O(n)
Stack | Dynamic List (array) | top | O(1) | O(1) | O(n)
Queue | Two Stacks | enqueue | O(1) | O(1) | O(n)
Queue | Two Stacks | dequeue | O(n) when S2 is empty | O(1) | O(n)
Queue | Two Stacks | front | O(n) when transfer needed | O(1) | O(n)
Queue | Two Stacks | size | O(1) | O(1) | O(n)
Priority Queue | Binary Heap (array) | insert | O(log n) | O(log n) | O(n)
Priority Queue | Binary Heap (array) | deleteMin / deleteMax | O(log n) | O(log n) | O(n)
Priority Queue | Binary Heap (array) | peekMin / peekMax | O(1) | O(1) | O(n)

8. Correctness Sketches

  • Stack from List – The list’s last element is always the stack’s top. push appends, pop removes that same element, preserving LIFO order.
  • Queue from Two Stacks – Elements are enqueued onto S1 in arrival order. When S2 is empty, all elements are transferred, reversing their order so the oldest element becomes the top of S2. Subsequent dequeue operations remove from S2, guaranteeing FIFO behaviour.
  • Priority Queue from Heap – The heap invariant (parent ≤ children for a min‑heap) guarantees the smallest element resides at index 0. peekMin reads this element; deleteMin removes it and restores the invariant, matching the abstract specification.
  • Dictionary from Hash Table – The hash function maps every occurrence of a key to the same home bucket, and linear probing resolves collisions deterministically, so put, get and remove all follow the same probe sequence and locate the correct slot, preserving the “key → value” relationship.
  • Search & Sort – Correctness can be proved by induction on the size of the input (e.g., insertion sort maintains a sorted prefix after each outer iteration).

9. Error Handling (Exception Example)

pop():
    if L.isEmpty() then
        raise EmptyStructureError("Attempt to pop from an empty stack")
    return L.removeLast()

In an exam answer it is sufficient to state that an appropriate error is raised (or a special sentinel value returned) when an operation is applied to an empty structure or an invalid key.
