20.1 Programming Paradigms
Objective
To understand what is meant by a programming paradigm, to recognise the four paradigms required by the Cambridge AS & A‑Level Computer Science syllabus, and to be able to justify the choice of a paradigm for a given problem.
What the syllabus expects
The Cambridge AS & A‑Level Computer Science syllabus specifies exactly four programming paradigms that candidates must be able to identify, compare and justify:
Imperative (Procedural)
Object‑Oriented
Functional
Declarative – includes Logic and Constraint programming
Why paradigms matter (link to Assessment Objectives)
AO2 – Analysing problems: recognising which paradigm best matches the natural description of a problem.
AO3 – Designing solutions: selecting language features, structuring code and justifying the paradigm choice in an exam response.
What is a programming paradigm?
A programming paradigm is a fundamental style or approach to writing software. It supplies a set of concepts, principles and patterns that shape how developers think about problems, organise code and express algorithms. Paradigms influence language design, program structure and the way a solution is described to the computer.
Major programming paradigms (Cambridge syllabus)
1. Imperative (Procedural)
Core idea: Describe how to achieve a result using statements that change program state.
Key concepts:
Variables, assignment, loops and conditionals.
Explicit state changes.
Advantages:
Simple, direct mapping to machine instructions.
Fine‑grained control over memory and performance.
Limitations:
State can become tangled as a program grows.
Harder to reason about in large systems.
Typical use‑cases: System programming, embedded software, low‑level hardware interaction.
When to choose it:
When performance or direct hardware access is critical.
When the problem is naturally expressed as a sequence of steps.
Justification prompt: “When would you choose the imperative paradigm for a data‑intensive algorithm that must run with minimal overhead?”
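The imperative style above can be sketched in a few lines of Python. This is a minimal illustration (the function name `running_mean` is chosen here for the example, not taken from the syllabus): explicit state is created and mutated step by step, mirroring how the machine executes the loop.

```python
def running_mean(values):
    total = 0
    count = 0
    for v in values:        # each iteration explicitly mutates program state
        total += v
        count += 1
    return total / count if count else 0

print(running_mean([2, 4, 6]))  # 4.0
```

Note that the result is reached purely through a prescribed sequence of state changes, which is the defining characteristic of the paradigm.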
2. Object‑Oriented
Core idea: Model software as interacting objects that encapsulate data and behaviour.
Key concepts:
Classes and objects.
Inheritance and polymorphism – enable code reuse and dynamic dispatch.
Encapsulation (access modifiers).
Advantages:
Encapsulation, inheritance and polymorphism promote modularity and reuse.
Natural fit for modelling real‑world entities.
Limitations:
Can lead to over‑engineering (excessive class hierarchies).
Runtime overhead of dynamic dispatch.
Typical use‑cases: Large‑scale applications, GUI development, simulations, game engines.
When to choose it:
When the problem domain consists of distinct entities with state and behaviour.
When code reuse and maintainability are priorities.
Justification prompt: “Why would an object‑oriented design be preferable for a banking system that models customers, accounts and transactions?”
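A hedged sketch of the banking prompt above, in Python: the class name `Account` and its methods are illustrative choices, not a prescribed design. Data (the balance) and behaviour (deposit) are encapsulated together, and the leading underscore signals that the balance should not be modified directly.

```python
class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner          # object state held with the object
        self._balance = balance     # encapsulated: accessed via methods only

    def deposit(self, amount):
        self._balance += amount     # behaviour bundled with the data it changes

    def balance(self):
        return self._balance

acc = Account("Alice")
acc.deposit(50)
print(acc.balance())  # 50
```

A fuller design would add `Customer` and `Transaction` classes and could use inheritance (e.g. a `SavingsAccount` subclass) to reuse this behaviour.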
3. Functional
Core idea: Compose programs by applying and composing pure functions; avoid mutable state.
Key concepts:
First‑class and higher‑order functions (e.g., map, filter, reduce).
Recursion replaces iteration.
Immutability and referential transparency.
Advantages:
Referential transparency makes reasoning and testing easier.
Facilitates safe concurrent and parallel execution.
Limitations:
Steep learning curve for programmers accustomed to imperative thinking.
Recursion can be less efficient without tail‑call optimisation.
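The key concepts above can be sketched in Python, which supports a functional style. Here `map` and `filter` are applied as higher-order functions: they take other functions as arguments and return new sequences rather than mutating the input list.

```python
numbers = [1, 2, 3, 4, 5]

# Pure, side-effect-free pipeline: keep the evens, then square them.
squares_of_evens = list(map(lambda n: n * n,
                            filter(lambda n: n % 2 == 0, numbers)))

print(squares_of_evens)  # [4, 16]
print(numbers)           # [1, 2, 3, 4, 5] -- original list is unchanged
```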
Paradigm summary
Imperative: Uses variables, loops and conditionals; state changes are explicit.
Object‑Oriented: Emphasises objects, classes, inheritance and polymorphism; behaviour is invoked via messages.
Functional: Functions are first‑class citizens; side‑effects are minimised; recursion and higher‑order functions replace iteration.
Declarative – Logic & Constraint: Focuses on *what* must hold; the interpreter performs back‑tracking and constraint propagation to find solutions.
Declarative – Logic & Constraint programming in detail
Logic programming (e.g., Prolog) expresses knowledge as facts and rules. The engine performs automatic back‑tracking search to satisfy a query.
parent(alice, bob).
parent(bob, carol).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
The query ?- ancestor(alice, carol). succeeds because the rules describe the required relationship.
Constraint programming extends this idea by allowing variables to be bound by mathematical constraints. A specialised solver propagates constraints and explores the domain until all are satisfied.
array[1..3] of var 1..9: Row;
constraint Row[1] + Row[2] + Row[3] = 15;
constraint alldifferent(Row);
solve satisfy;
The solver finds three distinct values from 1–9 that satisfy both the sum and the uniqueness constraints.
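To see what the solver is searching for, the same model can be checked by brute force in Python. This is only an illustrative sketch: a real constraint solver prunes the search space by propagation rather than enumerating every candidate as `itertools.permutations` does here.

```python
from itertools import permutations

# Enumerate ordered triples of distinct values from 1..9 and keep
# those satisfying the sum constraint (uniqueness is guaranteed by
# permutations drawing without repetition).
solutions = [p for p in permutations(range(1, 10), 3) if sum(p) == 15]

print(solutions[0])  # one satisfying assignment for Row
```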
Example: Summing a list of numbers in each paradigm
Imperative (Python)
total = 0
for n in numbers:
    total += n
Object‑Oriented (Java)
int total = 0;
for (int n : numbers) {
    total += n;
}
Functional (Haskell)
total = sum numbers -- uses a pure function
Functional – recursion & higher‑order (Python)
def sum_list(lst):
    if not lst:
        return 0
    return lst[0] + sum_list(lst[1:])
# using a higher‑order function
from functools import reduce
total = reduce(lambda a, b: a + b, numbers, 0)
Declarative – Logic (Prolog)
sum_list([], 0).
sum_list([H|T], Sum) :-
    sum_list(T, Rest),
    Sum is H + Rest.
The query ?- sum_list([1,2,3,4], S). yields S = 10.
Choosing a paradigm – exam checklist
What is the natural description of the problem? (steps → Imperative, entities → OO, transformations → Functional, rules/constraints → Declarative.)
Which paradigm offers the greatest readability and maintainability for the given team?
Are there performance or resource constraints that favour a low‑level approach?
Does the required language (Java, Visual Basic, Python) support the paradigm efficiently?
What are the known limitations of the chosen paradigm for this domain?