Numerical methods: numerical solutions of equations, iterative methods

Pure Mathematics 3 (P3) – Numerical Methods (Topic 3.6)

1. Syllabus checklist

The Cambridge International AS & A Level Mathematics 9709 syllabus requires that students be able to:

  • Define a root (zero) of a continuous function \(f(x)\).
  • Understand the idea of a sequence of approximations \(\{x_n\}\) that converges to a root.
  • Write a simple‑iteration formula of the form \(x_{n+1}=F(x_n)\) derived from the original equation \(f(x)=0\).
  • State the three admissible stopping criteria:
    1. \(|x_{n+1}-x_n|<\varepsilon\)
    2. \(|f(x_{n+1})|<\varepsilon\)
    3. Maximum number of iterations reached.
  • Identify the convergence condition for each method and give the order of convergence.
  • Choose an appropriate method from the information available (sign change, derivative, rearrangement).

2. The root‑finding problem

Given a continuous function \(f(x)\), a root is a number \(r\) such that \(f(r)=0\).

Numerical methods generate a sequence \(\{x_n\}\) that (hopefully) converges to \(r\). Two broad families are used:

  • Bracketing (closed‑interval) methods: require a sign change on \([a,b]\) (i.e. \(f(a)f(b)<0\)).
  • Open‑interval methods: require differentiability of \(f\) in a neighbourhood of the root.

Convergence is monitored by either the change in successive iterates \(|x_{n+1}-x_n|\) or the residual \(|f(x_{n+1})|\).

3. Stopping criteria (syllabus requirement)

For every iterative method the exam expects one of the following to be stated and used:

  1. \(|x_{n+1}-x_n|<\varepsilon\) (tolerance on successive approximations)
  2. \(|f(x_{n+1})|<\varepsilon\) (tolerance on the function value)
  3. Maximum number of iterations reached (to avoid an infinite loop).

In examples below the tolerance \(\varepsilon\) is taken as \(10^{-3}\) unless otherwise noted.
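In code the three criteria are typically combined in a single loop. The sketch below is illustrative only (the helper name `iterate` is my own, not a syllabus term):

```python
def iterate(step, f, x0, eps=1e-3, max_iter=100):
    """Apply x_{n+1} = step(x_n) until one of the three stopping criteria fires."""
    x = x0
    for _ in range(max_iter):             # criterion 3: iteration cap
        x_next = step(x)
        if abs(x_next - x) < eps:         # criterion 1: |x_{n+1} - x_n| < eps
            return x_next
        if abs(f(x_next)) < eps:          # criterion 2: |f(x_{n+1})| < eps
            return x_next
        x = x_next
    return x                              # cap reached: return the latest iterate

# Example: a Newton step for f(x) = x^2 - 2, converging to sqrt(2)
root = iterate(lambda x: x - (x*x - 2) / (2*x), lambda x: x*x - 2, 1.0)
```

Any of the iterative methods in §4 can be plugged in as `step`.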

4. Iterative methods required by the syllabus

4.1 Bisection method (bracketing)

Purpose: guaranteed convergence when a sign change is known.

Algorithm

  1. Choose \([a,b]\) with \(f(a)f(b)<0\).
  2. Set \(c=\dfrac{a+b}{2}\) and evaluate \(f(c)\).
  3. If \(f(c)=0\) stop – \(c\) is the exact root.
  4. Otherwise replace the endpoint that has the same sign as \(f(c)\) by \(c\) (keep the sub‑interval where the sign change persists).
  5. Repeat steps 2–4 until one of the stopping criteria is satisfied.

Convergence: linear (order \(p=1\)). After \(n\) iterations the \(n\)-th midpoint \(c_n\) satisfies

\[ |r-c_n|\le\frac{b-a}{2^{\,n}}. \]

Typical pitfalls: slow convergence; requires the function to be continuous on the interval.
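The steps above translate directly into Python. This is a minimal sketch (the function name `bisect` is my own):

```python
def bisect(f, a, b, eps=1e-3, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half with the sign change."""
    if f(a) * f(b) >= 0:
        raise ValueError("need a sign change: f(a) * f(b) < 0")
    for _ in range(max_iter):              # stopping criterion 3
        c = (a + b) / 2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < eps:   # exact root, or error bound below eps
            return c
        if f(a) * fc < 0:                  # sign change persists on [a, c]
            b = c
        else:                              # sign change persists on [c, b]
            a = c
    return (a + b) / 2

# Wallis's classic example: x^3 - 2x - 5 = 0 has its real root near 2.0946
root = bisect(lambda x: x**3 - 2*x - 5, 2, 3)
```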

4.2 Newton–Raphson method (tangent‑line iteration)

Iteration formula

\[ x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}. \]
Derivation. The tangent to \(y=f(x)\) at \((x_n,f(x_n))\) is \(y = f(x_n)+f'(x_n)(x-x_n)\). Setting \(y=0\) gives the x‑intercept \(x = x_n-\dfrac{f(x_n)}{f'(x_n)}\), which is taken as the next approximation \(x_{n+1}\). Thus Newton–Raphson is itself a simple‑iteration formula \(x_{n+1}=F(x_n)\) obtained directly from the equation \(f(x)=0\).

  • Requires \(f\) differentiable and \(f'(x_n)\neq0\) near the root.
  • Order of convergence: quadratic (\(p=2\)).
  • Stopping criteria: any of the three listed in §3.
  • Potential failure modes: division by a very small derivative, divergence if the initial guess is far from the root, or cycling.
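A direct translation of the iteration formula, with a guard against the near-zero-derivative failure mode (an illustrative sketch; the guard threshold is my own choice):

```python
def newton(f, fprime, x0, eps=1e-3, max_iter=50):
    """Newton-Raphson: slide down the tangent line to its x-intercept."""
    x = x0
    for _ in range(max_iter):                 # stopping criterion 3
        slope = fprime(x)
        if abs(slope) < 1e-12:                # failure mode: derivative too small
            raise ZeroDivisionError("derivative too close to zero")
        x_next = x - f(x) / slope
        if abs(x_next - x) < eps:             # stopping criterion 1
            return x_next
        x = x_next
    return x

root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.5)
```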

4.3 Secant method (finite‑difference iteration)

Iteration formula

\[ x_{n+1}=x_n-\frac{f(x_n)\,(x_n-x_{n-1})}{f(x_n)-f(x_{n-1})}. \]
  • Uses two previous approximations; no derivative is required.
  • Order of convergence: super‑linear, \(p\approx1.618\) (the golden ratio).
  • Stopping criteria: same three options as in §3.
  • More robust than Newton when \(f'\) is difficult to evaluate, but still may diverge for a poor pair of starting values.
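In code the only change from Newton is that the derivative is replaced by the slope through the last two iterates (a minimal sketch; the function name is my own):

```python
def secant(f, x0, x1, eps=1e-3, max_iter=50):
    """Secant method: use the chord through the last two points in place of f'."""
    for _ in range(max_iter):                     # stopping criterion 3
        denom = f(x1) - f(x0)
        if denom == 0:                            # flat chord: method breaks down
            raise ZeroDivisionError("f(x1) == f(x0)")
        x2 = x1 - f(x1) * (x1 - x0) / denom
        if abs(x2 - x1) < eps:                    # stopping criterion 1
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```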

4.4 Fixed‑point (simple) iteration

If the equation \(f(x)=0\) can be rewritten as \(x=g(x)\), the iteration

\[ x_{n+1}=g(x_n) \]

is called fixed‑point iteration. It is the generic “simple‑iteration” form required by the syllabus.

  • Convergence condition (from the Banach fixed‑point theorem): there exists an interval \(I\) containing the fixed point \(r\) such that \(g\) maps \(I\) into itself and \(|g'(x)|<1\) for all \(x\in I\). Under this condition the iteration converges for any \(x_0\in I\), in general linearly (\(p=1\)).
  • Choosing a suitable rearrangement is crucial; a bad choice can give \(|g'|>1\) and cause divergence.
  • Stopping criteria: any of the three listed in §3.
Example of a classic fixed‑point iteration

To compute \(\sqrt{a}\) solve \(x^2-a=0\) and rearrange as

\[ x = g(x)=\frac12\left(x+\frac{a}{x}\right). \]

Since \(|g'(x)|=\bigl|\frac12\bigl(1-\frac{a}{x^2}\bigr)\bigr|<1\) near the root, the iteration converges for any positive start \(x_0\). (In fact this is Newton–Raphson applied to \(x^2-a=0\); since \(g'(\sqrt a)=0\), the convergence here is quadratic.)
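The same recipe in code (a minimal sketch; `fixed_point` is my own name), applied to the square-root rearrangement above with \(a=2\):

```python
def fixed_point(g, x0, eps=1e-3, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to within eps."""
    x = x0
    for _ in range(max_iter):          # stopping criterion 3
        x_next = g(x)
        if abs(x_next - x) < eps:      # stopping criterion 1
            return x_next
        x = x_next
    return x

# g(x) = (x + a/x)/2 with a = 2: converges to sqrt(2) from any positive start
root = fixed_point(lambda x: (x + 2 / x) / 2, 1.0)
```

A different rearrangement, e.g. \(g(x)=\sqrt[3]{2x+5}\) for \(x^3-2x-5=0\), plugs into the same function unchanged.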

5. Summary of convergence orders

| Method | Order of convergence \(p\) | Typical convergence speed |
|---|---|---|
| Bisection | 1 (linear) | Slow – interval halves each step. |
| Fixed‑point (ordinary) | 1 (linear) | Depends on \(\lvert g'\rvert\); the smaller \(\lvert g'\rvert\), the faster. |
| Secant | \(\approx1.618\) (super‑linear) | Faster than linear, slower than quadratic. |
| Newton–Raphson | 2 (quadratic) | Very rapid once the iterate is close to a simple root. |
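These orders can be checked empirically by tracking the error against a high-accuracy reference root (a rough numerical experiment, not an exam technique):

```python
# Compare error decay for bisection (linear) and Newton (quadratic)
# on the running example f(x) = x^3 - 2x - 5.
f = lambda x: x**3 - 2*x - 5
fp = lambda x: 3*x**2 - 2
r = 2.0945514815423265           # high-accuracy reference for the real root

a, b = 2.0, 3.0
bis_err = []                     # bisection: error bound halves every step
for _ in range(20):
    c = (a + b) / 2
    bis_err.append(abs(c - r))
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c

x = 2.5
newt_err = []                    # Newton: correct digits roughly double per step
for _ in range(5):
    x = x - f(x) / fp(x)
    newt_err.append(abs(x - r))
```

After 20 bisections the error is still only of order \(2^{-20}\approx10^{-6}\), while five Newton steps reach machine-precision accuracy.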

6. Choosing a method – decision table (mirrors the syllabus flowchart)

| Information available | First choice (guaranteed or fastest) | Alternative if it fails |
|---|---|---|
| Only a sign change is known (\(f(a)f(b)<0\)) | Bisection (guaranteed convergence) | Try a suitable rearrangement → fixed‑point, or obtain a second point and use the secant method. |
| Derivative \(f'\) easy to evaluate and a good initial guess is available | Newton–Raphson (quadratic convergence) | Secant method (if the derivative becomes zero or is costly). |
| Derivative unavailable but function evaluations cheap | Secant method (no derivative required) | Fixed‑point iteration (if a rearrangement with \(\lvert g'\rvert<1\) can be found). |
| A natural rearrangement \(x=g(x)\) with \(\lvert g'\rvert<1\) is evident | Fixed‑point iteration (simple and inexpensive) | Newton or secant as a backup. |

7. Worked examples

7.1 Bisection – three‑decimal accuracy

Find a root of \(f(x)=x^{3}-2x-5\) to \(\varepsilon=0.001\).

  1. Sign change: \(f(2)=-1\), \(f(3)=16\) ⇒ interval \([2,3]\).
  2. Midpoint \(c_1=2.5\), \(f(c_1)=5.625>0\) → new interval \([2,2.5]\).
  3. Midpoint \(c_2=2.25\), \(f(c_2)=1.891\) → \([2,2.25]\).
  4. Midpoint \(c_3=2.125\), \(f(c_3)=0.346\) → \([2,2.125]\).
  5. Midpoint \(c_4=2.0625\), \(f(c_4)=-0.351<0\) → \([2.0625,2.125]\).
  6. Midpoint \(c_5=2.09375\), \(f(c_5)=-0.009\) → \([2.09375,2.125]\).
  7. Midpoint \(c_6=2.109375\), \(f(c_6)=0.167\) → \([2.09375,2.109375]\).
  8. Midpoint \(c_7=2.1015625\), \(f(c_7)=0.079\) → \([2.09375,2.1015625]\).
  9. Midpoint \(c_8=2.09765625\), \(f(c_8)=0.035\) → \([2.09375,2.09765625]\).
  10. Midpoint \(c_9=2.095703125\), \(f(c_9)=0.013\) → \([2.09375,2.095703125]\); width \(0.00195>0.001\), continue.
  11. Midpoint \(c_{10}=2.0947265625\), \(f(c_{10})\approx0.002\) → width \(0.00098<0.001\). Stop.

Root to three decimal places: \(\displaystyle x\approx2.095\) (the true root is \(2.09455\ldots\)).

7.2 Newton–Raphson – same equation

Equation: \(f(x)=x^{3}-2x-5\), \(f'(x)=3x^{2}-2\). Start with \(x_0=2.5\).

| \(n\) | \(x_n\) | \(\lvert x_n-x_{n-1}\rvert\) | \(\lvert f(x_n)\rvert\) |
|---|---|---|---|
| 0 | 2.5000 | – | 5.6250 |
| 1 | 2.1642 | 0.3358 | 0.8079 |
| 2 | 2.0971 | 0.0670 | 0.0289 |
| 3 | 2.0946 | 0.0026 | 0.0000 |
| 4 | 2.0946 | 0.0000 | 0.0000 |

Since \(|x_4-x_3|\approx4\times10^{-6}<\varepsilon\), the required accuracy is achieved after four iterations: \(x\approx2.0946\).

7.3 Fixed‑point iteration (rearranged equation)

Rewrite \(x^{3}-2x-5=0\) as

\[ x=g(x)=\sqrt[3]{\,2x+5\,}. \]

Derivative:

\[ g'(x)=\frac{2}{3(2x+5)^{2/3}},\qquad |g'(2.09)|\approx0.15<1, \] so convergence is expected.

| \(n\) | \(x_n\) | \(\lvert x_n-x_{n-1}\rvert\) |
|---|---|---|
| 0 | 2.5000 | – |
| 1 | 2.1544 | 0.3456 |
| 2 | 2.1036 | 0.0508 |
| 3 | 2.0959 | 0.0077 |
| 4 | 2.0948 | 0.0012 |
| 5 | 2.0946 | 0.0002 |

Stopping when \(|x_5-x_4|<0.001\) gives the same root, \(x\approx2.0946\), after five iterations.

7.4 Divergent Newton iteration (illustrating failure)

Start Newton–Raphson far from the root, e.g. \(x_0=0\).

\[ \begin{aligned} x_1 &=0-\frac{f(0)}{f'(0)} = 0-\frac{-5}{-2} = -2.5,\\ x_2 &=-2.5-\frac{f(-2.5)}{f'(-2.5)} = -2.5+\frac{15.625}{16.75}\approx-1.567,\\ x_3 &\approx -1.567+\frac{5.715}{5.368}\approx-0.503,\\ x_4 &\approx -0.503-\frac{4.122}{1.242}\approx-3.821. \end{aligned} \]

The relatively small derivative \(f'(x_3)\approx-1.24\) flings \(x_4\) far to the left, and the early iterates wander instead of approaching the only real root near \(2.095\). This demonstrates that Newton's method can behave erratically, or fail to converge in a reasonable number of steps, when the initial guess is not close to a simple root.
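The run can be reproduced numerically (a short sketch; `newton_steps` is my own helper, which records iterates without any stopping test):

```python
def newton_steps(f, fprime, x0, n):
    """Return [x0, x1, ..., xn], the first n Newton iterates from x0."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

f = lambda x: x**3 - 2*x - 5
fp = lambda x: 3*x**2 - 2
xs = newton_steps(f, fp, 0.0, 4)    # x0 = 0 is far from the root near 2.0946
```

After four steps the iterate is farther from the root than the starting point was.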

8. Key take‑aways for the exam

  • Always check the syllabus‑required stopping criteria and state which one you are using.
  • Verify the convergence condition before applying a method (sign change for bisection, \(|g'|<1\) for fixed‑point, non‑zero derivative for Newton, etc.).
  • When a method fails, refer to the decision table to select a more suitable alternative.
  • Remember the order of convergence – it explains why Newton is usually the fastest but also why it may need a good starting value.
