When we measure something, the result we get is never exactly the true value.
The difference between the measured value and the true value is called an error.
Errors can be grouped into two main types: systematic errors, which shift every reading in the same direction (for example, a zero offset on a balance), and random errors, which scatter readings unpredictably around the true value.
The uncertainty is our best estimate of how far the true value could be from the measured value.
It is usually expressed as a plus‑minus range:
Measured value = $x \pm \Delta x$

where $\Delta x$ is the uncertainty.
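One common convention in introductory labs is to take the mean of repeated readings as the best value and half the range as the uncertainty. A minimal sketch, using hypothetical readings (the data and units here are assumptions for illustration):

```python
# Sketch: estimating x ± Δx from repeated readings (hypothetical data).
# Convention assumed here: best value = mean, uncertainty = half the range.
readings = [5.10, 5.14, 5.12, 5.11]  # hypothetical measurements in cm

x = sum(readings) / len(readings)          # best estimate: the mean
dx = (max(readings) - min(readings)) / 2   # uncertainty: (max - min) / 2

print(f"x = {x:.2f} ± {dx:.2f} cm")  # → x = 5.12 ± 0.02 cm
```

More advanced treatments use the standard error of the mean instead of the half-range, but the half-range is the usual starting point in school-level work.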
Precision and accuracy are often mixed up. Think of them as two different aspects of a measurement:
| Aspect | What it Means | Analogy |
|---|---|---|
| Precision | How close repeated measurements are to each other. | All arrows hit the same spot on a target, but far from the bullseye. |
| Accuracy | How close a measurement is to the true value. | Arrows are centred on the bullseye on average, even though they are spread out. |
In practice, a good experiment aims for both: many measurements that are close together (high precision) and close to the true value (high accuracy).
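The two aspects can be put into numbers: the spread of repeated readings measures precision, and the offset of their mean from the true value measures accuracy. A short sketch with hypothetical data (the true value and readings are assumptions for illustration):

```python
# Sketch: quantifying precision (spread) vs accuracy (offset) for a
# hypothetical set of repeated measurements.
import statistics

true_value = 101.0                      # cm, assumed known for illustration
readings = [100.0, 100.1, 99.9, 100.0]  # tightly clustered but offset

spread = statistics.stdev(readings)                   # small spread -> high precision
offset = abs(statistics.mean(readings) - true_value)  # large offset -> low accuracy

print(f"precision (spread): {spread:.2f} cm, accuracy (offset): {offset:.2f} cm")
```

Here the readings agree to within about 0.1 cm (precise) yet sit a full 1 cm from the true value (inaccurate), matching the first archery analogy in the table.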
📌 Examination Tip: When asked to calculate uncertainty, always:
- State the uncertainty to one significant figure.
- Round the measured value to the same decimal place as the uncertainty.
- Include the units.
🎯 Quick Check: If your repeated measurements are all 100 cm but the true value is 101 cm, you have: high precision (the readings agree with each other) but low accuracy (they are offset from the true value).
📝 Final Exam Reminder:
- Always write the uncertainty in the same units as the measured quantity.
- Match significant figures: if the uncertainty is $0.03$, report the measurement as $x = 5.12 \pm 0.03$ (not $5.120 \pm 0.030$).
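The rounding rule above can be automated: find the decimal place of the uncertainty's first significant figure, then format both numbers to that place. A minimal sketch (the helper `round_measurement` is a hypothetical name, not a standard function):

```python
# Sketch: round an uncertainty to one significant figure and match the
# measurement's decimal places (hypothetical helper for illustration).
import math

def round_measurement(x, dx):
    # Decimal place of dx's first significant figure, e.g. 0.032 -> 2 places.
    decimals = -int(math.floor(math.log10(abs(dx))))
    decimals = max(decimals, 0)  # keep it simple for uncertainties >= 1
    dx_rounded = round(dx, decimals)
    return f"{x:.{decimals}f} ± {dx_rounded:.{decimals}f}"

print(round_measurement(5.1234, 0.032))  # → 5.12 ± 0.03
```

This reproduces the example above: an uncertainty of 0.032 rounds to 0.03, so the value is reported to two decimal places as well.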