
3.2 Market Research – Market Research Data

Objective

To understand how reliable market‑research data are produced, how reliability is assessed, and how reliability can be improved so that business decisions are based on trustworthy information.

Why Market Research Is Conducted (Cambridge Syllabus Requirement)

Businesses undertake market research to:

  • Identify the size and growth rate of a market.
  • Analyse competitors and their strategies.
  • Develop detailed customer profiles (demographics, psychographics, buying behaviour).
  • Discover customers’ wants, needs and preferences.

All of these insights depend on data that are reliable – i.e. consistent and stable.

Reliability – Definition and Relation to Validity

Reliability = the degree to which data give the same result when the same research is repeated under comparable conditions (consistency & stability).

Validity is different – it refers to whether the data actually measure what they are intended to measure. A data set can be highly reliable but not valid (e.g., consistently measuring the wrong variable).

Why Reliability Matters

  • Ensures decisions are based on trustworthy information.
  • Reduces the risk of costly mistakes caused by erroneous conclusions.
  • Builds credibility with stakeholders, investors and customers.
  • Facilitates comparison of results over time (trend analysis).
  • Provides a solid basis for strategic choices such as product launches, pricing and promotional plans.

Primary vs. Secondary Research (Syllabus Requirement)

  • Definition – Primary: data collected directly from original sources (e.g., surveys, interviews, observations). Secondary: data already published or collected for another purpose (e.g., government statistics, trade journals).
  • Cost and speed – Primary: usually higher cost and slower to collect. Secondary: generally cheaper and quicker to obtain.
  • Depth and specificity – Primary: highly specific to the researcher’s objectives. Secondary: may be broad or not perfectly aligned with the research question.
  • Reliability – Primary: often more reliable because the researcher controls the collection process. Secondary: depends on the original source’s methodology, currency and any bias.

Sampling (Syllabus Requirement)

Sampling is required because studying an entire population is rarely feasible. The reliability of research is strongly affected by:

  • Sample size – larger samples reduce random error (a quick sample‑size calculation is sketched after this list).
  • Representativeness – the sample must reflect the target market’s characteristics.
  • Sampling technique – probability methods minimise bias; non‑probability methods can undermine reliability.
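
As a rough illustration of how sample size relates to random error, the sketch below applies the standard formula for estimating a proportion, n = z²·p(1 − p)/e². The 95% confidence level, the conservative assumption p = 0.5 and the margins of error are illustrative choices, not figures from the syllabus.

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative assumption about the unknown population proportion.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# Halving the margin of error roughly quadruples the required sample.
for e in (0.10, 0.05, 0.025):
    print(f"margin of error {e:.1%}: n >= {required_sample_size(e)}")
# margin of error 10.0%: n >= 97
# margin of error 5.0%: n >= 385
# margin of error 2.5%: n >= 1537
```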

Sampling Techniques – Pros & Cons

  • Simple random – Pros: each member has an equal chance of selection; reduces sampling error. Cons: requires a complete sampling frame; can be time‑consuming.
  • Stratified – Pros: ensures key sub‑groups are represented; improves precision (see the sketch after this list). Cons: needs reliable information about the strata; more complex to organise.
  • Cluster – Pros: cost‑effective when the population is geographically spread. Cons: higher sampling error if clusters are heterogeneous.
  • Convenience / quota – Pros: fast and cheap. Cons: high risk of bias; results may not be generalisable.
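
To make the contrast between simple random and stratified sampling concrete, here is a minimal Python sketch. The customer records and age bands are invented for illustration; the code simply draws a proportional random sample from each stratum using the standard library.

```python
import random
from collections import defaultdict

# Hypothetical sampling frame: (customer_id, age_band) pairs.
frame = [(i, random.choice(["18-24", "25-39", "40-59", "60+"])) for i in range(1000)]

def stratified_sample(frame, sample_size):
    """Draw a sample whose age-band mix mirrors the sampling frame."""
    strata = defaultdict(list)
    for customer_id, age_band in frame:
        strata[age_band].append(customer_id)
    sample = []
    for members in strata.values():
        share = len(members) / len(frame)   # stratum's share of the frame
        take = round(sample_size * share)   # proportional allocation
        sample.extend(random.sample(members, take))
    return sample

respondents = stratified_sample(frame, sample_size=100)
print(f"{len(respondents)} respondents drawn from {len(frame)} customers")
```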

Limitations of Sampling

  • Sampling error – the difference between the sample result and the true population value.
  • Non‑response bias – when certain types of respondents are less likely to participate.
  • Coverage error – when the sampling frame does not include all elements of the target population.

Data Collection Methods (Syllabus Requirement)

  • Structured questionnaire (paper or online) – Typical use: large‑scale quantitative surveys. Strengths: standardised, easy to analyse, high response rates when well designed. Weaknesses: limited depth; may suffer from questionnaire fatigue.
  • Face‑to‑face or telephone interview – Typical use: detailed quantitative or mixed‑method studies. Strengths: higher completion rates; the interviewer can clarify questions. Weaknesses: costly; possible interviewer bias.
  • Focus group – Typical use: exploratory qualitative research. Strengths: rich, in‑depth insight; reveals group dynamics. Weaknesses: small sample; moderator influence; results not statistically generalisable.
  • Observation (in‑store, online tracking) – Typical use: behavioural data where respondents may not be aware they are being studied. Strengths: captures actual behaviour rather than reported behaviour. Weaknesses: interpretation can be subjective; privacy concerns.
  • Online surveys and polls – Typical use: quick feedback from large audiences. Strengths: fast, low cost, easy to reach geographically dispersed groups. Weaknesses: low response rates; self‑selection bias.
  • Secondary data review – Typical use: background information, market size, trends. Strengths: quick, inexpensive, often based on large data sets. Weaknesses: may be outdated, not specific to the research question, possible bias.

Reliability – Key Influencing Factors

  1. Source of data – primary data are generally more reliable than secondary data.
  2. Method of collection – standardised instruments (questionnaires, interview scripts) increase reliability.
  3. Sample size and representativeness – larger, well‑selected samples produce more stable results.
  4. Timing of data collection – consistent intervals avoid seasonal or situational distortion.
  5. Instrument design – clear wording, unambiguous scales (e.g., Likert 1‑5) improve consistency.
  6. Data recording procedures – double‑entry and automated checks reduce transcription errors (a small verification sketch follows this list).
  7. Inter‑rater consistency – when more than one person codes or observes, agreement must be measured.
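
Factor 6 (data recording) can be checked mechanically. The sketch below assumes two hypothetical data‑entry passes of the same questionnaires and reports every row where they disagree, which is the essence of double‑entry verification.

```python
# Hypothetical example: the same ten questionnaire scores keyed in twice.
first_pass  = [4, 5, 3, 2, 5, 4, 4, 3, 5, 2]
second_pass = [4, 5, 3, 2, 5, 4, 1, 3, 5, 2]

mismatches = [
    (row, a, b)
    for row, (a, b) in enumerate(zip(first_pass, second_pass), start=1)
    if a != b
]

if mismatches:
    for row, a, b in mismatches:
        print(f"Row {row}: first entry {a} vs second entry {b} - recheck the source form")
else:
    print("No discrepancies: the two entry passes agree.")
```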

Assessing Reliability

Both qualitative judgments and quantitative tests are used.

Qualitative Assessment

  • Review of research design and methodology.
  • Cross‑checking findings with other reputable sources.
  • Pilot testing of questionnaires to spot ambiguous wording.

Quantitative Tests (with Cambridge‑recommended thresholds)

  • Test‑retest reliability – same instrument administered to the same respondents at two points in time.
    Interpretation: Pearson r ≥ 0.8 = high stability; 0.6‑0.79 = moderate; <0.6 = low.
  • Inter‑rater reliability – consistency between two or more coders.
    Measured with Cohen’s κ (or simple % agreement).
    Interpretation: κ ≥ 0.75 = substantial agreement; 0.40‑0.74 = moderate; <0.40 = poor.
  • Internal consistency – how well items in a scale measure the same construct (Cronbach’s α).
    Interpretation: α ≥ 0.70 = acceptable; α ≥ 0.80 = good; α ≥ 0.90 = excellent (a computational sketch follows this list).
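
All three coefficients can be calculated with a few lines of Python. The sketch below implements the textbook formulas for Cronbach's α and Cohen's κ on small invented data sets (the Likert scores and coder labels are illustrative only); test‑retest r is covered by the Pearson example that follows.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha. item_scores holds one list of respondent scores per item:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_variance = sum(pvariance(scores) for scores in item_scores)
    return (k / (k - 1)) * (1 - item_variance / pvariance(totals))

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same open-ended responses."""
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    p_expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Invented data: three Likert items answered by five respondents, and two coders
# classifying eight comments as positive / negative / neutral.
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 2, 4, 3]]
coder_a = ["pos", "pos", "neg", "neu", "pos", "neg", "neu", "pos"]
coder_b = ["pos", "pos", "neg", "neu", "neg", "neg", "neu", "pos"]

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")           # ~0.89: above 0.80
print(f"Cohen's kappa    = {cohens_kappa(coder_a, coder_b):.2f}")  # ~0.81: above 0.75
```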

Example – Pearson Correlation for Test‑Retest

$$r = \frac{\sum (X_i - \bar X)(Y_i - \bar Y)}{\sqrt{\sum (X_i - \bar X)^2 \; \sum (Y_i - \bar Y)^2}}$$

where X and Y are the scores from the two administrations of the same instrument.
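
A minimal Python sketch of the same formula, applied to invented test‑retest scores for six respondents (the pairs are illustrative, not data from this note):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two administrations of the same instrument."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    covariance = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    spread_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
    spread_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return covariance / (spread_x * spread_y)

first_run  = [4, 6, 5, 7, 3, 6]
second_run = [5, 7, 5, 8, 4, 6]

print(f"test-retest r = {pearson_r(first_run, second_run):.2f}")  # ~0.94: high stability
```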

Interpreting Data Displays (Tables, Charts & Graphs)

  • Tables – useful for precise figures, comparison of many variables, and showing totals or percentages. Check that headings are clear and that data are ordered logically (e.g., descending sales).
  • Bar charts – ideal for comparing categories (e.g., market share of competitors). Look for consistent scales and labelled axes (see the plotting sketch after this list).
  • Line graphs – best for showing trends over time (e.g., monthly sales growth). Ensure intervals are equal and a legend is provided if multiple lines are plotted.
  • Pie charts – display parts of a whole (e.g., proportion of customers by age group). Use only when there are few categories and percentages add to 100 %.
  • When interpreting any display, ask:
    • What is the main pattern or trend?
    • Are there any outliers or unexpected values?
    • Is the source of the data reliable?
    • Does the visual representation accurately reflect the underlying numbers?
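
Some of these checks can be automated. The sketch below uses matplotlib (a tooling assumption, not something the note specifies) to draw a labelled bar chart of invented market‑share figures and to confirm that shares intended for a pie chart add to 100%.

```python
import matplotlib.pyplot as plt

# Invented market-share figures for four competitors (percent).
competitors = ["Brand A", "Brand B", "Brand C", "Others"]
share = [38, 27, 20, 15]

# A pie chart only makes sense if the parts add up to the whole.
assert abs(sum(share) - 100) < 1e-9, "shares must add to 100%"

fig, ax = plt.subplots()
ax.bar(competitors, share)
ax.set_xlabel("Competitor")             # labelled axes ...
ax.set_ylabel("Market share (%)")
ax.set_title("Market share by competitor (illustrative data)")
ax.set_ylim(0, 100)                     # ... and a consistent, non-truncated scale
fig.savefig("market_share.png")
```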

Improving the Reliability of Market‑Research Data

  • Standardise data‑collection procedures (same script, same environment, same time of day where possible).
  • Use validated measurement scales (e.g., established Likert statements).
  • Train interviewers and data‑entry staff thoroughly; provide a coding manual for open‑ended responses.
  • Conduct pilot studies and refine instruments before the full rollout.
  • Increase sample size and ensure it is representative of the target market.
  • Repeat the study at regular intervals to check for consistency (test‑retest).
  • Apply double‑entry verification and automated error‑checking in data processing.

Reliability Checklist – Quick Reference

  • Source – Check: primary or secondary? What is the reputation of the source? Improve: prefer primary data; if secondary, verify the methodology, currency and possible bias.
  • Method – Check: are the tools standardised (questionnaire, interview guide)? Improve: adopt structured instruments; pilot test for clarity.
  • Sample – Check: is the size adequate and representative of the target market? Improve: calculate the required sample size; use probability sampling where feasible.
  • Timing – Check: was data collected at comparable periods? Improve: schedule to avoid seasonal bias; consider test‑retest intervals.
  • Instrument design – Check: are the questions clear and the scales consistent? Improve: pre‑test, use established scales, avoid double‑barrelled questions.
  • Recording – Check: are data‑entry procedures error‑free? Improve: implement double‑entry verification; use software validation rules.
  • Inter‑rater – Check: do multiple coders produce similar results? Improve: provide detailed coding manuals; calculate Cohen’s κ and retrain if κ < 0.75.
  • Reliability tests – Check: have you calculated test‑retest, inter‑rater or internal‑consistency coefficients? Improve: apply Pearson r, Cohen’s κ and Cronbach’s α and compare the results with the Cambridge thresholds.

Practical Example (Business‑Decision Context)

Scenario: A clothing retailer plans to launch a new product line. An online survey of recent purchasers yields an average satisfaction score of 3.2/5 (SD = 1.4). The manager must decide whether to proceed.

Steps to assess and improve reliability before deciding:

  1. Review the questionnaire for ambiguous wording; rewrite any unclear items.
  2. Check that the sample includes customers from all major age groups, regions and income brackets (stratified sampling).
  3. Run a pilot with 30 respondents and calculate Cronbach’s α; aim for α ≥ 0.70.
  4. Conduct a test‑retest two weeks later; compute Pearson r. A value > 0.8 indicates good stability.
  5. If open‑ended comments are coded, have two analysts code them and calculate Cohen’s κ; κ ≥ 0.75 is desirable.
  6. Only if the reliability thresholds are met should the retailer base the rollout decision on the survey findings (a simple threshold check is sketched below).
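
As a final illustration, the decision rule in steps 3–6 can be written as a short check. The coefficient values below are hypothetical pilot results; the thresholds are the ones quoted earlier in this note.

```python
def reliability_ok(alpha, test_retest_r, kappa):
    """Return True only if all three coefficients meet the thresholds in this note."""
    checks = {
        "Cronbach's alpha >= 0.70": alpha >= 0.70,
        "test-retest r > 0.8": test_retest_r > 0.8,
        "Cohen's kappa >= 0.75": kappa >= 0.75,
    }
    for name, passed in checks.items():
        print(f"{name}: {'pass' if passed else 'fail'}")
    return all(checks.values())

# Hypothetical pilot results for the clothing retailer's survey.
if reliability_ok(alpha=0.78, test_retest_r=0.84, kappa=0.71):
    print("Thresholds met - the survey can inform the rollout decision.")
else:
    print("Improve the instrument or the coding manual before relying on these results.")
```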

Links to Other Syllabus Topics

  • Reliability of sales‑forecast data (Section 5.3 – Forecasting) – the same statistical tests are used to ensure financial projections are robust.
  • Product‑development research (Section 8.1 – Product Development) – reliable market data guide decisions on features, pricing and positioning.
  • Marketing‑mix decisions – reliable data on customer preferences influence product, price, place and promotion strategies.

Summary

  • Reliability = consistency & stability; it is essential for sound business decisions.
  • Key influences: source, method, sample size/representativeness, timing, instrument design, recording procedures, inter‑rater consistency.
  • Assess reliability qualitatively (design review, pilot) and quantitatively (test‑retest, inter‑rater, internal consistency) using Cambridge‑recommended thresholds.
  • Use the reliability checklist, improve research design, and interpret data displays correctly to produce trustworthy market‑research information.

Practice Questions

  1. Explain why a larger sample size generally improves the reliability of market‑research data.
  2. Describe how a pilot study can help identify reliability problems in a questionnaire.
  3. Calculate the test‑retest reliability coefficient for the paired scores (4,5), (6,7), (5,5), (7,8). Show your work using the Pearson formula provided.
  4. Discuss two ways in which the timing of data collection can affect reliability.
  5. Evaluate the reliability of the following secondary source: a 2018 industry report on UK fashion retail sales published by a trade association. Consider source reputation, data‑collection method, age of the data and any potential bias.

Suggested diagram: Flowchart showing the steps to assess and improve the reliability of market‑research data (design → pilot → reliability testing → final data collection → interpretation).
