Know and understand phishing, pharming, smishing, vishing including the methods that can be used to help prevent them

8 Safety and Security

Objective

Know and understand the social‑engineering threats phishing, pharming, smishing and vishing; be able to analyse their impact on data and apply appropriate technical, personal and organisational measures to prevent them.

1. Broader Context – Threats to Data

  • Physical safety (syllabus 8.1) – Workstations should be placed to avoid electrocution, fire, cable‑trip hazards and should follow ergonomic best practice (adjustable chair, monitor height, regular breaks).
  • e‑Safety and data protection (syllabus 8.2)

    • Personal data = information that can identify an individual (name, address, DOB, etc.).
    • Sensitive data = health, biometric, financial, or special‑category data.
    • Key legislation: GDPR (EU), Data Protection Act 2018 (UK), Computer Misuse Act 1990. These require organisations to keep personal data confidential, maintain its integrity, and report breaches.

  • Security of data – threat categories (syllabus 8.3)

    • Social‑engineering attacks – phishing, pharming, smishing, vishing.
    • Malware – viruses, worms, ransomware.
    • Unauthorised access – weak passwords, lack of multi‑factor authentication (MFA).
    • Physical loss or theft of devices – laptops, USB sticks, smartphones.

2. Key Terminology

Phishing – Deceptive electronic communication (usually email) that pretends to be from a trusted source to obtain personal or financial information.
Pharming – Manipulation of DNS or local host files so that a user is directed to a fraudulent website even when the correct URL is entered.
Smishing – Phishing carried out via SMS (text) messages.
Vishing – Phishing carried out over the telephone, often using Voice‑over‑IP (VoIP) or spoofed caller IDs.

3. How Each Attack Works

3.1 Phishing

  1. Attacker creates a convincing email that appears to come from a bank, retailer or colleague.
  2. Typical features:

    • Urgent language – e.g. “Your account will be closed within 24 h”.
    • A link that looks legitimate but actually points to a fake site (e.g. https://www.bank‑secure‑login.com).
    • An attachment containing malware (e.g. a Word document with a macro).

  3. The victim clicks the link, is taken to a counterfeit login page, and enters credentials.
  4. Credentials are captured and later used for fraud or identity theft.
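The indicator-spotting steps above can be sketched as a simple heuristic filter. This is an illustrative sketch only: the urgent-phrase list, the scoring weights and the trusted-domain set are assumptions for the example, not the rules a real email gateway uses.

```python
import re

# Assumed keyword list for the sketch; real filters use far richer signals.
URGENT_PHRASES = ["urgent", "within 24 hours", "account will be closed",
                  "verify now", "suspended"]

def phishing_score(sender: str, subject: str, body: str,
                   trusted_domains: set[str]) -> int:
    """Return a crude risk score: higher means more phishing-like."""
    score = 0
    # 1. Sender domain not on the trusted list (e.g. bank.com vs bank-secure.com)
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in trusted_domains:
        score += 2
    # 2. Urgent language in the subject or body
    text = (subject + " " + body).lower()
    score += sum(1 for phrase in URGENT_PHRASES if phrase in text)
    # 3. Plain-HTTP links (no TLS) in the body
    score += len(re.findall(r"http://", body))
    return score
```

A message from an untrusted domain, using urgent language and an http:// link, scores several points, while a routine statement notification from the genuine domain scores zero.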

Real‑world example: The May 2017 “Google Docs” phishing worm sent emails inviting recipients to open a shared document; accepting the prompt granted a malicious third‑party app access to the victim’s Gmail account and contacts, and the attack spread to roughly a million accounts before Google shut it down.

3.2 Pharming

Two main techniques:

  • DNS poisoning (cache poisoning) – false entries are injected into a DNS server’s cache, causing a legitimate domain (e.g. bank.com) to resolve to a malicious IP address.
  • Host‑file alteration – malware edits the local hosts file, mapping the target domain to a fraudulent IP.

How it is achieved:

  • Compromised router or ISP DNS server.
  • Malware that edits the hosts file.

Detection clues: Browser shows a certificate warning, the HTTPS lock icon is missing, or an nslookup returns an unexpected IP address.
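The host‑file alteration technique above can be audited with a short script. A minimal sketch, assuming a watch‑list of domains the organisation cares about; the domain list and sample data are illustrative:

```python
def suspicious_hosts_entries(hosts_text: str,
                             watched: set[str]) -> list[tuple[str, str]]:
    """Return (ip, hostname) pairs where a watched domain is pinned to an IP.

    Legitimate hosts files rarely contain banking domains, so any hit
    deserves investigation: it may redirect users to a pharming site.
    """
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in watched:
                hits.append((ip, name))
    return hits

# The hosts file lives at /etc/hosts on Linux/macOS and
# C:\Windows\System32\drivers\etc\hosts on Windows.
```

Running this over the real hosts file during a periodic audit would flag any injected mapping for a watched domain.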

3.3 Smishing

  1. Victim receives an SMS that appears to be from a trusted organisation (bank, delivery service, government agency).
  2. The message contains a short URL (e.g. bit.ly/2XyZ) or a phone number and an urgent request such as “Your parcel is being held – verify now”.
  3. Clicking the link opens a fake mobile site that asks for personal details; calling the number connects the victim to a scammer working from a social‑engineering script.

Safe handling tools: Use a URL‑expander (e.g. checkshorturl.com) before clicking, and verify the sender via the official app or website.
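The “do not click short URLs directly” advice can be partly automated by checking whether a link points at a known shortening service before opening it. The shortener list below is a small illustrative sample, not an exhaustive one:

```python
from urllib.parse import urlparse

# Small illustrative sample of shortener domains (not exhaustive).
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "ow.ly"}

def is_shortened(url: str) -> bool:
    """True if the URL's host is a known link-shortening service."""
    # Prepend "//" so urlparse treats scheme-less input (bit.ly/x) as a netloc.
    host = urlparse(url if "//" in url else "//" + url).netloc.lower()
    return host.removeprefix("www.") in KNOWN_SHORTENERS
```

A flagged link should then be expanded with a URL‑expander service, or verified through the organisation’s official app, before it is ever opened.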

3.4 Vishing

  1. Attacker places a call, often using caller‑ID spoofing (VoIP, SIP‑trunk manipulation) to display a trusted number such as a bank’s helpline.
  2. Scripted scenario – “We have detected suspicious activity on your account; please confirm your PIN and date of birth.”
  3. The victim provides the details, which are recorded for later fraud.

Pre‑call checks: Ask for a reference number, hang up and call the official number printed on a bank statement, and record details of the suspicious call for reporting.

4. Impact & Risk Assessment (AO3)

  • Confidentiality breach – loss of credentials, credit‑card numbers, personal identifiers.
  • Integrity loss – unauthorised changes to accounts or data.
  • Availability impact – accounts locked, ransomware payments.
  • Financial loss – unauthorised transactions, ransom payments.
  • Reputational damage – loss of trust in the organisation.
  • Legal consequences – GDPR/Data Protection Act fines, civil claims.

When analysing a scenario, students should consider:

  1. What data is at risk?
  2. How could the attack be detected early?
  3. What immediate actions should be taken (e.g., change passwords, report the incident)?
  4. Which preventive controls could have reduced the risk?

5. Comparison of Attack Types

  • Phishing

    • Medium used: email.
    • Typical target: individuals; staff in finance, HR or admin roles.
    • Common indicators: misspelled sender address, urgent language, mismatched URLs, macro‑enabled attachments.
    • Key preventive measures: spam filters, SPF/DKIM/DMARC, anti‑phishing gateway, MFA, URL checking, SSL/TLS verification.

  • Pharming

    • Medium used: DNS / hosts file.
    • Typical target: anyone using a compromised network or device.
    • Common indicators: certificate warnings, unexpected IP address, DNS lookup mismatch.
    • Key preventive measures: DNSSEC, secure DNS provider, regular hosts‑file audits, HTTPS verification, firewall rules, router firmware updates.

  • Smishing

    • Medium used: SMS (text message).
    • Typical target: mobile phone users of all ages.
    • Common indicators: short URL, unknown short code, urgent request, unknown sender number.
    • Key preventive measures: do not click short URLs; use a URL‑expander; verify via the official app; carrier‑level spam filtering; MDM policies.

  • Vishing

    • Medium used: telephone / VoIP.
    • Typical target: phone users, especially seniors and small‑business owners.
    • Common indicators: caller ID shows a trusted organisation but the voice is unfamiliar; requests for personal data.
    • Key preventive measures: never give data on unsolicited calls; verify by hanging up and calling the official number; call‑blocking apps; recording details of suspicious calls.

6. Prevention Strategies

6.1 Technical Controls (syllabus 8.3 – “Protection of data”)

  • Multi‑factor authentication (MFA) – adds a second factor (OTP, hardware token) to protect accounts.
  • Anti‑phishing email gateway – scans links, attachments and validates SPF/DKIM/DMARC records.
  • DNSSEC – digitally signs DNS records to prevent cache poisoning.
  • SSL/TLS (HTTPS) – encrypts web traffic; users should verify the lock icon and certificate.
  • Firewalls & intrusion‑prevention systems – block known malicious IPs and domains.
  • Regular patch management – keep OS, browsers, firmware and security software up to date.
  • Mobile Device Management (MDM) – enforces app controls, filters SMS spam and can remotely wipe lost devices.
  • Password policy & password managers – enforce strong, unique passwords and store them securely.

6.2 User Awareness & Safe Practices

  1. Check the sender’s address for misspellings or unexpected domains.
  2. Hover over links to view the real URL; verify the HTTPS lock and certificate before entering credentials.
  3. Never provide personal or financial details in response to an unsolicited email, SMS or call.
  4. Use a trusted URL‑expander for short links; avoid clicking them directly.
  5. For suspicious calls, note the time, number and content, then report them.
  6. Verify any request for data through an independent channel (log into the official website or call the official helpline).
  7. Be skeptical of urgent or threatening language that tries to create panic.
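Point 2 above (hover over links) amounts to checking that a link’s visible text matches where the link really goes. A minimal sketch for HTML email bodies, using only the standard library; the heuristic (visible text must mention the real host) is an illustrative simplification:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (real_host, display_text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            host = urlparse(self._href).netloc.lower()
            self.links.append((host, "".join(self._text).strip()))
            self._href = None

def mismatched_links(html: str) -> list[tuple[str, str]]:
    """Links whose visible text does not mention the href's real host."""
    auditor = LinkAuditor()
    auditor.feed(html)
    return [(host, text) for host, text in auditor.links
            if host and host not in text.lower()]
```

A link displayed as a bank URL but pointing at a different host is exactly the mismatch the hover check is meant to catch.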

6.3 Organisational Policies & Procedures

  • Conduct regular phishing‑simulation exercises and record results for improvement.
  • Maintain a clear incident‑reporting procedure – e.g., a dedicated “security mailbox” or ticketing system.
  • Provide mandatory e‑safety training covering email, SMS and telephone scams.
  • Enforce a documented password policy and promote the use of password‑manager software.
  • Perform periodic risk assessments (identify assets, threats, vulnerabilities, impact) and update controls accordingly.
  • Log suspicious calls, SMS and emails for analysis and to satisfy data‑protection compliance.
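The logging and reporting bullets above can be sketched as a tiny incident log. The field names are illustrative assumptions; a real organisation would use its ticketing system or security mailbox workflow:

```python
import csv
import io
from dataclasses import dataclass, asdict, fields
from datetime import datetime, timezone

@dataclass
class Incident:
    reported_at: str   # ISO 8601 timestamp
    channel: str       # "email", "sms", "phone" or "web"
    summary: str
    reporter: str

def log_incident(channel: str, summary: str, reporter: str,
                 log: list[Incident]) -> Incident:
    """Append an incident record; a written log supports breach reporting."""
    record = Incident(datetime.now(timezone.utc).isoformat(),
                      channel, summary, reporter)
    log.append(record)
    return record

def export_csv(log: list[Incident]) -> str:
    """Export the log as CSV for audits or data-protection requests."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(Incident)])
    writer.writeheader()
    for rec in log:
        writer.writerow(asdict(rec))
    return buf.getvalue()
```

Keeping even a simple structured log like this makes it possible to spot attack patterns and to evidence compliance if a breach must be reported.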

7. Audience & Communication (syllabus 9 & 10)

Effective communication of security advice depends on the audience:

  • Staff in finance or HR – need detailed guidance on recognising sophisticated phishing emails.
  • General employees – benefit from short, visual reminders (posters, pop‑up alerts) about urgent language and URL checks.
  • External customers – organisations should use clear, jargon‑free messages on their websites and in SMS alerts, respecting privacy and data‑protection rules.

Choosing the right channel (email, intranet, poster, face‑to‑face briefing) and tone (formal for policy documents, informal for quick reminders) improves uptake of safe practices.

8. Summary Checklist for Exams

  • Identify the medium used (email, DNS, SMS, telephone).
  • Spot typical deception cues: urgency, misspelled addresses, short URLs, spoofed caller ID.
  • Explain the possible impact on confidentiality, integrity and availability.
  • State at least two technical controls (e.g., MFA, DNSSEC) and two personal‑awareness actions (e.g., hover over links, verify caller) that would prevent the attack.
  • Describe the reporting process you would follow in an organisation (record, report to security mailbox, initiate incident response).

Suggested diagram: Flowchart showing the progression from a phishing email → credential theft → data breach, with parallel branches for smishing, vishing and pharming, and points where technical and human controls interrupt the flow.