Show understanding of Artificial Intelligence (AI)

Topic 7.1 – Ethics and Ownership

Objective

Show understanding of Artificial Intelligence (AI) and evaluate the ethical and ownership implications of computing systems, in line with the Cambridge AS & A‑Level Computer Science (9618) syllabus.

1. Why Ethics Is Needed in Computing

  • Professional responsibility – ensures that developers, users and managers act in ways that protect individuals, organisations and society.
  • Consequences of unethical practice:

    • Data breaches – loss of personal data can lead to identity theft and heavy fines (e.g., the 2017 Equifax breach).
    • Software plagiarism – copying code without permission breaches copyright and erodes trust.
    • Malicious misuse – bots, ransomware or unauthorised surveillance can cause widespread harm.

  • Ethics provides a decision‑making framework that balances technical possibilities with social values.

2. Professional Bodies and Codes of Conduct

Membership of recognised professional organisations reinforces ethical practice:

  • BCS (The Chartered Institute for IT) – publishes the Code of Conduct for BCS Members.
  • IEEE (Institute of Electrical and Electronics Engineers) – provides the IEEE Code of Ethics.
  • ACM (Association for Computing Machinery) – maintains the ACM Code of Ethics and Professional Conduct.

These codes stress honesty, respect for privacy, avoidance of harm and the importance of continual professional development.

3. Key Legislation Relevant to Ethics

3.1 Copyright

  • Author’s rights – the creator automatically owns the work and can control copying, distribution and adaptation.
  • Duration – generally life of the author + 70 years (varies by jurisdiction).
  • Fair‑use / fair‑dealing – limited exceptions for teaching, research, criticism or review.
  • Licence compatibility – when combining code under different licences, ensure the terms do not conflict (e.g., GPL‑licensed code cannot be combined with a licence that imposes additional restrictions).

3.2 Data Protection & Privacy (UK GDPR)

  • Lawful, fair and transparent processing.
  • Purpose limitation – use data only for the reason it was collected.
  • Data minimisation – collect the minimum necessary.
  • Right to be forgotten – individuals can request deletion of their data.
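The data-minimisation and pseudonymisation principles above can be sketched in a few lines. This is an illustrative sketch only: the field names (`patient_id`, `diagnosis`) are invented, and salted SHA-256 hashing is one common pseudonymisation technique, not a complete anonymisation scheme.

```python
import hashlib

def pseudonymise(record: dict, salt: str) -> dict:
    """Keep only the fields needed for analysis (data minimisation)
    and replace the direct identifier with a salted hash (pseudonymisation)."""
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    # Name and address are dropped entirely; age is coarsened to a 10-year band.
    return {"patient_token": token,
            "age_band": record["age"] // 10 * 10,
            "diagnosis": record["diagnosis"]}

record = {"patient_id": "P123", "name": "Alice", "address": "...",
          "age": 47, "diagnosis": "asthma"}
print(pseudonymise(record, salt="s3cret"))
```

Note that the salt must be kept secret; without it, common identifiers could be re-identified by hashing guesses.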

3.3 Encryption & Secure Communication

  • Symmetric encryption – same secret key for encryption and decryption (e.g., AES).
  • Asymmetric encryption – a public key encrypts, a private key decrypts (e.g., RSA, ECC). Enables digital signatures and key exchange.
  • SSL/TLS – protocols that use a combination of asymmetric key exchange and symmetric encryption to provide confidentiality, integrity and authentication for internet traffic.
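The defining property of symmetric encryption – that one shared key both encrypts and decrypts – can be shown with a toy XOR cipher. This is a teaching sketch only: XOR with a repeating key is trivially breakable, and real systems use AES as noted above.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Because XOR is self-inverse, the same key encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
ciphertext = xor_cipher(b"exam answers", key)
plaintext = xor_cipher(ciphertext, key)   # applying the same key again reverses it
print(plaintext)  # b'exam answers'
```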

4. Types of Software Licences & When to Use Them

Each licence family is summarised below by its key features and typical use‑case.

Free / Open‑Source (e.g., GPL, MIT, Apache)

  • Source code publicly available.
  • Permission to modify and redistribute.
  • Copyleft (GPL) forces derivative works to use the same licence; permissive licences (MIT, Apache) do not.
  • Check licence compatibility when mixing components.

  Typical use‑case: collaborative projects, academic research, or when community contributions are desired.

Commercial (proprietary)

  • Source code closed.
  • Use governed by a licence agreement (often per‑seat or subscription).
  • Restrictions on copying, modification and redistribution.

  Typical use‑case: enterprise solutions where support, warranties and IP protection are essential.

Shareware / Freemium

  • Basic functionality free; advanced features require payment.
  • Often time‑limited trial versions.

  Typical use‑case: marketing strategy to attract users before converting them to paying customers.

5. What Is Artificial Intelligence?

Artificial Intelligence (AI) is the branch of computer science that creates systems capable of performing tasks that normally require human intelligence.

5.1 Core AI Techniques

  • Machine learning (ML) – algorithms improve automatically from data (e.g., decision trees, support‑vector machines).
  • Deep learning – neural networks with many hidden layers; excels at image, speech and text pattern recognition.
  • Reinforcement learning – agents learn optimal actions through trial‑and‑error interaction with an environment (e.g., game playing, robotics).
  • Search and graph‑based AI – algorithms such as A*, Dijkstra and breadth‑first search use graph structures to find optimal paths; fundamental for routing, game AI and planning.
  • Expert systems – rule‑based programs that encode human expertise (e.g., medical‑diagnosis shells).
  • Natural language processing (NLP) – enables computers to understand, generate and translate human language.
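The graph‑based search technique above can be illustrated with breadth‑first search, which finds a shortest (fewest‑edges) path in an unweighted graph. This is a minimal sketch; the road map and place names are invented for illustration.

```python
from collections import deque

def bfs_shortest_path(graph: dict, start, goal):
    """Breadth-first search: explores the graph level by level, so the
    first path to reach `goal` uses the fewest possible edges."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # goal unreachable from start

# Hypothetical road map: each key lists the nodes reachable from it
roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_shortest_path(roads, "A", "E"))  # ['A', 'B', 'D', 'E']
```

A* and Dijkstra follow the same pattern but use a priority queue ordered by path cost (plus a heuristic, in A*'s case) rather than a plain FIFO queue.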

6. Ethical Issues in AI

  1. Bias and discrimination – training data reflecting societal prejudices can produce unfair outcomes (e.g., facial‑recognition misclassifying certain ethnic groups).
  2. Privacy – large datasets often contain personal information; informed consent and robust anonymisation are essential.
  3. Transparency & explainability – deep models can act as “black boxes”, making it difficult for users to understand decisions.
  4. Accountability – determining who is responsible when an AI system causes harm (developer, user, organisation).
  5. Job displacement – automation may replace roles, creating economic and social challenges.
  6. Autonomous weapons – AI‑driven lethal systems raise profound moral questions about removing human judgement.
  7. Environmental impact – training large models consumes significant electricity and can have a high carbon footprint; sustainable AI practices are increasingly important.

7. Ownership and Intellectual Property in AI

Aspect: Data Ownership
  Key considerations: Who owns the training data? Consent, licensing and anonymisation are critical.
  Typical legal position: Usually the data provider retains rights; users must obey data‑use agreements and privacy law.

Aspect: Model Ownership
  Key considerations: Who owns the trained model – the developer, the employing organisation, or the end‑user?
  Typical legal position: Governed by contracts; open‑source licences may apply if the model is released publicly.

Aspect: AI‑Generated Works
  Key considerations: Can a machine be an author? What copyright applies?
  Typical legal position: Most jurisdictions require a human author. The US Copyright Office has refused registration of works created solely by AI; a human must contribute sufficient creativity for protection.

Aspect: Patents for AI Inventions
  Key considerations: Is an invention created autonomously by AI patentable?
  Typical legal position: Current law generally requires a natural person as inventor; some jurisdictions are reviewing this requirement.

Aspect: Software Licences for AI Tools
  Key considerations: Choice between open‑source (e.g., Apache 2.0 for libraries) and commercial licences for AI frameworks.
  Typical legal position: Licence choice influences redistribution rights, liability clauses and commercial use.

8. Four‑Step Ethical Decision‑Making Model for AI

The Cambridge syllabus expects the “recognise – evaluate – decide – act” model. Each step below is mapped to AI‑specific checkpoints.

Recognise
  What to do (AI context): Identify the ethical question(s) raised by the AI system (e.g., bias, privacy, environmental cost).
  Checklist: Is the purpose lawful and socially beneficial? Are there potential harms?

Evaluate
  What to do (AI context): Gather relevant facts – data provenance, licence terms, model transparency, stakeholder impact.
  Checklist: Has data been obtained with informed consent and properly anonymised? Are the algorithms transparent enough for stakeholders?

Decide
  What to do (AI context): Choose a course of action that balances benefits against risks, considering professional codes and legal requirements.
  Checklist: What human‑oversight mechanisms (e.g., “human‑in‑the‑loop”) will be put in place? How will potential harms be mitigated?

Act
  What to do (AI context): Implement the decision, document the rationale, and monitor outcomes for future review.
  Checklist: Do the licensing terms for data, models and software comply with legal and ethical standards? Who will be held accountable if something goes wrong?

9. Example: AI in Healthcare

Consider an AI system that predicts disease risk from medical images.

Key Ethical Questions (applied to the four‑step model)

  1. Recognise – Is the system’s purpose (early disease detection) lawful and beneficial? Does it risk false positives that could cause anxiety?
  2. Evaluate

    • Data: Were patients informed that their images would be used for training? Are the images anonymised?
    • Transparency: Can clinicians see which image features contributed to the risk score?
    • Environmental impact: What energy was consumed to train the model?

  3. Decide – Implement “human‑in‑the‑loop” verification, set a threshold that balances sensitivity and specificity, and choose an open‑source framework with a licence compatible with the hospital’s policies.
  4. Act – Document consent procedures, retain audit logs of model predictions, provide training for staff on interpreting results, and establish a clear liability chain (software vendor, hospital, attending doctor).
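The sensitivity/specificity trade‑off mentioned in the Decide step can be made concrete with a short sketch. The risk scores, labels and threshold below are invented for illustration.

```python
def sensitivity_specificity(scores, labels, threshold):
    """Classify a case as positive when its risk score meets the threshold,
    then compare against the true labels (1 = diseased, 0 = healthy)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]   # model's predicted disease risk
labels = [1,   1,   1,    0,   0,   0]     # ground truth
sens, spec = sensitivity_specificity(scores, labels, threshold=0.5)
print(sens, spec)
```

Lowering the threshold catches more true cases (higher sensitivity) at the cost of more false alarms (lower specificity); the Decide step is precisely the choice of where on that curve the system should operate.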

Illustrative flowchart (suggested diagram):

Data collection → Consent & anonymisation → Model training (with environmental‑impact monitoring) → Decision output → Human‑in‑the‑loop review → Clinical action → Post‑deployment audit.

10. Summary

Artificial Intelligence offers powerful capabilities but also raises complex ethical and ownership challenges. A thorough grasp of professional codes, relevant legislation (copyright, data protection, encryption), software licensing, AI techniques (including graph‑based search and reinforcement learning), and the four‑step ethical decision‑making model equips students to evaluate both the technical merits and the broader societal impact of computing solutions.