Lecture 25: Ethics in Machine Learning

“With great power comes great responsibility.”

Learning goals:

  • How to measure fairness
  • How to mitigate biases

1. Framework of harms

We analyze different types of harms caused by algorithms:

  • Quality-of-service harms
  • Distributive harms
  • Existential harms

2. Detecting hidden biases

  • Algorithms are highly sensitive to the data they are trained on and may pick up patterns that humans cannot easily detect. See slide 40.
    • Make this a paragraph in the PWR paper.
      • Also include the Model Cards initiative (i.e., in the transparency section).
      • The section on Part 1: Framework of Harms can be used to show the effects biases have on society at large.
  • Case study: the St. George’s admissions algorithm, which inferred ethnicity from applicants’ names (a proxy-probe sketch follows this list).
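
A cheap way to surface such hidden proxies is to test whether the sensitive attribute can be predicted from the supposedly neutral features: if a simple probe beats the base rate by a wide margin, a model trained on those features effectively has access to the attribute even when it is dropped. Below is a minimal sketch with NumPy and scikit-learn on synthetic data; the feature names, effect sizes, and the logistic-regression probe are illustrative assumptions, not from the lecture.

```python
# Sketch: probe whether "neutral" features act as a proxy for a sensitive
# attribute. All data and names below are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Sensitive attribute (e.g., group membership) we intend to exclude.
group = rng.integers(0, 2, size=n)

# Features that look neutral but correlate with the group
# (cf. name-derived features in the St. George's case).
feat_a = group + rng.normal(0, 0.5, size=n)  # strong proxy
feat_b = rng.normal(0, 1, size=n)            # unrelated
X = np.column_stack([feat_a, feat_b])

X_tr, X_te, g_tr, g_te = train_test_split(X, group, test_size=0.3,
                                          random_state=0)

# If the probe beats the base rate by a wide margin, the features
# leak the sensitive attribute.
probe = LogisticRegression().fit(X_tr, g_tr)
print(f"probe accuracy: {probe.score(X_te, g_te):.2f}")
print(f"base rate:      {max(g_te.mean(), 1 - g_te.mean()):.2f}")
```

Here the probe should score well above the ~50% base rate, flagging feat_a as a hidden proxy.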

3. Algorithmic fairness

  • Adapting philosophical notions of fairness to AI systems (see the metrics sketch below).
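
One way those notions become operational is as statistical criteria over model outputs. The sketch below computes two common ones, the demographic-parity difference and the equal-opportunity gap (difference in true-positive rates); the arrays and function names are illustrative assumptions, not the lecture's own definitions.

```python
# Sketch: two common group-fairness metrics on illustrative arrays.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups."""
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return abs(tprs[0] - tprs[1])

# Illustrative labels/predictions; a predictor biased toward group 1.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.3 + 0.2 * group).astype(int)

print("demographic parity diff:", demographic_parity_diff(y_pred, group))
print("equal opportunity gap:  ", equal_opportunity_gap(y_true, y_pred, group))
```

Except in special cases (e.g., equal base rates across groups), such criteria generally cannot all be satisfied at once, which is part of what makes adapting the philosophical terms nontrivial.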

4. The blind spots

  • Today’s blind spots are the mistakes of the future.