COMPAS Recidivism and Algorithmic Fairness

In 2016, a ProPublica investigation into COMPAS, a criminal risk assessment tool, concluded that the tool was biased against Black defendants. Their analysis found that Black defendants who did not go on to reoffend were nearly twice as likely as White defendants to be misclassified as high risk.

The makers of COMPAS, Northpointe, countered with a different fairness criterion: the tool was equally accurate for White and Black defendants and was calibrated, meaning that defendants given the same risk score reoffended at the same rate regardless of race. By that standard, they argued, the tool could not be biased.

This sparked a heated debate in the algorithmic fairness community. A series of academic papers - notably by Kleinberg, Mullainathan, and Raghavan, and by Chouldechova - showed that the two notions of fairness at stake, equal false positive rates across groups and calibration, are mathematically incompatible whenever the base rates of the predicted outcome differ across groups.
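
To make the incompatibility concrete, here is a minimal numeric sketch (not from the book). It rests on an identity from Chouldechova's 2017 paper relating a group's false positive rate to its base rate, positive predictive value (PPV, closely related to calibration), and false negative rate (FNR); the specific numbers below are hypothetical, chosen only for illustration.

```python
# A minimal sketch of why equal predictive value and equal false
# positive rates cannot coexist when base rates differ. It uses
# Chouldechova's (2017) identity
#
#     FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
#
# where p is a group's base rate of reoffending, PPV is positive
# predictive value, and FNR is false negative rate.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by a group's base rate, PPV, and FNR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Hold PPV and FNR equal across groups (Northpointe's sense of fairness)
# and vary only the base rate: the false positive rates must diverge.
ppv, fnr = 0.7, 0.35  # hypothetical values, identical for both groups
for group, base_rate in [("group A", 0.51), ("group B", 0.39)]:
    print(f"{group}: base rate {base_rate:.2f} -> "
          f"FPR {implied_fpr(base_rate, ppv, fnr):.2f}")

# Output:
# group A: base rate 0.51 -> FPR 0.29
# group B: base rate 0.39 -> FPR 0.18
```

With identical PPV and FNR, the group with the higher base rate necessarily ends up with a higher false positive rate, which is exactly the pattern ProPublica reported.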

The COMPAS debate crystallized the realization that there are multiple plausible conceptions of algorithmic fairness and that they often cannot be satisfied simultaneously. It brought the issue into the public eye and helped catalyze the field of fairness in machine learning.

Section: 1, Chapter: 2

Book: The Alignment Problem

Author: Brian Christian
