COMPAS: A Machine Learning Algorithm beyond the code
This is a presentation of my research on the COMPAS algorithm, a tool for predicting recidivism that ProPublica, an independent news agency, exposed as biased against Black and Latinx people in 2016.
In this presentation I begin by describing who creates this technology and the business-to-business model attached to it. I dig into the assumptions the algorithm is based upon, then explain different avenues for examining its bias and a number of factors that contribute to its failure.
I include reflections on the impact of this tool on the crisis of mass incarceration, and highlight the need for an ecosystem that enables communities to deal with harm. I conclude with a commentary on the social effects of this technology, especially as it contributes to the construction of imaginaries about Latinx and Black people, often reinforcing stereotypes and directly damaging these communities.
The narrative attached to this presentation is located within the context of Design education. I invite students and facilitators to question the assumptions built into the technologies we develop and the larger socio-political effects of our practice.