Published on: Thursday, October 8, 2020

Humans may have originated racial bias, but we now have artificially intelligent tools that perpetuate it for us. These tools broaden its reach and obscure its effects (article available here).

Humans design AI tools, at least for now. Different designs involve more or less human selection of inputs (versus the tool itself finding common patterns in the data that suggest which inputs are appropriate). But humans can select or deselect inputs and adjust their weightings. There is no master "golden tablet" that establishes the right or correct set of inputs for any AI tool. Nor is there a master "golden tablet" of data, a set that is somehow the "right data" for teaching the tool what it needs to know to accomplish its task. Input selection, weightings, and data sets are all points at which human bias can enter an AI tool and affect how it works.
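To make that concrete, here is a minimal sketch in Python of a hand-built scoring tool. It is not any vendor's actual product; the feature names, weights, and records are hypothetical. The point is simply that a human decides which inputs appear in the weight table and how heavily each counts, and a different set of choices would produce a different score for the same person.

```python
# Hypothetical sketch of a designer-built risk score (not any real RNA tool).
# The designer chooses which inputs exist and how much each one counts.
FEATURE_WEIGHTS = {
    "prior_convictions": 2.0,     # included and weighted heavily by choice
    "age_at_first_arrest": -0.5,  # an earlier first arrest raises the relative score
    "employment_status": -1.0,    # being employed (1) lowers the score
    # A designer could just as easily add a proxy input here, e.g. a zip code.
}

def risk_score(record: dict) -> float:
    """Weighted sum over whichever inputs the designer selected."""
    score = 0.0
    for feature, weight in FEATURE_WEIGHTS.items():
        score += weight * record.get(feature, 0.0)
    return score

# Two hypothetical arrestees: same priors, different age at first arrest and employment.
print(risk_score({"prior_convictions": 1, "age_at_first_arrest": 17, "employment_status": 0}))
print(risk_score({"prior_convictions": 1, "age_at_first_arrest": 30, "employment_status": 1}))
```

Every number in that table is a human judgment call; there is no external standard that says these are the right inputs or the right weights.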

AI tools used to assess criminal risk and needs profiles for arrestees (often called "risk and needs assessment" or "RNA" tools) have come under particular scrutiny. A number of these tools are designed by private companies and licensed to state court systems across the country; other state court systems have designed their own. The federal courts use a custom-designed tool for convicted offenders, the Post-Conviction Risk Assessment ("PCRA").

Race effects can enter RNA tools through inputs and weightings that use race or proxies for race (residential zip code, schools attended, age at first arrest, and family history of arrests, among others), and through data sets drawn from communities in which race has arguably played a role in the frequency of arrest. Inputs, weightings, and data sets that one way or another position race as a predictor of criminal behavior produce racially skewed predictions. A sketch of the data-set mechanism follows below.
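The following simulation is hypothetical and not drawn from any real RNA tool or jurisdiction. It assumes two neighborhoods with the same underlying rate of reoffending but different levels of police attention; a tool trained on the recorded arrests then treats the zip-code input as predictive.

```python
# Hypothetical simulation of a proxy input plus skewed training data.
# Neighborhood labels, rates, and counts are invented for illustration only.
import random

random.seed(0)

def make_training_data(n=10_000):
    """Simulate records where zip A is policed more heavily, so reoffending
    there is recorded more often despite identical underlying behavior."""
    records = []
    for _ in range(n):
        zip_code = random.choice(["A", "B"])            # stands in for a proxy input
        underlying_reoffense = random.random() < 0.20   # same true rate in both zips
        detection_rate = 0.9 if zip_code == "A" else 0.4  # heavier policing in A
        recorded_rearrest = underlying_reoffense and random.random() < detection_rate
        records.append((zip_code, recorded_rearrest))
    return records

def learned_rearrest_rate(records, zip_code):
    """What a tool trained on recorded arrests 'learns' for each zip code."""
    matching = [rearrest for z, rearrest in records if z == zip_code]
    return sum(matching) / len(matching)

data = make_training_data()
print("Predicted risk, zip A:", round(learned_rearrest_rate(data, "A"), 3))
print("Predicted risk, zip B:", round(learned_rearrest_rate(data, "B"), 3))
# Identical underlying behavior, but the zip-code input inherits the skew in
# the recorded data, so residents of A are scored as higher risk.
```

Swap school attended or family arrest history in for the zip code in this sketch and the mechanism is the same: the input is "predictive" only because the data behind it already reflect race.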