The Right Way to Regulate Algorithms
Which public school will your child attend? How severe a sentence will you receive in the criminal justice system? Will you earn tenure as a teacher? In many cities, a new force is playing a critical role in answering these questions: algorithms.
Cities rely on algorithms to help make decisions that affect people’s lives in meaningful ways, from assessing teacher performance to predicting the likelihood of criminal reoffending. And yet the general public knows almost nothing about how these algorithms work.
Take a recent example from New York City, where the police department began using algorithms to help decide where to deploy officers. In 2015, the NYPD partnered with the software company Azavea to pilot HunchLab, a platform that weighs a variety of factors, from historical crime data to proximity to bars and bus stops, to predict where crime is most likely to happen.
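To make the mechanics concrete, here is a minimal sketch of how a feature-weighted risk model of this kind could work. The feature names, weights, and grid cells below are invented for illustration; HunchLab's actual model is proprietary, so none of these specifics come from it.

```python
# A toy risk-scoring model over city grid cells. Everything here is
# hypothetical; HunchLab's real features and coefficients are not public.

FEATURE_WEIGHTS = {
    "crimes_last_90_days": 0.6,    # hypothetical weight on recent crime history
    "bars_within_500m": 0.25,      # hypothetical proximity-to-bars signal
    "bus_stops_within_500m": 0.15, # hypothetical proximity-to-transit signal
}

def risk_score(cell):
    """Combine one grid cell's features into a single risk score."""
    return sum(weight * cell.get(name, 0) for name, weight in FEATURE_WEIGHTS.items())

# Rank cells so patrols go to the highest-scoring areas first.
cells = {
    "cell_14": {"crimes_last_90_days": 4, "bars_within_500m": 2, "bus_stops_within_500m": 1},
    "cell_27": {"crimes_last_90_days": 1, "bars_within_500m": 0, "bus_stops_within_500m": 3},
}
for name in sorted(cells, key=lambda n: risk_score(cells[n]), reverse=True):
    print(f"{name}: risk = {risk_score(cells[name]):.2f}")
```

The important point is not the arithmetic, which is simple, but the inputs: whatever data feeds those weights determines where officers are sent.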
The purpose of data-driven algorithms like this one is to make policing more objective and less subject to individual bias. But many worry that bias is simply baked into the algorithms themselves. Some opponents argue that HunchLab will disproportionately target areas with more people of color and low-income residents because the data behind it reinforce old stereotypes: records of past arrests, for example, might steer an algorithm toward low-income neighborhoods where officers were historically more likely to pick up black kids for possession. Others question whether the program works at all: a RAND Corporation study of a similar predictive policing program in Shreveport, Louisiana, found no measurable effect on crime.
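The feedback-loop objection can be illustrated with a hypothetical simulation, entirely invented for this article. In the sketch below, two neighborhoods have the same true crime rate, but one starts with a heavier arrest record; because patrols follow the records and patrols generate new records, the disparity never corrects itself, even though the ground truth is equal.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.1         # identical ground truth in both neighborhoods
arrests = {"A": 20, "B": 10}  # but "A" starts with a heavier arrest record

for year in range(5):
    total = sum(arrests.values())
    # Allocate 100 patrol shifts in proportion to recorded arrests...
    patrols = {n: int(100 * arrests[n] / total) for n in arrests}
    # ...and let each shift record a crime with the true probability, so
    # heavier patrolling mechanically produces more recorded crime.
    for n in arrests:
        arrests[n] += sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols[n]))
    print(f"year {year}: arrests = {arrests}, patrols = {patrols}")
```

The records never converge toward the equal underlying rates, because the system only measures crime where it already looks. That is exactly the pattern critics say arrest-trained models can lock in.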
Even if these algorithms do improve policing, mistrust will persist as long as public information is lacking. A recent lawsuit by NYU’s Brennan Center for Justice forced the NYPD to release parts of its correspondence with Azavea, but the public still knows little about how HunchLab works, whether it relies on tainted data, or whether it is effective at reducing crime. As it stands, residents, advocates, and researchers have little ability to evaluate these tools and determine whether they are accurate or fair. Even City Council members have struggled to understand how their own precincts make staffing decisions.