Abuse of artificial intelligence by the police in the US. More than bias

The 16th episode of the series Building sustainable cities - The contribution of digital technology reveals what can happen if the power of artificial intelligence is not used responsibly.

The fight against crime in the United States has been the scene of abuse of artificial intelligence for years. As will become apparent, this is not only the result of bias. In episode 11, I discussed why artificial intelligence is a fundamentally new way of using computers. Previously, computers were programmed to perform operations such as structuring data and making decisions; in the case of artificial intelligence, they are trained to do so. However, it is still people who design the instructions (algorithms) and who are responsible for the outcomes, even though the way in which the computer performs its calculations is increasingly becoming a 'black box'.
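
The distinction between programming and training can be made concrete with a small sketch. The example below, which assumes scikit-learn and uses invented toy data, contrasts a decision rule written by a person with one induced from labeled examples:

```python
# Toy contrast between classic programming and machine learning.
# The data and the threshold are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Classic approach: a human writes the decision rule explicitly.
def programmed_decision(income: float) -> bool:
    return income > 30_000  # the rule itself is designed by a person

# ML approach: the rule is induced ("trained") from labeled examples.
X = [[20_000], [25_000], [40_000], [50_000]]  # inputs (toy data)
y = [False, False, True, True]                # desired outcomes
model = DecisionTreeClassifier().fit(X, y)

print(programmed_decision(35_000))    # rule written by a human
print(model.predict([[35_000]])[0])   # rule learned from examples
```

In the second case, nobody wrote the threshold down; it emerged from the training data, which is exactly why the composition of that data matters so much in what follows.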

Applications of artificial intelligence in the police

Experienced detectives are traditionally trained to compare the 'modus operandi' of crimes to track down perpetrators. Due to the labor-intensive nature of doing this manually, the question soon arose as to whether computers could assist. A first attempt, made in 2012 in collaboration with the Massachusetts Institute of Technology, grouped past crimes into clusters that were likely to have been committed by the same perpetrator(s). When the algorithm was created, the intuition of experienced police officers was the starting point. Sometimes it proved possible to predict where and when a burglar might strike, leading to additional surveillance and an arrest.
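
The clustering idea can be sketched in a few lines. The following is a hedged illustration, not the MIT system itself: the features, their encoding, and the data are hypothetical, and real systems used far richer case attributes.

```python
# Group past crimes by similarity of their modus operandi; crimes that
# land in the same cluster may form one "series" with one perpetrator.
from sklearn.cluster import KMeans
import numpy as np

# Each row is one burglary: [hour_of_day, entry_method, weekday]
# entry_method (toy coding): 0 = window, 1 = door, 2 = lock picking
crimes = np.array([
    [2, 0, 5], [3, 0, 6], [2, 0, 5],      # night-time window entries
    [14, 2, 2], [15, 2, 3], [13, 2, 2],   # afternoon lock picking
])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(crimes)
print(labels)  # e.g. [0 0 0 1 1 1]: two candidate crime series
```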

These first attempts were soon refined and taken up by commercial companies. The two most widely used techniques that resulted are predictive policing (commercialized in products such as PredPol) and facial recognition.

In the case of predictive policing, patrols are directed to the neighborhood or even the street where they should be at a given moment, because the risk of crimes (vandalism, burglary, violence) has been calculated to be greatest there at that time. Anyone who behaves 'suspiciously' risks being arrested. Facial recognition also plays an important role in this.

Both predictive policing and facial recognition are based on a 'learning set' of tens of thousands of 'suspicious' individuals. At one point, the New York police had a database of 48,000 individuals: 66% of them were black, 31.7% Latino, and only 1% white. This composition has everything to do with the working method of the police. Although drug use in US cities is common in all neighborhoods, policing based on PredPol and similar systems is focused on a few neighborhoods (of color). It is then not surprising that most drug-related crimes are detected there, and, as a result, the composition of the database becomes even more skewed.
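
This feedback loop can be demonstrated with a small simulation. The sketch below rests on a deliberately simplified (and hypothetical) assumption: the true crime rate is identical in two neighborhoods, but patrols are allocated in proportion to past recorded crimes, so an initial skew in the database reinforces itself.

```python
# Simulate the predictive-policing feedback loop: crime is only
# recorded where police patrol, and patrols follow past records.
import random

random.seed(0)
recorded = {"A": 60, "B": 40}   # the database starts out skewed
TRUE_RATE = 0.1                 # identical true crime rate everywhere

for _ in range(1000):
    total = sum(recorded.values())
    # patrol the neighborhood with more recorded crime more often
    patrol = "A" if random.random() < recorded["A"] / total else "B"
    if random.random() < TRUE_RATE:   # a crime is only seen if patrolled
        recorded[patrol] += 1

print(recorded)  # the gap widens although both areas are identical
```

Even though both neighborhoods are identical by construction, the recorded gap keeps growing: the system 'confirms' its own starting bias.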

Overcoming bias

In these cases, 'bias' is the cause of the unethical effect of the application of artificial intelligence. Algorithms always reflect the assumptions, views, and values of their creators. They do not predict the future; they make sure that the past is reproduced. This also applies to applications outside the police force. The St. George's Hospital Medical School in London employed a disproportionate number of white males for at least a decade because its learning set reflected the incumbent staff. The much-criticized Dutch System Risk Indication (SyRI) likewise uses historical data about fines, debts, benefits, education, and integration to search more effectively for people who abuse benefits or allowances. This is not objectionable in itself, but it should never lead to 'automatic' incrimination without further investigation, nor to the exclusion of less obvious suspects.

The simple fact that the police are disproportionately present in alleged hotspots and are keen to act on any form of suspicious behavior means that the number of confrontations with violent outcomes has risen rapidly. In 2017 alone, police actions in the US resulted in an unprecedented 1,100 deaths, only a limited number of whom were white. In addition, the police have engaged in racial profiling for decades. Between 2004 and 2012, the New York Police Department stopped and checked more than 4.4 million residents. Most of these checks resulted in no further action. In about 83% of the cases, the person was black or Latino, although the two groups together make up just over half of the population. For many citizens of color in the US, the police do not represent 'the good' but have become part of a hostile state power.

In 2017, a municipal provision to regulate the use of artificial intelligence, the Public Oversight of Surveillance Technology (POST) Act, was proposed in New York. The Legal Defense and Educational Fund, a prominent US civil rights organization, urged the New York City Council to ban the use of data made available as a result of discriminatory or biased enforcement policies. This wish was granted in June 2019, and the number of persons included in the database was reduced from 42,000 to 18,000: all of them persons who had been included in the system without concrete suspicion.

San Francisco, Portland, and a range of other cities have gone a few steps further and banned the use of facial recognition technology by the police and other public authorities. Experts acknowledge that the artificial intelligence underlying facial recognition systems is still imprecise, especially when it comes to identifying the non-white population.

The societal roots of crime

Knowledge of how to reduce bias in algorithms has grown, but instead of solving the problem, this knowledge has exposed a much deeper one: the societal causes of crime itself, and the realization that the police can never remove them.

Crime and recidivism are associated with inequality, poverty, poor housing, unemployment, alcohol and drug use, and untreated mental illness. These are also dominant characteristics of high-crime neighborhoods, and they prevent many residents from leading a decent life. These conditions are stressors that also affect the quality of the parent-child relationship: attachment problems, insufficient parental supervision, tolerance of alcohol and drugs, lack of discipline, or an excess of authoritarian behavior. All in all, these conditions increase the likelihood that young people will become involved in crime, and they diminish the prospect of a successful career in school and elsewhere.

The ultimate measures to reduce crime in the longer term and to improve security are sufficient income, adequate housing, affordable childcare (especially for 'broken families' and unwed mothers), and ample opportunities for girls' education. Equally important is care for young people who come into contact with crime for the first time, to prevent them from reoffending.

Beyond bias

This will not solve the problems in the short term. A large proportion of those arrested by the police in the US are addicted to drugs or alcohol, are severely mentally ill, have serious problems in their home environment (if they have one), and have given up hope of a better future. Based on this understanding, the police in Johnson County, Kansas, have for years called in mental health professionals rather than immediately handcuffing those they arrest. This approach proved successful and caught the attention of the White House during the Obama administration. Lynn Overmann, then a senior advisor in the president's technology office, subsequently started the Data-Driven Justice Initiative. The immediate reason was that prisons appeared to be crowded with seriously ill psychiatric patients.

Coincidentally, Johnson County had an integrated data system that stores both crime and health data; in other cities, these are kept in incompatible data silos. Together with the University of Chicago's Data Science for Social Good program, artificial intelligence was used to analyze a database of 127,000 people. The aim was to find out, on the basis of historical data, which of those involved was most likely to be arrested within a month. The intention was not to hasten an arrest with predictive techniques, but to offer targeted medical assistance instead. This program was picked up in several cities, and in Miami it resulted in a 40% reduction in arrests and the closing of an entire prison.
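
The mechanics of such a program can be sketched as follows. This is a minimal illustration of the idea, not the Johnson County system: the features, data, and model choice are invented placeholders, and the real initiative linked actual crime and health records.

```python
# Score who is most likely to be arrested within a month, then rank
# by risk to prioritize OUTREACH by health workers, not arrests.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Columns: [prior_arrests, ER_visits_last_year, untreated_mh_flag]
X = np.array([[0, 0, 0], [1, 2, 1], [4, 5, 1], [2, 0, 0], [5, 6, 1]])
y = np.array([0, 0, 1, 0, 1])  # arrested within a month (historical)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Highest-risk people are contacted first and offered assistance.
for person in np.argsort(risk)[::-1]:
    print(f"person {person}: risk {risk[person]:.2f} -> offer help")
```

Note that the model is identical in kind to one a predictive-policing vendor might build; what differs is the intervention attached to the prediction.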

What does this example teach us? The rise of artificial intelligence caused Wired editor Chris Anderson to proclaim the end of theory. He couldn't be more wrong! Theory has never disappeared; at most, it has disappeared from the consciousness of those who work with artificial intelligence. In his book The end of policing, Alex Vitale concludes: 'Unless cities alter the police's core functions and values, use by police of even the most fair and accurate algorithms is likely to enhance discriminatory and unjust outcomes' (p. 28). Ben Green adds: 'The assumption is: we predicted crime here and you send in police. But what if you used data and sent in resources?' (The smart enough city, p. 78).

The point is to replace the dominant paradigm of identifying, prosecuting, and incarcerating criminals with the paradigm of finding potential offenders in a timely manner and giving them the help they need. This even turns out to be cheaper. The need for artificial intelligence is not diminishing, but the training of the computers, including the composition of the training sets, must change significantly. It is therefore recommended that diverse and independent teams design such training on the basis of a scientifically grounded view of the underlying problem, rather than leaving it to the police itself.
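
One concrete practice such an independent team could adopt is an audit of a model's error rates per population group before deployment. The sketch below uses invented toy labels and predictions; the group names and data are hypothetical.

```python
# Audit a model's false positive rate per group. A large gap between
# groups signals the kind of skewed training data discussed above
# and should block deployment pending further investigation.
import numpy as np

group  = np.array(["a", "a", "a", "b", "b", "b"])  # hypothetical groups
y_true = np.array([0, 0, 1, 0, 0, 1])              # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1])              # model decisions

for g in np.unique(group):
    m = group == g
    negatives = (y_true[m] == 0)
    fpr = (y_pred[m][negatives] == 1).mean()       # false positive rate
    print(f"group {g}: false positive rate {fpr:.2f}")
```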

This article is a condensed version of an earlier article, The Safe City (September 2019), which you can read by following the link below, supplemented with data from chapter 4, 'Machine learning's social and political foundations', of Ben Green's book The smart enough city (2020).

https://smartcityhub.com/technology-innnovation/safe-cities/
