UK POLICE EXPRESS CONCERN OVER RELYING ON AI TECHNOLOGY

 

BRITISH POLICE FEAR USING AI MAY LEAD TO INCREASED BIAS

By now, most developed countries have police forces that have dabbled a bit in AI technology.

The idea is that AI can help law enforcement officers streamline many tasks and work more efficiently.

Some major AI applications in law enforcement include drones for surveillance, bots that scan social media to identify at-risk people, and even interview bots that attempt to detect lies told by suspects. The technology that gets the most attention, however, is facial recognition.

Facial recognition technology has been criticized for years for its inability to accurately detect the faces of people of color. Earlier this year, the Algorithmic Justice League tested many facial recognition algorithms and found that they struggle in particular with darker-skinned women. The American Civil Liberties Union found similar results when it tested Amazon's facial recognition system against photos of members of Congress: the system incorrectly flagged members with darker skin at a higher rate.

[Image: Bias/Facial Recognition. Caption: Facial recognition technology is on the rise whether we like it or not.]

A UK GOVERNMENT STUDY

In light of the controversy surrounding AI in law enforcement, the Centre for Data Ethics and Innovation, a UK government advisory body, reports that police fear AI may amplify prejudices. For this study, the Royal United Services Institute interviewed 50 experts, including senior police officers with decades of service. Most of those interviewed were concerned that racial bias and human prejudice could find their way into algorithms trained on existing police data.

THE RESULTS

The findings of this study suggest that facial recognition technology, if rolled out today, would have a major societal impact. British authorities seem at least somewhat aware of the dire consequences of biased AI, and as awareness spreads we may see these issues solved in time. The major findings of the Centre for Data Ethics and Innovation report are below.

MULTIPLE TYPES OF BIAS CAN OCCUR

When we think about bias in AI, we tend to think first of race, but bias in these algorithms can extend well beyond it. It covers all legally protected characteristics, including gender and religious affiliation, and can also reach seemingly arbitrary attributes such as income level, location, and even social media activity. We know that social media giants regularly work with law enforcement, providing any and all data that makes investigations easier at the cost of our privacy. The sketch after this paragraph illustrates how such an attribute can smuggle bias into a model even when the protected characteristic itself is excluded from training.
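Here is a minimal sketch, using synthetic data and hypothetical feature names (postcode_risk, prior_arrests), of the proxy mechanism: the protected attribute is withheld from training, yet the model reconstructs the bias through a correlated stand-in. This is an illustration of the general problem, not of any real police system.

```python
# Minimal sketch: proxy bias on synthetic data (all names hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute; deliberately NOT given to the model.
group = rng.integers(0, 2, n)

# A proxy feature that correlates strongly with the protected attribute.
postcode_risk = group + rng.normal(0.0, 0.3, n)

# A legitimate-looking feature, independent of group.
prior_arrests = rng.poisson(1.0, n)

# Biased historical labels: group 1 was flagged more often at equal behavior.
flagged = (0.5 * prior_arrests + 1.5 * group + rng.normal(0.0, 1.0, n)) > 1.5

# Train only on the proxy and the legitimate feature.
X = np.column_stack([postcode_risk, prior_arrests])
model = LogisticRegression().fit(X, flagged)

# The proxy's coefficient dominates: the model rebuilds the bias it was fed.
print("coefficient on postcode proxy:", round(model.coef_[0][0], 2))
print("coefficient on prior arrests:", round(model.coef_[0][1], 2))
print("flag rate, group 0:", model.predict(X)[group == 0].mean())
print("flag rate, group 1:", model.predict(X)[group == 1].mean())
```

Even though the protected attribute never appears in the training features, the model's flag rates differ sharply between the two groups, which is exactly the concern the interviewed officers raised about training on existing police data.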

ACHIEVING ALGORITHMIC FAIRNESS IS DIFFICULT

Creating algorithms that make fair decisions is not just a matter of feeding them clean data; the issue extends far beyond that. Human analysts need a better idea of exactly how an algorithm arrives at its decisions, which speaks to the inherent black box problem discussed on this blog before. Even defining fairness is hard, since several competing mathematical definitions exist and they cannot all be satisfied at once; one of the simplest checks is sketched below.
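As one concrete illustration, here is a minimal sketch of demographic parity, one common fairness check: compare the rate of positive decisions across groups. The arrays are hypothetical toy data, not figures from the report.

```python
# Minimal sketch of a demographic parity check on hypothetical toy data.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # model's flag / no-flag
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

rate_g0 = decisions[group == 0].mean()  # flag rate for group 0
rate_g1 = decisions[group == 1].mean()  # flag rate for group 1

print(f"flag rate, group 0: {rate_g0:.2f}")
print(f"flag rate, group 1: {rate_g1:.2f}")
print(f"demographic parity gap: {abs(rate_g0 - rate_g1):.2f}")
```

A small gap here does not mean a system is fair overall; other definitions, such as equalized odds or calibration within groups, measure different things and can mathematically conflict with demographic parity.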

A LACK OF GUIDANCE

There remains a lack of rules and no clear process for the scrutiny, regulation, and enforcement of police use of data analytics. In other words, the technology is so new that law enforcement has no good set of guidelines for implementing these algorithms fairly. And it is very difficult to put such guidelines in place before the underlying bias problem has been solved.

THE PROBLEMS WITH AI IN LAW ENFORCEMENT AND THE SOLUTION

It is very difficult to assess what value algorithms would bring to the table if they are riddled with bias. It is important to note, however, that humans are and always have been biased, and that includes law enforcement officers; profiling suspects is a large part of their job. It only becomes a problem when people are singled out for no reason other than race or some other identifying characteristic. In other words, human officers, while biased, are still capable of looking past their biases and making objective decisions.

An algorithm is not. If it is trained on data poisoned with bias, its decision making will be inherently flawed, because algorithms lack the general intelligence, empathy, and other human capacities that allow bias to be set aside. The issue will extend far beyond race: algorithms will discriminate on all kinds of arbitrary criteria, such as income level and where someone lives.

After all, the databases law enforcement possesses hold more data points than you could possibly imagine, and police departments will stop at nothing to make their job easier. The only answer here is to slow the rollout of these algorithms: take a step back and mature the technology to avoid unintended consequences. This applies not just to AI in law enforcement but to other areas as well.
