Over the past couple of decades, AI has been used to identify everything from employee theft and insider trading to more useful things like the fastest route to work in the morning or a possible cure for a deadly disease. Large tech companies continue to experiment in this area, creating all kinds of technological double-edged swords. In light of all this progress, law enforcement agencies are following suit and attempting to use machine learning to build programs that they believe will make the world a better place.

We touched on this last week when covering the gait recognition technology being pushed by China. Other major countries, such as Great Britain, are now attempting to create programs that can identify criminals before a crime is committed, saving the costs associated with tracking someone down and jailing them. Technology like this is still in its infancy, but it's worth considering whether it will really make the world a better place or turn it into a dystopian nightmare.


Great Britain is proposing a system in which AI would use over 1,400 indicators to determine an individual's risk of committing a crime. These could include past offenses, whether the person associates with known criminals, and so on. The head of the project, Iain Donelly, says the plan will allow the force to do more with less, arguing that budget cuts have left officers unable to do their jobs without technological help. The system will be designed so that every force in the UK has access to it, with data pooled from all participating police agencies. Even people who have not yet committed a crime may be subject to a stop and search if the AI identifies them as a potential criminal.
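To make the idea of indicator-based risk scoring concrete, here is a minimal sketch of how such a system might combine indicators into a single score. The indicator names, weights, and threshold below are invented for illustration; nothing has been published about how NADS actually weighs its 1,400 indicators.

```python
# Hypothetical sketch of indicator-based risk scoring. The indicators,
# weights, and threshold are invented for illustration and are NOT the
# real NADS system, whose internals have not been made public.

def risk_score(indicators, weights):
    """Combine numeric indicators into a single weighted score."""
    return sum(weights.get(name, 0.0) * value
               for name, value in indicators.items())

# Invented weights: past offenses dominate the score.
weights = {
    "past_offenses": 0.6,
    "known_criminal_associates": 0.3,
    "prior_stops": 0.1,
}

# Invented profile of one individual.
person = {"past_offenses": 2, "known_criminal_associates": 1, "prior_stops": 3}

score = risk_score(person, weights)
flagged = score > 1.0  # arbitrary threshold for illustration
print(score, flagged)
```

Note how heavily the outcome depends on the chosen weights and threshold: the same person can be flagged or cleared purely by tuning numbers that the public never sees, which is part of why the ethical questions below matter.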

The project, known as the National Data Analytics Solution (NADS), will target individuals with already known criminal tendencies. It is currently in development and has until the end of 2019 to produce a prototype; at that point we will return to this issue to analyze the details of the algorithms being used. There are also serious ethical questions here, and it makes us wonder what kind of slippery slope we may have already started down. Using records of past offenses could reinforce existing biases, clouding the AI's ability to make accurate judgments, to say nothing of the biases that the programmers who create it may carry into the design. Bias in AI will remain a serious issue until we can create one with general intelligence, which is still at least decades away.

The criteria by which possible criminals would be identified seem deliberately worded to appear benign, as law enforcement has access to far more data on people than that. For example, it is not out of the realm of possibility that law enforcement could use a person's genetics to assess their risk of committing crimes in the future. It almost seems like we are heading down that kind of path regardless of the implications.


Britain’s AI will attempt to identify possible criminals.


Proponents of this technology all justify it with the same arguments: they run down a list of potential benefits while conveniently ignoring any drawbacks or possible human rights violations. For instance, police agencies in Great Britain point out that, due to budget cuts, they don't have the resources to fight crime without the help of technological tools. Chasing criminals after the act can get costly, which is where the idea of preventing crimes before they happen originated.

But in real life things are not so simple. One obvious risk is the problem of bias, which seemingly can't even be solved when dealing with humans, who possess general intelligence, let alone machines that do not. Another issue in Britain is that most arrests correlate with where police are deployed rather than with where crime actually occurs, which for obvious reasons disproportionately affects people of color.
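The deployment problem above is a feedback loop, and a toy simulation makes it easy to see. All the numbers below are invented: two districts have identical underlying crime, but one starts with more arrests on record, officers are sent where past arrests occurred, and new arrests happen where officers are.

```python
# Toy simulation of the arrest/deployment feedback loop. Districts,
# crime levels, and arrest counts are invented for illustration only.

true_crime = {"district_a": 10, "district_b": 10}  # identical underlying crime
arrests = {"district_a": 5, "district_b": 1}       # skewed initial policing record

for _ in range(5):
    total = sum(arrests.values())
    # Deploy officers in proportion to past arrests...
    deployment = {d: arrests[d] / total for d in arrests}
    # ...so new arrests scale with police presence, not with true crime.
    for d in arrests:
        arrests[d] += deployment[d] * true_crime[d]

print(arrests)
```

Even though both districts have exactly the same true crime rate, the 5-to-1 skew in the arrest record never corrects itself: each round of data "confirms" the original deployment decision. An AI trained on those arrest records would inherit the skew as if it were fact.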


People targeted by this AI will be stopped and searched in an attempt to mitigate crime, but once criminals become aware of tools like this, all they have to do is make sure nothing can be proven when they are stopped by law enforcement. This is trivially easy and will prove to be a glaring flaw in Great Britain's plan to stop crime before it occurs. For obvious reasons, identified people cannot be arrested, as no crime has yet been committed, so concealing the intent to commit one is a hole in the plan that cannot be closed. At least not without diving further down the rabbit hole of totalitarianism.

It almost seems like tools of this nature are deployed to sidestep the actual problem rather than solve it. If a country has a high crime rate and law enforcement is struggling to deal with it, then the problem has to be attacked at its source. Employing programmers to create powerful yet heavily biased AI to compensate for the shortcomings of police will only drive a further wedge between them and the general public.
