How Predictive Policing Technologies Can Work Against Justice

In the U.S., predictive policing technologies are becoming a favorite tool of police forces because of their ability to predict a potential crime. (Image: via pixabay)

With artificial intelligence technologies becoming pervasive in all aspects of society, it was inevitable that they would be used by law enforcement. In the U.S., predictive policing technologies are becoming a favorite tool of police forces because of their ability to forecast potential crimes. But such technologies bring with them a host of problems, including the risk of profiling.

Dangers of profiling

Predictive AIs run on past data. If historical data shows that people with certain characteristics or those belonging to specific communities are prone to violence, the AI will extrapolate that data into the future to predict the likelihood of similar people committing crimes.

So, if the data fed into the AI shows that black people are more likely to mug people on the streets of Manhattan, then the AI might start classifying regions of Manhattan populated by the black community as “risky.”


If data shows that whites in certain areas are involved in organized crime, the AI might classify the white population of those regions likewise. And since most reported cases of sexual violence involve a male perpetrator, it is inevitable that the AI would eventually group men as a “threat” to women.

In all such instances, the AI sorts people into “high risk” and “low risk” groups based on whether people similar to them have committed crimes. This violates the principles of equal treatment and the presumption of innocence.
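To make the mechanism concrete, here is a minimal Python sketch, with entirely invented numbers and a made-up threshold, of how a naive risk model that scores areas purely from historical arrest counts reproduces whatever bias those counts already contain. It is an illustration of the general idea, not a description of any real system.

```python
# Illustrative sketch only: fabricated numbers, not real crime data.
# A naive model that scores risk purely from historical arrest counts
# reproduces whatever bias those counts contain.

# Hypothetical arrest counts per neighborhood (heavily policed areas
# accumulate more records, regardless of the true underlying crime rate).
historical_arrests = {
    "neighborhood_a": 480,   # heavily patrolled in the past
    "neighborhood_b": 60,    # lightly patrolled in the past
}

population = {
    "neighborhood_a": 10_000,
    "neighborhood_b": 10_000,
}

def risk_score(area: str) -> float:
    """Arrests per 1,000 residents -- the only signal this toy model uses."""
    return historical_arrests[area] / population[area] * 1_000

RISK_THRESHOLD = 20.0  # arbitrary cut-off for a "high risk" label

for area in historical_arrests:
    label = "HIGH RISK" if risk_score(area) >= RISK_THRESHOLD else "low risk"
    print(f"{area}: score={risk_score(area):.1f} -> {label}")

# neighborhood_a is flagged "HIGH RISK" purely because more of its residents
# were arrested before -- the model never sees whether that history reflects
# actual offending or simply where officers were sent to look.
```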

“Using models of risk as a basis for police decision-making means that those already subject to police attention will become increasingly profiled. More data on their offending will be uncovered. The focus on them will be intensified, leading to more offending identified — and so the cycle continues. An unintended consequence of this is that those not subject to significant attention will be able to continue to offend with less hindrance,” according to Mike Rowe, Professor of Criminology at Northumbria University, Newcastle (The Conversation).
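The feedback loop Rowe describes can be shown with a toy simulation. In the sketch below, all figures are invented: two areas offend at exactly the same true rate, patrols are sent wherever recorded offenses are highest, and recorded offenses grow with patrol presence, so an initially small gap widens round after round.

```python
# Toy simulation of the feedback loop described above (all numbers invented).
# Both areas have the SAME true offence rate, but patrols go where the
# recorded count is highest, and recorded offences grow with patrol presence.

TRUE_OFFENCES_PER_ROUND = 100        # identical in both areas
DETECTION_RATE_PER_PATROL = 0.05     # each patrol unit uncovers 5% of offences

recorded = {"area_a": 120, "area_b": 100}   # area_a starts slightly higher
patrol_units = 10

for round_no in range(1, 6):
    # Send every patrol unit to whichever area looks "riskier" on paper.
    target = max(recorded, key=recorded.get)
    for area in recorded:
        patrols = patrol_units if area == target else 1   # token presence elsewhere
        detected = TRUE_OFFENCES_PER_ROUND * min(1.0, DETECTION_RATE_PER_PATROL * patrols)
        recorded[area] += detected
    print(f"round {round_no}: {recorded}")

# area_a's recorded total races ahead of area_b's even though both areas
# offend at exactly the same rate; area_b's offending goes largely unrecorded.
```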

Just a statistic

Police officers using predictive technologies are also at risk of viewing a “potential” criminal as just a statistic rather than a real human being.

For instance, suppose the AI indicates that a certain neighborhood sees three times as much crime on Wednesday nights as on other nights. Based on this analysis, the department might decide to double its patrols in the area. But the AI also signals that the risk of a police officer being shot in that neighborhood is four times greater than in other areas.

This would lead any officer posted to the area to be on high alert. And when officers come across a potential criminal, they will see that person only as someone posing “four times the risk” of death. This can make an officer pull the trigger and kill the person out of fear of being shot themselves. If it is later found that the person never had a gun and the officer was never at risk of being shot, the guilt arising from killing an unarmed person would take a massive toll on the officer’s psyche.
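For readers wondering where such ratios come from, here is a hypothetical back-of-the-envelope calculation, with all counts invented, showing how a system might derive the “three times the crime” and “four times the risk” figures used in the scenario above.

```python
# Back-of-the-envelope version of the figures in the scenario above
# (counts are invented; the point is only how the ratios are derived).

incidents_by_night = {"Mon": 4, "Tue": 5, "Wed": 15, "Thu": 5, "Fri": 6}
baseline = sum(n for day, n in incidents_by_night.items() if day != "Wed") / 4

print(f"Wednesday vs. average other night: {incidents_by_night['Wed'] / baseline:.1f}x")

# Officer-risk ratio: hypothetical shootings of officers per 1,000 patrol-hours
# in this neighborhood vs. the city-wide rate.
neighborhood_rate = 0.8 / 1_000
citywide_rate = 0.2 / 1_000
print(f"Relative officer risk: {neighborhood_rate / citywide_rate:.0f}x")
```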


“There’s a real danger, with any kind of data-driven policing, to forget that there are human beings on both sides of the equation… Officers need to be able to translate these ideas that suggest different neighborhoods have different threat scores. And, focusing on the numbers instead of the human being in front of you changes your relationship to them,” Andrew Ferguson, a professor of law at the University of the District of Columbia, told Smithsonian Magazine.

This is not to say that predictive technologies are inherently bad and that we must abandon their development. But until the issue of profiling is dealt with and officers are trained to apply their own judgment rather than blindly follow AI statistics, the technology may not be ready for large-scale deployment.
