AI is now being used to predict crimes before they happen

Moses Okoh
2 min read · Dec 12, 2023


The application of artificial intelligence to crime prediction draws parallels with the Sibyl System portrayed in the anime Psycho-Pass: a biomechatronic network that measures citizens' biometrics to calculate a "crime coefficient," the likelihood that an individual will commit a crime. In reality, AI has shown genuine success at crime prediction, with reported accuracy as high as 90%. However, concerns arise over biases embedded in the data used to train these systems.

While studies, like one from the University of Chicago, show AI algorithms predicting crimes a week in advance with high accuracy, the controversy arises from the risk of perpetuating existing biases. The European Union's AI Act reflects this concern, explicitly prohibiting predictive policing systems. The fear is that if biased police data is used to train AI, it may entrench inequalities by directing law enforcement disproportionately toward particular areas or communities.
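The models behind such studies are far more sophisticated, but the basic idea, and the bias feedback loop critics warn about, can be sketched with a toy frequency model. Everything here (area names, incident data, the `predict_next_week_risk` helper) is invented for illustration and is not the Chicago team's method:

```python
from collections import Counter

# Hypothetical historical incident log: (week, area) pairs.
# An area that is policed more heavily generates more records,
# so its apparent "risk" is inflated by patrol intensity alone.
history = [
    (1, "downtown"), (1, "downtown"), (1, "riverside"),
    (2, "downtown"), (2, "downtown"), (2, "downtown"),
    (3, "riverside"), (3, "downtown"),
]

def predict_next_week_risk(history):
    """Rank areas by their share of past recorded incidents,
    a naive stand-in for the statistical models real systems use."""
    counts = Counter(area for _, area in history)
    total = sum(counts.values())
    return {area: n / total for area, n in counts.items()}

risk = predict_next_week_risk(history)
# "downtown" dominates the record, so it dominates the forecast,
# whether crime or policing patterns produced that record.
print(sorted(risk.items(), key=lambda kv: -kv[1]))
```

If patrols are then sent where the score is highest, they record more incidents there, which raises the score further: the data-driven loop the EU's prohibition is aimed at.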

Noteworthy examples include the Richmond, Virginia police department, which used historical data to predict random gunfire on New Year's Eve and saw a significant decrease in incidents. In the UK, police forces employ AI from companies such as IBM, Microsoft, and Palantir to predict crime locations and to assess individuals' likelihood of offending.

However, the controversy surrounding AI in crime prediction stems from historical biases within policing datasets. If AI is trained on problematic data, it risks perpetuating and amplifying those biases, potentially leading to over-surveillance and over-targeting of specific communities. As Kate Crawford, co-founder and co-director of the AI Now Institute, has warned, the effectiveness of AI systems depends heavily on the quality and fairness of the data used to train them.
