Fighting Crime with Data: An Ethical Dilemma

Can you be guilty of a crime you’ve yet to commit? That’s the premise of Steven Spielberg’s 2002 sci-fi thriller ‘Minority Report’. But could it actually be closer to reality than you think?

As technology has advanced, law enforcement has had to adapt. With criminals utilising increasingly sophisticated methods to achieve their goals, our police forces have had to continuously evolve their approach in order to keep up. New digital advances have refined crime-solving techniques to the point where they can even predict the likelihood of a specific crime occurring. But with our personal data at stake, where do we draw the line between privacy and public safety?

Caught on Camera

The digital transformation has led to many breakthroughs over the past few decades, from the digitisation of fingerprint analysis through to the advanced Machine Learning models now used to tackle Fraud and analyse Credit Risk.

With an estimated one camera for every 14 people, CCTV coverage in the UK is particularly dense. And, with the introduction of AI technologies, its use in solving crimes is likely to increase even further.

IC Realtime’s Ella uses Computer Vision to analyse what is happening within a video. Able to recognise thousands of natural language queries, Ella lets users search footage for exactly what they’re after, from specific vehicles to clothes of a certain colour. With only the quality of CCTV cameras holding it back, we’re likely to see technology like this become mainstream in the near future.

Some more widespread technologies, however, are already playing their part in solving crimes. Detectives are currently seeking audio recordings from an Amazon Echo thought to have been active during an alleged murder. But, as with previous requests for encrypted phone data, debate continues around what duty tech companies have to their customers’ privacy.
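To make the Ella-style search concrete, here is a minimal sketch of natural-language footage search. It assumes object detections have already been extracted from the video by some off-the-shelf detector; the Detection structure, labels, and matching logic are invented for illustration and are not IC Realtime’s actual system.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    timestamp: float   # seconds into the footage
    labels: set[str]   # detector output, e.g. {"van", "white", "vehicle"}


# Hypothetical pre-extracted detections for a stretch of CCTV footage.
footage_index = [
    Detection(12.0, {"person", "red", "jacket"}),
    Detection(47.5, {"van", "white", "vehicle"}),
    Detection(63.2, {"person", "bicycle"}),
]


def search(query: str, index: list[Detection]) -> list[Detection]:
    """Return detections whose labels cover every term in the query."""
    terms = set(query.lower().split())
    return [d for d in index if terms <= d.labels]


for hit in search("white van", footage_index):
    print(f"Match at {hit.timestamp}s: {sorted(hit.labels)}")
```

A production system would pair this kind of index with far richer language understanding, but the core idea, turning raw footage into searchable structure, is the same.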

Hotspots and Hunches

Whilst Big Data has been used to help solve crime for a while, we’ve only seen it begin to play a preventive role over the past few years. By using Predictive Analytics tools such as HunchLab to counter crime, law enforcement services can:

- Direct resources to crime hotspots where they are most needed.
- Produce statistical evidence that can be shared with local and national-level politicians to help inform and shape policy.
- Make informed requests for additional funding where necessary.

Research has shown that, in the UK, these tools have been able to predict crime around ten times more accurately than the police. However, above and beyond the geographical and socioeconomic trends that drive these predictions, advances in AI have pushed things even further.

Often, after a mass shooting, it emerges that the perpetrators had spoken about their planned attack on social media. The social media landscape is far too vast for authorities to monitor everyone, and simply scanning for keywords can be misleading. IBM’s Watson, however, can understand the sentiment of a post. This leap forward could be the answer to the sincere and fair policing of social media that we’ve yet to see.
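To give a flavour of the hotspot mapping described above, here is a minimal sketch: bin geocoded incident reports into a coarse grid and rank cells by incident count. The coordinates below are invented, and tools such as HunchLab layer far richer temporal and socioeconomic modelling on top of this basic idea.

```python
from collections import Counter
from math import floor

# Hypothetical geocoded incident reports as (latitude, longitude) pairs.
incidents = [
    (51.5074, -0.1278), (51.5080, -0.1270), (51.5076, -0.1281),
    (51.5155, -0.0922), (51.5074, -0.1275),
]

CELL = 0.005  # grid cell size in degrees (roughly 500m of latitude)


def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to the grid cell containing it."""
    return (floor(lat / CELL), floor(lon / CELL))


counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# The highest-count cells are candidate patrol hotspots.
for cell, n in counts.most_common(2):
    print(f"Cell {cell}: {n} incidents")
```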
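On the social media side, the keyword problem can be illustrated just as simply. The toy classifier below is trained on a handful of invented posts and scores new ones by learned patterns rather than by matching bare words; it stands in for, but is vastly simpler than, the sentiment analysis Watson performs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training posts, for illustration only.
posts = [
    "had a great day at the park",
    "so proud of my team today",
    "i am going to make them all pay for this",
    "they will regret what they did to me",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = concerning

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post instead of matching keywords.
new_post = "they will pay for what they did"
print(model.predict_proba([new_post])[0][1])  # probability it is concerning
```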

Man vs Machine

Whilst our social media posts may be in the public domain, the question remains: how much of our data are we willing to share in the name of public safety?

There is no doubt that advances in technology have left us vulnerable to new types of crime, from major data breaches to new ways of cheating the taxman. So there is an argument that we need to surrender some privacy in order to protect ourselves as well as others. But who do we trust with that data?

Humans are all susceptible to bias, and AI inherits the biases of its creators. Take a program like Boulder, a Santa-esque prototype that analyses the behaviour of people in banks, determining who is ‘good’ and who is ‘bad’. Whilst it can learn signs of what to look for, it is also making decisions based on how it has been taught ‘bad’ people might look or act. As such, is it any more trustworthy than an experienced security guard?

And if we set human bias aside, do we trust emotionless machines to make truly informed decisions? A study that applied Machine Learning to bail decisions found that the technology’s recommendations would have resulted in 50% fewer reoffenders than the original judges’ decisions. However, whilst the evidence suggests that this may be the way forward, it is unlikely that society will accept such an important, life-changing decision being made by a machine alone.
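To sketch how such a study works in principle: train a risk model on historical outcomes, release only defendants below some risk threshold, and compare reoffending rates against the historical decisions. Everything below, the features, the data, and the threshold, is synthetic; the actual study’s model and data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic defendants: [age, prior_offences], plus whether they reoffended.
X = np.column_stack([rng.integers(18, 70, 500), rng.poisson(1.5, 500)])
y = (rng.random(500) < 0.1 + 0.15 * (X[:, 1] > 2)).astype(int)

model = LogisticRegression().fit(X, y)

# Release only defendants whose predicted reoffending risk is low.
# (A real evaluation would use held-out data, not the training set.)
risk = model.predict_proba(X)[:, 1]
released = risk < 0.2
print(f"Released {released.mean():.0%}, of whom {y[released].mean():.0%} reoffended")
```

The ethical weight, of course, sits not in whether such a model can be built, but in whether a threshold like this should decide a person’s liberty.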

There is no black and white when it comes to how we use data to prevent and solve crime. As a society, we are continuously pushing the boundaries and determining how much technology should impact the way we govern ourselves.

If you can balance ethics with the evolution of technology, we may have a role for you. Take a look at our latest roles or contact one of our expert consultants to find out how we can help you.