Fighting Crime with Data: An Ethical Dilemma

Henry Rodrigues, consultant
Posted: 11/15/2018, 9:27 AM
Can you be guilty of a crime you’ve yet to commit? That’s the premise of Steven Spielberg’s 2002 sci-fi thriller ‘Minority Report’. But could it actually be closer to reality than you think?

As technology has advanced, law enforcement has had to adapt. With criminals utilising increasingly sophisticated methods to achieve their goals, our police forces have had to continuously evolve their approach in order to keep up.  

New digital advances have refined crime-solving techniques to the point where they can even predict the likelihood of a specific crime occurring. But with our personal data at stake, where do we draw the line between privacy and public safety? 


Caught on Camera  


The digital transformation has led to many breakthroughs over the past few decades, from computerised fingerprint analysis through to the advanced Machine Learning models now used to tackle fraud and analyse credit risk.

With an estimated one camera for every 14 people in the UK, CCTV coverage is particularly dense. And, with the introduction of AI technologies, their use in solving crimes is likely to increase even further.

IC Realtime’s Ella uses Computer Vision to analyse what is happening within a video. With the ability to recognise thousands of natural language queries, Ella lets users search footage for exactly what they’re after, from specific vehicles to clothes of a certain colour. With only the quality of CCTV cameras holding it back, we’re likely to see technology like this become mainstream in the near future.

Some more widespread technologies, however, are already playing their part in solving crimes. Detectives are currently seeking audio recordings from an Amazon Echo thought to be active during an alleged murder. However, as with previous requests for encrypted phone data, debate continues around what duty tech companies have to their customers’ privacy.


Hotspots and Hunches


Whilst Big Data has been used to help solve crime for a while, we’ve only seen it begin to play a preventive role over the past few years. By using Predictive Analytics tools such as HunchLab to counter crime, law enforcement services can: 

  • Direct resources to crime hotspots where they are most needed. 
  • Produce statistical evidence that can be shared with local and national-level politicians to help inform and shape policy.  
  • Make informed requests for additional funding where necessary.  
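How tools like HunchLab actually score locations is proprietary, but the underlying idea of directing resources to hotspots, ranking geographic cells by historical incident density, can be sketched in a few lines of Python. Everything below (the grid size, the coordinates, the function names) is an illustrative assumption, not a description of any real product:

```python
from collections import Counter

def hotspot_scores(incidents, cell_size=0.01):
    """Score grid cells by historical incident counts.

    incidents: list of (latitude, longitude) tuples.
    Returns a Counter mapping grid-cell keys to incident counts,
    so the most common cells are candidate patrol hotspots.
    """
    counts = Counter()
    for lat, lon in incidents:
        # Snap each incident to a coarse grid cell.
        cell = (round(lat / cell_size), round(lon / cell_size))
        counts[cell] += 1
    return counts

# Toy incident log: three burglaries cluster near (51.50, -0.12),
# one outlier elsewhere.
incidents = [(51.501, -0.121), (51.502, -0.119), (51.500, -0.120),
             (51.470, -0.090)]
top_cell, n = hotspot_scores(incidents).most_common(1)[0]  # 3 incidents in one cell
```

Real predictive-policing models layer time of day, weather, and socioeconomic features on top of raw counts, but the output is the same in spirit: a ranked map of where limited resources are most needed.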

Research has shown that, in the UK, these tools have been able to predict crime around ten times more accurately than the police.  

However, above and beyond the geographical and socioeconomic trends that define these predictions, advances in AI have progressed things even further.  

Often, after a mass shooting, it is found that the perpetrators had spoken about their planned attack on social media. The social landscape is far too big for authorities to monitor everyone, and scanning for keywords alone can be misleading. IBM’s Watson, however, can understand the sentiment of a post. This huge leap forward could be the answer to the sincere, and fair, policing of social media that we’ve yet to see.


Man vs Machine 


Whilst our social media posts may be in the public domain, the question remains: how much of our data are we willing to share in the name of public safety?

There is no doubt that advances in technology have left us vulnerable to new types of crime, from major data breaches to new ways of cheating the taxman. So there is an argument that we need to surrender some privacy in order to protect ourselves as well as others. But who do we trust with that data?

Humans are all susceptible to bias, and AI inherits the biases of its creators. Take a program like Boulder, a Santa-esque prototype that analyses the behaviour of people in banks, determining who is ‘good’ and who is ‘bad’. Whilst it can learn signs of what to look for, it’s also making decisions based on how it’s been taught ‘bad’ people might look or act. As such, is it any more trustworthy than an experienced security guard?

If we ignore human bias, do we trust emotionless machines to make truly informed decisions? A study that applied Machine Learning to bail decisions found that the technology’s recommendations would have resulted in 50% fewer reoffenders than the original judges’ decisions. However, whilst the evidence suggests that this may be the way forward, it is unlikely that society will accept such an important, life-changing decision being made by a machine alone.

There is no black and white when it comes to how we use data to prevent and solve crime. As a society, we are continuously pushing the boundaries and determining how much technology should impact the way we govern ourselves. If you can balance ethics with the evolution of technology, we may have a role for you.  

Take a look at our latest roles or contact one of our expert consultants to find out how we can help you. 

Related blog & news

With over 10 years’ experience working solely in the Data & Analytics sector, our consultants are able to offer detailed insights into the industry.

Visit our Blogs & News portal or check out the related posts below.

How Can Your Career In Big Data Help You To Accelerate Change?

Data & Analytics is fast becoming a core business function across a range of different industries. 2.5 quintillion bytes of data are produced by humans every day, and it has been predicted that 463 exabytes of data will be generated each day by humans as of 2025. That’s quite a lot of data for organisations to break down.

Within Gartner’s top 10 Data & Analytics trends for 2021, there is a specific focus on using data to drive change. In fact, business leaders are beginning to understand the importance of using data and analytics to accelerate digital business initiatives. Instead of being a secondary focus, completed by a separate team, Data & Analytics is shifting to a core function. Yet, due to the complexity of data sets, business leaders could end up missing opportunities to benefit from the wealth of information they have at their fingertips. The opportunity to make such an impact across the discipline is increasingly appealing for Data Engineers and Architects. Here is just a selection of the benefits that your role in accelerating organisational change could create.

Noting the impact

In a business world that has (particularly in recent times) experienced continued disruption, creating an impact in your industry has never been more important. Leaders of organisations of all sizes are looking to data specialists to help them make that long-lasting impression. What is significant here is that organisations need to build up and make use of their teams to better position them to gather, collate, present and share information, and it needs to be achieved seamlessly too. Business leaders, therefore, need to express the specific aim and objective they are using data for within the organisation and how it relates to the broader overarching business plans.
Building resilience

Key learnings from the past year have taught senior leaders around the globe that being prepared for any potential future disruption is a critical part of an organisation’s strategic plans. Data Engineers play a core role here. Using data to build resilience, instead of just reducing resistance or limiting the challenges it presents, will ensure organisations are well placed to move into a post-pandemic world that makes use of the abundance of data available to them. Pulling apart and understanding the large-scale, complex data sets of Big Data will offer a new angle with which to inform resilience-building processes.

Alignment matters

An organisation’s ability to collect, organise, analyse and react to data will be the thing that sets it apart from its competitors, in what we expect to become an increasingly competitive market. Business leaders must ensure that their teams are part of the data-driven culture and mindset that the organisation adopts. As this data is used to inform how an organisation interacts with its consumers, operates its processes or reaches new markets, it is incredibly important to ensure that your Data Engineers (and citizen developers) are equipped and aligned with the organisation’s vision.

Change is a continuous process, particularly for the business community. Yet some changes are unpredictable and disruptive, and mean that many pre-prepared plans may face a quick exit from discussions. Data professionals have an opportunity to drive the need for change, brought about by the impacts of the pandemic, in a positive and forward-thinking way. In understanding impact, resilience and alignment, this can truly be achieved. Data is an incredibly important tool, so using it in the right way is absolutely critical. If you’re in the world of Data & Analytics and looking to take a step up or find the next member of your team, we can help.
Take a look at our latest opportunities or get in touch with one of our expert consultants to find out more.

Mitigating Risk In The Financial Services Sector With Machine Learning

Data & Analytics is an industry that is constantly evolving, always using the latest technology to innovate its services and capabilities. More recently, these advancements have moved into areas such as Artificial Intelligence (AI) and Machine Learning (ML). Machine Learning is a method of data analysis, under the branch of Artificial Intelligence, that allows systems to learn from data, identify patterns and ultimately make decisions with little to no human intervention. Used across a vast range of sectors, this arm of Data & Analytics has become widely popular, especially within highly advanced industries such as finance.

Since the 2008 financial crash, at the top of the agenda for many Financial Institutions (FIs) was, and still is, the need to protect the business, increase profitability and, possibly most importantly, address the abundant inadequacies of risk management. This includes risks posed by consumers, such as liquidity, insolvency, model and sovereign risk, as well as any internal process and operational risk the FIs may be facing through failures or glitches.

Machine Learning has played a crucial role in improving the quality and precision of FIs’ risk management abilities. HPC Wire reports that the use of AI and ML within the financial sphere to mitigate risk, improve insights and develop new offerings may generate more than $250 billion for the banking industry.

How does ML work?

By using incredibly large data sets, drawn (with consent) from consumers, ML can learn, and predict, patterns in consumer behaviour. This can be done in one of two ways: through supervised learning tools or unsupervised learning tools. As explained by Aziz and Dowling: “In supervised learning you have input data that you wish to test to determine an output. In unsupervised learning, you only have input data and wish to learn more about the structure of the data.”

How do banks use ML to mitigate risk?

In FIs, a mix of the two ML approaches is used.
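The supervised/unsupervised distinction that Aziz and Dowling describe can be sketched with two deliberately tiny models. The data, function names and numbers are toy assumptions, not what any bank deploys:

```python
# Supervised: labelled inputs train a model that predicts an output.
def nearest_centroid_fit(samples, labels):
    """Learn one centroid (average value) per label from labelled data."""
    centroids = {}
    for label in set(labels):
        pts = [x for x, y in zip(samples, labels) if y == label]
        centroids[label] = sum(pts) / len(pts)
    return centroids

def nearest_centroid_predict(centroids, x):
    """Classify a new input by its closest learned centroid."""
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

# Unsupervised: unlabelled inputs, and we learn the data's structure.
def kmeans_1d(samples, k=2, iters=10):
    """Cluster one-dimensional values into k groups (plain k-means)."""
    centres = samples[:k]
    for _ in range(iters):
        groups = {i: [] for i in range(k)}
        for x in samples:
            nearest = min(range(k), key=lambda i: abs(x - centres[i]))
            groups[nearest].append(x)
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in groups.items()]
    return sorted(centres)

# Supervised: monthly spend figures labelled 'low' or 'high' risk.
model = nearest_centroid_fit([200, 250, 900, 950],
                             ["low", "low", "high", "high"])
nearest_centroid_predict(model, 300)  # -> 'low'

# Unsupervised: the same figures with no labels; the structure
# (two spending groups) is discovered rather than taught.
kmeans_1d([200, 250, 900, 950])  # -> [225.0, 925.0]
```

The same four numbers appear in both halves; only the presence or absence of labels changes which question the model can answer, which is exactly the distinction quoted above.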
Most commonly, we can expect to see learning systems such as data mining, neural networks and business rules management systems in play across many banks. These models work in tandem to identify relationships within the data the FIs hold on their consumers, from their profiles to their spending habits, credit card applications to recorded phone calls, which are then built into ‘character profiles’ of each individual customer. The process of spotting potential risk factors, which may include debt, fraud and/or money laundering, can then begin. Here we break down two key examples.

Fraud

Thanks to ML, customers have become accustomed to incredibly quick and effective notification of fraudulent activity from their banks. This ability comes from large historical datasets of credit card transactions and machines that have been algorithmically trained to understand and spot problematic activity. As stated by Bart van Liebergen: “The historical transaction datasets showcase a wide variety of pre-determined features of fraud, which distinguish normal card usage from fraudulent card usage, ranging from features from transactions, the card holder, or from transaction history.”

For example, if your ‘character profile’ shows you typically spend between £500 and £1,000 per month on your credit card, and this range is suddenly far exceeded, fraudulent-activity flags are raised and your account can be frozen in real time.

Credit applications

When borrowing from a bank or any other FI, consumers must undergo a credit risk assessment to show that they have a record of paying back debt on time, and will therefore not add greater cost, and risk, to the lender. Traditionally, FIs have approached credit risk with linear, logit and probit regressions, but serious flaws were found in these methods, with many applications being left incomplete. In this space, the evidence for the effectiveness of ML is overwhelming.
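The fraud example above, a customer who normally spends £500 to £1,000 a month suddenly far exceeding that range, can be sketched as a single-feature statistical check. Real fraud systems learn many more features than one monthly total; the history figures and threshold here are illustrative assumptions:

```python
import statistics

def is_anomalous(history, new_amount, k=3):
    """Flag a monthly total far outside the customer's usual profile.

    history: past monthly credit-card totals in pounds.
    Flags anything more than k standard deviations from the mean;
    a toy stand-in for the multi-feature models banks actually train.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_amount - mean) > k * stdev

history = [520, 610, 880, 750, 940, 660]  # usual £500-£1,000 range
is_anomalous(history, 700)   # False: within the learned profile
is_anomalous(history, 4500)  # True: raise a flag in real time
```

Production systems replace this one threshold with trained models over transaction features, cardholder attributes and history, as the van Liebergen quote describes, but the real-time freeze decision rests on the same kind of profile-versus-new-activity comparison.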
Khandani et al. found that FIs using ML to analyse and review credit risk can achieve a 25 per cent cost saving for the FI involved. These ML models come in various shapes and sizes, the most common being instantaneous apps or websites that give users and their banks access to real-time scoring, data visualisation tools and business intelligence tools.

The risk of risk management with ML

As with any AI or ML application or tool, there will always be cause for concern and a real need to remain vigilant. While ML has shown itself to be an invaluable tool across lower-risk areas, the complexity of more statistical areas of banking, such as loans, has proven to be an Achilles heel for the technology. This usually stems from bias, a perpetual problem for AI and ML across all industries. Technology Review notes that “There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices.”

Data, analytics, AI and ML are notoriously non-diverse sectors to work in. The person behind the screen creating learning algorithms tends to be white and male, and very unrepresentative of the society the Machine Learning tool will serve. Over the years, we have heard numerous accounts of unfair and unjust machines that learned from narrow, unrepresentative datasets, a problem that stems from the lack of diversity amongst employees within the Data & Analytics industry. Microsoft’s racist bot and Amazon’s sexist recruitment tool are both clear examples that ML and AI are not ready to be used on their own, and humans still need to play an integral part in decision making. Banks and FIs must be aware of the potentially lethal consequences that bias in ML may present.
Lenders must be careful to ensure they are working within the guidelines of fair lending laws and that no one group of people is being penalised for no reason other than flaws within the technology and its algorithms. It is vital that the humans behind the technology don’t rely on ML to provide them with an answer 100 per cent of the time but instead use it to aid their decision making when it comes to risk mitigation.

If you’re looking for a role in Data & Analytics or are interested in finance or Risk Analytics, we may have a role for you. Take a look at our latest opportunities or get in touch with one of our expert consultants to learn more.
