Will Artificial Intelligence Revolutionise Eye Healthcare?

Posting date: 9/26/2018 1:32 PM
Faced with a rapidly expanding and ageing population, healthcare resources in both the UK and the US are facing an unprecedented level of demand. With only limited resources available, conversation is beginning to turn to the potential of Artificial Intelligence (AI) to ease some of the strain.

A recent example already seeing success is the collaboration between Google’s DeepMind and London’s Moorfields Eye Hospital. But, as the lines begin to blur between human and machine diagnosis, it’s worth questioning what role AI should actually play.

SEEING THE POTENTIAL IN AI


Aside from population growth, there are many societal factors putting pressure on the healthcare system. A rise in illnesses such as diabetes has led to an increase in eye disease and greater demand on optometrists.

Fortunately, AI can speed up the process, with new technologies allowing systems like DeepMind to form their own diagnoses.

Optical Coherence Tomography (OCT) allows optometrists to create 3D scans of people’s eyes. By bouncing near-infrared light off the interior surfaces of the eye, it can create an image that reveals any abnormalities. DeepMind’s system has been trained on over 15,000 of these scans and can now form a likely diagnosis, using algorithms to find common patterns within the data.
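The idea of learning “common patterns” from labelled scans can be illustrated with a toy sketch. DeepMind’s real system uses deep neural networks trained on OCT images; here, purely for illustration, each scan is reduced to a hypothetical two-number feature vector, and classification is done by distance to each condition’s average pattern (a nearest-centroid classifier). All data and labels below are invented.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors:
    the 'common pattern' for one labelled group of scans."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_scans):
    """labelled_scans: dict mapping diagnosis -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in labelled_scans.items()}

def diagnose(model, scan):
    """Return the diagnosis whose learned pattern lies closest to the scan."""
    return min(model, key=lambda label: math.dist(model[label], scan))

# Hypothetical training data: two conditions with distinct feature patterns.
training = {
    "healthy": [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]],
    "macular degeneration": [[0.9, 0.8], [0.8, 0.9], [0.85, 0.85]],
}
model = train(training)
print(diagnose(model, [0.88, 0.82]))  # close to the degeneration pattern
```

The real models learn far richer representations directly from pixels, but the principle is the same: a new scan is matched against patterns extracted from thousands of labelled examples.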

DeepMind co-founder Mustafa Suleyman says:

“[This could] transform the diagnosis, treatment, and management of patients with sight-threatening eye conditions [...] around the world.”

However, with an accuracy of just over 94%, there is still enough room for error to cause concern, especially given the potential consequences of an incorrect diagnosis. 

LOOKING FOR MISTAKES 


This doesn’t mean we should rule out the use of AI altogether. Whilst we may not be able to rely solely on the technology for diagnosis, it can be effective when working hand-in-hand with human expertise.

In particular, by using AI systems for triage (determining the order in which patients should be seen), rather than for full diagnosis, patients showing more significant symptoms could be flagged and seen by a medical professional as a priority, potentially improving their chances of recovery.
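The triage idea above can be sketched in a few lines: the model’s output is treated only as an urgency score used to order the queue, and every patient is still seen by a clinician. The patient labels and scores below are hypothetical.

```python
import heapq

def triage_order(patients):
    """patients: list of (name, urgency) pairs, urgency in [0, 1].
    Returns names ordered most-urgent first, for review by a clinician.
    Uses a max-heap (negated scores) rather than issuing any diagnosis."""
    heap = [(-urgency, name) for name, urgency in patients]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

referrals = [
    ("patient A", 0.20),  # routine check
    ("patient B", 0.95),  # scan suggests a sight-threatening condition
    ("patient C", 0.55),
]
print(triage_order(referrals))  # patients with significant symptoms first
```

The key design point is that the AI never removes anyone from the queue; it only reorders it, so a mis-scored patient is delayed rather than dismissed.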

When AI is used as a driver for patient management, rather than viewed as an alternative to a physician, it can create a faster and more efficient process.

To help continue improving the results produced by DeepMind, the NHS has been given a validated version to use for free for the next five years. Real-world application over this time should streamline both the NHS’s processes and the technology itself.

A LONG TERM VISION


For the time being, AI’s role within eye health is one of evolution, not revolution. Given the inconsistency of current technology and the impact incorrect results can have on people’s sight, it can only be utilised as a supporting tool.

For now, the skillsets of Data Analysts and medical doctors remain too separate to work fully hand-in-hand. Add to this the risk of automation bias (a willingness to blindly trust a machine’s output), and the margin of error is too high.

However, that’s not to say that AI can’t and won’t play a significant part in the future of Healthcare. With the technology to detect eye conditions through the lens of your smartphone camera closer than ever to mainstream use, AI is set to play a huge role in outpatient treatment. At this stage, however, that role will be one of risk predictor, not eliminator. 
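The distinction between a risk predictor and a risk eliminator can be made concrete: the model outputs a probability, which is mapped to a recommended action, and no output ever clears a patient outright. The thresholds and categories below are illustrative assumptions, not clinically validated values.

```python
def referral_advice(risk):
    """Map a model's risk estimate (0-1) to a recommended action.
    Note that even the lowest band keeps the patient under routine
    monitoring: the predictor flags risk, it never rules it out."""
    if risk >= 0.7:
        return "urgent referral"
    if risk >= 0.3:
        return "routine referral"
    return "monitor at next scheduled appointment"

for risk in (0.85, 0.40, 0.10):
    print(risk, "->", referral_advice(risk))
```

In practice, thresholds like these would be set conservatively by clinicians, since a missed referral is far costlier than an unnecessary one.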

If you think you have the skillset to help take AI to the next level in Healthcare, we may have a role for you.

Take a look at our latest opportunities or get in contact with our team. 

Related blog & news

With over 10 years’ experience working solely in the Data & Analytics sector, our consultants are able to offer detailed insights into the industry.

Visit our Blogs & News portal or check out the related posts below.

From Idea to Impact: How Charities Use Data

It’s that time of year again. As the festive season draws near and we pull together wish lists, many of us also begin to think about how we can give back. Given that the UK spent over £7 billion this Black Friday and Cyber Monday weekend, it’s not surprising that the idea of Giving Tuesday is becoming more and more popular. But with 160,000 registered charities in the UK alone, institutions are turning to data to find new ways to stand out and make a greater impact.

Far from just running quarterly reports, charities are now utilising the insights they gain from data to inform their strategies, improve their services and plan for the future.

IDEAS

Given that not every charity is lucky enough to go viral with an Ice Bucket Challenge-style video, there is a need to find other ways to stand out in such a crowded market. As such, many are looking to the data they have collected to help create a strategy.

Macmillan Cancer Support, one of the UK’s biggest charities, wanted to see more success from one of their main fundraisers, ‘The World’s Biggest Coffee Morning’. The event, which sees volunteers hold coffee and cake-fuelled gatherings across the country, was revolutionised by data. By engaging with their database and researching what motivated fundraisers, they refocused their marketing around how the occasion could create an opportunity for people to meet up and chat, such as swapping ‘send for your free fundraising pack’ for ‘order your free coffee morning kit’. Whilst these amendments may seem superficial, they had a major impact, increasing funds raised from £15m to £20m.

Some charities have taken this idea even further, using Data & Analytics tools to engage with potential donors. Homelessness charity Cyrenians’ data told them that there were a number of misconceptions about rough sleepers, including 15% of people believing that they were homeless by choice.
To counter this, they created an AI chatbot, named Alex, that allowed users to ask questions they may not have been comfortable asking a real person.

Another charity using data tools to counter common misconceptions is the Dyslexia Association. Their ‘Moment of Dyslexia’ campaign saw them utilise facial recognition technology; the longer a person looked at their digital poster, the more jumbled the words and letters became. By harnessing both insights and the technology made possible by data, they were able to offer a glimpse of what dyslexia is like to people who previously didn’t understand it.

INDIVIDUALS

A big issue facing a number of charities is trust. Following a series of recent scandals, the public are more sceptical than ever of how charities are run, and their use of data is no exception. This ‘trust deficit’ has resulted in vast numbers of potential donors staying away, with recent research highlighting that only 11% of people are willing to share their data with a charity, even if it means a better service.

Whilst charities with effective Data Governance are able to use their vast amounts of data to enhance their organisations, those who mismanage it are likely to suffer. Following a cyber-attack that exposed the data of over 400,000 donors, the British and Foreign Bible Society were fined £100,000. As hackers were able to enter the network by exploiting a weak password, this serves as a timely reminder that our data needs to be not only clean, but secure.

Financial implications aside, improper data usage can also do irreversible damage to a charity’s reputation. St Mungo’s has faced criticism for passing information about migrant homeless people to the Home Office, putting them at risk of deportation. Whilst they were cleared of any wrongdoing by the ICO, this controversial use of data has had a negative impact on the charity’s image. With a decline in the number of people donating to charity overall, anything that could put people off further is bad news.
IMPACT

Whilst there is more demand than ever for charities to share their impact data, there is also more opportunity. With Lord Gus O’Donnell urging charities to make data an ‘organisation-wide priority’, many are going beyond publishing annual reports and fully embracing a culture shift.

Youth charity Keyfund have been able to justify how they spend their funds based on their impact data. Having heard concerns from fundraisers about whether their leisure projects were effective, they looked at the data they had gathered from the 6,000 young people they were helping. What they found was that not only were their leisure projects effective, they had an even more positive impact than the alternatives, particularly for those from the most deprived areas. This allowed them to continue to support these programmes and even increase funding where necessary.

Going one step further are Street League, a charity that uses sports programmes to tackle youth unemployment. Rather than share their impact data in quarterly, or even annual, reports, they moved to real-time reporting. Interested parties can visit an ‘Online Impact Dashboard’ and see up-to-the-minute data about how the charity’s work is impacting the lives of the people it is trying to help. This not only allows the most relevant data to be used strategically, but also supports the organisation holistically, gaining both donor attention and trust.

To stand out in the charity sector, institutions need to take advantage of data. Not only can it be used to generate campaigns and streamline services but, when used securely and transparently, it can help rebuild trust and offer a competitive edge.

If you want to make the world a better place by harnessing and analysing data, we may have a role for you. Take a look at our latest opportunities or get in touch with one of our expert consultants to see how we can help you.

Fighting Crime with Data: An Ethical Dilemma

Can you be guilty of a crime you’ve yet to commit? That’s the premise of Steven Spielberg’s 2002 sci-fi thriller ‘Minority Report’. But could it actually be closer to reality than you think?

As technology has advanced, law enforcement has had to adapt. With criminals utilising increasingly sophisticated methods to achieve their goals, our police forces have had to continuously evolve their approach in order to keep up.

New digital advances have refined crime-solving techniques to the point where they can even predict the likelihood of a specific crime occurring. But with our personal data at stake, where do we draw the line between privacy and public safety?

Caught on Camera

The digital transformation has led to many breakthroughs over the past few decades, from fingerprint analysis through to the advanced Machine Learning models now used to tackle Fraud and analyse Credit Risk.

With an estimated one camera for every 14 individuals in the UK, CCTV coverage is particularly dense. And, with the introduction of AI technologies, its use in solving crimes is likely to increase even further.

IC Realtime’s Ella uses Computer Vision to analyse what is happening within a video. With the ability to recognise thousands of natural language queries, Ella lets users search footage for exactly what they’re after, from specific vehicles to clothes of a certain colour. With only the quality of CCTV cameras holding it back, we’re likely to see technology like this become mainstream in the near future.

Some more widespread technologies, however, are already playing their part in solving crimes. Detectives are currently seeking audio recordings from an Amazon Echo thought to have been active during an alleged murder. However, as with previous requests for encrypted phone data, debate continues around what duty tech companies have to their customers’ privacy.
Hotspots and Hunches

Whilst Big Data has been used to help solve crime for a while, we’ve only seen it begin to play a preventive role over the past few years. By using Predictive Analytics tools such as HunchLab to counter crime, law enforcement services can:

- Direct resources to crime hotspots where they are most needed.
- Produce statistical evidence that can be shared with local and national-level politicians to help inform and shape policy.
- Make informed requests for additional funding where necessary.

Research has shown that, in the UK, these tools have been able to predict crime around ten times more accurately than the police.

However, above and beyond the geographical and socioeconomic trends that inform these predictions, advances in AI have progressed things even further.

Often, after a mass shooting, it is found that the perpetrators had spoken about their planned attack on social media. The social landscape is far too big for authorities to monitor everyone, and simply scanning for keywords can be misleading. However, IBM’s Watson can understand the sentiment of a post. This huge leap forward could be the answer to the sincere, and fair, policing of social media that we’ve yet to see.

Man vs Machine

Whilst our social media posts may be in the public domain, the question remains of how much of our data we are willing to share in the name of public safety.

There is no doubt that advances in technology have left us vulnerable to new types of crime, from major data breaches to new ways of cheating the taxman. So, there is an argument to be had that we need to surrender some privacy in order to protect ourselves as well as others. But who do we trust with that data?

Humans are all susceptible to bias, and AI inherits the biases of its creators. Take a program like Boulder, a Santa-esque prototype that analyses the behaviour of people in banks, determining who is ‘good’ and who is ‘bad’.
Whilst it can learn signs of what to look for, it’s also making decisions based on how it’s been taught ‘bad’ people might look or act. As such, is it any more trustworthy than an experienced security guard?

If we set human bias aside, do we trust emotionless machines to make truly informed decisions? A study that applied Machine Learning to bail decisions found that the technology’s recommendations would have resulted in 50% fewer reoffenders than the original judges’ decisions. However, whilst the evidence suggests that this may be the way forward, it is unlikely that society will accept such an important, life-changing decision being made by a machine alone.

There is no black and white when it comes to how we use data to prevent and solve crime. As a society, we are continuously pushing the boundaries and determining how much technology should impact the way we govern ourselves.

If you can balance ethics with the evolution of technology, we may have a role for you. Take a look at our latest roles or contact one of our expert consultants to find out how we can help you.