Visit our Blogs & News portal or check out our recent posts below.
Nick began his career at Harnham in 2014 and has helped grow the UK Data Science and Machine Learning team throughout his time here. The Data Science and AI space has quickly become one of the fastest-growing industries globally, and Nick and his team are partnering with some of the leaders in this field to assist in their recruitment of top UK talent.
Faced with rapidly growing and ageing populations, healthcare systems in both the UK and US are under an unprecedented level of demand. With only limited resources available, conversation is beginning to turn to the potential of Artificial Intelligence (AI) to ease some of the strain. One example already seeing success is the collaboration between Google’s DeepMind and London’s Moorfields Eye Hospital. But, as the lines begin to blur between human and machine diagnosis, it’s worth questioning what role AI should actually play.

SEEING THE POTENTIAL IN AI

Aside from population growth, many societal factors are putting pressure on the healthcare system. A rise in illnesses such as diabetes has led to an increase in eye diseases and greater demand on optometrists. Fortunately, AI can speed up the process, with new technologies allowing systems like DeepMind’s to form their own diagnoses.

Optical Coherence Tomography (OCT) allows optometrists to create 3D scans of people’s eyes. By bouncing near-infrared light off the interior surfaces of the eye, it creates an image that reveals any abnormalities. DeepMind’s system has been trained on over 15,000 such scans, using algorithms to find common patterns in the data, and can now form a likely diagnosis. DeepMind co-founder Mustafa Suleyman says: “[This could] transform the diagnosis, treatment, and management of patients with sight-threatening eye conditions [...] around the world.” However, with an accuracy of just over 94%, there is still enough room for error to cause concern, especially given the potential consequences of an incorrect diagnosis.

LOOKING FOR MISTAKES

This doesn’t mean we should rule out the use of AI altogether. Whilst we may not be able to rely on the technology alone for diagnosis, it can be effective working hand-in-hand with a human skillset. In particular, by using AI for triage (determining the order in which patients should be seen) rather than for full diagnosis, patients showing more serious symptoms could be flagged and seen by a medical professional as a priority, potentially improving their chances of recovery. When AI is used as a driver for patient management, rather than as a substitute for a physician, it can create a faster and more efficient process. To help improve DeepMind’s results further, the NHS has been given a validated version to use for free for the next five years. Real-world use over this period should streamline both NHS processes and the technology itself.

A LONG-TERM VISION

For the time being, AI’s role within eye health is one of evolution, not revolution. Given the inconsistency of current technology and the impact of incorrect results on people’s sight, it can only be utilised as a supporting tool. For now, the skillsets of data analysts and medical doctors remain too separate to work fully hand-in-hand. Add to this the risk of automation bias (a willingness to blindly trust a machine’s output), and the margin of error is too high. However, that’s not to say that AI can’t and won’t play a significant part in the future of healthcare. With the technology to detect eye conditions through the lens of your smartphone camera closer than ever to mainstream use, AI is set to play a huge role in outpatient treatment. At this stage, however, that role will be one of risk predictor, not eliminator.
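To make the triage idea above concrete, here is a minimal sketch, not DeepMind’s actual system: once any model has produced a per-patient urgency score, reordering the queue by predicted risk takes only a few lines of R. The patient IDs and scores below are invented for illustration.

```r
# A toy sketch of AI-assisted triage, not DeepMind's system. Assume some
# model has already produced an urgency score per patient; the IDs and
# scores here are invented for illustration.
patients <- data.frame(
  id         = c("P001", "P002", "P003", "P004"),
  risk_score = c(0.12, 0.87, 0.45, 0.93)   # hypothetical model output in [0, 1]
)

# Triage: see the highest-risk patients first, rather than first come, first served.
queue <- patients[order(patients$risk_score, decreasing = TRUE), ]
print(queue)
```

The point of the sketch is the division of labour: the model only proposes an ordering, and the diagnosis itself stays with the clinician.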
If you think you have the skillset to help take AI to the next level in healthcare, we may have a role for you. Take a look at our latest opportunities or get in contact with our team.
26. September 2018
There is a fundamental shift happening in the analytics space at the moment, one which sees open-source software staking its claim on the sector. More and more frequently, we at Harnham see companies of all sizes introducing open-source software such as R, Python and Hadoop, moving away from the traditional analytics tools of the past – most notably SAS. This dramatic shift was comfortably reinforced at the 2015 EARL Conference which, back by popular demand, hosted a number of key figures in the analytics space showcasing the latest trends and topics in R. Over the past decade, R has arguably become the most important tool for statistical analysis, data visualisation and predictive modelling, used by statisticians and data scientists, among others, in both academia and the commercial world.

The Benefit of Being Open

So why is R so great? Well, first of all, being open-source, it’s FREE – anyone can use it. Who doesn't like stuff that comes for free? Beyond that, you have a robust tool that can manage any statistical task whilst delivering high-quality visualisations – something nicely illustrated by Joe Cheng’s presentation on the increasingly popular applications of Shiny. On top of this, once you know how, R is easy to use and gives fast results. Romain Francois and Dirk Eddelbuettel’s ‘Rcpp’ package is a perfect example, and has become the most widely used R package: seamlessly combining the power of C++ with an approachable API that lets you write high-performance code, it’s easy to see why. Finally (and probably most importantly), its open-source nature makes improvement a collaborative effort, keeping the tool ahead of the game. With over two million users worldwide experimenting, finding bugs and working together as a community, R manages to stay at the forefront of its field. Like similar projects such as Linux and MySQL, it has thrived in a culture that treats collaboration, brainstorming and information-sharing as the keys to its continued furtherance and enrichment.

One Step Ahead of the Competition

Combine these factors and you can see why R has become one of – if not the – most popular analytics software tools right now. Companies and universities alike are jumping on the bandwagon, meaning that more and more analysts are entering the field with R in their toolbox. The result is a snowball effect, sustaining the continued rise of R. R’s success places huge pressure on others to keep up – can they adapt, or will they soon play little, if any, part in the analytics space in years to come? Whilst there was little talk at the EARL Conference about R’s biggest competitor, it became clear that the battle beginning to take shape for the years to come is between R and Python. Who will win the fight to become the data scientist’s language of choice? Despite all the talk, there are clearly pros and cons to both, and use cases where one is more effective than the other. So is it really a war? Or is the outcome more likely a co-existence?
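To make the “robust statistics plus high-quality visualisation” claim concrete, here is a minimal sketch using only base R and the built-in mtcars dataset: a fitted statistical model and an annotated plot in a handful of lines.

```r
# A minimal illustration of the workflow praised above, using only base R
# and the built-in mtcars dataset.
fit <- lm(mpg ~ wt, data = mtcars)    # linear model: fuel economy vs weight
summary(fit)                          # coefficients, R-squared and p-values in one call

# Visualisation: the raw data with the fitted regression line overlaid.
plot(mpg ~ wt, data = mtcars,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon",
     main = "Fuel economy vs weight")
abline(fit, col = "red")
```

That everything above ships with a free download, with packages like Shiny and Rcpp one install away, is exactly the open-source advantage the conference speakers kept returning to.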
02. November 2015
Data science is a young discipline, a multidisciplinary field requiring knowledge of sophisticated statistical modeling and software engineering. A strong grasp of information design doesn’t hurt, either. As a result, skilled practitioners are in high demand, as increasingly data-driven enterprises and organizations need a unique skillset capable of reaping insights from big data. Meanwhile, there remains some confusion and debate as to what makes a data scientist.

The future of the discipline is bright, but it’s useful to look to its past to understand what it is and where it may be going. Data science arose from the convergence of two more mature disciplines. In a new post at Forbes, Gil Press presents a short history of how the discipline came to be, tracing its evolution back to a 1962 paper by mathematician John W. Tukey, “The Future of Data Analysis”. In his 1974 book Concise Survey of Computer Methods, computer scientist Peter Naur offered an early definition of data science as “the science of dealing with data, once they have been established, while the relation of the data to what they represent is delegated to other fields and sciences.”

Beginning in the mid-’90s, the discussion leapt out of academic circles and turned towards potential business applications, with the advent of data mining technologies and their possible uses in marketing and business intelligence. These developments also prompted the now-familiar challenge of storing and working with millions of rows of data. In 1999, Jacob Zahavi articulated this emerging issue, stating, “Scalability is a huge issue in data mining. Another technical challenge is developing models that can do a better job analyzing data, detecting non-linear relationships and interaction between elements… Special data mining tools may have to be developed to address web-site decisions.”

Data science came into its own during the last decade. As the strands of mathematics and computer science continued to intertwine in academia, new technologies were developed to mine, store, and analyze massive data sets, while consumer internet giants such as Google demonstrated the business value of a data-driven approach to operations and innovation. A 2009 prediction by Google’s Chief Economist Hal Varian was particularly spot-on, with Varian telling McKinsey Quarterly, “I keep saying the sexy job in the next ten years will be statisticians…the ability to take data—to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it—that’s going to be a hugely important skill in the next decades.”

Four years later, this statement seems like a foregone conclusion, as big data has reached buzzword status in the media and become fundamental to the operations of enterprise, academic, and government organizations. Awareness of the value of data science has leapt out of academia and the business world and into mass culture, largely thanks to the accuracy of Nate Silver’s projections during the 2012 elections and his bestselling book The Signal and the Noise.
The discipline’s prominence and impact are set to increase considerably in the next decade, with the advent of the Internet of Things, the industrial internet, and the democratization of its tools and techniques, which will transform fields from healthcare to agriculture, journalism to civic life. To learn more about the history of data science and its rise to prominence, check out Gil Press’s Short History of Data Science at Forbes.
21. January 2015
French company Spotter has developed an analytics tool that claims to be able to identify sarcastic comments posted online. Spotter says its clients include the Home Office, the EU Commission and Dubai Courts. The algorithm-based analytics software generates reputation reports based on social and traditional media material. However, some experts say such tools are often inadequate because of the nuance of language. A spokeswoman for the Home Office said she could not comment at this time.

Spotter's UK sales director, Richard May, said the company monitored material that was "publicly available". Its proprietary software uses a combination of linguistics, semantics and heuristics to create algorithms that generate reports about online reputation. It says it is able to identify sentiment with up to an 80% accuracy rate, and that these reports can also be verified by human analysts if the client wishes. Algorithms had been developed to reflect various tones in 29 different languages, including Chinese, Russian and Arabic, said Mr May. "Nothing is fool-proof - we are talking about automated systems," he told the BBC. "But five years ago you couldn't get this level of accuracy - we were at the 50% mark."

Mr May added that one of the most common subjects for sarcasm was bad service, such as delayed journeys. "One of our clients is Air France. If someone has a delayed flight, they will tweet, 'Thanks Air France for getting us into London two hours late' - obviously they are not actually thanking them," he said. "We also have to be very specific to specific industries. The word 'virus' is usually negative. But if you're talking about virus in the context of the medical industry, it might not be." Spotter charged a minimum of £1,000 per month for its services, Mr May said.

Human effort

Simon Collister, who lectures in PR and social media at the London College of Communication, told the BBC there was "no magic bullet" when it came to analytics that recognise tone. "These tools are often next to useless - in terms of understanding tone, sarcasm, it's so dependent on context and human languages," he said. "It's social media, and what makes it interesting and fascinating is the social side - machines just can't comprehend that side of things in my opinion." Mr Collister added that human interpretation was still vital: "The challenge that governments and businesses have is whether to rely on automated tools that are not that effective or to engage a huge amount of human effort."
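Spotter’s software is proprietary, but Mr May’s Air France example hints at the heuristic layer he describes: politeness markers co-occurring with complaint vocabulary. Here is a toy sketch of that idea in R; the function name and word lists are invented for illustration, and a real system would be far more sophisticated.

```r
# A toy heuristic in the spirit of the Air France example, not Spotter's
# actual software: flag text that pairs a politeness marker ("thanks")
# with a known service-complaint word. The word lists are invented.
is_sarcastic_thanks <- function(text) {
  text <- tolower(text)
  polite    <- grepl("\\bthanks?\\b", text, perl = TRUE)
  complaint <- grepl("late|delay|cancel|lost", text, perl = TRUE)
  polite & complaint   # sarcasm suspected when both co-occur
}

tweets <- c(
  "Thanks Air France for getting us into London two hours late",
  "Thank you for the smooth flight!"
)
is_sarcastic_thanks(tweets)   # TRUE FALSE
```

A rule this crude also shows why Mr Collister’s scepticism is warranted: a genuine “thanks for staying so late to help me” would be flagged too, which is exactly the context-dependence he points to.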
18. February 2014