Deep Learning Engineer

London
£550 - £650 per day

Deep Learning Engineer
London/Remote
Initial 3-month Contract
£600 per day

As a Deep Learning Engineer, you will be working with 3D image recognition tools and object tracking. This contract is with a US-based company who use Data Science techniques to provide their media clients with marketing insights.

THE COMPANY:
This company have a huge presence in the US and are looking to grow into Europe. They have a product that uses artificial intelligence to provide insightful knowledge to their media clients. They are looking to use Deep Learning to build an image recognition product and, eventually, a Computer Vision team.

THE ROLE:
This opportunity will allow a Deep Learning Engineer to show off their capabilities with 3D image recognition. You will be working with a wide range of products across multiple retailers, for object detection purposes on their websites. The ideal candidate will have expertise with Neural Networks and OpenCV, and an understanding of the relevant Python libraries.

YOUR SKILLS AND EXPERIENCE:
The ideal Deep Learning Engineer will have:

  • Expertise with OpenCV, Python, TensorFlow and Neural Nets
  • Experience with 3D image recognition techniques
  • Experience building models from scratch in the Computer Vision space
  • A PhD in a maths or stats subject

HOW TO APPLY:
Please submit your CV to Henry Rodrigues at Harnham via the Apply Now button.
Please note that our client is currently running a fully remote interview process, and is able to onboard and hire remotely as well.

22507 - HR

Similar Jobs

Salary

£80000 - £90000 per annum + BONUS + BENEFITS

Location

City of London, London

Description

Incredible opportunity to join a rapid-growth company who are set to take the data world by storm. Tech: Python, Pandas, NumPy, Docker, TensorFlow, AWS

Harnham blog & news

With over 10 years' experience working solely in the Data & Analytics sector, our consultants are able to offer detailed insights into the industry.

Visit our Blogs & News portal or check out our recent posts below.

The Search For Toilet Paper: A Q&A With The Data Society

We recently spoke to Nisha Iyer, Head of Data Science, and Nupur Neti, a Data Scientist, from Data Society. Founded in 2014, Data Society consult and offer tailored Data Science training for businesses and organisations across the US. With an adaptable back-end model, they create training programs that are not only tailored when it comes to content, but also incorporate a company's own Data to create real-life situations to work with.

Recently, however, they've been looking into another area: toilet paper. Following mass, ill-informed stockpiling as countries began to go into lockdown, toilet paper became one of a number of items that were suddenly unavailable. And, with a global pandemic declared, Data Society were one of a number of Data Science organisations looking to help any way they could.

"When this pandemic hit, we began thinking how could we help?" says Iyer. "There's a lot of ways Data Scientists could get involved with this, but our first thought was about how people were freaking out about toilet paper. That was the base of how we started, as kind of a joke. But then we realised we already had an app in place that could help."

The app in question began life as a project for World Central Kitchen (WCK), a non-profit who help support communities after natural disasters occur. Needing to go out and get nutritionally viable supplies upon arriving at a new location, WCK teams had to know which local grocery stores had the most stock available.

"We were working with World Central Kitchen as a side project. What we built was an app that was supposed to help locate resources during disasters. So we already had the base done."

The app allows the user to select their location and the products they are after. It then provides information on where each item can be found, and what its nutritional values are, with the aim of improving turnaround time for volunteers.
One of the original Data Scientists, Nupur Neti, explained how they built the platform: "We used a combination of R and Python to build the back-end processing and R Shiny to build the web application. We also included Google APIs that took your location and could find the closest store to you. Then, once you have the product and the sizes, we had an internal ranking algorithm which could rank the products selected based on optimisation, originally based on nutritional value."

The team figured the same technology could help in the current situation, ranking on stock levels rather than nutritional value. With an updated app, Iyer notes, "People won't have to go miles and stand in lines where they are not socially distancing. They'll know to visit a local grocery store that does have what they need in stock, one they've probably not even thought of before."

However, creating an updated version presented its own challenges. Whereas the WCK app utilised static Data, this version has to rely on real-time Data. Unfortunately, this isn't as easy to come by, as Iyer knows all too well: "When we were building this for the nutrition app, we reached out to grocery stores and got some responses for static Data. Now, we know there is real-time Data on stock levels because they're scanning products in and out. Where is that inventory though? We don't know."

After putting out an article asking for help finding live Data, crowdsourcing app OurStreets got in touch. They, like Data Society, were looking to help people find groceries in short supply. But, with a robust front and back-end in place, the app already live, and submissions flying in across the States, they were looking for a Data Science team who could make something of their findings.
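The ranking step Neti describes could look something like the sketch below. Data Society's actual algorithm is internal, so the data shape, field names, and scoring key here are purely illustrative:

```python
# Hypothetical sketch of the ranking idea: score candidate stores/products
# on a chosen attribute. The original app ranked on nutritional value; the
# updated version ranks on stock levels instead.

def rank_products(products, key="stock"):
    """Return products sorted by the chosen attribute, highest first."""
    return sorted(products, key=lambda p: p.get(key, 0), reverse=True)

stores = [
    {"name": "Corner Shop", "stock": 3, "nutrition": 6.0},
    {"name": "Supermart", "stock": 20, "nutrition": 5.5},
]
by_stock = rank_products(stores, key="stock")        # Supermart first
by_nutrition = rank_products(stores, key="nutrition")  # Corner Shop first
```

Swapping the `key` argument is all it takes to move from the WCK nutrition use case to the stock-level use case, which mirrors how the team reused their existing base.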
“We have the opportunity,” says Iyer, “to take the conceptual ideas behind our app and work with OurStreets' robust framework to create a tool that could be used nationwide.”

Before visiting a store, app users select what they are looking for. This allows them to check off what the store has against their expectations, as well as upload a picture of what is available. They can also report on whether the store is effectively practising social distancing.

Neti explains that this Data holds lots of possibilities for their Data Science team: “Once we take their Data, our system will clean any submitted text using NLP and utilise image recognition on submitted pictures using Deep Learning. This quality Data, paired with the social distancing information, will allow us to gain better insights into how and what people are shopping for. We'll then be able to look at trends, see what people are shopping for and where. Ultimately, it will also allow us to make recommendations as to where people should go if they are looking for a product.”

In addition to crowdsourced information, Data Society are still keen to get their hands on any real-time Data that supermarkets have to offer. If you know where they could find it, you can get in touch with their team.

Outside of their current projects, Iyer remains optimistic for the world as it emerges from the current situation: “Things will return to normal. As dark a time as this is, I think it's going to exemplify why people need to use Artificial Intelligence and Data Science more. If this type of app is publicised during the Coronavirus, maybe more people will understand the power of what Data and Data Science can do, and more companies that are slow adopters will see this and see how it could be helpful to their industry.”

If you want to make the world a better place using Data, we may have a role for you, including a number of remote opportunities.
Or, if you're looking to expand and build out your team with the best minds in Data, get in touch with one of our expert consultants, who will be able to advise on the best remote and long-term processes.
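The text-cleaning step Neti mentions could, very roughly, be sketched as follows. Data Society's actual NLP pipeline is not public, so this simple normalisation is illustrative only:

```python
import re

# Illustrative normalisation of the kind an NLP cleaning step might apply
# to crowdsourced submissions before trend analysis. The real pipeline is
# unpublished; this is only a sketch of the idea.

def clean_submission(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)    # drop punctuation/symbols
    return re.sub(r"\s+", " ", text).strip()    # collapse runs of spaces

print(clean_submission("  NO Toilet-Paper!!  shelves EMPTY "))
# -> "no toilet paper shelves empty"
```

Normalised text like this is much easier to aggregate when looking for the trends in what people are shopping for and where.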

How Computer Vision Is Streamlining Manufacturing

Since the Ford Motor Company first introduced the assembly line for car production, automation has been part of the manufacturing industry. Over 100 years later, Computer Vision adds another layer to streamlined processes. Industrial robots. Drones. Automation. With the adoption of AI technologies and their connective capabilities, we're in the next age of Smart Manufacturing.

Supply is led by demand and, as consumers demand more, manufacturers are constantly evolving to ensure their processes are efficient and safe. The implementation of machines allows them to make sure quality control measures are in place and to catch issues before breakdowns occur. This verification of output far outpaces the human eye and opens up opportunities for more creative thinking.

Working Hand In (Robotic) Hand

While there may still be some element of fear regarding machines taking over jobs, this isn't the intent. Ultimately, the idea is for humans and machines to partner for more streamlined and efficient processes within the industry. The role of machines is to continue the automation of processes using image recognition, gathering insights from AI-driven Analytics solutions, and optimising operations across facilities. We continue to retain oversight of these processes, but are now also free to focus on higher-value tasks, allowing strategic and creative thinking to take the lead.

Computer Vision is playing a crucial role in the implementation of AI in manufacturing, and its use is estimated to grow more than 45% by 2025. Why? Here are a few reasons:

  • Quality inspection
  • Predictive maintenance
  • Defect reduction
  • Productivity improvement

Human-machine partnerships through the adoption of AI, cloud-based technologies, and Computer Vision are helping to prepare facilities to become networked factories. Not unlike the un-siloed Data teams working throughout a variety of industries, the factory will also link its teams.
From design to supply chain, the production line to quality control, the coming years will see continued growth in the output and efficiency of today's manufacturer.

Looking Out For Bias

However, there is one area in which Computer Vision remains lacking. Interpreting visual images still carries biases which can be detrimental to some production output use cases. Think cars, wearable devices, or uniforms. The biases and stereotypes found most often in Computer Vision algorithms concern three attributes protected by anti-discrimination law: gender, skin colour, and age. To help combat these biases and make visuals more easily identifiable, two computer scientists embarked upon a research project. What they found was that not only were there biases in these areas, but some visual clues still posed problems. However, the images used to train Computer Vision technologies can determine the differences, not just in people, but in landscapes and objects as well. By crowdsourcing correct categorisations, automating image collection, and more aptly defining words to negate stereotypical phrasings, researchers are striving toward bias-free image capture.

Seeking Out Business Goals

In the last few years, Computer Vision has made great strides in uniting technologies to streamline the manufacturing process. As researchers work to reduce bias in Computer Vision and AI, machines become ever more essential for meeting business goals. Factories with smart manufacturing systems can more quickly spot process inefficiencies with improved accuracy. In 2017, sales of Computer Vision and automation systems grew 14.6% over the previous year to $2.633 billion. All industries are noticing the benefits of Computer Vision as an essential system but, like the Ford Motor Company in the early 20th century, manufacturing looks once again set to lead the world in innovation.

Ready to take the next step in your career? Whether you're interested in AI, Big Data and Analytics, Computer Vision or more, we may have a role for you. Check out our current vacancies or get in touch with one of our expert consultants to find out more.
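At its simplest, the quality-inspection use case above comes down to comparing each unit's image against a known-good reference and flagging large deviations. A toy sketch of that idea, using NumPy arrays as stand-in images (production systems use trained Computer Vision models, not this pixel comparison):

```python
import numpy as np

# Toy illustration of image-based quality inspection: flag units whose
# mean pixel deviation from a "golden" reference image exceeds a
# tolerance. Purely a sketch of the underlying idea.

def inspect(unit: np.ndarray, reference: np.ndarray, tol: float = 10.0) -> bool:
    """Return True if the unit passes inspection (mean absolute
    pixel difference from the reference is within tolerance)."""
    diff = np.abs(unit.astype(float) - reference.astype(float))
    return bool(diff.mean() <= tol)

reference = np.full((8, 8), 128, dtype=np.uint8)  # pristine part
good_unit = reference.copy()
bad_unit = reference.copy()
bad_unit[2:6, 2:6] = 0                            # simulated defect

print(inspect(good_unit, reference))  # True
print(inspect(bad_unit, reference))   # False
```

Running such a check on every unit at line speed is exactly the kind of verification that, as the article notes, far outpaces the human eye.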
