UPLOAD YOUR CV
We help the best talent in the Digital analytics market to find rewarding careers.
Simply upload your CV and select your areas of interest, and our expert recruitment consultants will be in touch.
Senior Natural Language Processing Engineer
San Francisco Bay Area, CA
$160,000-$190,000 (depending on experience)
Harnham is working with a specialized AI group looking to make an impact in the healthcare space. This is an opportunity to change the way people interact with the healthcare system from every angle. With the group in a period of rapid growth, there is major opportunity to grow within the organization.
This is a chance to apply natural language processing and deep learning to a space that affects millions. As a Natural Language Processing Engineer, you will solve diverse problems involving healthcare data while working with cutting-edge technology.
AS A SENIOR NATURAL LANGUAGE PROCESSING ENGINEER, YOU WILL:
YOUR SKILLS AND EXPERIENCE:
Competitive base salary of $160,000-$190,000 + Benefits
HOW TO APPLY:
Please register your interest by sending your CV to Frances Raynolds via the Apply link on this page.
US$130000 - US$140000 per year + Relocation
A household name and market leader, this global business is looking to hire an experienced Computer Vision Engineer!
US$165000 - US$180000 per year
Want to work on cutting-edge AI projects? An established company out of MIT is looking to hire a Computer Vision Engineer with experience in Deep Learning.
US$120000 - US$180000 per year + Equity & Benefits
Computer Vision Engineers: if you have experience in SLAM, I have a unique opportunity to join one of New York's best start-ups.
With over 10 years' experience working solely in the Data & Analytics sector, our consultants are able to offer detailed insights into the industry.
Visit our Blogs & News portal or check out our recent posts below.
I am thrilled to announce that we've been named one of The Sunday Times' Top 100 Small Companies to Work For 2019. This is the first year we've been eligible for the award and, fantastically, we've managed to place 26th. Coming off the back of our three-star accreditation from Best Companies for 'Extraordinary Levels' of workplace engagement, and being named APSCo's Recruitment Company of the Year (£10m-£50m), this is something else for the whole business to be proud of.
Crucially, for both myself and the leadership team, this accolade is based entirely on employee feedback. Our success has always been built on the success of our employees, and we have always tried to nurture an environment where they can flourish. To be recognised for our efforts, and to know that our staff are happy here, means a tremendous amount to us.
And, as ever, we're looking to grow our team. If you're determined, ambitious and driven, get in touch about our latest opportunities.
21. February 2019
From vinyl to Tidal, we all know that the way we consume music has changed. Technological advances have made Steve Jobs' claim that he would put "1,000 songs in our pockets" seem antiquated, whilst Spotify's algorithms serve us tracks that we'll love before we've discovered them ourselves. But can the technologies that have brought us these advancements change the way we make music? Whether it's leading to new instruments or creating a song without our input, Artificial Intelligence is a game changer.
Make Some Noise
Until recently, the best way to imitate a sound was by experimenting with the different settings on a keyboard. This is no longer the case, thanks to Google's research arm Magenta. They've created the NSynth Super, an instrument that generates sounds using Deep Neural Network techniques. These algorithms allow the NSynth not only to imitate a sound, but to keep learning more about the specificities of that pitch, creating something closer to reality. Users can then combine those individual sounds to create something unique and entirely original. This is potentially just the beginning of a new wave of music, and in a decade's time the NSynth could end up having as big an impact as autotune.
Talking About AI Generation
Whilst we're still waiting to see the impact of instruments akin to the NSynth, machine-led compositions are becoming more and more commonplace. Using a Recurrent Neural Network (RNN), one can feed a model existing music and ask it to generate something new. By learning the patterns and rhythms of notes from a variety of compositions, the model should be able to output an original and melodic sequence. Although these may not be the most amazing tracks in the world, they do serve a purpose. Music production platform Jukedeck allows users to input their requirements for a piece of music (genre, tempo, mood, length, instruments etc.) that can then be automatically generated using AI.
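The RNN approach described above can be sketched in a few lines of code. The toy note vocabulary, network sizes and sampling loop below are illustrative assumptions, not any real product's implementation, and the weights are random rather than trained; in practice they would be learned from a corpus of existing compositions so that the sampled sequence reflects the patterns the model has absorbed.

```python
import math
import random

# Toy pitch vocabulary; a real system would use MIDI notes with durations.
NOTES = ["C", "D", "E", "F", "G", "A", "B", "rest"]
VOCAB = len(NOTES)
HIDDEN = 8  # size of the recurrent state

rng = random.Random(0)
# Random (untrained) weights purely for illustration.
Wxh = [[rng.gauss(0, 0.1) for _ in range(VOCAB)] for _ in range(HIDDEN)]
Whh = [[rng.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
Why = [[rng.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(VOCAB)]

def generate(seed_note, length):
    """Sample a note sequence, feeding each output back in as the next input."""
    h = [0.0] * HIDDEN
    idx = NOTES.index(seed_note)
    melody = [seed_note]
    for _ in range(length - 1):
        # Recurrent update: the new state mixes the current note (one-hot,
        # so the input term is just column `idx` of Wxh) with the old state.
        h = [math.tanh(Wxh[i][idx] + sum(Whh[i][j] * h[j] for j in range(HIDDEN)))
             for i in range(HIDDEN)]
        logits = [sum(Why[k][i] * h[i] for i in range(HIDDEN)) for k in range(VOCAB)]
        # Softmax over the vocabulary, then sample the next note.
        m = max(logits)
        exps = [math.exp(v - m) for v in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        idx = rng.choices(range(VOCAB), weights=probs)[0]
        melody.append(NOTES[idx])
    return melody

melody = generate("C", 16)
```

Sampling from the softmax (rather than always taking the most likely note) is what keeps the output varied; trained on real scores, the same loop would favour plausible continuations over noise.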
Obviously these aren't designed to be chart hits, but production music that can be purchased cost-efficiently for YouTubers, short films and other backing tracks. Although this remains the most common use of AI in music, some artists are looking to push it even further. Musician Taryn Southern, for example, has created an EP based purely on AI compositions generated using Amper Score. The platform generated a beat, melody and basic structure, before Southern rearranged the result and added lyrics. Could this form of collaboration become the future of mainstream music?
Rage Against the Machine Learning
As with any change, AI's disruption of the music industry is not without controversy, and there are those who believe that the human contribution is what makes music what it is. Indeed, there are still several limitations to what AI can achieve creatively. While one neural network succeeded at creating original compositions, another's ability to write lyrics was somewhat lacklustre. Despite being trained on a combination of lyrics (for structure) and literature (for vocabulary), its output was largely nonsense and included lines such as "I got monk that wear you good". Perhaps, like Southern's compositions, AI is best used as an accompanying tool. London-based start-up AI Music offers technology that 'shape-shifts' songs to adapt to the context in which they're played. This could be anything from tempo changes to match a listener's speed to remastering tracks to appeal to different moods and situations. IBM's Watson Beat, on the other hand, creates compositions that naturally fit the visuals of a video. In this context, as within many other industries, AI looks set to support our existing skillsets rather than replace jobs.
Whether you're looking to create collaborative technologies or revolutionise an industry, we may have a role for you. Take a look at our latest opportunities or get in touch with one of our specialist consultants to find out more.
07. February 2019