From vinyl to Tidal, we all know that the way we consume music has changed. Technological advances have made Steve Jobs' claim that he would put "1,000 songs in our pockets" seem antiquated, whilst Spotify's algorithms serve us tracks that we'll love before we've discovered them ourselves. But can the technologies that have brought us these advancements change the way we make music? Whether it's leading to new instruments or creating a song without our input, Artificial Intelligence is a game changer.
Make Some Noise

Until recently, the best way to imitate a sound was to experiment with the different settings on a keyboard. That is no longer the case, thanks to Google's research arm Magenta, which has created the NSynth Super, an instrument that generates sounds using Deep Neural Network techniques. These algorithms allow the NSynth not only to imitate a sound, but to keep learning the specific qualities of that pitch, producing something ever closer to reality. Users can then combine individual sounds to create something unique and entirely original. This is potentially just the beginning of a new wave of music, and in a decade's time the NSynth could have as big an impact as Auto-Tune.
Talking About AI Generation

Whilst we're still waiting to see the impact of instruments like the NSynth, machine-led compositions are becoming more and more commonplace. Using a Recurrent Neural Network (RNN), one can feed a model existing music and ask it to generate something new. By learning the patterns and rhythms of notes across a variety of compositions, the model should be able to output an original, melodic sequence. Although these may not be the most amazing tracks in the world, they do serve a purpose. Music production platform Jukedeck allows users to specify their requirements for a piece of music (genre, tempo, mood, length, instruments, etc.), which is then generated automatically using AI. Obviously these aren't designed to be chart hits, but production music that can be purchased cheaply as backing tracks for YouTube videos, short films and similar projects. Although this remains the most common use of AI in music, some artists are looking to push things even further. Musician Taryn Southern, for example, has created an EP based purely on AI compositions generated using Amper Score. The platform generated a beat, melody and basic structure, before Southern rearranged these elements and added her own lyrics. Could this form of collaboration become the future of mainstream music?
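The "learn the patterns, then generate something new" loop described above can be illustrated in miniature. A real RNN learns long-range structure with trained weights; the sketch below swaps that for a much simpler first-order Markov chain over note transitions, which captures the same idea of learning which notes tend to follow which, then sampling a fresh sequence. The note names and training melody are invented for illustration, and this is not Jukedeck's or any real system's method.

```python
import random
from collections import defaultdict

def train_transitions(melody):
    """Count which notes follow which in the training melody."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new sequence by following learned note transitions."""
    rng = random.Random(seed)
    sequence = [start]
    for _ in range(length - 1):
        options = transitions.get(sequence[-1])
        if not options:  # dead end: no observed continuation for this note
            break
        sequence.append(rng.choice(options))
    return sequence

# A toy training melody (invented for illustration).
melody = ["C", "D", "E", "C", "D", "G", "E", "D", "C", "E", "G", "C"]
model = train_transitions(melody)
print(generate(model, start="C", length=8, seed=42))
```

Every note in the generated sequence follows a transition actually observed in the training melody, which is why the output sounds vaguely "in the style of" its source; an RNN extends the same principle to much longer-range patterns of melody and rhythm.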
Rage Against the Machine Learning

As with any change, AI's disruption of the music industry is not without controversy, and there are those who believe that the human contribution is what makes music what it is. Indeed, there are still several limitations to what AI can achieve creatively. While one neural network succeeded in producing original compositions, another's attempt at writing lyrics was somewhat lacklustre. Despite being trained on a combination of lyrics (for structure) and literature (for vocabulary), its output was largely nonsense, including lines such as "I got monk that wear you good". Perhaps, like Southern's compositions, AI is best used as an accompanying tool. London-based start-up AI Music offers technology that 'shape-shifts' songs to adapt to the context in which they're played. This could be anything from tempo changes that match a listener's speed to remastered tracks that suit different moods and situations. IBM's Watson Beat, on the other hand, creates compositions that naturally fit the visuals of a video. In this context, as in many other industries, AI looks set to support our existing skillsets rather than replace jobs.

Whether you're looking to create collaborative technologies or revolutionise an industry, we may have a role for you. Take a look at our latest opportunities or get in touch with one of our specialist consultants to find out more.
07. February 2019