
As the name implies, machine learning is a form of AI whereby a computer algorithm analyses and stores data over time, then uses this data to make decisions and predict future outcomes. Deep learning is the next evolution of this: instead of relying on hand-crafted rules and constant human ‘supervision’, algorithms use layered ‘neural networks’ loosely analogous to the human brain. Put simply, lines of computer code can now, to some extent, be programmed to learn for themselves, then use those learnings to perform complex operations on a scale that far surpasses human abilities.
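To make that "learning from data" idea concrete, here's a minimal sketch (a toy illustration, not how any of the systems mentioned here actually work): a program that is never told the rule behind some example points, but gradually adjusts two numbers until its predictions match them.

```python
# Toy supervised learning: fit a line y = w*x + b to example data
# by gradient descent, then use it to predict an unseen value.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # points drawn from y = 2x + 1

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.01         # learning rate: how big each correction step is

for _ in range(5000):          # repeat many times...
    for x, y in data:
        error = (w * x + b) - y   # how wrong is the current guess?
        w -= lr * error * x       # nudge the parameters to shrink
        b -= lr * error           # the error a little each time

print(round(w, 2), round(b, 2))  # ends up close to 2.0 and 1.0
print(round(w * 10 + b, 1))      # predicts roughly 21 for unseen x = 10
```

Nobody wrote "multiply by 2 and add 1" into the code; the program recovered that rule from the examples alone, which is the essence of machine learning.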

Considered one of the biggest advancements in software development of the past few years, this technology is possible thanks to revolutionary advancements in computing power and data storage, and is now an integral part of day-to-day life: Siri and Alexa, for instance, learn from your usage to predict what you'll want next. Ever wondered why Facebook’s ‘People You May Know’ and those pesky suggested ads on social media are always so accurate? Spooky, huh? That’s before we even mention face recognition software, email spam filtering, image classification, fraud detection…
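Take spam filtering, one of the examples above. At its simplest, a learned filter just counts which words show up in known spam versus legitimate mail, then scores new messages against those counts. Here's a toy sketch of that counting principle (real filters, such as naive Bayes classifiers, are far more sophisticated, but rest on the same idea):

```python
# Toy spam filter: learn word counts from labelled examples,
# then classify a new message by which set its words resemble more.
from collections import Counter

spam = ["win free money now", "free prize claim now"]       # known spam
ham = ["meeting notes attached", "lunch at noon today"]     # known good mail

spam_counts = Counter(w for msg in spam for w in msg.split())
ham_counts = Counter(w for msg in ham for w in msg.split())

def is_spam(message):
    # Score the message by summing how often its words appeared
    # in each training set; higher spam score means "filter it".
    spam_score = sum(spam_counts[w] for w in message.split())
    ham_score = sum(ham_counts[w] for w in message.split())
    return spam_score > ham_score

print(is_spam("claim your free money"))   # True
print(is_spam("notes from the meeting"))  # False
```

The filter was never given a list of "bad words"; it inferred them from labelled examples, and retraining on new examples is how such filters keep up as spammers change tactics.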

Yep, machine learning algorithms are everywhere, and the field of music is no exception. For us everyday music listeners here in 2019, streaming services’ algorithms drive those lists of suggestions that help you hunt down new songs and artists you’d never normally discover. Last year, Google’s Magenta research division developed the open-source NSynth Super, a synthesiser powered by their NSynth algorithm designed to create entirely new sounds by learning the acoustic qualities of existing ones.

Computer-assisted composition, meanwhile, has been around since Brian Eno’s Koan-powered Generative Music 1 was released on floppy disc back in 1996. Amper Music takes this concept into the 21st century: it’s a service that uses deep learning to automatically compose computer-generated music for a piece of media based on the user’s choice of ‘style’ or ‘mood’. Content creator Taryn Southern famously composed an entire track with the AI-powered assistance of Amper Music, and it’s since amassed almost 2 million plays on YouTube.

Furthermore, this tech is being used to give music producers and performers a helping hand. Audionamix’s Xtrax Stems 2 uses cloud-based machine learning to deconstruct a fully mixed stereo track into a trio of constituent stems (vocals, drums and music) that can then be used for live remixes and DJ mashups.

Whether you think the machines will take over our studios or not, it’s clear that artificial intelligence technology is here to stay, and we’re witnessing the beginning of a music tech revolution.

Read more here:

https://www.musicradar.com/news/what-is-machine-learning-and-what-does-it-mean-for-music
