Google makes music with machine learning
NSynth Super is part of an ongoing experiment by Magenta: a research project within Google that explores how machine learning tools can help artists create art and music in new ways.
Technology has always played a role in creating new types of sounds that inspire musicians, from the sounds of distortion to the electronic sounds of synths.
Today, advances in machine learning and neural networks have opened up new possibilities for sound generation.
Building upon past research in this field, Magenta created NSynth (Neural Synthesizer).
It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then generate a completely new sound based on those characteristics.
Rather than combining or blending the sounds, NSynth synthesises an entirely new sound using the acoustic qualities of the originals, so users can get a sound that’s part flute and part sitar all at once.
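The idea of making a sound that is "part flute and part sitar" can be pictured as interpolation in the algorithm's learned latent space rather than mixing of audio waveforms. The sketch below is a hypothetical illustration, not the Magenta implementation: the arrays are random stand-ins for real encoder embeddings, and `blend_embeddings` is an assumed helper name.

```python
import numpy as np

# Hypothetical sketch of latent-space blending. In NSynth, each source
# sound would be encoded into an embedding and the blended embedding
# decoded back into audio; here we only model the blending step, with
# random arrays standing in for real embeddings (time steps x channels).
rng = np.random.default_rng(0)
flute_embedding = rng.normal(size=(125, 16))
sitar_embedding = rng.normal(size=(125, 16))

def blend_embeddings(a, b, weight):
    """Linearly interpolate two latent embeddings.

    weight=0.0 returns `a`, weight=1.0 returns `b`; values in between
    yield an embedding whose decoded audio would share acoustic
    qualities of both source sounds.
    """
    return (1.0 - weight) * a + weight * b

half_flute_half_sitar = blend_embeddings(flute_embedding, sitar_embedding, 0.5)
print(half_flute_half_sitar.shape)  # (125, 16)
```

The key design point this illustrates is that the blend happens before decoding: averaging two waveforms would just play both sounds at once, whereas interpolating embeddings produces a single new timbre.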
Since the release of NSynth, Magenta has continued to experiment with different musical interfaces and tools to make the output of the NSynth algorithm more easily accessible and playable.
As part of this exploration, they've created NSynth Super in collaboration with Google Creative Lab.
It’s an open source experimental instrument that lets musicians make music with completely new sounds generated by the NSynth algorithm from four different source sounds.
The experience prototype was shared with a small community of musicians to better understand how they might use it in their creative process.
Using NSynth Super, musicians have the ability to explore more than 100,000 sounds generated with the NSynth algorithm.
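One way a large palette like this can arise from only four source sounds is by assigning each source to a corner of a two-dimensional surface and weighting the four sounds by the touch position. The sketch below is an assumption about how such a mapping could work, not NSynth Super's actual code; `bilinear_weights` and `mix_corners` are illustrative names.

```python
import numpy as np

def bilinear_weights(x, y):
    """Map a touch position (x, y) in the unit square to four mixing
    weights, one per corner sound. The weights always sum to 1."""
    return np.array([
        (1 - x) * (1 - y),  # bottom-left corner
        x * (1 - y),        # bottom-right corner
        (1 - x) * y,        # top-left corner
        x * y,              # top-right corner
    ])

def mix_corners(corner_embeddings, x, y):
    """Blend four corner embeddings (stacked on axis 0) by position."""
    w = bilinear_weights(x, y)
    # Weighted sum over the corner axis -> one blended embedding.
    return np.tensordot(w, corner_embeddings, axes=1)

# Four random stand-ins for the embeddings of four source sounds.
rng = np.random.default_rng(1)
corners = rng.normal(size=(4, 125, 16))
blended = mix_corners(corners, 0.25, 0.75)
print(blended.shape)  # (125, 16)
```

Sampling many positions on such a surface, across many pitches and source-sound combinations, is one plausible way the space of playable sounds grows into the tens of thousands.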