After teaching AI to draw and paint with AutoDraw, Google has set its sights on conquering another art form: music.
The company’s AI research team, Google Magenta, announced a new project in April called Neural Synthesizer, or NSynth, which generates audio using deep neural networks. That technology will be demonstrated at Durham, North Carolina’s annual arts and technology festival, Moogfest, later this week.
To create music, NSynth draws on a dataset of sounds from individual instruments and blends them to create hybrid sounds. According to Magenta’s announcement, NSynth gives artists “intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
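Conceptually, the "blending" works by representing each instrument sound as a numeric embedding vector and interpolating between embeddings before decoding back to audio. Here is a minimal toy sketch of that interpolation step; the function name, vectors, and shapes are illustrative assumptions, not Magenta's actual NSynth API.

```python
import numpy as np

def blend_embeddings(emb_a, emb_b, mix=0.5):
    """Linearly interpolate between two instrument embeddings.

    mix=0.0 returns emb_a unchanged, mix=1.0 returns emb_b,
    and values in between yield a hybrid of the two timbres.
    (Hypothetical helper for illustration only.)
    """
    return (1.0 - mix) * emb_a + mix * emb_b

# Toy vectors standing in for, say, a flute note and a bass note.
flute = np.array([0.2, 0.8, -0.1])
bass = np.array([-0.5, 0.1, 0.9])

# An even 50/50 mix lands at the midpoint between the two embeddings.
hybrid = blend_embeddings(flute, bass, mix=0.5)
print(hybrid)
```

In the real system a neural decoder would turn the interpolated embedding back into a waveform; the point here is only that hybrid sounds come from arithmetic in the embedding space rather than from mixing raw audio.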