9th UGent Data Science Seminar with Dr. Sander Dieleman
Dr. Sander Dieleman (Google DeepMind): Generating music in the raw audio domain
Abstract
Realistic music generation is a challenging task. When machine learning is used to build generative models of music, they are typically trained on high-level representations such as scores, piano rolls or MIDI sequences, which abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so we set out to model music in the raw audio domain instead. I will discuss some of the advantages and disadvantages of this approach, and the challenges it entails.
Bio
Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He was previously a PhD student at Ghent University, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals. During his PhD he also developed Lasagne, a Theano-based deep learning library, and won a solo gold medal in Kaggle’s “Galaxy Zoo” competition and a team gold medal in the first National Data Science Bowl. In the summer of 2014 he interned at Spotify in New York, where he worked on implementing audio-based music recommendation with deep learning at industrial scale.
A sandwich lunch will be served after the seminar. Registration is required if you wish to take part.