This is the second part of my conversation with Dogac Basaran, a post-doctoral researcher at CNRS, the French National Centre for Scientific Research. If you missed the first part, you might want to go back and listen to the previous episode on Signal Processing Basics for Audio.
Today, in part 2 of 2, we explore Dogac’s research into audio fingerprinting, alignment, and melody extraction. By analysing prominent peaks in the magnitude spectrum and their relative spacing in time and frequency, Dogac shows us how it’s possible to create audio fingerprints that can detect and match audio recordings, even when they are noisy or incomplete. These fingerprints have a variety of uses, including aligning multiple recordings of the same speech or performance, and identifying a particular recording.
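For listeners who want to experiment with the peak-pair idea described above, here is a minimal Python sketch. It is not Dogac’s actual system: the function name, parameters, and use of numpy/scipy are my own choices for illustration. It simply keeps the strongest spectrogram peak in each time frame, pairs each peak with a few later ones, and hashes their frequencies and time spacing.

```python
import numpy as np
from scipy import signal

def fingerprint(audio, sr, fan_out=5):
    """Return a set of (hash, anchor_time) pairs for a mono signal (illustrative sketch)."""
    # Magnitude spectrogram: rows are frequency bins, columns are time frames
    freqs, times, spec = signal.spectrogram(audio, fs=sr, nperseg=1024, noverlap=512)

    # Crude "constellation": keep only the strongest peak in each time frame
    peak_bins = spec.argmax(axis=0)
    peaks = list(zip(peak_bins, range(spec.shape[1])))

    # Pair each anchor peak with the next few peaks. The hash encodes
    # (anchor freq bin, target freq bin, time spacing), which is unaffected
    # by overall level and can still match when parts of the audio are missing.
    hashes = set()
    for i, (f1, t1) in enumerate(peaks):
        for f2, t2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.add(((int(f1), int(f2), int(t2 - t1)), int(t1)))
    return hashes
```

Matching two recordings then reduces to counting shared hashes that agree on a consistent time offset, which is why a noisy or partial copy can still be identified; a real system would use many peaks per frame and tolerances rather than this single-peak simplification.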
We also discuss query by humming, a state-of-the-art technique that takes an audio fingerprint of a person humming a melody and matches it against a database of music recordings. Dogac also explains why learning to build neural networks has become an essential skill in this field.
Links from the show:
- Full show notes: http://bit.ly/voicetechpodcast
- Dogac Basaran on Github: https://github.com/dogacbasaran
- Dogac Basaran’s websites: https://dbasaran.wp.imt.fr/ and http://dogacbasaran.com/
- Signal Processing MOOC on Coursera: https://www.coursera.org/learn/dsp
- MATLAB: https://matlab.mathworks.com/
Find us here: