Dr Chris Mitchell is the CEO and Founder of Audio Analytic, which develops sound recognition software that adds context-based intelligence to consumer technology products. In our conversation, you’ll learn what state-of-the-art sound recognition systems can do, the types of sound events that are typically recognised, which consumer products they’re integrated into, and the many benefits and new possibilities the technology affords to developers and users.
We discover the difference between sound recognition and speech recognition, how sound recognition provides the all-important context for voice-enabled devices to make the right decisions, and how smart devices can take advantage of this contextual knowledge. Then we dive into some of the technical details of how it all works, including ‘better than real-time processing’, edge computing versus the cloud, the need to train custom acoustic models, and how these machine learning models can run on low-resource devices like headphones using TinyML. Chris briefly explains the process of integrating the AI3 framework into your products, then we tackle the all-important question of data privacy and security.
Many of the smart devices of the future will rely on sound recognition to understand the context of their environments. Chris and his team are at the cutting edge of the sound recognition field and are long-time experts in the domain, so there’s no better person to introduce us to this important technology.
Links from the show:
- Audio Analytic: https://www.audioanalytic.com
- Hive Hub case study: http://bit.ly/2IOlA9E
Sponsors:
- Dabble Lab: https://youtube.com/dabblelab
- Manning books: https://www.manning.com, 40% off all books with code: podvoicetech19
- Voice-Connected Home 2019: https://bit.ly/2IZuVwc, 20% off with discount code: VoiceTech
Find us here: