The Emotion Machine – Rana Gujral, Behavioral Signals – Voice Tech Podcast ep.035


Episode description

Rana Gujral is the CEO of Behavioral Signals, a company that allows developers to add speech emotion and behavioral recognition AI to their products. We discuss the many reasons why it’s important that machines can recognise and express human emotion, including improving human-computer interaction and boosting business KPIs.

We discover why voice is the best modality for analysing emotion, then highlight some of the many business and consumer use cases for this technology. Finally, we dive into the signal processing pipeline that makes it all work, and Rana shares his advice for working with this technology.

This is one of my favourite conversations of the year so far. It’s a topic close to my heart, having previously worked on voice emotion transformation in the lab, and I feel it’s one of the most important technologies to close the gap between humans and machines. Rana is also a very articulate and inspirational speaker, which makes this an unmissable conversation.

Highlights from the show

  • Why is it important that machines can read, interpret, replicate and experience emotions? It’s an essential element of intelligence, and users will demand and require increasingly greater intelligence from the machines they interact with.
  • How does emotion analysis improve human-computer conversations? It helps to establish conversational context and intent in voice interaction.
  • Why is having theory of mind important for machines to understand us? It lets machines emulate empathy, which is an essential component of natural conversation.
  • People treat voice assistants differently depending on how the technology communicates – more empathetic communication produces different outcomes.
  • How does adding emotion AI to your product help your business? Knowing what is being said and how it’s being said allows you to make decisions that improve your KPIs.
  • Why is voice the best modality for analysing emotion? Our eyes can deceive us: humans are adept at masking their emotions in their facial expressions, but much less so in their voices.
  • What are the use cases for voice emotion analysis? Improving empathy in social robotics platforms, making voice assistants more relatable, boosting sales in call centers, reducing suicide rates in helpline callers…
  • What’s the signal processing pipeline of the system? Data collection, signal analysis, and modeling.
  • Advice for people looking to enter the research field of emotion analytics? Find a niche area that improves the quality of life for people.
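To make the "signal analysis" stage of the pipeline more concrete, here is a minimal, illustrative sketch of frame-level prosodic feature extraction (short-time energy and zero-crossing rate), the kind of low-level acoustic analysis that typically feeds an emotion model. This is a generic example using only NumPy; the function name, frame sizes, and synthetic signal are assumptions for illustration, not Behavioral Signals' actual pipeline.

```python
import numpy as np

def prosodic_features(signal, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Compute simple frame-level prosodic features.

    Returns one (energy, zero-crossing rate) pair per overlapping frame.
    Energy tracks loudness; ZCR loosely correlates with pitch/noisiness.
    """
    frame = int(sample_rate * frame_ms / 1000)   # samples per frame
    hop = int(sample_rate * hop_ms / 1000)       # samples between frame starts
    n_frames = 1 + (len(signal) - frame) // hop
    feats = []
    for i in range(n_frames):
        w = signal[i * hop : i * hop + frame]
        energy = float(np.mean(w ** 2))                          # short-time energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(w)))) / 2)    # sign flips per sample
        feats.append((energy, zcr))
    return np.array(feats)

# Synthetic 1-second "utterance": a 220 Hz tone whose amplitude rises,
# mimicking a speaker getting louder (arousal increasing).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 220 * t)
features = prosodic_features(signal, sr)
print(features.shape)  # one (energy, zcr) row per 10 ms hop
```

In a real system, sequences of frame-level features like these would be fed into the modeling stage (e.g. a recurrent or transformer classifier) to predict emotional state over time.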

Links from the show

