
018 – The Emergence of Hearables Panel with Andy Bellavia

The next episode of the Future Ear Podcast is a recording of the panel that Andy Bellavia and I participated in during Project Voice. The panel was moderated by the host of the event, Bradley Metrock, and was held on January 14th. During the panel, we touched on a wide variety of hearables-specific topics, largely centered on the intersection of hearables and voice technology.

The panel begins with a high-level overview of what exactly hearables are and how they differ from something like a hearing aid. As I’ve written about before, the term was coined by wireless analyst Nick Hunn, and it refers to any ear-worn device with a degree of intelligence, aka a “smart device.” By that definition, hearing aids are actually among the most sophisticated hearables out there, as today’s cutting-edge, high-end hearing aids come packed with more intelligent features than most other ear-worn devices. We’re undergoing a shift toward intelligence-based, ear-worn consumer technology, which means most new Bluetooth devices on the market would be considered hearables, as the category as a whole is becoming more intelligent.

As this panel was recorded the week after CES, where the Bluetooth Special Interest Group (SIG) introduced the new hearables-specific Bluetooth LE (Low Energy) Audio protocol, much of the conversation revolves around this new protocol and its implications. Just prior to the panel, I had written a piece for Voicebot covering the protocol, so the new features we can expect from it, such as broadcast mode, were still fresh in my mind.
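To make the broadcast-mode idea concrete, here is a minimal conceptual sketch in Python. It is a toy model, not the actual Bluetooth stack or any real API, of the one-to-many topology LE Audio enables: one source transmitting to any number of listeners, in contrast to classic Bluetooth’s point-to-point pairing.

```python
# Toy model of LE Audio broadcast mode (illustrative only; not the
# Bluetooth API). One transmitter streams to any number of receivers,
# and the source never pairs with or counts its listeners.

class Broadcaster:
    """One audio source, e.g. a TV or an airport gate announcement system."""
    def __init__(self, stream_name):
        self.stream_name = stream_name
        self.listeners = []  # unlimited; no pairing handshake needed

    def join(self, hearable):
        # Receivers simply tune in to the broadcast stream.
        self.listeners.append(hearable)

    def transmit(self, audio_frame):
        for hearable in self.listeners:
            hearable.play(audio_frame)


class Hearable:
    def __init__(self, owner):
        self.owner = owner

    def play(self, audio_frame):
        print(f"{self.owner}'s earbuds playing: {audio_frame}")


# One broadcast stream, many simultaneous listeners: the key difference
# from classic Bluetooth, which links one source to one (or two) sinks.
gate_audio = Broadcaster("Airport Gate 12")
for name in ["Ana", "Ben", "Chloe"]:
    gate_audio.join(Hearable(name))
gate_audio.transmit("Flight 204 now boarding")
```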

After covering the new Bluetooth protocol at length, we each address a question from Bradley about whether we see hearables manufacturers incorporating “master assistants” into their devices, or whether they’ll use their own type of voice assistant. As Andy points out, we’re a ways away from hearables being able to host a voice assistant on the device itself rather than accessing one via the cloud; from a hardware standpoint, there are real compute and power constraints on running a voice assistant directly on the device. There may be a middle ground, similar to what we’re seeing with Starkey’s Livio AI hearing aid, which uses a “Thrive Assistant” that fields local, hearing-aid-specific questions on the device and sends general queries to Google Assistant in the cloud.
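Here is a hedged sketch of what that middle-ground routing might look like in practice. The intent table and function names are hypothetical illustrations, not Starkey’s or Google’s actual interfaces.

```python
# Hypothetical hybrid routing: device-specific intents are handled
# on-device, everything open-ended is forwarded to a cloud assistant.

LOCAL_INTENTS = {
    "volume up": "increase_gain",
    "battery status": "report_battery",
    "switch program": "next_memory_profile",
}

def route_query(utterance: str) -> str:
    text = utterance.lower().strip()
    for phrase, action in LOCAL_INTENTS.items():
        if phrase in text:
            # Cheap on-device match: no radio round trip, low latency,
            # and minimal battery cost.
            return f"[on-device] executing {action}"
    # Anything open-ended exceeds on-device compute; hand off to the cloud.
    return cloud_assistant_query(text)

def cloud_assistant_query(text: str) -> str:
    # Placeholder for a round trip to a cloud assistant (e.g. via the
    # paired phone). Stubbed here so the sketch runs standalone.
    return f"[cloud] answering: {text!r}"

print(route_query("Volume up, please"))
print(route_query("What's the weather tomorrow?"))
```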

Finally, we touch on some aspects of biometric data and the opportunities that this bio-data presents to a voice assistant. I believe this to be one of the most powerful combinations of hearables and voice technology. The reason is that as we collect more and more data, the biggest consumer question will likely be what exactly to do with all of it. An intelligent voice assistant that can monitor and assess the data being collected could then play the role of a health coach, nutritionist, or nurse (to name a few).
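As a toy illustration of that coaching role, here is a short sketch of an assistant that watches incoming biometric readings and turns them into spoken-style prompts. The thresholds and the speak() stub are invented for the example.

```python
# Invented example: turn ear-worn biometric readings into coaching prompts.
from statistics import mean

RESTING_HR_CEILING = 100  # beats per minute; illustrative threshold only

def coach(heart_rate_samples, steps_today, step_goal=8000):
    avg_hr = mean(heart_rate_samples)
    if avg_hr > RESTING_HR_CEILING:
        speak(f"Your average heart rate is {avg_hr:.0f} bpm. "
              "Consider a short breathing break.")
    if steps_today < step_goal:
        speak(f"You're {step_goal - steps_today} steps from today's goal. "
              "A ten-minute walk would close the gap.")

def speak(message: str):
    # Stand-in for a text-to-speech call on the hearable or phone.
    print("Assistant:", message)

coach(heart_rate_samples=[102, 108, 99, 105], steps_today=5200)
```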

-Thanks for Reading-

To subscribe to the podcast: Apple Podcasts, Spotify, Anchor, Google Player

To add to your flash briefing, click here – then say, “Alexa, play news”

To listen on your Google Assistant device, enable the Google Action here 
