
Today’s episode of the Future Ear Radio Podcast features Andy Bellavia of Knowles Corporation. This is the second half of our hearables discussion, and while part one was largely centered around how the stage has been set for the next decade with hearables, part two explores what types of use cases might emerge now that ear-computers are becoming so ubiquitous.
We start by touching on the fact that hearables are unique because they arrive with a powerful legacy use case: audio consumption. Since the vast majority of people wearing AirPods and the like use them to stream music or podcasts, there’s no inherent demand from the user base for the devices to do much beyond delivering high-quality audio. That makes these devices act almost like “Trojan Horses,” quietly gaining more features and capabilities over time.
Housing a voice assistant inside a hearable also improves audio consumption itself. For starters, it should become easier to navigate and control your audio through your conversational assistant. Further down the line, voice assistants might offer better ways to curate and discover new audio content, drawing on the contextual information they’re learning from your consumption habits.
Moving beyond audio consumption, we start to dive into GPS-based application opportunities. The assistant should be privy to much of the information contained within your smartphone, such as your geo-location, allowing for conversational queries tied to where you are. For example, we talk about asking for directions to the nearest public transit stop with an elevator. Another example: you’re walking near a restaurant that intrigues you and want to ask your assistant for reviews, or even to book a table. Much of this would be centered around your geo-location, and therefore not particularly relevant for developers who have been building experiences for non-portable smart speakers.
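To make that concrete, here’s a minimal Python sketch of the “nearest transit stop with an elevator” query. The stop records, coordinates, and helper functions are all hypothetical stand-ins for whatever maps or transit API an assistant back end would actually call.

```python
import math

# Hypothetical point-of-interest records an assistant back end might hold.
# In practice this data would come from a maps or transit API.
TRANSIT_STOPS = [
    {"name": "Clark/Lake", "lat": 41.8858, "lon": -87.6309, "has_elevator": True},
    {"name": "Washington/Wells", "lat": 41.8827, "lon": -87.6374, "has_elevator": False},
    {"name": "Jackson", "lat": 41.8782, "lon": -87.6296, "has_elevator": True},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_accessible_stop(user_lat, user_lon):
    """Answer 'nearest transit stop with an elevator' from the user's geo-location."""
    candidates = [s for s in TRANSIT_STOPS if s["has_elevator"]]
    return min(candidates, key=lambda s: haversine_km(user_lat, user_lon, s["lat"], s["lon"]))

stop = nearest_accessible_stop(41.8819, -87.6278)  # user somewhere in the Loop
print(f"Nearest stop with an elevator: {stop['name']}")
```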
One of the key ideas we discuss is the concept of the assistant working on the user’s behalf: gathering the back-end information currently stored on websites and within apps, and aggregating all that disparate info into a single answer. We use the video above as a reference for how intelligent hearables might take the information stored in Home Depot’s app a step further.
Since Home Depot’s store layouts vary from location to location, there isn’t much consistency in information like bin locations for items. That said, each store does list its bin locations, so a voice assistant could rely on your geo-location to determine which store you’re in and then quickly answer, “where are the claw hammers?” The assistant would know you’re looking for the back-end information specific to the store you’re standing in. We apply this concept to transit too: you could ask, “when’s the next train,” and your assistant would fetch that info from whichever app houses that particular train’s schedule, such as Chicago’s CTA app.
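Here’s the same idea applied to the bin-location example, as a hedged sketch: the user’s geo-location resolves which store they’re standing in, and that store’s own data answers the question. The store IDs, coordinates, and bin data are invented for illustration; a real assistant would query the retailer’s back end.

```python
# Hypothetical per-store data; a real assistant would hit the retailer's API.
STORES = {
    "chicago-north": {"lat": 41.9320, "lon": -87.6520},
    "chicago-south": {"lat": 41.7500, "lon": -87.6050},
}

BIN_LOCATIONS = {
    # (store_id, item) -> aisle/bay, which varies store to store
    ("chicago-north", "claw hammer"): "Aisle 14, Bay 3",
    ("chicago-south", "claw hammer"): "Aisle 9, Bay 12",
}

def resolve_store(user_lat, user_lon):
    """Pick the nearest store; a flat-earth approximation is fine at city scale."""
    return min(
        STORES,
        key=lambda sid: (STORES[sid]["lat"] - user_lat) ** 2
                        + (STORES[sid]["lon"] - user_lon) ** 2,
    )

def where_is(item, user_lat, user_lon):
    """Answer 'where are the claw hammers?' for whichever store the user is in."""
    store_id = resolve_store(user_lat, user_lon)
    location = BIN_LOCATIONS.get((store_id, item))
    return f"{item} at {store_id}: {location or 'not stocked here'}"

print(where_is("claw hammer", 41.9301, -87.6490))  # user near the north store
```

The same pattern covers the transit query: geo-location picks the relevant stop or line, and the schedule lookup happens against whichever back end holds that data.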
We’re really only discussing the tip of the iceberg here in terms of what’s possible when you house a voice assistant inside hearable devices. As we head into 2020, we’re sure to see a whole lot of creative new ways in which various technologies are fused together to create even more compelling use cases and experiences. I expect Apple, Google, Amazon and Samsung each to take a different approach to their hearables and voice assistants, and to how the two work together.
-Thanks for Reading-
Dave