
The Unexpected #VoiceFirst Power Users


In-The-Ear Assistants

There was a really good post published last week in the #VoiceFirst world by Cathy Pearl, the author of the book “Designing Voice User Interfaces.” In her post, she goes through some of the positive effects that smart assistants are having on the disabled and elderly communities. The unique and awesome thing about the voice user interface is that it enables demographic groups that had previously been left behind by past user interfaces. Due to physical limitations or the deterioration of one’s senses and dexterity, mobile computing (and all prior generations of computing) is not very accessible to these groups of people. Additionally, Voice is being adopted by all ages, from young children to elderly folks, in large part because there is virtually no learning curve. “Just tell Alexa what you want her to do.”

Cathy’s article dovetails nicely into what I see as being the single biggest value-add that hearing aids and hearables have yet to offer – smart assistant integration. As I wrote about back in January, one of the most exciting announcements this year was Amazon’s Mobile Accessory Kit (AMAK). This software kit makes it dramatically easier for OEMs, such as hearable and hearing aid manufacturers, to integrate Alexa into their devices.

(I should note that, as of now, “integration” means a pass-through connection from the phone to the audio device. In the future, as our mini ear-computers become more independent from our phones and more capable as standalone devices, we should see full, on-device smart assistant integration.)

AMAK will help accelerate the smart assistant integration that’s already taking place in the hearables market, which now includes AirPods (Siri), Bose QC35 (Google Assistant), Bragi Dash Pro (Siri/Google/Alexa), Jabra Elite 65t (Siri/Google/Alexa), NuHeara IQ Buds (Siri/Google) and a handful of others. Hearing aids will soon see this type of integration too. Starkey CTO Achin Bhowmik alluded to being able to activate and engage smart assistants with taps on the hearing aids, verbal cues and head gestures. Given the partnerships between hearing aid and hearable companies (e.g. Starkey and Bragi) and full-on acquisitions (e.g. GN ReSound owning Jabra), it seems we’ll see this integration in all of our new “connected” hearing aids too.

A Convergence of Needs


For our aging population, there’s a convergence of needs. For starters, one out of every three US adults 65 and older has some degree of hearing loss. Add in the fact that, since January 2011, roughly 10,000 baby boomers have turned 65 every day, and that by 2029, 18% of Americans will be 65 or older. Our population is living longer, the baby boomers are all surpassing 65, and we’re being exposed to levels of noise pollution never seen before. Mix that all together and we’re looking at an increasing number of people who could benefit from a hearing aid.

Next, it’s important to consider what happens to the day-to-day tasks we depend on technology for when a new interface arrives. I mentioned this in a previous post, in which I wrote:

“Just as we unloaded our various tasks from PCs to mobile phones and apps, so too will we unload more and more of what we currently depend on our phones for, to our smart assistants. This shift from typing to talking implies that as we increase our dependency on our smart assistants, so too will we increase our demand for an always-available assistant(s).”

My point was that just about everything you now depend on your phone for – messaging, maps, social media, email, ordering food, ridesharing, checking the weather, stock prices, scores, fantasy sports, and so on – will likely manifest itself in some way via Voice. This is a big deal in general, but for our aging and disabled populations, it can be truly life-changing.

That was the aha! moment for me in reading Cathy’s post. At this point, the value proposition for smart assistants is much more compelling for these communities than for someone like me, who has no problem computing via a smartphone. I certainly enjoy using my Alexa devices, and in some instances they cut down on friction, but there’s nothing they currently offer that I can’t otherwise do on my phone.

It’s similar to why mobile banking is growing like crazy in places like Kenya and India. For a large portion of people in those countries, there is no legacy, incumbent system from which to migrate, unlike here in the US, where the vast majority of people have traditional bank accounts. Along the same vein, many elderly people and those with physical limitations would not be migrating from an existing system, but rather adopting a new one from scratch that yields entirely new value.

If I’m already a hearing aid candidate or considering a hearable, smart assistant integration makes owning this type of device that much more compelling. Even in their current crude, primitive state, smart assistants provide brand new functionality and value for those who struggle to use a smartphone. There’s an unmet need in these communities to connect and empower oneself via the internet, and smart assistants supply a solution.

The Use Cases of Today


Building off this idea that we’re just shifting tasks to a new interface, let’s consider messaging. As Cathy highlighted in her post, we’re already seeing some really cool use cases being deployed by assisted living facilities like Front Porch in California, where the facility is outfitting residents with Amazon Echos. The infrastructure is being built out to facilitate audio messaging between residents, staff and residents’ families.

Taking it one step further, if residents have a smart assistant integrated into their hearing aids, they can seamlessly communicate with fellow residents, staff members and family anywhere in the assisted living facility – and because the assistant is housed directly in the ear, they can actually hear and understand its responses. Whereas I prefer to text, audio messaging mediated by Alexa or Siri is a far more conducive messaging system for these groups.

The Alexa skill My Life Story is built specifically for those suffering from Alzheimer’s. It allows a user’s family members to program “memories” for their loved one, which Alexa reads back to help jog the user’s recollection. Again, putting this directly in the hearing aid allows this functionality to travel anywhere the user goes, empowering them to be more mobile while remaining tethered to something they may come to depend on. (Reminds me of this scene from the movie “50 First Dates.”)

Another great example of how smart assistants can provide a level of independence for the user is this story describing how a stroke victim uses smart assistants. The victim’s family created routines, such as saying “Alexa, good morning,” which triggers the connected devices in her room to open the blinds, turn the lights to 50%, and turn on the TV. “Alexa, use the bathroom” turns her room’s lights yellow to notify the staff that she needs to use the bathroom. So, while connected light bulbs and TVs might seem excessive or unnecessary for you or me, they serve as tools to help restore another person’s dignity.

These are just a few of many use cases tailored to these communities, ones that tie in with the broader use cases that already exist for the masses, such as ordering an Uber, streaming audio content, checking into flights, answering questions, checking the weather, or accessing any of the other 30,000+ skills available via Alexa.

The Use Cases of Tomorrow

We’re already seeing network effects broadly take hold with smart assistants, and I think it’s fair to say we’ll see pockets of network effects within specific segments of the total user base too. If a certain segment has disproportionately high numbers of users, or higher engagement levels, you can expect software developers to migrate toward creating more functionality for those pockets of users. There’s more money and incentive in catering to the power users.

Where software development will get really interesting is when the accompanying hardware matures too – in this instance, the hearing aids and hearables. CNET’s Roger Cheng and Shara Tibken dove into what a more technologically mature hearing aid might look like with Starkey’s Achin Bhowmik. In this excerpt, Bhowmik describes the hearing aid’s transformation into a multipurpose device:

“Using AI to turn these into real-time health monitoring devices is an enormous opportunity for the hearing aid to transform itself,” he says.

By the end of this year, Starkey also will be bringing natural language translation capabilities to its hearing aids. And it plans to eventually integrate optical sensing to detect heart rate and oxygen saturation levels in the blood. It’s also looking at ways to noninvasively monitor the glucose levels in a user’s blood and include a thermometer to detect body temperature.

So, the hardware will supply a whole host of new capabilities, rife with opportunities for developers to overlay smart assistant functionality on top. Going back to the idea of a convergence of needs: if I’m 80 years old with hearing loss, diabetes and dexterity issues, then a hearing aid that provides amplification, monitors my glucose levels, and houses a smart assistant that interprets those readings and gives me the functionality I currently derive from my iPhone is a very compelling device.

A single device that serves multiple roles and meets a number of unmet needs simultaneously. Empower these communities with something like that, and they will adopt smart assistants en masse. Finally, an all-inclusive tool to connect those on the sidelines to the digital age.

-Thanks for Reading-

