audiology, Daily Updates, Future Ear Radio, hearables, hearing aids

Signia’s Acoustic-Motion Sensors (Future Ear Daily Update 9-18-19)


Much of what I am excited about right now in the world of consumer technology broadly, and in wearables/hearables/hearing aids more narrowly, is the innovation happening at the component level inside the devices. I’m still reeling a bit from Apple’s U1 chip embedded in the iPhone 11 and the implications of it that I wrote about here. New chips, new wireless technologies, new sensors, new ways to do cool things. Now, we can add another one to the list – acoustic-motion sensors – which will be included in Signia’s new line of hearing aids, Xperience.

Whereas video and camera systems rely on optical motion detection, Signia’s hearing aids will use their microphones and sensors to assess changes in the acoustic environment. For example, if you move from sitting at a table speaking face to face with one person to standing around a bar in a group setting, the idea is that the motion sensors would react to the new acoustic setting and automatically adjust the microphones accordingly, from directional to omnidirectional settings and blends in between.
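To make that concrete, here’s a toy sketch of how motion and acoustic cues might drive the mic mode. To be clear, this is my own illustration, not Signia’s actual algorithm; every threshold and name in it is invented:

```python
import numpy as np

def choose_mic_mode(accel_window, talker_angles):
    """Toy decision rule: pick a mic mode from motion + acoustic cues.

    accel_window  : recent accelerometer magnitudes (in g), ~1 s of samples
    talker_angles : estimated directions-of-arrival of speech (degrees)
    """
    moving = np.std(accel_window) > 0.15             # gait makes accel variance jump
    spread = np.ptp(talker_angles) if len(talker_angles) else 360.0

    if moving or spread > 120:                       # walking, or talkers all around
        return "omnidirectional"
    if spread < 30:                                  # one talker, face to face
        return "directional"
    return "blended"                                 # in between: mix both patterns

# Seated and steady with a single talker dead ahead -> directional
print(choose_mic_mode(np.random.normal(1.0, 0.02, 50), [0, 5, -3]))
```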

These acoustic-motion sensors are part of a broader platform that simultaneously runs two processing systems, Dynamic Soundscape Processing and Own Voice Processing. The Own Voice processor is really clever. It’s “trained” for a few seconds to identify the user’s voice and differentiate it from other people’s voices that will inevitably be picked up through the hearing aid. This is important, as multiple studies have found that a high number of hearing aid wearers are dissatisfied with the way their own voice sounds through their hearing aids. Signia’s Own Voice processor was designed specifically to alleviate that effect.

Now, with acoustic-motion sensors constantly monitoring changes in the acoustic setting, the Dynamic Soundscape processor can be alerted by the sensors to adjust on the fly and provide a more natural-sounding experience. The hearing aid’s two processors then communicate with one another to determine which processor each sound should feed into. If you ask me, that’s a lot of impressive functionality and moving pieces for a device as small as a hearing aid to handle, but it’s a testament to how sophisticated hearing aids are rapidly becoming.
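Here’s a rough sketch of that routing idea, with a crude spectral “fingerprint” standing in for whatever Signia actually trains on (the similarity threshold and the few-second training window are stand-ins too):

```python
import numpy as np

def fingerprint(frame, n_bands=32):
    """Coarse spectral shape of one audio frame, as a unit vector."""
    mag = np.abs(np.fft.rfft(frame))
    bands = np.array([b.mean() for b in np.array_split(mag, n_bands)])
    return bands / (np.linalg.norm(bands) + 1e-9)

def train_own_voice(frames):
    """'Train' for a few seconds: average the wearer's spectral fingerprint."""
    return np.mean([fingerprint(f) for f in frames], axis=0)

def route(frame, own_voice_print, threshold=0.95):
    """Send own-voice frames to one processor, everything else to the other."""
    similarity = float(fingerprint(frame) @ own_voice_print)
    return "own_voice_processor" if similarity > threshold else "soundscape_processor"
```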

I’ve written extensively about the innovation happening inside the devices, and what’s most exciting is that the more I learn, the more I realize we’re really only getting started. A quote that still stands out to me from Brian Roemmele’s U1 chip write-up is this:

“The accelerometer systems, GPS systems and IR proximity sensors of the first iPhone helped define the last generation of products. The Apple U1 Chip will be a material part of defining the next generation of Apple products.” – Brian Roemmele

To build on Brian’s point, it’s not just the U1 chip; it’s all of the fundamental building blocks being introduced that are enabling this new generation of functionality. Wearable devices in particular are poised to explode in capability because the core pieces required for all of the really exciting stuff that’s starting to surface are maturing to the point where it has become feasible to implement them in devices as small as hearing aids. There is so much more to come with wearable devices as the components inside them continue to be innovated on, which will manifest in cool new capabilities, better products, and ultimately, better experiences.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

audiology, Daily Updates, Future Ear Radio, hearables

Sonova’s SWORD Chip (Future Ear Daily Update 8-22-19)


Last May, I wrote a long essay about how the most interesting innovation occurring in the world of hearing aids and hearables is the stuff happening inside the devices. Nearly all the cool new features and capabilities emerging with our smart ear-worn computers are by and large derived from the breakthroughs transpiring underneath the hood. As I wrote in that essay, the core of the innovation is based around what 3D Robotics’ CEO, Chris Anderson, describes as the “Peace Dividends of the Smartphone Wars.”

It’s estimated that more than 7 billion smartphones have been produced since 2007, which means tens or hundreds of billions of the components comprising the guts of those devices have been produced as well. The components contained in the smartphone supply chain are the same components housed in the vast majority of consumer electronic devices, ranging from drones, to Roombas, to TVs, to smart speakers, to Apple Watches, to hearables. Components such as microphones, receivers, antennas, sensors, and DSP chips have become dramatically cheaper and more accessible for OEMs, as they represent the aforementioned “peace dividends.”

A good example of this phenomenon is Sonova’s SWORD (Sonova Wireless One Radio Digital) chip. When hearing aids began to become Bluetooth enabled, the hearing aid manufacturers initially opted to work directly with Apple to use its proprietary 2.4 GHz Bluetooth low energy protocol, creating a new class of Made for iPhone (MFi) hearing aids. The upside for hearing aid manufacturers to use this protocol, rather than Bluetooth Classic, was that it represented a battery-efficient solution that paired very well binaurally, so long as the hearing aids were paired to an iPhone. That’s Apple in a nutshell: if you’re part of its ecosystem, it works great; if not, don’t bother.

So, in 2018, when all of Sonova’s competitors had their MFi hearing aids on the market, some for years, the market expected that Sonova would soon release its own, long-awaited line of MFi hearing aids. Instead, Sonova released the Audeo Marvel line and incorporated a new chip, SWORD, which supported five Bluetooth protocols, allowing users to pair to iPhones using the MFi protocol (Bluetooth low energy) or to Android handsets using Bluetooth Classic.

One of the reasons the MFi protocol was initially more attractive relative to Bluetooth Classic is that Bluetooth low energy is inherently more power efficient and capable of streaming binaurally. Sonova’s SWORD solved the power dilemma with a chip design that included a power management system relying on voltage converters. It solved the binaural issue by allocating one of the five Bluetooth protocols to its own proprietary Binaural VoiceStream Technology (BVST).

This week, Sonova took it a step further by ushering in Marvel 2.0 and allowing for RogerDirect connectivity. This allows for the various Roger microphone transmitters to directly pair with the Roger receiver built into the Marvel hearing aids. This is done by allocating one of the five BT protocols to the Roger system. Abram Bailey at Hearing Tracker wrote a great recap on this new firmware update that will soon become available to all Marvel owners through their hearing care provider. You can also check out Dr. Cliff Olson’s video on the update below.
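To picture how one radio juggles all of those roles, here’s a toy model of the slot allocation. This is purely my own construction for illustration (the protocol names and the slot mechanics are guesses), not Sonova’s firmware:

```python
from dataclasses import dataclass, field

PROTOCOLS = {"mfi_ble", "ble", "bt_classic", "bvst", "roger"}  # illustrative names

@dataclass
class SwordRadio:
    """Toy model of a multi-protocol radio with a fixed number of slots."""
    max_slots: int = 5
    active: set = field(default_factory=set)

    def allocate(self, protocol: str) -> None:
        if protocol not in PROTOCOLS:
            raise ValueError(f"unknown protocol: {protocol}")
        if len(self.active) >= self.max_slots:
            raise RuntimeError("all protocol slots are in use")
        self.active.add(protocol)

radio = SwordRadio()
radio.allocate("bt_classic")  # stream from any Android handset
radio.allocate("bvst")        # proprietary ear-to-ear binaural streaming
radio.allocate("roger")       # Marvel 2.0: direct link to Roger transmitters
print(radio.active)
```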

All of the innovation occurring inside the devices might not be the most glamorous (we’re talking about computer chips, after all), but it’s what all the cool new features of today’s hearing aids and hearables are predicated on. That’s why I am so intrigued by what Sonova is doing with SWORD: it makes the device so much more capable. If you’re curious about which features on the horizon are most likely to appear next with our little ear-worn computers, start looking at what’s going on underneath the hood of the devices and you’ll get an idea of what’s feasible based on the component innovation.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


audiology, Daily Updates, Future Ear Radio

What If Treating Hearing Loss Mitigates Cognitive Decline? (Part 2) (Future Ear Daily Update 7-12-19)


There have been a number of studies conducted in recent years to try and understand if there is a link between hearing loss and cognitive decline, and the research continues to indicate that there is a correlation between the two. The question that researchers are now trying to answer is whether people with hearing loss can mitigate the slope of cognitive decline by treating their hearing loss.

The answer is tough to parse out because of how many other variables are involved, which makes it difficult for researchers to isolate the equation down to hearing loss, hearing loss treatment (i.e. hearing aids), and cognitive decline. Since hearing loss becomes more common as we age, it could just be that both hearing loss and cognitive decline are byproducts of aging. That said, the two could be deeply linked, as suggested by Dr. Piers Dawes’ “cascade hypothesis,” in which “long-term deprivation of auditory input may impact on cognition either directly, through impoverished input, or via effects of hearing loss on social isolation and depression.”

In other words, correlation does not necessarily equal causation, and therefore research that tries to answer this question must be thorough in isolating the variables being measured from the covariates that might skew the data.

Today, I came across a paper published in ENT & Audiology News by Dr. Catherine Palmer, PhD audiologist and Director of Audiology at the University of Pittsburgh, that revolved around research attempting to answer this question. In the paper, this quote really jumped out at me:

“In a US population-based longitudinal cohort study, 2040 individuals over the age of 50 had cognitive performance measured every two years over 19 years, and new hearing aid use was identified along this time period. After controlling for a number of covariates (e.g. sex, age, education, marital status, wealth, smoking, drinking, physical activity, depression, etc.) the authors determined that hearing aid use had a mitigating effect on the trajectory of cognitive decline in later life. In other words, those who received hearing aids, regardless of many other covarying factors, had a less steep slope toward cognitive decline.”
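For the statistically inclined, here’s a minimal sketch of the kind of longitudinal mixed model a study like that relies on. This is not the authors’ actual analysis; the data below is synthetic, and the covariate list is trimmed way down for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 200 people, biennial cognition measurements.
rng = np.random.default_rng(0)
n, visits = 200, 10
df = pd.DataFrame({
    "person_id": np.repeat(np.arange(n), visits),
    "years": np.tile(np.arange(visits) * 2.0, n),
    "hearing_aid": np.repeat(rng.integers(0, 2, n), visits),
    "age_at_entry": np.repeat(rng.uniform(50, 80, n), visits),
})
# Bake in a shallower decline slope (-0.10 vs -0.15 pts/yr) for hearing aid users.
df["cognition"] = (30 - 0.1 * df["age_at_entry"]
                   + df["years"] * (-0.15 + 0.05 * df["hearing_aid"])
                   + rng.normal(0, 1, len(df)))

# Mixed model: person-specific intercepts and slopes, covariate-adjusted.
model = smf.mixedlm("cognition ~ years * hearing_aid + age_at_entry",
                    data=df, groups=df["person_id"], re_formula="~years")
print(model.fit().summary())  # the years:hearing_aid term is the mitigation effect
```

A positive `years:hearing_aid` coefficient is the statistical version of “a less steep slope toward cognitive decline.”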

These findings, along with Dr. Piers Dawes’ research published in January 2019, indicate that treating hearing loss might change the trajectory of dementia to some extent and lessen the slope toward cognitive decline.

Back in May, I wrote an update about an interview I did with Dr. Nicholas Reed of Johns Hopkins for Oaktree TV on this very topic. During our conversation, Nick stated that along with Johns Hopkins’ study currently underway to answer this question, Johns Hopkins has conducted research finding that hearing loss leads to higher healthcare costs (outside of hearing loss-related expenses), more frequent hospitalizations, and an increase in certain cognitive-related comorbidities. All the more reason why hearing loss is such a serious issue.

So, while there is not sufficient research to definitively say whether treating hearing loss can undoubtedly mitigate the potential of cognitive decline, one has to wonder: what if it does? Keep in mind that in the US healthcare system, hearing aids are considered “elective” (same as plastic surgery) and are largely uninsured. Wouldn’t this finding help strengthen the argument that these are not “nice-to-have” devices but actually “need-to-have” devices? Food for thought.

-Thanks for Reading-

Dave

audiology, hearables

Jobs-to-be-Done and the Golden Circle


It’s about the Job, not the Product

There are two concepts that I’ve been thinking about lately to apply back to FuturEar. The first is the framework known as “Jobs-to-be-Done.” I’ve touched on it briefly in previous posts in regard to how it applies to voice technology and the mobile app economy, but it’s a framework that can be applied to just about anything and is worth expanding on, because I think it’s going to increasingly impact consumers’ decision-making when it comes to choosing which type of ear-worn device(s) and software solutions they purchase and wear. This will ring especially true as these devices become more capable and can be worn for longer periods of time, as their feature set broadens and the underlying technology continues to mature.

“Jobs-to-be-Done” is the idea that every product and service, physical or digital, can be “hired” by a consumer to complete a job they’re looking to accomplish. Using this framework, it’s essential to first understand the job the consumer is looking to accomplish and then work backward to figure out which product or service to hire. Clay Christensen, who developed this framework, uses milk shakes as his example:

“A new researcher then spent a long day in a restaurant seeking to understand the jobs that customers were trying to get done when they hired a milk shake. He chronicled when each milk shake was bought, what other products the customers purchased, whether these consumers were alone or with a group, whether they consumed the shake on the premises or drove off with it, and so on. He was surprised to find that 40 percent of all milk shakes were purchased in the early morning. Most often, these early-morning customers were alone; they did not buy anything else; and they consumed their shakes in their cars.

The researcher then returned to interview the morning customers as they left the restaurant, shake in hand, in an effort to understand what caused them to hire a milk shake. Most bought it to do a similar job: They faced a long, boring commute and needed something to make the drive more interesting. They weren’t yet hungry but knew that they would be by 10 a.m.; they wanted to consume something now that would stave off hunger until noon. And they faced constraints: They were in a hurry, they were wearing work clothes, and they had (at most) one free hand.”

The essence of this framework is understanding that while people consume milk shakes all the time, they do so for different reasons. Some people love them as a way to stave off hunger and boredom during a long drive; others enjoy them as a tasty treat after a long day. You’re hiring the same product for two different jobs. Therefore, if you’re hiring a milk shake to combat boredom during a long drive, you’re choosing among other foods that might serve the same purpose (chips, sunflower seeds, etc.), but if you’re hiring it as a tasty treat, it’s competing against things like chocolate or cookies. The job is what drives the buying behavior for the product, not the other way around.

Ben Thompson, who writes daily business and technology articles for his website Stratechery, recently wrote about this framework through the lens of Uber and the emerging electric scooter economy. As he points out, Ubers and Bird or Lime scooters can be hired for a similar job, which is to get you from point A to point B over short distances. This means that for quick trips, Ubers and scooters are competing with one another, as well as with walking, bikes and other forms of micro-transportation. Uber is a product that can be hired for multiple jobs (short trips, long trips, group trips, etc.), while you’d only hire a scooter for one of those jobs (short trips).

The Golden Circle

The second concept I’ve been thinking about is the “Golden Circle” that Simon Sinek outlines in his famous “Start with Why” TED Talk. (If you’ve never seen it, I highly encourage you to watch the full thing, as it’s very succinct and powerful):

Simon uses the Golden Circle to illustrate why a few companies and leaders are incredibly effective at inspiring people, while others are not. The Golden Circle is composed of three rings, with “why” at the core, “how” in the middle, and “what” in the outer ring. The vast majority of companies and leaders start from the outside and work their way in when communicating their message or value proposition – their message reads what > how > why. “Here’s our new car (what), it gets great gas mileage and has leather seats (how), people love it, do you want to buy our car (why)?” According to Simon, the problem with this flow is that people do not buy what you do, they buy why you do it. He argues that the message’s flow should be inverted to go why > how > what.

Simon uses Apple as an example of a company that works from the inside-out. “Everything we do, we believe in challenging the status quo. We believe in thinking differently (why). The way we challenge the status quo is by making our products beautifully designed, simple to use and user-friendly (how). We just happen to make great computers. Want to buy one (what)?”

What’s so powerful about Apple’s approach of working inside-out is that it effectively doesn’t matter what they’re selling, because people are buying the why; people identify with the Apple brand of “thinking differently.” It’s why a computer company like Apple was able to introduce MP3 players, phones, watches, and headphones, and people bought them in droves: they associated the new offerings with the brand, as challenges to the status quo of each new product category. Meanwhile, Dell tried to sell MP3 players at the same time Apple was selling the iPod, but no one bought them. Why? Because people associated Dell with what it sells (computers), and so it felt weird to purchase a different type of product from them.

A Provision of Knowledgeable Assistance


Hearing care professionals can think of these two concepts in conjunction. There’s a lot of product innovation occurring within the hearing care space right now: new types of devices, improved hardware, new features and functionality, hearing aid and hearable companion apps, and other hearing-centric apps. This innovation will translate to new products that can be hired for new and existing jobs, and therefore broaden the suite of jobs that you, as a hearing care professional, can service. It also means that the traditional product for hire, hearing aids, is now competing with new solutions that might be better suited for specific jobs.

This is why I believe the value and the ultimate “why” of the hearing care professional is aligned with servicing the jobs that relate to hearing care; the products that are hired are just a means to an end. To me, it’s not about what you’re selling, but rather why you’re selling those solutions – to provide people with a higher quality of life. They’re tired of not being a part of the dinner conversations they once loved, worried that their job is in jeopardy because they struggle to hear on business calls, or maybe their spouse is fed up with the TV being blasted so loud that it’s uncomfortable to watch together. They’re not coming to you to buy hearing aids; they’re coming to you because they have specific jobs that they need help with. Therefore, if new products are surfacing that might be better suited for a specific job, they should be factored into the decision-making process.

As the set of solutions to enhance a patient’s quality of life continues to improve and grow, it will increase the demand for an expert to sort through those options and properly match solutions to the jobs they’re best suited for. In my opinion, this means the hearing health professional needs to extend their expertise and knowledge to include additional products for hire, so long as the professional is confident that the product is capable of certain jobs. Just as Apple was able to introduce a suite of products beyond computers, the hearing care professional has the opportunity to be perceived as someone who improves a patient’s quality of life, regardless of whether that’s via hearing aids, cochlear implants, hearables or even apps. The differentiating value of the professional will increasingly be about serving as a provision of knowledgeable assistance through their education and expertise in all things hearing care.

-Thanks for Reading-

Dave

audiology, Biometrics, hearables, Smart assistants, VoiceFirst

Capping Off Year One with my AudiologyOnline Webinar


A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline for a number of audiologists around the country. The same week the year prior, I launched this blog. So, for me, the webinar was basically a culmination of the past year’s blog posts, tweets and videos, distilled into a one-hour presentation. Consolidating so many things I’ve learned into a single hour forced me to choose what I thought was most pertinent to the hearing healthcare professional.

If you’re interested, feel free to view the webinar using this link (you’ll need to register, though you can register for free and there’s no type of commitment): https://www.audiologyonline.com/audiology-ceus/course/connectivity-and-future-hearing-aid-31891  

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been seeing the way things have unfolded: the technological progress made with both the hardware and software of the in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from any previous era where disruption felt as if it were on the doorstep yet didn’t come to pass, and that’s largely because the underlying technology has matured so much of late.

I would argue that the single biggest reason why this time is different is the smartphone supply chain, or as I stated in my talk, the Peace Dividends of the Smartphone Wars (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that comprise the smartphone (which also comprise pretty much all consumer technology) have gotten incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc.). More companies than ever, from various segments, are competing to set themselves apart in their respective industries, and in doing so are providing innovative breakthroughs for everyone else to benefit from. So, hearing aids and hearables are benefiting from breakthroughs occurring in smart speakers and drones, because much of the innovation can be reaped and applied across the whole consumer technology space rather than being limited to one particular industry.

Learning from Apple

Another point that I really tried to hammer home is that our “connected” in-the-ear devices are now “exotropic,” meaning that they appreciate in value over time. Connectivity enables the device to enhance itself, through software/firmware updates and app integration, even after the point of sale, much like a smartphone. And just as our hearing aids and hearables reap the innovation occurring elsewhere in consumer technology, connectivity unlocks a similar compounding force: network effects.

If you study Apple and examine why the iPhone was so successful, you’ll see that its success was largely predicated on the iOS App Store, which served as a marketplace connecting developers with users. The more customers (users) there were, the more incentive there was to come sell your goods as a merchant (developer) in the marketplace (App Store). The marketplace therefore grew and grew as the two sides constantly incentivized one another, which compounded the growth.

That phenomenon I just described is called two-sided network effects, and we’re beginning to see the same type of network effects take hold with our body-worn computers. That’s why a decent portion of my talk was spent on the Apple Watch. Wearables, hearables or smart hearing aids – they’re all effectively the same thing: a body-worn computer. Much of the innovation and use cases beginning to surface from the Apple Watch can be applied to our ear-worn computers too. Therefore, Apple Watch users and hearable users comprise the same user base to an extent (they’re all wearing body computers), which means that developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices too. The utility and value of our smart hearing aids and hearables will continue to rise long after the patient has purchased their device, making for a much stronger value proposition.
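The compounding is easy to see in a toy model where each side’s growth is proportional to the size of the other side (the starting sizes and attraction rates below are invented):

```python
# Two-sided network effects as a pair of difference equations.
users, devs = 1_000.0, 10.0
a, b = 5.0, 0.002   # each dev attracts ~5 users/yr; every 500 users attract a dev

for year in range(1, 9):
    users, devs = users + a * devs, devs + b * users
    print(f"year {year}: {users:7.0f} users, {devs:5.1f} developers")
# Each side feeds the other, so growth accelerates instead of flattening out.
```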

Smart Assistant Usage will be Big

One of the most exciting use cases that I think is on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (already happening in hearables). I’ve attended multiple conferences dedicated to this technology and have posted a number of blogs on smart assistants and the voice user interface, so I don’t want to rehash every reason why I think this is going to be monumental for the product offering of this industry, but the main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers – older adults. The reason for this fast adoption, I believe, is that there are few limitations to speaking and issuing commands to control your technology with your voice. This is why voice is so unique: it’s conducive to the full age spectrum, from kids to older adults, while something like the mobile interface isn’t particularly conducive to older adults who might have poor eyesight, dexterity or mobility.

This user interface and the smart assistants that mediate the commands are incredibly primitive today relative to what they’ll mature to become. Jeff Bezos famously quipped in 2016 in regard to this technology that, “It’s the first inning. It might even be the first guy’s up at bat.” Even in the technology’s infancy, the adoption of smart speakers among the older cohort is surprising, and it leads one to believe that they’re beginning to grow a dependency on smart assistant-mediated voice commands rather than tapping, touching and swiping on their phones. Once this becomes integrated into hearing aids, patients will be able to perform many of the same functions that you or I do with our phones, simply by asking their smart assistant. One’s hearing aid serving the role (to an extent) of their smartphone further strengthens the value proposition of the device.

Biometric Sensors

If there’s one set of use cases that I think can rival the overall utility of voice, it would be the implementation of biometric sensors into ear-worn devices. To be perfectly honest, I am startled by how quickly this is already beginning to happen, with Starkey making the first move with the introduction of a gyroscope and accelerometer into its Livio AI hearing aid, allowing for motion tracking. These sensors support the use cases of fall detection and fitness tracking. If “big data” was the buzz of the past decade, then “small data,” or personal data, will be the buzz of the next 10 years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a “nice to have” to a “need to have” for those who need to be wearing an all-day data recorder. That’s exactly the role the hearing aid is shaping up to serve – a data collector.
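Fall detection from those motion sensors is conceptually simple: a classic heuristic looks for a brief free-fall dip in acceleration followed by an impact spike. The sketch below is that textbook heuristic with made-up thresholds, not Starkey’s actual detector:

```python
import numpy as np

def detect_fall(accel_mag, fs=100):
    """Flag a free-fall dip followed by a hard impact within half a second.

    accel_mag : acceleration magnitude in g, sampled at `fs` Hz
    """
    accel_mag = np.asarray(accel_mag, dtype=float)
    dips = np.where(accel_mag < 0.4)[0]          # near-weightless free fall
    for i in dips:
        window = accel_mag[i : i + fs // 2]      # the next 0.5 seconds
        if window.size and window.max() > 2.5:   # hard impact
            return True
    return False

# ~1 g at rest -> free-fall dip -> impact spike -> back to ~1 g
trace = [1.0] * 50 + [0.2] * 10 + [3.1] + [1.0] * 50
print(detect_fall(trace))  # True
```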

The type of data being recorded is really only limited by the type of sensors embedded in the device, and we’ll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid in 2019 for OEMs to implement into their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of folks have credited their Apple Watch with saving their life, as they were alerted to abnormal spikes in their resting heart rates, which were discovered to be life-threatening abnormalities in their cardiac activity. So, we’re talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.
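Under the hood, pulling a heart rate out of a PPG waveform amounts to counting pulse peaks over time. Here’s a simplified sketch of that idea on a synthetic signal (real sensors add a lot of motion-artifact filtering that I’m skipping):

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg, fs=50):
    """Estimate heart rate from a PPG waveform by counting pulse peaks."""
    ppg = np.asarray(ppg, dtype=float)
    # Require peaks >= 0.4 s apart (caps detection at 150 bpm) and reasonably tall.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5 * ppg.std())
    minutes = len(ppg) / fs / 60
    return len(peaks) / minutes

# Synthetic 72 bpm pulse wave with a little sensor noise.
fs, seconds = 50, 30
t = np.arange(fs * seconds) / fs
ppg = np.sin(2 * np.pi * (72 / 60) * t) + 0.1 * np.random.randn(len(t))
print(round(heart_rate_bpm(ppg, fs)))  # ~72
```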

As these types of sensors continue to shrink in size and become more capable, we’re likely to see more types of data harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor capable of gathering glucose levels in a non-invasive way, which would be a game-changer for the 100 million people with diabetes or pre-diabetes. We’re truly at the tip of the iceberg with this aspect of the devices, and it would make the hearing healthcare professional a necessary component (fitting the “data collector”) for the cardiologist or physician who needs their patient’s health data monitored.

More to Come

This is just some of what’s happened across the past year. One year! I could write another 1,500 words on interesting developments that have occurred this year, but these are my favorites. There is seemingly so much more to come with this technology, and as these devices continue their computerized transformation into something more akin to the iPhone, there’s no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, “If you build it, they will come.” Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can’t wait to see what’s to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

-Thanks for Reading-

Dave

audiology, Biometrics, hearables, Live-Language Translation, News Updates, Smart assistants, VoiceFirst

Hearing Aid Use Cases are Beginning to Grow


The Next Frontier

In my first post back in 2017, I wrote that the inspiration for creating this blog was to provide an ongoing account of what happens after we connect our ears to the internet (via our smartphones). What new applications and functionality might emerge when an audio device serves as an extension of one’s smartphone? What new hardware possibilities open up now that the audio device is “connected”? This week, Starkey moved the ball forward, changing the narrative and design around what a hearing aid can be, with the debut of its new Livio AI hearing aid.

Livio AI embodies the transition to a multi-purpose device, akin to our hearables, with new hardware in the form of embedded sensors not seen in hearing aids to date, and companion apps that allow for more user control and increased functionality. Much like Resound fired the first shot in the race to create connected hearing aids with the first “Made for iPhone” hearing aid, Starkey has fired the first shot in what I believe will be the next frontier: the race to create the most compelling multi-purpose hearing aid.

With the OTC changes fast approaching, I’m of the mind that one way hearing healthcare professionals will be able to differentiate in this new environment is by offering exceptional service and guidance around unlocking all the value possible from these multi-purpose hearing aids. This spans the whole patient experience, from the way the device is programmed and fit to educating the patient on how to use the new features. Let’s take a look at what one of the first forays into this arena looks like by breaking down the Livio AI hearing aid.

Livio AI’s Thrive App

Thrive is a companion app that can be downloaded for use with Livio AI, and I think it’s interesting for a number of reasons. For starters, it’s Starkey’s attempt to combat the potential link between cognitive decline and hearing loss in our aging population. It does this by “gamifying” two sets of metrics that roll into a 200-point “Thrive” score that’s meant to be achieved regularly.


The first set of metrics is geared toward measuring your body activity, built around data collected through sensors to gauge your daily movement. By embedding a gyroscope and accelerometer into the hearing aid, Livio AI is able to track your movement, so it can monitor some of the same types of metrics as an Apple Watch or Fitbit. Each day, your goal is to reach 100 “Body” points by moving, exercising and standing up throughout the day.

The next bucket of metrics is entirely unique to this hearing aid and is based on the way you wear your hearing aids. This “Brain” category measures the daily duration the user wears the hearing aid, the amount of time spent “engaging” other people (which is important for maintaining a healthy mind), and the various acoustic environments the user experiences each day.


So, through gamification, the hearing aid wearer is encouraged to live a healthy lifestyle and use their hearing aids throughout the day in various acoustic settings, engaging in stimulating conversation. To me, this will serve as a really good tool for the audiologist to ensure that the patient is using the hearing aid to its fullest. Additionally, if you’re caring for an elderly loved one, this can be a very effective way to track how active their lifestyle is and whether they’re actually wearing their hearing aids. That’s the real sweet spot here, as you can quickly pull up their Thrive score history to get a sense of what your aging loved one is doing.
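As a back-of-the-envelope illustration of how such a score might roll up (the point weights and caps below are entirely my guesses, not Starkey’s actual formula), something like this gets you to a 200-point daily total:

```python
def thrive_style_score(steps, exercise_min, stand_hours,
                       hours_worn, engagement_hours, environments):
    """Toy two-part wellness score: Body (0-100) + Brain (0-100)."""
    body = min(100, min(steps / 100, 40)            # up to 40 pts for steps
                  + min(exercise_min, 30)           # up to 30 pts for exercise
                  + min(stand_hours * 3, 30))       # up to 30 pts for standing
    brain = min(100, min(hours_worn * 4, 48)        # up to 48 pts for wear time
                   + min(engagement_hours * 8, 32)  # up to 32 pts for engagement
                   + min(environments * 5, 20))     # up to 20 pts for variety
    return body + brain                             # daily total out of 200

print(thrive_style_score(8000, 25, 10, 12, 4, 4))  # -> 195
```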

HealthKit SDK Integration

[Image: side-by-side comparison of the Apple Health app with and without Apple Watch data]

Another very subtle thing about the Thrive app that has some serious future applications is the fact that Starkey has integrated Thrive’s data with Apple’s HealthKit SDK. This is one of the only third-party devices that I know of to be integrated with this SDK at this point. The image above is a side-by-side comparison of what Apple’s Health app looks like with and without Apple Watch integration. As you can see, the image on the right displays the biometric data that was recorded from my Watch and sent to my Health app. Livio AI’s data will be displayed in the same fashion.

So what? Well, as I wrote previously, the underlying reason this is a big deal is that Apple has designed its Health app with future applications in mind. In essence, Apple appears to be aiming to make the data easily transferable, in an encrypted (HIPAA-friendly) manner, across Apple-certified devices. So, it’s completely conceivable that you’d be able to take the biometric data being ported into your Health app (i.e. Livio AI data) and share it with a medical professional.

For an audiologist, this would mean that you’d be able to remotely view the data, which might help to understand why a patient is having a poor experience with their hearing aids (they’re not even wearing them). Down the line, if hearing aids like Livio were to have more sophisticated sensors embedded, such as a PPG sensor to monitor blood pressure, or a sensor that can monitor your body temperature (as the tympanic membrane radiates body heat), you’d be able to transfer a whole host of biometric data to your physician to help them assess what might be wrong with you if you’re feeling ill. As a hearing healthcare professional, there’s a possibility that in the near future, you will be dispensing a device that is not only invaluable to your patient but to their physician as well.

Increased Intelligence

Beyond the fitness and brain activity tracking, there are some other cool use cases that come packed with this hearing aid. There’s a language translation feature that supports 27 languages, which works in real time through the Thrive app and is powered through the cloud (so you’ll need internet access to use it). This seems to draw from the Starkey-Bragi partnership formed a few years ago, which was a good indication that Starkey was looking to venture down the path of making a feature-rich hearing aid with multiple uses.

Livio AI also leverages the smartphone’s GPS. This allows the user to locate their hearing aids from their smartphone if they’ve gone missing. Additionally, the user can set “memories” that adjust their hearing aid settings based on the acoustic environment they’re in. If there’s a local coffee shop or venue the user frequents where they’ll want their hearing aids boosted or turned down in some fashion, “memories” will automatically adjust the settings based on the pre-determined GPS location.
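Mechanically, a “memories” feature boils down to geofencing: store a preset per saved location and apply it when the phone reports you’re within the radius. Here’s a minimal sketch (the location, radius and preset names are all hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    R = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical saved "memories": a geofence mapped to a hearing aid preset.
MEMORIES = [
    {"name": "coffee shop", "lat": 38.6270, "lon": -90.1994,
     "radius_m": 75, "preset": "noise_reduction_boost"},
]

def preset_for(lat, lon, default="automatic"):
    """Return the preset whose geofence contains the current position."""
    for m in MEMORIES:
        if haversine_m(lat, lon, m["lat"], m["lon"]) <= m["radius_m"]:
            return m["preset"]
    return default

print(preset_for(38.6271, -90.1993))  # inside the geofence -> noise_reduction_boost
```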

If you “pop the hood” and take a look inside, you’ll see that the components comprising the hearing aid have been significantly upgraded too. Livio AI boasts triple the computing power and double the local memory capacity of the previous line of Starkey hearing aids. This should come as no surprise, as the most impressive innovation happening with ear-worn devices is what’s happening with the components inside them, due to the economies of scale and massive proliferation of smartphones. This increase in computing power and memory capacity is yet another example of the “peace dividends of the smartphone wars.” That computing power allows for a level of machine learning (similar to Widex’s Evoke) to adjust to different sound environments based on all the acoustic data that Starkey’s cloud is processing.

The Race is On

As I mentioned at the beginning of this post, Starkey has initiated a new phase of hearing aid technology, and my hope is that it spurs the other manufacturers to follow suit, in the same way that everyone followed Resound’s lead in bringing “connected” hearing aids to market. Starkey CTO Achin Bhowmik believes that sensors and AI will do to the hearing aid what Apple did to the phone, and I don’t disagree.

As I pointed out in a previous post, the last ten years of computing were centered around porting the web over to the apps on our smartphones. The next wave of computing appears to be a process of offloading and unbundling the “jobs” that our smartphone apps represent to a combination of wearables and voice computing. I believe the ear will play a central role in this next wave, largely because it’s a perfect position for an ear-worn computer equipped with biometric sensors that doubles as a home for our smart assistant(s), which will mediate our voice commands. This is the dawn of a brand new day, and I can’t help but feel very optimistic about the future of this industry and the hearing healthcare professionals who embrace these new offerings. In the end, however, it’s the patient who will benefit the most, and that’s a good thing when so many people could and should be treating their hearing loss.

-Thanks for Reading-

Dave