The Computerization of In-the-Ear Devices

I was invited this weekend to speak on a panel discussing the impending OTC hearing aid law, which will go into effect once the FDA finishes drafting the guidelines and regulations for this new class of in-the-ear device (sometime in 2020). The panel consisted of Dr. Dawn de Neef, an ENT specialist; Dr. Ram Nileshwar, an audiologist who has been practicing for 30+ years; Kate Handley, VP of Sales at hearing aid manufacturer Unitron; and myself. We each spoke for about 15-20 minutes, offering our own perspectives to the audience of hearing healthcare professionals.
The perspective I shared with the audience was that all of our in-the-ear devices are becoming computerized. If a hearable is defined as a, “body-worn computer that resides in the ear,” then it should be understood that just about all of our in-the-ear devices are trending toward becoming hearables. The first major step toward this computerization was making our ear-worn devices an extension of our smartphones, letting them inherit the capabilities and processing power of the phone. In the very first post I wrote for FuturEar, I pointed out that this was the motivation behind starting the blog – we had begun to seamlessly connect our ears to the internet.
As you can see from the two slides above, Bluetooth standardization has taken hold across consumer and medical in-the-ear devices over the last five years. We’re seeing companies like Sonova bring innovative approaches to connectivity with the SWORD chip, which is capable of handling five different Bluetooth protocols, so hearing aids embedded with that chip can pair with Android and iOS devices (and soon, the Roger system).
Apple analysts are projecting that nearly 50 million AirPods will be sold in 2019 alone, with that number estimated to climb to 75 million in 2020. To put that in perspective, annual hearing aid sales in the US have never crossed 4 million. We also need to be aware that Samsung and Google each have an AirPods competitor on the market, and Amazon will be unveiling its hearable in the second half of this year. These are companies with deep pockets, and each is aiming to put its smart assistant directly in your ear.
So, the question becomes: “what happens when we’re all wearing mini ear-computers?”
The first thing that becomes possible is expanding what the device can do, making it more multi-functional. We probably already take it for granted that you can stream audio to Bluetooth hearing aids; over the last five years, the devices have become capable of playing music and podcasts and streaming calls. In addition, the devices can log data and share it with the smartphone, which can relay it up to the cloud or apply its own processing power. This is the backbone for new applications, such as hearing aids making automatic, on-the-fly adjustments driven by machine learning in the cloud.
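To make that data flow concrete, here’s a minimal sketch of the device-to-phone-to-cloud loop. Everything in it (the endpoint, field names, and response format) is hypothetical, not any manufacturer’s actual API:

```python
# Hypothetical sketch of the device -> phone -> cloud adjustment loop.
# The endpoint, payload fields, and response format are all invented.
import json
from urllib import request

CLOUD_URL = "https://example.com/v1/adjustments"  # placeholder endpoint

def fetch_adjustments(environment_log: dict) -> dict:
    """Relay the hearing aid's logged sound-environment data to a cloud
    model and return its suggested per-band gain changes."""
    payload = json.dumps(environment_log).encode("utf-8")
    req = request.Request(CLOUD_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"band_gains_db": [1.5, 0.0, -2.0]}

# Example snapshot logged by the device and relayed by the phone:
snapshot = {"noise_floor_db": 62, "speech_present": True, "wind": False}
# adjustments = fetch_adjustments(snapshot)  # applied on-the-fly by the aid
```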

Another set of use cases we’re beginning to see is derived from sensors that have finally become miniature enough to fit on a RIC hearing aid, such as inertial sensors and PPG optical sensors. These types of sensors can capture a wide variety of fitness and heart-related biometrics, so you can monitor everything from heart rate variability to the orientation of the user’s body (to detect whether the user has fallen). We’re in the infancy of this aspect of the devices, but it’s plausible that in five years, as they become more sensor-laden, hearing aids will act as a preventive health tool. Imagine wearing a hearable that alerts you to warning signs of a heart attack or a stroke – that’s where this technology might ultimately be headed.
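As a simplified illustration of the fall-detection idea (not how any shipping hearing aid actually implements it), an inertial sensor stream can be scanned for a dip toward free fall in total acceleration followed shortly by an impact spike:

```python
import math

# Simplified fall heuristic: a free-fall dip in total acceleration
# followed shortly by an impact spike. Thresholds and window sizes are
# illustrative; real devices use validated, far more robust classifiers.
FREE_FALL_G = 0.4   # total acceleration well below 1 g
IMPACT_G = 2.5      # sudden spike on landing
MAX_GAP = 20        # samples between dip and spike (~0.4 s at 50 Hz)

def detect_fall(samples):
    """samples: list of (ax, ay, az) accelerometer readings in g."""
    mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:
            if any(g > IMPACT_G for g in mags[i + 1 : i + 1 + MAX_GAP]):
                return True
    return False

# Example: stationary (~1 g), brief free fall, then a 3 g impact.
readings = [(0, 0, 1.0)] * 10 + [(0, 0, 0.1)] * 5 + [(0, 0, 3.0)]
print(detect_fall(readings))  # True
```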

If we’re to consider these ear-worn devices “computers,” then it makes sense to think about what kind of operating system and user interface is conducive to something we’re not looking at (since it resides in our ears). To me, the most obvious candidates to play the role of both UI and, eventually, OS are our smart assistants, which act on our behalf via voice commands: Alexa, Google Assistant, Bixby and yes, even Siri. Those are just the top-layer “master assistants”; a much larger class of specialized assistants resides a level below, serving as conversational interfaces to the companies that sit behind them (i.e. Mayo Clinic).
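A toy sketch of that layered arrangement (the assistant names and routing logic are invented for illustration): a master assistant recognizes an intent and hands the query off to a specialized assistant a level below.

```python
# Toy illustration of the layered-assistant idea; the assistant names
# and routing are invented, not any platform's real API.
SPECIALISTS = {
    "health": lambda q: f"(health assistant) triaging: {q}",
    "banking": lambda q: f"(banking assistant) handling: {q}",
}

def master_assistant(intent: str, query: str) -> str:
    """Route a recognized intent to the matching specialized assistant,
    falling back to a general answer when no specialist matches."""
    handler = SPECIALISTS.get(intent)
    return handler(query) if handler else f"(general assistant) {query}"

print(master_assistant("health", "are these symptoms concerning?"))
```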

These are just two of many examples of how these devices and their use cases will evolve as the devices become standalone computers in their own right. The Apple Watch has gone through four product iterations, evolving from little more than a digital watch with a few features into a device that now supports its own cellular LTE connection and embeds a medical-grade ECG monitor. It’s possible that our hearables will follow a similar trajectory toward being standalone devices with medical-grade sensors of their own, especially when you factor in what’s possible with the companion charging cases (that will be a fun piece to write one of these days).

As I pointed out in my talk, the computerization of all our ear-worn devices will act like a rising tide that lifts all ships. We’ll likely see OTC devices with multiple hearable qualities, such as Bluetooth connectivity and companion apps used to program and calibrate the device to the user’s hearing loss. Just as Bluetooth connectivity has become standardized over the past five years, other elements of these devices will become table stakes over the next five, and there’s not much reason to think OTC devices will be excluded from the increasing sophistication. As the tide rises, premium hearing aids will be lifted too, gaining capabilities such as better speech-in-noise performance thanks to innovation in machine learning, augmented audio and filtering.
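As one illustration of what “calibrating the device to the user’s hearing loss” could mean under the hood, here’s a sketch of the classic half-gain rule, one of the oldest prescriptive fitting rules (a gain target of roughly half the hearing loss at each frequency). The audiogram values below are made up, and real fitting formulas such as NAL-NL2 are considerably more sophisticated:

```python
# Sketch of one calibration step a companion app might perform: the
# classic "half-gain rule" (target gain of roughly half the measured
# hearing loss at each frequency). Audiogram values below are made up;
# modern prescriptive formulas (e.g. NAL-NL2) are far more nuanced.
AUDIOGRAM_HL_DB = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 60}

def half_gain_targets(audiogram: dict) -> dict:
    """Return per-frequency insertion-gain targets in dB."""
    return {freq: hl / 2.0 for freq, hl in audiogram.items()}

for freq, gain in sorted(half_gain_targets(AUDIOGRAM_HL_DB).items()):
    print(f"{freq} Hz: target gain {gain:.1f} dB")
```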
The Role of the Professional with OTC Devices
At the end of the day, in a world of OTC, the professionals’ value resides in their expertise and knowledge. As the market becomes more saturated with options, consumers can end up paralyzed by too many choices and will ultimately look to an expert. At Oaktree, our in-house audiologist, Dr. A.U. Bankaitis, has worked with Wash U to establish a protocol for running OTC and PSAP devices through testing to determine how well the devices perform across nine frequencies in eight standard audiometric configurations.

We’ve started building a database of these findings and are partnering with Wash U, the University of Pittsburgh and Johns Hopkins to grow it. The idea is for busy professionals to use the database to help determine which devices might make sense to bring into their clinics and offer their patients.
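Purely as an illustration of what one record in such a database might contain (the actual schema and the nine test frequencies aren’t specified here, so everything below is assumed):

```python
# Purely illustrative record layout; the actual schema and the nine
# test frequencies used in the protocol aren't specified in this post.
from dataclasses import dataclass

NINE_TEST_FREQS_HZ = [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000]

@dataclass
class DeviceMeasurement:
    device: str         # an OTC or PSAP model (name here is hypothetical)
    configuration: str  # one of the eight standard audiometric configs
    output_db: dict     # measured performance at each test frequency

record = DeviceMeasurement(
    device="Example-PSAP-100",
    configuration="mild sloping loss",
    output_db={f: 0.0 for f in NINE_TEST_FREQS_HZ},
)
```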
For the hearing healthcare professional, I believe OTC allows the professional to act as an expert and provide knowledgeable assistance. This is an entirely different business model, one that generates revenue from services rather than devices. As the landscape of options becomes murkier and more saturated, experts can step in to understand a patient’s needs and then guide the patient through the options to the device best suited to those needs. In essence, OTC is a perfect opportunity to show more people the value of treating their hearing loss with the help of an expert.
-Thanks for Reading-
Dave