Biometrics, Daily Updates, Future Ear Radio, hearables, wearables

Wearables Grow Up (Future Ear Daily Update 9-19-19)

Wearables Grow Up

One of my favorite podcasts, a16z, recently spun off a secondary, news-oriented show, "16 Minutes." 16 Minutes is great because the host, Sonal Chokshi (who also hosts the a16z podcast), brings on in-house experts from the a16z venture capital firm to provide insight into each week's news topics. This week, Sonal brought on general partner Vijay Pande to discuss the current state of wearable computing. For today's update, I want to highlight this eight-minute conversation (it was one of two topics covered on this week's episode; fast-forward to 7:45) and build on some of the points Sonal and Vijay make during their chat.

The conversation begins with a recent deal struck between the government of Singapore and Fitbit. Singaporeans will be able to register to receive a Fitbit Inspire band for free if they commit to paying $10 a month for a year of the company's premium coaching service. This is part of Fitbit's pivot toward a SaaS business and a stronger focus on informing users about what the data being gathered actually means. Singapore's Health Promotion Board will therefore have a sizeable portion of its population (Fitbit's CEO projects 1 million of the country's 5.6 million citizens will sign up) monitoring their data consistently via wearable devices that can be tied to each citizen's broader medical records.

This then leads to a broader conversation about the ways in which wearables are maturing and, in many ways, growing up. To Vijay's point, we're moving way beyond step-counting into much more clinically relevant data. Consumer wearables are increasingly being outfitted with sophisticated, medical-grade sensors, and because they're worn all day, they can gather longitudinal data, a combination we haven't seen before. Previously, we were limited to sporadic data gathered only when we're in the doctor's office. Now, we're gathering some of the same types of data by the minute, and at the scale of millions and millions of people.

Ryan Kraudel, VP of Marketing at biometric sensor manufacturer Valencell, made me aware of this podcast episode (thanks, Ryan) and added some really good points on Twitter about what he's been observing these past few years. A big part of what separates today's wearables from the first-generation devices is the combination of more mature sensors proliferating at scale and the increasingly sophisticated machine learning and AI layer being overlaid on top to assess what the data is telling us.

To Sonal's point, we've historically benchmarked our data against the collective averages of the population, rather than against our own personal data, because we haven't had the ability to gather that personal data in the ways we can now. When you record longitudinal data over multiple years, you start to get really accurate baseline measurements unique to each individual.

This enables a level of personalization that will open the door to preventative health use cases. This is an application I've been harping on for a while: the ability to have AI/ML constantly assess your wearable data and identify risks based on your own historical cache of data that's years and years old, so that the user can be notified of threats to their health. To Vijay's point at the end, in the near future our day-to-day will not be that different, but what we're learning will be radically different, as you'll be measuring certain metrics multiple times per day rather than once a year during your check-up.
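
To make that idea concrete, below is a minimal sketch of how a personal baseline could turn longitudinal data into an alert. The function name, the 60-day window and the three-sigma threshold are my own illustrative assumptions, not any vendor's actual algorithm:

```python
import statistics

def flag_anomalies(resting_hr, window=60, threshold=3.0):
    """Flag readings that deviate sharply from a personal baseline.

    resting_hr: chronological list of daily resting heart rates (bpm).
    window: how many prior days form the personal baseline.
    threshold: how many standard deviations counts as an anomaly.
    (Illustrative values only; real products use far richer models.)
    """
    alerts = []
    for i in range(window, len(resting_hr)):
        baseline = resting_hr[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(resting_hr[i] - mean) / stdev > threshold:
            alerts.append((i, resting_hr[i]))
    return alerts
```

The key point is that the baseline is derived from your own multi-year history rather than a population average.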

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio, hearables, VoiceFirst

Hearables Panel Video (Future Ear Daily Update 9-10-19)


Back in July, I published an update recapping the hearables panel that I participated in at the Voice Summit. One of my fellow panelists, Eric Seay, had a friend in the audience who shot a video of the panel, so for today's update, I'm sharing the video. I remembered the panel being really insightful, but upon watching it again a few months later, I'm reminded what an awesome panel it really was. The collection of backgrounds and expertise that we each brought to the table fostered a really interesting discussion. Hope you enjoy!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Apple, Biometrics, Daily Updates, Future Ear Radio

Follow the Integration (Future Ear Daily Update 8-28-19)


There's been a lot of chatter, analysis, and overall hot takes online these past few weeks about Apple's new credit card, the Apple Card. One of the best pieces I have read on the subject came from longtime Apple analyst Horace Dediu (@asymco). Horace begins by tearing down the fallacies in the tired old cliche, "we were promised flying cars and we got x," that is often used to trivialize much of the innovation that's occurred in the past few decades. While flying cars represent "extrapolated technologies," innovations like the Apple Card represent "market-creating" technologies that are often much more ubiquitous, popular and behavior-changing (read his piece to fully understand this point).

There was one point he made in the article that I really want to home in on today, as I think there's a direct parallel that pertains to FuturEar:

“Here’s the thing: follow the integration. First, Apple Card comes after Apple Pay, more than 4 years ago. Apple Card builds on the ability to transact using a phone, watch and has the support of over 5000 banks. Over 10 billion transactions have been made with Apple Cash. Over 40 countries are represented.”

"Follow the integration." That's the best way to really understand where Apple is headed. As I have written about before, Apple tends to incrementally work its way into new verticals and offerings, and if you follow the acquisitions and the product development (the integration), you start to get a sense of what's to come with future product and service offerings.

A good example of this is the burgeoning Apple Health ecosystem. There are two separate areas to focus on: the software and services, and the hardware. In 2014, a year before the Apple Watch, Apple began forming said ecosystem by introducing the Apple Health app and its HealthKit software development kit (SDK). This might have been cause for some head scratching, as there wasn't a whole lot of hardware on the market prior to the Apple Watch that could feed data into Apple Health (aside from the basic inertial data from the phone).

A year later, in 2015, the Apple Watch came out and would become the main source populating the data in the Apple Health app. Flash forward to today, and Apple has rolled out two more SDKs and iterated on the Apple Watch four times to create a much more sophisticated biometric data collector. On the SDK front, CareKit allows third-party developers to create consumer-focused applications around data collected by the Apple Watch or within the apps themselves, such as apps centered around Parkinson's, diabetes, and depression. ResearchKit helps facilitate large-scale studies for researchers, all centered around Apple's health ecosystem.

Five years after the kickoff of its health ecosystem, Apple has laid the groundwork to move deeper and deeper into healthcare. In 2018, the company announced AC Wellness, a set of medical clinics designed to "deliver the world's best health care experience to its employees." It's not hard to imagine Apple using these clinics as guinea pigs and then rolling the concept out beyond its own employees. In August of 2018, Apple added its 75th health institution to support personal health records on the iPhone.

Just as there were years of innovation and incremental pieces of progress leading to the Apple Card, the same can be said for Apple Health. Follow the integration and you’ll start to get a sense of where Apple is headed.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio

Apple's Wearables + ResearchKit (Future Ear Daily Update 8-8-19)


In yesterday's update, I wrote about how we're seeing medical-grade biometric sensors, such as ECG monitors, being implemented in more and more consumer wearable devices. In the update, I laid out a few ways in which I believe one of the core use cases of wearables (and eventually hearables) will be to serve as "biometric data collectors." As the devices become outfitted with more and more sophisticated sensors, the data being collected yields more robust insights, leading to preventative health applications. We're already seeing this with the Apple Watch Series 4 detecting atrial fibrillation via its embedded ECG sensor.

Yesterday, CNBC reporter Christina Farr reported that Apple and Eli Lilly are partnering on a joint research project to study whether data from iPhones and Apple Watches can detect signs of dementia. It's an interesting study, and the parameters require each participant to use an iPhone, an Apple Watch and a Beddit sleep tracker. Researchers are looking for ways to identify symptoms of dementia and cognitive decline via the data that can be derived from the participants' phone habits, sleep patterns and biometric data collected from the watch.

(Quick side note: Apple bought the company Beddit in 2017, and now we're seeing that one of the reasons why was for these types of studies where they want to monitor sleep patterns. This is such a typical Apple move: make a quiet acquisition, then use it for much broader purposes (ResearchKit) a few years later.)

 

While this is an intriguing study, I don't think the point is that Apple is trying to get into the business of detecting dementia. I think this is a byproduct of what Apple has built around collecting data and the unique position it has carved out within the healthcare space. Kat's tweet is right on the money about the building blocks that Apple has created and is starting to assemble.

The iPhone user base, and the Apple Health app in particular, serves as each person's own health data repository, populated by inertial sensor data from the iPhone and biometric data collected via the Apple Watch (and probably AirPods down the line). Soon, we may see health records integrated too. These building blocks enable Apple's healthcare SDKs, such as ResearchKit, which helps researchers recruit participants for studies and grants medical researchers access to data from opt-in studies, such as this dementia study.

Back in March 2017, Apple worked with Mount Sinai Hospital to better understand asthma, creating an app that 7,600 people downloaded to enroll in the six-month study within a matter of days. In addition to quickly amassing participants, Apple was able to use geolocation data to correlate the asthma data submitted by the participants with outside metrics such as heat and pollen. That correlated data can be accessed by researchers using ResearchKit.

In January 2019, Apple worked with Johnson & Johnson to "investigate whether a new heart health program using an app from Johnson & Johnson in combination with Apple Watch's irregular rhythm notifications and ECG app can accelerate the diagnosis and improve health outcomes of the 33 million people worldwide living with atrial fibrillation (AFib), a condition that can lead to stroke and other potentially devastating complications."

It doesn't appear that Apple is attempting to create a product around asthma, just as I don't think Apple will pursue dementia-detecting technology. I believe this specific study is part of a broader trend: Apple occupies an extraordinary and unusual position not really seen before in the healthcare space.

Last March, I wrote a long piece titled "Pondering Apple's Healthcare Move," and I believe that Apple's strategy is to be the ultimate health data collector and facilitator. The healthcare data ecosystem that Apple has incrementally been putting into place since the introduction of the Health app and HealthKit in 2014 puts Apple in a position where a variety of medical professionals might find some aspect of it appealing. Apple may be betting that the best way to penetrate the healthcare sector is to lean on its unique ability to capture, store, and transfer so many different types of data derived from the healthcare ecosystem it has layered on top of the iPhone user base.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio

ECG Sensors Continue Proliferating (Future Ear Daily Update 8-7-19)


One of the most interesting trends in wearables right now is the early implementation of medical-grade sensors into consumer products. The most notable example is the Apple Watch Series 4, which has an electrocardiogram sensor built into the device. Now, Samsung has announced an upcoming second edition of its Galaxy Watch Active line of smartwatches, one that will include an ECG monitor (the sensor will not be activated until Samsung gains FDA clearance).

There are essentially three types of sensors that have been embedded into wrist-worn and ear-worn wearables. The first is the inertial sensors, namely gyroscopes and accelerometers. These are the sensors that detect one's movement and orientation, allowing for all the Fitbit-type metrics we've grown accustomed to, such as step tracking. Another interesting way these sensors can be purposed is fall detection, which is exactly what Starkey's Livio AI hearing aid uses them for.

PPG sensor on the underside of an Apple Watch emitting green LED light

The second type of sensor to be widely implemented in our body-worn computers is the optical PPG sensor. PPG stands for photoplethysmogram; these sensors use LED light to penetrate the skin and capture blood flow patterns. The patterns are then fed into the wearable's algorithms, ultimately providing a heart rate readout. It's a clever and non-invasive way to measure one's heart rate, and although the method has some flaws, many of them have been progressively alleviated as the technology advances.
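
To get a rough feel for that pipeline, here's a toy sketch of turning a window of PPG samples into a heart rate by counting peaks in the waveform. Everything here (the function, the sampling rate, the peak threshold) is a simplification of my own; production sensors add heavy filtering and motion-artifact rejection on top of this:

```python
def estimate_bpm(ppg, sample_rate_hz=25):
    """Estimate heart rate from a raw PPG waveform by simple peak counting.

    ppg: list of light-absorption samples from the optical sensor.
    sample_rate_hz: assumed sensor sampling rate (illustrative).
    """
    # Center the signal so pulse peaks sit above zero.
    mean = sum(ppg) / len(ppg)
    centered = [s - mean for s in ppg]
    # Count local maxima above half the peak amplitude.
    floor = 0.5 * max(centered)
    peaks = [
        i for i in range(1, len(centered) - 1)
        if centered[i] > floor
        and centered[i] >= centered[i - 1]
        and centered[i] > centered[i + 1]
    ]
    duration_min = len(ppg) / sample_rate_hz / 60
    return len(peaks) / duration_min
```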

The newest sensor to be incorporated into wearables is the aforementioned electrocardiogram (ECG) sensor. ECG sensors collect and measure the electrical signals generated by the heart, which means they can be used to detect potential issues such as atrial fibrillation. Just look at how many stories there already are of people crediting their Apple Watch Series 4 with saving their lives (1, 2, 3).
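
Atrial fibrillation shows up in that electrical signal as erratically spaced heartbeats. As a crude illustration of the idea (emphatically not Apple's actual classifier), one can measure how variable the beat-to-beat intervals are:

```python
import statistics

def irregular_rhythm(rr_intervals_ms, cv_threshold=0.15):
    """Crude irregularity check over beat-to-beat (R-R) intervals.

    A consistently erratic interval sequence is a hallmark of atrial
    fibrillation. The 15% coefficient-of-variation cutoff here is an
    illustrative assumption; real devices use validated classifiers.
    """
    mean = statistics.mean(rr_intervals_ms)
    cv = statistics.stdev(rr_intervals_ms) / mean
    return cv > cv_threshold
```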

What's fascinating about ECG sensors being implemented into these consumer devices is that they are one of the first big, bold steps toward transforming wearables into preventative health tools. As more and more consumer wearables (and eventually hearables) are outfitted with medical-grade sensors, one of their primary use cases may well be to serve as "guardians of our health," actively monitoring our biometrics for threats and then warning us of what's found in the data. That future scenario is already well underway.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio, hearables

“Hearables – The New Wearables” – Nick Hunn’s Famous White Paper 5 Years Later (Future Ear Daily Update 7-10-19)


Nick Hunn, the wireless technology analyst and CTO of WiFore Consulting, coined the term "hearables" in his now famous white paper, "Hearables – the new Wearables," back in 2014. For today's update, I thought it might be fun to look back at Nick's initial piece to appreciate his prescience in predicting how our ear-worn devices would mature across the coming years.

For starters, one of the most brilliant insights Nick shared was around the new Bluetooth standards being adopted at the time and their implications for battery life:

“The Hearing Aid industry’s trade body – EHIMA, has just signed a Memorandum of Understanding with the Bluetooth SIG to develop a new generation of the Bluetooth standard which will reduce the power consumption for wireless streaming to the point where this becomes possible, adding audio capabilities to Bluetooth Smart. Whilst the primary purpose of the work is to let hearing aids receive audio streams from mobile phones, music players and TVs, it will have the capability to add low power audio to a new generation of ear buds and headsets.”

To put this into perspective, the first "made-for-iPhone" hearing aid, the Linx, had just been unveiled by Resound at the end of 2013. Nick published his paper in April of 2014, so close observers may have sensed that hearing aids were heading toward the iPhone's 2.4 GHz Bluetooth protocol (every hearing aid manufacturer ended up adopting it). But without a background like Nick's in the broad field of wireless technology, it would have been hard to know that this new Bluetooth standard (initially called Bluetooth Smart and later Bluetooth Low Energy) would allow for much more efficient power consumption.

Nick's insight became more pronounced when Apple rolled out its AirPods in 2016 with the flagship W1 chip, which used ultra-low-power Bluetooth to allow for 5 hours of audio streaming (2 hours of talk time). Flash forward to today, and Apple has released its second-generation AirPods, which use the H1 chip and Bluetooth 5.0, allowing for even more efficient power consumption.

It needs to be constantly reiterated that hearables were deemed unrealistic up until midway through the 2010s because of how inefficient the power consumption was with previous Bluetooth standards. The battery is one component inside miniature devices that has historically seen little innovation, in terms of both size reduction and energy density. So it was not obvious that the work-around to this major roadblock would come from the way power is drawn from the battery via new methods of Bluetooth signaling.

The other aspect of hearables that Nick absolutely nailed was that ear-worn devices would eventually become laden with biometric sensors:

“Few people realise that the ear is a remarkably good place to measure many vital signs. Unlike the wrist, the ear doesn’t move around much while you’re taking measurements, which can make it more reliable for things like heart rate, blood pressure, temperature and pulse oximetry. It can even provide a useful site for ECG measurement.”

Today, US hearing aid manufacturer Starkey has incorporated one of Valencell's heart rate monitors into its Livio AI hearing aids. This integration, unveiled at CES this year, was made possible by the miniaturization of PPG sensors to the point where they can fit into a tiny receiver-in-the-canal (RIC) hearing aid. To Nick's point, there are significant advantages to recording biometric data at the ear rather than the wrist, so it should come as no surprise when future versions of AirPods and their competitors come equipped with various sensors over time.

Nick continues to write and share his insights, so if you’re not already following his work, it might be a good time to start reading up on Nick’s thinking about how our little ear-computers will continue to evolve.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

audiology, Biometrics, hearables, Smart assistants, VoiceFirst

Capping Off Year One with my AudiologyOnline Webinar


A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline for a number of audiologists around the country. The same week a year prior, I had launched this blog. So, for me, the webinar was a culmination of the past year's blog posts, tweets and videos, distilled into a one-hour presentation. Consolidating so many things I've learned into a single hour forced me to choose the things I thought were most pertinent to the hearing healthcare professional.

If you're interested, feel free to view the webinar using this link (you'll need to register, though registration is free and there's no commitment): https://www.audiologyonline.com/audiology-ceus/course/connectivity-and-future-hearing-aid-31891

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been seeing the way things have unfolded: the technological progress made in both the hardware and software of in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from previous eras where disruption seemed to be on the doorstep yet never came to pass, and that's largely because the underlying technology has matured so much of late.

I would argue that the single biggest reason why this time is different is the smartphone supply chain, or as I stated in my talk, the Peace Dividends of the Smartphone War (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that comprise the smartphone (which also comprise pretty much all consumer technology) have become incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc.). More companies than ever, from various segments, are competing to set themselves apart in their respective industries, and in doing so are producing breakthroughs the rest of the consumer technology space can benefit from. So, hearing aids and hearables benefit from breakthroughs occurring in smart speakers and drones, because much of the innovation can be reaped and applied across the whole consumer technology space rather than being limited to one particular industry.

Learning from Apple

Another point I really tried to hammer home is that our "connected" in-the-ear devices are now "exotropic," meaning they appreciate in value over time. Connectivity enables the device to enhance itself, through software/firmware updates and app integration, even after the point of sale, much like a smartphone. And just as our hearing aids and hearables reap innovation occurring elsewhere in consumer technology, connectivity does something similar: it enables network effects.

If you study Apple and examine why the iPhone was so successful, you'll see that its success was largely predicated on the iOS App Store, which served as a marketplace connecting developers with users. The more customers (users) there were, the more incentive there was for merchants (developers) to come sell their goods in the marketplace (the App Store). The marketplace grew and grew as the two sides constantly incentivized one another, compounding the growth.
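
To see why that loop compounds, here's a toy simulation of two-sided growth; the coefficients are invented purely for illustration:

```python
def simulate_two_sided_growth(users=1000, apps=100, years=10):
    """Toy model of a two-sided marketplace (illustrative numbers only).

    Each side's growth is driven by the size of the other side:
    more users attract developers, and more apps attract users.
    """
    for year in range(1, years + 1):
        new_apps = int(0.001 * users)  # developers drawn by the user base
        new_users = int(2.0 * apps)    # users drawn by the app catalog
        apps += new_apps
        users += new_users
        print(f"year {year}: {users:,} users, {apps:,} apps")
    return users, apps
```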

The phenomenon I just described is called two-sided network effects, and we're beginning to see the same type of network effects take hold with our body-worn computers. That's why a decent portion of my talk was spent on the Apple Watch. Wearables, hearables or smart hearing aids: they're all effectively the same thing, a body-worn computer. Much of the innovation and many of the use cases surfacing from the Apple Watch can be applied to our ear-worn computers too. Apple Watch users and hearable users therefore comprise the same user base to an extent, which means that developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices as well. The utility and value of our smart hearing aids and hearables will just continue to rise, long after the patient has purchased their device, making for a much stronger value proposition.

Smart Assistant Usage will be Big

One of the most exciting use cases on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (it's already happening in hearables). I've attended multiple conferences dedicated to this technology and have posted a number of blogs on smart assistants and the voice user interface, so I won't rehash every reason why I think this will be monumental for this industry's product offering. The main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers, older adults. The reason for this fast adoption, I believe, is that there are few limitations to issuing commands and controlling your technology with your voice. This is why voice is so unique; it's conducive to the full age spectrum, from kids to older adults, while something like the mobile interface isn't particularly conducive to older adults who might have poor eyesight, dexterity or mobility.

This user interface and the smart assistants that mediate the commands are incredibly primitive today relative to what they'll mature to become. Jeff Bezos famously quipped in 2016 that, "It's the first inning. It might even be the first guy's up at bat." Even in the technology's infancy, the adoption of smart speakers among the older cohort is surprising, and it suggests they're beginning to depend on smart-assistant-mediated voice commands rather than tapping, touching and swiping on their phones. Once this is integrated into hearing aids, patients will be able to perform many of the same functions that you or I do with our phones simply by asking their smart assistant. One's hearing aid serving the role (to an extent) of a smartphone further strengthens the value proposition of the device.

Biometric Sensors

If there's one set of use cases that I think can rival the overall utility of voice, it's the implementation of biometric sensors into ear-worn devices. To be perfectly honest, I am startled by how quickly this is already happening, with Starkey making the first move by introducing a gyroscope and accelerometer into its Livio AI hearing aid to allow for motion tracking. These sensors support the use cases of fall detection and fitness tracking. If "big data" was the buzz of the past decade, then "small data," or personal data, will be the buzz of the next ten years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a "nice to have" to a "need to have" for those who need to be wearing an all-day data recorder. That's exactly the role the hearing aid is shaping up to serve: a data collector.

The type of data being recorded is really only limited by the type of sensors embedded in the device, and we'll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid in 2019 for OEMs to implement into their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of folks have credited their Apple Watch with saving their lives after it alerted them to abnormal spikes in their resting heart rates, which were discovered to be life-threatening abnormalities in their cardiac activity. So, we're talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.

As these types of sensors continue to shrink in size and become more capable, we're likely to see more types of data harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor capable of gathering glucose levels non-invasively, which would be a game-changer for the 100 million people with diabetes or pre-diabetes. We're truly at the tip of the iceberg with this aspect of the devices, and it would make the hearing healthcare professional a necessary component (fitting the "data collector") for the cardiologist or physician who needs their patient's health data monitored.

More to Come

This is just some of what's happened across the past year. One year! I could write another 1,500 words on interesting developments that have occurred this year, but these are my favorites. There is seemingly so much more to come with this technology, and as these devices continue their computerized transformation into something more akin to the iPhone, there's no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, "If you build it, they will come." Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can't wait to see what's to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

-Thanks for Reading-

Dave

audiology, Biometrics, hearables, Live-Language Translation, News Updates, Smart assistants, VoiceFirst

Hearing Aid Use Cases are Beginning to Grow

 

The Next Frontier

In my first post back in 2017, I wrote that the inspiration for creating this blog was to provide an ongoing account of what happens after we connect our ears to the internet (via our smartphones). What new applications and functionality might emerge when an audio device serves as an extension of one's smartphone? What new hardware possibilities open up now that the audio device is "connected"? This week, Starkey moved the ball forward, changing the narrative and design around what a hearing aid can be with the debut of its new Livio AI hearing aid.

Livio AI embodies the transition to a multi-purpose device, akin to our hearables, with new hardware in the form of embedded sensors not seen in hearing aids to date, and companion apps that allow for more user control and increased functionality. Much like Resound fired the first shot in the race to create connected hearing aids with the first "Made for iPhone" hearing aid, Starkey has fired the first shot in what I believe will be the next frontier: the race to create the most compelling multi-purpose hearing aid.

With the OTC changes fast approaching, I'm of the mind that one way hearing healthcare professionals will be able to differentiate in this new environment is by offering exceptional service and guidance around unlocking all the value possible from these multi-purpose hearing aids. This spans the whole patient experience, from the way the device is programmed and fit to educating the patient on how to use the new features. Let's take a look at what one of the first forays into this arena looks like by breaking down the Livio AI hearing aid.

Livio AI’s Thrive App

Thrive is a companion app that can be downloaded for use with Livio AI, and I think it's interesting for a number of reasons. For starters, it's Starkey's attempt to combat the potential link between cognitive decline and hearing loss in our aging population. It does this by "gamifying" two sets of metrics that roll into a 200-point "Thrive" score that's meant to be achieved regularly.


The first set of metrics is geared toward measuring your body activity, built around data collected through sensors to gauge your daily movement. By embedding a gyroscope and accelerometer into the hearing aid, Livio AI is able to track your movement, monitoring some of the same types of metrics as an Apple Watch or Fitbit. Each day, your goal is to reach 100 "Body" points by moving, exercising and standing up throughout the day.

The next bucket of metrics is entirely unique to this hearing aid and is based on the way you wear your hearing aids. This "Brain" category measures the daily duration the user wears the hearing aid, the amount of time spent "engaging" other people (which is important for maintaining a healthy mind), and the variety of acoustic environments the user experiences each day.


So, through gamification, the hearing aid wearer is encouraged to live a healthy lifestyle and use their hearing aids throughout the day in various acoustic settings, engaging in stimulating conversation. To me, this will serve as a really good tool for the audiologist to ensure that the patient is wearing the hearing aid to its fullest. Additionally, for those caring for an elderly loved one, this can be a very effective way to track how active your loved one's lifestyle is and whether they're actually wearing their hearing aids. That's the real sweet spot here, as you can quickly pull up their Thrive score history to get a sense of how your aging loved one is doing.
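
To illustrate how such a score might roll up, here's a hypothetical sketch. Starkey hasn't published the exact formula, so the inputs and weights below are purely my own illustration of the 100-point Body / 100-point Brain split:

```python
def thrive_score(body_points, hours_worn, hours_engaged, environments):
    """Hypothetical 200-point Thrive-style score (invented weighting).

    Wear time, conversational engagement and acoustic variety each
    contribute to a 100-point "Brain" score, added to the "Body" score.
    """
    brain = min(hours_worn / 12.0, 1.0) * 40       # daily wear duration
    brain += min(hours_engaged / 4.0, 1.0) * 40    # time spent in conversation
    brain += min(environments / 4.0, 1.0) * 20     # variety of acoustic scenes
    return min(body_points, 100) + round(brain)
```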

HealthKit SDK Integration

 

Another very subtle thing about the Thrive app that has some serious future applications is the fact that Starkey has integrated Thrive's data into Apple's HealthKit SDK. This is one of the only third-party device integrations with this SDK that I know of at this point. The image above is a side-by-side comparison of what Apple's Health app looks like with and without Apple Watch integration. As you can see, the image on the right displays the biometric data that was recorded by my Watch and sent to my Health app. Livio AI's data will be displayed in the same fashion.

So what? Well, as I wrote about previously, the underlying reason this is a big deal is that Apple has designed its Health app with future applications in mind. In essence, Apple appears to be aiming to make the data easily transferable, in an encrypted, HIPAA-friendly manner, across Apple-certified devices. So it's completely conceivable that you'd be able to take the biometric data being ported into your Health app (i.e. Livio AI data) and share it with a medical professional.

For an audiologist, this would mean being able to remotely view the data, which might help explain why a patient is having a poor experience with their hearing aids (they're not even wearing them). Down the line, if hearing aids like Livio were to have more sophisticated sensors embedded, such as a PPG sensor to monitor blood pressure, or a sensor that monitors your body temperature (as the tympanic membrane radiates body heat), you'd be able to transfer a whole host of biometric data to your physician to help them assess what might be wrong when you're feeling ill. As a hearing healthcare professional, there's a possibility that in the near future you will be dispensing a device that is invaluable not only to your patient but to their physician as well.

Increased Intelligence

Beyond the fitness and brain activity tracking, there are some other cool use cases that come packed with this hearing aid. There's a real-time language translation feature covering 27 languages, which runs through the Thrive app and is powered by the cloud (so you'll need internet access to use it). This seems to draw from the Starkey-Bragi partnership formed a few years ago, which was a good indication that Starkey was looking to venture down the path of making a feature-rich hearing aid with multiple uses.

Another aspect of the smartphone that Livio AI leverages is GPS. This allows the user to locate their hearing aids from their smartphone if they go missing. Additionally, the user can set "memories" that adjust hearing aid settings based on the acoustic environment they're in. If there's a local coffee shop or venue the user frequents where they'll want their hearing aids boosted or turned down in some fashion, "memories" will automatically adjust the settings based on the pre-determined GPS location.
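
Conceptually, this is simple geofencing. Here's a minimal sketch of the idea; the 100-meter radius and the function names are my own assumptions rather than anything Starkey has documented:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters between two GPS points."""
    r = 6371000  # Earth's radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_memory(here, memories, radius_m=100):
    """Return the saved settings preset whose location we're inside, if any.

    memories: list of (lat, lon, preset_name) tuples the user has saved.
    """
    for lat, lon, preset in memories:
        if distance_m(here[0], here[1], lat, lon) <= radius_m:
            return preset
    return None
```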

If you "pop the hood" and look inside, you'll see that the components comprising the hearing aid have been significantly upgraded too. Livio AI boasts triple the computing power and double the local memory capacity of Starkey's previous line of hearing aids. This should come as no surprise, as the most impressive innovation in ear-worn devices is happening with the components inside them, thanks to the economies of scale and massive proliferation of smartphones. This increase in computing power and memory capacity is yet another example of the "peace dividends of the smartphone war." That computing power allows for a level of machine learning (similar to Widex's Evoke) to adjust to different sound environments based on all the acoustic data that Starkey's cloud is processing.

The Race is On

As I mentioned at the beginning of this post, Starkey has initiated a new phase of hearing aid technology, and my hope is that it spurs the other four manufacturers to follow suit, in the same way that everyone followed Resound's lead in bringing "connected" hearing aids to market. Starkey CTO Achin Bhowmik believes that sensors and AI will do to the hearing aid what Apple did to the phone, and I don't disagree.

As I pointed out in a previous post, the last ten years of computing were centered around porting the web to the apps on our smartphones. The next wave of computing appears to be a process of offloading and unbundling the "jobs" our smartphone apps represent to a combination of wearables and voice computing. I believe the ear will play a central role in this next wave, largely because it's the perfect position for an ear-worn computer equipped with biometric sensors that doubles as a home for the smart assistant(s) mediating our voice commands. This is the dawn of a brand new day, and I can't help but feel very optimistic about the future of this industry and the hearing healthcare professionals who embrace these new offerings. In the end, however, it's the patient who will benefit the most, and that's a good thing when so many people could and should be treating their hearing loss.

-Thanks for Reading-

Dave

Biometrics, News Updates

Bringing Biometric Sensors to our Ears

Valencell + Sonion

News broke last week that Sonion, a leading component manufacturer for ear-worn devices, had led a $10.5 million Series E investment round in Valencell, a pioneer in biometric sensor manufacturing. In exchange for its investment, Sonion now has exclusivity on Valencell's biosensor technology in the ear-level space. Sonion plans to integrate these biometric sensors into the component packages it develops for hearing aid and hearable manufacturers. This new strategic partnership will help Valencell grow its footprint by leveraging Sonion's distribution network for ear-worn devices, ultimately exposing more end users to Valencell's biometric sensor technology.

The March toward the Ear

The type of sensor that Valencell develops is an optical PPG (photoplethysmography) sensor. It records measurements such as heart rate by illuminating the skin with light and measuring changes in light absorption. From the absorption pattern, it detects the volume of blood and the pressure of the pulse, allowing for an accurate heart rate measurement. If you've ever used the Heart Rate app on an Apple Watch, you'll have noticed a green light on the underside of the Watch lighting up. That's a PPG sensor.

Image from Cult of the Mac

There are a number of reasons that companies like Valencell are so keen on embedding these types of sensors in our ears. Valencell president Dr. Steven LeBoeuf made the case for why the ear is the most practical spot on the body to record biometric data:

  1. Due to its unique physiology, the ear is one of the most accurate spots on the body to measure physiological information,
  2. One can measure more biometrics at the ear than any other single location on the body,
  3. Environmental sensors at the ear (exposed to the environment at all times) can assess what airborne vapors and particles one is breathing and expiring, and
  4. People already wear earbuds, headphones, hearing aids, etc. routinely throughout their lives, making compliance quite high.

So, in essence, the ear is the most precise, most robust, and most exposed area on the body for recording this information, all while being a location where we have already become accustomed to wearing technology. So, this is a no-brainer, right? Why aren't our ear-worn devices already using these sensors?

The Challenges of Really Small Devices

As I wrote in my last post about the innovation happening in our hearables and hearing aids, it can be rather daunting to cram all this technology into really small devices that fit in our ears. Battery life is always a challenge because in small devices there's only so much power to go around. Valencell has lowered the power consumption of its sensors by a factor of 25 over the past five years, but it will still need to drop that even further for these sensors not to be viewed as major battery drains. Price is another obstacle, as these sensors currently add an incremental manufacturing cost that isn't feasible for the lower-cost end of the market.

That's why this partnership is so exciting to me. What Sonion really brings to the table is an expertise in reduction: reduction in size, price and power consumption, which have been three of the biggest obstacles to feasibly embedding these sensors in ear-worn devices.

The Benefits of Putting Biometric Sensors in our Ears

There are two sets of use cases that are currently clear to me around biometric data. The first is fitness applications. Just think of your hearing aid or earbuds capturing the same fitness data that an Apple Watch or Fitbit records. This set of applications gets really interesting when you layer in smart assistants, which can be used to guide or coach the user, but that's another post for another day. For now, let me just point out that whatever you can do with your wrist-worn wearable today from a data-collection standpoint would seemingly be feasible with the ear-worn wearables that are around the corner.

The next, and much more exciting, use case is preventative health. If you search "Apple Watch saves life," you'll be amazed at all the people out there who were alerted by their Apple Watch that something funky was going on with the data being logged. Here are a few examples:

  • Teen’s Life Saved by Apple Watch that Alerted her of Heart Condition
    • An 18-year-old girl was sitting in church when her Apple Watch told her to seek medical attention due to her resting heart rate spiking to 120-130 beats per minute. The doctors found out she was experiencing kidney failure.
  • Apple Watch Credited with Saving a Man’s Life
    • A 32-year-old man began bleeding out of the blue and was prompted by his Apple Watch to seek immediate medical attention. He called 911, and by the time the ambulance arrived he had lost 80% of his blood. An ulcer had unknowingly burst in his body, and doctors were cited as saying that the Watch notification gave him just enough time to call for help.
  • 76 year old man says Apple Watch Saved his Life
    • “After an electrocardiograph machine indicated something was wrong, doctors conducted tests and discovered that two out of his three main coronary arteries were completely blocked, with the third 90 percent blocked.”
  • And of course, who can forget the guy behind the famous pulmonary embolism tweet.

This is a big part of why I am so bullish on the future of ear-worn devices. I imagine we'll see tons of stories like these emerge when the same types of sensors currently in the Apple Watch start making their way into our ear-worn devices. We know the ear is a perfect place to record this type of data, and there's no new adoption curve for these devices; we're already wearing tons of things in our ears!

Hearing aids in particular, with form factors conducive to all-day usage, really strike me as the perfect preventative health device. The largest demographic of hearing aid wearers (75+ years old) is probably most in need of a health-monitoring tool like this, too. As these sensors mature and become more capable of detecting a wider variety of risks, so too will the value proposition of these devices grow.

I don't think it's too far-fetched to imagine that in the not-too-distant future, one's physician might actually "prescribe" a preventative health device to monitor a pre-existing condition or some type of medical risk. I can picture them showing a list of certified, body-worn "preventative health" devices containing a range of options, from the Apple Watch to sensor-equipped hearing aids to cutting-edge hearables. Look no further than the software development kits that Apple has been rolling out over the past few years, and you'll see that biometric data logging and sharing is very much on the horizon. Exciting times indeed!

-Thanks for Reading-

Dave

 

Biometrics, hearables, Smart assistants, Trends

The Innovation Happening Inside Hearables and Hearing Aids


The Peace Dividends of the Smartphone War

One of the biggest byproducts of the mass proliferation of smartphones around the planet is that the components inside the devices are becoming increasingly more powerful and sophisticated while simultaneously becoming smaller and less expensive. Chris Anderson, the CEO of 3D Robotics, refers to this as the "Peace Dividends of the Smartphone Wars," saying:

The peace dividend of the smartphone wars, which is to say that the components in a smartphone — the sensors, the GPS, the camera, the ARM core processors, the wireless, the memory, the battery — all that stuff, which is being driven by the incredible economies of scale and innovation machines at Apple, Google, and others, is now available for a few dollars.

The race to outfit the planet with billions of smartphones served as the foundation for the feasibility of consumer drones, self-driving cars, VR headsets, AR glasses, dirt-cheap smart speakers, our wearables and hearables, and so many other consumer technology products that have emerged in the past decade. All of these consumer products directly benefit from the efficiencies and improvements birthed by the smartphone supply chain.

Since this blog is focused on innovation occurring around ear-worn technology, let’s examine some of the different peace dividends being reaped by hearing aid and hearables manufacturers and how those look from a consumer’s standpoint.

Solving the Connectivity Dilemma

Ever since the debut of the first "made for iPhone" hearing aid in 2013 (the Linx), each of the major hearing aid manufacturers has followed suit in pursuit of seamless connectivity to the user's smartphone. This type of connectivity was limited to iOS until September 2016, when Sonova released its Audeo B hearing aid, which used a different Bluetooth protocol that allowed for universal connectivity to all smartphones. To keep the momentum going, Google just announced that its Pixel and Pixel 2 smartphones will allow pairing with any Bluetooth hearing aid. The hearing aids and the phones are both becoming more compatible with each other, and every year we move closer to universal connectivity between our smartphones and Bluetooth hearing aids.

While connectivity is great and opens up a ton of new opportunities, it also creates a battery drain on the devices. This poses a challenge for the manufacturers of these super-small devices: while the majority of components packed inside have been shrinking in size, the one key component that doesn't really shrink is the battery.

There are a few things manufacturers are doing to circumvent this roadblock, based on recent developments largely owed to the smartphone supply chain. The first is rechargeability on the go. In the hearables space, pretty much every device has a companion charging case, from AirPods to IQbuds to the Bragi Dash Pro. Hearing aids, which have long been powered by disposable zinc-air batteries (lasting about 4-7 days depending on usage), are now quickly going the rechargeable route as well, many of them charging in companion cases akin to those used with hearables.

Rechargeability is a good step forward, but it doesn't really solve the issue of batteries draining quickly. If we can't fit a bigger battery in such a small space and battery innovation is currently stagnant, engineers are forced to look at how we actually use the power. Enter computer chips.

Computers are steadily getting cheaper – From Chris Dixon’s What’s Next in Computing

Chip’in In

I've written about this before, but the W1 chip that Apple debuted in 2016 was probably one of the biggest moments for the whole hearables industry. Not only did it solve the reliable-pairing issue (this chip is responsible for the fast pairing of AirPods), but it also uses low-power Bluetooth, ultimately providing 5 hours of listening time before you need to pop them in their charging case (15 minutes of charge = another 3 hours). With this one chip, Apple effectively removed the two largest barriers to people adopting hearables: battery life and reliable pairing.

Apple has since debuted an updated, improved W2 chip used in its Apple Watch, which will likely make its way into the second version of AirPods. Each iteration will likely continue to increase battery life.

Not to be outdone, Qualcomm introduced its new QCC5100 chipset at CES this January. Qualcomm's SVP of Voice & Music, Anthony Murray, stated:

“This breakthrough single-chip solution is designed to dramatically reduce power consumption and offers enhanced processing capabilities to help our customers build new life-enhancing, feature-rich devices. This will open new possibilities for extended-use hearable applications including virtual assistants, augmented hearing and enhanced listening,”

This is important because Apple tends not to license out its chips, so third-party hearable and hearing aid manufacturers will need to reap this type of innovation from a company like Qualcomm to compete with the capabilities Apple brings to market.

The next one is actually a dividend of a dividend. Smart speakers, like Amazon's Echo, are cheap to manufacture due to the smartphone supply chain, and as a result they have driven down the price of digital signal processing (DSP) chips to a fraction of what it was. These specialized chips are used to process audio (all those Alexa commands) and have long been used by hearing aid manufacturers. Similar to the W1 chip, they now provide a low-power method that hearable manufacturers can utilize. More options for third-party manufacturers.

So, with major tech powerhouses sparring against each other in the innovation ring, hearing aid and hearable manufacturers are able to reap that innovation cheaply, ultimately resulting in better devices for consumers at ever-lower costs.

Computers are steadily getting smaller – From Chris Dixon’s What’s Next in Computing

Sensory Overload

What's on the horizon with the innovation happening within our ear-computers is where things really start to get exciting. The most obvious example of where things are headed is the sensors being fit into these devices. At its summit this year, Starkey announced an upcoming hearing aid that will contain an inertial sensor to detect falls. How can it detect people falling down? Another dividend: the same types of gyroscopes and accelerometers that work in tandem in our phones to detect orientation. This sensor combo can also be used to track overall motion, so not only can it detect a person falling down, it can also serve as an overall fitness monitor. These sensors are now small enough and cheap enough that virtually any ear-worn device manufacturer can embed them into its devices.
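
For a sense of how fall detection works in principle, here's a toy sketch over accelerometer magnitudes. The thresholds and window are invented for illustration; shipping products fuse gyroscope data and run trained classifiers on top of signatures like this:

```python
def detect_fall(accel_g, sample_rate_hz=50):
    """Toy fall detector over total-acceleration magnitudes (in g).

    Looks for the classic signature: a brief free fall (magnitude
    dropping toward zero) followed shortly by a hard impact spike.
    Thresholds here are illustrative, not clinically validated.
    """
    window = sample_rate_hz  # look for an impact within ~1 second
    for i, g in enumerate(accel_g):
        if g < 0.35:  # free fall: total acceleration drops toward zero
            impact = accel_g[i + 1:i + 1 + window]
            if any(sample > 2.5 for sample in impact):  # hard landing
                return True
    return False
```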

Valencell, a biometric sensor manufacturer, has been paving the way in what you can do when you implement heart rate sensors in ear-worn devices. By using a combination of the metrics these sensors record, you can measure things such as core temperature, which would be great for monitoring and alerting the user to the potential risk of heat exhaustion. You can also gather much more precise fitness metrics, such as the intensity level of one's workout.

And then there are the efforts around one day being able to non-invasively monitor glucose levels through a hearing aid or hearable. This would most likely be done via some type of biometric sensor or a combination of components derived from our smartphones as well. For the 29 million people living with diabetes in America who might also suffer from hearing loss, a gadget that provides both amplification and glucose monitoring would be much appreciated and compelling.

These types of sensors serve as tools to create new use cases in both preventative health and fitness applications that go beyond what exists today.

The Multi-Function Transformation

One of the reasons I started this blog was to raise awareness around the fact that the gadgets we wear in our ears are on the cusp of transforming from single-function devices, whether for audio consumption or amplification, into multi-function devices. All of these disparate innovations make it possible for such a device to emerge without limiting factors such as terrible battery life.

This type of transformation does a number of things. First of all, I believe that it will ultimately kill the negative stigma associated with hearing aids. If we’re all wearing devices in our ears for a multitude of reasons, for increasingly longer periods of time, then who’s to know why you’re even wearing something in your ear, let alone bat an eye at you?

The other major thing I foresee this doing is continuing to compound the network effects of these devices. Much like with our smartphones, when there is a critical mass of users, there tends to be a virtuous cycle of value creation spearheaded by developers, meaning there's more and more you can do with these devices. No one in 2008 could have predicted what the smartphone app economy would look like here in 2018. We're currently in that same type of starting period with our ear-computers, where the doors are opening for developers to create all manner of new functionality. Smart assistants alone represent a massive wave of potential new functionality that I've written about extensively, and as of January 2018, hearable and hearing aid manufacturers can easily integrate Alexa into their devices, thanks to Amazon's Alexa Mobile Accessory Kit.

It's hard to foresee everything we'll use these devices for, but the ability for something akin to the app economy to take root and flourish is now enabled by so many of these recent developments birthed by the smartphone supply chain. Challenges still remain for those producing our little ear-computers, but the fact of the matter is that the components housed inside these small gadgets are simultaneously getting cheaper, smaller, more durable and more sophisticated. There will be winners and losers as this evolves, but one obvious winner is the consumer.

-Thanks for reading-

Dave