Daily Updates, Future Ear Radio, hearables, Smart assistants

Podcasting + Search (Future Ear Daily Update 8-9-19)

8-9-19 - Podcasting + Search

A few months ago, I wrote a two-part article for Voicebot titled, “The Cambrian Explosion of Audio Content” (part 1, 2). In the articles, I laid out the development of all the necessary “ingredients” that must combine to create this explosion. We’re seeing significant movement in the financial markets around podcasting, largely fueled by Spotify’s slew of podcast-centric acquisitions made this year. AirPods are increasingly at the core of Apple’s growth strategy, and Voicebot just released an article announcing that the smart speaker install base has now reached 76 million. Hardware tailored to audio content consumption continues proliferating at scale. Tools designed to create audio content continue to emerge and mature as well, continually lowering the barrier to entry for content creators.

One of the remaining ingredients needed to make this explosion go atomic is intelligent search and discovery. Voicebot reported yesterday that Google will begin adding Podcasts to its search results. This is the beginning of the formation of one of the last pieces of the audio content puzzle to make this all go boom. Initially, Google’s foray into podcast search will be no different than the way it displays search results for the variety of other types of content it surfaces. Where this appears to be headed, however, is where things start to get very interesting.

In the blog post Google published announcing this new aspect to its search engine, it mentioned that later this year this feature will be coming to Google Assistant. This is a really big deal as the implications go beyond “searching” for podcasts, but rather sets the stage for Google Assistant to eventually work on the user’s behalf to intelligently surface podcast recommendations. In the two-part piece I wrote, I mentioned this as being the long-term hope for podcast discovery:

This is the same type of paradox that we’re facing more broadly with smart assistants. Yes, we can access 80,000 Alexa skills, but how many people use more than a handful? It’s not a matter of utility but discoverability; therefore, we need our smart assistants’ help. The answer to the problem would seem to lie in a personalized smart assistant having a contextual understanding of what we want. In the context of audio consumption, smart assistants would need to learn from our listening habits and behavior what it is that we like, based on the context they can infer from the various data available. These data points would include signals such as the time (post-work hours; work hours), geo-location (airport; office), peer behavior (what our friends are listening to), past listening habits, and any other information our assistants can glean from our behavior.
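To make that idea concrete, here is a toy sketch of how those contextual signals might be weighted into a single recommendation score. The signal names, weights and data shapes are all my own illustration, not anything from a real assistant's recommendation engine:

```python
# Hypothetical sketch: scoring a podcast episode from contextual signals.
# Signal names and weights are illustrative assumptions, not a real API.

def score_episode(episode, context, weights=None):
    """Combine contextual signals into a single relevance score (0 to 1)."""
    weights = weights or {
        "past_listens": 0.4,    # affinity inferred from listening history
        "peer_listens": 0.2,    # what our friends are listening to
        "time_match": 0.2,      # e.g. an episode that fits the commute
        "location_match": 0.2,  # e.g. short news briefs at the office
    }
    signals = {
        "past_listens": episode["genre"] in context["favorite_genres"],
        "peer_listens": episode["id"] in context["friends_recent"],
        "time_match": episode["minutes"] <= context["available_minutes"],
        "location_match": context["location"] in episode.get("good_for", []),
    }
    return sum(weights[k] for k, fired in signals.items() if fired)

context = {
    "favorite_genres": {"tech"},
    "friends_recent": {"ep42"},
    "available_minutes": 30,
    "location": "commute",
}
episode = {"id": "ep42", "genre": "tech", "minutes": 25, "good_for": ["commute"]}
print(score_episode(episode, context))  # all four signals fire -> 1.0
```

The interesting engineering problem isn't the scoring itself; it's gathering and inferring the context on the user's behalf, which is exactly where an assistant like Google's has an edge.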

Say what you want about Google’s privacy position, but Google is leaning into the fact that it knows so much about its users (i.e., Duplex on the Web). Obviously, not everyone will be gung-ho about Google holding so much user data and the ways in which it will use said data. That being said, I’m not sure there is another voice assistant on the market today that is capable of the level of contextual understanding required for these intelligent, context-rich applications, such as podcast recommendations through learned behavior. Time will tell, but Google just took a sizable step toward enabling this type of future.

-Thanks for Reading-


To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Daily Updates, Future Ear Radio, Smart assistants

Alexa’s Customer Acquisition Cost (Future Ear Daily Update 7-17-19)

7-17-19 - Alexa's Customer Acquisition Cost

There were two headlines coming out of this year’s two-day Amazon Prime event that really caught my attention as I believe they tell a similar story about customer acquisition cost (CAC) and the long game Amazon is playing. For starters, according to an Amazon blog post, “Amazon welcomed more new Prime members on July 15 than any previous day, and almost as many on July 16 – making these the two biggest days ever for member signups.”

Tren Griffin, who writes the blog 25iq and is a long time Silicon Valley veteran, succinctly summarized the broader takeaway:

Tren Griffin - 7-17-19

On the surface, Prime Day appears to be a giant flash sale, but in reality, it’s a ploy for Amazon to sign up more Prime members. This is important to note, as Consumer Intelligence Research Partners found in 2018 that Prime members spend an average of $1,400 a year on Amazon, while non-Prime customers spend an average of $600. Amazon will easily and quickly make back whatever revenue it gave away during Prime Day with the new members it signed up, as those members’ value tends to more than double.
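The back-of-the-envelope math here is straightforward. Using the CIRP averages above, plus an assumed per-signup discount (my illustrative number, not one from the article):

```python
# Payback math using the CIRP averages cited above.
prime_spend = 1400      # avg annual spend of a Prime member ($)
non_prime_spend = 600   # avg annual spend of a non-Prime customer ($)

# Extra revenue per customer converted to Prime, per year.
incremental_spend = prime_spend - non_prime_spend
print(incremental_spend)  # 800

# Assume, purely for illustration, $50 of margin given away per signup
# on Prime Day. That cost is recouped many times over in year one.
assumed_discount = 50
print(incremental_spend / assumed_discount)  # 16.0
```

Even if the real discount were several times larger, the trade would still pay off quickly, which is the long game Griffin is pointing at.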

The second headline that caught my attention was that, according to Amazon, “Prime Day was also the biggest event ever for Amazon devices, when comparing two-day periods – top-selling deals worldwide were Echo Dot, Fire TV Stick with Alexa Voice Remote, and Fire TV Stick 4K with Alexa Voice Remote.”

As Brian Roemmele astutely points out, this wave of new users is evidenced by Alexa’s surge in the iOS App Store rankings, from 96 to 36 in less than 24 hours (I imagine this will keep climbing as more people receive their devices):

We don’t know for sure how many Alexa-enabled devices have been sold this Prime Day yet, but Voicebot published an article that points out multiple Alexa items being sold out and currently on back order, including the Echo Show 5, which is on back order until September. There was seemingly strong demand for Alexa devices during Prime Day once again this year.

What strikes me here is that, similar to Prime Day being a ploy to grow the Prime membership base under the guise of a flash sale, Amazon’s slashing of Alexa device prices on Prime Day serves a similar purpose, as Alexa is a “membership” of sorts. What I mean is that once consumers have bought their first Alexa or Google smart speaker and then decide to add more devices down the line, they’ll likely stick with the same assistant, as they’ll be augmenting their living spaces with more access points to that assistant.

According to Voicebot’s Consumer Adoption Report, published in March 2019, 40% of smart speaker owners have multiple devices, up from 34% the previous year. So, to Tren Griffin’s point, Amazon knows that while it’s forfeiting margin on the Alexa devices it’s discounting now, it will make it up on the back end as more people begin buying additional devices.

Furthermore, the bigger picture is that Amazon wants to own the next generation of computing. Amazon missed the boat on mobile and is banking on voice assistants (a bet the size of a 10,000-person team) being the future. So, the CAC of Alexa users will probably look incredibly cheap in hindsight as the utility of Alexa rises, and fellow voice assistant providers will find that the CAC of poaching Alexa customers grows increasingly expensive.

Amazon seems to be using the same playbook with Alexa that it did with Prime: “buy” its user base early and on the cheap, then lock users in with increasing value. Traditional brick-and-mortar companies are finding today that trying to poach Prime members is immensely challenging, considering all the value Amazon has been baking into its Prime membership; none of Amazon’s retail and e-commerce competitors can seem to match it, as the cost of building a service to compete with Prime (aka the CAC) is just too high today.

-Thanks for Reading-


audiology, Biometrics, hearables, Smart assistants, VoiceFirst

Capping Off Year One with my AudiologyOnline Webinar


A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline for a number of audiologists around the country. The same week a year prior, I launched this blog. So, for me, the webinar was a culmination of the past year’s blog posts, tweets and videos, distilled into a one-hour presentation. Having to consolidate so much of what I’ve learned into a single hour forced me to choose the things I thought were most pertinent to hearing healthcare professionals.

If you’re interested, feel free to view the webinar using this link (you’ll need to register, though you can register for free and there’s no type of commitment): https://www.audiologyonline.com/audiology-ceus/course/connectivity-and-future-hearing-aid-31891  

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been watching the way things have unfolded: the technological progress made in both the hardware and software of in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from previous eras, where disruption might have felt as if it were on the doorstep yet didn’t come to pass, and that’s largely because the underlying technology has matured so much of late.

I would argue that the single biggest reason why this time is different is due to the smartphone supply chain, or as I stated in my talk – The Peace Dividends of the Smartphone War (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that comprise the smartphone (which also comprise pretty much all consumer technology) have gotten incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc). This means more companies than ever, from various segments, are competing to set themselves apart in any way they can in their respective industries, and therefore are providing innovative breakthroughs for the rest of the industry to benefit from. So, hearing aids and hearables are benefiting from breakthroughs occurring in smart speakers and drones because much of the innovation can be reaped and applied across the whole consumer technology space, rather than just limited to one particular industry.

Learning from Apple

Another point that I really tried to hammer home is that our “connected” in-the-ear devices are now “exotropic,” meaning they appreciate in value over time. Connectivity enables the device to enhance itself, through software/firmware updates and app integrations, even after the point of sale, much like a smartphone. So, in a similar fashion to our hearing aids and hearables reaping innovation from the rest of consumer technology, connectivity does a similar thing: it enables network effects.

If you study Apple and examine why the iPhone was so successful, you’ll see that its success was largely predicated on the iOS app store, which served as a marketplace that connected developers with users. The more customers (users) there were, the more incentive there was to come and sell your goods as a merchant (developers) in the marketplace (app store). Therefore the marketplace grew and grew as the two sides constantly incentivized one another to grow, which compounded the growth.

That phenomenon I just described is called two-sided network effects and we’re beginning to see the same type of network effects take hold with our body-worn computers. That’s why a decent portion of my talk was spent around the Apple Watch. Wearables, hearables or smart hearing aids – they’re all effectively the same thing: a body-worn computer. Much of the innovation and use cases beginning to surface from the Apple Watch can be applied to our ear-worn computers too. Therefore, Apple Watch users and hearable users comprise the same user-base to an extent (they’re all body computers), which means that developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices too. The utility and value of our smart hearing aids and hearables will just continue to rise, long after the patient has purchased their device, making for a much stronger value proposition.

Smart Assistant Usage will be Big

One of the most exciting use cases that I think is on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (already happening in hearables). I’ve attended multiple conferences dedicated to this technology and have posted a number of blogs on smart assistants and the voice user interface, so I don’t want to rehash every reason why I think this is going to be monumental for this industry’s product offering. The main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers: older adults. The reason for this fast adoption, I believe, is that there are few limitations to speaking and issuing commands to your technology with your voice. This is why voice is so unique; it’s conducive to the full age spectrum, from kids to older adults, while something like the mobile interface isn’t particularly conducive to older adults who might have poor eyesight, dexterity or mobility.

This user interface and the smart assistants that mediate the commands are incredibly primitive today relative to what they’ll mature to become. Jeff Bezos famously quipped in 2016, in regard to this technology, “It’s the first inning. It might even be the first guy’s up at bat.” Even in the technology’s infancy, the adoption of smart speakers among the older cohort is surprising, and it suggests they’re beginning to depend on smart-assistant-mediated voice commands rather than tapping, touching and swiping on their phones. Once this becomes integrated into hearing aids, patients will be able to perform many of the same functions that you or I do with our phones simply by asking their smart assistant. One’s hearing aid serving the role (to an extent) of a smartphone further strengthens the value proposition of the device.

Biometric Sensors

If there’s one set of use cases that I think can rival the overall utility of voice, it would be the implementation of biometric sensors in ear-worn devices. To be perfectly honest, I am startled by how quickly this is already beginning to happen, with Starkey making the first move by introducing a gyroscope and accelerometer into its Livio AI hearing aid, allowing for motion tracking. These sensors support the use cases of fall detection and fitness tracking. If “big data” was the buzz of the past decade, then “small data,” or personal data, will be the buzz of the next ten years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a “nice to have” to a “need to have” for those who need to be wearing an all-day data recorder. That’s exactly the role the hearing aid is shaping up to serve: a data collector.

The type of data being recorded is really only limited by the types of sensors embedded in the device, and we’ll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid in 2019 for OEMs to implement in their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of people have credited their Apple Watch with saving their life, as they were alerted to abnormal spikes in their resting heart rate that turned out to be life-threatening abnormalities in their cardiac activity. So, we’re talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.
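The alerting logic behind those stories is conceptually simple: compare the latest resting heart rate to a rolling personal baseline. Here's a minimal sketch; the threshold and baseline window are my assumptions, not how any shipping wearable actually computes it:

```python
# Illustrative sketch of a resting heart rate alert: flag a reading
# that exceeds the user's recent baseline by a set margin.
# The 30% threshold is an assumption for demonstration only.
from statistics import mean

def resting_hr_alert(history_bpm, latest_bpm, threshold=1.3):
    """True if the latest resting HR exceeds 130% of the recent baseline."""
    baseline = mean(history_bpm)
    return latest_bpm > baseline * threshold

week = [58, 60, 57, 59, 61, 58, 60]  # a week of normal resting readings
print(resting_hr_alert(week, 85))    # well above baseline -> True
print(resting_hr_alert(week, 60))    # within normal range -> False
```

A production system would obviously need far more care (artifact rejection, sustained-elevation windows, clinical thresholds), but the point stands: once the sensor is in the ear all day, this kind of screening becomes a background feature of the device.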

As these types of sensors continue to shrink in size and become more capable, we’re likely to see more types of data harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor capable of gathering glucose levels non-invasively, which would be a game-changer for the 100 million people with diabetes or pre-diabetes. We’re truly at the tip of the iceberg with this aspect of the devices, and it would make the hearing healthcare professional a necessary component (fitting the “data collector”) for the cardiologist or physician who needs their patient’s health data monitored.

More to Come

This is just some of what’s happened across the past year. One year! I could write another 1,500 words on interesting developments that have occurred this year, but these are my favorites. There is seemingly so much more to come with this technology, and as these devices continue their computerized transformation into something more akin to the iPhone, there’s no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, “If you build it, they will come.” Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can’t wait to see what’s to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

-Thanks for Reading-


audiology, Biometrics, hearables, Live-Language Translation, News Updates, Smart assistants, VoiceFirst

Hearing Aid Use Cases are Beginning to Grow


The Next Frontier

In my first post back in 2017, I wrote that the inspiration for creating this blog was to provide an ongoing account of what happens after we connect our ears to the internet (via our smartphones). What new applications and functionality might emerge when an audio device serves as an extension of one’s smartphone? What new hardware possibilities can be implemented in light of the fact that the audio device is now “connected?” This week, Starkey moved the ball forward with changing the narrative and design around what a hearing aid can be with the debut of its new Livio AI hearing aid.

Livio AI embodies the transition to a multi-purpose device, akin to our hearables, with new hardware in the form of embedded sensors not seen in hearing aids to date, and companion apps that allow for more user control and increased functionality. Much like ReSound fired the first shot in the race to create connected hearing aids with the first “Made for iPhone” hearing aid, Starkey has fired the first shot in what I believe will be the next frontier: the race to create the most compelling multi-purpose hearing aid.

With the OTC changes fast approaching, I’m of the mind that one way hearing healthcare professionals will be able to differentiate themselves in this new environment is by offering exceptional service and guidance around unlocking all the value possible from these multi-purpose hearing aids. This spans the whole patient experience, from the way the device is programmed and fit to educating the patient on how to use the new features. Let’s take a look at what one of the first forays into this arena looks like by breaking down the Livio AI hearing aid.

Livio AI’s Thrive App

Thrive is a companion app that can be downloaded for use with Livio AI, and I think it’s interesting for a number of reasons. For starters, it’s Starkey’s attempt to combat the potential link between cognitive decline and hearing loss in our aging population. It does this by “gamifying” two sets of metrics that roll up into a 200-point “Thrive” score that’s meant to be achieved regularly.


The first set of metrics is geared toward measuring your body activity, based on data collected through sensors to gauge your daily movement. By embedding a gyroscope and accelerometer in the hearing aid, Livio AI is able to track your movement, so it can monitor some of the same metrics as an Apple Watch or Fitbit. Each day, your goal is to reach 100 “Body” points by moving, exercising and standing up throughout the day.

The next bucket of metrics being collected is entirely unique to this hearing aid and is based around the way in which you wear your hearing aids. This “brain” category measures the daily duration the user wears the hearing aid, the amount of time spent “engaging” other people (which is important for maintaining a healthy mind), and the various acoustic environments that the user experiences each day.


So, through gamification, the hearing aid wearer is encouraged to live a healthy lifestyle and use their hearing aids throughout the day in various acoustic settings, engaging in stimulating conversation. To me, this will serve as a really good tool for the audiologist to ensure that the patient is wearing the hearing aid to its fullest. Additionally, for those who are caring for an elderly loved one, this can be a very effective way to track how active your loved one’s lifestyle is and whether they’re actually wearing their hearing aids. That’s the real sweet spot here, as you can quickly pull up their Thrive score history to get a sense of what your aging loved one is doing.
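To illustrate how a score like this might roll up, here's a sketch of a Thrive-style daily total: up to 100 "body" points plus up to 100 "brain" points. Starkey hasn't published its exact formulas, so every sub-goal, target and weight below is an assumption of mine:

```python
# Illustrative Thrive-style score: 100 "body" points + 100 "brain"
# points = 200-point daily total. All targets/weights are assumptions.

def body_score(steps, exercise_min, stand_hours):
    # Each sub-goal contributes a capped share of the 100 body points.
    return min(100, round(
        min(steps / 10_000, 1) * 34 +      # daily movement
        min(exercise_min / 30, 1) * 33 +   # exercise minutes
        min(stand_hours / 12, 1) * 33))    # hours with some standing

def brain_score(hours_worn, engagement_hours, environments):
    # Wear time, conversational engagement, and acoustic variety.
    return min(100, round(
        min(hours_worn / 12, 1) * 40 +
        min(engagement_hours / 4, 1) * 40 +
        min(environments / 4, 1) * 20))

def thrive_score(steps, exercise_min, stand_hours,
                 hours_worn, engagement_hours, environments):
    return (body_score(steps, exercise_min, stand_hours) +
            brain_score(hours_worn, engagement_hours, environments))

# A fully active day (all targets hit) reaches the 200-point ceiling.
print(thrive_score(10_000, 30, 12, 12, 4, 4))  # 200
```

The useful property of a capped, rolled-up score like this is that a caregiver or audiologist can glance at a single number per day and immediately see whether the person is both moving and actually wearing the hearing aids.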

Healthkit SDK Integration


Another very subtle thing about the Thrive app that has some serious future applications is the fact that Starkey has integrated Thrive’s data with Apple’s HealthKit SDK. This is one of the only third-party device integrations of this kind that I know of at this point. The image above is a side-by-side comparison of what Apple’s Health app looks like with and without Apple Watch integration. As you can see, the image on the right displays the biometric data that was recorded by my Watch and sent to my Health app. Livio AI’s data will be displayed in the same fashion.

So what? Well, as I wrote previously, the underlying reason this is a big deal is that Apple has designed its Health app with future applications in mind. In essence, Apple appears to be aiming to make the data easily transferable, in an encrypted manner (HIPAA-friendly), across Apple-certified devices. So, it’s completely conceivable that you’d be able to take the biometric data being ported into your Health app (i.e., Livio AI data) and share it with a medical professional.

For an audiologist, this would mean being able to remotely view the data, which might help explain why a patient is having a poor experience with their hearing aids (they’re not even wearing them). Down the line, if hearing aids like Livio were to have more sophisticated sensors embedded, such as a PPG sensor to monitor blood pressure or a sensor that monitors body temperature (as the tympanic membrane radiates body heat), you’d be able to transfer a whole host of biometric data to your physician to help them assess what might be wrong when you’re feeling ill. As a hearing healthcare professional, there’s a possibility that in the near future you will be dispensing a device that is invaluable not only to your patient but to their physician as well.

Increased Intelligence

Beyond the fitness and brain activity tracking, there are some other cool use cases packed into this hearing aid. There’s a real-time language translation feature that supports 27 languages, accessed through the Thrive app and powered through the cloud (so you’ll need internet access to use it). This seems to draw from the Starkey-Bragi partnership formed a few years ago, which was a good indication that Starkey was looking to venture down the path of making a feature-rich hearing aid with multiple uses.

Another aspect of the smartphone that Livio AI leverages is GPS. This allows users to locate their hearing aids with their smartphone if the devices go missing. Additionally, the user can set “memories” that adjust hearing aid settings based on the acoustic environment they’re in. If there’s a local coffee shop or venue the user frequents where they’ll want their hearing aids boosted or turned down in some fashion, “memories” will automatically adjust the settings based on the pre-determined GPS location.
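Under the hood, a GPS-triggered "memory" is essentially a geofence check: if the phone is within some radius of a saved location, apply that location's preset. Here's a minimal sketch; the preset names, coordinates and radii are all made up for illustration:

```python
# Sketch of a geofenced "memory": apply a saved hearing aid preset
# when the phone is near a saved location. All data is illustrative.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

memories = [
    # (name, lat, lon, radius_m, preset)
    ("coffee shop", 38.6270, -90.1994, 75, "noise_reduction_high"),
    ("home", 38.6400, -90.2500, 100, "default"),
]

def preset_for(lat, lon):
    """Return the preset of the first saved location within range."""
    for name, mlat, mlon, radius, preset in memories:
        if distance_m(lat, lon, mlat, mlon) <= radius:
            return preset
    return "default"

print(preset_for(38.6270, -90.1994))  # at the coffee shop -> noise_reduction_high
```

In practice the phone's OS handles the geofencing and simply notifies the companion app, but the decision logic is this simple.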

If you “pop the hood” of the device and take a look inside, you’ll see that the components comprising the hearing aid have been significantly upgraded too. Livio AI boasts triple the computing power and double the local memory capacity of the previous line of Starkey hearing aids. This should come as no surprise, as the most impressive innovation happening with ear-worn devices is inside the devices, thanks to the economies of scale from the massive proliferation of smartphones. This increase in computing power and memory capacity is yet another example of the “peace dividends of the smartphone war.” This type of computing power allows for a level of machine learning (similar to Widex’s Evoke) that adjusts to different sound environments based on all the acoustic data that Starkey’s cloud is processing.

The Race is On

As I mentioned at the beginning of this post, Starkey has initiated a new phase of hearing aid technology, and my hope is that it spurs the other four major manufacturers to follow suit, in the same way everyone followed ReSound’s lead in bringing “connected” hearing aids to market. Starkey CTO Achin Bhowmik believes that sensors and AI will do to the hearing aid what Apple did to the phone, and I don’t disagree.

As I pointed out in a previous post, the last ten years of computing were centered on porting the web to the apps on our smartphones. The next wave of computing appears to be a process of offloading and unbundling the “jobs” our smartphone apps represent to a combination of wearables and voice computing. I believe the ear will play a central role in this next wave, largely because it is a perfect position for an ear-worn computer equipped with biometric sensors that doubles as a home for the smart assistant(s) that will mediate our voice commands. This is the dawn of a brand new day, and I can’t help but feel very optimistic about the future of this industry and the hearing healthcare professionals who embrace these new offerings. In the end, however, it’s the patient who will benefit the most, and that’s a good thing when so many people could and should be treating their hearing loss.

-Thanks for Reading-


Conferences, hearables, Smart assistants, VoiceFirst

The State of Smart Assistants + Healthcare


Last week, I was fortunate to travel to Boston to attend the Voice of Healthcare Summit at Harvard Medical School. My motivation for attending this conference was to better understand how smart assistants are currently being implemented in the various segments of our healthcare system and to learn what’s on the horizon in the coming years. If you’ve been following my blog or Twitter feed, then you’ll know that I envision a near-term future where smart assistants become integrated into our in-the-ear devices (both hearables and Bluetooth hearing aids). Once that integration becomes commonplace, I imagine we’ll see a number of really interesting and unique health-specific use cases that leverage the combination of the smartphone, the sensors embedded in the in-the-ear device, and smart assistants.


Bradley Metrock, Matt Cybulsky and the rest of the summit team that put on this event truly knocked it out of the park, as the speaker set and the attendees included a wide array of different backgrounds and perspectives, which resulted in some very interesting talks and discussions. Based on what I gathered from the summit, smart assistants will yield different types of value to three groups: patients, remote caregivers, and clinicians and their staff.


At this point in time, none of our mainstream smart assistants are HIPAA-compliant, limiting the types of skills and actions that can be developed for healthcare. Companies like Orbita are working around this limitation by taking the same building blocks required to create voice skills and building secure voice skills from scratch on its own platform. Developers who want to create skills/actions for Alexa or Google that use HIPAA-covered data, however, will have to wait until the smart assistant platforms become HIPAA-compliant, which could happen this year or next.

It’s easy to imagine the upside that will come with HIPAA-compliant assistants, as that would allow the smart assistant to retrieve one’s medical data. If I had a chronic condition that required me to take five separate medications, Alexa could audibly remind me to take each of the five, by name, and respond to any questions I might have regarding any of them. If I told Alexa about a side effect I was having, Alexa might even be able to identify which of the five medications was possibly causing it and loop in my physician for her input. As Brian Roemmele has pointed out repeatedly, the future of our smart assistants runs through each of our own personalized, contextual information, and until these assistants are HIPAA-compliant, the assistant has to operate at a general level rather than a personalized one.

That’s not to say there isn’t value in generalized skills, or in skills that don’t use data falling under the HIPAA umbrella and therefore can be personalized. Devin Nadar from Boston Children’s Hospital walked us through its KidsMD skill, which allows parents to ask general questions about their children’s illness, recovery, symptoms, etc., with the peace of mind that the answers they receive are sourced and vetted by Boston Children’s Hospital; it’s not just random responses retrieved from the internet. Cigna’s Rowena Track showed how its skill allows you to check things such as your HSA balance or urgent care wait times.

Care Givers and “Care Assistants”

By 2029, 18% of Americans will be over the age of 65, and average US life expectancy is already climbing toward 80. That number will likely continue to climb, which brings us to the question: how are we going to take care of our aging population? As Laurie Orlov, industry analyst and writer of the popular Aging in Place blog, so eloquently stated during her talk, “The beneficiaries of smart assistants will be disabled and elderly people…and everyone else.” So, based on that sentiment and the fact that the demand to support our aging population is rising, enter into the equation what John Loughnane of CCA described as “care assistants.”

From Laurie Orlov’s “Technology for Older Adults: 2018 Voice First — What’s Now and Next” Presentation at the VOH Summit 2018

As Laurie’s slide above illustrates, smart assistants or “care assistants” in this scenario, help to triangulate the relationship between the doctor, the patient and those who are taking care of the patient, whether that be care givers or family. These “care assistants” can effectively be programmed with helpful responses around medication cadence, what the patient can or can’t do and for how long they’re restricted, what they can eat, when to change bandages and how to do so. In essence, the “care assistant” serves as an extension to the care giver and the trust they provide, allowing for more self-sufficiency and therefore, less of a burden on the care giver.

As I have written about before, the beauty of smart assistants is that even today, in their infancy and primitive state, they can empower disabled and elderly people in ways that no previous interface has. This matters from a fiscal standpoint too: as Nate Treloar, President of Orbita, pointed out, social isolation costs Medicare $6.7 billion per year. Smart assistants act as a tether to our collective social fabric for these groups, and multiple doctors at the summit cited disabled or elderly patients who described their experience of using a smart assistant as “life changing.” What might seem trivial to you or me, like being able to send a message with your voice, might be truly groundbreaking to someone who has never had that type of control.

The Clinician and the System

The last group that stands to gain from this integration is doctors and those working in the healthcare system. According to the Annals of Internal Medicine, for every hour a physician spends with a patient, they must spend two hours on related administrative work. That’s terribly inefficient and something that I’m sure drives physicians insane. The drudgery of clerical work seems ripe for smart assistants to provide efficiencies: dictating notes, quickly retrieving past medical information, sharing that information across systems, and so on. Less time doing clerical work and more time helping people.

Boston Children’s Hospital uses an internal system called ALICE and by layering voice onto this system, admins, nurses and other staff can very quickly retrieve vital information such as:

  • “Who is the respiratory therapist for bed 5?”
  • “Which beds are free on the unit?”
  • “What’s the phone number of the MSICU Pharmacist?”
  • “Who is the Neuro-surgery attending?”

And boom, you quickly get the answer to any of these. That’s removing friction in a setting where time might really be of the essence. As Dr. Teri Fisher, host of the VoiceFirst Health podcast, pointed out during his presentation, our smart assistants can be used to reduce the strain on the overall system by playing the role of triage nurse, admin assistant, healthcare guide and so on.

What Lies Ahead

It’s always important with smart assistants and Voice to simultaneously temper current expectations while remaining optimistic about the future. Jeff Bezos joked in 2016 that “not only are we in the first inning of this technology, we might even be at the first batter.” It’s early, but as Bret Kinsella of Voicebot displayed during his talk, smart speakers represent the fastest adoption of any consumer technology product ever:

From Bret Kinsella’s “Voice Assistant Market Adoption” presentation at the VOH Summit 2018

The same goes for how smart assistants are being integrated into our healthcare system. Much like Bezos’ joke implies, very little of this is even HIPAA-compliant yet. With that being said, you still have companies and hospitals the size of Cigna and Boston Children’s Hospital putting forth resources to start building out their offerings for an impending VoiceFirst world. We might not be able to offer true, personalized engagement with the assistant yet, but there’s still a lot of value that can be derived at the general level.

As this space matures, so too will the degree to which we can unlock efficiencies within our healthcare system across the board. Patients of all ages and medical conditions will be more empowered to receive information, prompts and reminders to better manage their conditions. This means that those taking care of the patients are less burdened too, as they can offload the information aspect of their caregiving to the “care assistant.” That, in turn, frees up the system as a whole, as there are fewer general inquiries (and down the line, personal inquiries), meaning fewer patients need to come in because more can be served at home. Finally, clinicians can be more efficient too, as they can offload clerical work to the assistant, better retrieve data and information on a patient-to-patient basis, and communicate with their patients more efficiently, even remotely.

As smart assistants become more integral to our healthcare system, my belief is that on-body access to the assistant will be desired. Patients, caregivers, clinicians and medical staff all have their own reasons for wanting their assistant right there with them at all times. What better place than a discreet, in-the-ear device that allows for one-to-one communication with the assistant?

-Thanks for Reading-


hearables, Smart assistants, VoiceFirst

10 Years after the App Store, what’s Next?


As we celebrate the 10-year anniversary of the App Store this week, it seems only natural to begin wondering what the next 10 years will look like. What modalities, devices, interfaces and platforms will rise to the top of our collective preferences? There’s clearly an abundance of buzzwords thrown around these days that indicate potential directions things may go, but the area I want to focus on is the Voice interface. This includes smart assistants and all the devices they’re housed in.

Gartner’s L2 recently published the chart below, which might seem to pour some cold water on the momentum that has been touted around the whole Voice movement:

Before I go into why this chart probably doesn’t matter in the grand scheme of things, there were some solid responses as to why these trend lines are so disparate. Here’s what Katie McMahon, the VP of SoundHound, had to say:

Katie McMahon Tweet

One of the primary reasons the app economy took off was two-sided network effects predicated on developer buy-in, driven by a huge monetary incentive. Of course there was an explosion of new applications and things you could do with your smartphone; there was a ton of money to be made developing those apps. It was a modern-day gold rush. The same financial incentive around developing voice skills doesn’t yet exist.

Here’s a good point Chris Geison, senior Voice UX researcher at AnswerLab, made around one of the current, major pitfalls of Voice skill/action discovery:

Chris Geison Tweet

So, based on Chris and AnswerLab’s research, a third of users don’t really know that an “app-like economy” exists for their smart speakers. That’s rather startling, given that Voicebot reported at the end of June that there are now 50 million smart speaker users in the US. Is it really possible that tens of millions of people don’t fully understand the capabilities and the companion ecosystem that come with the smart speakers they own? It would appear so, as the majority of users rely on native functionality that doesn’t require a downloaded skill, as illustrated by this awesome chart from Voicebot’s Smart Speaker Consumer Adoption Report 2018:

As you can see from the chart above, only 46.5% of respondents from this survey have used a skill/action.

Jobs to be Done

To understand how we move forward and what’s necessary to do so, it’s important to look at how we use our phones today. As I wrote in a previous post, each computer interface evolution has been a progression of reducing friction, or time spent doing a mechanical task. Today’s dominant consumer interface, mobile, is navigated through apps. Apps represent jobs that need doing, whether that’s a tool to get us from A to B (maps), filling time when you’re bored (games/social media/video), or exercising or relaxing the mind (sudoku/chess/books/music/podcasts). Every single app on your phone is a tool for you to execute the job you’re trying to accomplish.

From Brian Roemmele’s Read Multiplex 9/27/17

So, if we’re looking to reduce friction as we enter a new era of computing interaction, we should note that friction with mobile is primarily consolidated around the mechanical process of pulling out your phone, then digging through and toggling between apps to get the job done. That mechanical process is the friction that needs to be removed.

Workflow + Siri Shortcuts

I was initially underwhelmed by Apple’s WWDC this year because I felt that Siri had once again been relegated to the backseat of Apple’s agenda, which would be increasingly negligent given how aggressively Amazon, Google and the others have been moving into this area. What I didn’t fully understand was how crucial Apple’s 2017 acquisition of Workflow was, and how it might apply to Siri.

Siri Shortcuts ultimately represent a way for users to program “shortcuts” between apps, so that they can execute a string of commands together as a “workflow” via a single voice command. The real beauty of this is that each shortcut can be made public (hello, developers), and Siri will proactively suggest shortcuts for you based on what it learns about your preferences and contextual behavior. Power users empowering mainstream users with their shortcuts, as suggested by Siri. Remember, context is king with our smart assistants.

Brian Roemmele expanded on this acquisition and the announced integration of Workflow with Siri on Rene Ritchie’s Vector podcast this week. Brian said something in this podcast that really jumped out at me (~38 min mark):

“Imagine every single app on the app store. Now deconstruct all those apps into Jobs to be Done, or intents, or taxonomies. And then imagine, with something like crayons, you can start connecting these things artistically any way you want… Imagine you can do that without mechanically doing it.”

This cuts right to the core of what I think the foreseeable future looks like. Siri Shortcuts, powered by Workflow, take the role of those crayons. If we’ve extracted all the utility and jobs that each app represents and put them together into one big pile, we can start to combine various elements of different apps for increased efficiency. To me, this really screams “removing mechanical friction.” When I can speak one command and have my smart assistant knock out the work I currently do by digging, tapping and toggling through my apps, that’s a significant increase in efficiency:

  • “Start my morning routine” – starts my morning playlist, compares Lyft and Uber and displays the cheaper (or quicker, depending on what I prefer) commute, orders my coffee from Starbucks, and queues up three stories I want to read on my way to work.
  • “When’s a good time to go to DC?” – pulls together things like airfare, Airbnb listings, events that might be going on at the time (concerts or sports games surfaced from Ticketmaster/SeatGeek/Songkick), weather trends, etc.

The options are up to one’s imagination and this interface really does begin to resemble a conversational dialogue as the jobs that need to be done become increasingly more self-programmed by the smart assistant over time.
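To make the idea concrete, the bullet-point shortcuts above can be sketched as a simple pipeline of “jobs.” Everything below is illustrative Python, not a real Siri Shortcuts API; the function names, ride prices and services are all made up:

```python
# Hypothetical sketch: one voice command fanning out into a chain of
# app "intents" (jobs to be done). Names and values are illustrative.

def play_playlist(name):
    return f"playing '{name}'"

def cheapest_ride(origin, destination):
    # Imagine live quotes from ride-sharing services here.
    quotes = {"Lyft": 12.50, "Uber": 14.00}
    service = min(quotes, key=quotes.get)  # pick the cheaper option
    return f"{service} ride from {origin} to {destination}"

def queue_reading(count):
    return f"queued {count} stories"

def morning_routine():
    # "Start my morning routine" executes each job without any tapping.
    return [
        play_playlist("Morning Mix"),
        cheapest_ride("home", "work"),
        queue_reading(3),
    ]

for step in morning_routine():
    print(step)
```

The point of the sketch is the shape, not the details: the assistant strings together deconstructed app intents so the user never mechanically toggles between apps.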

All Together Now

Apple isn’t the only one deploying this strategy; Google’s developer conference featured a strikingly similar approach to unbundling apps, called Slices and App Actions. It would appear that the theme heading into the next 10 years is finding ways to create efficiencies by leveraging our smart assistants to do the grunt work for us. Amazon’s skill ecosystem is currently plagued by the discovery issues highlighted above, but the recent deployment of CanFulfillIntentRequest for developers will hopefully allow mainstream users to discover skills and functionality more easily. The hope is that all the new voice skills, and the jobs they do, can be surfaced much more proactively. That’s why I don’t fixate on the number of skills created to this point: the way we effectively access those skills hasn’t really matured yet.
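For a rough sense of how CanFulfillIntentRequest works: Alexa asks a skill, without the user naming it, whether it can handle a given intent, and the skill answers with a canFulfillIntent payload. The sketch below models that JSON reply as a Python dict; the “City” slot is a made-up example, and while the overall shape follows Amazon’s published examples, check the current Alexa Skills Kit docs for the authoritative format:

```python
# Sketch of a skill's reply to a CanFulfillIntentRequest, written as a
# Python dict (the real exchange is JSON over HTTPS). The slot name
# "City" is invented for illustration.

response = {
    "version": "1.0",
    "response": {
        "canFulfillIntent": {
            # Signals that this skill can handle the intent even though
            # the user never invoked the skill by name.
            "canFulfill": "YES",
            "slots": {
                "City": {
                    "canUnderstand": "YES",  # skill recognizes the value
                    "canFulfill": "YES",     # and can act on it
                },
            },
        },
    },
}

print(response["response"]["canFulfillIntent"]["canFulfill"])
```

Alexa can then rank skills that answered “YES” and route a name-free request to the best match, which is exactly the proactive surfacing described above.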

What’s totally uncertain is whether the companies behind the assistants will play nice with each other. In an ideal world, our assistants would specialize in their own domains and work together. It would be nice to use Siri on my phone and have it hand off to Alexa when I need something from the Amazon empire or want to control an Alexa-based IoT device. It would be great if Siri and Google Assistant communicated in the background so that all my Gmail and calendar context was available for Siri to access.

Access Point

It’s possible that we’ll continue to have “silos” of skills and apps, and therefore silos of contextual data, if the platforms don’t play nice together. Regardless, within each platform the great unbundling seems to be underway. As we move toward a truly conversational interface, where we converse with our assistants to accomplish our jobs to be done, we should then think about where we’re accessing the assistant.

I’m of the mind that as we depend on our smart assistants more and more, we’ll want access to our assistants at all times. Therefore, I believe that we’ll engage with smart assistants across multiple different devices, but with continuity, all throughout the day. I may be conversing with my assistants in my home via smart speakers or IoT devices, in my car on the way to work, and in my smart-assistant integrated hearables or hearing aids throughout the course of my day while I’m on-the-go.

While the past 10 years were all about consolidating and porting the web to our mobile devices via apps, the next 10 might be about unlocking new efficiencies and further reducing friction by unbundling those apps and letting our smart assistants operate in the background, doing our grunt work and accomplishing the jobs we need done. It’s not as if smartphones and tablets are going to go away; on the contrary, it’s how we use them and derive utility from them that will fundamentally change.

-Thanks for Reading-


Biometrics, hearables, Smart assistants, Trends

The Innovation Happening Inside Hearables and Hearing Aids


The Peace Dividends of the Smartphone War

One of the biggest byproducts of the mass proliferation of smartphones around the planet is that the components inside the devices are becoming more powerful and sophisticated while simultaneously becoming smaller and less expensive. Chris Anderson, the CEO of 3D Robotics, refers to this as the “peace dividend of the smartphone wars,” where he says:

The peace dividend of the smartphone wars, which is to say that the components in a smartphone — the sensors, the GPS, the camera, the ARM core processors, the wireless, the memory, the battery — all that stuff, which is being driven by the incredible economies of scale and innovation machines at Apple, Google, and others, is now available for a few dollars.

The race to outfit the planet with billions of smartphones laid the foundation for consumer drones, self-driving cars, VR headsets, AR glasses, dirt-cheap smart speakers, our wearables and hearables, and so many other consumer technology products that have emerged in the past decade. All of these products directly benefit from the efficiencies and improvements born of the smartphone supply chain.

Since this blog is focused on innovation occurring around ear-worn technology, let’s examine some of the different peace dividends being reaped by hearing aid and hearables manufacturers and how those look from a consumer’s standpoint.

Solving the Connectivity Dilemma

Ever since the debut of the first “Made for iPhone” hearing aid in 2013 (the ReSound LiNX), each of the major hearing aid manufacturers has followed suit in the pursuit of seamless connectivity to the user’s smartphone. This type of connectivity was limited to iOS until September 2016, when Sonova released its Audéo B hearing aid, which used a different Bluetooth protocol that allowed universal connectivity to all smartphones. To keep the momentum going, Google just announced that its Pixel and Pixel 2 smartphones will allow pairing with any type of Bluetooth hearing aid. The hearing aids and the phones are both becoming more compatible with each other. Every year, we move closer to universal connectivity between our smartphones and Bluetooth hearing aids.

While connectivity is great and opens up a ton of new opportunities, it also creates a battery drain on the devices. This poses a challenge for the manufacturers of these super-small devices: while the majority of components packed inside have been shrinking in size, the one key component that doesn’t really shrink is the battery.

Manufacturers are doing a few things to circumvent this roadblock, based on recent developments largely owed to the smartphone supply chain. The first is rechargeability on the go. In the hearables space, pretty much every device has a companion charging case, from AirPods to IQ Buds to the Bragi Dash Pro. Hearing aids, long powered by disposable zinc-air batteries (which last about 4-7 days depending on usage), are now quickly going the rechargeable route as well, many of them charging in companion cases akin to those we use with hearables.

Rechargeability is a good step forward, but it doesn’t solve the underlying issue of fast-draining batteries. If we can’t fit a bigger battery into such a small space, and battery innovation is currently stagnant, engineers are forced to look at how the power is actually used. Enter into the equation: computer chips.

Computers are steadily getting cheaper – From Chris Dixon’s What’s Next in Computing

Chip’in In

I’ve written about this before, but the W1 chip that Apple debuted in 2016 was probably one of the biggest moments for the whole hearables industry. Not only did it solve the reliable-pairing issue (this chip is responsible for the fast pairing of AirPods), but it also uses low-power Bluetooth, ultimately providing 5 hours of listening time before you need to pop the AirPods back into their charging case (15 minutes of charge buys another 3 hours). With this one chip, Apple effectively removed the two largest barriers to hearables adoption: battery life and reliable pairing.

Apple has since debuted an updated, improved W2 chip, used in the Apple Watch, that will likely make its way to a second version of AirPods. Each iteration will likely continue to extend battery life.

Not to be outdone, Qualcomm introduced its new QCC5100 chipset at CES this January. Qualcomm’s SVP of Voice & Music, Anthony Murray, stated:

“This breakthrough single-chip solution is designed to dramatically reduce power consumption and offers enhanced processing capabilities to help our customers build new life-enhancing, feature-rich devices. This will open new possibilities for extended-use hearable applications including virtual assistants, augmented hearing and enhanced listening.”

This is important because Apple tends not to license out its chips, so third-party hearable and hearing aid manufacturers will need to reap this type of innovation from a company like Qualcomm to compete with the capabilities that Apple brings to market.

The next one is actually a dividend of a dividend. Smart speakers, like Amazon’s Echo, are cheap to manufacture thanks to the smartphone supply chain, and as a result they have driven the price of digital signal processing (DSP) chips down to a fraction of what it was. These specialized chips are used to process audio (all those Alexa commands) and have long been used by hearing aid manufacturers. Similar to the W1 chip, they provide a low-power option that can now be utilized by hearable manufacturers. More options for third-party manufacturers.

So, with major tech powerhouses sparring against each other in the innovation ring, hearing aid and hearable manufacturers are able to reap that innovation at a cheap price, ultimately resulting in better devices for the consumers at a depreciating cost.

Computers are steadily getting smaller – From Chris Dixon’s What’s Next in Computing

Sensory Overload

What’s on the horizon for our ear-computers is where things really start to get exciting. The most obvious example of where things are headed is the sensors being fit into these devices. Starkey announced at its summit this year an upcoming hearing aid that will contain an inertial sensor to detect falls. How can it detect people falling down? Another dividend: the same gyroscopes and accelerometers that work in tandem to detect the orientation of our phones. This sensor combo can also be used to track overall motion, so not only can it detect a person falling down, it can also serve as an overall fitness monitor. These sensors are now small enough and cheap enough that virtually any ear-worn device manufacturer can embed them in its devices.

Valencell, a biometric sensor manufacturer, has been paving the way with what you can do when you implement heart rate sensors into our ear-worn devices. By using a combination of the metrics recorded by these sensors, you can measure things such as core temperature, which would be great for monitoring and alerting the user of the potential risk of heat exhaustion. You can also gather much more precise fitness metrics, such as intensity levels of one’s workout.

And then there are the efforts around one day non-invasively monitoring glucose levels through a hearing aid or hearable, most likely via some type of biometric sensor or combination of components derived from our smartphones as well. For the 29 million people living with diabetes in America, some of whom also suffer from hearing loss, a gadget that provides both amplification and glucose monitoring would be much appreciated and compelling.

These types of sensors serve as tools to create new use cases around both preventative health applications, as well as use cases designed for fitness enthusiasts that go beyond what exists today.

The Multi-Function Transformation

One of the reasons I started this blog was to raise awareness that the gadgets we wear in our ears are on the cusp of transforming from single-function devices, whether for audio consumption or amplification, into multi-function devices. All of these disparate innovations make it possible for such a device to emerge without limiting factors such as terrible battery life.

This type of transformation does a number of things. First of all, I believe that it will ultimately kill the negative stigma associated with hearing aids. If we’re all wearing devices in our ears for a multitude of reasons, for increasingly longer periods of time, then who’s to know why you’re even wearing something in your ear, let alone bat an eye at you?

The other major thing I foresee this doing is continuing to compound the network effects of these devices. Much like with our smartphones, when there is a critical mass of users, there tends to be a virtuous cycle of value creation spearheaded by developers, meaning there’s more and more you can do with these devices. Back in 2008, no one could have predicted what the smartphone app economy would look like here in 2018. We’re in that same type of starting period with our ear-computers, where the doors are opening for developers to create all the new functionality. Smart assistants alone represent a massive wave of potential new functionality that I’ve written about extensively, and as of January 2018, hearable and hearing aid manufacturers can easily integrate Alexa into their devices thanks to Amazon’s Mobile Accessory Kit.

It’s hard to foresee everything we’ll use these devices for, but the conditions for something akin to the app economy to form and flourish are now in place, thanks to so many of these recent developments birthed by the smartphone supply chain. Challenges remain for those producing our little ear-computers, but the fact of the matter is that the components housed inside these small gadgets are simultaneously getting cheaper, smaller, more durable and more sophisticated. There will be winners and losers as this evolves, but one obvious winner is the consumer.

-Thanks for reading-




hearables, Smart assistants, VoiceFirst

The Unexpected #VoiceFirst Power Users


In-The-Ear Assistants

There was a really good post published last week in the #VoiceFirst world by Cathy Pearl, the author of the book “Designing Voice User Interfaces.” In her post, she goes through some of the positive effects that smart assistants are having on the disabled and elderly communities. The unique and awesome thing about the Voice user interface is that it enables demographic groups that had previously been left behind by past user interfaces. Due to physical limitations or the deterioration of one’s senses and dexterity, mobile computing (and all prior generations of computing) is not very conducive to these groups of people. Additionally, Voice is being adopted by all ages, from young children to elderly folks, in large part because there is virtually no learning curve. “Just tell Alexa what you want her to do.”

Cathy’s article dovetails nicely into what I see as being the single biggest value-add that hearing aids and hearables have yet to offer – smart assistant integration. As I wrote about back in January, one of the most exciting announcements this year was Amazon’s Mobile Accessory Kit (AMAK). This software kit makes it dramatically easier for OEMs, such as hearable and hearing aid manufacturers, to integrate Alexa into their devices.

(I should note that, as of now, “integration” represents a pass-through connection from the phone to the audio device. In the future, as our mini ear-computers become more independent from our phones, so too should we see full smart assistant integration as our audio devices further mature and become more capable as standalone devices.)

AMAK will help accelerate the smart assistant integration already taking place in the hearables market, which now includes AirPods (Siri), Bose QC35 (Google Assistant), Bragi Dash Pro (Siri/Google/Alexa), Jabra Elite 65t (Siri/Google/Alexa), NuHeara IQ Buds (Siri/Google) and a handful of others. Hearing aids will soon see this type of integration too. Starkey CTO Achin Bhowmik alluded to being able to activate and engage smart assistants with taps on the hearing aids, verbal cues and head gestures. Given the partnerships between hearing aid and hearable companies (i.e. Starkey and Bragi) and full-on acquisitions (i.e. GN ReSound owning Jabra), it seems we’ll see this integration in all of our new “connected” hearing aids too.

A Convergence of Needs


For our aging population, there tends to be a convergence of needs. For starters, one out of every three US adults over 65 has some degree of hearing loss. Add in the fact that, dating back to January 2011, 10,000 baby boomers turn 65 every day; by 2029, 18% of Americans will be over the age of 65. Our population is living longer, the baby boomers are all surpassing 65, and we’re being exposed to levels of sound pollution not seen before. Mix that all together and we’re looking at an increasing number of people who could benefit from a hearing aid.

Next, it’s important to consider what happens to our day-to-day tasks that we depend on technology for when a new interface arrives. I mentioned this in a previous post, in which I wrote:

“Just as we unloaded our various tasks from PCs to mobile phones and apps, so too will we unload more and more of what we currently depend on our phones for, to our smart assistants. This shift from typing to talking implies that as we increase our dependency on our smart assistants, so too will we increase our demand for an always-available assistant(s).”

My point was that just about everything you now depend on your phone for – messaging, maps, social media, email, ordering food, ridesharing, checking weather, stock prices, scores, fantasy sports, etc. – will likely manifest in some way via Voice. This is a big deal in general, but for our aging and disabled populations, it can be truly life-changing.

That was the aha! moment for me reading Cathy’s post. The value proposition for smart assistants is much more compelling at this point for these communities than for someone like me, who has no problem computing via a smartphone. I certainly enjoy using my Alexa devices, and in some instances they cut down on friction, but they don’t currently offer anything that I can’t otherwise do on my phone.

It’s similar to why mobile banking is growing like crazy in places like Kenya and India. For a large portion of people in those countries, there is no legacy, incumbent system that people need to migrate from, unlike here in the US, where the vast majority of people have traditional bank accounts. Along the same vein, many elderly people and those with physical limitations would not be migrating from existing systems, but rather adopting a new system from scratch that yields entirely new value.

If I’m already a hearing aid candidate or considering a hearable, smart assistant integration makes owning this type of device that much more compelling. Even in its current crude, primitive state, smart assistants provide brand new functionality and value for those that struggle to use a smartphone. There’s an unmet need in these communities to connect and empower oneself via the internet and smart assistants supply a solution.

The Use Cases of Today


Building off this idea that we’re just shifting tasks to a new interface, let’s consider messaging. As Cathy highlighted in her post, we’re already seeing some really cool use cases being deployed by assisted living facilities like Front Porch in California, where the facility is outfitting residents with Amazon Echos. The infrastructure is being built out to facilitate audio messaging between residents, staff and residents’ families.

Taking it one step further, if residents have a smart assistant integrated into their hearing aids, they can seamlessly communicate with fellow residents, staff members and family members anywhere in the assisted living facility, not to mention actually hear and understand the assistant’s responses, since it’s housed directly in the ear. Whereas I prefer to text, audio messaging mediated by Alexa or Siri provides a much more conducive messaging system for these groups.

The Alexa skill My Life Story is built specifically for those suffering from Alzheimer’s. It allows the user’s family members to program “memories” for their loved one, so that Alexa can read them back and help trigger recollection. Again, putting this directly in the hearing aid allows this functionality to travel anywhere the user goes with their hearing aid, empowering them to be more mobile while remaining tethered to something they may come to depend on. (Reminds me of this scene from the movie “50 First Dates.”)

Another great example of how smart assistants can provide a level of independence is this story describing how a stroke victim uses them. The victim’s family created routines, such as “Alexa, good morning,” which triggers the connected devices in her room to open the blinds, turn the lights to 50%, and turn on the TV. “Alexa, use the bathroom” turns her room’s lights yellow to notify the staff that she needs to use the bathroom. So, while connected light bulbs and TVs might seem excessive or unnecessary for you or me, they serve as tools to help restore another person’s dignity.
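Conceptually, each of those routines is just a trigger phrase mapped to a list of device actions. Here's a minimal sketch of that idea; it is purely illustrative, and none of these names correspond to a real Alexa routines API:

```python
# Illustrative model of the routines described above: a spoken trigger
# phrase maps to a list of (device, action) pairs. Not a real API.

ROUTINES = {
    "good morning": [
        ("blinds", "open"),
        ("lights", "dim to 50%"),
        ("tv", "on"),
    ],
    "use the bathroom": [
        ("lights", "turn yellow"),  # visual signal to the staff
    ],
}

def run_routine(utterance):
    """Look up the trigger phrase and describe each device action."""
    actions = ROUTINES.get(utterance.lower(), [])
    return [f"{device}: {action}" for device, action in actions]

print(run_routine("Good morning"))
```

The value for the user is that one short utterance, which requires no dexterity or screen, fans out into every action on the list.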

These are just a few specific use cases among many tailored to these communities that tie in with the more broad ones that already exist for the masses, such as ordering an Uber, streaming audio content, checking into flights, answering questions, checking the weather, or the other 30,000+ skills that can be accessed via Alexa.

The Use Cases of Tomorrow

We’re already seeing network effects broadly take hold with smart assistants, and I think it’s fair to say that we’ll see pockets of network effects within specific segments of the total user base too. If a certain segment of the user base has disproportionately high numbers of users or higher engagement levels, you can expect software developers to migrate toward creating more functionality for those pockets of users. There’s more money and incentive in catering to the power users.

Where the software development will get really interesting is when the accompanying hardware matures too. In this instance, the hearing aids and hearables. CNET’s Roger Cheng and Shara Tibken dove into what a more technologically mature hearing aid might look like with Starkey’s Achin Bhowmik. In this excerpt, Bhowmik describes the hearing aid’s transformation into a multipurpose device:

“Using AI to turn these into real-time health monitoring devices is an enormous opportunity for the hearing aid to transform itself,” he says.

By the end of this year, Starkey also will be bringing natural language translation capabilities to its hearing aids. And it plans to eventually integrate optical sensing to detect heart rate and oxygen saturation levels in the blood. It’s also looking at ways to noninvasively monitor the glucose levels in a user’s blood and include a thermometer to detect body temperature.

So, the hardware will supply a whole host of new capabilities, rife with opportunities for developers to layer smart assistant functionality on top. Going back to the idea of a convergence of needs: if I’m 80 years old with hearing loss, diabetes, and dexterity issues, then a hearing aid that provides amplification, monitors my glucose levels, houses a smart assistant that interprets those glucose readings for me, and gives me the functionality I currently derive from my iPhone is a very compelling device.

A single device that serves multiple roles and meets a number of unmet needs simultaneously. Empower these communities with something like that, and they’ll adopt smart assistants en masse. Finally, an all-inclusive tool to connect those on the sidelines to the digital age.

-Thanks for Reading-


News Updates, Smart assistants, VoiceFirst

Smart Assistants Continue to Incrementally Improve


This week we saw a few developments in the #VoiceFirst space that might seem trivial on the surface, but are meaningful improvements toward more fully-functional voice assistants. The first was Amazon’s introduction of “Follow-up Mode.” As you can see from Dr. Ahmed Bouzid’s video below, this removes the need to say “Alexa” before each subsequent question asked in succession, as the mic now stays on for a five-second window (when the setting is activated). I know it seems minor, but this is an important step toward making communication with our assistants feel more natural.

The second, as reported by The Verge, was the introduction of Google’s multi-step smart home routines. These routines are an incremental improvement on the smart home, as they allow you to link multiple actions into one command. If I had a bunch of smart appliances all synced to my Google Assistant, I could create a routine built around the voice command, “Honey, I’m homeeee,” and have that trigger my Google Assistant to start playing music, turn on my lights, adjust my thermostat, etc. In the morning, I might say, “Rise and shine,” which then starts brewing a cup of coffee and reading my morning briefing, the weather and the traffic report.
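Conceptually, a routine is just a mapping from one trigger phrase to an ordered list of device actions. Here’s a toy Python sketch of that idea (the phrases and device names are mine for illustration, not Google’s actual API):

```python
# Toy model of a multi-step routine: one spoken trigger phrase
# fans out to an ordered list of (device, action) pairs.
# All names here are illustrative, not Google's actual API.
ROUTINES = {
    "honey, i'm home": [
        ("speakers", "play music"),
        ("lights", "turn on"),
        ("thermostat", "set to 72"),
    ],
    "rise and shine": [
        ("coffee maker", "start brewing"),
        ("assistant", "read morning briefing"),
        ("assistant", "read weather and traffic"),
    ],
}

def run_routine(phrase):
    """Return the actions a trigger phrase fires, in the order defined."""
    return [f"{device}: {action}"
            for device, action in ROUTINES.get(phrase.lower(), [])]
```

The point is the one-to-many shape: `run_routine("Rise and shine")` fires three actions in sequence from a single utterance, which is exactly what makes routines feel like more than the sum of their parts.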

This will roll out in waves in terms of which accessories and Android functionality are compatible with routines. Routines make smart home devices that much more compelling for those interested in this type of home setup.

The last piece of news pertains to the extent to which Amazon is investing in its Alexa division. If you recall, Jeff Bezos said during Amazon’s Q4 earnings call in February that Amazon would “double down” on Alexa. Here’s one example of what “doubling down” might entail, as Amazon continues to aggressively scale the Alexa division within the company:

As our smart assistants keep taking baby steps in their progression toward being true personal assistants, it’s becoming increasingly clear that this is one of the biggest arms races among the tech giants.

-Thanks for Reading-



Biometrics, hearables, News Updates, Smart assistants, VoiceFirst

Pondering Apple’s Healthcare Move


Outside Disruption

There have been a number of recent developments that involve impending moves from non-healthcare companies intending to venture into the healthcare space in some capacity. First, there was the joint announcement from Berkshire Hathaway, JP Morgan and Amazon that they intend to team up to “disrupt healthcare” by creating an independent healthcare company specifically for their collective employees. You have to take notice anytime you have three companies of that magnitude, led by Buffett, Bezos and Dimon, announcing an upcoming joint venture.

Not to be outdone, Apple released a very similar announcement last week stating that “Apple is launching medical clinics to deliver the world’s best health care experience to its employees.” The new venture, AC Wellness, will start with two clinics near the new “spaceship” corporate office (the one where Apple employees keep walking into the glass walls). Here’s an example of what one of the AC Wellness job postings looks like:

[Image: AC Wellness job posting, per Apple’s job listings]

So in a matter of weeks, we have Amazon, Berkshire Hathaway, JP Morgan and now Apple publicly announcing that they plan to create distinct healthcare offerings for their employees. I don’t know what the three-headed joint venture will ultimately look like, or whether either of these ventures will extend beyond the companies’ own employees, but I think there is a trail of crumbs to follow to discern what Apple might ultimately be aspiring to.

Using the Past to Predict the Future

If you go back and look at the timeline of some of Apple’s moves over the past four years, this potential move into healthcare seems less and less surprising. Let’s take a look at some of the software and hardware developments over the past few years, and how they might factor into Apple’s healthcare play:

The Software Developer Kits – The Roads and Repositories


The first major revelation that Apple might be planning something around healthcare was the introduction of a software development kit (SDK) called HealthKit, back in 2014. HealthKit allows third-party developers to gather data from various apps on users’ iPhones and feed that health-based data into Apple’s Health app (a pre-loaded app that comes standard on all iPhones running iOS 8 and above). For example, if you use a third-party fitness app (e.g. Nike+ Run Club), its developers can feed data from that app into Apple’s Health app, so the user sees it alongside all their other health-related data. In other words, Apple leveraged third-party developers to make its Health app more robust.
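As a rough mental model, HealthKit acts as a shared store that many apps deposit typed samples into, and that surfaces everything back in one place. A toy Python sketch of that data flow (all names here are my own illustration, not Apple’s actual Swift API):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sample:
    source_app: str    # e.g. a third-party fitness app
    kind: str          # "steps", "heart_rate", ...
    value: float
    recorded_at: datetime

class HealthStore:
    """Toy central repository in the spirit of HealthKit's shared store."""
    def __init__(self):
        self._samples = []

    def deposit(self, sample):
        """Any contributing app can write a sample into the shared store."""
        self._samples.append(sample)

    def query(self, kind):
        """All readings of one type, across every contributing app,
        merged into chronological order."""
        return sorted((s for s in self._samples if s.kind == kind),
                      key=lambda s: s.recorded_at)
```

The value is in the merge: two different apps can each deposit step counts, and a single query returns them as one chronological history, which is essentially what makes the Health app feel like a unified record rather than a pile of silos.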

When HealthKit debuted in 2014, it was a bit of a head-scratcher, because the type of biometric data you can gather from a phone is limited and inaccurate. Then Apple introduced its first wearable, the Apple Watch, in 2015, and suddenly HealthKit made a lot more sense, as the Apple Watch represented a much more accurate data collector. If your phone is in your pocket all day, you might get a decent pedometer reading of how many steps you’ve taken, but if you’re wearing an Apple Watch, you’ll record much more precise and actionable data, such as your heart rate.

Apple followed up a year later with a second SDK, ResearchKit. ResearchKit allowed Apple users to opt into sharing their data with researchers conducting studies, providing a massive influx of new participants and data, which in turn could yield more comprehensive research. For example, researchers studying asthma developed an app to track Apple users suffering from asthma. 7,600 people enrolled through the app in a six-month program consisting of surveys about how they treated their asthma. Where things got really interesting was when the researchers started looking at ancillary data from the devices, such as each user’s geo-location, and cross-referencing it with environmental data like the pollen and heat index to identify any correlations.

Then in 2016, Apple introduced a third SDK called CareKit. This new kit serves as an extension to HealthKit that allows developers to build medically focused apps to track and manage medical care. The framework provides distinct modules for developers to build on, centered on the common features a patient would use to “care” for their health: for example, reminders tied to medication cadences, or objective measurements taken from the device, such as blood pressure readouts. Additionally, CareKit provides simple templates for sharing data (e.g. with a primary care physician), which is what’s really important to note.
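To make the medication-cadence idea concrete, here’s a minimal sketch of the kind of logic such a module encapsulates (my own illustration, not CareKit’s actual API):

```python
from datetime import datetime, timedelta

def reminder_due(last_dose, cadence_hours, now):
    """True once the prescribed interval has elapsed since the last dose.

    last_dose and now are datetimes; cadence_hours is the prescribed
    spacing between doses (e.g. 8 for a three-times-daily medication).
    """
    return now - last_dose >= timedelta(hours=cadence_hours)
```

For instance, with an 8-hour cadence and a last dose at 8 a.m., the reminder fires at 4 p.m. but stays quiet at noon. CareKit’s real modules wrap this sort of scheduling logic in ready-made UI so developers don’t rebuild it per app.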

These SDKs served as the tools to build the roads and repositories that move and store data. In the span of a few years, Apple has turned its Health app into a very robust data repository, while incrementally making it easier to deposit, consolidate, access, build upon, and share health-specific data.

Apple’s Wearable Business – The Data Collectors


Along with the Apple Watch in 2015 and AirPods in 2016, Apple introduced a brand new, proprietary, wearable-specific computer chip to power these devices: the W1 chip. For anyone who has used AirPods, the W1 chip is responsible for the automatic, super-fast pairing to your phone. The first two series of the Apple Watch and the current, first-generation AirPods use the W1 chip, while the Apple Watch Series 3 uses an upgraded W2 chip. Apple claims the W2 chip is 50% more power efficient and boosts speeds by up to 85%.

[Image: Apple’s W1 chip, via The Verge]

Due to the size constraints of something as small as AirPods, chip improvements are crucial to the devices becoming more capable, as they allow engineers to allocate more space and power to other things, such as biometric sensors. In a Planet Analog article by Steve Taranovich, Dr. Steven LeBoeuf, president of biometric sensor manufacturer Valencell, said, “the ear is the best place on the human body to measure all that is important because of its unique vascular structure to detect heart rate (HR) and respiration rate. Also, the tympanic membrane radiates body heat so that we are able to get accurate body temperature here.”

[Image: patent renderings of AirPods with biometric sensors included]

Apple seems to know this too, as it filed three patents (1, 2 and 3) in 2015 around adding biometric sensors to AirPods. If Apple can fit biometric sensors into AirPods, then it’s feasible to think hearing aids can support them as well. There are indicators that this is already becoming a reality: Starkey announced an inertial sensor that will be embedded in its next line of hearing aids to detect falls. While wearables are currently the main method of logging biometric data, it’s very possible that our hearables will soon serve that role, as they occupy the optimal spot on the body to do so. A brand new use case for our ever-maturing ear computers.

AC Wellness & Nurse Siri

The timing of these AC Wellness clinics makes sense. Apple has had four years to build out the data layer of its offering via the SDKs. It has made it easy to both access and share data between apps, while simultaneously making its own Health app more robust. At the same time, Apple now sells the most popular wearable and hearable, effectively owning the biometric data collection market. The Apple Watch is already beginning to yield the types of results we can expect when this all gets combined:

[Image: tweet describing an Apple Watch helping detect a pulmonary embolism]

To add more fuel to the fire, here’s how the AC Wellness about page reads:

“AC Wellness Network believes that having trusting, accessible relationships with our patients, enabled by technology, promotes high-quality care and a unique patient experience.”

“Enabled by technology” sure seems to indicate that these clinics will draw heavily from all the groundwork that’s been laid. It’s possible that patients would log their data via the Apple Watch (and, down the line, maybe AirPods or MFi hearing aids) and then share that data with their doctor. The preventative health opportunities around this type of combination are staggering: monitoring glucose levels for diabetes, EKG monitoring, medication management for patients with depression. These examples just scratch the surface of how these tools can be leveraged in conjunction. When you start looking at Apple’s wearable devices as biometric data recorders, and you consider the software kits Apple is equipping developers with, Apple’s potential venture into healthcare begins making sense.

The last piece of the puzzle, to me, is Siri. What patients really need now, with all these other pieces in place, is for someone (or something) to understand the data they’re looking at. The pulmonary embolism example above assumes that every user will be able to catch that irregularity themselves. The more effective way would be to enlist an AI (Siri) to parse through your data, alert you to what needs your attention, and coordinate with the appropriate doctor’s office to schedule an appointment. You’d then show up to the doctor, who could review the biometric data Siri sent over. If Apple were to give Siri her due and dedicate significant resources, she could be the catalyst that makes this all work. That, to me, would be truly disruptive.
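At its core, the kind of triage I’m describing is simple anomaly detection over a stream of readings. A deliberately minimal sketch (the thresholds are illustrative only, not clinical guidance):

```python
def flag_anomalies(heart_rates, low=40, high=100):
    """Return (index, bpm) pairs that fall outside a plausible resting
    range -- the sort of irregularity an assistant could surface and
    forward to a doctor, rather than relying on the user to spot it."""
    return [(i, bpm) for i, bpm in enumerate(heart_rates)
            if not low <= bpm <= high]
```

A real system would obviously use far more sophisticated, personalized models, but the pipeline is the same: continuous readings in, flagged irregularities out, and a human clinician reviewing whatever gets surfaced.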


-Thanks for Reading-