Back in December of 2017, I wrote a long piece titled "A Journey to the Center of the Ear." This was before I started the daily update, back when I would write 1,500-word essays that included original artwork my coworker would help me create (she's fantastic, not on social media, and the artist behind all my artwork). It was my third FuturEar blog post, and I still consider it one of my better pieces, even as I write my 126th today.
The gist of that piece was that the trajectory of voice assistants would eventually lead to voice-assistant-enabled hearables. I had been following Brian Roemmele for a few months prior to launching my blog and writing that piece, and one of my big takeaways from Brian was that user interfaces tend to change in 10-year cycles. The chart below is still one of my favorites to use in presentations to illustrate this point.
Historically, each new user interface, and the accompanying technology housing that interface, has provided a reduction in friction (time and effort). Consider maps and the gradual lessening of the time and effort it takes to navigate with today's technology. The method of navigation has gone from traditional maps (pre-hypertext/internet), to printable turn-by-turn directions (post-internet), to real-time virtual maps (mobile). Today it's as simple as punching in an address and following an arrow.
To date, I think Google best demonstrates the friction-reducing possibilities of the voice user interface via Google Assistant, though Alexa and Siri provide ways to save time and effort as well. This effect has become considerably more pronounced since "A Journey to the Center of the Ear" was written two years ago, and the ways in which voice reduces friction will only multiply as the underlying technology keeps maturing.
In the closing paragraph of the piece, I wrote:
“I believe that smart assistant integration will become standard in any connected audio device in the near future – be it ear-buds, over-the-ear headphones or hearing aids. This will provide a level of control over our environments that we have not yet seen before.”
Which brings us to Amazon's hardware event tomorrow, September 25th, and the rumors circulating that one of the devices Amazon will unveil is a competitor to AirPods and a new home for Alexa: an Alexa-powered hearable. Better yet, Amazon's hearable will likely include inertial sensors (I'm hoping for a PPG sensor too), allowing it to be used as a biometric data collector – something I've been writing about a lot lately!
Scoop: Amazon is working on new wireless earbuds that work as a fitness tracker (can monitor distance run, calories burned, etc). Would be Amazon’s first move into the health-tracking wearable space. https://t.co/WVUnfg8Qgp
— Eugene Kim (@eugenekim222) September 23, 2019
It's great to see so much development and maturation around the "future use cases" of hearing aids and hearables that I initially laid out in my first handful of posts. The first phase of hearables has been all about enabling the next generation of functionality by standardizing many of the components inside the devices and the ways those components can be used (e.g., sensors or DSP chips).
We're rapidly moving through phase one, and if Amazon enters the hearables arena with a compelling alternative to AirPods, things will likely move even faster, as Apple will be forced to respond to maintain its massive cut of the total "ear-share." As smart ear-worn devices continue to proliferate from all directions (consumer, medical, industrial, etc.), we'll enter a new phase of specialization and novel features, as many of the features that seem novel today become ubiquitous and standard over time.
-Thanks for Reading-