The Conduit to the Future
I’ve been thinking and building towards a #Screenless future involving #hearables since 2012 (😮) and was having a chat with someone about the opportunity recently. In an effort to be a bit more open, I thought I’d share my thoughts. May prove useful to others
— Carl Thomas (@carlosdajackal) June 8, 2019
Carl Thomas, who runs the website Wearables London, published a fantastic Twitter thread this weekend about how he sees the hearables sector developing, based on his experience working in the space since 2012. He hit on a number of excellent points and developments that have transpired over the past seven years, and extrapolated how they will continue to evolve as more and more trends collide, altering the trajectory of hearables. Today, I want to build on what Carl started, as I found the wheels in my head turning when I re-read his thread multiple times this weekend.
Let’s start here. For those who aren’t familiar, Nick Hunn is the godfather of hearables. Nick coined the term in his now-famous original white paper from 2014, which Carl refers to in his tweet. If you’ve never read Nick’s “Hearables are the new Wearables” piece, stop now and go read it.
This is an excerpt from Nick’s paper from 2014: “There are plenty of problems to be solved before Hearables become mainstream, but none that are likely to be insurmountable. Today’s hearing aids already contain an amazing level of technical complexity, miniaturized beyond anything you’ll find in a smartphone or tablet. The capabilities exist; the standards are coming.”
The two main “problems” that existed back in 2014 were battery life and reliable pairing between true wireless headphones and the smartphone. Around the same time Nick wrote his paper, hearing aid manufacturer ReSound introduced the first made-for-iPhone (MFi) hearing aid, the LiNX, which used an Apple-developed low-energy Bluetooth protocol. Apple’s low-energy Bluetooth system largely alleviated the battery and pairing concerns, and beginning around 2015 it was deployed en masse by all types of in-the-ear device manufacturers. The companion charging cases that have since become standard have alleviated nearly all user concerns about battery life. Nick was right – these were not insurmountable challenges.
Back to Carl’s thread – “The true benefit for hearables would be the value of the platform layer.” This is exactly what I began to realize a few years back, as I searched for the use cases that hearables would be uniquely able to support. What Carl and I seemingly both discovered is that hearables aren’t really “THE thing” so much as they’re the conduit to “THE thing.”
To paraphrase a quote from one of my fav TV shows in recent years,
— Carl Thomas (@carlosdajackal) June 8, 2019
“THE thing,” in my opinion, is the platform layer that Nick Hunn was describing to Carl many years ago. What I discovered while searching for use cases was that the most plausible candidate for “THE thing” is voice computing mediated by voice assistants (e.g., Alexa, Google Assistant). As I’ve written about before, when you think of all the apps on your phone as “jobs for hire” (i.e., Google Maps to get you from point A to point B), it should be understood that there was a previous incumbent we relied on for that job prior to smartphones (i.e., MapQuest, or a traditional paper map).
The jobs don’t change. It’s the products we “hire” for those jobs that change. In a #VoiceFirst future, I’d simply ask my Google Assistant to navigate for me. I’m not hiring “an app” so much as I’m hiring my assistant for that specific task. Now apply that logic to the whole app economy. What can be offloaded to our assistants? It should be reiterated that voice technology as a whole is still very much in its infancy, yet already 26% of American adults own a smart speaker. Smart speakers represent the fastest rate of adoption of any consumer technology product ever (see the chart below).
So, we have this emerging platform, still in its early development, poised to inherit our computing needs. Since this new platform is audio-based, it’s much more conducive to being an ambient platform, accessible through a range of devices: our phones, smart speakers, connected cars, light switches, and in-the-ear devices.
Over time, as people begin to get accustomed to “hiring” their voice assistants for an increasing number of “jobs”, users will want access to their assistants more often. I’m curious if people will choose to outfit their environments with a host of voice-enabled devices, or if they’ll choose to consolidate their access points to a few “main” ones, such as their hearables.
As Carl points out, average usage of our in-the-ear devices today is up 450% compared to 2016, the year AirPods debuted. This increase can be attributed to more content to stream – whether music, podcasts, or video – and to enhancements in battery life and form factor that support longer periods of usage. In my opinion, the most likely contributor to the next spike in usage will be the platform, “THE thing,” that Nick Hunn was describing to Carl more than five years ago. In that scenario, hearables become the conduit to the future of computing.
-Thanks for Reading-