Yesterday, as I was leaving the Voice summit and boarding my plane back to St. Louis, I was thinking about the best way to recap the hearables panel I participated in on Thursday afternoon. When this panel was first organized by the Voice summit, it was slated to consist of only Andy Bellavia and myself, which I’m sure would have been a good panel discussion, but as the event drew closer, the summit organizers added three more panelists and our moderator. I honestly don’t think we could have had a more well-rounded panel of backgrounds and perspectives.

I decided that the best way to recap this panel would be to try to illustrate how I think each of the panelists’ backgrounds and areas of focus will weave together as time progresses. When Claire asked me how I see the world of hearables taking shape over the next few years, I attempted to paint a mental picture for the audience. I haven’t seen any video clips from our talk yet, so I’m going off memory, but here’s the gist of how I responded and how I think each of my fellow panelists’ areas of focus will factor in:
“The first thing we need to acknowledge is the behavioral shift that’s been occurring since December 2016, when AirPods debuted. It has progressively become normalized to wear truly wireless earbuds for extended periods of time. This behavior shift has been enabled by the hardware advancements coming out of Andy’s world. Much of the hearables innovation that transpired in the first half of this decade was largely around the components inside the devices, from systems-on-a-chip to DSP chips to sensors to Bluetooth advancements. These innovations eventually manifested themselves in ways such as better battery life and automatic Bluetooth pairing. In essence, the cultural behavior shift ushered in by AirPods would not have been possible without the preceding component innovation that made it feasible.
So, you asked, where do we go from here? One way to think about hearables’ role is how they impact the “attention economy.” One of the biggest byproducts of the mobile era is that we’re constantly able to consume content. Facebook, Instagram, Snapchat, Twitter, etc. exchange free content for our attention; time is the currency of the attention economy. So, in a world where it has become socially acceptable and technologically feasible to wear ear-worn devices for extended periods throughout the day, it’s realistic to think that the attention economy begins to be divided between our eyes and our ears. We’re already seeing this with podcasting, but just as Instagram stories represent one specific type of visual content that drives the visual attention economy, podcasts represent one specific type of audio content that will drive the “aural attention economy.”
The attention economy is predicated on tools that enable easy content creation, which leads to more content supply to be consumed. Therefore, tools that enable audio content creation are paramount to the near-term future of the aural attention economy. Supply generation, however, is only one part of the equation; we also need to more intelligently surface and curate the audio content supply, so that people discover content they actually want to listen to. Rachel’s company, Audioburst, is a perfect example of how we can better connect content to users. Through a tool like Audioburst, I can say, “give me an update on what’s going on with Tesla,” and rather than being fed a summary of a Business Insider article, I’m fed radio and podcast clips that are specifically talking about Tesla.
The other aspect of the emergence of hearables that needs to be solved is how we design for experiences that might be audio-only. Eric’s work around sonic branding and non-verbal audio cues is going to play a big role in the foreseeable future, because we’ll need to rely on a variety of audio cues that we intuitively understand. For example, if I receive an audio message, I’ll want to be alerted to that message by a cue that’s non-invasive. Or if I’m walking down the street and I’ve indicated that I’m hungry, I might hear McDonald’s sonic brand (ba-dah-ba-ba-baah) indicating that there’s a McDonald’s close by.
Creating and designing audio cues is a challenge in and of itself, but implementing those cues adds another level of complexity. As Andreea described from her work designing audio experiences for a variety of hearables, the challenge tends to stem from the user’s context. The way I interact with my hearable will be far different if I’m really straining myself on a 5-mile run than on a leisurely walk. The experiences we have will need to be tailored to the context: when I’m running, I might just want to say “heart rate” to my hearable as an input and receive my heart rate readout, but when I’m walking around, I might input a full sentence, “what’s my heart rate reading?” These are subtleties, but enough poor experiences, however subtle, will ultimately lose the user’s interest.
So, we should see more component innovation coming out of Andy’s world, which will facilitate continual behavior shifts. Tools like Rachel’s Audioburst will allow for more intelligent content aggregation and give us more reason to wear our devices for longer periods of time as we begin dividing our attention between our eyes and ears. Longer usage means more opportunity for audio augmentation and sonic branding & advertising, which will need to be carefully thought out by UX folks like Andreea and Eric so as not to create poor experiences and drive users away.”
-Thanks for Reading-
Dave
To listen to the broadcast on your Alexa device, enable the skill here
To add to your flash briefing, click here
To listen on your Google Assistant device, enable the skill here
and then say, “Alexa/Ok Google, launch Future Ear Radio.”