This week’s episode of the Future Ear Podcast is the longest conversation I’ve had on the podcast to date and features one of my favorite people, Brian Roemmele. For those who are unfamiliar with Brian, he is the person who coined the term “VoiceFirst.” What I love about Brian is that while he’s considered a “futurist,” he’s really more of a polymathic student of history.
Follow him for a few days on Twitter and you’ll see why. He runs one of the most fun (and increasingly popular) accounts on there. He’s constantly surfacing all kinds of fascinating images and videos from previous eras, relating them to today and drawing parallels. History doesn’t repeat itself, but it does rhyme, and Brian does a good job of persistently reminding us of that.
To frame today’s discussion, I should note that Brian was a big inspiration for me to launch Future Ear and begin developing my thesis on how I see hearing aids & hearables evolving into the future. One of the biggest aspects of this evolution is the idea that we’ll eventually have conversational voice assistants residing in our ear-worn devices that are capable of supplanting many of the “Jobs-to-be-Done” that we currently “hire” our phone (and the apps within it) to do. This idea was pretty novel when I launched the blog in 2017, and it’s validating to see all of today’s newest ear-worn devices being outfitted with functionality to quickly interface with a voice assistant.
(We talk a lot about JTBD throughout this convo, so here’s an overview I wrote about Clayton Christensen’s famous framework.)
We kick things off with the premise of voice commerce and why Brian thinks commerce will be such a critical use case for voice assistants. I view voice assistants as the conversational layer that sits between the user and the payment offering of each major voice assistant provider (Apple Pay, Amazon Pay, Google Pay, Samsung Pay). With today’s currency largely tied to our time & attention, we talk at length about how voice assistants can move us into a more JTBD-focused economy.
One of the recurring themes throughout this conversation is the fallacy we all tend to fall into when thinking about new paradigms in technology: assuming we’ll abide by the previous paradigm’s parameters and norms. Norms that have been established by our smartphones would have seemed ridiculous to people prior to the smartphone’s existence. In much the same way, we once thought PCs would simply be scaled down into pocket-sized devices, rather than becoming entirely different types of computers with different user interfaces and experiences.
The core insight that I have gathered from Brian is that there’s a future where we’re not increasingly heads-down, buried in our phones. Again, if everything ultimately boils down to JTBD, and the reason we’re on our phones for increasing amounts of time is that we’re constantly hiring our phones and the apps inside them for all the jobs we’ve offloaded to them, we have to ask ourselves whether there’s a better way to complete those jobs.
Enter contextual voice assistants. In this scenario, the assistant would learn from your digital behavior to predict what you’re looking for and operate on your behalf. I don’t necessarily have to scroll Twitter when I can get a quick synopsis of the things my agent has learned I’ll click on and find interesting. Apply this type of modeling across the board, and think about the time savings that contextual assistants could ultimately yield.
This type of agency, however, can only be achieved with a deep level of contextual understanding of the user, which must be predicated on trust and security. That’s the trade-off we’re going to be facing here – will people be willing to allow Alexa/Google/Siri a deep enough understanding of their behavior to develop contextual insights? That’s why Brian believes these types of agents should live on-device rather than in the cloud, circumventing some of the privacy concerns, and will require new methods and innovation in the way we store data (holographics?).
Another area we explore in this beast of a conversation is what exactly Amazon is building toward with its recent crop of new “primitives.” As I mention in the episode, one of my all-time favorite articles is Ben Thompson’s “Amazon Tax” piece, in which he describes Amazon’s infatuation with creating “primitives,” or building blocks, that are so compelling that Amazon can take a cut off the top, or a “tax.” Want to sell to the biggest marketplace of people in North America? No problem: list your product on Amazon.com and pay Amazon a 15% tax on each purchase. Want super affordable, highly robust cloud infrastructure? No problem, here’s AWS. Now pay Amazon its tax in order to operate in the cloud.
Now factor in the fact that one of the richest people in the world (Bezos) owns his own rocket company (Blue Origin). On July 30th, Amazon got FCC approval to go ahead with Project Kuiper, a constellation of 3,200 satellites that will deliver satellite-enabled broadband. Hmmm, ok. Billionaire rocket man launching satellites… there aren’t a whole lot of people or companies that can create that type of primitive (maybe just Elon Musk). Now combine that with the four different Amazon wearables introduced in the last year, and what do you have? A functional satellite-enabled mesh network for wearables to use: Amazon Sidewalk.
This whole conversation was a doozy, and yes, we went down some crazy tangents, as is fitting whenever you’re talking with Brian. Ultimately, though, there’s a lot of wisdom packed in here from Brian that goes way beyond technology and provides some solid food for thought.
-Thanks for Reading-