Daily Updates, Future Ear Radio

The Aural Attention Economy (Future Ear Daily Update 6-27-19)

Monetizing our Ears

Headline from a 6-26-19 article in Ad Age

I came across an article the other day from Ad Age that reported findings from a study that was conducted by Ipsos for iHeartRadio. The following quote from the article is what jumped out at me, “According to the Ipsos-iHeartRadio study, Americans of all ages listen to an average of 17.2 hours of audio per week, with millennials topping the list at 18.8 hours per week each and baby boomers coming in last at 15 hours per week.”

That’s a pretty high weekly average, even if you factor in traditional radio.

A few months back, I wrote a two-part column for Voicebot about what I considered to be the impending “Cambrian Explosion of Audio Content” (if you’d like to read it, here’s part 1 & 2). This impending explosion rests on a few factors. For one, there is a burgeoning set of new devices – smart speakers, smart displays, connected cars and hearables – that are conducive to on-demand audio consumption.

The second facet is that the attention economy is moving toward our ears. The “currency” of the smartphone era is our attention (time), and the apps that have been most successful in this era tend to be the ones that dominate our attention (FB, IG, Snap, Twitter, Netflix, YouTube, etc.). These companies have been monetizing our eyes via ads or subscriptions to the point where we’ve pretty much maxed out the attention that can be derived from our eyes. There’s only so much time in our busy lives that we can dedicate to the attention economy through our eyes. So, what we’re starting to see is an emerging “aural attention economy,” where companies are beginning to exchange audio-based content (and ads) for our time.

From Edison Research’s Infinite Dial Report

 

The most prominent example of this migration is the explosion of podcasts over the last few years. According to Edison Research’s Infinite Dial Report, weekly podcast listenership among Americans aged 12 and older grew five percentage points year-over-year. At 22% of the total population aged 12 and older, roughly 62 million people in the US are listening to one or more podcasts weekly. That’s an increase of about 14-15 million people in the last year alone, versus gains of only about 7-8 million people in each of the previous three years.
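As a quick back-of-the-envelope check on those figures (a sketch in Python; the 12-and-older population base is inferred from Edison’s own 22% ≈ 62 million figure, and the prior-year 17% follows from the five-point increase):

```python
# Back-of-envelope check on the Infinite Dial numbers.
# The 12+ population base is inferred from 22% ~= 62M weekly listeners.
base_12_plus = 62_000_000 / 0.22          # ~282M Americans aged 12+

weekly_2019 = 0.22 * base_12_plus         # ~62M weekly podcast listeners
weekly_2018 = 0.17 * base_12_plus         # prior year, five points lower

growth = weekly_2019 - weekly_2018
print(f"Year-over-year growth: ~{growth / 1e6:.0f}M new weekly listeners")
# -> roughly 14M, in line with the 14-15 million cited above
```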

In the same way that visual-based digital content evolved beyond text to images, memes, videos, etc., we’ll likely see audio content evolve and take different shapes. One early example of these new shapes: where podcasting represents long-form audio content, flash briefings offer micro-doses. As the barriers to audio content production are continually lowered by companies rushing in to arm publishers, there will likely be a multitude of new forms of content production, similar to how the publishing tools of the mobile era armed publishers with camera apps (Snap/IG) and instant publishing (Twitter/FB/etc.) to allow for new types of content creation.

Ultimately, the stage is set for the aural attention economy to really blossom. New devices mean more opportunities for consumption. More consumption means more incentive for tool makers to enter the market and better enable production. It’s a virtuous cycle of value creation that compounds its own growth; a network effect of sorts. Major publishers, like the New York Times, will likely make their content accessible via the audio platforms of tomorrow, the same way they leverage Twitter and Facebook to push their articles today. Indie publishers, like bloggers and influencers, will view audio as another avenue to grow their audiences. We’re only getting started with the ways in which we monetize our ears.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Nearly Half of Adults 60+ Use their Smart Speaker Daily (Future Ear Daily Update 6-25-19)

Building the Daily Habit

From Voicebot’s Smart Speaker Market Data 1/19

Voicebot released some data findings last week, and one stat that really caught my eye was that 46.6% of adults aged 60 and older use their smart speaker daily. It should be noted that this same cohort, adults aged 60+, comprises only 20% of the total smart speaker ownership base, but the frequency of usage matters more in my opinion. The reason is that, as Bret Kinsella often points out, smart speakers are like training wheels for getting people to adopt the habit. Once the habit of defaulting to speaking to your smart assistant(s) begins to form, it starts to spread beyond the smart speaker and into more modalities like our phones, cars, computers, and Bluetooth in-the-ear devices.
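To put rough absolute numbers on that (a sketch; the US adult population figure is my own assumption, while the 26% ownership rate comes from Voicebot’s January 2019 consumer adoption data):

```python
# Rough estimate of adults 60+ with a daily smart speaker habit,
# chaining Voicebot's percentages. Population figure is an assumption.
us_adults = 255_000_000                   # assumed US adult population
owners = us_adults * 0.26                 # ~26% of adults own a smart speaker
owners_60_plus = owners * 0.20            # 60+ cohort: ~20% of owners
daily_60_plus = owners_60_plus * 0.466    # 46.6% of them use it daily

print(f"Adults 60+ using a smart speaker daily: ~{daily_60_plus / 1e6:.1f}M")
# -> roughly 6 million people
```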

I’ve written pretty extensively about the developing trend of integrating smart assistants into the recently minted “smart” Bluetooth-connected hearing aids. As I wrote in a recent Voicebot article, the leading indicator of hearing loss tends to be age. As we get older, our hearing deteriorates due to a variety of factors, such as long-term exposure to harmful sound levels or the degeneration of sensory cells. In short, the bulk of people wearing hearing aids are in the 60+ cohort, and Voicebot’s data suggests that when this group is exposed to the utility of voice assistants and the accompanying voice user interface, about half will gravitate toward using it daily.

By adding more facets to hearing aids, such as Bluetooth streaming or a home for a smart assistant, the value proposition increases. A hearing aid with the single function of amplification is compelling in its own right, but added functionality is gravy. Some potential hearing aid users might not find quick access to a smart assistant or podcast streaming all that compelling, but for others, it might be what gets them over the hump to begin wearing hearing aids.

The fact of the matter is that there’s crossover here. There are likely hearing aid candidates who use their smart speakers daily and would find it attractive to learn that Bluetooth hearing aids can function as a smart speaker in their own right. Vice versa, there are likely hearing aid users who would find voice assistants really compelling but have yet to be exposed to the technology because they don’t own a smart speaker. There’s a great opportunity forming here for hearing healthcare professionals to include this emerging aspect of Bluetooth hearing aids in their pitch during the initial patient consultation.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

The Echo Show 5 & Mass Market Multimodal (Future Ear Daily Update 6-21-19)

Multimodal Goes Mainstream

Amazon’s Echo Show 5

This week, as I mentioned in Tuesday’s update, I’m working in my company’s warehouse and therefore have been a podcasting machine, listening to 5-6 podcasts per day with my AirPods while I work. Yesterday, I listened to an awesome episode from Emily Binder’s Beetle Moment Marketing podcast that featured Katherine Prescott as the guest. Emily and Katherine are both voice technology experts and work as marketers and consultants in the voice space. During their conversation they discuss Amazon’s new Alexa-powered smart display, the Echo Show 5.

The Echo Show 5 is exciting for a few different reasons. First and foremost, it’s $89, which is quite a bit cheaper than its main competitor, the $129 Google Nest Hub. Surprisingly, it’s even cheaper than the second-generation Echo smart speaker, which is $99. As Emily and Katherine point out, the pricing here signals that Amazon really wants to move people toward multimodal Alexa devices.

Smart displays serve as an important bridge connecting the interface of tomorrow, VUI (voice user interface), with the interface of today, GUI (graphical user interface). It’s going to take time for the conversational AI that will truly bolster the VUI to develop and mature, so screens that can be controlled via voice make a whole lot of sense during this interim period. Since smart displays utilize videos and images, they also offer more robust use cases today, which should attract more users and different types of usage.

As Katherine mentions during the podcast, one use case that the Echo Show 5 will support well is voice commerce. Voice shopping through a smart speaker has a number of limitations today, many of which can be circumvented by layering on a visual display – reading product descriptions, for instance, or comparing items side by side. It’s just easier to quickly read a product description or glance at two products side by side than it is to have Alexa read them to you.

The final piece that I believe Amazon got right with the Echo Show 5 is the privacy shutter on the display. This is akin to people applying a piece of tape to their laptop webcam. The chances that users have been hacked and spied on through their webcam are razor thin, but people still like the peace of mind. Amazon’s physical shutter follows the same logic: it blocks the camera whenever users want to ensure privacy. With this hardware feature, Amazon is catering to the portion of people who are hesitant to use a smart display due to privacy concerns.

Ultimately, the Echo Show 5 has the potential to bring multimodal #VoiceFirst usage mainstream. Given that it’s $40 cheaper than Google’s Nest Hub, it will be interesting to see how Google responds with its next smart display and whether it falls in line with the Echo Show 5’s pricing. Either way, a whole lot more people are about to be exposed to the utility of a multimodal smart assistant experience.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Three Things I Learned from Dr. Eric Topol (Future Ear Daily Update 6-19-19)

AI and Your Doctor, Today and Tomorrow

a16z recently published a fantastic podcast titled “AI and Your Doctor, Today and Tomorrow,” in which a16z general partner Vijay Pande interviews Dr. Eric Topol about how AI will impact healthcare. Dr. Topol is a longtime cardiologist and chair of innovative medicine at Scripps Research, and he provides a fascinating perspective on how AI is already transforming healthcare and how it’s bound to shape the field over the next decade. Here are three things I learned from the conversation.

  1. “Natural Language Processing can liberate from keyboards” – this is one of the most promising near-term use cases for voice in healthcare: doctors leveraging recent improvements in NLP to shift the note-taking aspect of their job from manual, typed inputs to spoken inputs.

    “The fact that voice recognition is moving so fast in terms of accuracy and speed is really encouraging.” – Speed is key, as doctors will only move toward a voice-based note-taking system if they feel it delivers a real reduction in time spent. Accuracy of the transcription is even more critical because it allows machine learning to take place on top of the data. Today’s data is so error-ridden that overlaying machine learning applications becomes far more challenging.

  2. “Multi-modal data processing – we are not doing it yet” – this was one of the most interesting portions of the discussion. Eric illustrates the concept with someone with diabetes who only has access to their glucose levels, so all the patient can see is whether those levels are going up or down. Eric posits, “why aren’t we factoring in everything they eat and drink, sleep and activity and the whole works.” Wearables factor into the multi-modal data inputs Eric is describing and will be key to adding more types of data that can be constantly monitored and factored into the total equation. Rather than going to the doctor and saying “I felt my heart flutter,” which isn’t all that helpful, patients can point to exactly when it happened on their Apple Watch.
  3. “How do you overcome the fact that not everything is quantitative, like cholesterol levels?” – Eric answers by describing how things that were previously subjective, like state of mind or mood, are becoming more objective. That’s happening through new ways of measuring voice biometrics, breathing patterns, and facial recognition to start establishing objective metrics. A sophisticated voice biometric system could identify whether you’re depressed from the rich data points it collects, such as tone of voice, compared against the catalog of data you’re constantly sharing.
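To make the voice-biometrics idea a bit more concrete, here’s a rough sketch (my own illustration, not anything demonstrated on the podcast) of the kind of low-level acoustic features such a system might start from, using the open-source librosa library; the filename is a placeholder:

```python
# Sketch: extract simple acoustic features that a mood/voice-biometric
# model might consume. librosa is a standard audio-analysis library.
import librosa
import numpy as np

y, sample_rate = librosa.load("voice_sample.wav")   # placeholder file

# Pitch track: reduced pitch variability ("flat" speech) is one signal
# researchers have explored as a marker of depressed mood.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)

rms = librosa.feature.rms(y=y)                      # loudness / energy
mfcc = librosa.feature.mfcc(y=y, sr=sample_rate)    # spectral shape

print("mean pitch (Hz):", np.nanmean(f0))           # NaNs = unvoiced frames
print("pitch variability:", np.nanstd(f0))
print("mean energy:", rms.mean())
```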

My big takeaway from this conversation is that voice computing and wearables will have a significant impact on the future of healthcare, from both the doctor’s and the patient’s standpoint. As Dr. Topol mentions multiple times throughout the podcast, the hope is that AI and machine learning will offload a lot of the non-personal work to computers and free up doctors’ time to get back to providing “the human touch” that’s been lacking in the doctor’s office as physicians are continually saddled with more and more clerical work and drudgery.
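On the note-taking point specifically, here’s a minimal sketch of what dictation-style voice capture looks like with off-the-shelf tools (the open-source SpeechRecognition library and its free Google web API; an actual clinical system would use a medical-grade engine with specialty vocabulary):

```python
# Minimal dictation sketch: capture a spoken note and transcribe it.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # brief calibration
    print("Dictate your note...")
    audio = recognizer.listen(source)

try:
    # Free web API, fine for a demo; not a clinical-grade transcriber.
    note = recognizer.recognize_google(audio)
    print("Transcribed note:", note)
except sr.UnknownValueError:
    print("Could not understand the audio.")
```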

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

AirPods, Podcasting and Learning On-the-Go (Future Ear Daily Update 6-18-19)

Optimizing Time


One of the recurring themes I keep coming across is that “time is our most valuable commodity.” It’s a topic I’ve heard smart thinkers like Simon Sinek, Gary Vaynerchuk, and Annie Duke expand on recently in the podcasts, articles and videos I’ve consumed. It makes a lot of sense – time is an asset we all own, yet one we can never accrue more of. So, how do we make the most of our time?

I work at Oaktree Products, a wholesale medical supplier to hearing healthcare professionals, so we ship hundreds of packages from our warehouse to our customers each day. During my high school summers, I worked in the Oaktree warehouse, picking, packing and shipping orders. Now that I work here full time, I sometimes fill in for the fulfillment team when it’s short-staffed, like I am this week.

What has occurred to me is how easy it now is to optimize my time while I work in the warehouse, in ways that were unfathomable a decade ago when I was in high school. Yesterday, I seamlessly worked while streaming six podcast episodes from my phone to my AirPods. To put that in perspective, when I worked back there in 2009, I could have occupied my time listening to the radio or using wired headphones connected to an iPod – I certainly wasn’t aware of what podcasting was. In 2019, I’m able to learn on-the-go without it negatively impacting my job performance.

This is such a profound shift for those whose jobs are conducive to audio content consumption while they work. It brings me back to the way Marc Andreessen recently answered the question, “what impact do you think wearables will have going into the future?”

“The really big one right now is audio. Audio is on the rise just generally and particularly with Apple and the AirPods, which has been an absolute home run [for Apple]. It’s one of the most deceptive things because it’s just like this little product, and how important could it be? And I think it’s tremendously important, because it’s basically a voice in your ear any time you want.

For example, there are these new YouTube type celebrities, and everybody’s kind of wondering where people are finding the spare time to watch these YouTube videos and listen to these YouTube people in the tens and tens of millions. And the answer is: they’re at work. They have this Bluetooth thing in their ear, and they’ve got a hat, and that’s 10 hours on the forklift and that’s 10 hours of Joe Rogan. That’s a big deal.”

They’re at work. The question now becomes, “what will be the byproducts of portions of our workforce learning on-the-go?” There seems to be a whole lot of potential here that’s only recently been unlocked by new, non-visual forms of content, like podcasts, in conjunction with the rise of ear-worn hardware, like AirPods, that is discreet, comfortable for longer periods of time, and paired to our phones. We tend to split our lives into thirds – work, personal time and sleep. If we can start to blend work and personal time, then, as Marc points out, it’s a really big deal, because we’re collectively putting our most precious commodity to more efficient use.

-Thanks for reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Voice of the Flash Briefing (Future Ear Daily Update 6-17-19)

A Powerhouse Lineup

Voice of the Flash Briefing logo

This October, I will be joining nine other flash briefing creators to share what we’ve learned about all things micro-casting. For those who might not be familiar with flash briefings, they’re essentially short-form podcasts that typically run anywhere from one to seven minutes. During The Voice of the Flash Briefing online event, the ten of us, along with the host, Bradley Metrock, will present on how we use our flash briefings and our key takeaways from experimenting with this new medium. The other panelists include:

  1. Dr. Teri Fisher – Voice in Canada – Teri has a daily flash briefing that he uses in conjunction with his podcasts (Alexa in Canada & Voice First Health), centered around Alexa usage in Canada and Alexa + healthcare. Along with his expertise on these two topics, he’s a wealth of knowledge on implementing a flash briefing and offers a free online course on setting one up.
  2. Emily Binder – Daily Beetle Moment – Emily provides daily tips and info on all things voice marketing, podcasting, smart assistants, etc. She does a great job weaving business insight into her flash briefing and making the case for why any and all businesses need to start thinking about migrating portions of their marketing to voice-enabled services.
  3. Daniel Hill – Instagram Stories – Daniel’s flash briefing is entirely geared around Instagram and how to use the app most effectively. He provides daily insights on new features and developments, flags upcoming features to prepare for, and shares how people are leveraging various aspects of Instagram for their personal or business benefit.
  4. Jen Lehner – Front Row Entrepreneur – Jen’s flash briefing covers all things social media, productivity, digital marketing and online business. She maintains a 5-star rating across 86 reviews on Amazon, making hers one of the highest-rated flash briefings out there.
  5. Suze Cooper (Keynote) – Social Days – Suze’s company, Big Tent Media, helps small businesses tell their stories online through websites and social media. Her flash briefing, Social Days, gives listeners a quick rundown of which “special day of the year” it is, such as “#Superhero Day,” so marketers can stay on top of what’s trending and keep their content fresh.
  6. Janice Mandel – VOICE Daily Briefing – Janice is the programming and content director for the upcoming VOICE Summit in July (where I will be speaking), and uses the VOICE Summit flash briefing to share updates on the conference as it comes together.
  7. Peter Stewart – The Smart Speakers Daily – Peter is an award-winning BBC broadcaster, and his flash briefing covers everything from newly released smart speaker features and functions to voice-branding your own personality to voice-oriented sales strategy. He routinely provides fresh insights into all things voice-first.
  8. Adrian Simple – The Gaming Observer – Adrian’s flash briefing shares daily updates on the world of gaming, including news on upcoming titles, in-game updates, and tips & tricks for some of the most popular games. It boasts a 5-star rating across 183 customer reviews, showing that he’s been at this for a while and has clearly found a recipe for success.
  9. Armel Beaudry – Trebble.fm – Armel is the founder and CEO of Trebble.fm, which lets you stay up-to-date on the topics you care about by listening to shortcasts (1-2 minute podcasts) from your favorite sources, on your phone or on any voice-enabled speaker with Amazon Alexa or Google Assistant.

This should be a pretty awesome event. It’s free and online, so for anyone even remotely interested in learning more about this new form of content creation, or who would like to set up their own flash briefing, I highly encourage attending.
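And for anyone wondering how these work under the hood: Alexa reads a flash briefing from a simple feed that your skill points to. Below is a sketch of a single feed item following Amazon’s documented JSON schema, assembled in Python; every URL and text value is a placeholder:

```python
# Sketch of one Alexa Flash Briefing feed item (JSON feed format).
# Field names follow Amazon's documented schema; values are placeholders.
import json
from datetime import datetime, timezone

item = {
    "uid": "urn:uuid:future-ear-2019-06-17",   # must be unique per item
    "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
    "titleText": "Voice of the Flash Briefing",
    "mainText": "Text Alexa reads aloud when no audio is supplied.",
    "streamUrl": "https://example.com/episodes/2019-06-17.mp3",  # audio version
    "redirectionUrl": "https://example.com/",
}

print(json.dumps(item, indent=2))
```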

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Fortnite + Houseparty (FuturEar Daily Update 6-12-19)

The Gen-Z Match Made in Heaven

Marshmello's Fortnite concert reportedly sets new concurrent player record

The company behind Fortnite, Epic Games, announced today that it is acquiring the popular video chat app Houseparty. While this isn’t really related to the usual topics covered on FuturEar, I thought it significant enough to warrant a blog post. In my opinion, this is one of the more interesting acquisitions I’ve seen recently, so let’s break it down.

Image: Statista

Fortnite is huge and still growing, recently topping 250 million registered users. The really interesting thing about Fortnite, to me, isn’t the game itself (I’m terrible), but rather what Fortnite has the potential to be – a digital environment to hang out in with friends. We saw the first signs that Epic Games might be aspiring to more than just facilitating gameplay with the Marshmello concert held in-game this past February.

Think about that tweet for a second. A rather random DJ hosted a concert for 10 million people. That is an absurdly large audience. If Bruce Springsteen held a concert at the biggest stadium in the US, Michigan Stadium (“The Big House”), he could perform for roughly 107,000 people. Marshmello had an audience roughly 100 times that size!

Houseparty is a video chat app that allows up to eight people to join a “houseparty.” Kerry Flynn of Digiday compared it to AOL Instant Messenger, but with video chat, for a new generation. According to Digiday, 60% of Houseparty’s user base is under the age of 24, aka Gen Z. Similarly, a survey conducted by Newzoo showed that 53% of Fortnite players were ages 10-25. Chances are a large number of Fortnite players are using Houseparty, and vice versa.

So, now that Epic Games owns Houseparty, it stands to reason that we might see video chat added to the game. On the surface, this applies to gameplay itself: today, if you’re playing the battle royale on a team of four, you communicate through a headset. Down the line, we might see video chat that lets you join a group of your teammates, and maybe even other friends who aren’t playing but are hanging out and watching.

Below the surface, think about how this could fuse shared experiences across the real and digital worlds. It’s not that far-fetched to think you could see a concert, or maybe an e-sports match, or go virtual shopping with your friends as avatars in Fortnite, while simultaneously hanging out together via video chat. Who knows – maybe it will be Epic Games that is first to usher in something like Ready Player One’s OASIS.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

“Hearables aren’t THE thing, they’re the thing that gives you access to THE thing” Part 2 (Future Ear Daily Update 6-10-19)

The Broader Picture

(Embedded tweet: part one of Carl’s follow-up thread)

For yesterday’s update, I wrote a piece building on a Twitter thread from long-time hearables industry expert Carl Thomas. In his initial thread, Carl referenced an iconic line from the TV show Halt and Catch Fire, in which the main character shares his vision for personal computers in the early ’80s: “computers aren’t THE thing, they’re what give you access to THE thing.” Carl referenced the line because it’s an apt way to think about hearables – they’re not THE thing; they’re what gives you access to THE thing. So the question becomes: what exactly is THE thing?

In yesterday’s update, I concluded that #VoiceFirst is the most plausible candidate to be the grander use case for hearables, and like any good Twitter discussion, Carl pushed back and argued a slightly different vision of the future. I want to use today’s update to shed light on the vision Carl laid out in his follow-up thread, not only because it’s a truly fascinating view of what’s to come, but because this is what I designed Future Ear Radio for: aggregating really interesting ideas, articles, tweets and podcasts – from large publications and publishers, but also from the communities of experts in each of the fields Future Ear covers – so that we can all learn from each other.

(Embedded tweet from Carl’s thread on seismic shifts)

“Seismic shifts in human behavior brought about by technology seem to happen when trends collide.” This line of thinking is actually the basis for this blog and its sub-header, “Connecting the trends converging around the ear.” I love his example, too: WYSIWYG (what you see is what you get) editors and easy-to-use web hosting platforms, in conjunction with the rising penetration rate of high-speed DSL, collided and created an explosion of user-created web content. It had suddenly become significantly easier (new tools) and faster (high-speed DSL) to publish content on the internet, to the point where anyone could become a publisher. This was the backbone that spurred on social networking.

(Embedded tweet from Carl’s thread on wearables and blockchain)

We’ve all heard of Big Data – the macro-level data that companies covet to help them make smarter business decisions. The flip side would be “Small Data”: each user’s personal data (I first heard this term from Brian Roemmele at the 2018 Alexa Conference). This is what makes Google and Facebook so valuable – we’ve collectively shared so much of our small data with each company, revealing the motivations behind our purchasing decisions through our searches, likes, shares and other online behavior. But what are users getting in return? Better-targeted ads? Duplex on the web?

So, as Carl suggests, one impending collision that might have interesting ripple effects is microphone-equipped, biometric-sensor-laden wearables (wrist- or ear-worn) plus the emergence of blockchain-based, tokenized ecosystems. Rather than sharing all of your personal data with these companies for no perceived trade-off, what if we were paid for our most valuable data?

(Embedded tweet: part three of Carl’s thread)

As Carl says, this might all seem far-fetched, but is it really so unrealistic to think companies would pay for this data? While few alternatives to Google and Facebook exist today, that isn’t guaranteed for tomorrow, and each company’s advertising business model is almost entirely dependent on our small data. People already seem to be growing increasingly wary of sharing more and more data with the tech giants, so a platform underpinned by a “transparent, permissioned, and reversible” ledger system might stand as an appealing alternative, offering more perceived value in exchange for our data.
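Purely as a toy illustration of what a “transparent, permissioned, and reversible” record of a data trade could look like (every name, field and value here is hypothetical; no real platform, token, or API is implied):

```python
# Toy sketch: an append-only, tamper-evident log of consented data trades.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DataTrade:
    user_id: str        # pseudonymous wearable owner
    buyer: str          # company paying for the data
    data_type: str      # e.g. a daily heart-rate summary
    tokens_paid: float  # hypothetical token compensation
    revocable: bool     # "reversible": consent can be withdrawn
    prev_hash: str      # chains entries so tampering is evident

    def entry_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = DataTrade("user-abc", "acme-ads", "heart_rate_daily_summary",
                    tokens_paid=1.5, revocable=True, prev_hash="0" * 64)
print(genesis.entry_hash())   # the next entry would store this as prev_hash
```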

The fact of the matter is that the transformation our in-the-ear devices are undergoing – from rudimentary “dumb” devices into smart little ear-computers – is going to open multiple doors that were previously locked. What lies on the other side of those doors might be a conduit to a new user interface for communicating with technology, a home for smart assistants to operate on our behalf, a personal data-collector farming our data to be sold on a blockchain, or a combination of all of these and more. Just as it was hard to predict the byproducts of the colliding trends at the dawn of the internet or the advent of mobile, it’s tough to know what will stem from today’s innovations as they begin to crash together.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

“Hearables aren’t THE thing, they’re the thing that gives you access to THE thing” (Future Ear Daily Update 6-10-19)

The Conduit to the Future

Carl Thomas, who runs the website Wearables London, published a fantastic Twitter thread this weekend about how he sees the hearables sector developing, based on his experience working in the space since 2012. He hit on a number of excellent points and developments that have transpired over the past seven years, and extrapolated how those developments will continue to play out as more and more trends collide, altering the total trajectory of hearables. Today, I want to build on what Carl started, as the wheels in my head were turning while I re-read his thread multiple times this weekend.

(Embedded tweet referencing Nick Hunn)

Let’s start here. For those who aren’t familiar, Nick Hunn is the godfather of hearables. Nick coined the term in his now-famous white paper from 2014, which Carl refers to in his tweet. If you’ve never read Nick’s “Hearables are the new Wearables” piece, stop now and go read it.

This is an excerpt from Nick’s paper from 2014: “There are plenty of problems to be solved before Hearables become mainstream, but none that are likely to be insurmountable. Today’s hearing aids already contain an amazing level of technical complexity, miniaturized beyond anything you’ll find in a smartphone or tablet. The capabilities exist; the standards are coming.”

The two main “problems” that existed back in 2014 were battery life and reliable pairing between true wireless headphones and the smartphone. Around the same time Nick wrote his paper, hearing aid manufacturer ReSound introduced the first made-for-iPhone (MFi) hearing aid, the LiNX, which used an Apple-developed low-energy Bluetooth protocol. Apple’s low-energy Bluetooth system largely alleviated the battery and pairing concerns and began being deployed en masse around 2015 by all types of in-the-ear device manufacturers. The companion charging cases that have since become standard have almost completely alleviated user concerns with battery life. Nick was right – these were not insurmountable challenges.

Back to Carl’s thread – “The true benefit for hearables would be the value of the platform layer.” This is exactly what I began to realize a few years back, too, as I started to search for the use cases that hearables would be able to uniquely support. What Carl and I seemingly both discovered is that hearables aren’t really “THE thing” so much as the conduit to “THE thing.”

“THE thing,” in my opinion, is the platform layer that Nick Hunn was describing to Carl many years ago. What I discovered when searching for use cases was that the most plausible candidate for “THE thing” is voice computing mediated by voice assistants (Alexa, Google Assistant). As I’ve written before, when you think about the apps on your phone as “jobs for hire” (e.g. Google Maps to get you from point A to point B), it becomes clear there was a previous incumbent we relied on for each job before smartphones (e.g. MapQuest, or a traditional paper map).

The jobs don’t change. It’s the products we “hire” for those jobs that change. In a #VoiceFirst future, I’d simply ask my Google Assistant to navigate me. I’m not hiring “an app” so much as hiring my assistant for that specific task. Now apply that logic to the whole app economy: what can be offloaded to our assistants? It should be reiterated constantly that voice technology as a whole is still very much in its infancy, yet already 26% of American adults own a smart speaker. Smart speakers represent the fastest rate of adoption of any consumer technology product ever (see the chart below).

(Slideshow: smart speaker adoption-rate charts)

So, we have this emerging platform, still in its early development, poised to inherit our computing needs. Since the new platform is audio-based, it’s much more conducive to being an ambient platform, accessible through a range of devices spanning our phones, smart speakers, connected cars, light switches, and in-the-ear devices.

Over time, as people grow accustomed to “hiring” their voice assistants for an increasing number of “jobs,” users will want access to their assistants more often. I’m curious whether people will choose to outfit their environments with a host of voice-enabled devices, or consolidate their access points down to a few “main” ones, such as their hearables.

(Embedded tweet from Carl’s thread)

As Carl points out, average usage of our in-the-ear devices is up 450% compared to 2016, the year AirPods debuted. The increase can be attributed to more content to stream – music, podcasts, video – and to enhancements in battery life and form factor that support longer periods of wear. In my opinion, the next spike in usage will most likely come from the platform, “THE thing,” that Nick Hunn was describing to Carl more than five years ago. In that scenario, hearables become the conduit to the future of computing.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Alexa Conversations (Future Ear Daily Update 6-6-19)

Another Major Leap Forward for Alexa


Amazon held its brand-new re:MARS conference yesterday to showcase all types of innovation – from inside Amazon and from other companies – in machine learning, automation, robotics and space (MARS). It looked like a totally different type of conference than tech companies typically host: more a top-notch science fair showcasing breakthroughs in each of the four fields than a conference around products and services.

That being said, there were still Amazon-related product announcements: new robots for Amazon’s many fulfillment centers, new drone delivery systems, news that Amazon has gained FAA clearance to begin testing drone deliveries in a few months, and, of course, announcements around Jeff Bezos’ much-adored Alexa. Amazon announced a new tool called Alexa Conversations, which might be one of the more significant developments around Alexa in years.

According to Rohit Prasad, Alexa VP and head scientist, “Our objective is to shift the cognitive burden from the customer to Alexa.” The reduction in cognitive load applies to both the user and the developer. It’s made feasible by deep neural networks that automate parts of dialogue construction, helping developers build natural dialogues faster, more easily and with less training data, which ultimately translates into fewer interactions and invocations required of the user.
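For a sense of what that developer-side burden looks like today: a custom skill’s dialogue is hand-wired intent by intent with the ASK SDK, roughly like the sketch below (the intent and slot names are made up, and this shows the conventional approach Alexa Conversations aims to automate, not the new tool’s own authoring flow):

```python
# Sketch of a conventional Alexa custom-skill handler (ASK SDK for Python).
# Alexa Conversations aims to auto-generate much of the dialogue management
# that today lives in hand-written handlers like this one.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class BookFlightHandler(AbstractRequestHandler):      # hypothetical intent
    def can_handle(self, handler_input):
        return is_intent_name("BookFlightIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        city = slots["destination"].value             # developer manages slots
        speech = f"Okay, looking for flights to {city}."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(BookFlightHandler())
handler = sb.lambda_handler()   # entry point when hosted on AWS Lambda
```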

Image: Alexa Developer’s Blog

The best analysis I have read on Alexa Conversations is Bret Kinsella’s Voicebot breakdown. Bret interviewed a number of Alexa developers at re:MARS (interviews that will likely air on future Voicebot podcast episodes), and the common theme is that this is one of the most important developments for skill discovery, which has been one of the core issues with Alexa from the start.

Users can’t remember all the various invocations associated with skills, so by shifting that “discovery” element to Alexa – based on the context of the request and your learned behavior – Alexa can begin to surface suggested skills you’d otherwise not have known about or had forgotten how to invoke. As Bret points out at the end of his analysis, “If successful, however, it will likely usher in the most significant change in Alexa skill development since the introduction of Alexa Skills Kit in 2015.”

We’ll have to keep an eye on how the new tool is received by the developer community and how users respond to skills built with Alexa Conversations. It certainly seems like a major change, and a potential step forward for Alexa to become more conversational.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”