Daily Updates, Future Ear Radio

Voice of the Flash Briefing (Future Ear Daily Update 6-17-19)

A Powerhouse Lineup

Voice of the Flash Briefing Logo.jpg

This October, I will be joining 9 other flash briefing creators to share what we’ve learned about all things micro-casting. For those who might not be familiar with flash briefings, they’re essentially short-form podcasts that typically run anywhere from 1-7 minutes. During The Voice of the Flash Briefing online event, the 10 of us, along with the host, Bradley Metrock, will present on how we each use our flash briefings and what our key takeaways are from experimenting with this new medium. The other panelists include:

  1. Dr. Teri Fisher – Voice in Canada – Teri has a daily flash briefing that he uses in conjunction with his podcasts (Alexa in Canada & Voice First Health), centered around Alexa usage in Canada and Alexa + healthcare. Along with his expertise on these two topics, he’s a wealth of knowledge on setting up and implementing a flash briefing, and he offers a free online course on how to start one.
  2. Emily Binder – Daily Beetle Moment – Emily provides daily tips and info around all things voice marketing, podcasting, smart assistants, etc. She does a great job weaving business insight into her flash briefing and making the case for why any and all businesses need to start thinking about migrating portions of their marketing to voice-enabled services.
  3. Daniel Hill – Instagram Stories – Daniel’s flash briefing is entirely geared around Instagram and how to use the app most effectively. Daniel provides daily insights on new features and developments, teases out upcoming features so listeners can prepare for them, and shares how people are leveraging various aspects of Instagram for their personal or business benefit.
  4. Jen Lehner – Front Row Entrepreneur – Jen’s flash briefing covers all things social media, productivity, digital marketing and online business. She maintains a 5-star rating across 86 reviews on Amazon, making hers one of the highest-rated flash briefings out there.
  5. Suze Cooper (Keynote) – Social Days – Suze’s company, Big Tent Media, helps small businesses tell their stories online through the creation of websites and social media. Her flash briefing, Social Days, gives listeners a quick rundown of which “special day of the year” it is, such as “#Superhero Day,” so that marketers can stay on top of what’s trending and keep their content fresh.
  6. Janice Mandel – VOICE Daily Briefing – Janice is the programming and content director for the upcoming VOICE Summit in July (where I will be speaking), and she uses the VOICE Summit flash briefing to share updates as the conference comes together.
  7. Peter Stewart – The Smart Speakers Daily – Peter is an award-winning BBC broadcaster, and his flash briefing features news on everything from recently released smart speaker features and functions, to voice-branding your own personality, to voice-oriented sales strategy. He routinely provides fresh insights into all things voice-first.
  8. Adrian Simple – The Gaming Observer – Adrian’s flash briefing shares daily updates on the world of gaming, including news around upcoming titles, in-game updates, and tips & tricks for some of the most popular games. It boasts a 5-star rating across 183 customer reviews, showing that he’s been at this for a while and has clearly found a recipe for success.
  9. Armel Beaudry – Trebble.fm – Armel is the founder and CEO of Trebble FM, which lets you stay up to date on the topics you care about by listening to shortcasts (1-2 minute podcasts) from your favorite sources, on your phone or on any voice-enabled speaker with Amazon Alexa or Google Assistant.

This should be a pretty awesome event. It’s free and online, so if you’re even remotely interested in learning more about this new form of content creation, or would like to set up a flash briefing of your own, I highly encourage you to attend.

-Thanks for Reading-

Dave

Daily Updates, Future Ear Radio

Fortnite + Houseparty (FuturEar Daily Update 6-12-19)

The Gen-Z Match Made in Heaven

(Embedded tweet: “Marshmello’s Fortnite concert reportedly sets new concurrent player record”)

The company behind Fortnite, Epic Games, announced today that it is acquiring the popular video chat app Houseparty. While this isn’t really related to the usual topics covered on FuturEar, I thought it significant enough to warrant a blog post. In my opinion, this is one of the more interesting acquisitions I’ve seen recently, so let’s break it down.

Image: Statista (Fortnite registered users over time)

Fortnite is huge and still growing, recently topping 250 million registered users. What I find really interesting about Fortnite isn’t the game itself (I’m terrible at it), but what Fortnite has the potential to be – a digital environment to hang out in with friends. We saw the first signs that Epic Games might be aspiring to more than just facilitating gameplay with the Marshmello concert held earlier this year.

Think about that tweet for a second. A rather random DJ hosted a concert for 10 million people. That is an absurdly large audience. If Bruce Springsteen held a concert at the biggest stadium in the US, The Big House, he’d be able to perform for only about 107,000 people. Marshmello had an audience roughly 100 times that size!
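Here’s the back-of-the-napkin math behind that comparison (both figures are the rough estimates cited above, not official attendance numbers):

```python
concert_viewers = 10_000_000   # reported concurrent Fortnite concert attendees
stadium_capacity = 107_000     # approximate Michigan Stadium ("Big House") capacity

print(round(concert_viewers / stadium_capacity))  # ~93 -- roughly 100x
```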

Houseparty is a video chat app that allows up to 8 people to join a “houseparty.” Kerry Flynn of Digiday compared it to AOL Instant Messenger, but with video chat, for a new generation. According to Digiday, 60% of Houseparty’s user base is under the age of 24, aka Gen-Z. Similarly, a survey conducted by Newzoo showed that 53% of Fortnite users were ages 10-25. Chances are that a large number of Fortnite players are using Houseparty, and vice-versa.

So, now that Epic Games owns Houseparty, it stands to reason that we might start to see a video chat feature added to the game. On the surface, this applies to gameplay itself. Today, if you’re playing the battle royale on a team of four, you can communicate through a headset. Down the line, we might see a video chat feature that lets you join a group chat with your teammates, and maybe even with friends who aren’t playing but are hanging out and watching.

Below the surface, think about the way this can fuse shared experiences together in the real and digital world. It’s not that far-fetched to think that you’d be able to go see a concert, or maybe an e-sports game, or go virtual shopping, with your friends as avatars in Fortnite, while simultaneously hanging out together via video chat. Who knows, maybe it will be Epic Games that is first to usher in something like Ready Player One’s OASIS.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

“Hearables aren’t THE thing, they’re the thing that gives you access to THE thing” Part 2 (Future Ear Daily Update 6-11-19)

The Broader Picture

Thread 2 pt 1.JPG

For yesterday’s update, I wrote a piece building on a Twitter thread from long-time hearables industry expert Carl Thomas. In his initial thread, Carl referenced an iconic line from the TV show Halt and Catch Fire, in which the main character shares his vision of personal computers in the early ’80s: “Computers aren’t THE thing, they’re what give you access to THE thing.” Carl referenced this line because it’s an apt way to think about hearables – they’re not THE thing, they’re what gives you access to THE thing. So the question becomes: what exactly is THE thing?

In yesterday’s update, I concluded that #VoiceFirst is the most plausible candidate for hearables’ grander use case, and like any good Twitter discussion, Carl pushed back and argued for a slightly different vision of the future. I want to use today’s update to shed light on the vision Carl laid out in his follow-up thread, not only because it’s a truly fascinating view of what’s to come, but because this is what I designed Future Ear Radio for: to aggregate interesting ideas, articles, tweets and podcasts, not just from large publications and publishers, but also from the communities of experts in each of the fields Future Ear covers, so that we can all learn from each other.

Seismic Shifts.JPG

“Seismic shifts in human behavior brought about by technology seem to happen when trends collide.” This line of thinking is actually the basis for this blog, and it’s the sub-header of the website: “Connecting the trends converging around the ear.” I love his example too: WYSIWYG (what-you-see-is-what-you-get) editors and easy-to-use web hosting platforms, in conjunction with the rising penetration of high-speed DSL, collided and created an explosion of user-created web content. It had suddenly become significantly easier (new tools) and faster (high-speed DSL) to publish content on the internet, to the point where anyone could become a publisher. This was the backbone that spurred on social networking.

wearable blockchain.JPG

We’ve all heard of Big Data – the macro-level data that companies covet to help them make smarter business decisions. The flip side would be “small data,” which is each user’s personal data (the first time I heard this term was from Brian Roemmele at the 2018 Alexa Conference). This is what makes Google and Facebook so valuable – we’ve collectively shared so much of our small data with each company whenever we’ve revealed the motivations behind our purchasing decisions through our searches, likes, shares and other online behavior. But what are users getting in return? Better-targeted ads? Duplex on the web?

So, as Carl suggests here, one impending collision that might have interesting ripple effects is microphone-equipped, biometric-sensor-laden wearables (wrist- or ear-worn) plus the emergence of blockchain-based, tokenized ecosystems. Rather than sharing all of your personal data with these companies for no perceived trade-off, what if we were paid for our most valuable data?

part 3

As Carl says, this might all seem far-fetched, but is it really so unrealistic to think that companies would pay for this data? While few alternatives to Google and Facebook exist today, that isn’t guaranteed for tomorrow, and each company’s advertising business model is almost entirely dependent on our small data. People already seem to be growing increasingly wary of sharing more and more data with these tech giants, so a platform underpinned by a “transparent, permissioned, and reversible” ledger system might stand as an appealing alternative, with more perceived value offered in exchange for our data.
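To make Carl’s idea a bit more concrete, here’s a minimal sketch of what one entry in a “transparent, permissioned, and reversible” data-sharing ledger might look like. Every name and field here is hypothetical – it reflects no real platform’s design, just the shape of the trade-off being described:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataShareRecord:
    """One hypothetical entry in a permissioned data-sharing ledger."""
    user_id: str
    buyer_id: str
    data_type: str           # e.g. "heart_rate" from a sensor-laden wearable
    payment_tokens: float    # compensation paid to the user for access
    revoked: bool = False    # "reversible": the user can withdraw consent
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def revoke(self) -> None:
        # Reversibility: flag the grant as withdrawn so the buyer
        # loses access going forward.
        self.revoked = True

# A user sells a day's worth of biometric data, then changes their mind.
record = DataShareRecord("user-42", "advertiser-7", "heart_rate", payment_tokens=0.5)
record.revoke()
print(record.revoked)  # True
```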

The fact of the matter is that the transformation our in-the-ear devices are undergoing from rudimentary “dumb” devices into smart little ear-computers is going to open multiple doors that were previously locked. What lies on the other side of those doors might include a conduit to a new user interface to communicate with technology, a home for smart assistants to operate on our behalf, a personal data-collector to farm our data to be sold on the blockchain, or a combination of all these and more. Just as it was hard to predict all the byproducts that stemmed from the colliding trends during the beginning of the internet or the advent of mobile, it’s tough to know what will stem from all of today’s innovations as they begin to crash together.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

“Hearables aren’t THE thing, they’re the thing that gives you access to THE thing” (Future Ear Daily Update 6-10-19)

The Conduit to the Future

Carl Thomas, who runs the website Wearables London, published a fantastic Twitter thread this weekend about how he sees the hearables sector developing, based on his experience working in the space since 2012. He hit on a number of excellent points about the developments that have transpired over the past seven years and extrapolated how things will continue to change as more and more trends collide, altering the trajectory of hearables. Today, I want to build on what Carl started, as the wheels in my head were turning as I re-read his thread multiple times this weekend.

Nick Hunn.JPG

Let’s start here. For those who aren’t familiar, Nick Hunn is the godfather of hearables. Nick is the one who coined the term in his now-famous original white paper from 2014, which Carl refers to in his tweet. If you’ve never read Nick’s “Hearables are the new Wearables” piece, stop now and go read it.

This is an excerpt from Nick’s paper from 2014: “There are plenty of problems to be solved before Hearables become mainstream, but none that are likely to be insurmountable. Today’s hearing aids already contain an amazing level of technical complexity, miniaturized beyond anything you’ll find in a smartphone or tablet. The capabilities exist; the standards are coming.”

The two main “problems” that existed back in 2014 were battery life and solid pairing between true wireless headphones and the smartphone. Around the same time Nick wrote his paper, hearing aid manufacturer ReSound introduced the first made-for-iPhone (MFi) hearing aid, the LiNX, which used an Apple-developed low-energy Bluetooth protocol. Apple’s low-energy Bluetooth system largely alleviated many of the battery and pairing concerns, and it began being deployed en masse around 2015 by all types of in-the-ear device manufacturers. The companion charging cases that have since become standard have almost completely alleviated users’ remaining battery life concerns. Nick was right – these were not insurmountable challenges.

Back to Carl’s thread – “The true benefit for hearables would be the value of the platform layer.” This is exactly what I began to realize a few years back too, as I started searching for the use cases that hearables could uniquely support. What Carl and I seemingly both discovered is that hearables aren’t really “THE thing” so much as they’re the conduit to “THE thing.”

“THE thing,” in my opinion, is the platform layer that Nick Hunn was describing to Carl many years ago. What I discovered when I was searching for use cases was that the most plausible candidate to be “THE thing” is voice computing, mediated by voice assistants (Alexa, Google Assistant). As I’ve written about before, when you think about the way you use all the apps on your phone as “jobs for hire” (e.g. Google Maps to get you from point A to point B), it should be understood that there was a previous incumbent we relied on for that job prior to smartphones (e.g. MapQuest, or a traditional map).

The jobs don’t change. It’s the products we “hire” for those jobs that change. In a #VoiceFirst future, I’d simply ask my Google Assistant to navigate me. I’m not hiring “an app” so much as I’m hiring my assistant for that specific task. Now apply that logic to the whole app economy. What can be offloaded to our assistants? It should be reiterated constantly that voice technology as a whole is still very much in its infancy, but already 26% of American adults own a smart speaker. Smart speakers represent the fastest rate of adoption of any consumer technology product ever (see the chart below).

(Chart: smart speaker adoption rate compared to prior consumer technologies)

So, we have this emerging platform, still in the midst of its early development, poised to inherit our computing needs. Since this new platform is audio-based, it’s much more conducive to being an ambient platform, accessible through a range of devices spanning our phones, smart speakers, connected cars, light switches, and in-the-ear devices.

Over time, as people get accustomed to “hiring” their voice assistants for an increasing number of “jobs,” they will want access to their assistants more often. I’m curious whether people will choose to outfit their environments with a host of voice-enabled devices, or consolidate their access points down to a few “main” ones, such as their hearables.

Carl

As Carl points out, average usage of our in-the-ear devices today is up 450% compared to 2016, the year AirPods debuted. This increase can be attributed to more content to stream – music, podcasts, video – and to enhancements in battery life and form factor that support longer periods of usage. In my opinion, the next spike in usage will most likely be driven by the platform, “THE thing,” that Nick Hunn was describing to Carl more than five years ago. In that scenario, hearables become the conduit to the future of computing.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Alexa Conversations (Future Ear Daily Update 6-6-19)

Another Major Leap Forward for Alexa

Image result for remars

Amazon held its brand-new re:MARS conference yesterday to showcase all types of innovation, from inside Amazon and from other companies, around machine learning, automation, robotics and space (MARS). This looked like a totally different type of conference than what tech companies typically host – more of a top-notch science fair showcasing breakthroughs in each of the four fields than a conference around products and services.

That being said, there were still Amazon-related product announcements, such as new robots that will be used in Amazon’s many fulfillment centers, new drone delivery systems, the announcement that Amazon has gained FAA clearance to begin testing drone deliveries in a few months, and, of course, announcements around Jeff Bezos’ much-adored Alexa. Amazon announced a new tool called Alexa Conversations, which might be one of the most significant developments around Alexa in years.

According to Rohit Prasad, Alexa VP and head scientist, “Our objective is to shift the cognitive burden from the customer to Alexa.” The reduction in cognitive load applies to both the user and the developer. It’s made feasible by deep neural networks that provide a level of automation, helping developers build natural dialogues faster and more easily with less training data, which ultimately translates into fewer interactions and invocations required of the user.

Cross-skill_predictor.png
Image: Alexa Developer’s Blog

The best analysis I have read around Alexa Conversations was Bret Kinsella’s Voicebot breakdown. Bret interviewed a number of Alexa developers at re:MARS (which will likely be aired on future Voicebot podcast episodes), and the common theme is that this is one of the most important developments regarding skill discovery, which has been one of the core issues with Alexa from the start.

Users can’t remember all the various invocations associated with skills, so by shifting that “discovery” element to Alexa – based on the context of the request and your learned behavior – Alexa can begin to surface suggested skills you’d otherwise not have known about or had forgotten how to invoke. As Bret points out at the end of his analysis, “If successful, however, it will likely usher in the most significant change in Alexa skill development since the introduction of Alexa Skills Kit in 2015.”
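For a sense of the hand-wiring that exists today, here’s a minimal skill handler written with the Alexa Skills Kit SDK for Python – the conventional approach, where the developer explicitly models every intent and dialog turn (the intent and slot names below are hypothetical, purely for illustration):

```python
# A conventional ASK SDK handler: every intent and dialog turn is
# hand-modeled by the developer.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class BookTicketIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # Fires only for one explicitly defined intent.
        return is_intent_name("BookTicketIntent")(handler_input)

    def handle(self, handler_input):
        # The developer manually reads slots and scripts each turn of
        # the dialog -- the wiring Alexa Conversations aims to learn
        # from annotated example dialogs instead.
        slots = handler_input.request_envelope.request.intent.slots or {}
        movie = slots.get("movie")
        if movie and movie.value:
            speech = f"Booking tickets for {movie.value}."
        else:
            speech = "Sure -- which movie?"
        return handler_input.response_builder.speak(speech).ask(speech).response

sb = SkillBuilder()
sb.add_request_handler(BookTicketIntentHandler())
lambda_handler = sb.lambda_handler()  # AWS Lambda entry point
```

The promise of Alexa Conversations is that much of this per-turn scripting gets learned from annotated example dialogs rather than written by hand.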

We’ll have to keep an eye on how well this new tool is received by the developer community and how users respond to skills built with Alexa Conversations. It certainly seems like a major change and a potential step forward in making Alexa more conversational.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

“An Intelligent Guardian for your Health” (Future Ear Daily Update 6-5-19)

Image: Apple

Robbie Gonzalez of Wired wrote a great piece yesterday about Apple’s positioning of the Apple Watch as your health control center. Tim Cook referred to the Watch at WWDC as “an intelligent guardian for your health.” As Robbie points out in his article, this hasn’t always been how Apple touted the Watch; in fact, the first model lacked the basic components and features needed to be considered a health-based wearable.

Robbie sums up well here what Apple has done since that first generation Watch to make it the ultimate health-based wearable:

With these latest updates, opting into Apple’s jack-of-all trades approach no longer means sacrificing on specialized features. For consumers who wanted to track their menstrual cycles, Fitbit had been an obvious choice. To monitor long-term trends in their fitness, Garmin was the clear option. But later this year, when a software update enables the Apple Watch to do both, that decision will become more difficult.

This is how Apple eats its competition’s lunch: one bite at a time. Personal health, as the phrase suggests, means different things to different people. The most effective, individualized devices will need to meet users where they are, no matter where that is. By covering as many bases as possible, Apple is positioning itself to do exactly that.

He’s exactly right – Apple has been slowly adding each and every feature and hardware component necessary to match its competitors’ feature sets. The history of the Apple Watch – six watchOS updates and four hardware revisions – indicates that Apple plans on making the most comprehensive biometric data collector. The real kicker, however, is the Apple Health app, because it has all the markings of becoming a traffic hub for biometric data flowing in and out of Apple’s ecosystem.

Apple made it very apparent at WWDC that the core of its motivation when developing products and services is built around trust, privacy and security. Therefore, the Apple Health app stands to be not only a safe hub for information as sensitive as our biometrics, but also a two-way sharing system allowing data exchange between medical professionals and their patients. This would require HIPAA compliance, but as we recently saw Amazon roll out a HIPAA-compliant version of Alexa, I don’t doubt that Apple can gain HIPAA compliance for its health data.

If this is Apple’s real goal with its Health app, then the question becomes, “which data collectors will Apple allow to feed into it?”

It’s likely that we’ll see future generations of AirPods equipped with biometric sensors that can capture many of the metrics the Watch captures (and more). We’re already seeing hearing aids outfitted with these types of sensors, so it’s feasible that the same sensors will make their way into AirPods too. In that scenario, Fitbit, Garmin and the other wearables competitors would be competing with both the Apple Watch and AirPods.

What about third-party collectors, like Fitbit, Garmin or Bluetooth hearing aids equipped with sensors? Apple might ultimately determine that third-party data collected on non-Apple devices makes sense to add to its Health app, given the more grandiose vision of being the data facilitator between doctor and patient. In that scenario, whether you’re feeding your Apple Health app with data from a Fitbit, a Bluetooth hearing aid or a Garmin, Apple still benefits, because users of third-party collection devices would still need iOS in order to use Apple’s Health app. Apple would then have another arrow in its quiver to convince Android users to switch to iOS for its health hub.
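To sketch the “traffic hub” idea in the abstract – this is purely a toy model of the concept, not HealthKit’s actual API, and every name in it is made up:

```python
from collections import defaultdict

class HealthHub:
    """Toy model of a permissioned, two-way health-data hub: devices feed
    readings in, and explicitly authorized parties (e.g. a physician)
    read them out."""
    def __init__(self):
        self._readings = defaultdict(list)   # metric -> [(source, value)]
        self._authorized = set()             # parties allowed to read

    def ingest(self, source, metric, value):
        # Any registered collector -- Watch, AirPods, a Fitbit, a
        # Bluetooth hearing aid -- can feed data in.
        self._readings[metric].append((source, value))

    def authorize(self, party):
        self._authorized.add(party)

    def read(self, party, metric):
        # Reads are gated by explicit consent, per the trust/privacy framing.
        if party not in self._authorized:
            raise PermissionError(f"{party} is not authorized")
        return self._readings[metric]

hub = HealthHub()
hub.ingest("fitbit", "heart_rate", 72.0)
hub.ingest("hearing_aid", "heart_rate", 74.0)
hub.authorize("dr_smith")
print(hub.read("dr_smith", "heart_rate"))
```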

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

WWDC 2019 Rundown (Future Ear Daily Update 6-3-19)

Image: 9to5Mac (Apple WWDC 2019)

Yesterday, Apple hosted its annual developer conference, WWDC, to unveil the most recent slew of tools and updates for the Apple developer community. Here are the announcements from the event surrounding the software behind Apple’s wearables and Siri:

Apple watchOS 6.0

Apple introduced an update to the Watch’s operating system and, along with it, an Apple Watch-specific App Store that can be accessed directly on the watch. This is significant as Apple continues to unbundle the Watch from the iPhone into a standalone product with its own line of independent apps. Previously, all of the Watch’s apps were ported over from the iPhone, so we should begin to see developers building apps and functionality specifically designed for the Watch.

In addition, Apple rolled out a new Health feature that captures the sound levels in one’s environment, measuring and notifying the user of potentially dangerous noise levels. A simple glance at the watch indicates how loud the environment is, and users can take it a step further by reviewing their Health app data at the end of the day to understand which locations expose them to potentially harmful sound levels.
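As a rough illustration of what such a feature computes (this is the textbook formula, not Apple’s implementation), a sound level estimate comes from the RMS of the microphone samples:

```python
import math

def rms_db(samples, ref=1.0):
    """Estimate a sound level in dB from raw audio samples.

    Simplified dBFS-style calculation for illustration; a real meter
    (like the Watch's) applies calibration and A-weighting to report
    dB SPL.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / ref) if rms > 0 else float("-inf")

# Example: a quiet signal vs. a loud one (amplitudes in [-1, 1]).
quiet = [0.01 * ((-1) ** i) for i in range(1000)]
loud = [0.5 * ((-1) ** i) for i in range(1000)]
print(rms_db(quiet), rms_db(loud))  # ~ -40 dB vs ~ -6 dB
```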

Siri & AirPods

Last year, Apple introduced Siri Shortcuts, which I believed to be a major step toward the future of the App Store. The problem, however, was that while Shortcuts represented a great way to link apps together into a flow of commands, the feature was ultimately buried too deep – only 10% of iPhone users had ever attempted to create or even use a shortcut. This year, Apple built Shortcuts right into iOS 13 and began suggesting shortcuts to people based on their app usage. This has the potential to be huge, as it allows many menial tasks to be automated and, as Brian Roemmele points out, could be the underpinnings of a full-blown SiriOS down the road (hopefully next year).

Apple also showed off rather significant improvements to Siri’s underlying technology, with an upgrade to neural text-to-speech. Siri’s speech is now entirely software-generated and uses machine learning to continually improve itself, ultimately allowing for a much more natural-sounding Siri that will only get better over time.

One of the most obvious use cases for a more natural-sounding Siri is voice messaging used in conjunction with AirPods. It’s not just iMessage, either, as Siri can relay messages from third-party apps too. As we move into an era where hundreds of millions of people own and use AirPods, it seems likely that voice messaging, powered by Siri, will be a killer use case for those walking around throughout the day with AirPods in their ears.

Apple also rolled out an “Audio Sharing” feature for AirPods, allowing two users to listen to the same audio source – essentially a Bluetooth version of a cable splitter. It’s another subtle feature that makes owning and using AirPods that much more attractive and compounds the product’s already powerful network effects.

Voice Control

One of the most interesting developments unveiled during the conference was Voice Control. It allows complete control of your Mac or iOS device with your voice by “tapping” and “clicking” through a numbered grid. This feature is an awesome addition to Apple’s accessibility suite, and it’s possible we’ll see a near future where Apple begins to marry Voice Control with Siri, letting the user give Siri more context through the numbered grid. For example, I might want to reference something on my phone to Siri, and could more effectively show Siri what I’m referring to via the numbered grid.
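To illustrate the mechanics (my own sketch, not Apple’s code), the numbered grid simply maps a spoken number to a region of the screen:

```python
def grid_cells(width, height, rows, cols):
    """Number the screen as a rows x cols grid, returning each cell's
    number and center point -- the coordinate a spoken "tap three"
    would resolve to. Purely illustrative of the numbered-grid idea."""
    cell_w, cell_h = width / cols, height / rows
    cells = {}
    for r in range(rows):
        for c in range(cols):
            number = r * cols + c + 1          # cells numbered 1..rows*cols
            center = ((c + 0.5) * cell_w, (r + 0.5) * cell_h)
            cells[number] = center
    return cells

# "Tap 5" on a 3x3 grid over a 390x844 screen -> the center cell.
print(grid_cells(390, 844, 3, 3)[5])  # (195.0, 422.0)
```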

Race for the Future

Siri Shortcuts is arguably one of the most innovative areas within Apple right now, so baking Shortcuts directly into iOS 13 and suggesting pre-made shortcuts will only expose more people to the power and utility of this feature. Moving Siri to neural text-to-speech makes Siri that much more usable and functional, which will be on full display in voice messaging with AirPods. Apple continues to unbundle the Watch from the iPhone and to create Watch-specific use cases, such as all the developments around the Health app. Apple’s Watch strategy, with its slow unbundling from the iPhone, provides a blueprint for AirPods’ trajectory as that device becomes capable of being unbundled as well.

A lot of what we saw from WWDC this year appears to be baby steps toward Apple’s next-generation UI and OS. Many of these improvements are incremental on their own, but combined, they start to accrue into something more meaningful. Apple appears to be warming up to the idea that Siri and voice-as-a-UI will play an important role in the company’s future, but the question remains whether Apple is internally structured and motivated to innovate around Siri significantly enough to match the blistering pace of innovation we’re seeing with Alexa and Google Assistant, along with Amazon’s and Google’s build-out of third-party developer networks.

-Thanks for Reading-

Dave

 

Daily Updates, Future Ear Radio

The Booming Hearables Market (Future Ear Daily Update 5-30-19)

Hearables Comprise One-Third of All Wearables Sales

Ear valley

The International Data Corporation (IDC) released its Q1 2019 wearables market data today, stating that 49.6 million wearable devices shipped in the first quarter of the year and that wearables as a whole experienced 55.2% year-over-year growth. It’s clear that wearables are doing well overall in terms of market adoption, but the real story in this data is the growth of hearables.

Hearables, or ear-worn wearables, grew 135% year-over-year and now comprise 34.6% of all wearables sales. So, doing a little back-of-the-napkin math, we can estimate that about 17 million hearables shipped in Q1 2019, and at the current rate roughly 68 million would ship over the full year. My guess is that the final number will be higher, as I would bet that the growth rate continues to increase. Why? Well, let’s look at why this segment of the wearables market is growing in the first place.
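(Here’s that napkin math spelled out, using IDC’s Q1 figures from above:)

```python
q1_wearables = 49.6e6       # total wearables shipped in Q1 2019 (IDC)
hearables_share = 0.346     # hearables' share of those shipments

q1_hearables = q1_wearables * hearables_share
print(round(q1_hearables / 1e6, 1))      # ~17.2 million hearables in Q1
print(round(q1_hearables * 4 / 1e6, 1))  # ~68.6 million annualized
```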

According to Jitesh Ubrani, research manager for IDC, “The elimination of headphone jacks and the increased usage of smart assistants both inside and outside the home have been driving factors in the growth of ear-worn wearables.”

SS2

The death of the headphone jack only accelerated a consumer shift toward Bluetooth headsets that was already fast underway. Apple described its move to kill the headphone jack as “courageous” when, in reality, it was seeing the writing on the wall: the majority of people were already buying wireless Bluetooth headsets by the time the iPhone 7 was released. All the handset manufacturers followed suit, which has continued to push people toward truly wireless headsets.

According to Voicebot.ai’s Smart Speaker Consumer Adoption Report from March 2019, 66.4 million U.S. adults own a smart speaker, comprising 26.2% of the U.S. adult population. As Jitesh pointed out, people are becoming accustomed to voice technology via their assistant-equipped speakers. So as those same smart assistants now begin to move into hearables, that aspect of the devices might appeal to the quarter of US adults who own smart speakers.

We are at the very early stages of smart assistant technology, and as our assistants evolve and mature to the point where they serve as a primary method of computing for users, this dimension of hearables is only going to become more appealing. While 17 million quarterly hearables units and 135% annual growth is an awesome sight to see, I would bet that we’re only at the beginning of this product category’s growth.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Top 5 Voicebot Podcast Episodes (Future Ear Daily Update 5-29-19)

Image: The Voicebot Podcast

As Voicebot approaches its 100th podcast episode, host Bret Kinsella asked listeners to tweet him their favorite episodes from the first 100. I figured I’d use today’s daily update to list my top 5 favorite episodes, along with why I liked them so much. Here are my top 5, in no particular order:

    1. Vijay Balasubramaniyan – CEO Pindrop Security – Ep 86: Among all the Voicebot podcast episodes I have listened to, I think this is the one that has resonated with me the most. I was so blown away by the conversation that I immediately blogged about it here.

      Pindrop is a voice security company that uses 1,380 “voice parameters” to identify a user with a high degree of accuracy. I ultimately see voice biometrics becoming the “password” system for voice computing, voice commerce, the IoT, and so on. Security is a critical piece of moving voice computing along (see the sketch after this list for the basic shape of voiceprint matching).

    2. Amir Hirsh – CEO Audioburst – Ep 97: Along with security, one of the other areas that needs attention for voice computing to really take off is search and discovery for audio content. After listening to this episode, I realized that Audioburst represents the closest parallel to Google in the voice space (to my knowledge). Indexing the voice web is a gargantuan task, but that’s what Amir and Audioburst are setting out to do.

      I plan on writing a full piece on what Audioburst is working on conceptually, but for now, imagine being able to search for any company or public figure and get results for every instance that company or figure was mentioned on a radio show, podcast or other audio content source. You could say, “What’s the latest on Tesla?” and have five different clips fed to you that have been indexed and curated by Audioburst.

    3. Niko Vuori – CEO Drivetime.fm – Ep 81: As our smart assistants branch out of our homes into a variety of new environments, the area that is perhaps gaining the most momentum is the car. Niko shares how his company, Drivetime.fm, is designing games for the car, starting with trivia. Each game would be its own “channel” and a distinct experience. It’s very interesting to hear how the gaming ecosystem might take shape as cars become more and more voice-equipped. As we saw with previous computing generations, games tend to be at the forefront of new computing paradigms.
    4. Adam Cheyer – Co-founder of Siri and Viv, Engineering Lead at Bixby – Ep 69: Adam Cheyer is one of the founding architects of voice assistant technology. He helped build Siri, moved on to start the next evolution of the smart assistant with Viv, which was acquired by Samsung, and was then tasked by Samsung with implementing Viv into Bixby’s redesigned version 2.0. If you’re even remotely interested in the voice space, need I say more as to why you should check out this interview? Adam’s interview is a must-listen to understand what sets Bixby apart at a fundamental level, what Adam thinks the next ten years of voice computing will look like, and what obstacles need to be overcome to get to the future he describes.
    5. Heidi Culbertson – CEO of Marvee – Ep 68: One of my favorite people in the voice space, Heidi talks about the inspiration for her company (her mom) and how she got into designing voice experiences for the elderly community. One of the things that sets Heidi apart is that she conducts a ton of field research with older adults to really understand how they’re using voice technology and what stands out to them. As I just wrote in Voicebot, our aging population and voice computing are a match made in heaven, and it’s people like Heidi who are on the front lines of designing ways for older adults to derive as much value as possible from their smart speakers and assistants today.
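As promised above, here’s a generic sketch of the idea behind voiceprint matching – comparing a stored voiceprint against features extracted from live audio. This is not Pindrop’s actual method (their 1,380 “voice parameters” and scoring are proprietary); it’s just the basic shape of speaker verification, with toy three-dimensional vectors standing in for real feature sets:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two voice-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_speaker(enrolled, candidate, threshold=0.85):
    # Verify a caller by comparing the stored voiceprint against
    # features extracted from the live audio; a real system would use
    # a far richer feature vector and a tuned decision threshold.
    return cosine_similarity(enrolled, candidate) >= threshold

enrolled = [0.9, 0.2, 0.4]            # voiceprint stored at enrollment
live = [0.88, 0.25, 0.38]             # features from the live call
print(same_speaker(enrolled, live))   # True
```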

Other great episodes that stick out to me: Mark C. Webster, Ep 33; Chris Messina, Ep 38; Brian Roemmele, Ep 80; Stuart Patterson, Ep 72; Amy Stapleton, Ep 84.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

New Post on Voicebot.ai (Future Ear Daily Update 5-28-19)

Empowering Our Aging Population with Voice Assistant Enabled Hearing Aids

5-28-19

I published an article on Voicebot.ai late Friday afternoon about the evolution of hearing aids and how they’re beginning to serve as a home for voice assistants like Siri, Alexa and Google Assistant. This is a topic I’ve been thinking about for a long while, as it’s what led me to find the #VoiceFirst community after I began to realize that hearing aids’ Bluetooth connectivity would open the door to a multitude of new features and use cases.

I believe that this aspect of hearing aids will be not only appealing but also empowering, especially for the cohort most susceptible to hearing loss – older adults, due to age-related hearing loss. There are a number of potential reasons why hearing aid adoption has historically been so low (price and stigma, to name a few), but ultimately the consumer has not been seeing enough value in the devices to compel a purchase, which is why the hearing aid sales cycle tends to run around seven years. As hearing aids begin to experience their “iPhone moment” as a product category, by truly becoming computerized, the hope is that the value proposition will only get stronger, and therefore more compelling.

Link to article: https://voicebot.ai/2019/05/24/empowering-our-aging-population-with-voice-assistant-enabled-hearing-aids/

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”