
Empowering Our Disabled Population With Voice Tech (Future Ear Daily Update 7-29-19)


One of the best sessions I attended at the Voice Summit last week was Cathy Pearl’s “Democratization of Voice.” This talk resonated with me because it highlighted one of the strongest themes of this year’s conference: one of the biggest cohorts to benefit from voice technology, even in its infant, primitive state, is our disabled population. I live-tweeted this session if you’re interested in seeing my real-time takeaways.

 

Back at Google’s I/O developer conference in May of this year, Google rolled out a handful of new accessibility programs, which I wrote about in a May daily update. Each of these programs made an appearance in Cathy’s talk, which was largely centered around how voice technology can be leveraged by a wide range of people with disabilities. For example, we saw a video of a 19-year-old with a rare type of muscular dystrophy that greatly restricts his mobility. He was able to use Google Assistant in conjunction with a number of connected IoT devices to outfit his bedroom and control everything from the lights to the TV to the blinds using his voice.

The ability to outfit one’s home like this is life-changing for the user as well as the caregivers, and that’s what makes me so excited about this technology. To me, it might seem trivial to control my lights with my voice, but it’s a godsend for folks living with debilitating diseases and disabilities, not to mention the caregivers whose burden the technology reduces.

That’s why I think it’s so cool that Google announced it will be giving away 100,000 free Google Home Minis through the Christopher & Dana Reeve Foundation to celebrate the Americans with Disabilities Act’s 29th anniversary. How cool is that? My main takeaway from Cathy’s talk is that one of the most obvious impacts voice technology has had over these past few years is its ability to empower people with disabilities, restore dignity, and reduce the burden on caregivers. So when I see Google stepping up and pledging 100,000 devices to help amplify this movement, well, that’s good by me.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Hearables Panel Recap (Future Ear Daily Update 7-26-19)


Yesterday, as I was leaving the Voice Summit and boarding my plane back to St. Louis, I was thinking about the best way to recap the hearables panel I participated in on Thursday afternoon. When this panel was first organized by the Voice Summit, it was slated to consist of only Andy Bellavia and myself, which I’m sure would have made for a good discussion, but as the event drew closer, the organizers added three more panelists and our moderator. I honestly don’t think we could have had a more well-rounded mix of backgrounds and perspectives.

Hearables panel, from left to right: Eric Seay, Andy Bellavia, myself, Rachel Batish, Andreea Danielescu, Claire Mitchell

I decided that the best way to recap this panel would be to try to illustrate how I think each of the panelists’ backgrounds and areas of focus will weave together as time progresses. When Claire asked me how I see the world of hearables taking shape over the next few years, I attempted to paint a mental picture for the audience. I haven’t seen any video clips from our talk yet, so I’m going off memory, but here’s the gist of how I responded and how I think each of my fellow panelists’ areas of focus will factor in:

“The first thing we need to acknowledge is the behavioral shift that’s been occurring since AirPods debuted in December 2016: it has progressively become normalized to wear truly wireless earbuds for extended periods of time. This shift has been enabled by the hardware advancements coming out of Andy’s world. Much of the hearables innovation that transpired in the first half of this decade centered on the components inside the devices, from systems-on-a-chip to DSP chips to sensors to Bluetooth advancements. Those innovations eventually manifested themselves as things like better battery life and automatic Bluetooth pairing. In essence, the cultural behavior shift ushered in by AirPods would not have been possible without the preceding component innovation.

So, you asked, where do we go from here? One way to think about hearables’ role is how they impact the “attention economy.” One of the biggest byproducts of the mobile era is that we’re constantly able to consume content. Facebook, Instagram, Snapchat, Twitter, etc. exchange free content for our attention; time is the currency of the attention economy. So, in a world where it has become socially acceptable and technologically feasible to wear ear-worn devices for extended periods throughout the day, it’s realistic to think that the attention economy begins to be divided between our eyes and our ears. We’re already seeing this with podcasting, but just as Instagram stories represent one specific type of visual content that drives the visual attention economy, podcasts represent one specific type of audio content that will drive the “aural attention economy.”

The attention economy is predicated on tools that enable easy content creation, leading to a larger supply of content to consume. Therefore, tools that enable audio content creation are paramount to the near-term future of the aural attention economy. Supply generation, however, is only one part of the equation; we also need to intelligently surface and curate that audio content so that people discover content they actually want to listen to. Rachel’s company, Audioburst, is a perfect example of how we can better connect content to users. Through a tool like Audioburst, I can say, “give me an update on what’s going on with Tesla,” and rather than being fed a summary of a Business Insider article, I’m fed radio and podcast clips where the hosts are specifically talking about Tesla.

The other aspect of the emergence of hearables that needs to be worked out is how we design experiences that might be audio-only. Eric’s work around sonic branding and non-verbal audio cues is going to play a big role in the foreseeable future, because we’ll need to rely on a variety of audio cues that we intuitively understand. For example, if I receive an audio message, I’ll want to be alerted to it by a non-invasive cue. Or if I’m walking down the street and I’ve indicated that I’m hungry, I might hear McDonald’s sonic brand (ba-dah-ba-ba-baah) indicating that there’s a McDonald’s close by.

Creating and designing audio cues is a challenge in and of itself, but implementing those cues adds another level of complexity. As Andreea described from her work designing audio experiences for a variety of hearables, the challenge tends to stem from the user’s context. The way I interact with my hearable will be far different if I’m straining through a five-mile run than if I’m on a leisurely walk. Experiences will need to be tailored to those contexts: when I’m running I might just want to say “heartrate” to my hearable and receive my heart rate readout, but when I’m walking around I might speak a full sentence, “what’s my heart rate reading?” These are subtleties, but enough poor experiences, however subtle, will ultimately lose the user’s interest.

So, we should see more component innovation coming out of Andy’s world, which will facilitate continued behavior shifts. Tools like Rachel’s company, Audioburst, will allow for more intelligent content aggregation and give us more reason to wear our devices for longer stretches as we begin dividing our attention between our eyes and ears. Longer usage means more opportunity for audio augmentation and sonic branding and advertising, which will need to be carefully thought out by UX folks like Andreea and Eric so as not to create poor experiences and drive users away.”
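To make Andreea’s point about context a bit more concrete, here’s a minimal sketch of what context-tailored input handling on a hearable might look like. Everything here is hypothetical: the context object, the utterance list, and the function names are illustrative and not drawn from any real SDK. The idea is simply that the same intent can be triggered by a terse or a conversational utterance, with the response adapted to the user’s activity.

```python
# Hypothetical sketch of context-tailored voice input on a hearable.
# Names and structures are illustrative, not from any real SDK.

from dataclasses import dataclass

@dataclass
class UserContext:
    activity: str        # e.g. "running" or "walking"
    heart_rate_bpm: int  # latest reading from the hearable's sensor

# Both the terse and the conversational utterances map to the same intent.
HEART_RATE_UTTERANCES = {
    "heartrate",
    "heart rate",
    "what's my heart rate",
    "what's my heart rate reading",
}

def handle_utterance(utterance: str, ctx: UserContext) -> str:
    normalized = utterance.lower().strip("?!. ")
    if normalized in HEART_RATE_UTTERANCES:
        if ctx.activity == "running":
            # Mid-workout: keep the spoken response as short as possible.
            return f"{ctx.heart_rate_bpm}."
        # At a leisurely pace there's room for a fuller, friendlier answer.
        return f"Your heart rate is {ctx.heart_rate_bpm} beats per minute."
    return "Sorry, I didn't catch that."

if __name__ == "__main__":
    print(handle_utterance("heartrate", UserContext("running", 162)))
    print(handle_utterance("What's my heart rate?", UserContext("walking", 88)))
```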

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 


Voice Summit Day One Recap (Future Ear Daily Update 7-24-19)


Although the Voice Summit technically started on Monday, that day was considered “Day 0” and Tuesday marked Day One of the summit. The thing that makes this summit so impressive and valuable is that it’s packed full of speakers from start to finish. It led off with three back-to-back keynotes from three of the most influential people in the voice space.

I took to Twitter yesterday to try to live-tweet each of the sessions as best I could, so feel free to click into the embedded Twitter thread for each session for my real-time thoughts.

First up was Dave Isbitski, Chief Evangelist for Alexa. Dave talked a lot about the first five years of the Alexa journey, highlighting the various phases along the way to where we are today. We’ve moved from single-turn conversations to multi-turn, and as Dave detailed, the next phase is multi-session, which means that Alexa will start to understand the user’s context and, in time, learn things about the user such as preferences. This is all achieved through deep learning models.

Dave also unveiled a new workflow tool called “Skill Flow Builder” that allows anyone, developer or non-developer, to easily input and adjust dialogue within skills. The use case Dave highlighted for this was interactive storytelling games. Just as I tweeted, this really harkens back to Brian Roemmele talking about the need for the technology to be simple enough to “bring the creatives into the fold.” Skill Flow Builder does just that.

One of my favorite portions of Dave’s talk was around flash briefings and some of the creative ways people are starting to use them, such as for internal corporate communications. Flash briefings continue to strike me as one of the most unique aspects of Alexa and something we’re only just beginning to scratch the surface of.
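For anyone curious what sits behind a flash briefing, it’s essentially a simple feed that Alexa polls for new items. Below is a rough Python sketch of what a single feed item might look like; the field names reflect my understanding of Amazon’s flash briefing feed format, and every value (the UID, URLs, and text) is a placeholder rather than anything from a real skill.

```python
# Rough sketch of a single Alexa flash briefing feed item, serialized as JSON.
# Field names follow my understanding of Amazon's flash briefing feed format;
# every value here (UID, URLs, text) is a placeholder.

import json
from datetime import datetime, timezone
from typing import Optional

def build_flash_briefing_item(title: str, text: str,
                              audio_url: Optional[str] = None) -> dict:
    item = {
        "uid": "urn:uuid:00000000-0000-0000-0000-000000000001",  # unique per item
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": text,  # read aloud by Alexa when no audio stream is supplied
        "redirectionUrl": "https://example.com/placeholder",
    }
    if audio_url:
        # For a pre-recorded briefing, point Alexa at the audio file instead.
        item["streamUrl"] = audio_url
        item["mainText"] = ""
    return item

if __name__ == "__main__":
    print(json.dumps(build_flash_briefing_item(
        "Future Ear Daily Update",
        "Recapping day one of the Voice Summit.",
    ), indent=2))
```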

Next was Adam Cheyer, who co-founded Siri and sold it to Apple, then moved on to Viv Labs, which was purchased by Samsung, where Adam now works. Adam heads up the Bixby division, and Bixby 2.0 is the first iteration of the voice assistant under his leadership. Obviously, when one of the founding fathers of voice assistants is presenting, you’re in for some interesting insight.

To round out the initial keynotes, we had the pleasure of hearing Noelle LaCharite of Microsoft talk about Cortana’s makeover. I think Microsoft is smart to have pivoted Cortana away from competing with Google Assistant and Alexa as a “master assistant” and instead positioned Cortana as “the master of Microsoft.” As Noelle pointed out, Cortana is wonderful when it’s tasked with doing things housed inside Microsoft’s properties, such as scheduling meetings in Outlook. Additionally, I appreciate the focus Microsoft has on accessibility, which is clearly a motivation for Noelle personally.

After the first three keynotes, the breakout sessions began. The one downside of this conference is that there are about seven sessions going on at once, so it can be really hard to choose which one to attend. I decided to go see Mark C. Webster’s talk on “why conversational interfaces are designed to fail.”

This was one of the better talks I’ve heard in the voice space, and the reason was that Mark shot straight with the room about the state of conversational interfaces. One of the key points he made was that the metaphor we use for assistants as “people” might be leading to confusion and poor experiences among users. In previous computing interfaces, images allowed us to create metaphors (whether desktop icons or app icons) that communicate the intent behind the icon. Voice, on the other hand, does not really offer a similar construct.

The issue with creating the expectation that you can just “speak to Alexa as you would a person” is that it’s not really true. Alexa and Google Assistant exist today because the natural language processing engines these assistants run on have advanced considerably in the past decade, allowing them to capture our speech with high accuracy. But just because they can accurately capture what we’re saying does not mean the assistant knows what to do with the input, which is what leads to, “I’m sorry, I do not understand that.” That was the crux of Mark’s presentation – maybe we shouldn’t be setting the expectation that these are “conversational devices” quite yet.

The last session of the day that I saw was Brielle Nickoloff of Witlingo talking about the evolution of the voice web. This was an awesome talk that included a really stellar demo of Buildlingo’s newest update. One of the key points was that as tools that facilitate easier and faster audio content creation continue to surface (e.g., Buildlingo and Castlingo), the world of audio content creation becomes democratized. Brielle did a great job drawing parallels between the voice web and the evolution of the internet through its various phases, where it progressively became easier and easier to share content on the web, to the point that anyone could quickly and easily share anything on sites like Twitter, Facebook, and YouTube.

 

All in all, it was an awesome day. I learned a lot, met a ton of people, connected with old pals, and got a good understanding of where we are with voice technology in its various pockets. On to day two!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Hearables On-The-Go (Future Ear Daily Update 7-18-19)


One week from now, at the VOICE Summit, I will be joining an all-star group of hearables experts on a panel titled “How On-the-Go Hearables Expand Opportunities with Voice,” which will be moderated by Claire Mitchell of VaynerMedia. After spending some time getting to know my fellow panelists, I can safely say that this is going to be an awesome discussion, as each panelist brings a different type of expertise to the table.


Andrew Bellavia of Knowles Corp. knows more about the hardware side of hearables than anyone I know. Knowles manufactures the microphones and DSP chips that various Alexa devices are outfitted with. Along with his expertise in all things hearables hardware, Andy wears top-of-the-line Bluetooth hearing aids (Phonak Marvels) that he routinely uses to communicate with Google Assistant. He’s on the forefront of experimenting with and experiencing the future of #VoiceFirst + hearables.

Rachel Batish of Audioburst will provide a unique perspective on how Audioburst is enabling a future where we can catalog, search, and curate audio clips from various radio shows and podcasts. To put what Audioburst is striving to be into simple terms: it’s essentially Google, but for audio. Imagine a future where you’re driving in the car and you ask Audioburst to “read you updates on Netflix,” and then get a curated feed of short podcast and radio clips where the broadcasters are specifically speaking about Netflix. That’s the type of perspective Rachel will be able to provide.

Eric Seay of AudioUX co-founded a creative audio agency that aims to shape the world of audio and sonic branding. As we move into a world that is less reliant on screens and visual overlays, Eric believes that many of the visual cues we’ve become familiar with will have audio counterparts: logos and sonic logos, social media likes and nano audio “likes.” Eric will surely offer interesting ways to start thinking about the burgeoning world of audio- and voice-only experiences.

Finally, Andreea Danielescu of Accenture Labs and the startup Antilipsi works as a founder, engineer, researcher, and architect across the full software stack. She’s done extensive work around experience design and gesture interaction with all different types of emerging technologies, including voice assistants. She’ll bring to the table real-world experience with early implementations of hearable technologies that incorporate features like voice assistant access, sharing what limitations exist and how we can work to overcome them.

It’s sure to be an awesome discussion next Thursday. If you’re attending the Voice Summit, I hope that you’re able to join us, and if not, I’ll do my best to recap some of the key takeaways from our panel on all things hearables + voice.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Alexa’s Customer Acquisition Cost (Future Ear Daily Update 7-17-19)


There were two headlines coming out of this year’s two-day Amazon Prime Day event that really caught my attention, as I believe they tell a similar story about customer acquisition cost (CAC) and the long game Amazon is playing. For starters, according to an Amazon blog post, “Amazon welcomed more new Prime members on July 15 than any previous day, and almost as many on July 16 – making these the two biggest days ever for member signups.”

Tren Griffin, who writes the blog 25iq and is a longtime Silicon Valley veteran, succinctly summarized the broader takeaway:

(Embedded tweet from Tren Griffin, 7-17-19)

On the surface, Prime Day appears to be a giant flash sale, but in reality, it’s a ploy for Amazon to sign up more Prime members. This is important to note, as Consumer Intelligence Research Partners found in 2018 that Prime members spend an average of $1,400 a year on Amazon, while non-Prime customers spend an average of $600. Amazon will easily and quickly make back whatever revenue it gave away during Prime Day with the new members it signed up, as those members’ spending tends to more than double.
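To put some rough numbers on that logic, here’s a back-of-the-envelope sketch. The annual spend figures are the CIRP estimates cited above; the assumed gross margin and the discount given away per new signup are made-up placeholders, purely for illustration.

```python
# Back-of-the-envelope payback math for acquiring a Prime member on Prime Day.
# Annual spend figures are the CIRP estimates cited above; the margin and
# acquisition-cost assumptions are made up for illustration.

PRIME_MEMBER_ANNUAL_SPEND = 1400   # CIRP estimate, 2018
NON_PRIME_ANNUAL_SPEND = 600       # CIRP estimate, 2018
GROSS_MARGIN = 0.25                # assumed blended margin on retail sales
ACQUISITION_COST = 75              # assumed discounts given away per new member

incremental_spend = PRIME_MEMBER_ANNUAL_SPEND - NON_PRIME_ANNUAL_SPEND   # $800/year
incremental_margin = incremental_spend * GROSS_MARGIN                    # $200/year

payback_months = ACQUISITION_COST / (incremental_margin / 12)
print(f"Incremental margin per new Prime member: ${incremental_margin:.0f}/year")
print(f"An assumed ${ACQUISITION_COST} CAC pays back in about {payback_months:.1f} months")
```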

The second headline that caught my attention was that, according to Amazon, “Prime Day was also the biggest event ever for Amazon devices, when comparing two-day periods – top-selling deals worldwide were Echo Dot, Fire TV Stick with Alexa Voice Remote, and Fire TV Stick 4K with Alexa Voice Remote.”

As Brian Roemmele astutely pointed out, this wave of new users is evidenced by the surge the Alexa app is having in the iOS App Store rankings, from 96 to 36 in less than 24 hours (I imagine this will keep climbing as more people receive their devices).

We don’t know for sure how many Alexa-enabled devices have been sold this Prime Day yet, but Voicebot published an article that points out multiple Alexa items being sold out and currently on back order, including the Echo Show 5, which is on back order until September. There was seemingly strong demand for Alexa devices during Prime Day once again this year.

What strikes me here is that, similar to Prime Day being a ploy to grow the Prime membership base under the guise of a flash sale, Amazon’s slashing of Alexa device prices on Prime Day serves a similar purpose, as Alexa is a “membership” of sorts. What I mean is that once a consumer has bought their first Alexa or Google smart speaker, and then decides to add more devices down the line, they’ll likely stick with the same assistant, augmenting their living space with more access points to it.

According to Voicebot’s Consumer Adoption Report published in March 2019, 40% of smart speaker owners have multiple devices, up from 34% the previous year. So, to Tren Griffin’s point, Amazon knows that while it’s forfeiting margin on the Alexa devices it’s slashing now, it will make it up on the back end as more people buy additional devices.

Furthermore, the bigger picture is that Amazon wants to own the next generation of computing. Amazon missed the boat on mobile, and it’s banking on voice assistants (a 10,000-person-team-sized bet) being the future. So the CAC of Alexa users will probably be viewed as incredibly cheap in hindsight as the utility of Alexa rises, and competing voice assistant providers find that the cost of poaching Alexa customers becomes increasingly expensive.

Amazon seems to be using the same playbook with Alexa that it did with Prime: “buy” its user base early and on the cheap, then lock users in with increasing value. Traditional brick-and-mortar companies are finding today that trying to poach Prime members is immensely challenging considering all the value Amazon has baked into its Prime membership, something none of Amazon’s retail and e-commerce competitors seem able to match, as the cost of building a service to compete with Prime is simply too high today.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 


“Voicebot First 100” Dave Joins the Voicebot Podcast for Episode 105 (Future Ear Daily Update 7-16-19)


A few weeks back, I was fortunate to be asked by Voicebot podcast host Bret Kinsella to join him, Amy Stapleton, and Pete Haas on the podcast to help review some of the most memorable interviews and moments from the podcast’s first 100 episodes. The episode was recently published on all major podcasting platforms.

Amy co-founded Tellables, a studio that creates interactive story games designed to be told by voice assistants, and prior to Tellables she worked at NASA for 14 years (so cool). Pete is a longtime developer who has worked in design and development roles through the various phases of computing, first as a web developer, then a mobile developer, and now a voice developer, having created more than 200 voice experiences in the past few years (also very cool). Needless to say, I was in pretty good company!


Having now recorded our 90-minute chat, here are a few of my big takeaways from the first 100 episodes of the Voicebot podcast:

  1. What a time we live in, where you can hear inventors speak candidly about the technology they’re ushering in, such as Adam Cheyer, one of the original founders of Siri. Can you imagine being able to listen to Einstein, Edison, Ford, etc. back in their heyday talking about what they were working on?
  2. In the same vein, it’s incredibly impressive the type of access Bret is getting to leaders within the various companies he covers. To Amy’s point during our chat, Bret does a great job ferreting out information, even from leaders within organizations that haven’t historically been very transparent, such as Amazon. It’s really valuable as a listener and an awesome resource for business development.
  3. The three episodes I picked as my favorites were those with Amir Hirsh of Audioburst, Vijay Balasubramanyan of Pindrop Security, and Adam Cheyer – co-founder of Siri and Viv Labs (Samsung Bixby 2.0). I view the areas all three of these companies are working in as “big ideas,” each paramount to the future of voice technology’s success and proliferation.
  4. I love the compilation episodes that the Voicebot team produces, because they often serve as a teaser for future full-length interviews with companies that were originally surfaced in those compilations.

I hope you enjoy the discussion. It was certainly enjoyable to get called up to the big leagues and go on a podcast that I listen to so frequently and find so much value in.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Introducing The Future Ear Roundtable (Future Ear Daily Update 7-15-19)


Today I am excited to announce a new aspect of Future Ear Radio called “The Future Ear Roundtable.” It’s a separate skill/action that needs to be enabled, but the concept is for the Future Ear Roundtable to serve as a companion to my daily Future Ear Radio flash briefing, where I can engage with Future Ear Radio listeners. I recorded a video to help illustrate exactly what it is that I’m launching and how you can participate.

Essentially, Future Ear Roundtable is my Alexa/Google-enabled version of “let’s take some callers.” While it has been awesome uploading a daily flash briefing, it’s still a one-way street of communication. That’s what makes this companion skill so exciting to me: I can now let Future Ear Radio listeners not only engage with me, but potentially even engage with each other. For example, you might launch Future Ear Roundtable, hear my question along with the three responses that have already been published, and rather than respond to my question directly, choose to respond to one of the fellow respondents. That’s what I really want with this… a community of listeners sharing and engaging with one another.

To join in on the fun, simply:

  1. Enable the Future Ear Roundtable Alexa Skill or Google Action
  2. Say, “Alexa/Google, launch The Future Ear Roundtable” to hear this week’s question
  3. Download the Castlingo app
  4. Use Castlingo to record your response to this week’s question and then publish it on the Future Ear Roundtable channel (see image below)
The Future Ear Roundtable Channel hosted inside the Castlingo app

It’s that easy! I’m going to leave up the AirPods question that I originally posted for the next few days, and I’ll share on Twitter once I’ve updated it. From there, I’ll likely update the question at a specific day and time each week to standardize it as a weekly cadence. Hope to hear all your awesome responses on The Future Ear Roundtable!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


What If Treating Hearing Loss Mitigates Cognitive Decline? (Part 2) (Future Ear Daily Update 7-12-19)


There have been a number of studies conducted in recent years to try and understand if there is a link between hearing loss and cognitive decline, and the research continues to indicate that there is a correlation between the two. The question that researchers are now trying to answer is whether people with hearing loss can mitigate the slope of cognitive decline by treating their hearing loss.

The answer to this question is very tough to parse out because of how many other variables are involved, which makes it difficult for researchers to isolate the relationship between hearing loss, hearing loss treatment (i.e. hearing aids), and cognitive decline. Since hearing loss becomes more common as we age, it could simply be that both hearing loss and cognitive decline are byproducts of aging. That said, the two could be deeply linked, as suggested by Dr. Piers Dawes’ “cascade hypothesis,” where “long-term deprivation of auditory input may impact on cognition either directly, through impoverished input, or via effects of hearing loss on social isolation and depression.”

In other words, correlation does not necessarily equal causation, and therefore research that tries to answer this question must be thorough in isolating the variables being measured from the covariates that might skew the data.
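To make that concrete, here is a minimal, purely illustrative sketch of the kind of covariate-adjusted longitudinal model such studies run. The dataset and column names are hypothetical, and this is not the actual analysis from any of the studies discussed in this post.

```python
# Illustrative sketch (not any study's actual analysis): a mixed-effects model
# estimating whether hearing aid use changes the slope of cognitive decline
# after adjusting for covariates. Dataset and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical longitudinal data, one row per visit

model = smf.mixedlm(
    # The hearing_aid:years interaction is the term of interest: does using
    # a hearing aid flatten the cognitive-score trajectory over time?
    "cognitive_score ~ hearing_aid * years_since_baseline"
    " + age + sex + education + wealth + smoking + physical_activity + depression",
    data=df,
    groups=df["participant_id"],  # random intercept per participant
)
result = model.fit()
print(result.summary())
```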

Today, I came across a paper published in ENT & Audiology News by Dr. Catherine Palmer, PhD, audiologist and Director of Audiology at the University of Pittsburgh, that revolved around research attempting to answer this question. In the paper, this quote really jumped out at me:

“In a US population-based longitudinal cohort study, 2040 individuals over the age of 50 had cognitive performance measured every two years over 19 years, and new hearing aid use was identified along this time period. After controlling for a number of covariates (e.g. sex, age, education, marital status, wealth, smoking, drinking, physical activity, depression, etc.) the authors determined that hearing aid use had a mitigating effect on the trajectory of cognitive decline in later life. In other words, those who received hearing aids, regardless of many other covarying factors, had a less steep slope toward cognitive decline.”

These findings, along with Dr. Piers Dawes’ research published in January 2019, indicate that treating hearing loss might change the trajectory of dementia to some extent and lessen the slope toward cognitive decline.

Back in May, I wrote an update about an interview I did with Dr. Nicholas Reed of Johns Hopkins for Oaktree TV on this very topic. During our conversation, Nick stated that along with the Johns Hopkins study currently underway to try to answer this question, Johns Hopkins has conducted research finding that hearing loss leads to higher healthcare costs (outside of hearing-loss-related expenses), more frequent hospitalizations, and an increase in certain cognition-related comorbidities. All the more reason why hearing loss is such a serious issue.

So, while there is not yet sufficient research to definitively say whether treating hearing loss mitigates the potential for cognitive decline, one has to wonder: what if it does? Keep in mind that in the US healthcare system, hearing aids are considered “elective” (the same status as plastic surgery) and are largely uninsured. Wouldn’t this finding help strengthen the argument that these are not “nice-to-have” devices but actually “need-to-have” devices? Food for thought.

-Thanks for Reading-

Dave


“Hearables – The New Wearables” – Nick Hunn’s Famous White Paper 5 Years Later (Future Ear Daily Update 7-10-19)


Nick Hunn, the wireless technology analyst and CTO of WiFore Consulting, coined the term “hearables” in his now-famous white paper, “Hearables – the new Wearables,” back in 2014. For today’s update, I thought it might be fun to look back at Nick’s initial piece to appreciate his prescience in predicting how our ear-worn devices would mature over the coming years.

For starters, one of the most brilliant insights that Nick shared was around the new Bluetooth standards that were being adopted at the time and the implications for battery life:

“The Hearing Aid industry’s trade body – EHIMA, has just signed a Memorandum of Understanding with the Bluetooth SIG to develop a new generation of the Bluetooth standard which will reduce the power consumption for wireless streaming to the point where this becomes possible, adding audio capabilities to Bluetooth Smart. Whilst the primary purpose of the work is to let hearing aids receive audio streams from mobile phones, music players and TVs, it will have the capability to add low power audio to a new generation of ear buds and headsets.”

To put this into perspective, the first “made-for-iPhone” hearing aid, the ReSound LiNX, had just been unveiled at the end of 2013. Nick published his paper in April of 2014, so it may have been apparent to close observers that hearing aids were heading toward the iPhone’s 2.4 GHz Bluetooth protocol (every hearing aid manufacturer eventually adopted it), but without a background like Nick’s in the broader field of wireless technology, it would have been hard to know that this new Bluetooth standard (initially branded Bluetooth Smart and later known as Bluetooth Low Energy) would allow for much more efficient power consumption.

Nick’s insight became more pronounced when Apple rolled out its AirPods in 2016 with the W1 chip, which used ultra-low-power Bluetooth to allow for five hours of audio streaming (two hours of talk time). Fast-forward to today, and Apple has released its second-generation AirPods, which use the H1 chip and Bluetooth 5.0, allowing for even more efficient power consumption.

It needs to be constantly reiterated that hearables were deemed unrealistic up until midway through the 2010s because of how inefficient power consumption was with previous Bluetooth standards. Batteries are one component inside miniature devices that historically has not seen a whole lot of innovation, in terms of both size reduction and energy density, so it might not have been obvious that the workaround to this major roadblock lay in how power is drawn from the battery, via new, more efficient Bluetooth signaling.

The other aspect of hearables that Nick absolutely nailed was that ear-worn devices would eventually become laden with biometric sensors:

“Few people realise that the ear is a remarkably good place to measure many vital signs. Unlike the wrist, the ear doesn’t move around much while you’re taking measurements, which can make it more reliable for things like heart rate, blood pressure, temperature and pulse oximetry. It can even provide a useful site for ECG measurement.”

Today, US hearing aid manufacturer Starkey has incorporated one of Valencell’s heart rate monitors into its Livio AI hearing aids. This integration, unveiled at CES this year, was made possible by the miniaturization of heart rate sensors to the point that they can fit into a tiny receiver-in-the-canal (RIC) hearing aid. To Nick’s point, there are significant advantages to recording biometric data in the ear rather than at the wrist, so it should come as no surprise when future versions of AirPods and their competitors come equipped with various sensors over time.

Nick continues to write and share his insights, so if you’re not already following his work, it might be a good time to start reading up on Nick’s thinking about how our little ear-computers will continue to evolve.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 


Podcast Previews (Future Ear Daily Update 7-9-19)


Something that I’ve been thinking about a lot lately is podcast discovery. The reason I’m so keen on podcasting right now is that I see it as a killer use case for both hearables (passive content consumption as a reason to wear the devices longer and more frequently) and smart assistants (controlling, curating, and mediating that content).

Last week, I wrote about the concept of podcast-to-podcast advertising as a potential solution to help creators of all sizes work together to monetize and grow their audiences. As I mentioned in that post, there are 750,000 podcasts out there and more than 30 million episodes have been recorded, according to the website podcastinsights.com. So there’s a big ocean of content that listeners need to wade through in order to find something they might like.

One of the other potential solutions I’ve been thinking might help with the discovery dilemma is “podcast previews,” controlled via a smart assistant. The way I envision this operating is similar to a flash briefing: you issue one voice command (“Alexa, play my podcast previews”) and then receive a stream of short clips from the podcasts you subscribe to, along with recommendations you might like. This would seem to fit right in the wheelhouse of platforms like Spotify that are focusing heavily on podcasting and personalized curation.

You’d then be able to control the stream via your voice assistant, allowing you to skip ahead, play the full podcast, and ultimately subscribe to a podcast. In this scenario, podcast creators would be able to crop their own clip from the episode being previewed and submit it to the stream, the same way a flash briefing is submitted.
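Just to make the idea concrete, here’s a toy sketch of how such a preview queue might be assembled. Everything in it, including the types, show names, and URLs, is made up for illustration; this isn’t any real platform’s API.

```python
# Toy sketch of the "podcast previews" idea described above: assemble a short
# preview queue from creator-submitted clips of subscribed shows, plus a few
# recommendations. All types, names, and URLs are made up for illustration.

from dataclasses import dataclass

@dataclass
class PreviewClip:
    show: str
    episode_title: str
    clip_url: str        # creator-cropped highlight, submitted like a flash briefing
    duration_sec: int

def build_preview_queue(subscribed: list, recommended: list, max_clips: int = 5) -> list:
    # Subscribed shows come first; any remaining slots go to recommendations.
    queue = subscribed[:max_clips]
    queue += recommended[: max(0, max_clips - len(queue))]
    return queue

# A command like "Alexa, play my podcast previews" would then play this queue,
# with "skip", "play the full episode", or "subscribe" as follow-up commands.
if __name__ == "__main__":
    subs = [PreviewClip("Example Voice Show", "Episode 42", "https://example.com/a.mp3", 60)]
    recs = [PreviewClip("Example Audio Design Show", "Sonic Logos", "https://example.com/b.mp3", 45)]
    for clip in build_preview_queue(subs, recs):
        print(f"{clip.show}: {clip.episode_title} ({clip.duration_sec}s)")
```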

To any podcast creators reading this – what do you think? Would this make sense as a way to expose more people to your podcast? I’d love to hear your thoughts, so be sure to reach out on Twitter to get the conversation going!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”