Daily Updates, Future Ear Radio, hearables, Podcasts, VoiceFirst

Beetle Moment Marketing Podcast Appearance (Future Ear Daily Update 8-13-19)

8-13-19 - Beetle moment

One of the best things about attending the Voice Summit was meeting so many sharp people working in and around the voice space. One of the people I was fortunate to meet and spend some time with was Emily Binder, founder of Beetle Moment Marketing. Emily’s marketing firm specializes in helping brands differentiate by leveraging emerging technologies, which includes all things voice.

She has an impressive portfolio of work, which includes the creation and management of Josh Brown and Ritholtz Wealth Management’s flash briefing, “Market Moment,” and Alexa skill/mini podcast, “The Compound Show.” (Josh, aka The Reformed Broker, is one of the most popular internet figures in the finance world, with over a million Twitter followers.) This is just one example of the type of work that she does on a regular basis for all types of clients.

Emily approached me at the Voice Summit about coming on her podcast to record an episode centered around hearables, which we recorded last week. It was a quick, 18-minute discussion about the evolution of the hearables landscape, where the technology is going, and some of the challenges that have to be navigated to get there. We also touched on Flash Briefings and shared the same sentiment about how much potential there is in this new medium, while simultaneously being a bit disappointed that Amazon hasn’t given the Alexa-specific feature more prominence (it should be the star of Amazon’s smart speakers!).

Check out the episode, and be sure to engage on Twitter to let me know what you think!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Voicebot, VoiceFirst

New Voicebot Post (Future Ear Daily Update 8-2-19)

7-31-19 - Two Key Takeaways

Today I published an article on Voicebot.ai around some key takeaways from Spotify and Apple’s earnings reports that pertain to all things Future Ear.

“Apple and Spotify each reported earnings on Tuesday and there were a few key points in each company’s earnings report that pertain to the world of audio, voice, and hearables. In particular, we can start to see how an aural attention economy is taking shape alongside the visual attention economy ushered in by the mobile era.”

Head on over to Voicebot to check out the post, and let me know what you think on Twitter!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Conferences, Future Ear Radio, hearables, VoiceFirst

Empowering Our Aging Population with Voice Tech (Future Ear Daily Update 7-30-19)

7-30-19 - Empowering Our Aging Population

Yesterday’s update was based on Cathy Pearl’s fantastic talk at the Voice Summit about democratizing voice technology and using it to empower individuals with disabilities who can benefit from it. Today, I wanted to highlight another cohort that stands to gain from voice tech: our aging population. Prior to Cathy’s talk, I attended an equally awesome session led by Davis Park of Front Porch and Derek Holt of K4Connect. I live-tweeted this talk as well (sorry if I overloaded your Twitter feed during the summit!):

To add some context here, K4Connect is a tech startup specifically geared toward building “smart” solutions for older adults. Front Porch, on the other hand, is a group of retirement communities in California that has been piloting a series of programs to bring Alexa-enabled devices into its residents’ homes. The two are now working together on phase two of Front Porch’s pilot, in which K4Connect is helping outfit residents with IoT devices such as connected lights and thermostats.

K4Connect and Front Porch’s Pilot Program

From my perspective, this was one of the most important sessions of the entire Voice Summit. The reason I say this is that it homed in on two key facts that have been recurring themes throughout FuturEar:

  • America’s population is getting considerably older, both because we’re living longer and because 10,000 baby boomers are turning 65 every day for a 20-year stretch (2011-2030).
  • The older our population gets, the higher the demand climbs for caregivers to look after our older adults. It was stated in the presentation that we as a nation will need to recruit and retain 300,000 additional caregivers to meet the 2026 demand, and that demand will only continue to go up based on the first bullet point.

The takeaway from this talk, similar to Cathy Pearl’s, was that voice technology (namely, voice assistants and the IoT) can be implemented and utilized to offset the demand for caregivers by empowering our older adults. One overlapping message from this talk and Cathy’s was that caregivers are largely burdened by menial tasks (turn on the light, close the blinds, change the TV channel), and the individuals being cared for are hyper-conscious of this. It wears on the caregiver, and it wears on those receiving care, because they know how taxing their requests are. Siri, Alexa, and Google, on the other hand, do not get exhausted; they’re little AI bots, so who cares if you’re issuing hundreds of commands a day? That’s the beauty of this.

Davis Park highlighting the success of Phase 2 of Front Porch’s Pilot

Following the talk, I spoke with Davis Park about the pilot and asked him what the Front Porch residents are using their Alexa devices for. “It’s completely different based on the resident. For example, one woman said she loves it because she can now make the perfect hard-boiled egg,” Davis said. This was a total aha! moment for me, because we often under-appreciate the nuanced ways individuals find value in the oft-cited, frequently belittled use cases of voice assistants today (weather, timers, news, scores, etc.). On the surface, sure, she’s finding value in being able to set a timer, but dig a little deeper and you’ll find that the real value is that she’s no longer overcooking her hard-boiled eggs.

Slide from the session on the individual and economic costs of loneliness and social isolation

The slide pictured above from the session illustrates why I see so much potential for voice technology, specifically for older adults. It’s becoming increasingly apparent through numerous research studies that loneliness and social isolation are severely detrimental to us as individuals, as well as to the broader economy.

The industry that I come from, the world of hearing aids and hearing loss, understands these comorbidities all too well, as hearing loss is often correlated with social isolation. If your hearing is so diminished that you can no longer engage in social situations, you’re more likely to withdraw and become socially isolated and lonely.

This is ultimately why I think we’ll see voice assistants become integrated into this new generation of hearing aids. It kills two birds with one stone: it augments one’s physical sound environment by providing amplification and the ability to hear more clearly, while also serving as an access point to a digital assistant that can be used to communicate with one’s technology. One of the best solutions on the horizon for circumventing the rising demand for caregivers might be “digital caregivers” in the form of Alexa or Google Assistant housed in hearing aids or other hearable devices.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, VoiceFirst

Hearables Panel Recap (Future Ear Daily Update 7-26-19)

7-26-19 - Hearables panel recap

Yesterday, as I was leaving the Voice Summit and boarding my plane back to St. Louis, I was thinking about the best way to recap the hearables panel I participated in on Thursday afternoon. When this panel was first organized by the Voice Summit, it was slated to consist of only Andy Bellavia and myself, which I’m sure would have been a good panel discussion, but as the event drew closer, the summit organizers added three more panelists and our moderator. I honestly don’t think we could have had a more well-rounded set of backgrounds and perspectives.

Hearables Panel
From left to right: Eric Seay, Andy Bellavia, myself, Rachel Batish, Andreea Danielescu, Claire Mitchell

I decided that the best way to recap this panel would be to try to illustrate how I think each of the panelists’ backgrounds and areas of focus will weave together as time progresses. When Claire asked me how I see the world of hearables taking shape over the next few years, I attempted to paint a mental picture for the audience. I haven’t seen any video clips from our talk yet, so I’m going off my memory, but here’s the gist of how I responded and how I think each of my fellow panelists’ areas of focus will factor in:

“The first thing that we need to acknowledge is the behavioral shift that’s been occurring since December of 2016, when AirPods debuted. It has progressively become normalized to wear truly wireless earbuds for extended periods of time. This behavior shift has been enabled by the hardware advancements coming out of Andy’s world. Much of the hearables innovation that transpired in the first half of this decade was largely around the components inside the devices, from systems-on-a-chip to DSP chips to sensors to Bluetooth advancements. These innovations eventually manifested themselves as things like better battery life and automatic Bluetooth pairing. In essence, the cultural behavior shift ushered in by AirPods would not have been possible without the preceding component innovation that made it feasible.

So, you asked, where do we go from here? One way to think about hearables’ role is how they impact the “attention economy.” One of the biggest byproducts of the mobile era is that we’re constantly able to consume content. Facebook, Instagram, Snapchat, Twitter, etc. exchange free content for our attention; time is the currency of the attention economy. So, in a world where it has become socially acceptable and technologically feasible to wear ear-worn devices for extended periods throughout the day, it’s realistic to think that the attention economy begins to be divided between our eyes and our ears. We’re already seeing this with podcasting, but just as Instagram stories represent one specific type of visual content that drives the visual attention economy, podcasts represent one specific type of audio content that will drive the “aural attention economy.”

The attention economy is predicated on tools that enable easy content creation, leading to more content supply to be consumed. Therefore, tools that enable audio content creation are paramount to the near-term future of the aural attention economy. Supply generation, however, is only one part of the equation; we also need to more intelligently surface and curate the audio content supply, so that people discover content they actually want to listen to. Rachel’s company, Audioburst, is a perfect example of how we can better connect content to users. Through a tool like Audioburst, I can say, “give me an update on what’s going on with Tesla,” and rather than being fed a summary of a Business Insider article, I’m fed radio and podcast clips where the hosts are specifically talking about Tesla.

The other aspect of the emergence of hearables that needs to be solved is how we design for experiences that might be audio-only. Eric’s work around sonic branding and non-verbal audio cues is going to play a big role in the foreseeable future, because we’ll need to rely on a variety of audio cues that we intuitively understand. For example, if I receive an audio message, I’ll want to be alerted to that message with a cue that’s non-invasive. Or if I’m walking down the street and I’ve indicated that I’m hungry, I might hear McDonald’s sonic brand (ba-dah-ba-ba-baah) indicating that there’s a McDonald’s close by.

Creating and designing audio cues is a challenge in and of itself, but the implementation of said cues adds another level of complexity. As Andreea described from her work designing audio experiences for a variety of hearables, the challenge tends to stem from the user’s context. The way I interact with my hearable will be far different if I’m really straining myself on a 5-mile run compared to a leisurely walk. The experiences we have will need to be tailored to those contexts: when I’m running I might just want to say “heartrate” to my hearable as an input and receive my heart rate readout, but when I’m walking around I might input a full sentence, “what’s my heartrate reading?” These are subtleties, but enough poor experiences, however subtle, will ultimately lose the user’s interest.

So, we should see more component innovation coming out of Andy’s world, which will facilitate continual behavior shifts. Tools like Rachel’s company, Audioburst, will allow for more intelligent content aggregation and more reason to wear our devices for longer periods of time as we begin dividing our attention between our eyes and ears. Longer usage means more opportunity for audio augmentation and sonic branding and advertising, which will need to be carefully thought out by UX folks like Andreea and Eric so as not to create poor experiences and drive users away.”
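To make that context point a little more concrete, here is a minimal sketch (my own illustration, not anything the panelists actually built) of how a hearable app might resolve a terse, mid-workout fragment and a full conversational sentence to the same intent depending on the wearer’s activity:

```swift
import Foundation

// Hypothetical activity contexts a hearable might infer from its motion sensors.
enum ActivityContext {
    case running   // exerting; the wearer tends to speak in fragments
    case walking   // relaxed; the wearer may speak in full sentences
}

enum HearableIntent {
    case heartRateReadout
    case unknown
}

// Resolve an utterance to an intent, relaxing the matching rules when the
// wearer is mid-workout and likely to bark a single word.
func resolveIntent(_ utterance: String, context: ActivityContext) -> HearableIntent {
    let text = utterance.lowercased()
    switch context {
    case .running:
        // Accept bare fragments like "heartrate".
        return text.contains("heartrate") || text.contains("heart rate") ? .heartRateReadout : .unknown
    case .walking:
        // Expect fuller phrasing, e.g. "what's my heartrate reading?"
        return (text.contains("heart") && text.contains("rate")) ? .heartRateReadout : .unknown
    }
}

print(resolveIntent("heartrate", context: .running))                     // heartRateReadout
print(resolveIntent("what's my heartrate reading?", context: .walking))  // heartRateReadout
```

The point of the sketch is simply that both phrasings land on the same result; the hard UX work is deciding, per context, which phrasings the device should be listening for at all.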

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Conferences, Daily Updates, Future Ear Radio, VoiceFirst

Voice Summit Day One Recap (Future Ear Daily Update 7-24-19)


Although the Voice Summit technically started on Monday, that day was considered “Day 0” and Tuesday marked Day One of the summit. The thing that makes this summit so impressive and valuable is that it’s packed full of speakers from start to finish. It led off with three back-to-back keynotes from three of the most influential people in the voice space.

I took to Twitter yesterday to try to live-tweet each of the sessions as best I could, so feel free to click into the embedded Twitter thread for each session for my real-time thoughts.

First up was Dave Isbitski, chief evangelist of Alexa. Dave talked a lot about the first 5 years of the Alexa journey, highlighting various phases along the way to where we are today with Alexa. We’ve moved from single-turn conversations to multi-turn, and as Dave detailed, the next phase is multi-session, which means that Alexa will start to understand the user’s context and, in time, learn things about the user like preferences. This is all achieved through deep learning models.

Dave also unveiled a new workflow tool called “Skill Flow Builder” that allows anyone, developer or non-developer, to easily input and adjust dialogue within skills. The use case Dave highlighted for this was interactive storytelling games. Just as I tweeted, this really harkens back to Brian Roemmele talking about the need for the technology to be simple enough to “bring the creatives into the fold.” Skill Flow Builder does just that.

One of my favorite portions of Dave’s talk was around flash briefings and some of the creative ways people are starting to use them, such as for internal corporate communications. Flash briefings continue to strike me as one of the most unique aspects of Alexa and something we’re only just starting to scratch the surface of.

Next was Adam Cheyer, who co-founded Siri and sold it to Apple, then moved on to Viv Labs, which was purchased by Samsung, where Adam now works. Adam heads up the Bixby division, and Bixby 2.0 is the first iteration of the voice assistant under his leadership. Obviously, when one of the founding fathers of voice assistants is presenting, you’re due for some interesting insight.

To round out the initial keynotes, we had the pleasure of Noelle LaCharite of Microsoft talking about Cortana’s makeover. I think Microsoft is smart to have pivoted Cortana away from competing with Google and Alexa as a “master assistant” and instead positioned Cortana as “the master of Microsoft.” As Noelle pointed out, Cortana is wonderful when it’s tasked to do things housed inside Microsoft’s properties, such as scheduling meetings with Outlook. Additionally, I appreciate the focus Microsoft has on accessibility, which is clearly a personal motivation for Noelle.

After the first three keynotes, the breakout sessions began. The one downside of this conference is that there are about seven sessions going on at once, so it can be really hard to choose which one to attend. I decided to go see Mark C. Webster’s talk on “why conversational interfaces are designed to fail.”

This was one of the better talks I’ve heard in the voice space, and the reason is that Mark shot the room straight about the state of conversational interfaces. One of the key points he made was that the metaphor we use for assistants as “people” might be leading to confusion and poor experiences among users. In previous computing interfaces, images allowed us to create metaphors (whether desktop icons or app icons) to communicate the intent of the icon. Voice, on the other hand, does not really offer a similar construct.

The issue with creating the expectation that you can just “speak to Alexa as you would a person” is that it’s not really true. Alexa and Google Assistant exist today because the natural language processing engines that these assistants run on have advanced considerably in the past decade, allowing them to capture our speech with high accuracy. But just because they can accurately capture what we’re saying does not mean that Alexa knows what to do with your input, which often leads to, “I’m sorry, I do not understand that.” That was the crux of Mark’s presentation – maybe we shouldn’t be setting the expectation that these are “conversational devices” quite yet.
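To illustrate the gap Mark was describing, here’s a tiny sketch (mine, not Mark’s, and not how Alexa is actually implemented) showing how a perfectly transcribed utterance can still dead-end in a fallback response when it doesn’t map to any known intent:

```swift
import Foundation

struct ToyAssistant {
    // A deliberately small intent inventory standing in for a skill's language model.
    let knownIntents: [String: String] = [
        "weather": "WeatherIntent",
        "set a timer": "TimerIntent"
    ]

    // The transcript arrives with near-perfect accuracy; understanding is the hard part.
    func respond(toTranscript transcript: String) -> String {
        let text = transcript.lowercased()
        if let match = knownIntents.first(where: { text.contains($0.key) }) {
            return "Handling \(match.value)"
        }
        return "I'm sorry, I do not understand that."
    }
}

let assistant = ToyAssistant()
print(assistant.respond(toTranscript: "Set a timer for ten minutes"))        // Handling TimerIntent
print(assistant.respond(toTranscript: "Can you renegotiate my cable bill?")) // fallback
```

Speech recognition nails the words in both cases; only the first happens to fall inside what the assistant was built to handle.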

The last session of the day that I saw was Brielle Nickoloff of Witlingo talking about the evolution of the voice web. This was an awesome talk that included a really stellar demo of Buildlingo’s newest update. One of the key points from this talk was that as tools like Buildlingo and Castlingo continue to surface and facilitate easier, faster audio content creation, the world of audio content creation becomes democratized. Brielle did a great job drawing parallels between the voice web and the evolution of the internet in its various phases, showing how it progressively became easier and easier to share content on the web, to the point that anyone could quickly and easily share anything on sites like Twitter, Facebook, and YouTube.

 

All-in-all, it was an awesome day. I learned a lot, met a ton of people, connected with old pals, and got a good understanding of where we are with voice technology in its various pockets. Onto day two!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Conferences, Daily Updates, Future Ear Radio, hearables, VoiceFirst

Hearables On-The-Go (Future Ear Daily Update 7-18-19)

7-18-19 - Hearables On-the-Go

One week from now, at the VOICE Summit, I will be joining an all-star group of hearables experts on a panel titled “How On-the-Go Hearables Expand Opportunities with Voice,” which will be moderated by Claire Mitchell of VaynerMedia. After spending some time getting to know my fellow panelists, I can safely say that this is going to be an awesome discussion, as each panelist brings a different type of expertise to the table.

Hearables On the Go Panel

Andrew Bellavia of Knowles Corp. knows more about the hardware side of hearables than anyone I know. Knowles manufactures the microphones and DSP chips that various Alexa devices are outfitted with. Along with his expertise in all things hearables hardware, Andy wears top-of-the-line Bluetooth hearing aids (Phonak Marvels) through which he routinely communicates with Google Assistant. He’s at the forefront of experimenting with and experiencing the future of #VoiceFirst + hearables.

Rachel Batish of Audioburst will provide a unique perspective on how Audioburst is enabling a future where we can catalog, search, and curate audio clips from various radio shows and podcasts. To put what Audioburst is striving to be in simple terms, it’s essentially Google, but for audio. Imagine a future where you’re driving in the car and you ask Audioburst to “read you updates on Netflix,” and then get a curated feed of short podcast and radio clips where the broadcasters are specifically speaking about Netflix. That’s the type of perspective that Rachel will be able to provide.

Eric Seay of AudioUX co-founded a creative audio agency that aims to shape the world of audio and sonic branding. As we move into a world that is less reliant on screens and visual overlays, Eric believes that many of the visual cues that we’ve become familiar with will have audio counterparts. Logos and sonic logos. Social media likes and nano audio “likes.” Eric will surely offer interesting ways that we need to start thinking about the burgeoning world that is audio + voice-only.

Finally, Andreea Danielescu of Accenture Labs and the startup Antilipsi works across the full software stack as a founder, engineer, researcher, and architect. She’s done extensive work around experience design and gesture interaction with all different types of emerging technologies, including voice assistants. She’ll bring to the table real-world experience with early implementations of hearable technologies that incorporate features like voice assistant access, sharing what limitations exist and how we can work to overcome them.

It’s sure to be an awesome discussion next Thursday. If you’re attending the Voice Summit, I hope that you’re able to join us, and if not, I’ll do my best to recap some of the key takeaways from our panel on all things hearables + voice.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Conferences, hearables, VoiceFirst

The Alexa Conference 2019: From Phase One to Phase Two


A Meeting of the Minds

Last week, I made my annual trek to Chattanooga, Tennessee to gather with a wide variety of voice technology enthusiasts at the Alexa Conference. Along with the seismic growth of smart speaker and voice assistant adoption, attendance grew quite dramatically too, as we went from roughly 200 people last year to more than 600 people this year. We outgrew last year’s venue, the very endearing Chattanooga Public Library, and moved to the city’s Marriott convention center. The conference’s growth was accompanied by an exhibit hall and sponsorships from entities as large as Amazon itself. We even had a startup competition between five startups, which my guest, Larry Guterman, won with his amazing Sonic Cloud technology.

In other words, this year felt indicative that the Alexa Conference took a huge step forward. Cheers to Bradley Metrock and his team for literally building this conference from scratch into what it has become today and for bringing the community together. That’s what makes this conference so cool; it has a very communal feel to it. My favorite part is just getting to know all the different attendees and understand what everyone is working on.

This Year’s Theme

Phase One

Bret Kinsella, the editor of the de facto news source for all things Voice, Voicebot.ai, presented the idea that we’ve moved into phase two of the technology. Phase one of Voice was all about introducing the technology to the masses and then increasing adoption and overall access. You could argue that this phase started in 2011 when Siri was introduced, but the bulk of the progress of phase one came post-2014, when Amazon rolled out the first Echo and introduced Alexa.

Smart Speaker Adoption Rate by Activate Research

Since then, we’ve seen Google enter the arena in a very considerable way, culminating in the recent announcement that Google Assistant would be enabled on one billion devices. We also saw smart speaker sales soar to ultimately represent the fastest adoption of any consumer technology product ever. If the name of the game for phase one was introducing the technology and growing the user base, then I’d say mission accomplished. On to the next phase of Voice.

Phase Two

According to Bret, phase two is about a wider variety of access (new devices), new segments that smart assistants are moving into, and increasing the frequency with which people use the technology. This next phase will revolve around habituation and specialization.

From Bret Kinsella’s Talk at the Alexa Conference 2019

In a lot of different ways, the car is the embodiment of phase two. The car already represents the second most highly accessed type of device, behind only the smartphone, yet offers a massive pool of untapped access points through integrations and newer-model cars with smart assistants built into the console. It’s a perfect environment for a voice interface, as we need to be hands- and eyes-free while driving. Finally, from a habituation standpoint, the car, much like the smart speaker, will serve as “training wheels” for people to get used to the technology as they build the habit.

A number of panelists in the breakout sessions, along with general attendees, helped open my eyes to some of the unique ways that education, healthcare, business, and hospitality (among other areas) are all going to yield interesting integrations and contributions during this second phase. All of these segments offer new areas for specialization and opportunities for people to increasingly build the habit and get comfortable using smart assistants.

The Communal Phase Two

Metaphorically speaking, this year’s show felt like its own transition from phase one to phase two. As I already mentioned, the conference itself grew up, but so did all of the companies and concepts that were first emerging last year. Last year, we saw the first Alexa-driven interactive content companies like Select a Story and Tellables starting to surface, which helped shine a light on what the future of storytelling might look like in this new medium.

This year we had the founder of Atari, Nolan Bushnell, delivering a keynote talk on the projects he and his colleague, Zai Ortiz, are building at their company, X2 Games. One of the main projects, St. Noire, is an interactive murder-mystery board game that fuses Netflix-quality video content for your character (through an app on a TV) with an interactive element in which the players make decisions (issued through a smart speaker). The players’ decisions ultimately shape the trajectory of the game and allow the players to progress far enough to solve the mystery. It was a phenomenal demo of a product that certainly made me think, “wow, this interactive storytelling concept sure is maturing fast.”

Witlingo now has a serious product on its hands with Castlingo (micro Alexa content generated by the user). I feel like podcasts represent long-form content akin to blogging, while there’s a gap to fill for micro-form audio content creation more akin to tweeting. I’m not sure if this gap will ultimately be filled by something like Castlingo or Flash Briefings, but it would be awesome if a company like Witlingo emerged as the Twitter for audio.

Companies like SoundHound continue to give me hope that white-label assistant offerings will thrive in the future, especially as brands will want to extend their identities to their assistants rather than settle for something bland and generic. Katie McMahon’s demos of Hound never cease to amaze me either, and its newest feature, Query Glue, displays the most advanced conversational AI that I’ve seen to date.

Magic + Co’s presence at the show indicated that digital agencies are beginning to take Voice very seriously and will be at the forefront of the creative ways brands and retailers integrate and use smart assistants and VUI. We also had folks from VaynerMedia at this year’s conference, which was just another example of how some of the most cutting-edge agencies are thinking deeply about Voice.

Finally, there seemed to be a transition to a higher phase on an individual level too. Brian Roemmele, the man who coined the term #VoiceFirst, continues to peel back the curtain on what he believes the long-term future of Voice looks like (check out his podcast interview with Bret Kinsella). Teri Fisher seemed to be on just about every panel and was teaching everyone how to produce different types of audio content. For example, he provided a workshop on how to create a Flash Briefing, which makes me believe we’ll see a lot of people from the show begin making their own audio content (myself included!).

“The role of hearables” slide from my presentation at the Alexa Conference 2019

From a personal standpoint, I guess I’ve entered into my own phase two as well. Last year I attended the conference on a hunch that this technology would eventually impact my company and the industry I work in, and after realizing my hunch was right, I decided that I needed to start contributing in the area of expertise that I know best: hearables.

This year, I was really fortunate to have the opportunity to present the research I’ve been compiling and writing about on why I believe hearables play a critical role in a VoiceFirst future. I went from sitting in a chair last year, watching and admiring people like Brian, Bret, and Katie McMahon share their expertise, to being able to share some of my own knowledge with those same people this year, which was one of the coolest moments of my professional career. (Stay tuned, as I will be breaking my 45-minute talk into a series of blog posts where I break down each aspect of my presentation.)

For those of you reading this piece who haven’t been able to make it to this show but feel like the conference might be valuable and aren’t sure how, my advice is to just go. You’ll be amazed at how inclusive and communal the vibe is, and I bet you’ll walk away thinking differently about your and your business’s role as we enter the 2020s. If you do decide to go, be sure to reach out, as I will certainly be in attendance next year and in the years beyond.

-Thanks for Reading-

Dave

 

 

audiology, Biometrics, hearables, Smart assistants, VoiceFirst

Capping Off Year One with my AudiologyOnline Webinar


A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline for a number of audiologists around the country. The same week the year prior, I launched this blog. So, for me, the webinar was basically a culmination of the past year’s blog posts, tweets, and videos, distilled into a one-hour presentation. Having to consolidate so much of what I’ve learned into a single hour helped me choose the things I thought were most pertinent to the hearing healthcare professional.

If you’re interested, feel free to view the webinar using this link (you’ll need to register, though you can register for free and there’s no type of commitment): https://www.audiologyonline.com/audiology-ceus/course/connectivity-and-future-hearing-aid-31891  

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been seeing the way things have unfolded: the technological progress made in both the hardware and software of in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from previous eras where disruption felt as if it were on the doorstep yet never came to pass, and that’s largely because the underlying technology has matured so much of late.

I would argue that the single biggest reason why this time is different is the smartphone supply chain, or as I stated in my talk – The Peace Dividends of the Smartphone War (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that make up the smartphone (which also make up pretty much all consumer technology) have gotten incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc.). More companies than ever, from various segments, are competing to set themselves apart in any way they can in their respective industries, and in doing so are providing breakthroughs for everyone else to benefit from. So, hearing aids and hearables are benefiting from breakthroughs occurring in smart speakers and drones, because much of the innovation can be reaped and applied across the whole consumer technology space rather than being limited to one particular industry.

Learning from Apple

Another point that I really tried to hammer home is that our “connected” in-the-ear devices are now “exotropic,” meaning they appreciate in value over time. Connectivity allows the device to enhance itself, through software/firmware updates and app integrations, even after the point of sale, much like a smartphone. So, just as our hearing aids and hearables reap innovation occurring elsewhere in consumer technology, connectivity unlocks something else entirely: network effects.

If you study Apple and examine why the iPhone was so successful, you’ll see that its success was largely predicated on the iOS App Store, which served as a marketplace connecting developers with users. The more customers (users) there were, the more incentive there was for merchants (developers) to come sell their goods in the marketplace (the App Store). The marketplace therefore grew and grew as the two sides constantly incentivized one another, which compounded the growth.

That phenomenon is called two-sided network effects, and we’re beginning to see the same type of network effects take hold with our body-worn computers. That’s why a decent portion of my talk was spent on the Apple Watch. Wearables, hearables, or smart hearing aids – they’re all effectively the same thing: a body-worn computer. Much of the innovation and many of the use cases beginning to surface from the Apple Watch can be applied to our ear-worn computers too. Therefore, Apple Watch users and hearable users comprise the same user base to an extent (they’re all wearing body computers), which means that developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices too. The utility and value of our smart hearing aids and hearables will just continue to rise, long after the patient has purchased their device, making for a much stronger value proposition.

Smart Assistant Usage will be Big

One of the most exciting use cases that I think is on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (it’s already happening in hearables). I’ve attended multiple conferences dedicated to this technology and have posted a number of blogs on smart assistants and the voice user interface, so I don’t want to rehash every reason why I think this is going to be monumental for the industry’s product offering, but the main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers – older adults. The reason for this fast adoption, I believe, is that there are few limitations to speaking and issuing commands and controlling your technology with your voice. This is why Voice is so unique; it’s conducive to the full age spectrum, from kids to older adults, while something like the mobile interface isn’t particularly conducive to older adults who might have poor eyesight, dexterity, or mobility.

This user interface and the smart assistants that mediate the commands are incredibly primitive today relative to what they’ll mature to become. Jeff Bezos famously quipped in 2016 that, in regard to this technology, “It’s the first inning. It might even be the first guy up at bat.” Even in the technology’s infancy, the adoption of smart speakers among the older cohort is striking and leads one to believe that they’re beginning to depend on smart-assistant-mediated voice commands rather than tapping, touching, and swiping on their phones. Once this becomes integrated into hearing aids, patients will be able to perform many of the same functions that you or I do with our phones, simply by asking their smart assistant. One’s hearing aid serving the role (to an extent) of their smartphone further strengthens the value proposition of the device.

Biometric Sensors

If there’s one set of use cases that I think can rival the overall utility of Voice, it would be the implementation of biometric sensors into ear-worn devices. To be perfectly honest, I am startled by how quickly this is already beginning to happen, with Starkey making the first move by introducing a gyroscope and accelerometer into its Livio AI hearing aid, allowing for motion tracking. These sensors support the use cases of fall detection and fitness tracking. If “big data” was the buzz of the past decade, then “small data,” or personal data, will be the buzz of the next 10 years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a “nice to have” to a “need to have” for those who need to be wearing an all-day data recorder. That’s exactly the role the hearing aid is shaping up to serve – a data collector.

The type of data being recorded is really only limited by the types of sensors embedded in the device, and we’ll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid in 2019 for OEMs to implement into their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of folks have credited their Apple Watch with saving their lives, as they were alerted to abnormal spikes in their resting heart rates that turned out to be life-threatening abnormalities in their cardiac activity. So, we’re talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.

As these types of sensors continue to shrink in size and become more capable, we’re likely to see more types of data that can be harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor capable of gathering glucose levels in a non-invasive way, which would be a game-changer for the 100 million people with diabetes or pre-diabetes. We’re truly at the tip of the iceberg with this aspect of the devices, and it would mean that the hearing healthcare professional becomes a necessary component (fitting the “data collector”) for the cardiologist or physician who needs their patient’s health data monitored.

More to Come

This is just some of what’s happened across the past year. One year! I could write another 1,500 words on interesting developments that have occurred this year, but these are my favorites. There is seemingly so much more to come with this technology, and as these devices continue their transformation into something more akin to the iPhone, there’s no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, “If you build it, they will come.” Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can’t wait to see what’s to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

-Thanks for Reading-

Dave

audiology, Biometrics, hearables, Live-Language Translation, News Updates, Smart assistants, VoiceFirst

Hearing Aid Use Cases are Beginning to Grow

 

The Next Frontier

In my first post back in 2017, I wrote that the inspiration for creating this blog was to provide an ongoing account of what happens after we connect our ears to the internet (via our smartphones). What new applications and functionality might emerge when an audio device serves as an extension of one’s smartphone? What new hardware possibilities can be implemented in light of the fact that the audio device is now “connected?” This week, Starkey moved the ball forward, changing the narrative and design around what a hearing aid can be with the debut of its new Livio AI hearing aid.

Livio AI embodies the transition to a multi-purpose device, akin to our hearables, with new hardware in the form of embedded sensors not seen in hearing aids to date, and companion apps that allow for more user control and increased functionality. Much like ReSound fired the first shot in the race to create connected hearing aids with the first “Made for iPhone” hearing aid, Starkey has fired the first shot in what I believe will be the next frontier: the race to create the most compelling multi-purpose hearing aid.

With the OTC changes fast approaching, I’m of the mind that one way hearing healthcare professionals will be able to differentiate in this new environment is by offering exceptional service and guidance around unlocking all the value possible from these multi-purpose hearing aids. This spans the whole patient experience, from the way the device is programmed and fit to educating the patient on how to use the new features. Let’s take a look at what one of the first forays into this arena looks like by breaking down the Livio AI hearing aid.

Livio AI’s Thrive App

Thrive is a companion app that can be downloaded to use with Livio AI, and I think it’s interesting for a number of reasons. For starters, it’s Starkey’s attempt to combat the potential link between cognitive decline and hearing loss in our aging population. It does this by “gamifying” two sets of metrics that roll up into a 200-point “Thrive” score that’s meant to be achieved regularly.


The first set of metrics is geared toward measuring your body activity, built around data collected through sensors that gauge your daily movement. By embedding a gyroscope and accelerometer into the hearing aid, Livio AI is able to track your movement and monitor some of the same types of metrics as an Apple Watch or Fitbit. Each day, your goal is to reach 100 “Body” points by moving, exercising, and standing up throughout the day.

The next bucket of metrics is entirely unique to this hearing aid and is based on the way you wear your hearing aids. This “Brain” category measures the daily duration the user wears the hearing aids, the amount of time spent engaging with other people (which is important for maintaining a healthy mind), and the various acoustic environments the user experiences each day.


So, through gamification, the hearing aid wearer is encouraged to live a healthy lifestyle and use their hearing aids throughout the day in various acoustic settings, engaging in stimulating conversation. To me, this will serve as a really good tool for the audiologist to ensure that the patient is using the hearing aids to their fullest. Additionally, for those caring for an elderly loved one, this can be a very effective way to track how active their lifestyle is and whether they’re actually wearing their hearing aids. That’s the real sweet spot here, as you can quickly pull up their Thrive score history to get a sense of what your aging loved one is doing.
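For a rough sense of how the two buckets roll up, here’s a sketch of a Thrive-style score. The 200-point total and the 100-point Body bucket come from the description above; the Brain bucket’s cap and all of the individual weights are hypothetical placeholders, not Starkey’s actual formula:

```swift
import Foundation

struct BodyActivity {
    var movePoints: Double      // daily movement
    var exercisePoints: Double  // dedicated exercise
    var standPoints: Double     // standing up throughout the day
}

struct BrainActivity {
    var hoursWorn: Double          // daily hearing aid wear time
    var hoursEngaged: Double       // time spent engaging with other people
    var acousticEnvironments: Int  // distinct sound environments experienced
}

func thriveStyleScore(body: BodyActivity, brain: BrainActivity) -> Double {
    // Body bucket: capped at 100 points, per the app's description.
    let bodyScore = min(body.movePoints + body.exercisePoints + body.standPoints, 100)

    // Brain bucket: hypothetical weighting of wear time, engagement, and
    // environment variety, capped so the two buckets total 200.
    let brainScore = min(brain.hoursWorn * 5 +
                         brain.hoursEngaged * 8 +
                         Double(brain.acousticEnvironments) * 5, 100)

    return bodyScore + brainScore  // out of 200
}

let today = thriveStyleScore(
    body: BodyActivity(movePoints: 40, exercisePoints: 30, standPoints: 20),
    brain: BrainActivity(hoursWorn: 10, hoursEngaged: 4, acousticEnvironments: 3)
)
print("Thrive-style score: \(today) out of 200")  // 187.0
```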

HealthKit SDK Integration

 

Another very subtle thing about the Thrive app that has some serious future applications is the fact that Starkey has integrated Thrive’s data with Apple’s HealthKit SDK. This is one of the only third-party device integrations with HealthKit that I know of at this point. The image above is a side-by-side comparison of what Apple’s Health app looks like with and without Apple Watch integration. As you can see, the image on the right displays the biometric data that was recorded by my Watch and sent to my Health app. Livio AI’s data will be displayed in the same fashion.

So what? Well, as I wrote about previously, the underlying reason this is a big deal is that Apple has designed its Health app with future applications in mind. In essence, Apple appears to be aiming to make the data easily transferable, in an encrypted manner (HIPAA-friendly), across Apple-certified devices. So, it’s completely conceivable that you’d be able to take the biometric data being ported into your Health app (i.e., Livio AI data) and share it with a medical professional.

For an audiologist, this would mean that you’d be able to remotely view the data, which might help to understand why a patient is having a poor experience with their hearing aids (they’re not even wearing them). Down the line, if hearing aids like Livio were to have more sophisticated sensors embedded, such as a PPG sensor to monitor blood pressure, or a sensor that can monitor your body temperature (as the tympanic membrane radiates body heat), you’d be able to transfer a whole host of biometric data to your physician to help them assess what might be wrong with you if you’re feeling ill. As a hearing healthcare professional, there’s a possibility that in the near future, you will be dispensing a device that is not only invaluable to your patient but to their physician as well.
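For the curious, this is roughly what writing device data into HealthKit looks like from a companion app’s side. It’s a generic sketch (step count standing in for whatever metrics Thrive actually reports), assumes the app has the HealthKit entitlement, and isn’t Starkey’s code:

```swift
import HealthKit

let healthStore = HKHealthStore()

// Step count is just a stand-in for whichever activity metrics the companion app reports.
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

func saveSteps(_ steps: Double, from start: Date, to end: Date) {
    healthStore.requestAuthorization(toShare: [stepType], read: [stepType]) { authorized, _ in
        guard authorized else { return }

        let quantity = HKQuantity(unit: .count(), doubleValue: steps)
        let sample = HKQuantitySample(type: stepType, quantity: quantity, start: start, end: end)

        // Once saved, the sample shows up in the Health app alongside data from
        // the Apple Watch or any other contributing device.
        healthStore.save(sample) { success, error in
            if success {
                print("Saved \(steps) steps to HealthKit")
            } else {
                print("Save failed: \(String(describing: error))")
            }
        }
    }
}

saveSteps(4200, from: Date().addingTimeInterval(-3600), to: Date())
```

Because every contributing device writes into the same store, a clinician-facing app that the patient authorizes could read those samples back out, which is the remote-viewing scenario described above.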

Increased Intelligence

Beyond the fitness and brain activity tracking, there are some other cool use cases that come packed with this hearing aid. There’s a real-time language translation feature that covers 27 languages, which runs through the Thrive app and is powered by the cloud (so you’ll need internet access to use it). This seems to draw from the Starkey-Bragi partnership formed a few years ago, which was a good indication that Starkey was looking to venture down the path of making a feature-rich hearing aid with multiple uses.

Another aspect of the smartphone that Livio AI leverages is GPS. This allows the user to use their smartphone to locate their hearing aids if they go missing. Additionally, the user can set “memories” to adjust their hearing aid settings based on the acoustic environment they’re in. If there’s a local coffee shop or venue the user frequents where they’ll want their hearing aids boosted or turned down in some fashion, “memories” will automatically adjust the settings based on the pre-determined GPS location.
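As a thought experiment, the “memories” behavior maps pretty naturally onto iOS geofencing. Here’s a minimal sketch of the idea using Core Location region monitoring (my own illustration, not Starkey’s implementation; pushing the preset to the hearing aids over Bluetooth is left as a comment):

```swift
import CoreLocation

// A minimal sketch of GPS-based "memories": register a geofence for a
// frequently visited spot and apply a stored hearing aid preset on arrival.
final class MemoryManager: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    override init() {
        super.init()
        locationManager.delegate = self
        locationManager.requestAlwaysAuthorization()
    }

    // Save a "memory": the region's identifier doubles as the preset name.
    func addMemory(presetName: String, at center: CLLocationCoordinate2D, radius: Double = 100) {
        let region = CLCircularRegion(center: center, radius: radius, identifier: presetName)
        region.notifyOnEntry = true
        locationManager.startMonitoring(for: region)
    }

    // Core Location calls this when the phone enters a saved region.
    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("Applying hearing aid preset: \(region.identifier)")
        // A real app would push the preset to the hearing aids over Bluetooth here.
    }
}

// Example: boost settings automatically at a favorite coffee shop.
let manager = MemoryManager()
manager.addMemory(presetName: "Coffee Shop Boost",
                  at: CLLocationCoordinate2D(latitude: 38.6270, longitude: -90.1994))
```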

If you “pop the hood” of the device and take a look inside, you’ll see that the components comprising the hearing aid have been significantly upgraded too. Livio AI boasts triple the computing power and double the local memory capacity of the previous line of Starkey hearing aids. This should come as no surprise, as the most impressive innovation happening with ear-worn devices is what’s happening with the components inside them, due to the economies of scale and massive proliferation of smartphones. This increase in computing power and memory capacity is yet another example of the “peace dividends of the smartphone war.” That computing power allows for a level of machine learning (similar to Widex’s Evoke) to adjust to different sound environments based on all the acoustic data that Starkey’s cloud is processing.

The Race is On

As I mentioned at the beginning of this post, Starkey has initiated a new phase of hearing aid technology, and my hope is that it spurs the other four manufacturers to follow suit, in the same way that everyone followed ReSound’s lead in bringing “connected” hearing aids to market. Starkey CTO Achin Bhowmik believes that sensors and AI will do to the hearing aid what Apple did to the phone, and I don’t disagree.

As I pointed out in a previous post, the last ten years of computing were centered on porting the web into the apps on our smartphones. The next wave of computing appears to be a process of offloading and unbundling the “jobs” that our smartphone apps represent to a combination of wearables and voice computing. I believe the ear will play a central role in this next wave, largely because it is a perfect position for an ear-worn computer equipped with biometric sensors that doubles as a home for the smart assistant(s) that will mediate our voice commands. This is the dawn of a brand new day, and I can’t help but feel very optimistic about the future of this industry and the hearing healthcare professionals who embrace these new offerings. In the end, however, it’s the patient who will benefit the most, and that’s a good thing when so many people could and should be treating their hearing loss.

-Thanks for Reading-

Dave

Conferences, hearables, Smart assistants, VoiceFirst

The State of Smart Assistants + Healthcare

 

Last week, I was fortunate to travel to Boston to attend the Voice of Healthcare Summit at Harvard Medical School. My motivation for attending this conference was to better understand how smart assistants are currently being implemented across the various segments of our healthcare system and to learn what’s on the horizon in the coming years. If you’ve been following my blog or Twitter feed, then you’ll know that I am envisioning a near-term future where smart assistants become integrated into our in-the-ear devices (both hearables and Bluetooth hearing aids). Once that integration becomes commonplace, I imagine we’ll see a number of really interesting and unique health-specific use cases that leverage the combination of the smartphone, the sensors embedded in the in-the-ear device, and smart assistants.

 

Bradley Metrock, Matt Cybulsky and the rest of the summit team that put on this event truly knocked it out of the park, as the speaker set and the attendees included a wide array of different backgrounds and perspectives, which resulted in some very interesting talks and discussions. Based on what I gathered from the summit, smart assistants will yield different types of value to three groups: patients, remote caregivers, and clinicians and their staff.

Patients

At this point in time, none of our mainstream smart assistants are HIPAA-compliant, which limits the types of skills and actions that can be developed specifically for healthcare. Companies like Orbita are working around this limitation by essentially taking the same building blocks required to create voice skills and building secure voice skills from scratch on their own platform. Developers who want to create skills/actions for Alexa or Google that use HIPAA-protected data, however, will have to wait until the smart assistant platforms become HIPAA-compliant, which could happen this year or next.

It’s easy to imagine the upside that will come with HIPAA-compliant assistants, as that would allow the smart assistant to retrieve one’s medical data. If I had a chronic condition that required me to take five separate medications, Alexa could audibly remind me to take each of the five, by name, and respond to any questions I might have regarding any of those medications. If I tell Alexa about a side effect I’m having, Alexa might even be able to identify which of the five medications is possibly causing it and loop in my physician for her input. As Brian Roemmele has pointed out repeatedly, the future of our smart assistants is rooted in each of our own personalized, contextual information, and until these assistants are HIPAA-compliant, the assistant has to operate at a general level rather than a personalized one.

That’s not to say there isn’t value in generalized skills, or in skills built on data that falls outside the HIPAA umbrella and can therefore still be personalized. Devin Nadar from Boston Children’s Hospital walked us through their KidsMD skill, which allows parents to ask general questions about their children’s illness, recovery, symptoms, etc., and have the peace of mind that the answers they’re receiving are sourced and vetted by Boston Children’s Hospital; it’s not just random responses retrieved from the internet. Cigna’s Rowena Track showed how their skill allows you to check things such as your HSA balance or urgent care wait times.

Caregivers and “Care Assistants”

By 2029, 18% of Americans will be over the age of 65, and average US life expectancy is already climbing above 80. That number will likely continue to climb, which brings us to the question, “how are we going to take care of our aging population?” As Laurie Orlov, industry analyst and writer of the popular Aging in Place blog, so eloquently stated during her talk, “The beneficiaries of smart assistants will be disabled and elderly people…and everyone else.” So, based on that sentiment and the fact that the demand to support our aging population is rising, enter what John Loughnane of CCA described as “care assistants.”

From Laurie Orlov’s “Technology for Older Adults: 2018 Voice First — What’s Now and Next” Presentation at the VOH Summit 2018

As Laurie’s slide above illustrates, smart assistants, or “care assistants” in this scenario, help to triangulate the relationship between the doctor, the patient, and those who are taking care of the patient, whether that be caregivers or family. These “care assistants” can effectively be programmed with helpful responses around medication cadence, what the patient can or can’t do and for how long they’re restricted, what they can eat, and when to change bandages and how to do so. In essence, the “care assistant” serves as an extension of the caregiver and the trust they provide, allowing for more self-sufficiency and, therefore, less of a burden on the caregiver.

As I have written about before, the beauty of smart assistants is that even today, in their infancy and primitive state, they can empower disabled and elderly people in ways that no previous interface has. This matters from a fiscal standpoint too, as Nate Treloar, President of Orbita, pointed out that social isolation costs Medicare $6.7 billion per year. Smart assistants act as a tether to our collective social fabric for these groups, and multiple doctors at the summit cited disabled or elderly patients who described their experience of using a smart assistant as “life changing.” What might seem trivial to you or me, like being able to send a message with your voice, might be truly groundbreaking to someone who has never had that type of control.

The Clinician and the System

The last group that stands to gain from this integration is doctors and those working in the healthcare system. According to the Annals of Internal Medicine, for every hour that a physician spends with a patient, they must spend two hours on related administrative work. That’s terribly inefficient and something that I’m sure drives physicians insane. The drudgery of clerical work seems ripe for smart assistants to streamline: dictating notes, quickly retrieving past medical information, sharing that information across systems, and so on. Less time doing clerical work, more time helping people.

Boston Children’s Hospital uses an internal system called ALICE, and by layering voice onto it, admins, nurses, and other staff can very quickly retrieve vital information such as:

  • “Who is the respiratory therapist for bed 5?”
  • “Which beds are free on the unit?”
  • “What’s the phone number of the MSICU Pharmacist?”
  • “Who is the Neuro-surgery attending?”

And boom, you quickly get the answer to any of these. That’s removing friction in a setting where time might really be of the essence. As Dr. Teri Fisher, host of the VoiceFirst Health podcast, pointed out during his presentation, our smart assistants can be used to reduce the strain on the overall system by playing the role of triage nurse, admin assistant, healthcare guide and so on.

What Lies Ahead

It’s always important with smart assistants and Voice to simultaneously temper current expectations while remaining optimistic about the future. Jeff Bezos joked in 2016 that “not only are we in the first inning of this technology, we might even be at the first batter.” It’s early, but as Bret Kinsella of Voicebot displayed during his talk, smart speakers represent the fastest adoption of any consumer technology product ever:

Fastest Adoption
From Bret Kinsella’s “Voice Assistant Market Adoption” presentation at the VOH Summit 2018

The same goes for how smart assistants are being integrated into our healthcare system. Much like Bezos’ joke implies, very little of this is even HIPAA-compliant yet. That being said, you still have companies and hospitals the size of Cigna and Boston Children’s Hospital putting forth resources to start building out their offerings for an impending VoiceFirst world. We might not be able to offer true, personalized engagement with the assistant yet, but there’s still a lot of value that can be derived at the general level.

As this space matures, so too will the degree to which we can unlock efficiencies within our healthcare system across the board. Patients of all ages and medical conditions will be more empowered to receive information, prompts, and reminders to better manage their conditions. This means that those taking care of the patients are less burdened too, as they can offload the informational aspect of their caregiving to the “care assistant.” This then frees up the system as a whole, as there are fewer general inquiries (and, down the line, personal inquiries), meaning fewer patients need to come in because they can be served at home. Finally, clinicians can be more efficient too, as they can offload clerical work to the assistant, better retrieve data and information on a patient-to-patient basis, and communicate more efficiently with their patients, even remotely.

As smart assistants become more integral to our healthcare system, my belief is that on-body access to the assistant will be desired. Patients, caregivers, clinicians, and medical staff all have their own reasons for wanting their assistant right there with them at all times. What better place than a discreet, in-the-ear device that allows for one-to-one communication with the assistant?

-Thanks for Reading-

Dave