Biometrics, Daily Updates, Future Ear Radio, hearables

“Hearables – The New Wearables” – Nick Hunn’s Famous White Paper 5 Years Later (Future Ear Daily Update 7-10-19)

7-10-19 - Hearables-the new wearables.jpg

Nick Hunn, the wireless technology analyst and CTO of WiFore Consulting, coined the term “hearables” in his now famous white paper, “Hearables – the new Wearables,” back in 2014. For today’s update, I thought it might be fun to look back at Nick’s initial piece to appreciate just how prescient he was in predicting how our ear-worn devices would mature over the coming years.

For starters, one of the most brilliant insights that Nick shared was around the new Bluetooth standards that were being adopted at the time and the implications for battery life:

“The Hearing Aid industry’s trade body – EHIMA, has just signed a Memorandum of Understanding with the Bluetooth SIG to develop a new generation of the Bluetooth standard which will reduce the power consumption for wireless streaming to the point where this becomes possible, adding audio capabilities to Bluetooth Smart. Whilst the primary purpose of the work is to let hearing aids receive audio streams from mobile phones, music players and TVs, it will have the capability to add low power audio to a new generation of ear buds and headsets.”

To put this into perspective, the first “made-for-iPhone” hearing aid, the Linx, had just been unveiled by Resound at the end of 2013. Nick published his paper in April of 2014, so close observers may have sensed that hearing aids were heading toward the iPhone’s 2.4 GHz Bluetooth protocol (every hearing aid manufacturer ended up adopting it). But without a background like Nick’s in the broader field of wireless technology, it would have been hard to know how this new Bluetooth standard (initially called Bluetooth Smart and later known as Bluetooth Low Energy) would allow for far more efficient power consumption.

Nick’s insight became more pronounced when Apple rolled out AirPods in 2016 with its flagship W1 chip, which used ultra-low-power Bluetooth to allow for 5 hours of audio streaming (2 hours of talk time). Flash-forward to today, and Apple has released its AirPods 2.0, which use the H1 chip and Bluetooth 5.0, allowing for even more efficient power consumption.

It needs to be constantly reiterated that hearables were deemed unrealistic up until midway through the 2010s because of how inefficient the power consumption was with previous Bluetooth standards. Batteries are one component inside miniature devices that historically has not seen much innovation, either in size reduction or in energy density, so it was not obvious that the workaround to this major roadblock would come from the way power is drawn from the battery via new methods of Bluetooth signaling.
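
To make that last point concrete, here is a minimal back-of-the-envelope sketch in Python (using made-up, illustrative numbers rather than real radio or battery specifications) of why a duty-cycled, low-energy Bluetooth radio stretches a tiny battery so much further than one that stays on continuously:

    # Back-of-the-envelope battery-life estimate for a duty-cycled radio.
    # All numbers below are illustrative assumptions, not real hearing aid specs.

    BATTERY_CAPACITY_MAH = 25.0   # tiny battery cell (assumed)
    RADIO_ACTIVE_MA = 5.0         # current while the radio is awake/transmitting (assumed)
    RADIO_SLEEP_MA = 0.01         # current while the radio sleeps between events (assumed)

    def battery_life_hours(active_ms_per_interval: float, interval_ms: float) -> float:
        """Average the active and sleep currents over one connection interval."""
        duty_cycle = active_ms_per_interval / interval_ms
        avg_current_ma = RADIO_ACTIVE_MA * duty_cycle + RADIO_SLEEP_MA * (1 - duty_cycle)
        return BATTERY_CAPACITY_MAH / avg_current_ma

    # Radio awake 100% of the time vs. awake ~2 ms out of every 100 ms interval.
    print(f"Always-on radio:   {battery_life_hours(100, 100):.1f} hours")
    print(f"Duty-cycled radio: {battery_life_hours(2, 100):.1f} hours")

The takeaway is that the gains come almost entirely from how long the radio is allowed to sleep between events, which is exactly the lever the new signaling methods pull.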

The other aspect of hearables that Nick absolutely nailed was that ear-worn devices would eventually become laden with biometric sensors:

“Few people realise that the ear is a remarkably good place to measure many vital signs. Unlike the wrist, the ear doesn’t move around much while you’re taking measurements, which can make it more reliable for things like heart rate, blood pressure, temperature and pulse oximetry. It can even provide a useful site for ECG measurement.”

Today, US hearing aid manufacturer Starkey has incorporated one of Valencell’s heart-rate monitors into its Livio AI hearing aids. This integration, unveiled at CES this year, was made possible by the miniaturization of these heart-rate sensors to the point where they can fit onto a tiny receiver-in-the-canal (RIC) hearing aid. To Nick’s point, there are significant advantages to recording biometric data in the ear rather than at the wrist, so it should come as no surprise when future versions of AirPods and their competitors come equipped with various sensors over time.

Nick continues to write and share his insights, so if you’re not already following his work, it might be a good time to start reading up on Nick’s thinking about how our little ear-computers will continue to evolve.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Future Ear Radio, hearables, Podcasts

“The Future of Hearables” My Podcast Episode with Nick Myers (Future Ear Daily Update 7-8-19)

7-8-19 - The Future of Hearables

I recently joined Nick Myers, CEO of Redfox AI, on his podcast, “The Artificial Podcast,” to discuss the future of hearable technology. You can check out our discussion in the link below or on any of the following podcasting platforms (Apple Podcasts, Breaker, Google Podcasts, Spotify, Overcast).

Here are a few highlights of what we talked about:

  • How I got immersed in the VoiceFirst world, which stemmed from the emergence of Bluetooth hearing aids and the new use cases they can support
  • The bullish case for Smart Assistant-enabled hearing aids
  • The rise of AirPods – why they succeeded when previous, more ambitious hearables failed initially
    • Apple’s near-field VoiceFirst play and why AirPods are so critical to the company’s future
  • The new norm of wearing ear-worn devices for extended periods of time
  • The two phases of FuturEar.co and my motivation behind launching Future Ear Radio and the daily blog post
  • Conversational smart assistants and opening the app economy up to our assistants
    • The next phase of VoiceFirst and where we really start to see Alexa, Google Assistant, Bixby become “assistants”
  • Why the blend of assistants, the Voice User Interface and the IoT is going to be so critical to our aging population in the US
  • The increasing ease of producing audio and why flash briefings and podcasting will continue to explode in popularity

I hope you enjoy the conversation! Feel free to engage with Nick and me on Twitter, and if there are particular portions of the conversation that resonate with you, be sure to let us know!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Future Ear Radio, hearables

Voice Messaging and Audio Engagement (Future Ear Daily Update 5-21-19)

Audio Social Engagement

Anchor Voice Messaging

One of the persistent thoughts running through my head as it pertains to the emerging voice market and all the affiliated hardware is that the building blocks are coming together for a new type of social media experience, one that is predominantly built around audio. Today’s audio content, however, is pretty static. There’s no real native aspect of engagement, which we know is a fundamental pillar of today’s social media. Twitter, for better or worse, would not be Twitter without replies, retweets and likes. The same goes for Instagram. So, to me, in order for this idea of an audio-based social medium to take hold, the content needs to become dynamic and engage-able by the audience.

Anchor is at the forefront of changing the way we interact with audio. Recently acquired by Spotify, Anchor is a podcasting platform designed from the ground up to help podcasters create and disseminate their content. Now, it would appear that engagement is becoming a focus for the company, as it makes the (rather hidden) voice messaging feature much more broadly accessible.

Listeners can send an audio message from any device or browser, which can then be incorporated by the podcaster into future episodes. I believe this is only the start in terms of what can be done with engagement. Here are some examples of how engagement could be built out for an audio-based social medium (see the sketch after this list for what such a message might look like as data):

  • Leave a review via your voice – Giving fellow listeners a chance to hear a review would be pretty cool. Maybe it then gets transcribed into a text review as well.
  • “Let’s take some callers” – blending some of the best aspects of radio with podcasting. Each week, designate a portion of your show to “calls” (voice messages) from your listeners. Maybe you pose a question to the listeners, ask them to send in their responses, and read off the answers to the previous episode’s question each week. I think this alone might give birth to a whole new set of podcasting formats.
    • The next-level step here would be to say, “Alexa/Google, let’s send in a voice message,” record it, send it, and then resume playing the episode.
  • Audio snippets – I believe this exists to an extent today for podcasters to share snippets of their episodes, but I would hope that feature gets opened up to the listeners too. Being able to grab your favorite clip of a podcast and quickly share it out on the various legacy social channels would be awesome. It would also be cool for the podcaster to know which portions are resonating most with people.
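
To make the voice-messaging idea a bit more tangible, here is a minimal sketch in Python of how a listener message might be modeled and attached to an episode. Everything here – the field names and the submit/approve workflow – is hypothetical and says nothing about how Anchor actually built the feature:

    # Hypothetical data model for listener voice messages on a podcast platform.
    # Field names and workflow are assumptions for illustration only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VoiceMessage:
        listener_id: str
        audio_url: str            # where the recorded clip is stored
        duration_seconds: float
        transcript: str = ""      # optional machine transcription for a text review

    @dataclass
    class Episode:
        title: str
        inbox: List[VoiceMessage] = field(default_factory=list)     # "callers" for next week
        approved: List[VoiceMessage] = field(default_factory=list)

        def submit(self, message: VoiceMessage) -> None:
            """A listener sends in a message from any device or browser."""
            self.inbox.append(message)

        def approve(self, message: VoiceMessage) -> None:
            """The podcaster picks which messages get played in a future episode."""
            self.inbox.remove(message)
            self.approved.append(message)

    episode = Episode(title="Future Ear Radio 5-21-19")
    episode.submit(VoiceMessage("listener_42", "https://example.com/clip.m4a", 18.5))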

These are just a few ideas and I’m super curious to know what others think about this concept of audio-engagement and other directions we can go with this. Tweet at me and let me know what type of audio-engagement features you’d like to see built out!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables

Future Ear Daily Update: 4-25-19

Nuheara Is Still Standing

I’ve recently written quite a bit about the changing landscape of hearables and how we’re seeing companies like Samsung and Amazon offering, or poised to start offering, in-the-ear devices, while former players like Bragi have exited the hardware business. In the midst of all this change, it should not be forgotten that one of the original hearables companies, Australian-based Nuheara, is alive and well, and I wanted to use today’s update to highlight its intended path to success.

I caught up with the company’s co-founder, David Cannington, for this week’s episode of Oaktree TV (the weekly video series I produce for my company) to talk about how Nuheara is navigating hearables waters that are seeing new predators enter and old foes go belly up. David told me that the company started out as a consumer electronics company, but has since moved into the hearing healthcare space, which proved to be a sweet spot for the company’s technology.

This is an interesting shift, as it moves Nuheara’s products further away from offerings like AirPods/Galaxy Buds/Pixel Buds/“Alexa Pods” and more toward med-tech, like hearing aids. While hearing aids cater to the full spectrum of hearing loss, a significant portion of people with mild to moderate hearing loss never purchase and wear them, typically because of cost or stigma. This mild-to-moderate portion of the hearing loss spectrum is what Nuheara aims to serve with its IQBuds Boost and the forthcoming IQBuds Max.

Nuheara’s IQBuds Boost use proprietary calibration software called “EarID,” which the company partnered with the National Acoustics Lab to create. The idea behind EarID is that when you set up your IQBuds Boost, you start by establishing your “EarID” through a hearing test in the devices’ companion app. Hearing loss is incredibly variable and specific to each person, hence the name EarID. Some people might hear low frequencies perfectly fine but have a hard time with high or mid-range frequencies; for others it might be the opposite. So, by completing the EarID test, you identify the way the device needs to amplify sound specifically for you.
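
For illustration, here is a rough sketch in Python of what “mapping a hearing test to personalized amplification” can look like in principle. The half-gain rule used below is an old textbook heuristic standing in for the real thing; it is not Nuheara’s actual EarID algorithm, and the audiogram values are invented:

    # Illustrative sketch of personalized amplification: map a hearing-test result
    # (threshold in dB HL per frequency band) to a per-band gain. This is NOT
    # Nuheara's actual EarID algorithm; the "half-gain" rule is a classic textbook
    # heuristic used here purely for illustration.

    audiogram_db_hl = {          # hypothetical listener: high frequencies are worse
        250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 50, 8000: 60,
    }

    def prescribe_gain(thresholds: dict[int, int]) -> dict[int, float]:
        """Apply roughly half of the measured loss as gain in each band."""
        return {freq: max(0.0, loss * 0.5) for freq, loss in thresholds.items()}

    ear_id_profile = prescribe_gain(audiogram_db_hl)
    for freq, gain in ear_id_profile.items():
        print(f"{freq:>5} Hz: +{gain:.1f} dB")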

EarID is at the heart of the ecosystem Nuheara is beginning to build out, starting with a proprietary TV streaming adapter that pairs with the IQBuds Boost so that you can listen to your TV through your EarID profile. The idea here is that Nuheara will roll out a slew of companion devices that pair with your Boost and stream audio that is then filtered through your Boost with your EarID. The sounds of life, tailored specifically to you.

As Nuheara moves toward the intersection of consumer electronics and med-tech, it will be interesting to see whether it’s successful in capturing the market in the middle. As we move toward a future where everyone tethers their ears to the internet for prolonged periods of time, it stands to reason that a wide variety of in-the-ear device options will emerge that focus on different features and functionality. Nuheara is targeting the folks who might need hearing aids, but don’t necessarily want “hearing aids,” and instead want a form factor and price point more aligned with something from the consumer electronics space.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Conferences, hearables, VoiceFirst

The Alexa Conference 2019: From Phase One to Phase Two

alexa conf 2019

A Meeting of the Minds

Last week, I made my annual trek to Chattanooga, Tennessee to gather with a wide variety of Voice technology enthusiasts at the Alexa Conference. Along with the seismic growth of smart speakers and voice assistant adoption, attendance grew quite dramatically too, as we went from roughly 200 people last year to more than 600 people this year. We outgrew last year’s venue, the very endearing Chattanooga Public Library, and moved to the city’s Marriott convention center. The conference’s growth was accompanied by an exhibit hall and sponsorships from entities as large as Amazon itself. We even had a startup competition between five startups, where my guest, Larry Guterman, won with his amazing Sonic Cloud technology.

In other words, this year felt indicative that the Alexa Conference took a huge step forward. Cheers to Bradley Metrock and his team for literally building this conference from scratch into what it has become today and for bringing the community together. That’s what makes this conference so cool; it has a very communal feel to it. My favorite part is just getting to know all the different attendees and understand what everyone is working on.

This Year’s Theme

Phase One

Bret Kinsella, the editor of the de-facto news source for all things Voice, Voicebot.ai, presented the idea that we’ve moved into phase two of the technology. Phase one of Voice was all about introducing the technology to the masses and then increasing adoption and overall access to the technology. You could argue that this phase started in 2011 when Siri was introduced, but the bulk of the progress of phase one was post-2014 when Amazon rolled out the first Echo and introduced Alexa.

consumer adoption chart
Smart Speaker Adoption Rate by Activate Research

Since then, we’ve seen Google enter the arena in a very considerable way, culminating in the recent announcement that Google Assistant would be enabled on one billion devices. We also saw smart speaker sales soar to ultimately represent the fastest adoption of any consumer technology product ever. If the name of the game for phase one was introducing the technology and growing the user base, then I’d say mission accomplished. On to the next phase of Voice.

Phase Two

According to Bret, phase two is about a wider variety of access (new devices), new segments that smart assistants are moving into, and increasing the frequency with which people use the technology. This next phase will revolve around habituation and specialization.

voice assistant share
From Bret Kinsella’s Talk at the Alexa Conference 2019

In a lot of ways, the car is the embodiment of phase two. The car already represents the second most highly accessed type of device, behind only the smartphone, but it offers a massive pool of untapped access points through integrations and newer model cars with smart assistants built into the console. It’s a perfect environment for a voice interface, as we need to be hands- and eyes-free while driving. Finally, from a habituation standpoint, the car, much like the smart speaker, will serve as “training wheels” for people to get used to the technology as they build the habit.

There were a number of panelists in the breakout sessions and general attendees that helped open my eyes to some of the unique ways that education, healthcare, business, and hospitality (among other areas) are all going to yield interesting integrations and contributions during this second phase. All of these segments offer new areas for specialization and opportunities for people to increasingly build the habit and get comfortable using smart assistants.

The Communal Phase Two

Metaphorically speaking, this year’s show felt like a transition from phase one to phase two too. As I already mentioned, the conference itself grew up, but so have all of the companies and concepts that were first emerging last year. Last year, we saw the first Alexa-driven, interactive content companies like Select a Story and Tellables starting to surface, which helped shine a light on what the future of story-telling might look like in this new medium.

This year we had the founder of Atari, Nolan Bushnell, delivering a keynote talk on the projects he and his colleague, Zai Ortiz, are building at their company, X2 Games. One of the main projects, St. Noire, is an interactive murder-mystery board game that fuses Netflix-quality video content for the characters (through an app on a TV) with an interactive element that requires the players to make decisions (issued through a smart speaker). The players’ decisions ultimately shape the trajectory of the game and determine whether the players progress far enough to solve the mystery. It was a phenomenal demo of a product that certainly made me think, “wow, this interactive story-telling concept sure is maturing fast.”

Witlingo now has a serious product on its hands with Castlingo (micro Alexa content generated by the user). While podcasts represent long-form content akin to blogging, there seems to be a gap to fill for micro-form audio content creation more akin to tweeting. I’m not sure whether this gap will ultimately be filled by something like Castlingo or by Flash Briefings, but it would be awesome if a company like Witlingo emerged as the Twitter for audio.

Companies like Soundhound continue to give me hope that white-label assistant offerings will thrive in the future, especially as brands will want to extend their identities to their assistants rather than settle for something bland and generic. Katie McMahon‘s demos of Hound never cease to amaze me either, and its newest feature, Query Glue, displays the most advanced conversational AI that I’ve seen to date.

Magic + Co’s presence at the show indicated that digital agencies are beginning to take Voice very seriously and will be at the forefront of the creative ways brands and retailers integrate and use smart assistants and VUI. We also had folks from Vayner Media at this year’s conference which was just another example that some of the most cutting-edge agencies are thinking deeply about Voice.

Finally, there seemed to be a transition to a higher phase on an individual level too. Brian Roemmele, the man who coined the term #VoiceFirst, continues to peel back the curtain on what he believes the long-term future of Voice looks like (check out his podcast interview with Bret Kinsella). Teri Fisher seemed to be on just about every panel and was teaching everyone how to produce different types of audio content. For example, he provided a workshop on how to create a Flash Briefing, which makes me believe we’ll see a lot of people from the show begin making their own audio content (myself included!).

the role of hearables
From my presentation at the Alexa Conference 2019

From a personal standpoint, I guess I’ve entered into my own phase two as well. Last year I attended the conference on a hunch that this technology would eventually impact my company and the industry I work in, and after realizing my hunch was right, I decided that I needed to start contributing in the area of expertise that I know best: hearables.

This year, I was really fortunate to have the opportunity to present the research I’ve been compiling and writing about around why I believe hearables play a critical role in a VoiceFirst future. I went from sitting in a chair last year, watching and admiring people like Brian, Bret and Katie McMahon share their expertise, to sharing some of my own knowledge with those same people this year, which was one of the coolest moments of my professional career. (Stay tuned, as I will be breaking my 45-minute talk into a series of blog posts where I unpack each aspect of my presentation.)

For those of you reading this piece who haven’t been able to make it to this show but suspect it might be valuable, my advice is to just go. You’ll be amazed at how inclusive and communal the vibe is, and I bet you’ll walk away from it thinking differently about your own and your business’s role as we enter the 2020s. If you do decide to go, be sure to reach out, as I will certainly be in attendance next year and in the years beyond.

-Thanks for Reading-

Dave

 

 

audiology, hearables

Jobs-to-be-Done and the Golden Circle

Jobs to be Done

It’s about the Job, not the Product

There are two concepts that I’ve been thinking about lately to apply back to FuturEar. The first is the framework known as “Jobs-to-be-Done.” I’ve touched on it briefly in previous posts with regard to how it applies to voice technology and the mobile app economy, but it’s a framework that can be applied to just about anything and is worth expanding on, because I think it’s going to increasingly shape consumers’ decision-making when choosing which ear-worn device(s) and software solutions to purchase and wear. This will ring especially true as those devices become more capable and can be worn for longer periods of time, as their feature set broadens and the underlying technology continues to mature.

“Jobs-to-be-Done” is the idea that every product and service, physical or digital, can be “hired” by a consumer to complete a job they’re looking to accomplish. Using this framework, it’s essential to first understand the job the consumer is trying to get done and then work backward to figure out which product or service to hire. Clay Christensen, who developed this framework, uses milk shakes as his example:

“A new researcher then spent a long day in a restaurant seeking to understand the jobs that customers were trying to get done when they hired a milk shake. He chronicled when each milk shake was bought, what other products the customers purchased, whether these consumers were alone or with a group, whether they consumed the shake on the premises or drove off with it, and so on. He was surprised to find that 40 percent of all milk shakes were purchased in the early morning. Most often, these early-morning customers were alone; they did not buy anything else; and they consumed their shakes in their cars.

The researcher then returned to interview the morning customers as they left the restaurant, shake in hand, in an effort to understand what caused them to hire a milk shake. Most bought it to do a similar job: They faced a long, boring commute and needed something to make the drive more interesting. They weren’t yet hungry but knew that they would be by 10 a.m.; they wanted to consume something now that would stave off hunger until noon. And they faced constraints: They were in a hurry, they were wearing work clothes, and they had (at most) one free hand.”

The essence of this framework is understanding that while people consume milk shakes all the time, they do so for different reasons. Some people love them as a way to stave off hunger and boredom during a long drive; others enjoy them as a tasty treat after a long day. You’re hiring the same product for two different jobs. Therefore, if you’re hiring a milk shake to combat boredom during a long drive, you’re choosing among other foods that might serve the same purpose (chips, sunflower seeds, etc.), but if you’re hiring it as a tasty treat, it’s competing against things like chocolate or cookies. The job is what drives the buying behavior for the product; not the other way around.

Ben Thompson, who writes daily business and technology articles for his website Stratechery, recently wrote about this framework through the lens of Uber and the emerging electric scooter economy. As he points out, Ubers and Bird or Lime scooters can be hired for a similar job, which is to get you from point A to point B over short distances. This means that for quick trips, Ubers and scooters are competing with one another, as well as with walking, bikes and other forms of micro-transportation. Uber is a product that can be hired for multiple jobs (short trips, long trips, group trips, etc.), while you’d only hire a scooter for one of those jobs (short trips).

The Golden Circle

The second concept I’ve been thinking about is the “Golden Circle” that Simon Sinek outlines in his famous “Start with Why” TED Talk. (If you’ve never seen this TED Talk, I highly encourage you to watch the full thing, as it’s very succinct and powerful):

Simon uses the Golden Circle to illustrate why a few companies and leaders are incredibly effective at inspiring people, while others are not. The Golden Circle is comprised of three rings, with “why” at the core, “how” in the middle, and “what” in the outer ring. The vast majority of companies and leaders start from the outside and work their way in when communicating their message or value proposition – their message reads what > how > why. “Here’s our new car (what), it gets great gas mileage and has leather seats (how), people love it, do you want to buy our car (why)?” According to Simon, the problem with this flow is that people do not buy what you do, they buy why you do it. He argues that the message’s flow should be inverted to go why > how > what.

Simon uses Apple as an example of a company that works from the inside-out. “Everything we do, we believe in challenging the status quo. We believe in thinking differently (why). The way we challenge the status quo is by making our products beautifully designed, simple to use and user-friendly (how). We just happen to make great computers. Want to buy one (what)?”

What’s so powerful about Apple’s approach of working inside-out is that it effectively doesn’t matter what they’re selling, because people are buying the why; people identify with the Apple brand of “thinking differently.” It’s why a computer company like Apple was able to introduce MP3 players, phones, watches and headphones, and we bought them in droves, because people associated the new offerings with the brand; each was a challenge to the status quo of a new product category. Meanwhile, Dell tried to sell MP3 players at the same time Apple was selling the iPod, but no one bought them. Why? Because people associated Dell with what they sell (computers), so it felt weird to purchase a different type of product from them.

A Provision of Knowledgeable Assistance

Hearing Care Professional Golden Circle

Hearing care professionals can think of these two concepts in conjunction. There’s a lot of product innovation occurring within the hearing care space right now: new types of devices, improved hardware, new features and functionality, hearing aid and hearable companion apps, and other hearing-centric apps. This innovation will translate to new products that can be hired for new and existing jobs, and therefore broaden the scope of the suite of jobs that you as a hearing care professional can service. This also means that the traditional product for hire, hearing aids, is now competing with new solutions that might be better suited for specific jobs.

This is why I believe the value and the ultimate “why” of the hearing care professional is aligned with servicing the jobs that relate to hearing care; the products that are hired are just a means to an end. To me, it’s not about what you’re selling, but rather why you’re selling those solutions – to provide people with a higher quality of life. They’re tired of not being a part of the dinner conversations they once loved, worried that their job is in jeopardy because they struggle to hear on business calls, or maybe their spouse is fed up with them blasting the TV volume, making it uncomfortable to watch TV together. They’re not coming to you to buy hearing aids; they’re coming to you because they have specific jobs that they need help with. Therefore, if new products are surfacing that might be better suited for a specific job, those products should be factored into the decision-making process.

As the set of solutions to enhance a patient’s quality of life continues to improve and grow over time, it will increase the demand for an expert to sort through those options and properly match solutions to the jobs they’re best suited for. In my opinion, this means that the hearing health professional needs to extend their expertise and knowledge to include additional products for hire, so long as the professional is confident that the product is capable of certain jobs. Just as Apple was able to introduce a suite of products beyond computers, the hearing care professional has the opportunity to be perceived as someone who improves a patient’s quality of life, regardless of whether that’s via hearing aids, cochlear implants, hearables or even apps. The differentiating value of the professional will increasingly be about serving as a provision of knowledgeable assistance through their education and expertise in all things related to hearing care.

-Thanks for Reading-

Dave

audiology, Biometrics, hearables, Smart assistants, VoiceFirst

Capping Off Year One with my AudiologyOnline Webinar

FuturEar Combo

A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline for a number of audiologists around the country. The same week the year prior, I launched this blog. So, for me, the webinar was basically a culmination of the past year’s blog posts, tweets and videos, distilled into a one-hour presentation. Having to consolidate so much of what I’ve learned into a single hour helped me choose the things that I thought were most pertinent to the hearing healthcare professional.

If you’re interested, feel free to view the webinar using this link (you’ll need to register, though you can register for free and there’s no type of commitment): https://www.audiologyonline.com/audiology-ceus/course/connectivity-and-future-hearing-aid-31891  

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been seeing the way things have unfolded: the technological progress made in both the hardware and software of in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from previous eras where disruption felt as if it were on the doorstep yet never came to pass, and that’s largely because the underlying technology has matured so much of late.

I would argue that the single biggest reason why this time is different is due to the smartphone supply chain, or as I stated in my talk – The Peace Dividends of the Smartphone War (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that comprise the smartphone (which also comprise pretty much all consumer technology) have gotten incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc). This means more companies than ever, from various segments, are competing to set themselves apart in any way they can in their respective industries, and therefore are providing innovative breakthroughs for the rest of the industry to benefit from. So, hearing aids and hearables are benefiting from breakthroughs occurring in smart speakers and drones because much of the innovation can be reaped and applied across the whole consumer technology space, rather than just limited to one particular industry.

Learning from Apple

Another point that I really tried to hammer home is the fact that our “connected” in-the-ear devices are now “exotropic,” meaning that they appreciate in value over time. Connectivity enables the device to enhance itself, through software/firmware updates and app integration, even after the point of sale, much like a smartphone. So, in a similar fashion to our hearing aids and hearables reaping the innovation occurring elsewhere in consumer technology, connectivity does a similar thing – it enables network effects.

If you study Apple and examine why the iPhone was so successful, you’ll see that its success was largely predicated on the iOS App Store, which served as a marketplace connecting developers with users. The more customers (users) there were, the more incentive there was for merchants (developers) to come sell their goods in the marketplace (the App Store). The marketplace therefore grew and grew as the two sides constantly incentivized one another, compounding the growth.

That phenomenon I just described is called two-sided network effects and we’re beginning to see the same type of network effects take hold with our body-worn computers. That’s why a decent portion of my talk was spent around the Apple Watch. Wearables, hearables or smart hearing aids – they’re all effectively the same thing: a body-worn computer. Much of the innovation and use cases beginning to surface from the Apple Watch can be applied to our ear-worn computers too. Therefore, Apple Watch users and hearable users comprise the same user-base to an extent (they’re all body computers), which means that developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices too. The utility and value of our smart hearing aids and hearables will just continue to rise, long after the patient has purchased their device, making for a much stronger value proposition.

Smart Assistant Usage will be Big

One of the most exciting use cases that I think is on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (already happening in hearables). I’ve attended multiple conferences dedicated to this technology and have posted a number of blogs on smart assistants and the Voice user interface, so I don’t want to rehash every reason why I think this is going to be monumental for this industry’s product offering. The main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers – older adults. The reason for this fast adoption, I believe, is that there are few limitations to speaking and issuing commands to control your technology with your voice. This is why Voice is so unique; it’s conducive to the full age spectrum, from kids to older adults, while something like the mobile interface isn’t particularly conducive to older adults who might have poor eyesight, dexterity or mobility.

This user interface and the smart assistants that mediate the commands are incredibly primitive today relative to what they’ll mature to become. Jeff Bezos famously quipped in 2016 in regard to this technology that, “It’s the first inning. It might even be the first guy’s up at bat.” Even in the technology’s infancy, the adoption of smart speakers among the older cohort is surprising and leads one to believe that they’re beginning to depend on smart-assistant-mediated voice commands rather than tapping, touching and swiping on their phones. Once this becomes integrated into hearing aids, patients will be able to perform many of the same functions that you or I do with our phones, simply by asking their smart assistant to do it for them. One’s hearing aid serving the role (to an extent) of their smartphone further strengthens the value proposition of the device.

Biometric Sensors

If there’s one set of use cases that I think can rival the overall utility of Voice, it would be the implementation of biometric sensors into ear-worn devices. To be perfectly honest, I am startled by how quickly this is already beginning to happen, with Starkey making the first move by introducing a gyroscope and accelerometer into its Livio AI hearing aid, allowing for motion tracking. These sensors support the use cases of fall detection and fitness tracking. If “big data” was the buzz of the past decade, then “small data,” or personal data, will be the buzz of the next 10 years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a “nice to have” to a “need to have” for those who need to be wearing an all-day data recorder. That’s exactly the role the hearing aid is shaping up to serve – a data collector.
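
As an illustration of what motion-based fall detection can look like in principle, here is a deliberately simple Python sketch: flag a large impact spike followed by a stretch of stillness. The thresholds are invented and this is not Starkey’s actual fall-detection algorithm:

    # A simple fall-detection heuristic over accelerometer magnitude samples:
    # a large impact spike followed by a period of near-stillness. This is only
    # a sketch of the general idea, not Starkey's actual algorithm.

    IMPACT_THRESHOLD_G = 2.5     # spike suggesting an impact (assumed value)
    STILLNESS_THRESHOLD_G = 1.1  # close to 1 g means the wearer is not moving
    STILL_SAMPLES_REQUIRED = 50  # roughly 1 second of samples at 50 Hz (assumed)

    def detect_fall(accel_magnitudes_g: list[float]) -> bool:
        for i, magnitude in enumerate(accel_magnitudes_g):
            if magnitude >= IMPACT_THRESHOLD_G:
                window = accel_magnitudes_g[i + 1 : i + 1 + STILL_SAMPLES_REQUIRED]
                if len(window) == STILL_SAMPLES_REQUIRED and all(
                    m <= STILLNESS_THRESHOLD_G for m in window
                ):
                    return True   # impact followed by stillness -> possible fall
        return False

    # A spike followed by lying still would trigger an alert to a designated contact.
    samples = [1.0] * 20 + [3.2] + [1.0] * 60
    print(detect_fall(samples))   # True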

The type of data being recorded is really only limited by the types of sensors embedded in the device, and we’ll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid in 2019 for OEMs to implement into their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of folks have credited their Apple Watch with saving their lives, as they were alerted to abnormal spikes in their resting heart rates, which were discovered to be life-threatening abnormalities in their cardiac activity. So, we’re talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.

As these types of sensors continue to shrink in size and become more capable, we’re likely to see more types of data that can be harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor that’s capable of gathering glucose levels in a non-invasive way, which would be a game-changer for the 100 million people with diabetes or pre-diabetes. We’re truly at the tip of the iceberg with this aspect of the devices, and it would make the hearing healthcare professional a necessary component (fitting the “data collector”) for the cardiologist or physician who needs their patient’s health data monitored.

More to Come

This is just some of what’s happened across the past year. One year! I could write another 1,500 words on interesting developments that have occurred this year, but these are my favorites. There is seemingly so much more to come with this technology, and as these devices continue their computerized transformation into something more akin to the iPhone, there’s no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, “If you build it, they will come.” Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can’t wait to see what’s to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

-Thanks for Reading-

Dave

audiology, Biometrics, hearables, Live-Language Translation, News Updates, Smart assistants, VoiceFirst

Hearing Aid Use Cases are Beginning to Grow

 

The Next Frontier

In my first post back in 2017, I wrote that the inspiration for creating this blog was to provide an ongoing account of what happens after we connect our ears to the internet (via our smartphones). What new applications and functionality might emerge when an audio device serves as an extension of one’s smartphone? What new hardware possibilities open up now that the audio device is “connected?” This week, Starkey moved the ball forward, changing the narrative and design around what a hearing aid can be with the debut of its new Livio AI hearing aid.

Livio AI embodies the transition to a multi-purpose device, akin to our hearables, with new hardware in the form of embedded sensors not seen in hearing aids to date, and companion apps to allow for more user control and increased functionality. Much like Resound firing the first shot in the race to create connected hearing aids with the first “Made for iPhone” hearing aid, Starkey has fired the first shot in what I believe will be the next frontier, which is the race to create the most compelling multi-purpose hearing aid.

With the OTC changes fast approaching, I’m of the mind that one way hearing healthcare professionals will be able to differentiate in this new environment is by offering exceptional service and guidance around unlocking all the value possible from these multi-purpose hearing aids. This spans the whole patient experience, from the way the device is programmed and fit to educating the patient on how to use the new features. Let’s take a look at what one of the first forays into this arena looks like by breaking down the Livio AI hearing aid.

Livio AI’s Thrive App

Thrive is a companion app that can be downloaded to use with Livio AI, and I think it’s interesting for a number of reasons. For starters, what I find useful about this app is that it’s Starkey’s attempt to combat the potential link between hearing loss and cognitive decline in our aging population. It does this by “gamifying” two sets of metrics that roll up into a 200-point “Thrive” score that’s meant to be achieved regularly.

Thrive Score.JPG

The first set of metrics is geared toward measuring your body activity and is built around data collected through sensors to gauge your daily movement. By embedding a gyroscope and accelerometer into the hearing aid, Livio AI is able to track your movement, so it can monitor some of the same types of metrics as an Apple Watch or Fitbit. Each day, your goal is to reach 100 “Body” points by moving, exercising and standing up throughout the day.

The next bucket of metrics being collected is entirely unique to this hearing aid and is based on the way in which you wear your hearing aids. This “Brain” category measures the daily duration the user wears the hearing aid, the amount of time spent “engaging” with other people (which is important for maintaining a healthy mind), and the various acoustic environments that the user experiences each day.

Brain Image.JPG

So, through gamification, the hearing aid wearer is encouraged to live a healthy lifestyle and use their hearing aids throughout the day in various acoustic settings, engaging in stimulating conversation. To me, this will serve as a really good tool for the audiologist to ensure that the patient is using the hearing aid to its fullest. Additionally, for those who are caring for an elderly loved one, this can be a very effective way to track how active that loved one’s lifestyle is and whether they’re actually wearing their hearing aids. That’s the real sweet spot here, as you can quickly pull up their Thrive score history to get a sense of what your aging loved one is doing.
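
To show how two 100-point buckets could roll up into a single 200-point number, here is an illustrative Python sketch. The inputs mirror the metrics described above, but the weightings and daily targets are my own assumptions – Starkey’s actual Thrive scoring formula isn’t spelled out here:

    # Illustrative combination of the two buckets described above into a 200-point
    # score. Weightings and targets are assumptions for illustration only.

    def body_score(steps: int, exercise_minutes: int, stand_hours: int) -> int:
        score = (min(steps / 10_000, 1) * 40
                 + min(exercise_minutes / 30, 1) * 40
                 + min(stand_hours / 12, 1) * 20)
        return round(min(score, 100))

    def brain_score(hours_worn: float, engagement_hours: float, environments: int) -> int:
        score = (min(hours_worn / 12, 1) * 40
                 + min(engagement_hours / 4, 1) * 40
                 + min(environments / 4, 1) * 20)
        return round(min(score, 100))

    def thrive_score(steps, exercise_minutes, stand_hours,
                     hours_worn, engagement_hours, environments) -> int:
        return (body_score(steps, exercise_minutes, stand_hours)
                + brain_score(hours_worn, engagement_hours, environments))

    # An example day: active, well-worn aids, several acoustic environments.
    print(thrive_score(steps=8500, exercise_minutes=25, stand_hours=10,
                       hours_worn=11, engagement_hours=3, environments=4))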

HealthKit SDK Integration

 

Another very subtle thing about the Thrive app that has some serious future applications is the fact that Starkey has integrated Thrive’s data into Apple’s HealthKit SDK. This is one of the only third-party device integrations with this SDK that I know of at this point. The image above is a side-by-side comparison of what Apple’s Health app looks like with and without Apple Watch integration. As you can see, the image on the right displays the biometric data that was recorded from my Watch and sent to my Health app. Livio AI’s data will be displayed in the same fashion.

So what? Well, as I wrote about previously, the underlying reason this is a big deal is that Apple has designed its Health app with future applications in mind. In essence, Apple appears to be aiming to make the data easily transferable, in an encrypted manner (HIPAA-friendly), across Apple-certified devices. So, it’s completely conceivable that you’d be able to take the biometric data being ported into your Health app (i.e. Livio AI data) and share it with a medical professional.

For an audiologist, this would mean that you’d be able to remotely view the data, which might help you understand why a patient is having a poor experience with their hearing aids (perhaps they’re not even wearing them). Down the line, if hearing aids like Livio were to have more sophisticated sensors embedded, such as a PPG sensor to monitor blood pressure, or a sensor that can monitor your body temperature (as the tympanic membrane radiates body heat), you’d be able to transfer a whole host of biometric data to your physician to help them assess what might be wrong with you if you’re feeling ill. As a hearing healthcare professional, there’s a possibility that in the near future you will be dispensing a device that is invaluable not only to your patient but to their physician as well.
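
As a sketch of the kind of remote check this could enable, here is a short Python example that flags a patient whose average daily wear time is well below target. The data shape and the eight-hour target are hypothetical, not a real Thrive or HealthKit export format:

    # Hypothetical remote adherence check an audiologist could run once daily
    # wear-time data becomes shareable. Data shape and target are assumptions.

    daily_wear_hours = {
        "2018-09-03": 1.5,
        "2018-09-04": 2.0,
        "2018-09-05": 0.0,
        "2018-09-06": 3.5,
    }

    TARGET_HOURS_PER_DAY = 8.0   # assumed adherence target

    def needs_follow_up(wear_log: dict[str, float]) -> bool:
        average = sum(wear_log.values()) / len(wear_log)
        return average < TARGET_HOURS_PER_DAY / 2

    if needs_follow_up(daily_wear_hours):
        print("Average wear time is low -- worth a follow-up call before the next visit.")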

Increased Intelligence

Beyond the fitness and brain activity tracking, there are some other cool use cases that come packed with this hearing aid. There’s a language translation feature that covers 27 languages, which works in real time through the Thrive app and is powered through the cloud (so you’ll need internet access to use it). This seems to draw from the Starkey-Bragi partnership formed a few years ago, which was a good indication that Starkey was looking to venture down the path of making a feature-rich hearing aid with multiple uses.

Livio AI also leverages the smartphone’s GPS. This allows the user to locate their hearing aids from their smartphone if they go missing. Additionally, the user can set “memories” to adjust their hearing aid settings based on the acoustic environment they’re in. If there’s a local coffee shop or venue that the user frequents, where they’ll want their hearing aids boosted or turned down in some fashion, “memories” will automatically adjust the settings based on the pre-determined GPS location.
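
Conceptually, “memories” is a geofencing pattern: when the phone reports a position near a saved spot, apply that spot’s preset. Here is a generic Python sketch of that pattern (the locations, presets and 100-meter radius are invented, and this is not Starkey’s implementation):

    # Generic geofencing sketch: pick a hearing preset based on proximity to a
    # saved location. Locations, presets and radius are invented examples.
    from math import radians, sin, cos, asin, sqrt

    def distance_m(lat1, lon1, lat2, lon2):
        """Haversine distance between two GPS coordinates, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6_371_000 * asin(sqrt(a))

    memories = [  # hypothetical saved locations and their presets
        {"name": "Coffee shop", "lat": 38.6270, "lon": -90.1994, "preset": "speech boost"},
        {"name": "Home",        "lat": 38.6400, "lon": -90.2300, "preset": "quiet"},
    ]

    def preset_for(lat: float, lon: float, radius_m: float = 100.0) -> str:
        for memory in memories:
            if distance_m(lat, lon, memory["lat"], memory["lon"]) <= radius_m:
                return memory["preset"]
        return "default"

    print(preset_for(38.6271, -90.1995))   # -> "speech boost"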

If you “pop the hood” of the device and take a look inside, you’ll see that the components comprising the hearing aid have been significantly upgraded too. Livio AI boasts triple the computing power and double the local memory capacity of the previous line of Starkey hearing aids. This should come as no surprise, as the most impressive innovation in ear-worn devices is occurring in the components inside the devices, thanks to the economies of scale and massive proliferation of smartphones. This increase in computing power and memory capacity is yet another example of the “peace dividends of the smartphone war.” This type of computing power allows for a level of machine learning (similar to Widex’s Evoke) to adjust to different sound environments based on all the acoustic data that Starkey’s cloud is processing.

The Race is On

As I mentioned at the beginning of this post, Starkey has initiated a new phase of hearing aid technology, and my hope is that it spurs the other four manufacturers to follow suit, in the same way that everyone followed Resound’s lead in bringing “connected” hearing aids to market. Starkey CTO Achin Bhowmik believes that sensors and AI will do to the traditional hearing aid what Apple did to the phone, and I don’t disagree.

As I pointed out in a previous post, the last ten years of computing were centered on porting the web over to the apps on our smartphones. The next wave of computing appears to be a process of offloading and unbundling the “jobs” that our smartphone apps represent to a combination of wearables and voice computing. I believe the ear will play a central role in this next wave, largely because it’s a perfect position for an ear-worn computer equipped with biometric sensors that doubles as a home for the smart assistant(s) that will mediate our voice commands. This is the dawn of a brand new day, and I can’t help but feel very optimistic about the future of this industry and the hearing healthcare professionals who embrace these new offerings. In the end, however, it’s the patient who will benefit the most, and that’s a good thing when so many people could and should be treating their hearing loss.

-Thanks for Reading-

Dave

Conferences, hearables, Smart assistants, VoiceFirst

The State of Smart Assistants + Healthcare

 

Last week, I was fortunate to travel to Boston to attend the Voice of Healthcare Summit at Harvard Medical School. My motivation for attending this conference was to better understand how smart assistants are currently being implemented across the various segments of our healthcare system and to learn what’s on the horizon in the coming years. If you’ve been following my blog or Twitter feed, then you’ll know that I envision a near-term future where smart assistants become integrated into our in-the-ear devices (both hearables and Bluetooth hearing aids). Once that integration becomes commonplace, I imagine that we’ll see a number of really interesting and unique health-specific use cases that leverage the combination of the smartphone, the sensors embedded on the in-the-ear device, and smart assistants.

 

Bradley Metrock, Matt Cybulsky and the rest of the summit team that put on this event truly knocked it out of the park, as the speaker set and the attendees included a wide array of different backgrounds and perspectives, which resulted in some very interesting talks and discussions. Based on what I gathered from the summit, smart assistants will yield different types of value to three groups: patients, remote caregivers, and clinicians and their staff.

Patients

At this point in time, none of our mainstream smart assistants are HIPAA-compliant, which limits the types of skills and actions that can be developed specifically for healthcare. Companies like Orbita are working around this limitation by essentially taking the same building blocks required to create voice skills and then building secure voice skills from scratch on their own platform. Developers who want to create skills/actions for Alexa or Google that use HIPAA data, however, will have to wait until the smart assistant platforms become HIPAA-compliant, which could happen this year or next.

It’s easy to imagine the upside that will come with HIPAA-compliant assistants, as that would allow the smart assistant to retrieve one’s medical data. If I had a chronic condition that required me to take five separate medications, Alexa could audibly remind me to take each of the five, by name, and respond to any questions I might have regarding any of them. If I tell Alexa about a side effect I’m experiencing, Alexa might even be able to identify which of the five medications could be causing it and loop in my physician for her input. As Brian Roemmele has pointed out repeatedly, the future of our smart assistants is rooted in each of our own personalized, contextual information, and until these assistants are HIPAA-compliant, the assistant has to operate at a general level rather than a personalized one.
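
Here is a plain Python sketch of the kind of lookup a HIPAA-compliant skill could perform behind the scenes – reminders by name and a side-effect-to-medication match. The medication list and the logic are invented for illustration; this is not a real Alexa skill or a clinical tool:

    # Invented example of medication reminders and side-effect matching.
    # Not a real Alexa skill, and not clinical guidance of any kind.

    medications = {
        "lisinopril":   {"schedule": "8:00",  "side_effects": {"dry cough", "dizziness"}},
        "metformin":    {"schedule": "8:00",  "side_effects": {"nausea"}},
        "atorvastatin": {"schedule": "21:00", "side_effects": {"muscle pain"}},
    }

    def reminders_for(time_of_day: str) -> list[str]:
        return [name for name, info in medications.items() if info["schedule"] == time_of_day]

    def possible_culprits(reported_side_effect: str) -> list[str]:
        return [name for name, info in medications.items()
                if reported_side_effect in info["side_effects"]]

    print(reminders_for("8:00"))            # morning medication reminder, by name
    print(possible_culprits("dry cough"))   # candidates to flag for the physician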

That’s not to say there isn’t value in generalized skills, or in skills that can be personalized without using data that falls under the HIPAA umbrella. Devin Nadar from Boston Children’s Hospital walked us through their KidsMD skill, which allows parents to ask general questions about their children’s illness, recovery, symptoms, etc., and then have the peace of mind that the answers they’re receiving are sourced and vetted by Boston Children’s Hospital; it’s not just random responses retrieved from the internet. Cigna’s Rowena Track showed how their skill allows you to check things such as your HSA balance or urgent care wait times.

Care Givers and “Care Assistants”

By 2029, 18% of Americans will be above the age of 65, and average US life expectancy is already climbing toward 80. That number will likely continue to climb, which brings us to the question, “how are we going to take care of our aging population?” As Laurie Orlov, industry analyst and writer of the popular Aging In Place blog, so eloquently stated during her talk, “The beneficiaries of smart assistants will be disabled and elderly people…and everyone else.” So, based on that sentiment and the fact that the demand to support our aging population is rising, enter what John Loughnane of CCA described as “care assistants.”

Triangulation Pic.jpg
From Laurie Orlov’s “Technology for Older Adults: 2018 Voice First — What’s Now and Next” Presentation at the VOH Summit 2018

As Laurie’s slide above illustrates, smart assistants – or “care assistants” in this scenario – help to triangulate the relationship between the doctor, the patient and those who are taking care of the patient, whether that be caregivers or family. These “care assistants” can effectively be programmed with helpful responses around medication cadence, what the patient can or can’t do and for how long they’re restricted, what they can eat, and when to change bandages and how to do so. In essence, the “care assistant” serves as an extension of the caregiver and the trust they provide, allowing for more self-sufficiency and therefore less of a burden on the caregiver.

As I have written about before, the beauty of smart assistants is that even today, in their infancy and primitive state, they can empower disabled and elderly people in ways that no previous interface has. This matters from a fiscal standpoint too, as Nate Treloar, President of Orbita, pointed out that social isolation costs Medicare $6.7 billion per year. Smart assistants act as a tether to our collective social fabric for these groups, and multiple doctors at the summit cited disabled or elderly patients who described their experience of using a smart assistant as “life changing.” What might seem trivial to you or me, like being able to send a message with your voice, might be truly groundbreaking to someone who has never had that type of control.

The Clinician and the System

The last group that stands to gain from this integration is doctors and those working in the healthcare system. According to the Annals of Internal Medicine, for every hour that a physician spends with a patient, they must spend two hours on related administrative work. That’s terribly inefficient and something that I’m sure drives physicians insane. The drudgery of clerical work seems ripe for smart assistants to provide efficiencies: dictating notes, quickly retrieving past medical information, sharing said medical information across systems, and so on. Less time doing clerical work and more time helping people.

Boston Children’s Hospital uses an internal system called ALICE, and by layering voice onto this system, admins, nurses and other staff can very quickly retrieve vital information such as:

  • “Who is the respiratory therapist for bed 5?”
  • “Which beds are free on the unit?”
  • “What’s the phone number of the MSICU Pharmacist?”
  • “Who is the Neuro-surgery attending?”

And boom, you quickly get the answer to any of these. That’s removing friction in a setting where time might really be of the essence. As Dr. Teri Fisher, host of the VoiceFirst Health podcast, pointed out during his presentation, our smart assistants can be used to reduce the strain on the overall system by playing the role of triage nurse, admin assistant, healthcare guide and so on.
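
ALICE’s internals aren’t public, so treat the following as a minimal sketch of the kind of lookup a voice layer over a staffing roster might perform; the roster data, role names, and function names are all invented for illustration.

```python
# Hypothetical sketch of a voice layer answering roster questions.
# The real ALICE data model and API are not public; everything here is made up.

ROSTER = {
    ("respiratory therapist", "bed 5"): "Jordan P.",
    ("pharmacist", "MSICU"): "ext. 4412",
    ("attending", "neurosurgery"): "Dr. Rivera",
}

FREE_BEDS = {"unit A": ["bed 3", "bed 7"]}

def who_is(role: str, location: str) -> str:
    """Answer 'who is the <role> for <location>?' style queries."""
    person = ROSTER.get((role, location))
    return person or f"No {role} is currently assigned to {location}."

def free_beds(unit: str) -> str:
    """Answer 'which beds are free on <unit>?' style queries."""
    beds = FREE_BEDS.get(unit, [])
    return ", ".join(beds) if beds else f"No free beds on {unit}."

if __name__ == "__main__":
    print(who_is("respiratory therapist", "bed 5"))  # Jordan P.
    print(free_beds("unit A"))                       # bed 3, bed 7
```

The voice interface doesn’t change what the system knows; it just collapses the search-and-click path into a single spoken question.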

What Lies Ahead

It’s always important with smart assistants and Voice to simultaneously temper current expectations and remain optimistic about the future. Jeff Bezos joked in 2016 that “not only are we in the first inning of this technology, we might even be at the first batter.” It’s early, but as Bret Kinsella of Voicebot displayed during his talk, smart speakers represent the fastest adoption of any consumer technology product ever:

Fastest Adoption
From Bret Kinsella’s “Voice Assistant Market Adoption” presentation at the VOH Summit 2018

The same goes for how smart assistants are being integrated into our healthcare system. Much like Bezos’ joke, very little of this is even HIPAA-compliant yet. With that being said, you still have companies and hospitals the size of Cigna and Boston Children’s Hospital putting forth resources to start building out their offerings for an impending VoiceFirst world. We might not be able to offer true, personalized engagement with the assistant yet, but there’s still a lot of value that can be derived at the general level.

As this space matures, so too will the degree to which we can unlock efficiencies within our healthcare system across the board. Patients of all ages and medical conditions will be more empowered to receive information, prompts, and reminders to better manage their conditions. This means that those taking care of the patients are less burdened too, as they can offload the informational aspect of their care giving to the “care assistant.” That in turn frees up the system as a whole, as there are fewer general inquiries (and down the line, personal inquiries), meaning fewer patients who need to come in because more can be served at home. Finally, clinicians can be more efficient too, as they can offload clerical work to the assistant, better retrieve data and information on a patient-to-patient basis, and communicate with their patients more efficiently, even remotely.

As smart assistants become more integral to our healthcare system, my belief is that on-body access to the assistant will be desired. Patients, caregivers, clinicians, and medical staff all have their own reasons for wanting their assistant right there with them at all times. What better place than a discreet, in-the-ear device that allows for one-to-one communication with the assistant?

-Thanks for Reading-

Dave

hearables, Smart assistants, VoiceFirst

10 Years after the App Store, what’s Next?

What’s Next?

As we celebrate the 10-year anniversary of the App Store this week, it seems only natural to begin wondering what the next 10 years will look like. What modalities, devices, interfaces, and platforms will rise to the top of our collective preferences? There’s clearly an abundance of buzzwords thrown around these days that point to potential directions things may go, but the area I want to focus on is the Voice interface. This includes smart assistants and all the devices they’re housed in.

Gartner’s L2 recently published the chart below, which might seem to pour cold water on the momentum that has been touted around the whole Voice movement:

Before I go into why this chart probably doesn’t matter in the grand scheme of things, there were some solid responses as to why these trend lines are so disparate. Here’s what Katie McMahon, the VP of SoundHound, had to say:

Katie McMahon Tweet

One of the primary reasons the app economy took off was the two-sided network effect predicated on developer buy-in, which was driven by a huge monetary incentive. Of course there was an explosion of new applications and things you could do with your smartphone; there was a ton of money to be made developing those apps. It was a modern-day gold rush. The same financial incentive around developing voice skills doesn’t yet exist.

Here’s a good point Chris Geison, senior Voice UX researcher at AnswerLab, made around one of the current, major pitfalls of Voice skill/action discovery:

Chris Geison Tweet

So, based on Chris and AnswerLab’s research, a third of users don’t really know that an “app-like economy” exists for their smart speakers. That’s rather startling, given that Voicebot reported at the end of June that there are now 50 million smart speaker users in the US. Is it really possible that tens of millions of people don’t fully understand the capabilities and companion ecosystem that come with the smart speakers they own? It would appear so, as the majority of users rely on native functionality that doesn’t require a downloaded skill, as illustrated by this awesome chart from Voicebot’s Smart Speaker Consumer Adoption Report 2018:

As you can see from the chart above, only 46.5% of respondents from this survey have used a skill/action.

Jobs to be Done

In order to understand how we move forward and what’s necessary to do so, it’s important to look at how we use our phones today. As I wrote about in a previous post, each computer interface evolution has been a progression of reducing friction, or time spent doing a mechanical task. Today’s dominant consumer interface – mobile – is navigated through apps. Apps represent jobs that need doing, whether that’s a tool to get us from A to B (maps), filling time when you’re bored (games/social media/video), exercising or relaxing the mind (sudoku/chess/books/music/podcasts), etc. Every single app on your phone is a tool for you to execute a job you’re trying to accomplish.

User Interface Shift
From Brian Roemmele’s Read Multiplex 9/27/17

So, if we’re looking to reduce friction as we enter a new era of computing interaction, we should note that the friction with mobile is consolidated around the mechanical process of pulling out your phone, then digging through and toggling between your apps to get the job done. That mechanical process is the friction that needs to be removed.

Workflow + Siri Shortcuts

I was initially underwhelmed by Apple’s WWDC this year because I felt that Siri had once again been relegated to the backseat of Apple’s agenda, which would be increasingly negligent given how aggressively Amazon, Google, and the others have been moving into this area. What I didn’t fully understand was how crucial Apple’s 2017 acquisition of Workflow was, and how it might apply to Siri.

Siri Shortcuts ultimately represent a way for users to program “shortcuts” between apps, so that they can execute a string of commands together as a “workflow” via a single voice command. The real beauty of this is that each shortcut can be made public (hello, developers), and Siri will proactively suggest shortcuts for you based on what it learns about your preferences and contextual behavior. Power users empowering mainstream users with their shortcuts, as suggested by Siri. Remember, context is king with our smart assistants.
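
To give a feel for the “proactive suggestion” piece, here’s a conceptual sketch, not Apple’s actual Shortcuts or donation API, of how an assistant could notice a repeated chain of jobs and offer to turn it into a shortcut. The class, threshold, and job names are all assumptions for illustration.

```python
# Conceptual sketch only: count repeated sequences of jobs the user performs
# manually and surface a shortcut suggestion once a pattern appears often enough.
# This is not Apple's Shortcuts framework; names and threshold are invented.

from collections import Counter

SUGGESTION_THRESHOLD = 3  # arbitrary, for illustration

class ShortcutSuggester:
    def __init__(self) -> None:
        self.sequence_counts = Counter()

    def observe(self, jobs: tuple):
        """Record a sequence of jobs; suggest a shortcut when it becomes habitual."""
        self.sequence_counts[jobs] += 1
        if self.sequence_counts[jobs] == SUGGESTION_THRESHOLD:
            return f"Suggestion: create a shortcut for {' -> '.join(jobs)}?"
        return None

if __name__ == "__main__":
    suggester = ShortcutSuggester()
    morning = ("play playlist", "compare rides", "order coffee")
    tip = None
    for _ in range(3):
        tip = suggester.observe(morning)
    print(tip)  # fires on the third repetition
```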

Brian Roemmele expanded on this acquisition and the announced integration of Workflow with Siri on Rene Ritchie’s Vector podcast this week. Brian said something in this podcast that really jumped out at me (~38 min mark):

“Imagine every single app on the app store. Now deconstruct all those apps into Jobs to be Done, or intents, or taxonomies. And then imagine, with something like crayons, you can start connecting these things artistically any way you want… Imagine you can do that without mechanically doing it.”

This cuts right to the core of what I think the foreseeable future looks like. Siri Shortcuts, powered by Workflow, take the role of those crayons. If we extract all the utility and jobs that each app represents and put them together into one big pile, we can start to combine elements of different apps for greater efficiency. This, to me, really screams “removing mechanical friction.” When I can speak one command and have my smart assistant knock out the work I currently do by digging, tapping, and toggling through my apps, that’s a significant increase in efficiency. A couple of examples (with a rough sketch of the idea after the list):

  • “Start my morning routine” – starts my morning playlist, compares Lyft and Uber and displays the cheaper (or quicker, depending on what I prefer) commute, orders my coffee from Starbucks, and queues up three stories I want to read on my way to work.
  • “When’s a good time to go to DC?” – pulls together things like airfare, Airbnb listings, events that might be going on at the time (concerts or sports games surfaced from Ticketmaster/SeatGeek/Songkick), weather trends, etc.
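
Here’s the rough sketch promised above: each function stands in for a capability an app exposes today, and the “shortcut” is just a named chain of those jobs triggered by one phrase. None of these are real APIs; this is the idea, not Apple’s Shortcuts framework.

```python
# Conceptual sketch: apps deconstructed into "jobs to be done" and chained
# behind one spoken phrase. All functions and data here are stand-ins.

def start_playlist(name: str) -> str:
    return f"Playing '{name}'"

def cheapest_ride(origin: str, destination: str) -> str:
    # Stand-in for comparing ride-hailing quotes and picking the cheaper option.
    quotes = {"Lyft": 14.50, "Uber": 16.25}
    provider = min(quotes, key=quotes.get)
    return f"{provider} (${quotes[provider]:.2f}) from {origin} to {destination}"

def order_coffee(order: str) -> str:
    return f"Ordered: {order}"

def queue_reading(count: int) -> str:
    return f"Queued {count} saved stories"

MORNING_ROUTINE = [
    lambda: start_playlist("Morning Mix"),
    lambda: cheapest_ride("home", "work"),
    lambda: order_coffee("medium cold brew"),
    lambda: queue_reading(3),
]

def run_shortcut(steps) -> None:
    """One spoken phrase triggers the whole chain of jobs."""
    for step in steps:
        print(step())

if __name__ == "__main__":
    run_shortcut(MORNING_ROUTINE)  # "Start my morning routine"
```

The friction being removed is exactly the manual part: opening four apps and tapping through each one to do what a single chained command can do.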

The options are limited only by one’s imagination, and this interface really does begin to resemble a conversational dialogue as the jobs that need to be done become increasingly self-programmed by the smart assistant over time.

All Together Now

Apple isn’t the only one deploying this strategy; Google’s developer conference featured a strikingly similar approach to unbundling apps, called Slices and App Actions. It would appear that the theme heading into the next 10 years is to find ways to create efficiencies by leveraging our smart assistants to do the grunt work for us. Amazon’s skill ecosystem is currently plagued by the discovery issues highlighted above, but the recent deployment of CanFulfillIntentRequest for developers will hopefully allow mainstream users to discover skills and functionality more easily. The hope is that all the new voice skills, and the jobs they do, can be surfaced much more proactively. That’s why I don’t fixate on the number of skills created to this point: the way we effectively access those skills hasn’t really matured yet.
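
For the curious, CanFulfillIntentRequest is essentially Alexa asking a skill, without the user invoking it by name, whether it can handle a given request. The sketch below shows the general shape of such a response as a plain Python dict rather than through any particular SDK; treat the helper name and example slot values as illustrative.

```python
# Sketch of the general shape of a CanFulfillIntentRequest response: the skill
# declares whether it can fulfill the intent and understand each slot.
# Built as a plain dict for illustration; field values are examples only.

def can_fulfill_response(can_fulfill: str, slots: dict) -> dict:
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": can_fulfill,  # "YES", "NO", or "MAYBE"
                "slots": {
                    name: {"canUnderstand": verdict, "canFulfill": verdict}
                    for name, verdict in slots.items()
                },
            }
        },
    }

if __name__ == "__main__":
    # e.g. a health skill signaling it can answer an urgent-care wait-time query.
    print(can_fulfill_response("YES", {"location": "YES"}))
```

The more skills answer this kind of query well, the more the assistant can route a user’s request to the right skill without the user ever knowing the skill’s name, which is exactly the discovery fix the ecosystem needs.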

What’s totally uncertain is whether the companies behind the assistants will play nice with each other. In an ideal world, our assistants would specialize in their own domains and work together. It would be nice to be able to use Siri on my phone and have it hand off to Alexa when I need something from the Amazon empire or want to control an Alexa-based IoT device. It would be great if Siri and Google Assistant communicated in the background so that all my Gmail and calendar context was available for Siri to access.

Access Point

It’s possible that we’ll continue to have “silos” of skills and apps, and therefore silos of contextual data, if the platforms don’t play nice together. Regardless, within each platform the great unbundling seems to be underway. As we move toward a truly conversational interface, where we’re conversing with our assistants to accomplish our jobs to be done, we should then think about where we’re accessing the assistant.

I’m of the mind that as we depend on our smart assistants more and more, we’ll want access to them at all times. Therefore, I believe we’ll engage with smart assistants across multiple devices, but with continuity, all throughout the day. I may be conversing with my assistants at home via smart speakers or IoT devices, in my car on the way to work, and through smart-assistant-integrated hearables or hearing aids while I’m on the go.

While the past 10 years were all about consolidating and porting the web to our mobile devices via apps, the next 10 might be about unlocking new efficiencies and further reducing friction by unbundling those apps and letting our smart assistants operate in the background, doing our grunt work and accomplishing the jobs we need done. It’s not as if smartphones and tablets are going to go away; on the contrary, it’s how we use them and derive utility from them that will fundamentally change.

-Thanks for Reading-

Dave