
Future Ear Daily Update: 3-18-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

Voicebot Podcast Episode 86: Vijay Balasubramaniyan CEO of Pindrop

I usually try to carve out some time each weekend to catch up on podcasts I missed. This weekend, I wanted to circle back to this specific Voicebot episode because I had heard it was such a great conversation between Bret and Vijay. Suffice it to say, I was not disappointed; it was a fascinating discussion that helped me understand the challenges and opportunities of security in a voice-centric world.

Link to Podcast episode: https://www.stitcher.com/podcast/bret-kinsella/the-voicebot-podcast/e/59179098

Pindrop is a security company built for the voice era. The company started out by providing a fraud-detection solution used by banks and enterprise companies to identify fraud through voice biometrics. Pindrop uses 1,380 different “voice parameters” that allow it to identify the user very accurately. The company raised $90 million back in December, which should help finance its expansion as it sets its sights on the consumer market, providing its technology to smart speaker and smart assistant providers that want to use voice biometrics for authentication and security purposes.

Here’s a quote from around the 54-minute mark, where Vijay describes what Pindrop will help to facilitate: “What I would love for Pindrop to be able to do is accompany a consumer through an entire journey as voice starts taking over different parts of his life. I’d love to be able to lock my door just with my voice. I’d like to be able to drive my car and open it with my voice and then access my Salesforce data, or whatever else I need to access, with a single sign on with my voice. I’d like to go up to an airport kiosk and pull up my reservation with my voice and when I get to my hotel room, I open my room with my voice. And when I get into my room, I pull up my Netflix on the voice-activated TV, just with my voice.”

As we move into a #VoiceFirst future, a layer of security provided by companies like Pindrop is going to be extremely important. Allowing people to securely authenticate themselves for voice purchases, control IoT devices remotely, approve changes to their accounts, and so forth will make offloading tasks currently handled through our mobile phones to a voice user interface that much more compelling. Imagine having a smart assistant residing in your hearable, verifying and authenticating you at all times through your voice (or physical!) biometrics. No more having to remember and type in passwords.
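
Pindrop’s 1,380 voice parameters are proprietary, but the basic shape of voice-biometric authentication can be illustrated with a toy sketch: enroll a voiceprint once, then compare new utterances against it. This is a minimal illustration using MFCC features and hypothetical file names, not Pindrop’s actual method:

```python
# A toy sketch of voice-biometric verification (NOT Pindrop's proprietary
# method): summarize each utterance as a fixed-length "voiceprint" and
# accept if a new utterance is similar enough to the enrolled one.
# Assumes librosa is installed and two local WAV files exist.
import numpy as np
import librosa

def voiceprint(path: str) -> np.ndarray:
    """Summarize an utterance as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def is_same_speaker(enrolled: np.ndarray, probe: np.ndarray,
                    threshold: float = 0.9) -> bool:
    """Accept if the cosine similarity of the two voiceprints is high.
    The threshold here is illustrative only."""
    cos = np.dot(enrolled, probe) / (np.linalg.norm(enrolled) * np.linalg.norm(probe))
    return cos >= threshold

enrolled = voiceprint("enrollment.wav")      # hypothetical file names
probe = voiceprint("login_attempt.wav")
print("authenticated:", is_same_speaker(enrolled, probe))
```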

-Thanks for Reading-

Dave


Future Ear Daily Update: 3-15-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

Bose Hearing Aids

The Hearing Tracker team was able to get a first look, via an FDA filing, at the new, not-yet-released Bose hearing aids. Guess what? They look identical to Bose Hearphones.

[Image sourced from Hearing Tracker. Left: Bose hearing aid (source: FDA filing); right: Bose Hearphones (source: Hearphones user manual)]

 

I mean seriously, these appear to be identical. So, considering that the Hearphones are a personal sound amplification product (PSAP), my guess is that the hearing aid is more or less an enhanced version of the Hearphones, with more amplification and a self-assessment test taken by the user through the Bose Hear app.

Here’s the thing, though: will people with hearing loss flock to a form factor like this for an all-day ambient solution? I think the neckband form makes sense for situational amplification, i.e. keeping the device around your neck at all times but only putting the earbuds in during challenging listening situations. This is effectively what the Hearphones were intended to do: serve as both a music-listening device and a situational amplification device. The form factor makes sense in that capacity, but I struggle with the idea that people will wear it for extended periods of time the way they would a receiver-in-the-canal hearing aid.

The hearing healthcare space is likely to see a number of new suppliers coming from all directions as the OTC legislation takes effect in 2020, allowing for over-the-counter sales of hearing aids. We’re likely to see a lot of new takes on what a hearing aid could look like, but the consumer, and the market as a whole, will be the real judge of what’s practical and what’s not.

Bose is certainly on the move, introducing a flurry of new products. Along with the new line of amplification ear-worn devices, Bose has also introduced Frames, sunglasses with a speaker built into the frame. What’s interesting with Frames is that Bose has created its own platform for developers to build augmented-audio (audio AR) applications. Priti Moudgill, one of the founders of the wearable company Peripherii, wrote a great piece on Frames for Fash Nerd this week that’s worth checking out to learn more.

[Image sourced from Fash Nerd: Bose Frames, Alto & Rondo]

-Thanks for Reading-

Dave


Future Ear Daily Update 3-14-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

“Alexa IS the killer app” – Dave Isbitski

I’m a huge fan of Twitter, which I’m sure is obvious to the people who know me and follow @Oaktree_Dave there. The reason I’m such a fan is that it gives you a front-row seat to a marketplace of ideas, and if you know where to look, there are some fascinating ideas and threads of discussion taking place. Dave Isbitski, the chief evangelist of Alexa at Amazon, had a really thought-provoking thread the other day that I wanted to highlight here.

It started here, with Techmeme claiming that across the 80,000 Alexa skills that have been developed, there’s no “killer app.”

[Screenshot of the Techmeme tweet]

Dave’s response:

[Screenshot of Dave Isbitski’s response]

A reply from Marie Lescaille then triggered this thread from Dave:

[Screenshot of Dave’s thread]

This has been a recurring theme among some of the brightest thinkers in the Voice First space: the idea that we’re moving away from a world that revolves around apps. Instead, Alexa, Google Assistant and Siri are aspiring to be the interface to everything, including the pertinent information we’re seeking.

I’ve written about this through the framework of jobs-to-be-done and how voice represents an opportunity to break our apps down into bits and pieces, so that our smart assistants can piece together any combination of the “jobs” those apps represent.

Imagine saying, “Alexa, help me book a vacation,” which sends Alexa off to research every consideration that goes into that query: the best times to take off work based on my work calendar, airfare, Airbnb options, where my favorite bands are touring, temperatures in each possible destination, places my friends went and enjoyed, top restaurants, etc. All of that information is currently isolated in various apps, so breaking everything down and letting Alexa parse through it all and quickly weave it together into something actionable is really compelling.
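
To make the idea concrete, here’s a purely illustrative sketch of that fan-out-and-weave pattern. Every function below is a hypothetical stand-in for a real data source; none of this is an actual Alexa API:

```python
# One spoken intent fans out to several (hypothetical) data sources, then
# the results are woven together into a single, actionable answer.
from concurrent.futures import ThreadPoolExecutor

def free_dates():
    return ["June 3-10"]                           # stand-in for a calendar lookup

def airfare(dates):
    return {d: "$420 round trip" for d in dates}   # stand-in for a flight API

def lodging(dates):
    return {d: "Airbnb from $85/night" for d in dates}  # stand-in for a lodging API

def plan_vacation():
    """Resolve the sub-jobs concurrently, then weave them together."""
    dates = free_dates()
    with ThreadPoolExecutor() as pool:
        flights = pool.submit(airfare, dates)
        rooms = pool.submit(lodging, dates)
        return {"dates": dates, "flights": flights.result(), "lodging": rooms.result()}

print(plan_vacation())
```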

That’s just the informational side of voice’s potential. Consider what Dave said about communication between businesses and their customers: “A real conversation, with their customers, every day, in the moment.” Smart speakers and voice assistants aren’t just marketing tools. This is something entirely different and far more profound. While it might be fair to say there’s no single “killer application” with smart assistants yet, that claim misses the point: the assistant itself will ultimately be the killer application, an application that’s actually an interface for us to interact with the internet, businesses, and each other.

-Thanks for Reading-

Dave

p.s. The Google Action is still being built out. It should be available on Google Assistant early next week.


Future Ear Daily Update 3-13-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

“If I Were to Start a Business Today, I’d Build It Around Alexa or Google Home” – Mark Cuban

In a recent podcast that Mark Cuban did with Recode, he was asked the following question, “I was curious with all the investing that you do, what are three areas that you’re really excited about investing in, besides gambling … currently?”

To which Mark answered, “So, if I were going to start a business today, I’d build it around Alexa and Google Home. If I was 15 or 20 or 25, and you know, back in the day when I was working as a bartender and started a company, I would learn … because Alexa skills and scripting Alexa skills is really, really easy. But everybody thinks it’s really, really hard. And so that disconnect is a great opportunity. And so I told my kids, other kids, learn how to script, and just go get your neighbors and set up all of these Alexa tools and you’ll make $25, $30, $40 an hour.”

I’m not sure everyone building in the voice ecosystem would agree with the details of what Mark is saying, particularly around scripting. However, I think he’s right at a high level about the rather low barrier to entry in the voice space, and there seems to be an abundance of opportunity to find a niche within the voice economy to build a business around.
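
To Mark’s point about how approachable skill-building is, here’s a minimal sketch of a complete Alexa skill handler using the official ASK SDK for Python; the welcome message and skill logic are placeholders:

```python
# A minimal Alexa skill: one handler that responds when the skill launches.
# Uses the official ASK SDK for Python (pip install ask-sdk-core) and can be
# deployed as an AWS Lambda function.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user says 'Alexa, open <skill name>'."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome! This is about all the code a basic skill needs."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
lambda_handler = sb.lambda_handler()  # the entry point AWS Lambda invokes
```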

The podcast was recorded in front of a live audience at SXSW, and throughout it, a number of audience questions for Mark touch on Alexa, Google Assistant and AI more broadly. It’s definitely an interesting conversation, with a very tech-literate audience that asks pretty challenging questions. So, if you’re curious to know what Mark Cuban thinks about the future of smart assistants and the like, give it a listen.

-Thanks for Reading-

Dave

 


Future Ear Daily Update: 3-12-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

One in Four U.S. Consumers Own a Smart Speaker Today

According to Voicebot’s recent Smart Speaker Consumer Adoption report, the number of US adults who own a smart speaker rose from 47.3 million at the beginning of 2018 to 66.4 million by the end of the year. This represents 26.2% of all US adults.
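
A quick gut check on those figures; the growth rate and the implied adult population below are derived from the report’s numbers, not quoted from it:

```python
# Sanity-checking the Voicebot adoption figures.
start, end = 47.3e6, 66.4e6                        # US smart speaker owners, 2018
print(f"2018 growth: {(end - start) / start:.1%}")  # ~40.4%
print(f"implied US adults: {end / 0.262 / 1e6:.0f}M")  # ~253M, derived from 26.2%
```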

There were a number of interesting stats in the downloadable report, including the fact that Google and other smart speaker OEMs are gaining significant ground on Amazon in smart speaker share.

I continue to be blown away by how quickly smart speakers and smart assistants are being adopted by consumers. The technology is in its infancy relative to the potential of where things can go. So, seeing such a large swath of the US adopting them (adoption is growing like crazy internationally too) really says something about their current usability and the value they provide, even if people are largely using their smart speakers for the same three things: music, random questions and weather.

So, as more and more people adopt smart speakers and become comfortable communicating with their smart assistants through them, it should be rather seamless to transition that access point to a hearable that houses a smart assistant. Sometimes the consumer will want a far-field hardware access point, but for the most part, what matters is that the consumer can quickly access and communicate with their smart assistant. So, what better access point than an in-the-ear device that puts the assistant right in your ear?

-Thanks for Reading-

Dave

p.s. I’m in the process of creating a Google Action so that you can listen to this via Google Assistant as well.


Future Ear Daily Update: 3-11-19

Advanced Bionics’ Cochlear Implant to Use Sonova’s SWORD Chip

Advanced Bionics’ new Naída CI Connect cochlear implant will have universal Bluetooth streaming capabilities thanks to the Sonova Wireless One Radio Digital (SWORD) chip. Sonova is in the process of implementing SWORD across the new devices under its umbrella, including new Phonak hearing aids such as the Phonak Marvel.

Bluetooth hearing aids have been around since 2014, when ReSound introduced the LiNX, but it’s only recently that we’ve begun seeing more universally compatible hearing aids that work with non-Apple handsets. That same trend now applies to cochlear implants too. The Naída CI Connect represents the first universally compatible cochlear implant processor with direct streaming to any type of smartphone. While Cochlear’s Nucleus 7 Sound Processor is made-for-iPhone compatible, it requires an additional accessory, the True Wireless Phone Clip, for Android streaming.

Slowly but surely, universal, direct Bluetooth streaming from one’s hearing aids or cochlear implants to any type of smartphone is becoming a key, standard capability.

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

-Thanks for Reading-

Dave


Introducing Future Ear Radio

[Image: Future Ear Radio logo]

When I first started this blog back in 2017, my whole goal was to establish some of the macro-trends converging around the ear: smart assistant integration, biometric monitoring, live-language translation, the breakthroughs transpiring inside the devices, and all the other topics I touched on across the first 20 posts on this blog.

During this initial phase of FuturEar, all of these trends were still incubating, so it made sense to me to write from a macro level and paint the big picture of where things were headed. I wrote posts around concepts like network effects and jobs-to-be-done to help frame how these trends would evolve. The purpose of the first stage of FuturEar was to create a foundation of concepts that I could later layer progress updates onto, once the trends began to mature.

Over the past 18 months, the transition from “dumb” to “smart” ear-worn devices has gradually accelerated and, alongside it, we’ve seen the emergence and maturation of all kinds of exciting new use cases that our hearables will support. Now that things are rapidly advancing with both the hardware and the software (use cases), it seems time to transition from a macro-style blog with long-form posts to a more micro-style blog with shorter, daily posts. Which brings me to Future Ear Radio.

FuturEar Goes Micro

In a two-part post I wrote for Voicebot.ai, I detailed why I believe we’re on the cusp of a “Cambrian explosion of audio content.” One of the biggest takeaways from the mobile computing era was that it democratized content creation. The vast majority of us have a supercomputer in our pockets capable of capturing, uploading and sharing content via Instagram, Snapchat, Twitter, Facebook, YouTube, etc. We’re starting to see the same type of building blocks and easy-to-use, free tools emerge for audio content creation. So, I’m going to join in on the fun and start creating audio content of my own.

Each day, Monday through Friday, I’ll be using Castlingo to upload a 77-second Flash Briefing called “Future Ear Radio,” in which I surface and highlight the most interesting thing I found on the internet that day pertaining to the trends established on FuturEar. To access my casts, simply go into your Alexa or Google Assistant app and enable “Future Ear Radio.” Add it to your Flash Briefing too, so that it blends into your day seamlessly: search for Future Ear Flash Briefing or ask your assistant to add it for you.
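
Under the hood, a Flash Briefing skill just points Alexa at a content feed. Here’s a hedged sketch of generating one feed entry in Python, using the field names from Amazon’s documented Flash Briefing Skill API; the URLs and title are placeholders, not my real feed:

```python
# A sketch of a Flash Briefing feed entry. Field names (uid, updateDate,
# titleText, mainText, streamUrl, redirectionUrl) follow Amazon's Flash
# Briefing Skill API; values here are illustrative placeholders.
import json
import uuid
from datetime import datetime, timezone

def feed_item(title: str, audio_url: str, site_url: str) -> str:
    item = {
        "uid": f"urn:uuid:{uuid.uuid4()}",
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": "",          # empty: Alexa plays the audio instead of reading text
        "streamUrl": audio_url,  # the 77-second broadcast MP3
        "redirectionUrl": site_url,
    }
    return json.dumps(item, indent=2)

print(feed_item("Future Ear Daily Update",
                "https://example.com/future-ear/2019-03-18.mp3",  # placeholder
                "https://example.com/futurear"))                  # placeholder
```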

Also, to go along with the Flash Briefing, I’ll be publishing a short blog post in tandem with each broadcast, so that each day’s news has both a textual and an audio element.

This is my first foray into audio content creation, and I wanted a broad title like “Future Ear Radio” because I think down the line I’ll want to create other types of audio content that fit under the Future Ear Radio umbrella. Stay tuned on that front, and for now, go check out my debut broadcast!

I look forward to having you follow along with me into this next stage of FuturEar. Now that all the trends converging around the ear are really starting to ramp up, so too will the blog. This should be fun.

-Thanks for Reading-

Dave

 


The Alexa Conference 2019: From Phase One to Phase Two

[Image: The Alexa Conference 2019]

A Meeting of the Minds

Last week, I made my annual trek to Chattanooga, Tennessee to gather with a wide variety of Voice technology enthusiasts at the Alexa Conference. Along with the seismic growth in smart speaker and voice assistant adoption, attendance grew dramatically too, from roughly 200 people last year to more than 600 this year. We outgrew last year’s venue, the very endearing Chattanooga Public Library, and moved to the city’s Marriott convention center. The conference’s growth was accompanied by an exhibit hall and sponsorships from entities as large as Amazon itself. We even had a startup competition between five startups, which my guest, Larry Guterman, won with his amazing SonicCloud technology.

In other words, this year the Alexa Conference felt like it took a huge step forward. Cheers to Bradley Metrock and his team for building this conference from scratch into what it has become today and for bringing the community together. That’s what makes this conference so cool; it has a very communal feel to it. My favorite part is simply getting to know the different attendees and understanding what everyone is working on.

This Year’s Theme

Phase One

Bret Kinsella, the editor of Voicebot.ai, the de facto news source for all things Voice, presented the idea that we’ve moved into phase two of the technology. Phase one of Voice was all about introducing the technology to the masses and then increasing adoption and overall access to it. You could argue that this phase started in 2011 when Siri was introduced, but the bulk of phase one’s progress came after 2014, when Amazon rolled out the first Echo and introduced Alexa.

[Chart: Smart Speaker Adoption Rate, by Activate Research]

Since then, we’ve seen Google enter the arena in a very considerable way, culminating in the recent announcement that Google Assistant would be enabled on one billion devices. We also saw smart speaker sales soar, ultimately representing the fastest adoption of any consumer technology product ever. If the name of the game for phase one was introducing the technology and growing the user base, then I’d say mission accomplished. On to the next phase of Voice.

Phase Two

According to Bret, phase two is about a wider variety of access points (new devices), new segments that smart assistants are moving into, and increasing the frequency with which people use the technology. This next phase will revolve around habituation and specialization.

[Chart: Voice assistant share, from Bret Kinsella’s talk at the Alexa Conference 2019]

In a lot of ways, the car is the embodiment of phase two. The car already represents the second most highly accessed type of device, behind only the smartphone, but it offers a massive pool of untapped access points through integrations and newer-model cars with smart assistants built into the console. It’s a perfect environment for a voice interface, as we need to be hands- and eyes-free while driving. Finally, from a habituation standpoint, the car, like the smart speaker, will serve as “training wheels” that help people get used to the technology as they build the habit.

A number of panelists in the breakout sessions, along with general attendees, helped open my eyes to some of the unique ways that education, healthcare, business, and hospitality (among other areas) are going to yield interesting integrations and contributions during this second phase. All of these segments offer new areas for specialization and opportunities for people to build the habit and get comfortable using smart assistants.

The Communal Phase Two

Metaphorically speaking, this year’s show itself felt like a transition from phase one to phase two. As I already mentioned, the conference grew up, but so have the companies and concepts that were first emerging last year. Last year, the first Alexa-driven interactive content companies, like Select a Story and Tellables, were starting to surface, shining a light on what the future of storytelling might look like in this new medium.

This year we had the founder of Atari, Nolan Bushnell, delivering a keynote on the projects he and his colleague, Zai Ortiz, are building at their company, X2 Games. One of the main projects, St. Noire, is an interactive murder-mystery board game that fuses Netflix-quality video content (through an app on a TV) with an interactive element in which the players make decisions (issued through a smart speaker). The players’ decisions ultimately shape the trajectory of the game and determine whether they progress far enough to solve the mystery. It was a phenomenal demo of a product that certainly made me think, “Wow, this interactive storytelling concept sure is maturing fast.”

Witlingo now has a serious product on its hands with Castlingo (micro Alexa content generated by the user). While podcasts represent long-form audio content akin to blogging, there seems to be a gap to fill for micro-form audio content creation more akin to tweeting. I’m not sure whether this gap will ultimately be filled by something like Castlingo or by Flash Briefings, but it would be awesome if a company like Witlingo emerged as the Twitter of audio.

Companies like SoundHound continue to give me hope that white-label assistant offerings will thrive in the future, especially as brands will want assistants that extend their identity rather than something bland and generic. Katie McMahon‘s demos of Hound never cease to amaze me either, and its newest feature, Query Glue, displays the most advanced conversational AI that I’ve seen to date.

Magic + Co’s presence at the show indicated that digital agencies are beginning to take Voice very seriously and will be at the forefront of the creative ways brands and retailers integrate and use smart assistants and VUIs. We also had folks from VaynerMedia at this year’s conference, another sign that some of the most cutting-edge agencies are thinking deeply about Voice.

Finally, there seemed to be a transition to a higher phase on an individual level too. Brian Roemmele, the man who coined the term #VoiceFirst, continues to peel back the curtain on what he believes the long-term future of Voice looks like (check out his podcast interview with Bret Kinsella). Teri Fisher seemed to be on just about every panel, teaching everyone how to produce different types of audio content. For example, he gave a workshop on how to create a Flash Briefing, which makes me believe we’ll see a lot of people from the show begin making their own audio content (myself included!).

[Slide: The role of hearables, from my presentation at the Alexa Conference 2019]

From a personal standpoint, I guess I’ve entered my own phase two as well. Last year I attended the conference on a hunch that this technology would eventually impact my company and the industry I work in, and after realizing my hunch was right, I decided I needed to start contributing in the area of expertise I know best: hearables.

This year, I was really fortunate to have the opportunity to present the research I’ve been compiling and writing about on why I believe hearables play a critical role in a VoiceFirst future. I went from sitting in a chair last year, watching and admiring people like Brian, Bret and Katie McMahon share their expertise, to sharing some of my own knowledge with those same people this year, which was one of the coolest moments of my professional career. (Stay tuned, as I will be breaking my 45-minute talk down into a series of blog posts covering each aspect of my presentation.)

For those of you reading this who haven’t been able to make it to this show but suspect it might be valuable, my advice is to just go. You’ll be amazed at how inclusive and communal the vibe is, and I bet you’ll walk away thinking differently about your own and your business’s role as we enter the 2020s. If you do decide to go, be sure to reach out, as I will certainly be in attendance next year and in the years beyond.

-Thanks for Reading-

Dave

 

 


Jobs-to-be-Done and the Golden Circle


It’s about the Job, not the Product

There are two concepts I’ve been thinking about lately that apply back to FuturEar. The first is the framework known as “Jobs-to-be-Done.” I’ve touched on it briefly in previous posts in regard to Voice technology and the mobile app economy, but it’s a framework that can be applied to just about anything, and it’s worth expanding on because I think it will increasingly shape consumers’ decision-making when choosing which ear-worn device(s) and software solutions to purchase and wear. This will ring especially true as these devices become more capable and wearable for longer periods, as their feature sets broaden and the underlying technology matures.

“Jobs-to-be-Done” is the idea that every product and service, physical or digital, can be “hired” by a consumer to complete a job they’re looking to accomplish. Using this framework, it’s essential to first understand the job the consumer is looking to accomplish and then work backward to figure out which product or service to hire. Clay Christensen, who developed the framework, uses milk shakes as his example:

“A new researcher then spent a long day in a restaurant seeking to understand the jobs that customers were trying to get done when they hired a milk shake. He chronicled when each milk shake was bought, what other products the customers purchased, whether these consumers were alone or with a group, whether they consumed the shake on the premises or drove off with it, and so on. He was surprised to find that 40 percent of all milk shakes were purchased in the early morning. Most often, these early-morning customers were alone; they did not buy anything else; and they consumed their shakes in their cars.

The researcher then returned to interview the morning customers as they left the restaurant, shake in hand, in an effort to understand what caused them to hire a milk shake. Most bought it to do a similar job: They faced a long, boring commute and needed something to make the drive more interesting. They weren’t yet hungry but knew that they would be by 10 a.m.; they wanted to consume something now that would stave off hunger until noon. And they faced constraints: They were in a hurry, they were wearing work clothes, and they had (at most) one free hand.”

The essence of this framework is understanding that while people consume milk shakes all the time, they do so for different reasons. Some hire them to stave off hunger and boredom during a long drive; others enjoy them as a tasty treat after a long day. It’s the same product hired for two different jobs. If you’re hiring a milk shake to combat boredom on a long drive, you’re choosing among other foods that might serve the same purpose (chips, sunflower seeds, etc.), but if you’re hiring it as a tasty treat, it’s competing against things like chocolate or cookies. The job is what drives the buying behavior for the product; not the other way around.

Ben Thompson, who writes daily business and technology articles for his website Stratechery, recently wrote about this framework through the lens of Uber and the emerging electric scooter economy. As he points out, Ubers and Bird or Lime scooters can be hired for a similar job: getting you from point A to point B over short distances. This means that for quick trips, Ubers and scooters compete with one another, as well as with walking, bikes and other forms of micro-transportation. Uber is a product that can be hired for multiple jobs (short trips, long trips, group trips, etc.), while you’d hire a scooter for only one of those jobs (short trips).

The Golden Circle

The second concept I’ve been thinking about is the “Golden Circle” that Simon Sinek outlines in his famous “Start with Why” TED Talk. (If you’ve never seen it, I highly encourage you to watch the full thing, as it’s very succinct and powerful.)

Simon uses the Golden Circle to illustrate why a few companies and leaders are incredibly effective at inspiring people while others are not. The Golden Circle is made up of three rings, with “why” at the core, “how” in the middle, and “what” in the outer ring. The vast majority of companies and leaders start from the outside and work their way in when communicating their message or value proposition; their message reads what > how > why. “Here’s our new car (what), it gets great gas mileage and has leather seats (how), people love it, do you want to buy our car (why)?” According to Simon, the problem with this flow is that people do not buy what you do, they buy why you do it. He argues that the message’s flow should be inverted to go why > how > what.

Simon uses Apple as an example of a company that works from the inside-out. “Everything we do, we believe in challenging the status quo. We believe in thinking differently (why). The way we challenge the status quo is by making our products beautifully designed, simple to use and user-friendly (how). We just happen to make great computers. Want to buy one (what)?”

What’s so powerful about Apple’s approach of working inside-out is that it effectively doesn’t matter what the company is selling, because people are buying the why; they identify with the Apple brand of “thinking differently.” It’s why a computer company like Apple was able to introduce MP3 players, phones, watches, and headphones, and we bought them in droves: people associated the new offerings with the brand, as challenges to the status quo of each new product category. Meanwhile, Dell tried to sell MP3 players at the same time Apple was selling the iPod, but no one bought them. Why? Because people associated Dell with what it sells (computers), so it felt weird to purchase a different type of product from Dell.

A Provision of Knowledgeable Assistance

[Image: The hearing care professional’s Golden Circle]

Hearing care professionals can think of these two concepts in conjunction. There’s a lot of product innovation occurring in the hearing care space right now: new types of devices, improved hardware, new features and functionality, hearing aid and hearable companion apps, and other hearing-centric apps. This innovation will translate into new products that can be hired for new and existing jobs, broadening the suite of jobs that you, as a hearing care professional, can service. It also means that the traditional product for hire, the hearing aid, now competes with new solutions that might be better suited for specific jobs.

This is why I believe the value and ultimate “why” of the hearing care professional lies in servicing the jobs that relate to hearing care; the products that are hired are just a means to an end. To me, it’s not about what you’re selling, but why you’re selling those solutions: to provide people with a higher quality of life. Patients are tired of missing out on the dinner conversations they once loved, worried that their job is in jeopardy because they struggle to hear on business calls, or dealing with a spouse who is fed up with the TV being blasted so loudly that it’s uncomfortable to watch together. They’re not coming to you to buy hearing aids; they’re coming to you because they have specific jobs they need help with. Therefore, if new products are surfacing that might be better suited for a specific job, those products should be factored into the decision-making process.

As the set of solutions for enhancing a patient’s quality of life continues to improve and grow, demand will rise for an expert who can sort through those options and properly match solutions to the jobs they’re best suited for. In my opinion, this means the hearing health professional needs to extend their expertise and knowledge to additional products for hire, so long as they are confident a product is capable of certain jobs. Just as Apple was able to introduce a suite of products beyond computers, the hearing care professional has the opportunity to be perceived as someone who improves a patient’s quality of life, regardless of whether that’s via hearing aids, cochlear implants, hearables or even apps. The differentiating value of the professional will increasingly be about serving as a provision of knowledgeable assistance through their education and expertise in all things hearing care.

-Thanks for Reading-

Dave


Capping Off Year One with my AudiologyOnline Webinar


A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline to a number of audiologists around the country. I launched this blog the same week the year prior, so for me, the webinar was a culmination of the past year’s blog posts, tweets and videos, distilled into a one-hour presentation. Having to consolidate so much of what I’ve learned into a single hour helped me choose the things I thought were most pertinent to the hearing healthcare professional.

If you’re interested, feel free to view the webinar using this link (you’ll need to register, though you can register for free and there’s no type of commitment): https://www.audiologyonline.com/audiology-ceus/course/connectivity-and-future-hearing-aid-31891  

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been seeing the way things have unfolded: the technological progress made in both the hardware and software of in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from previous eras in which disruption felt as if it were on the doorstep yet never came to pass, largely because the underlying technology has matured so much of late.

I would argue that the single biggest reason this time is different is the smartphone supply chain, or as I put it in my talk, the peace dividends of the smartphone war (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that comprise the smartphone (and pretty much all consumer technology) have gotten incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc.). More companies than ever, from various segments, are competing to set themselves apart in their respective industries, and in doing so they provide breakthroughs for everyone else to benefit from. So, hearing aids and hearables benefit from breakthroughs occurring in smart speakers and drones, because much of the innovation can be reaped and applied across the whole consumer technology space rather than being limited to one particular industry.

Learning from Apple

Another point I really tried to hammer home is that our “connected” in-the-ear devices are now “exotropic,” meaning they appreciate in value over time. Connectivity enables a device to enhance itself through software/firmware updates and app integration even after the point of sale, much like a smartphone. So, in a similar fashion to our hearing aids and hearables reaping innovation from elsewhere in consumer technology, connectivity does something similar: it enables network effects.

If you study Apple and examine why the iPhone was so successful, you’ll see that its success was largely predicated on the iOS App Store, which served as a marketplace connecting developers with users. The more customers (users) there were, the more incentive there was for merchants (developers) to come sell their goods in the marketplace (the App Store). The marketplace grew and grew as the two sides constantly incentivized one another, compounding the growth.

The phenomenon I just described is called two-sided network effects, and we’re beginning to see the same type of network effects take hold with our body-worn computers. That’s why a decent portion of my talk focused on the Apple Watch. Wearables, hearables, smart hearing aids: they’re all effectively the same thing, a body-worn computer. Much of the innovation and many of the use cases surfacing from the Apple Watch can be applied to our ear-worn computers too. Apple Watch users and hearable users therefore comprise, to an extent, the same user base (they all wear body computers), which means developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices too. The utility and value of our smart hearing aids and hearables will continue to rise long after the patient has purchased the device, making for a much stronger value proposition.
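
As a back-of-the-envelope illustration of why two-sided growth compounds, consider this toy simulation; every number in it is invented purely for illustration:

```python
# A toy model of two-sided network effects: each period, new developers
# arrive in proportion to the user base, and new users arrive in proportion
# to the developer base, so each side's growth feeds the other's.
users, devs = 1_000_000.0, 100.0       # illustrative starting sizes
for year in range(1, 6):
    new_devs = 0.00001 * users         # developers attracted per existing user
    new_users = 2_000.0 * devs         # users attracted per existing developer
    users, devs = users + new_users, devs + new_devs
    print(f"year {year}: users={users:,.0f}, devs={devs:,.0f}")
```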

Smart Assistant Usage will be Big

One of the most exciting use cases on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (it’s already happening in hearables). I’ve attended multiple conferences dedicated to this technology and posted a number of blogs on smart assistants and the Voice user interface, so I don’t want to rehash every reason why I think this will be monumental for this industry’s product offering. The main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers, older adults. The reason for this fast adoption, I believe, is that there are few limitations to speaking and issuing commands to control your technology with your voice. This is why Voice is so unique: it’s conducive to the full age spectrum, from kids to older adults, while the mobile interface isn’t particularly conducive to older adults who might have poor eyesight, dexterity or mobility.

This user interface, and the smart assistants that mediate the commands, are incredibly primitive today relative to what they’ll mature to become. Jeff Bezos famously quipped in 2016 that, with this technology, “It’s the first inning. It might even be the first guy’s up at bat.” Even in the technology’s infancy, smart speaker adoption among the older cohort is surprising, and it suggests they’re beginning to depend on smart assistant-mediated voice commands rather than tapping, touching and swiping on their phones. Once this becomes integrated into hearing aids, patients will be able to perform many of the same functions you or I do with our phones simply by asking their smart assistant. One’s hearing aid serving the role (to an extent) of a smartphone further strengthens the device’s value proposition.

Biometric Sensors

If there’s one set of use cases that can rival the overall utility of Voice, it’s the implementation of biometric sensors into ear-worn devices. To be perfectly honest, I am startled by how quickly this is already beginning to happen, with Starkey making the first move by introducing a gyroscope and accelerometer into its Livio AI hearing aid, allowing for motion tracking. These sensors support fall detection and fitness tracking. If “big data” was the buzz of the past decade, then “small data,” or personal data, will be the buzz of the next ten years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a “nice to have” to a “need to have” for those who need to wear an all-day data recorder. That’s exactly the role the hearing aid is shaping up to serve: a data collector.
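
For a sense of what motion-based fall detection involves, here’s a deliberately simplified, hypothetical sketch; Starkey’s actual algorithm is proprietary and surely far more sophisticated:

```python
# A simplified sketch of accelerometer-based fall detection (NOT Starkey's
# algorithm): a fall tends to appear as a brief near-free-fall dip in
# acceleration magnitude followed shortly by an impact spike.
import math

def detect_fall(samples, rate_hz=50, free_fall_g=0.4, impact_g=2.5, window_s=0.5):
    """samples: (ax, ay, az) tuples in units of g, sampled at rate_hz.
    Returns True if a free-fall dip is followed by an impact within window_s."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    window = int(window_s * rate_hz)
    for i, mag in enumerate(mags):
        if mag < free_fall_g:  # device momentarily in near-free fall
            if any(m > impact_g for m in mags[i:i + window]):
                return True
    return False

# usage: stationary readings (~1 g), then a dip, then a hard impact
trace = [(0.0, 0.0, 1.0)] * 100 + [(0.0, 0.0, 0.1)] * 5 + [(0.0, 0.0, 3.2)] * 2
print(detect_fall(trace))  # True
```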

The type of data that can be recorded is really only limited by the sensors embedded in the device, and we’ll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid, available in 2019 for OEMs to implement into their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of folks have credited their Apple Watch with saving their lives after it alerted them to abnormal spikes in their resting heart rates, which turned out to be life-threatening abnormalities in their cardiac activity. So, we’re talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.
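
At its simplest, optical heart-rate tracking is peak counting on the PPG waveform. Here’s a minimal sketch of that idea; real ear-worn PPG pipelines also handle motion artifacts, filtering and calibration:

```python
# Estimate heart rate from a PPG waveform by counting pulse peaks.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg: np.ndarray, fs: float) -> float:
    """ppg: raw PPG samples; fs: sample rate in Hz."""
    ppg = ppg - ppg.mean()                       # remove the DC offset
    min_gap = int(fs * 60 / 220)                 # assume no faster than 220 bpm
    peaks, _ = find_peaks(ppg, distance=min_gap,
                          prominence=ppg.std())  # one peak per heartbeat
    duration_min = len(ppg) / fs / 60
    return len(peaks) / duration_min

# usage with a synthetic, clean 70 bpm signal sampled at 100 Hz:
fs = 100.0
t = np.arange(0, 30, 1 / fs)
signal = np.sin(2 * np.pi * (70 / 60) * t)       # hypothetical stand-in for PPG
print(f"{heart_rate_bpm(signal, fs):.0f} bpm")   # ~70
```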

As these sensors continue to shrink in size and become more capable, we’re likely to see more types of data harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor capable of gathering glucose levels non-invasively, which would be a game-changer for the 100 million people with diabetes or pre-diabetes. We’re truly at the tip of the iceberg with this aspect of the devices, and it would make the hearing healthcare professional a necessary partner (fitting the “data collector”) for the cardiologist or physician who needs a patient’s health data monitored.

More to Come

This is just some of what’s happened across the past year. One year! I could write another 1,500 words on interesting developments from this year, but these are my favorites. There is seemingly so much more to come, and as these devices continue their computerized transformation into something more akin to the iPhone, there’s no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, “If you build it, they will come.” Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can’t wait to see what’s to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

-Thanks for Reading-

Dave