Daily Updates, Future Ear Radio

Monetizing in Today’s World of Voice (Future Ear Daily Update 5-16-19)

Image: Fast Company

I came across an article by Cory Treffiletti in MediaPost that posed the question, “can advertisers monetize voice?” Cory boiled the current possibilities down to two opportunities – voice search and custom skills. With search, advertisers will need to create conversational content rather than traditional text content, since the results are retrieved audibly. This is a challenge in itself because of just how new and different this model is from the incumbent method we’ve grown accustomed to.

Additionally, as Cory points out, there’s another challenge: “You will also need to find a way to track results from voice search because cookies, UTM codes and other tracking won’t quite work.” The concept of search still may apply, but the methods and rules are so different that we’re going to have to reinvent the wheel, which is where we need to lean on the design community.

As far as custom skills go, there are hundreds of thousands of registered developers creating voice skills, either to sell to businesses or as consumer-oriented skills, and some, such as Nick Schwab, have been quite successful. Companies like VoiceXP have taken the platform approach, enabling virtually any business, of any size, to create its own custom skills and establish a voice web presence.

In talking with VoiceXP CEO Bob Stolzberg for the Harvard Business Review article I wrote about businesses using smart speakers as a channel to communicate with their customers, he pointed out that creating a skill is only half the battle. The other half is creating consumer awareness. Just as Cory pointed out with the Pringles example, companies need to take it a step further and really drive awareness toward the skill, leveraging traditional marketing methods to make people aware of these new marketing channels.

It’s a really well-thought-out article, and while it focused on monetization, Cory did a very good job of concisely articulating what makes voice simultaneously so exciting and promising, while also being quite challenging:

Voice is an interface.  It is a UI.  In fact, you could consider voice to eventually become something like an operating system in that it gives you a means to access the tools that are important to you, but it is not a tool in and of itself.

It’s also much larger than that.  Voice is a way to interact or engage with technology and consumers.  It is not a media format to directly monetize. You don’t see ads embedded in Windows or the Apple operating systems, so why would you expect to hear ads embedded in a voice UI?

This is spot on. It’s easy to forget that voice represents a multitude of things. It is akin to Windows and Apple’s OS in that it is the active environment where you interact with the technology and access its utility. The assistants that serve as the UI play the part of mediator and facilitator, and depending on the context of what you need from your assistant, that can lead to an environment where advertising is appropriate. This is no different than being advertised to through all the apps on your phone, whether it be Yelp, Facebook, Pandora, etc. We move in and out of different apps (environments) that we knowingly enter and expect to be advertised within.

Sometimes, however, our smart assistants become the “app.” One minute I might be asking Alexa general inquiries, such as the weather or to play a podcast, and the next I’m asking to shop. When I’m shopping, is it suddenly appropriate for Alexa to advertise to me based on the context? If I indicate that I want a particular type of item, with no brand specificity, does that prompt Alexa to start suggesting different (paid?) results based on what it knows about me? The same goes for Google: when I ask it to tell me about the best restaurants nearby, is it going to feed me paid results first?

The fact of the matter is that voice is a new type of web, a new type of computing, a new user interface, and it’s seemingly on the path to becoming a full-blown OS. There are a few ways to monetize it today, but like the dawn of the internet, we’re only seeing a glimpse of what’s possible, and that will become more apparent as the technology matures over time.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Google Ups the Ante at Google I/O: Local Assistant (Future Ear Daily Update: 5-15-19)

Google Assistant 2.0 – Speedier & More Secure

Image: Digital Trends

Google released a number of very interesting updates around Google Assistant at this year’s Google I/O, such as Duplex on the web, which I wrote about last week. Another key revelation was the upgraded Google Assistant, dubbed “Assistant 2.0,” that will be made available with the release of Android Q. As you can see in the video below, Assistant 2.0 handles command after command in near real-time.

As Bob O’Donnell wrote in his Tech.pinions article yesterday, the underlying reason for this upgrade in speed is that Google has moved the assistant from the cloud onto the device. This was made possible by improvements in compressing the models that process spoken commands, which Google said delivers a 10x improvement over processing those commands in the cloud. The end result is near-zero latency – exactly the type of step forward in friction reduction necessary to compel users to interact with their devices via voice rather than tap/touch/swipe.
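Google hasn’t published exactly how it shrank its models, but post-training quantization is one common way to make a speech model small enough to live on a phone. Below is a minimal, hypothetical sketch in PyTorch – the tiny LSTM “recognizer” is a stand-in, and the sizes it prints are illustrative only, not Google’s actual figures:

```python
# A minimal sketch of post-training dynamic quantization with PyTorch.
# The tiny LSTM "recognizer" below is a stand-in, not Google's model;
# the point is only that int8 weights shrink the on-disk size substantially.
import os
import torch
import torch.nn as nn

class TinyRecognizer(nn.Module):
    """Toy stand-in for an on-device speech model."""
    def __init__(self, n_mels=80, hidden=512, vocab=4096):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden, vocab)

    def forward(self, feats):
        out, _ = self.encoder(feats)
        return self.decoder(out)

def size_mb(model, path="tmp.pt"):
    """Serialize the weights and report their size on disk in megabytes."""
    torch.save(model.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

fp32_model = TinyRecognizer()
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
print(f"fp32: {size_mb(fp32_model):.1f} MB, int8: {size_mb(int8_model):.1f} MB")
```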

The other notable aspect of moving the processing away from the cloud and onto the device itself is that it helps to alleviate the privacy concerns surrounding voice assistants. As it stands now, when voice commands get sent to the cloud, they are typically logged, stored, and sometimes analyzed by teams inside Amazon and Google to enhance their machine learning and NLP models. This has caused quite the controversy, as publications like Bloomberg have stoked fears in the public that Big Brother is spying on them (although this article by Katherine Prescott does a very good job relaying what’s really going on).

Regardless, by localizing the processing to the smartphone, the majority of the commands fielded by the assistant no longer get sent to the cloud, and therefore can no longer be assessed by teams inside the smart assistant providers. The commands that do get sent to the cloud do so via a new technique Google announced called federated learning, which anonymizes the data and combines it with other people’s data in an effort to continue training the learning models.
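For the curious, the core idea behind federated learning is fairly simple: each device trains on its own data locally and only shares model weight updates, which a central server averages together. The sketch below is a toy illustration of that loop (a linear model on random data), not Google’s actual implementation:

```python
# Minimal sketch of federated averaging: devices never upload raw audio or
# transcripts, only model weight updates, which the server averages together.
# The linear model, data, and learning rate are illustrative, not Google's system.
import numpy as np

def local_update(global_weights, local_features, local_labels, lr=0.1):
    """One round of on-device training: a single gradient step on local data."""
    preds = local_features @ global_weights
    grad = local_features.T @ (preds - local_labels) / len(local_labels)
    return global_weights - lr * grad

def federated_round(global_weights, devices):
    """Server averages the updated weights returned by each device."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
# Ten "devices", each holding its own private (features, labels) data.
devices = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(10)]
weights = np.zeros(5)
for _ in range(25):
    weights = federated_round(weights, devices)
print(weights)
```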

Ultimately, Google I/O was a shot across Apple’s bow. Apple’s big theme across the past few years has been, “privacy, privacy, privacy.” Well, Google made privacy a focal point of this year’s developer conference, with Assistant 2.0 being one of the clearest examples. Additionally, Google is starting to paint a picture of how our assistants can be used from a utility standpoint with the massive reduction in latency in Google Assistant, along with the introduction of Duplex on the web. Apple has not yet shown Siri’s capacity to do anything near what Google is doing with Google Assistant from a utility standpoint.

The past ten years were all about shrinking down our jobs-to-be-done into apps on a single, pocket-sized super computer – our smartphone. Google is making the case that the next ten years might very well be about utilizing our assistants to now do those jobs for us by interconnecting all the bits and data stored on the smartphone and its apps, so that we don’t have to spend the time and effort communicating with our phones by tapping and swiping, but rather just speak to the phone and tell it what to do.

Abra Kadabra! Your wish is my command.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

The Future of Hearing Technology (Future Ear Daily Update 5-14-19)

The Future of Hearing Technology

This past weekend, I was invited by Speech-Language & Audiology Canada to present at its annual conference in Montreal on what the future of hearing technology will look like across the 2020s. I presented alongside Dr. Gurjit Singh, a senior research audiologist and program manager at hearing aid manufacturer Sonova. The two of us presented for 35 minutes each, followed by a fireside chat and Q&A with the audience of hearing healthcare professionals. The session was moderated by clinical audiologist Remington Shandro.

Myself, Remington and Gurjit during the fireside chat

My presentation was broken into three portions that all revolved around a central theme: all in-the-ear devices are becoming computerized. Nearly everything is trending toward being a hearable. The first portion focused on the question, “why now?” During this part, I laid out a series of consumer tech trends, hearing aid trends, and the innovation occurring within the devices that is making certain advancements feasible – for example, pointing out the standardization of Bluetooth hearing aids across the past five years, and then extrapolating on the various new use cases that said standardization allows for and the feasibility of multi-functional in-the-ear devices.

Thanks to Bluetooth standardization, we’re beginning to see multi-functional hearing aids enter the market and be widely adopted

The second portion looks at two sets of use cases that I believe will increasingly enhance the value proposition of the devices – voice computing and biometric sensor tech. If you’ve been following my blog or my Twitter feed, then you know I am passionate about voice computing, so anytime I present on hearables, I try to make the audience aware of the significance of having our smart assistants right in our ear-worn devices. I typically use the Jobs-to-be-Done framework to illustrate the point that much of what we rely on our smartphones for, and before that our laptops and PCs, will migrate to our voice assistants.

JTBD
Duplex on the web is a very good example of how this shift toward offloading our “grunt work” to our smart assistants will look as the technology matures.

In addition to focusing on voice computing, I also spend time talking about biometric sensors and the idea of converting one’s hearable into a biometric data collector and preventative health tool. Building off the underlying technology trends I previously laid out, I touch on the idea that certain biosensors have only recently become miniature enough to fit onto an in-the-ear device. These sensors allow all types of data to be captured, which today can detect health risks such as atrial fibrillation and pulmonary embolisms, or whether a person has fallen down (a huge driver of hospital visits for older adults). As these sensors continue to shrink in size and become capable of capturing a wider variety of data, we’ll likely come to view the devices as tools that help keep us healthy by alerting us to potential dangers happening inside our bodies.
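To give a sense of how approachable some of this is, a first-pass fall detector can be as simple as watching accelerometer magnitude for a near free-fall followed by a hard impact. The sketch below is purely illustrative – the thresholds and sample data are made up, and real products rely on far more sophisticated models:

```python
# Simplified illustration of threshold-based fall detection from accelerometer
# data, the kind of signal an in-ear sensor could supply. Thresholds and the
# sample data are invented for demonstration; production systems use ML models.
from dataclasses import dataclass

@dataclass
class Sample:
    t: float       # timestamp in seconds
    ax: float      # acceleration per axis, in g
    ay: float
    az: float

def magnitude(s: Sample) -> float:
    return (s.ax**2 + s.ay**2 + s.az**2) ** 0.5

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window_s=1.0):
    """Flag a fall when near free-fall is followed shortly after by a hard impact."""
    free_fall_time = None
    for s in samples:
        m = magnitude(s)
        if m < free_fall_g:
            free_fall_time = s.t
        elif free_fall_time is not None and m > impact_g:
            if s.t - free_fall_time <= window_s:
                return True
    return False

walking = [Sample(t * 0.02, 0.1, 0.1, 1.0) for t in range(100)]
fall = walking + [Sample(2.02, 0.05, 0.05, 0.2), Sample(2.30, 1.8, 1.2, 2.4)]
print(detect_fall(walking), detect_fall(fall))  # False, True
```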

Combining it together
Combining smart assistants with our biometric sensors to assess what’s going on with our data and ultimately share it with our physicians (or the physician’s smart assistant counterpart).

Finally, the last portion brings this all together to answer the question, “why does this convergence toward hearables matter?” For hearing healthcare professionals, who by and large deal with older adults (age tends to be one of the leading indicators of hearing loss), we’re witnessing these devices evolve into something much more profound than an amplification tool. When it comes to voice computing, I point out that older adults are one of the fastest-growing groups adopting the technology. Adults ages 55-75 are among the early adopters – when have we ever seen something like that? And it makes sense! Voice is so great because it’s conducive to the entire age spectrum – it’s natural and not limiting the way mobile might be for someone with poor vision or dexterity.

Older Adult Voice Adoption
Adults 55-75 years old are among the fastest adopters of smart speakers

In addition, the transformation into a preventative health tool could turn out to be essential for our aging population. Every day, 10,000 US baby boomers turn 65 years old, and that will continue until 2030. A combination of AI and sensor-laden hardware, such as a hearable, can help serve as a guardian of one’s health. These devices can do so much more than they could even a few years ago, and the trends are not slowing down. We’ll continue to see advancements in the components housed in the devices, and as the tech giants all aim their sights at the ear, we’ll likely see Apple, Amazon, Google, etc. drive a lot of innovation that is duplicated by other manufacturers and ultimately reaped by the consumer.

It was awesome having the opportunity to bring all the concepts I write about to life. I can’t thank the SAC team enough for inviting me out to present my thoughts on hearable technology. These are fast-changing times, so it’s great to share some of the trends I follow closely with busy professionals who will ultimately be impacted by the changing nature of these devices. I firmly believe these devices will only become more and more compelling to the end user as these new use cases, and all the technology that goes into the devices, mature.

Oh, and by the way, Montreal is really cool too. 10/10 will go back!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Google Ups the Ante at Google I/O: Duplex (Future Ear Daily Update: 5-9-19)

Duplex


Last year at Google I/O, Google introduced Google Duplex and everyone’s collective jaw dropped. It was sci-fi brought to life – need a reservation or an appointment? Just ask your Google Assistant, and voilà! Your assistant will literally call the restaurant for you, using a strikingly human-sounding voice, and book the reservation with the person who answers the phone.

Google’s Duplex technology is impressive to the point where it’s actually a little jarring. As cool as it is to ask your Google Assistant to book you a table for four at your favorite restaurant and then, five minutes later, get an email from Open Table confirming the reservation, it also opens up a whole ethical debate around whether the person on the other end of the line should be informed that she’s talking to an AI bot.

This year, however, Google introduced an update to Duplex that makes the application considerably more useful, without adding any fuel to the ethical debate fire, by focusing Duplex on helping to reduce the time spent booking all types of things online:

What makes Google Duplex and Google Assistant so powerful is Google’s ability to connect all of its properties together for the user, which results in a much more sophisticated level of assistance and a lot more utility. If you use Gmail, it can read through your emails and discern information and context that way. If you use Google Maps, it knows the history of the addresses you’ve entered. If you use Chrome as your browser, Google can access things you’ve auto-filled or saved in your browser. Google Calendar? Yep, it will access that.

This is the paradox with smart assistants. If we want them to continue to progress and become increasingly useful, it should be understood that it’s a trade-off: the more data we share with our assistants (and the companies that sit behind them), the more utility we can derive from them. Sharing our data with Google is what allows someone to simply say, “rent me a car for my trip,” and have their Google Assistant navigate the process of booking that car, understanding what they would want.

Google has upped the ante on what’s possible with smart assistants with Duplex on the web, and is differentiating Google Assistant on the basis of utility. Google is leveraging all of its legacy properties and fusing them together to create the ultimate productivity tool with Google Assistant. It’s going to be very interesting to see how Apple and Amazon each respond to the advances that Google rolled out at this year’s developer conference.

-Thanks for Reading-

Dave

 

Daily Updates, Future Ear Radio

Future Ear Daily Update: 5-8-19

Google’s Accessibility Initiatives


Google held its annual developer conference, Google I/O, yesterday and made a flurry of announcements that I will touch on in later updates. Today, I want to focus on two of the initiatives Google announced around accessibility.

Live Caption

Google has introduced a new feature in its new Android operating system, Q, called Live Caption (I think they were interchanging the name “Live Relay” too). The feature is due out later this year, and according to CEO Sundar Pichai, “Live Caption makes all content, no matter its origin, more accessible to everyone. You can turn on captions for a web video, podcast, even on a video shot on your phone.”

Being able to caption virtually any video on an Android phone running Q will be hugely valuable to the Deaf and hard-of-hearing community. It’s also really convenient for anyone in an environment where they want to watch a video without playing the audio. A shout-out to KR Liu for her cameo in the video and her collaboration with Google in bringing this feature to life! She and the folks at Doppler Labs were pioneers in the hearables space, and it should come as no surprise when Doppler alumni pop up here and there with contributions like this. Amazing stuff.

Project Euphonia

Project Euphonia is another initiative in which Google is using its machine learning technology to train its speech recognition systems for people who have speech impairments. Google is training this particular speech recognition model with recordings from people who have had strokes, have multiple sclerosis, stutter, or have other impairments, such as the individual in the video, Dimitri Kanevsky, a research scientist at Google who has a speech impairment of his own.

Dimitri alone has recorded 15,000 phrases to help train the model to better understand speech that isn’t traditionally represented in the training data. According to Dimitri, his goal, and Google’s with Euphonia, is to “make all voice interactive devices be able to understand any person speaking to it.” This is really important work, as it will be crucial to ensure that the #VoiceFirst world we’re trending toward is as inclusive of as many people as possible.
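Conceptually, this kind of personalization resembles standard transfer learning: keep a large, general-purpose acoustic model frozen and adapt a small output layer to one speaker’s recordings. The sketch below illustrates that structure with a tiny stand-in encoder and synthetic “recordings” – it is not Google’s Euphonia pipeline, just the general pattern:

```python
# Hypothetical sketch of personalizing a speech model on a small set of one
# speaker's recordings (as with the 15,000 phrases mentioned above). The tiny
# encoder and random "recordings" are stand-ins, not Google's system; the
# structure is standard transfer learning: freeze the general-purpose encoder
# and train only a small output head on the individual's speech.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB = 32                  # blank symbol + a small character set (illustrative)
N_MELS, HIDDEN = 80, 128

# Stand-in for a large pretrained acoustic encoder.
encoder = nn.GRU(N_MELS, HIDDEN, num_layers=2, batch_first=True)
head = nn.Linear(HIDDEN, VOCAB)        # the small part we personalize

for p in encoder.parameters():         # keep the general model fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
ctc = nn.CTCLoss(blank=0)              # standard loss for speech-to-text training

# Fake "recordings": (features, transcript) pairs standing in for real phrases.
recordings = [(torch.randn(1, 120, N_MELS),
               torch.randint(1, VOCAB, (1, 20))) for _ in range(50)]

for feats, transcript in recordings:
    hidden, _ = encoder(feats)
    log_probs = head(hidden).log_softmax(-1).transpose(0, 1)  # (time, batch, vocab)
    loss = ctc(log_probs, transcript,
               input_lengths=torch.tensor([120]),
               target_lengths=torch.tensor([20]))
    optimizer.zero_grad()
    loss.backward()                    # only the head's weights get updated
    optimizer.step()
```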

In addition, this project aims to bring those who cannot speak into the fold as well, creating models that can be trained by those with ALS themselves to recognize facial cues or non-speech utterances (like grunts and hums), which then trigger sounds from companion computers, such as a cheer or a boo. As Dimitri points out, to understand and be understood is absolutely unbelievable.

This is tech for good. Apple’s been doing a lot of great work around accessibility too, and in light of all the tech-backlash, if these companies want to compete for positive PR by re-purposing their technology to empower those who need it most…well, then that’s fine by me!

-Thanks for Reading-

Dave

 

Daily Updates, Future Ear Radio

Future Ear Daily Update 5-7-19

Microsoft Cortana & The Enterprise Assistant

Microsoft CEO Satya Nadella put the company’s enterprise smart assistant, Cortana, front and center during yesterday’s Microsoft Build developer conference. It appears that Microsoft is making great progress with Cortana, aided by last year’s acquisition of Semantic Machines, which has helped Cortana get a whole lot more conversational. Watch the Cortana demo in the clip from Nadella’s keynote to understand what I’m talking about.

This is equal parts brilliant and exciting from Microsoft as it continues to double down on its enterprise properties and works to integrate the new, conversation-capable Cortana into its software, starting with a big emphasis on Microsoft Outlook. In my Daily Update from two weeks ago, I pondered what our smart assistants will look like in the enterprise space and figured that one of the most obvious moves would be for companies like Microsoft to enhance their Office and Microsoft 365 properties by baking Cortana into the software. Two weeks later, we’re beginning to see exactly that, as the demo displays a 30-turn conversation between the user and Cortana while the user manages her calendar on the fly.

To me, the biggest takeaway from what Microsoft displayed with Cortana is that we’re moving toward a multi-assistant future. Microsoft is really good at enterprise software, so rather than trying to position Cortana as a general-use assistant like Alexa and Google Assistant, Microsoft instead made Cortana a specialized assistant, capable of handling queries pertinent to Microsoft’s software very, very well. I think this is what we’ll see moving forward: companies taking all the assets they have under their umbrella and conversationally enabling those properties with a specialized smart assistant interface.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Future Ear Daily Update: 5-6-19

How Smart Assistants will Impact Each Provider’s Revenue: Amazon

Amazon
Source: Visual Capitalist

I came across these awesome charts a little while ago from Visual Capitalist and I thought it might be interesting to write about how I see smart assistants impacting each of the major providers’ revenue (excluding Samsung and the Chinese assistant providers).

Let’s start with Amazon. Whenever writing about Amazon and Bezos’ vision for Alexa, it should always be noted that Amazon has 10,000 people working on Alexa. Think about that number for a minute. Clearly Alexa is set to play an important role in the company’s future, but what is that role? One possibility is that Alexa fits into the theme across all of Amazon’s offerings, which is that Amazon always ends up as “The Tax Man.” I didn’t come up with this analogy; Ben Thompson did back in 2016 with a post that still resonates with me today.

Let me explain, starting with the e-commerce portion of the business. In 2017, more than 50% of all units sold on Amazon.com came from third-party sellers, and the “marketplace fees” Amazon “taxes” those sellers (commissions, shipping and fulfillment) accounted for 18% of its total sales. In essence, if you want to be a merchant in Amazon’s gigantic marketplace, Amazon takes a cut of every transaction for facilitating that marketplace.

On the consumer side, Amazon Prime could be considered a tax as well. As Amazon continues to capture more and more of the total number of e-commerce transactions, and U.S. retail continues to trend toward e-commerce and away from physical stores, Amazon is effectively collecting a $99/year tax on users who prefer e-commerce to traditional brick & mortar retail.

Along the same lines, the scale of AWS allows network effects to compound to the point where Amazon’s offering is so appealing that most companies don’t think twice about having Amazon service their infrastructure needs. As a business owner, would you rather build out your computing infrastructure yourself, pay the AWS “tax,” or pay the tax of one of AWS’s competitors, such as Microsoft Azure? It’s becoming increasingly obvious that the answer for the vast majority of businesses is either AWS or a competitor. Amazon has made it so that it doesn’t make much operational or financial sense to try and build out your own, so you’re better off paying the AWS tax.

Sellers can push more units on Amazon than on any other marketplace, but you have to pay the tax man. Customers can get free two-day shipping on the majority of items sold through Amazon.com, bundled with video and audio content, but you have to pay the $99 annual Prime “tax.” You can have all your computing infrastructure needs established and managed by AWS, but you have to pay the tax. As Ben puts it in his piece, “Amazon has created a bunch of primitives, gotten out of the way, and taken a nice skim of the top.”

So what does Amazon “tax” with Alexa? I don’t know what Amazon’s grand plan for Alexa is, but the most obvious area to me is the combination of Alexa and Amazon Pay. Shoppers can link their Amazon Pay account to Alexa, so that for anything purchased through Alexa (whether that be Amazon goods, or an exchange Alexa brokers between the user and some other merchant on any given platform), Amazon facilitates the transaction and therefore reaps the payment fees. Merchants, for their part, can enable payments through Amazon Pay, and then take it a step further and allow purchases of their goods to be made through Alexa-linked Amazon Pay accounts.

I think this is what’s flying under the radar with Alexa. It’s not so much about Amazon encouraging consumers to buy more on Amazon.com (although Amazon definitely wants that too); more importantly, Amazon is attempting to put Alexa right in the middle of any type of voice commerce transaction. This would effectively mean that Amazon taxes any transaction brokered by Alexa, by fusing its payment offering for shoppers and merchants with its “master assistant” Alexa.
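To make the “tax man” framing concrete, here’s a back-of-the-envelope sketch of how a small per-transaction skim compounds across a high volume of voice-commerce orders. The fee rate and order values are made up for illustration and are not Amazon Pay’s actual fee schedule:

```python
# Back-of-the-envelope illustration of the "tax man" model: a small skim on
# every transaction adds up across a large volume of voice-commerce orders.
# The 2.9% + $0.30 fee and the order figures are invented for illustration,
# not Amazon's actual fee schedule.
def platform_take(order_values, fee_rate=0.029, flat_fee=0.30):
    """Total fees collected by the facilitator across a list of order values ($)."""
    return sum(v * fee_rate + flat_fee for v in order_values)

orders = [24.99, 8.50, 129.00, 45.25] * 250_000   # one million hypothetical orders
print(f"${platform_take(orders):,.0f} skimmed off ${sum(orders):,.0f} in sales")
```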

Stay tuned, as I will break down Apple and Siri next.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Future Ear Daily Update: 5-3-19

Article in Harvard Business Review

Harvard Business Review

Today, I am pumped to share the article I wrote for the Harvard Business Review on the ways that virtually any type and size of business can start to implement voice into its overall business strategy. Smart speakers are currently the most prevalent hardware through which businesses can engage their customers. 26.2% of American adults have a smart speaker – 66.4 million people – and nearly 30 million people use their smart speakers daily. This is the beginning of a new era in computing, and we’ve never seen a consumer device adopted more rapidly than what we’re seeing with smart speakers.

Here’s the thing – smart speakers and our phones are just the start in terms of the hardware we use to interact with our smart assistants. As consumer behavior and the ways people interact with their technology progressively move toward having our smart assistants “unburden” us of the friction of digging and tapping through our phones, we will want our smart assistants more readily available. That’s why we’ll see Alexa and all her friends move from the home and phone into our cars, hospitals, offices, classrooms, hotels, and yes, our mini ear-computers.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Future Ear Daily Update: 5-2-19

What If Treating Hearing Loss Delays Cognitive Decline?

For this week’s Oaktree TV, I interviewed Dr. Nick Reed of Johns Hopkins to talk about his team’s research on the effects of age-related hearing loss through a public health lens. It was one of the most fascinating interviews I’ve conducted, as the implications of his team’s research are pretty vast. So, for today’s update, I thought I’d break out some of the key talking points from our discussion.

What are the geriatric outcomes that come with age-related hearing loss?

As an audiologist himself, Nick points out that much of the focus of audiology is on the immediate outcomes – hearing loss and communication, and how satisfied the patient is with hearing aids. What Nick and his team are doing is examining the longer-term outcomes, such as cognitive decline, dementia, depression, social isolation, and loneliness – the things that gerontologists tend to focus on.

The Johns Hopkins research team studying this has found that, in general, people with hearing loss have a faster rate of cognitive decline than those without hearing loss (factoring in a number of controls such as age, race, sex, hypertension, etc.). In one study, the team followed a group of 600 older adults over a 12-year period, none of whom had dementia at baseline. The team found that over the 12 years, those with mild hearing loss had a 2x risk of developing dementia, those with moderate hearing loss a 3x risk, and those with severe hearing loss a 5x risk.

In a separate study, the team studied 150,000 individuals – half with hearing loss and half without – across a 10-year period, matching those with hearing loss to those without. They found that those with hearing loss spent on average about $22,000 more on healthcare. In addition, those with hearing loss had a 44% higher risk of a 30-day readmission, a 46% higher rate of hospitalization, and a 17% higher risk of emergency department visits.

So, through these studies, the Johns Hopkins team has found that hearing loss tends to equate to higher healthcare costs, more hospitalization, and an increased risk of comorbidities such as cognitive decline and dementia. Some of these findings were probably already suspected within the hearing healthcare community, and the Johns Hopkins team’s work validates those suspicions.

What is the impact of treating hearing loss on the outcomes listed above?

As Nick mentions, this is the million-dollar question. As he lays out in the video, the reason this is such a tough question to answer lies with the secondary data: people who own hearing aids usually have the means to buy them and tend to be more conscious of their health. In other words, these are probably the type of people who are proactive about their health anyway, which would itself impact something like cognitive decline.

In order to answer this million-dollar question, Nick tells me that what’s needed are large-scale randomized trials. Fortunately, Johns Hopkins has such a study currently underway to understand whether or not treating hearing loss delays cognitive decline. The two-arm study compares one group, in which the individuals use hearing aids, with another group, in which the individuals don’t receive a placebo but instead see a nurse and focus on age-related outcomes such as staying active, reducing smoking, and improving diet. So the team is comparing two arms that are both receiving beneficial treatment; the question is whether one is more effective than the other.

As we wait for the results of the Johns Hopkins study on this important question, Nick cites other studies, such as Piers Dawes’ research, that add evidence that treating hearing loss with hearing aids does in fact delay cognitive decline. What Dawes’ team found was that for individuals on a declining slope of cognitive ability, once they received hearing aids, the trajectory of that decline became much less steep. According to Dawes’ team, “one reason for why hearing aids would delay the rate of cognitive decline is the ‘cascade hypothesis,’ in which hearing aids may reduce depression, promote cognitively stimulating social engagement, promote greater physical activity and/or self-efficacy, all of which protect cognitive function.”

To me, if more and more evidence were to pile up showing that hearing aids reduce the rate of cognitive decline, that would truly be a game-changer. I believe it would change the status of a hearing aid from a “nice to have” to a “need to have,” and it would therefore be viewed as such by our healthcare system, strengthening the argument that these devices warrant insurance coverage by moving them out of the “elective” status they’re currently confined to. Additionally, I think this type of finding would alter how general physicians view hearing aids – they’d be seen by the GP as one of the few “remedies” available to prescribe to older adults showing signs of cognitive decline. That would likely equate to more referrals.

We know that hearing loss leads to higher healthcare costs, more frequent hospitalization, and an increase in certain cognitive-related comorbidities. Teams like Nick’s are researching whether hearing aids are in fact one of the best solutions available to reduce all three, which would be a huge boon to older adults with hearing loss, hearing healthcare professionals, and the healthcare system as a whole.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Future Ear Daily Update: 5-1-19

New Voicebot Article Out Today

Voicebot May 1 Article

I wrote an article for Voicebot today on Apple’s earnings report issued yesterday and how wearables and services represent the future of the company. In my opinion, the next 5-10 years of the Apple narrative will be all about how Apple profits off its user base through services, and how that user base becomes increasingly hard for Apple’s competitors to poach due to the lock-in effect of Apple’s wearables. Siri would be the ultimate layer Apple could apply across the board to really lock people in. Be sure to check it out and let me know what you think on Twitter!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”