Daily Updates, Future Ear Radio, hearables

The Hearables & Hearing Aid Convergence (Future Ear Daily Update 8-14-19)

Screenshot: Andy Bellavia's Twitter thread

For today's update, I want to home in on and expand upon this short but powerful Twitter thread from Andy Bellavia of Knowles Corp on the convergence of hearing aids and hearables. I think Andy is right on the money with his analysis, as the two worlds of hearing aids and consumer hearables are beginning to converge and blend together. As he points out, each has its own set of advantages, and each can borrow innovation from the other, which ultimately might make them resemble each other.

For me, the fact of the matter is that nearly all ear-worn devices will ultimately transform into hearables, which really just means that all these devices will become progressively more computerized over time (remember, a hearable is an ear-worn computer). However, just because they'll all be computerized doesn't mean there won't be room for the devices to specialize.

For example, there are a variety of tablets tailored to different end users. I might want an iPad Pro because I need the professional software baked into the device, while my dad wants an Amazon Fire tablet to read and watch media, and my sister wants a LeapFrog kids' tablet for her daughter. All of these are tablets with similar form factors, user interfaces, and baseline functionality, but they're also differentiated by hardware, software, and functionality tailored to the targeted end user.

In the same vein, we'll likely see a similar pattern with hearables. My mom might want sophisticated Bluetooth hearing aids, I might want to wear AirPods, and my brother might want to wear Bose QC35 IIs. Today, all of these can stream audio from a smartphone and serve as a hub for communicating with a smart assistant, while differentiating around things like amplification, form factor, and active noise cancellation. It's early days for hearables, so we should expect some aspects to become universal and ubiquitous across all devices, even as specialized hardware and software features simultaneously emerge and expand.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Podcasts, VoiceFirst

Beetle Moment Marketing Podcast Appearance (Future Ear Daily Update 8-13-19)


One of the best things about attending the Voice Summit was meeting so many sharp people working in and around the voice space. One of the people I was fortunate to meet and spend some time with was Emily Binder, founder of Beetle Moment Marketing. Emily’s marketing firm specializes in helping brands differentiate by leveraging emerging technologies, which includes all things voice.

She has an impressive portfolio of work, which includes the creation and management of Josh Brown and Ritholtz Wealth Management's flash briefing, "Market Moment," and Alexa skill/mini podcast, "The Compound Show." (Josh, aka The Reformed Broker, is one of the most popular internet figures in the finance world, with over a million Twitter followers.) This is just one example of the type of work she does on a regular basis for all types of clients.

Emily approached me at the Voice Summit about coming on her podcast to record an episode centered on hearables, which we recorded last week. It was a quick, 18-minute discussion about the evolution of the hearables landscape, where the technology is going, and some of the challenges that have to be navigated to get there. We also touched on flash briefings and shared the same sentiment: there's a ton of potential in this new medium, but we're both a bit disappointed that Amazon hasn't given the Alexa-specific feature more prominence (it should be the star of Amazon's smart speakers!).

Check out the episode, and be sure to engage on Twitter to let me know what you think!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Live-Language Translation

Hearables & Live-Language Translation (Future Ear Daily Update 8-12-19)


Last Friday, Chris Smith published an article for Wareable breaking down the state of live-language translation. Chris reached out to me a month ago to gather my thoughts on this exciting, hearables-specific use case, and I thought he did an awesome job weaving some of my thoughts into the broader piece. It was also very cool to see my buddy Andy Bellavia interviewed, providing some perspective on the hardware side of things and what's necessary there to push this all forward. Give it a read to get a better idea of how far this use case has progressed and what's required to bring the Babel fish to life.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Smart assistants

Podcasting + Search (Future Ear Daily Update 8-9-19)


A few months ago, I wrote a two-part article for Voicebot titled, "The Cambrian Explosion of Audio Content" (part 1, 2). In the articles, I laid out the necessary "ingredients" that need to combine to create this explosion. We're seeing significant movement in the financial markets around podcasting, largely fueled by Spotify's slew of podcast-centric acquisitions made this year. AirPods are increasingly at the core of Apple's growth strategy, and Voicebot just released an article announcing that the smart speaker install base has now reached 76 million. Hardware tailored to audio content consumption continues proliferating at scale. Tools designed to create audio content continue to emerge and mature as well, continually lowering the barrier to entry for content creators.

One of the remaining ingredients needed to make this explosion go atomic is intelligent search and discovery. Voicebot reported yesterday that Google will begin adding podcasts to its search results. This is the beginning of one of the last pieces of the audio content puzzle falling into place. Initially, Google's foray into podcast search will be no different from the way it displays search results for the variety of other content it surfaces. Where this appears to be headed, however, is where things start to get very interesting.

In the blog post Google published announcing this new search feature, it mentioned that later this year the feature will be coming to Google Assistant. This is a really big deal, because the implications go beyond "searching" for podcasts: it sets the stage for Google Assistant to eventually work on the user's behalf to intelligently surface podcast recommendations. In the two-part piece I wrote, I mentioned this as the long-term hope for podcast discovery:

This is the same type of paradox that we're facing more broadly with smart assistants. Yes, we can access 80,000 Alexa skills, but how many people use more than a handful? It's not a matter of utility but discoverability; therefore, we need our smart assistant's help. The answer to the problem would seem to lie in a personalized smart assistant having a contextual understanding of what we want. In the context of audio consumption, smart assistants would need to learn from our listening habits and behavior what it is that we like, based on the context they can infer from the various data available. These data points would include signals such as the time (post-work hours; work hours), geo-location (airport; office), peer behavior (what our friends are listening to), past listening habits, and any other information our assistants can glean from our behavior.
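
To make that concrete, here is a minimal sketch in Python of how an assistant might blend those signals into a single episode score. Everything here is hypothetical: the signal names, weights, and preference table are illustrative stand-ins, not any actual recommendation system.

```python
from dataclasses import dataclass

@dataclass
class Context:
    hour: int                     # local time of day
    location: str                 # e.g. "office", "airport"
    friends_listening: set[str]   # episode ids trending among peers

# Hypothetical per-user affinities learned from past listening habits.
PREFS = {
    ("topic", "tech"): 0.8,
    ("topic", "finance"): 0.3,
    ("length", "short"): 0.6,
}

def score_episode(episode: dict, ctx: Context) -> float:
    """Blend learned preferences with contextual signals into one score."""
    score = 0.0
    # Learned affinity for the episode's attributes
    for key in (("topic", episode["topic"]), ("length", episode["length"])):
        score += PREFS.get(key, 0.0)
    # Location signal: long episodes fit a long wait at the airport
    if ctx.location == "airport" and episode["length"] == "long":
        score += 0.5
    # Peer signal: boost what friends are listening to
    if episode["id"] in ctx.friends_listening:
        score += 0.4
    # Time-of-day signal: news in the morning
    if ctx.hour < 9 and episode["topic"] == "news":
        score += 0.3
    return score

episodes = [
    {"id": "e1", "topic": "tech", "length": "short"},
    {"id": "e2", "topic": "news", "length": "long"},
]
ctx = Context(hour=8, location="office", friends_listening={"e2"})
print(max(episodes, key=lambda e: score_episode(e, ctx))["id"])
```

A real assistant would learn those weights from behavior rather than hard-coding them, but the shape of the problem is the same: many weak contextual signals combined into one ranking.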

Say what you want about Google's privacy position, but Google is leaning into the fact that it knows so much about its users (e.g., Duplex on the Web). Obviously, not everyone will be gung-ho about Google having so much data on them and the ways in which it will use said data. That being said, I'm not sure any voice assistant on the market today is capable of the level of contextual understanding required for these intelligent, context-rich applications, such as podcast recommendations through learned behavior. Time will tell, but Google just took a sizable step toward enabling this type of future.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Biometrics, Daily Updates, Future Ear Radio

Apple's Wearables + Research Kit (Future Ear Daily Update 8-8-19)


In yesterday's update, I wrote about how we're seeing medical-grade biometric sensors, such as ECG monitors, implemented in more and more consumer wearable devices. I laid out a few ways in which I believe one of the core use cases of wearables (and eventually hearables) will be to serve as "biometric data collectors." As the devices become outfitted with more sophisticated sensors, the data being collected yields more robust insights, leading to preventative health applications. We're already seeing this with the Apple Watch Series 4 detecting atrial fibrillation via its embedded ECG sensor.

Yesterday, CNBC reporter Christina Farr reported that Apple and Eli Lilly are partnering on a joint research project to study whether data from iPhones and Apple Watches can detect signs of dementia. It's an interesting study: each participant is required to use an iPhone, an Apple Watch, and a Beddit sleep tracker. Researchers are looking for ways to identify symptoms of dementia and cognitive decline in the data derived from the participants' phone habits, sleep patterns, and the biometric data collected by the watch.

(Quick side note: Apple bought the company Beddit in 2017, and now we're seeing one of the reasons why – these types of studies, where they want to monitor sleep patterns. It's such a typical Apple move: make a quiet acquisition, then put it to much broader use (Research Kit) a few years later.)


While this is an intriguing study, I don't think the point is that Apple is trying to get into the business of detecting dementia. I think this is a byproduct of what Apple has built around collecting data and the unique position Apple has carved out within the healthcare space. Kat's tweet is right on the money about the building blocks that Apple has created and is starting to assemble.

The iPhone user base, and the Apple Health app in particular, serves as each person's own health data repository, populated by inertial sensor data from the iPhone and biometric data collected via the Apple Watch (and probably AirPods down the line). Soon, we may see health records integrated too. These building blocks enable Apple's healthcare software development kits, such as Research Kit, which helps researchers recruit participants for studies and grants them access to opt-in data from studies such as this dementia study.
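
As a purely illustrative sketch of that pattern (hypothetical types and method names, not Apple's actual Health or Research Kit APIs), the core idea is a per-user store fed by multiple devices, with an explicit opt-in gate before any study can read from it:

```python
from dataclasses import dataclass, field

@dataclass
class HealthRecord:
    source: str     # e.g. "iphone_motion", "watch_ecg", "beddit_sleep"
    kind: str       # e.g. "steps", "heart_rate", "sleep_stage"
    value: float
    timestamp: float

@dataclass
class HealthRepository:
    """One per-user store, populated by many devices, shared only on opt-in."""
    records: list[HealthRecord] = field(default_factory=list)
    study_optins: set[str] = field(default_factory=set)

    def add(self, record: HealthRecord) -> None:
        self.records.append(record)

    def opt_in(self, study_id: str) -> None:
        self.study_optins.add(study_id)

    def export_for_study(self, study_id: str, kinds: set[str]) -> list[HealthRecord]:
        # Researchers see nothing unless the user has opted in to this study.
        if study_id not in self.study_optins:
            return []
        return [r for r in self.records if r.kind in kinds]

repo = HealthRepository()
repo.add(HealthRecord("watch_ecg", "heart_rate", 72.0, timestamp=0.0))
repo.opt_in("dementia_study_01")
print(repo.export_for_study("dementia_study_01", kinds={"heart_rate"}))
```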

Back in March 2017, Apple worked with Mount Sinai Hospital to better understand asthma. Apple worked with the hospital to create an app, and within a matter of days 7,600 people had downloaded it and enrolled in the six-month study. In addition to quickly amassing participants, Apple was able to use geo-location data to correlate the asthma data submitted by the participants with outside metrics such as heat and pollen. That correlated data can be accessed by researchers using Research Kit.

In January 2019, Apple worked with Johnson & Johnson to "investigate whether a new heart health program using an app from Johnson & Johnson in combination with Apple Watch's irregular rhythm notifications and ECG app can accelerate the diagnosis and improve health outcomes of the 33 million people worldwide living with atrial fibrillation (AFib), a condition that can lead to stroke and other potentially devastating complications."

It doesn't appear that Apple is attempting to create a product around asthma, just as I don't think Apple will pursue dementia-detecting technology. Rather, I believe this study is part of a broader trend: Apple occupying an extraordinary, unusual position not really seen before in the healthcare space.

Last March, I wrote a long piece titled, "Pondering Apple's Healthcare Move," and I believe Apple's strategy is to be the ultimate health data collector and facilitator. The healthcare data ecosystem that Apple has been incrementally putting into place since the introduction of the Health app and Health Kit in 2014 puts Apple in a position where a variety of medical professionals might find some aspect of it appealing. Apple may be betting that the best way to penetrate the healthcare sector is to lean on its unique ability to capture, store, and transfer so many different types of data within the healthcare ecosystem it has layered on top of the iPhone user base.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio

ECG Sensors Continue Proliferating (Future Ear Daily Update 8-7-19)


One of the most interesting trends occurring in wearables right now is the early implementation of medical-grade sensors in consumer products. The most notable example is the Apple Watch Series 4, which has an electrocardiogram sensor built into the device. Now, Samsung has announced an upcoming second edition of its Galaxy Watch Active line of smartwatches that will include an ECG monitor (the sensor will not be activated until Samsung gains FDA clearance).

There are essentially three types of sensors that have been embedded in wrist-worn and ear-worn wearables. The first is the inertial sensors, namely gyroscopes and accelerometers. These are the sensors that detect one's movement and orientation, allowing for all the Fitbit-type metrics we've grown accustomed to, such as step tracking. Another interesting way these sensors can be purposed is fall detection, which is exactly what Starkey's Livio AI hearing aid uses them for.
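
As a rough illustration of how inertial-sensor fall detection is often framed (a toy sketch with made-up thresholds, not Starkey's actual algorithm): look for a near-weightless dip in acceleration followed shortly by an impact spike.

```python
import math

# One reading per sample: (x, y, z) acceleration in g-forces.
# All thresholds are illustrative guesses, not values from any shipping device.
FREE_FALL_G = 0.4   # near-weightlessness while falling
IMPACT_G = 2.5      # sharp spike on landing
MAX_GAP = 20        # samples allowed between dip and spike (~0.4 s at 50 Hz)

def detect_fall(samples: list[tuple[float, float, float]]) -> bool:
    """Return True if a free-fall dip is followed shortly by an impact spike."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, mag in enumerate(magnitudes):
        if mag < FREE_FALL_G:
            # Look ahead a short window for the landing impact.
            if any(m > IMPACT_G for m in magnitudes[i : i + MAX_GAP]):
                return True
    return False
```

Production systems add a lot on top of this (orientation checks, post-impact stillness, machine-learned classifiers), but the dip-then-spike signature is the intuition.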

PPG sensor on the underside of an Apple Watch emitting green LED light

The second type of sensor that has been widely implemented in our body-worn computers is the PPG optical sensor. PPG stands for photoplethysmogram; these sensors shine LED light into the skin and capture blood flow patterns. The patterns are then fed into the wearable's algorithms, ultimately providing a heart rate readout. It's a clever, non-invasive way to measure one's heart rate, and although the method has some flaws, many of them have been progressively alleviated as the technology advances.
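
In simplified terms, turning a PPG waveform into a heart rate comes down to counting pulse peaks over a known stretch of time. Here's a toy sketch of that idea (the thresholding is an illustrative assumption, not any vendor's actual pipeline):

```python
def estimate_bpm(ppg: list[float], sample_rate_hz: float) -> float:
    """Count local maxima above a threshold and convert to beats per minute."""
    # Normalize around the mean so the threshold is signal-relative.
    mean = sum(ppg) / len(ppg)
    threshold = mean + 0.5 * (max(ppg) - mean)  # illustrative choice
    peaks = 0
    for prev, cur, nxt in zip(ppg, ppg[1:], ppg[2:]):
        if cur > threshold and cur > prev and cur >= nxt:
            peaks += 1
    duration_s = len(ppg) / sample_rate_hz
    return peaks * 60.0 / duration_s
```

The flaws mentioned above (motion artifacts, poor skin contact, ambient light) show up as spurious or missing peaks, which is why real devices layer filtering and motion compensation on top of this basic counting step.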

The newest sensors to be incorporated into wearables are the aforementioned electrocardiogram (ECG) sensors. ECG sensors collect and measure the electrical signals generated by the heart, which means they can be used to detect potential threats or issues, such as atrial fibrillation. Just look at how many stories there already are of people crediting their Apple Watch Series 4 with saving their life (1, 2, 3).
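
For a sense of how that works in principle, atrial fibrillation screening is often described as measuring how irregular the gaps between successive heartbeats are. Here's a simplified sketch (the 0.15 cutoff and beat times are made up; this is not Apple's algorithm):

```python
def rr_irregularity(beat_times_s: list[float]) -> float:
    """Coefficient of variation of the intervals between successive beats.

    A steady rhythm yields a low value; AFib tends to produce erratic
    intervals and a high value.
    """
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    mean = sum(intervals) / len(intervals)
    variance = sum((i - mean) ** 2 for i in intervals) / len(intervals)
    return (variance ** 0.5) / mean

beats = [0.0, 0.8, 1.7, 2.3, 3.4, 3.9, 5.1]  # made-up, erratic beat times
if rr_irregularity(beats) > 0.15:  # illustrative cutoff
    print("Irregular rhythm - worth flagging for a closer look")
```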

What's fascinating about ECG sensors being implemented in these consumer devices is that it's one of the first big, bold steps toward transforming wearables into preventative health tools. As more consumer wearables (and eventually hearables) are outfitted with medical-grade sensors, one of the primary use cases for wearables may well be to serve as "guardians of our health," actively monitoring our biometrics for threats and warning us of what's found in the data. That future is already well underway.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Land of the Giants Podcast (Future Ear Daily Update 8-6-19)


I recently came across Jason Del Rey's great new podcast, Land of the Giants, which looks at how the biggest tech companies have risen to power. Jason works with Recode, which was founded in 2014 by former Wall Street Journal tech writers Walt Mossberg and Kara Swisher. In 2015, Vox Media bought Recode and integrated it into Vox.

The first season of Land of the Giants covers Amazon and how it came to be such a dominant force in the economy. There have been three episodes to date, with the first episode covering the history of Amazon Prime. It's fascinating to hear first-hand accounts of how Amazon's flagship service came to be, especially in light of how many issues were plaguing the company at the time, such as too-frequent website crashes.

When Jeff Bezos initially pitched Prime, the core of his thesis was that he wanted to create an impenetrable moat around his top customers, yet Prime was considered a long shot by many both internally and externally. One of the major breakthroughs Amazon had with Prime came from the work of Jeff Wilke, Amazon's CEO of Worldwide Consumer, and his operations team inside the fulfillment centers. Wilke's team was able to shrink the average processing time of an order from 24 hours to 3 hours. These new order cycle times began rolling out in 2003 and sparked the initial growth of Amazon Prime, which snowballed from there.

The second episode explores Alexa, with the focus on Amazon's desire to own the inside of the consumer's home. As I have written about at great length on this blog, Amazon's Echo devices are proliferating at a staggering rate, representing the fastest adoption of any consumer device to date. While I believe Amazon's ambitions for Alexa go far beyond the home, the home is absolutely the base for the technology to mature, while serving as a central hub for the internet of things (something the IoT has sorely lacked).

Alexa not only represents an interface that facilitates "conversation" with previously rudimentary devices that have since been made voice-enabled, it also stands to change the way Prime members shop. The more Amazon integrates with IoT devices, the more Echo devices it can sell as "IoT access points" within one's home. And the more Prime members order through their voice, the more defensible Amazon's Prime moat becomes. If there's one thing you'll learn from this new podcast, it's that Amazon absolutely obsesses over its customers and will continue to make Prime more unique and valuable, which means it will continue to drive the customer acquisition cost through the roof for any competitor trying to poach Prime members away.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Apple Buys Intel’s Smartphone Modem Business (Future Ear Daily Update 8-5-19)


One of the most defining moments for Apple in the past 15 years was its decision to begin designing its own chipsets and processors. Apple had previously outsourced much of the design of the chips used in its devices, including the early generations of the iPhone, before introducing its own A-series chips with the A4 in 2010. Since then, Apple has become increasingly less dependent on outsourcing the various systems-on-chip (SoC) and systems-in-package (SiP) housed in its smartphones, tablets, laptops, and wearables.

Now Apple has set its sights on bringing a new component in-house: the smartphone modem. On July 25th, Apple announced it would be buying "the majority" of Intel's smartphone modem business for $1 billion. The purchase includes IP and equipment from Intel, along with 2,200 Intel employees who will be joining Apple. So why did Apple buy Intel's smartphone modem business? In short: the impending arrival of widespread 5G connectivity and the rise of Apple's wearables business.

Rene Ritchie brought expert analyst and Tech.pinions founder Ben Bajarin onto his podcast, Vector, to discuss the acquisition and shed light on Apple's motivation. Here's a key point Ben made (paraphrasing a bit):

“There’s no doubt that Apple wants to make its own modems. I think that’s been clear not just from reports, but the hiring. Doing baseband has been a high priority, but it’s also been a struggle and again it’s one of those things that they would have needed a license from somebody else whether it was Qualcomm or Intel… they needed that IP because the patent portfolio for modems is just so well covered that you need to get access to that portfolio if you’re going to get into that business.

It makes a lot of sense if you think about where they’re going, with computers that we wear on our wrists, on our faces, in our ears… all of those things will need modems. For them to control the design, miniaturize it and put them in small devices like earbuds, smaller watches or glasses, they need to control the modem and all the silicon bits to design something that small.”

One of the big takeaways from listening to Ben speak about this acquisition is that as Apple brings more component design in-house, it can consolidate components into single SoCs or SiPs, which is really important for energy efficiency and performance. Viewed through the lens of Apple's wearable offerings, the name of the game over the next few years will be finding ways to increase energy efficiency and performance in devices that are super small.

When asked about Apple's timeline for implementing its own modems, here's what Ben had to say:

“I think for 5G, it’s going to take some time. If we’re thinking about a device that needs a 5G modem, I think we’ll see Apple use Qualcomm’s modems for the foreseeable future. But remember, Apple is already using this technology, they have all the expertise to make an LTE modem with Intel. I could see them using their own LTE-modem within an iPad or even an Apple Watch in the next year or two.”

So, we should see Apple start in this space by designing its own LTE-based modems and relying on Qualcomm in the near term for anything that requires 5G. In three to five years, however, as 5G becomes more ubiquitous and Apple's wearables grow more robust and power-hungry, we might see Apple's modem ambitions migrate toward 5G modems, likely designed and integrated alongside the other components on the SoCs and SiPs Apple develops, allowing for more creative uses and more efficient battery life.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Voicebot, VoiceFirst

New Voicebot Post (Future Ear Daily Update 8-2-19)


Today I published an article on Voicebot.ai covering some key takeaways from Spotify's and Apple's earnings reports that pertain to all things Future Ear.

“Apple and Spotify each reported earnings on Tuesday and there were a few key points in each company’s earnings report that pertain to the world of audio, voice, and hearables. In particular, we can start to see how an aural attention economy is taking shape alongside the visual attention economy ushered in by the mobile era.”

Head on over to Voicebot to check out the post, and let me know what you think on Twitter!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Conferences, Future Ear Radio, hearables, VoiceFirst

Empowering Our Aging Population with Voice Tech (Future Ear Daily Update 7-30-19)


Yesterday's update centered on Cathy Pearl's fantastic talk at the Voice Summit about democratizing voice technology and using it to empower disabled individuals who can benefit from it. Today, I want to highlight another cohort that stands to gain from voice tech: our aging population. Prior to Cathy's talk, I attended an equally awesome session led by Davis Park of Front Porch and Derek Holt of K4Connect. I live-tweeted this talk as well (I'm sorry if I overloaded your Twitter feed while at the summit!):

To add some context here, K4Connect is a tech startup specifically geared toward building "smart" solutions for older adults. Front Porch, on the other hand, is a group of retirement facilities in California that has been piloting a series of programs to bring Alexa-enabled devices into its residents' homes. The two are now working together to take Front Porch's pilot into phase two, with K4Connect helping to outfit Front Porch's residents with IoT devices, such as connected lights and thermostats.

K4Connect and Front Porch’s Pilot Program

From my perspective, this was one of the most important sessions of the entire Voice Summit, because it homed in on two key facts that have been recurring themes throughout FuturEar:

  • America's population is getting considerably older, due to the facts that we're living longer and that 10,000 baby boomers are turning 65 every day for a 20-year stretch (2011-2030).
  • The older our population gets, the higher the demand climbs for caregivers to look after our aging adults. It was stated in the presentation that we as a nation will need to recruit and retain 300,000 additional caregivers to meet the demand in 2026. And per the first bullet point, that demand will only continue to climb.

The takeaway from this talk, similar to Cathy Pearl's, was that voice technology (namely, voice assistants and the IoT) can be implemented to offset the demand for caregivers by empowering our older adults. One overlapping message from both talks was that caregivers are largely burdened by menial tasks (turn on the light, close the blinds, change the TV channel), and the individuals being cared for are hyper-conscious of this. It gets exhausting for the caregiver, and for those receiving care, because they know how exhausting it is for the caregiver. Well, Siri, Alexa, and Google do not get exhausted; they're little AI bots, so who cares if you're issuing hundreds of commands a day? That's the beauty in this.
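
To make the menial-task point concrete, here's a minimal sketch of how a voice assistant might map those requests onto smart-home actions (the phrases and device names are hypothetical, not K4Connect's actual system):

```python
# Hypothetical phrase-to-action table for the menial tasks mentioned above.
ACTIONS = {
    "turn on the light": ("living_room_light", "on"),
    "turn off the light": ("living_room_light", "off"),
    "close the blinds": ("bedroom_blinds", "closed"),
    "change the channel to 5": ("tv", "channel_5"),
}

def handle_command(utterance: str) -> str:
    """Map a spoken request to a device command; never gets tired of it."""
    action = ACTIONS.get(utterance.lower().strip())
    if action is None:
        return "Sorry, I didn't catch that."
    device, state = action
    # In a real system this would call the IoT platform's API.
    return f"Setting {device} to {state}."

print(handle_command("Turn on the light"))  # -> "Setting living_room_light to on."
```

Real assistants replace the lookup table with natural language understanding, but the punchline is the same: the hundredth request costs the assistant nothing.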

Davis Park highlighting the success of Phase 2 of Front Porch’s Pilot

Following the talk, I spoke with Davis Park about the pilot and asked him what the Front Porch residents are using their Alexa devices for. "It's completely different based on the resident. For example, one woman said she loves it because she can now make the perfect hard-boiled egg," Davis said. This was a total aha! moment for me, because sometimes we don't appreciate the nuanced ways individuals find value in the oft-cited, sometimes belittled use cases of voice assistants today (weather, timers, news, scores, etc.). On the surface, sure, she's finding value in being able to set a timer, but dig a little deeper and you'll find the real value: she's no longer overcooking her hard-boiled eggs.

Slide from the session on the costs of loneliness and social isolation

The slide pictured above from the session illustrates why I see so much potential for voice technology, specifically for older adults. It’s becoming increasingly apparent through numerous research studies that loneliness and social isolation are severely detrimental to us as individuals, as well as to the broader economy.

The industry I come from, the world of hearing aids and hearing loss, understands these comorbidities all too well, as hearing loss is often correlated with social isolation. If your hearing is so diminished that you can no longer engage in social situations, you're more likely to withdraw and become socially isolated and lonely.

This is ultimately why I think we'll see voice assistants become integrated into this new generation of hearing aids. It kills two birds with one stone: it augments one's physical sound environment by providing amplification and the ability to hear more clearly, and it serves as an access point to a digital assistant that can be used to communicate with one's technology. One of the best solutions on the horizon for circumventing the rising demand for caregivers might be "digital caregivers" in the form of Alexa or Google housed in hearing aids and other hearable devices.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”