Daily Updates, Future Ear Radio

Future Ear Daily Update: 4-4-19

VoiceFirst Health & HIPAA-Compliance

Image: Alexa Developer Blog

Amazon revealed today that it has created a new class of HIPAA-compliant Alexa skills. The company has opened the capability up to a select number of developers as it slowly rolls out a skill kit built for working with sensitive healthcare data. Here's the official announcement from Amazon:

Today, we're excited to announce that the Alexa Skills Kit now enables select Covered Entities and their Business Associates, subject to the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), to build Alexa skills that transmit and receive protected health information as part of an invite-only program. Six new Alexa healthcare skills from industry-leading healthcare providers, payors, pharmacy benefit managers, and digital health coaching companies are now operating in our HIPAA-eligible environment. In the future, we expect to enable additional developers to take advantage of this capability.

So what does this change mean? It opens the door for how smart assistants can begin to be implemented in healthcare settings. Up until this point, implementing voice in a meaningful way in hospitals, doctors' offices, and patients' homes has been greatly hindered by the fact that much of the relevant data is protected by HIPAA. The six skills included in this initial rollout are:

  1. Express Scripts (pharmacy services company): ask Alexa about the status of your prescriptions.
  2. Cigna (global health services company): manage your health improvement goals and personalized wellness incentives through Alexa.
  3. Swedish Health Connect (healthcare system): find an urgent care center and make a same-day appointment through Alexa.
  4. Atrium Health (healthcare system): find an urgent care center and make a same-day appointment.
  5. Livongo (digital health company): check your glucose levels, which Alexa accesses through a connected device that records them, such as Omron's.
  6. Boston Children's Hospital "Enhanced Recovery After Surgery" (ERAS): parents can receive information regarding their child's post-op appointments.

CNBC reporter Christina Farr noted in her piece today that "The developers behind these skills pointed to the trend of bringing health to the home, which represents both a cheaper and more convenient option for the patient. It's also a way for providers, including doctors and nurses, to monitor patients once they leave the home, which both gives them an opportunity to prevent costly readmissions to the hospital."

As this HIPAA-compliant developer kit becomes more broadly accessible, we should begin to see use cases beyond these initial six. Alexa continues to expand beyond the home and into our cars, our offices, our earbuds, our classrooms, and even our healthcare system, offering patients a great new way to interact with their doctors, data, medicine, and all the other cogs in the healthcare machine. The #VoiceFirst community certainly seems to think this is quite meaningful.


-Thanks for Reading-

Dave

Daily Updates, Future Ear Radio

Future Ear Daily Update: 4-3-19

Master Assistants

Voicebot reported yesterday that a select number of Walmart customers can now begin ordering items via voice through Walmart's Google Assistant action. As Bret Kinsella pointed out in the article, Walmart had initially partnered with Google in a similar capacity back in 2018, allowing customers to order via voice through Google Express, but ended that particular collaboration in January 2019. The key difference here is that Google Express was controlled by Google and exposed consumers to other retailers, while the Google Action is solely controlled by Walmart.

This is a perfect representation of where things appear to be headed with smart assistants in a broad sense. That's to say that Alexa, Siri, Google Assistant, and Samsung's Bixby (I may be neglecting others) sit at the very top echelon, representing "master assistants" (I saw Ben Basche use that term once and thought it very eloquent). These master assistants play the role of facilitator between the user and the brand's, retailer's, or company's own smart assistant. The middleman, so to speak.

So in this particular example, Google Assistant is facilitating the interaction between Walmart and the user. As more and more companies become voice-enabled, they'll be tasked with two objectives. The first is to extend their brand into a voice experience, to the point of having their own assistant. The second is to work with the companies behind the master assistants to ensure that their voice experience is accessible through all the master assistants that broker exchanges between the company's assistant and the user.

Meanwhile, as everything on the internet becomes voice-enabled, the master assistants will be tasked with interconnecting disparate actions or skills to allow for more complex queries. For example, "Alexa, tell me when would be a good time to go to Montreal for vacation." This sends Alexa off in different directions to aggregate information for a response that factors in data from my work calendar (Outlook), airline prices (Southwest/Expedia), historical weather patterns (AccuWeather), lodging prices (Airbnb, Hotels.com), bands or sports teams playing near the city during that time (SeatGeek), etc.

A task that would take me 10-15 minutes of bouncing around apps or the web, Alexa handles in five seconds: "Ok, Dave, it looks like the first week of May would be ideal, or the last week of August, based on your schedule, budget, weather, and things to do. Would you like to learn more?" As the queries become more complex, the reduction in friction becomes more pronounced.
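Under the hood, that kind of compound query is essentially a fan-out-and-aggregate problem. Here's a minimal, hypothetical Python sketch of how a master assistant might farm the question out to several skills concurrently and merge the answers. The source names and the fetch function are stand-ins of my own, not any real Alexa or Google API.

```python
import asyncio

async def fetch(source: str, query: dict) -> dict:
    """Stand-in for a network call to one skill (calendar, flights, etc.)."""
    await asyncio.sleep(0.1)  # simulate network latency
    return {"source": source, "result": f"data for {query['city']}"}

async def plan_trip(city: str) -> list[dict]:
    query = {"city": city}
    sources = ["calendar", "flights", "weather", "lodging", "events"]
    # Query every skill concurrently, then merge the answers into one response.
    return await asyncio.gather(*(fetch(s, query) for s in sources))

if __name__ == "__main__":
    for answer in asyncio.run(plan_trip("Montreal")):
        print(answer)
```

The point of the concurrency is exactly the friction reduction described above: five sources answered in roughly the time of the slowest one, rather than one at a time.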

There's a massive arms race going on right now among the top tech titans, and the reason is that they all want to own the dominant master assistant, because the master assistant appears set to become the master broker for all exchanges between users and companies via voice. This is why we're seeing Jeff Bezos go all-in on voice and dedicate a 10,000-person team to Alexa. Why Google tripled its floor space at this year's CES and had one of the biggest exhibits of all time, strictly dedicated to Google Assistant. Why Apple has… wait, never mind.

It’s still early and things can definitely change, but signs are pointing toward a two-class, smart assistant future.

-Thanks for Reading-

Dave


Daily Updates, Future Ear Radio

Future Ear Daily Update: 4-2-19

Bragi to Shut Down its Hardware Business

Image Source: MacRumors

Hugh Langley from Wareable reported yesterday that Bragi will be shutting down its hardware division, after receiving confirmation from Bragi CEO Nikolaj Hviid. Bragi will continue to exist, as it will license its IP and AI, but the hardware division has been sold to a mysterious third-party buyer (Starkey? Probably not).

It wasn't that long ago that the idea of an "ear-computer" (the true definition of a hearable) started to come to life in the market with offerings from Doppler Labs, Bragi, and Nuheara (which is still very much alive in the market). In October of 2015, one year before Apple unveiled AirPods, Doppler's founder, Noah Kraft, laid out the company's hearable vision to CNBC. "We want to put a computer, speaker, and mic in everyone's ear," Kraft said during the interview. "We have very lofty visions of the future, everything from real-time translation to personal assistants."

The hard reality is that starting a hardware company from scratch is really, really hard. This rings especially true as smart assistants continue to rise in adoption and popularity, and the companies that reside behind the assistants, Apple, Google, Amazon, and Samsung, are all looking toward our ears as a home for their assistants. In 2017, Doppler went bankrupt, and now, a year and a half later, fellow hearable manufacturer Bragi has shut down its hardware line too. Andy Bellavia from Knowles Corp summed it up very well with the tweet below:

(Screenshot of Andy Bellavia's tweet, 4-2-19)

Andy is exactly right: knowing where the future is headed is tough, but with enough research you can usually get a sense of which technologies are probable and which are not. Timing, however, is extremely tough, and although both Bragi and Doppler might be proven accurate in their vision of a hearables-filled future, some of the "killer use cases" (primarily smart assistant integration) were not yet mature back in 2015. You might be bullish on crypto, or AR/VR, or having computers in our ears, and it's probable that all of those will be widespread technologies in the future, but knowing when the market and all the supporting use cases will be ready for a given technology is incredibly hard to predict.

Cheers to these companies for trying to bring their vision of the future to life and helping to pave the way for other companies to learn from them and pick up the pieces where they left off. We’ll likely see new entrants into the market and old entrants thrive and die, but the march toward true in-the-ear computers continues on.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Future Ear Daily Update: 4-1-19

The Impact of Voice Technology on our Elderly Population

This weekend I listened to an episode of one of my favorite voice tech podcasts, "Alexa in Canada." Teri Fisher, the host of this podcast (as well as another called Voice First Health), is an MD by day and one of the most active members of the Voice community by night. He probably would have won the Alexa award for "commentator of the year" if he had not been going up against Bret Kinsella of Voicebot, who, let's be honest, is damn near impossible to beat for that award.

The episode I listened to was an interview with "tech futurist" Ian Utile. Ian has lived in Silicon Valley his whole life and has been working in and around the tech industry for 20+ years. During their conversation, Ian explains why he believes voice technology will be profoundly impactful for our aging population (something I completely agree with and have written a lot about).

Here’s a transcription of what Ian had to say:

"There's 10,000 people on average that turn 65 every day. Both of my parents are over 65. It wasn't that long ago that my grandmother was living at my parents' house and she got Alzheimer's and dementia and then she eventually passed away. The last few years of her life were very difficult for my grandmother but also my mother."

"… This is the future I imagine for the elderly that are set up with this type of voice system. They'll have an Alexa in their room. They will wake up, there will be some type of sensor that will know that they've just arisen. The lights will turn on, in just the way that is best for their physiology. The TV screen will come up and a photograph of their son or daughter, niece or nephew, or family member will appear on the screen."

“Alexa very gently says, maybe in Grandpa’s voice who has passed away, ‘Hey Gail, it’s Bob. This is a picture of our daughter and our son and their kids. Cindy’s going to come in the room in the next couple of minutes. She’s going to bring you your strawberries and your water, it’s what you like to have to eat. You are in Vancouver, Canada right now. This is where you’ve lived your whole life. I’m no longer with you, but you still have your family, hun. You love to listen to Tony Bennett and Frank Sinatra, what are you in the mood for right now? And she says, ok… I’ll take Tony Bennett. And the next thing she knows, Tony is singing to her and now all of sudden we’ve created true companionship.”

This is a very exciting scenario, and one that I believe is entirely plausible to orchestrate with the assistance of our voice-controlled smart assistants (there's actually already an Alexa skill, My Life Story, designed to do this to a certain degree). As Ian points out, 10,000 people per day are turning 65, and it's estimated that by 2029, 18% of Americans will be over the age of 65. We're living longer, too, so the question becomes, "How do we help care for our aging population?" I believe that a lot of that support can and will be offloaded to our smart assistants.
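To make "entirely plausible" concrete, here's a minimal Python sketch of the morning routine Ian describes. Everything in it is hypothetical: the device functions are stand-ins for whatever smart-home, display, and text-to-speech APIs a real system would wire together.

```python
from dataclasses import dataclass

@dataclass
class Resident:
    name: str
    greeting: str       # personalized spoken orientation
    photo: str          # family photo to show on the TV screen
    playlist: list      # favorite artists to offer

def set_lights(level: float) -> None:
    print(f"[lights] dimmed to {level:.0%}")      # stand-in for a smart-home call

def show_photo(path: str) -> None:
    print(f"[screen] showing {path}")             # stand-in for a display API

def speak(text: str) -> None:
    print(f"[voice] {text}")                      # stand-in for a TTS call

def on_wake(resident: Resident) -> None:
    """Triggered by a hypothetical wake-up sensor event."""
    set_lights(level=0.4)                         # gentle lighting for waking up
    show_photo(resident.photo)
    speak(resident.greeting)
    speak(f"What are you in the mood for, {resident.name}? "
          f"Maybe {' or '.join(resident.playlist)}?")

on_wake(Resident(
    name="Gail",
    greeting="Hey Gail, Cindy will bring your strawberries in a few minutes.",
    photo="family_2018.jpg",
    playlist=["Tony Bennett", "Frank Sinatra"],
))
```

The hard part in practice isn't this scripting; it's the sensor integration and the personalization, which is exactly the niche I'd expect companies to fill.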

It will be up to the elderly person's loved ones to help implement and facilitate this type of system, but so much of the caretaking that is currently shouldered by loved ones can be done in an increasingly sophisticated fashion by our smart assistants. As the voice economy continues to grow and more companies pop up to fill certain niches, you can imagine that a number of companies (e.g., LifePod) will arise to make this implementation and facilitation as easy as possible. The beauty of this is that it not only relieves some of the burden carried by loved ones, but also empowers the elderly by letting them control their physical surroundings (IoT) and access the internet's utility and information through simple voice commands. Voice has a very big role to play as our aging population grows and continues to live longer.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, Future Ear Radio Spotlight

Future Ear Daily Update 3-29-19

Future Ear Spotlight: Larry Guterman

Larry Guterman

Today I want to shine the spotlight on Larry Guterman, co-founder of the company SonicCloud (Sonitum). Larry's story is an interesting one. In his college days, he experienced sudden hearing loss after a loud party, and his hearing deteriorated from there to the point where it became a profound hearing loss.

One of the biggest issues Larry ran into with his hearing loss was that he couldn't get high-quality speech understanding on his phone calls. Hearing aids just weren't a viable solution, and since he traveled frequently as a Hollywood movie director, he struggled to have quality phone calls with his loved ones. So he took matters into his own hands, and in 2016 the company he co-founded started building its first product to solve this dilemma.

SonicCloud is a technology for personalizing streaming audio and phone calls designed for people with all levels of hearing loss. I caught up with Larry at the American Academy of Audiology convention and did the first-ever Future Ear Radio edition with a guest. During the short broadcast, Larry describes his technology and the intent behind it.

In short, while hearing aids provide an ambient solution for everyday noise in your physical surroundings, SonicCloud is software that functions as a "hearing aid for your digital environment" for all the sounds that come from your phone or computer. The app provides a fun, interactive hearing assessment to identify the type of hearing loss you have (there's a wide spectrum of loss types across people) and creates a hearing profile that is then used as a filter for the audio coming from your computer and phone. You apply your SonicCloud hearing profile to your digital audio.
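As an illustration of the general idea (and only that; SonicCloud's actual signal processing is proprietary), here's a toy Python sketch of a personalized per-band equalizer: boost the frequency bands where an assessment found reduced sensitivity. The bands and gains are made-up examples.

```python
import numpy as np

def apply_profile(samples: np.ndarray, rate: int, gains_db: dict) -> np.ndarray:
    """Boost each (low_hz, high_hz) band by its gain in dB."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1 / rate)
    for (lo, hi), gain_db in gains_db.items():
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (gain_db / 20)   # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))

# Example profile: normal low band, mild mid-band loss, steep high-frequency loss.
profile = {(0, 1000): 0.0, (1000, 4000): 10.0, (4000, 8000): 25.0}
rate = 16000
tone = np.sin(2 * np.pi * 3000 * np.arange(rate) / rate)  # 3 kHz test tone
print(apply_profile(tone, rate, profile).max())  # ~3.16x louder (+10 dB)
```

A real product would apply something like this continuously to the live audio stream, with far smoother band transitions, but the core concept of "profile as filter" is the same.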

For Larry, it allows him to take crystal-clear phone calls with standard headphones, which is a godsend given the severity of his hearing loss. For those of you reading this who were at the Alexa Conference, you may recognize Larry, as he and SonicCloud won the startup competition. As we continue to stream more video and audio, it's technology like SonicCloud that will help those with hearing loss enjoy their digital content and phone calls to the fullest extent.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Daily Updates, Future Ear Radio

Future Ear Daily Update 3-28-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

Future Ear Radio Now Available On Google Assistant


I’m very excited to announce that you can now access Future Ear Radio through Google Assistant! To enable the Google Action: https://assistant.google.com/services/a/uid/00000059c8644238

Kudos again to Witlingo for powering Future Ear Radio and working to have it accessible through Alexa and Google Assistant!

-Thanks for Reading-

Dave

Daily Updates, Future Ear Radio

Future Ear Daily Update: 3-27-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

AirPods 2.0 Video Review – Marques Brownlee

I wrote about the release of the new AirPods last week, but for this update I want to highlight a review video that paints an even clearer picture of what the new AirPods are all about. Marques Brownlee does a ton of really awesome tech gadget reviews, and his most recent video covers the new AirPods. Here are some of his takeaways after using them for a bit:

  • First and foremost, the most notable new feature is wireless charging. It's a tad odd that Apple's packaging reads "AirPower" when AirPower has yet to be released. Since AirPower doesn't exist yet, you'll need to use an existing third-party wireless charger. The case fully charges wirelessly in about an hour.
  • The H1 chip provides lower latency and faster pairing and switching between Apple devices (Mac to iPad to phone). According to Marques, it's noticeably faster.
  • 50% better battery life, though "Hey Siri" can drain the battery faster, as that feature is always on, waiting for the command (Apple should think about using a Knowles smart mic to avoid this).
  • They're compatible with Bluetooth 5.0.

Everything else is pretty much the same. There's no noticeable difference in the exterior of the devices, which really makes these feel like an "S" update from Apple, where all the improvements happen inside the device. For current AirPods owners, it probably doesn't make sense to upgrade unless you really want the wireless charging, but for new users, the new AirPods cost the same as the originals.

-Thanks for Reading-

Dave

Daily Updates, Future Ear Radio

Future Ear Daily Update: 3-26-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

Spotify to Acquire another Podcasting Company: Parcast

Parcast

In a press release issued today, Spotify announced that it would be acquiring Parcast, a premium podcast storytelling company. The Parcast acquisition marks the third podcasting company Spotify has purchased in 2019, following the Gimlet Media and Anchor acquisitions in early February. Similar to Gimlet Media, Parcast provides Spotify with another arm for content creation, as Parcast has launched 18 podcasts since its founding in 2016, with another 20 slated for this year.

Last month, I wrote a two-part series (part 1; part 2) for Voicebot.ai about an impending "Cambrian explosion" of audio content creation. As I outlined in those articles, there are a multitude of reasons why Spotify is investing heavily in podcasting. Look at the "attention economy" companies that emerged as mobile became the dominant computing platform: Facebook, YouTube (Alphabet), Netflix, Twitter, Snapchat. (Not all were hits on Wall Street, but they have all come to dominate various demographics' attention spans.)

Spotify CEO Daniel Ek seems to think that our ears are undervalued and that's about to change. Here's what he had to say in a blog post he wrote after the Gimlet and Anchor acquisitions:

“Consumers spend roughly the same amount of time on video as they do on audio. Video is about a trillion dollar market. And the music and radio industry is worth around a hundred billion dollars. I always come back to the same question: Are our eyes really worth 10 times more than our ears? I firmly believe this is not the case. For example, people still spend over two hours a day listening to radio — and we want to bring that radio listening to Spotify, where we can deepen engagement and create value in new ways. With the world focused on trying to reduce screen time, it opens up a massive audio opportunity.”

Given the rise of AirPods and the broader trend of in-the-ear devices becoming computerized, I think audio content consumption will become viable and convenient for increasing stretches of the day. Coupled with the rising intelligence of smart assistants serving as an audio-based UI (Voice First), the stage is set for much more audio content creation and consumption. I believe this content will be created across the full spectrum, in the same way video content is (think Netflix, down to high-end YouTubers, all the way down to Snapchat).

The audio content spectrum will range from premium studios, like Gimlet and Parcast, to independent and small studios, which will be compensated more effectively through Spotify's Anchor sponsorships, all the way down to individuals with access to free tools like Castlingo. While the attention economy for our eyes has been the battleground of the past ten years, the aural attention economy is setting up to be the next one. Expect a lot more jockeying by all the attention economy companies so that each is positioned to capture our attention via our ears as hearables and the voice tech space continue to mature.

-Thanks for Reading-

Dave

Daily Updates, Future Ear Radio

Future Ear Daily Update: 3-25-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

Hearing Aid Know Podcast with Achin Bhowmik of Starkey

Geoff Cooling recently interviewed Achin Bhowmik, Starkey's Chief Technology Officer and Executive Vice President of Engineering, on his Hearing Aid Know podcast. Achin worked at Intel for 17 years, ultimately as a VP in its Perceptual Computing Group, before joining Starkey two years ago. Since joining Starkey, he's been a driving force in reshaping the concept of hearing aids from single-function amplification devices to multi-functional ear computers, helping to create Starkey's Livio AI hearing aid.

It's a quick 17-minute interview that hits on a lot of what Starkey is aiming to do with Livio AI. Give it a listen below.

Livio AI represents one of the most advanced mini ear-computers on the market. It is embedded with inertial and biometric sensors that are used in conjunction with hearing aid usage data to create a daily "Thrive" score meant to encourage physical and cognitive wellness. Starkey has done a good job "gamifying" the data inside the companion Thrive app, so that users get a quick readout of their daily progress toward their goal, similar to how the Apple Watch uses rings as a visual readout of your daily progress toward your stand, move, and exercise goals. Along with recording and visualizing your cognitive and physical data, these sensors are also capable of detecting falls (every 11 seconds, an older adult is treated in a US emergency room for a fall). The newest iteration of Livio AI even includes a Valencell heart rate monitor, making Livio AI one of the smallest devices embedded with a PPG heart rate monitor.
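To make the gamification idea concrete, here's a hedged Python sketch of how a daily wellness score like this might combine sensor and usage data. The components, weights, and 100-point scale are my own assumptions for illustration; Starkey's actual scoring formula isn't public.

```python
def daily_wellness_score(steps: int, active_minutes: int,
                         wear_hours: float, engagement_events: int) -> int:
    """Combine body (activity) and brain (usage/engagement) components,
    capping each component at its daily target."""
    body = min(steps / 10_000, 1.0) * 30 + min(active_minutes / 30, 1.0) * 20
    brain = min(wear_hours / 12, 1.0) * 30 + min(engagement_events / 5, 1.0) * 20
    return round(body + brain)

print(daily_wellness_score(steps=7500, active_minutes=20,
                           wear_hours=10, engagement_events=4))  # -> 77
```

The capped-component design mirrors the Apple Watch rings analogy above: each behavior contributes up to a fixed share of the goal, so no single metric can max out the score on its own.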

In addition to being equipped with a variety of sensors, these hearing aids are also a home for smart assistants. Starkey has introduced its own smart assistant, the Thrive Assistant, which fields any query tied to the hearing aid (Which setting am I on? What's my Thrive score? What's my battery life?), while general questions (What's the weather?) go to the cloud and are fielded by Google Assistant. Livio AI is also capable of live language translation for 27 languages and voice-to-text transcription. As I said, these hearing aids are pushing the boundaries of what a hearing aid, and more broadly a hearable, can be.
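That local-versus-cloud split is a simple routing pattern. Here's a simplified Python sketch of it; the intent names and the google_assistant_query function are illustrative assumptions, not Starkey's real API.

```python
# Queries the device can answer itself are handled locally;
# everything else is handed off to a cloud assistant.
LOCAL_INTENTS = {
    "setting": lambda device: f"You're on the {device['setting']} setting.",
    "battery": lambda device: f"Battery is at {device['battery']}%.",
    "thrive":  lambda device: f"Your Thrive score is {device['thrive']}.",
}

def google_assistant_query(intent: str) -> str:
    return f"[cloud] forwarding '{intent}' to Google Assistant"

def route(intent: str, device: dict) -> str:
    handler = LOCAL_INTENTS.get(intent)
    if handler:
        return handler(device)             # answered on-device
    return google_assistant_query(intent)  # general query -> cloud

device = {"setting": "restaurant", "battery": 62, "thrive": 77}
print(route("battery", device))   # local answer
print(route("weather", device))   # cloud handoff
```

Keeping device-state queries local makes sense here: they need no connectivity, answer instantly, and never ship personal data off the hearing aid.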

During the interview (around the 5-minute mark), Achin says that the three goals of Livio AI are to "make uncompromising sound quality, turn the device with sensors and AI into a gateway to health, and the third is to make the device into a window of information." First and foremost, the core functionality, amplification, is the top priority. Since these devices are worn for extended periods of the day, they also represent the perfect opportunity for biometric data collection and a home for our smart assistants. An all-in-one little ear-computer that specializes in amplification.

-Thanks for Reading-

Dave

Daily Updates, Future Ear Radio

Future Ear Daily Update: 3-22-19

To listen to the broadcast with your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

and then say, “Alexa, launch Future Ear Radio.”

“We can all start to look up again.”

There has been some really great discussion on Twitter since the release of AirPods 2.0 on Wednesday, and I think this thread by Brian Norgard sums up best what's culminating on both the hearables and voice technology fronts.

(Screenshots of Brian Norgard's tweet thread)

I’m going to go tweet by tweet here:

1. "We can all start to look up again." This notion of looking up again is something I've thought a lot about since starting FuturEar. I actually mentioned this exact concept on the This Week in Voice podcast episode I did with Bradley Metrock and Sarah Storm back in February of 2018 (around the 11:15 mark).

This is one of the reasons why I'm so excited about moving toward a voice UI. I don't feel as if smartphones are anywhere near going away, nor do I want to get rid of my smartphone, but I would certainly like to use it less often. So much of what I use it for is task-specific "jobs": ordering an Uber, sending a Venmo payment, pulling up my boarding pass, booking a hotel room, and every other job that I rely on my smartphone for. Slowly moving those tasks to my smart assistants and away from my phone equates to less time on my phone. It means more time with my head up in the air, not staring at a little block of glass.

2 & 3. This is really awesome to see how very respected tech journalists, like Ben Bajarin, are now thinking about voice as a UI and the comparisons he’s drawing to previous operating systems. In a hearables-laden and mic-filled world, we need a UI that is conducive to ear-worn computers and ambient computers, the same way we needed a UI that was conducive to pocket-sized computers (mobile) or desktop computers that could connect to the internet (HTML).

I love the fact that Brian Roemmele is continuing to be proven miles ahead of everyone in his thinking about the voice space. I mean, of course he is; the guy wrote a massive manifesto about the term he coined, Voice First, back in the '70s. For those who have met Brian, seen him speak, or work in some capacity around the voice tech industry, we all already know Brian is at the forefront of where this technology is headed. It's great to see Brian being recognized for his foresight, as it's common knowledge for those who have been following him for a period of time.

4. While Apple is in a perfect position to play a very significant role in this new computing era, it remains to be seen whether they're actually going to seize it. As I wrote in Wednesday's update, Apple has made a number of moves recently under John Giannandrea that seem to signal a new focus on Siri, but the jury is still out on whether voice is a top priority under Tim Cook's Apple (Tim Apple).

-Thanks for Reading-

Dave