
The State of Smart Assistants + Healthcare

Last week, I was fortunate to travel to Boston to attend the Voice of Healthcare Summit at Harvard Medical School. My motivation for attending was to better understand how smart assistants are currently being implemented across the various segments of our healthcare system and to learn what’s on the horizon in the coming years. If you’ve been following my blog or Twitter feed, then you’ll know that I envision a near-term future where smart assistants become integrated into our in-the-ear devices (both hearables and Bluetooth hearing aids). Once that integration becomes commonplace, I imagine we’ll see a number of really interesting and unique health-specific use cases that leverage the combination of the smartphone, sensors embedded on the in-the-ear device, and smart assistants.

Bradley Metrock, Matt Cybulsky and the rest of the summit team truly knocked it out of the park: the speakers and attendees brought a wide array of backgrounds and perspectives, which resulted in some very interesting talks and discussions. Based on what I gathered from the summit, smart assistants will yield different types of value to three groups: patients, remote caregivers, and clinicians and their staff.

Patients

At this point in time, none of our mainstream smart assistants are HIPAA-compliant, limiting the types of skills and actions that can be developed specifically for healthcare. Companies like Orbita are working around this limitation by taking the same building blocks used to create voice skills and building secure voice skills from scratch on its own platform. Developers who want to create skills/actions for Alexa or Google that use HIPAA-protected data, however, will have to wait until the smart assistant platforms become HIPAA-compliant, which could happen this year or next.

It’s easy to imagine the upside that will come with HIPAA-compliant assistants, as that would allow the smart assistant to retrieve one’s medical data. If I had a chronic condition that required me to take five separate medications, Alexa could audibly remind me to take each of the five, by name, and respond to any questions I might have about any of those medications. If I told Alexa about a side effect I was having, Alexa might even be able to identify which of the five medications could be causing it and loop in my physician for her input. As Brian Roemmele has pointed out repeatedly, the future of our smart assistants is routed through each of our own personalized, contextual information, and until these assistants are HIPAA-compliant, the assistant has to operate at a general level rather than a personalized one.
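To make the medication-reminder idea concrete, here’s a minimal sketch in Python of the kind of logic such a skill might run behind the scenes. Everything here is hypothetical: the medication list is hard-coded for illustration, whereas a real HIPAA-compliant skill would pull it from the patient’s health record.

```python
from datetime import time

# Hypothetical medication schedule, hard-coded for illustration; a real
# HIPAA-compliant skill would pull this from the patient's health record.
MEDICATIONS = {
    "lisinopril": {"dose": "10 mg", "when": time(8, 0)},
    "metformin": {"dose": "500 mg", "when": time(8, 0)},
    "atorvastatin": {"dose": "20 mg", "when": time(21, 0)},
}

def reminder_utterance(med_name: str) -> str:
    """Build the spoken reminder for a single medication, by name."""
    med = MEDICATIONS[med_name]
    return f"It's time to take your {med['dose']} of {med_name}."

def due_now(current: time, window_minutes: int = 30) -> list[str]:
    """Return the medications scheduled within window_minutes of `current`."""
    current_min = current.hour * 60 + current.minute
    due = []
    for name, med in MEDICATIONS.items():
        scheduled_min = med["when"].hour * 60 + med["when"].minute
        if abs(scheduled_min - current_min) <= window_minutes:
            due.append(name)
    return due
```

At 8:15 a.m., `due_now(time(8, 15))` would flag lisinopril and metformin, and the assistant would speak the corresponding reminder for each, by name.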

That’s not to say there isn’t value in generalized skills, or in skills that can be personalized because they don’t use data falling under the HIPAA umbrella. Devin Nadar from Boston Children’s Hospital walked us through their KidsMD skill, which allows parents to ask general questions about their children’s illness, recovery, symptoms, etc., with the peace of mind that the answers they’re receiving have been sourced and vetted by Boston Children’s Hospital; it’s not just random responses retrieved from the internet. Cigna’s Rowena Track showed how their skill lets you check things such as your HSA balance or urgent care wait times.

Caregivers and “Care Assistants”

By 2029, 18% of Americans will be over the age of 65, and average US life expectancy is already climbing above 80. That number will likely continue to climb, which brings us to the question, “how are we going to take care of our aging population?” As Laurie Orlov, industry analyst and writer of the popular Aging in Place blog, so eloquently stated during her talk, “The beneficiaries of smart assistants will be disabled and elderly people…and everyone else.” So, based on that sentiment and the fact that the demand to support our aging population is rising, enter into the equation what John Loughnane of CCA described as “care assistants.”

From Laurie Orlov’s “Technology for Older Adults: 2018 Voice First — What’s Now and Next” Presentation at the VOH Summit 2018

As Laurie’s slide above illustrates, smart assistants, or “care assistants” in this scenario, help to triangulate the relationship between the doctor, the patient, and those taking care of the patient, whether caregivers or family. These “care assistants” can effectively be programmed with helpful responses around medication cadence, what the patient can or can’t do and for how long those restrictions apply, what they can eat, and when and how to change bandages. In essence, the “care assistant” serves as an extension of the caregiver and the trust they provide, allowing for more self-sufficiency and, therefore, less of a burden on the caregiver.

As I have written about before, the beauty of smart assistants is that even today, in their infancy and primitive state, they can empower disabled and elderly people in ways that no previous interface has. This matters from a fiscal standpoint too: Nate Treloar, President of Orbita, pointed out that social isolation costs Medicare $6.7 billion per year. Smart assistants act as a tether to our collective social fabric for these groups, and multiple doctors at the summit cited disabled or elderly patients who described their experience of using a smart assistant as “life changing.” What might seem trivial to you or me, like being able to send a message with your voice, might be truly groundbreaking to someone who has never had that type of control.

The Clinician and the System

The last group that stands to gain from this integration is doctors and those working in the healthcare system. According to the Annals of Internal Medicine, for every hour a physician spends with a patient, they spend two hours on related administrative work. That’s terribly inefficient and something that I’m sure drives physicians insane. The drudgery of clerical work seems ripe for smart assistants to provide efficiencies: dictating notes, quickly retrieving past medical information, sharing that information across systems, and so on. Less time doing clerical work and more time helping people.

Boston Children’s Hospital uses an internal system called ALICE and by layering voice onto this system, admins, nurses and other staff can very quickly retrieve vital information such as:

  • “Who is the respiratory therapist for bed 5?”
  • “Which beds are free on the unit?”
  • “What’s the phone number of the MSICU Pharmacist?”
  • “Who is the Neuro-surgery attending?”

And boom, you quickly get the answer to any of these. That’s removing friction in a setting where time might really be of the essence. As Dr. Teri Fisher, host of the VoiceFirst Health podcast, pointed out during his presentation, our smart assistants can be used to reduce the strain on the overall system by playing the role of triage nurse, admin assistant, healthcare guide and so on.
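While ALICE’s internals aren’t public, queries like the ones above are a classic intent-matching problem: map a spoken utterance to an intent, extract the slot (a bed number, a role), and look up the answer in the hospital’s directory. Here’s a hypothetical, simplified sketch in Python; the patterns, names, and directory entries are all invented for illustration.

```python
import re

# Toy directory standing in for a real hospital information system; these
# names and extensions are invented for illustration.
DIRECTORY = {
    ("respiratory_therapist", "bed 5"): "J. Smith",
    ("phone", "msicu pharmacist"): "x4212",
}

# Each pattern pairs an utterance shape with an intent label.
INTENT_PATTERNS = [
    (re.compile(r"who is the respiratory therapist for (bed \d+)", re.I),
     "respiratory_therapist"),
    (re.compile(r"what'?s the phone number of the (.+)", re.I),
     "phone"),
]

def answer(query: str) -> str:
    """Match a spoken query to an intent, extract its slot, and look it up."""
    for pattern, intent in INTENT_PATTERNS:
        match = pattern.search(query)
        if match:
            slot = match.group(1).lower().rstrip("?. ")
            return DIRECTORY.get((intent, slot), "I don't have that information.")
    return "Sorry, I didn't understand that."
```

A real deployment would use a natural-language understanding service rather than regexes, but the flow — utterance in, intent and slot out, directory lookup, spoken answer back — is the same shape.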

What Lies Ahead

It’s always important with smart assistants and Voice to temper current expectations while remaining optimistic about the future. Jeff Bezos joked in 2016 that “not only are we in the first inning of this technology, we might even be at the first batter.” It’s early, but as Bret Kinsella of Voicebot displayed during his talk, smart speakers represent the fastest adoption of any consumer technology product ever:

From Bret Kinsella’s “Voice Assistant Market Adoption” presentation at the VOH Summit 2018

The same goes for how smart assistants are being integrated into our healthcare system. Much like Bezos’ joke implies, very little of this is even HIPAA-compliant yet. That said, you still have companies and hospitals the size of Cigna and Boston Children’s Hospital putting resources toward building out their offerings for an impending VoiceFirst world. We might not be able to offer true, personalized engagement with the assistant yet, but there’s still a lot of value to be derived at the general level.

As this space matures, so too will the degree to which we can unlock efficiencies across our healthcare system. Patients of all ages and medical conditions will be more empowered to receive information, prompts, and reminders to better manage their conditions. That means those taking care of patients are less burdened too, as they can offload the informational aspect of their caregiving to the “care assistant.” This, in turn, frees up the system as a whole: fewer general inquiries (and, down the line, personal inquiries) mean fewer patients need to come in because more can be served at home. Finally, clinicians can be more efficient too, as they can offload clerical work to the assistant, better retrieve data and information on a patient-by-patient basis, and communicate more efficiently with their patients, even remotely.

As smart assistants become more integral to our healthcare system, my belief is that on-body access to the assistant will be desired. Patients, caregivers, clinicians and medical staff all have their own reasons for wanting their assistant right there with them at all times. What better place than a discreet, in-the-ear device that allows for one-to-one communication with the assistant?

-Thanks for Reading-

Dave
