News Updates, Smart assistants, VoiceFirst

Smart Assistants Continue to Incrementally Improve


This week we saw a few developments in the #VoiceFirst space that might seem trivial on the surface, but represent landmark improvements toward more fully functional voice assistants. The first was Amazon’s introduction of “Follow-up Mode.” As you can see from Dr. Ahmed Bouzid’s video below, this removes the need to say “Alexa” before each successive question: with the setting activated, the mic stays open for a five-second window after each response. I know it seems minor, but this is an important step toward making communication with our assistants feel more natural.

The second, as reported by The Verge, was the introduction of Google’s multi-step smart home routines. Routines are an incremental improvement on the smart home in that they let you link multiple actions to a single command. If I had a bunch of smart appliances all synced to my Google Assistant, I could create a routine built around the voice command, “Honey, I’m homeeee,” that triggers my Google Assistant to start playing music, turn on my lights, adjust my thermostat, and so on. In the morning, I might say, “Rise and shine,” which then starts brewing a cup of coffee and reading my morning briefing, the weather and the traffic report.
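To make the idea concrete, here’s a minimal sketch of the trigger-phrase-to-actions mapping that a routine boils down to. Everything here is hypothetical and for illustration only; it is not Google’s actual Routines API:

```swift
// Hypothetical routine engine: one trigger phrase fans out to a list of
// smart home actions, run in order. Not Google's actual Routines API.
typealias Action = () -> Void

struct Routine {
    let triggerPhrase: String
    let actions: [Action]
}

let morningRoutine = Routine(
    triggerPhrase: "rise and shine",
    actions: [
        { print("Starting the coffee maker...") },
        { print("Reading your morning briefing...") },
        { print("Fetching weather and traffic...") },
    ]
)

func handle(command: String, routines: [Routine]) {
    for routine in routines where command.lowercased() == routine.triggerPhrase {
        routine.actions.forEach { $0() }  // every linked action fires off one command
    }
}

handle(command: "Rise and shine", routines: [morningRoutine])
```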

Routines will roll out in waves, in terms of which accessories and Android features are compatible. They make smart home devices that much more compelling for anyone interested in this type of home setup.

The last piece of news pertains to the extent to which Amazon is investing in its Alexa division. If you recall, Jeff Bezos said during Amazon’s Q4 earnings call in February that Amazon would “double down” on Alexa. Here’s one example of what “doubling down” might entail as Amazon continues to aggressively scale the Alexa division within the company:

As our smart assistants keep taking baby steps in their progression toward being true personal assistants, it’s becoming increasingly clear that this is one of the biggest arms races among the tech giants.

-Thanks for Reading-

Dave

 

Biometrics, hearables, News Updates, Smart assistants, VoiceFirst

Pondering Apple’s Healthcare Move


Outside Disruption

There have been a number of recent developments involving non-healthcare companies venturing into the healthcare space in some capacity. First, there was the joint announcement from Berkshire Hathaway, JP Morgan and Amazon that they intend to team up to “disrupt healthcare” by creating an independent healthcare company specifically for their collective employees. You have to take notice anytime you have three companies of that magnitude, led by Buffett, Bezos and Dimon, announcing an upcoming joint venture.

Not to be outdone, Apple released a very similar announcement last week stating that, “Apple is launching medical clinics to deliver the world’s best health care experience to its employees.” The new venture, AC Wellness, will start as two clinics near the new “spaceship” corporate office (the one where Apple employees keep walking into the glass walls). Here’s an example of what one of the AC Wellness job postings looks like:

AC Wellness job posting (per Apple’s job postings)

So in a matter of weeks, we have Amazon, Berkshire Hathaway, JP Morgan and now Apple publicly announcing that they plan to create distinct healthcare offerings for their employees. I don’t know what the three-headed joint venture will ultimately look like, or whether either of these ventures will extend beyond their employees, but I think there is a trail of crumbs to follow to discern what Apple might ultimately be aspiring to.

Using the Past to Predict the Future

If you go back and look at the timeline of some of Apple’s moves over the past four years, this potential move into healthcare seems less and less surprising. Let’s take a look at some of the software and hardware developments over the past few years, and how they might factor into Apple’s healthcare play:

The Software Development Kits – The Roads and Repositories


The first major revelation that Apple might be planning something around healthcare was the introduction of the software development kit (SDK) HealthKit, back in 2014. HealthKit allows third-party developers to gather data from various apps on users’ iPhones and feed that health data into Apple’s Health app (a pre-loaded app that comes standard on all iPhones running iOS 8 and above). For example, a third-party fitness app (e.g. Nike+ Run Club) can feed its data into Apple’s Health app, so that the user sees it alongside all of their other health-related data. In other words, Apple leveraged third-party developers to make its Health app more robust.
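For a sense of what that looks like in practice, here’s a minimal Swift sketch of a third-party app depositing a heart rate reading into the shared Health store via HealthKit (error handling trimmed; the sample values are made up):

```swift
import HealthKit

let healthStore = HKHealthStore()
let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate)!

// Ask the user for permission to write (and read) heart rate data.
healthStore.requestAuthorization(toShare: [heartRateType], read: [heartRateType]) { authorized, _ in
    guard authorized else { return }

    // A single reading: 72 beats per minute, recorded right now.
    let bpm = HKUnit.count().unitDivided(by: .minute())
    let sample = HKQuantitySample(type: heartRateType,
                                  quantity: HKQuantity(unit: bpm, doubleValue: 72),
                                  start: Date(),
                                  end: Date())

    // Deposit the sample into the Health repository, where it appears
    // alongside data from every other HealthKit-enabled app.
    healthStore.save(sample) { success, error in
        print(success ? "Sample saved" : "Save failed: \(String(describing: error))")
    }
}
```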

When HealthKit debuted in 2014, it was a bit of a head-scratcher, because the biometric data you can gather from a phone is limited and imprecise. Then Apple introduced its first wearable, the Apple Watch, in 2015, and suddenly HealthKit made a lot more sense, as the Apple Watch represented a much more accurate data collector. If your phone is in your pocket all day, you might get a decent pedometer reading of how many steps you’ve taken, but if you’re wearing an Apple Watch, you’ll record much more precise and actionable data, such as your heart rate.

Apple followed up a year later with a second SDK, ResearchKit. ResearchKit allows Apple users to opt into sharing their data with researchers conducting studies, providing a massive influx of new participants and data, which in turn can yield more comprehensive research. For example, researchers studying asthma developed an app to track Apple users suffering from asthma. 7,600 people enrolled through the app in a six-month program consisting of surveys about how they treated their asthma. Where things got really interesting was when the researchers started looking at ancillary data from the devices, such as each user’s geolocation, and pairing it with neighboring environmental data, such as the local pollen and heat indexes, to look for correlations.
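As a rough illustration, a study app assembles its surveys from ResearchKit building blocks along these lines (a minimal sketch; the exact initializers vary a bit between ResearchKit versions, and the identifiers and wording are made up):

```swift
import ResearchKit

// A one-question daily check-in, the kind of building block study apps
// are composed of. Identifiers and question text here are hypothetical.
let questionStep = ORKQuestionStep(identifier: "usedInhalerToday",
                                   title: "Daily Check-In",
                                   question: "Did you use your inhaler today?",
                                   answer: ORKAnswerFormat.booleanAnswerFormat())

let surveyTask = ORKOrderedTask(identifier: "dailyAsthmaSurvey",
                                steps: [questionStep])

// Presenting the task gives the participant ResearchKit's standard survey UI;
// the view controller's delegate receives the answers when they finish.
let taskViewController = ORKTaskViewController(task: surveyTask, taskRun: nil)
```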

Then in 2016, Apple introduced a third SDK, CareKit. This kit serves as an extension to HealthKit that lets developers build apps that track and manage medical care. The framework provides distinct modules for developers to build on, covering the common features a patient would use to “care” for their health: reminders around medication cadences, for example, or objective measurements taken from the device, such as blood pressure readouts. Additionally, CareKit provides easy templates for sharing data (e.g. with a primary care physician), which is what’s really important to note.
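Here’s a hedged sketch of what a CareKit medication reminder looks like, using the original (1.x-era) CareKit classes. The signatures are from memory and may differ slightly between framework versions, and the medication details are invented:

```swift
import CareKit

// A twice-daily medication reminder, CareKit 1.x style. Details are
// hypothetical; consult the framework version you target for exact APIs.
let schedule = OCKCareSchedule.dailySchedule(
    withStartDate: DateComponents(year: 2018, month: 3, day: 1),
    occurrencesPerDay: 2
)

let medicationActivity = OCKCarePlanActivity.intervention(
    withIdentifier: "bloodPressureMeds",
    groupIdentifier: nil,
    title: "Blood Pressure Medication",
    text: "10 mg, with food",
    tintColor: nil,
    instructions: "Take one tablet with breakfast and one with dinner.",
    imageURL: nil,
    schedule: schedule,
    userInfo: nil
)
// Adding the activity to an OCKCarePlanStore surfaces it on the patient's
// Care Card, where each scheduled dose can be checked off as taken.
```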

These SDKs served as the tools to create the roads and repositories for transferring and storing data. In the span of a few years, Apple turned its Health app into a very robust data repository, while incrementally making it easier to deposit, consolidate, access, build upon, and share health-specific data.

Apple’s Wearable Business – The Data Collectors


Along with the Apple Watch in 2015 and AirPods in 2016, Apple introduced a brand new, proprietary, wearable-specific computer chip to power these devices: the W1 chip. For anyone who has used AirPods, the W1 chip is responsible for the automatic, super-fast pairing to your phone. The first two series of the Apple Watch and the current, first-generation AirPods use the W1 chip, while the Apple Watch Series 3 now uses an upgraded W2 chip, which Apple claims is 50% more power efficient and boosts Wi-Fi speeds by up to 85%.

The W1 chip (image via The Verge)

Due to the size constraints of something as small as AirPods, chip improvements are crucial to the devices becoming more capable, as they let engineers allocate more space and power to other things, such as biometric sensors. In a Planet Analog article by Steve Taranovich, Dr. Steven LeBoeuf, the president of biometric sensor manufacturer Valencell, said, “the ear is the best place on the human body to measure all that is important because of its unique vascular structure to detect heart rate (HR) and respiration rate. Also, the tympanic membrane radiates body heat so that we are able to get accurate body temperature here.”

Renderings of AirPods with biometric sensors included

Apple seems to know this too, having filed three patents (1, 2 and 3) in 2015 around adding biometric sensors to AirPods. If Apple can fit biometric sensors into AirPods, then it’s feasible to think hearing aids can support biometric sensors as well. There are indicators that this is already becoming a reality, as Starkey announced an inertial sensor that will be embedded in its next line of hearing aids to detect falls. While wearables are currently the main loggers of biometric data, it’s very possible that our hearables will soon serve that role, as they sit on the optimal spot on the body to do so. A brand new use case for our ever-maturing ear computers.

AC Wellness & Nurse Siri

The timing of these AC Wellness clinics makes sense. Apple has had four years to build out the data layer of its offering via the SDKs. It has made it easy to access and share data between apps, while simultaneously making its own Health app more robust. At the same time, Apple now sells the most popular wearable and hearable, effectively owning the biometric data collection market. The Apple Watch is already beginning to yield the types of results we can expect when this all gets combined:

(Tweet: Apple Watch heart rate data helping to catch a pulmonary embolism)

To add more fuel to the fire, here’s how the AC Wellness about page reads:

“AC Wellness Network believes that having trusting, accessible relationships with our patients, enabled by technology, promotes high-quality care and a unique patient experience.”

“Enabled by technology” sure seems to indicate that these clinics will draw heavily from all the groundwork that’s been laid. It’s possible that patients would log their data via the Apple Watch (and down the line maybe AirPods/MFi hearing aids) and then transfer said data to their doctor. The preventative health opportunities around this type of combination are staggering. Monitoring glucose levels for diabetes. EKG monitoring. Medication management for patients with depression. These are just scratching the surface of how these tools can be leveraged in conjunction. When you start looking at Apple’s wearable devices as biometric data recorders and you consider the software kits that Apple is enabling developers with, Apple’s potential venture into healthcare begins making sense.

The last piece of the puzzle, to me, is Siri. With all of these other pieces in place, what patients really need is someone (or something) to understand the data they’re looking at. The pulmonary embolism example above assumes that every user will be able to catch that irregularity themselves. The more effective way would be to enlist an AI (Siri) to parse your data, alert you when something needs your attention, and coordinate with the appropriate doctor’s office to schedule time with a doctor. You’d then show up to the doctor, who can review the biometric data Siri sent over. If Apple were to give Siri her due and dedicate significant resources, she could be the catalyst to making this all work. That, to me, would be truly disruptive.
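That screening pass doesn’t have to be exotic to be useful. Here’s a hypothetical sketch of the kind of first-pass heuristic an assistant could run over resting heart rate readings to flag what a person might miss; a real system would use clinically validated models, not this:

```swift
// Hypothetical screening heuristic over resting heart rate readings (bpm).
// Flags any reading that jumps well above a rolling baseline.
func flagIrregularReadings(_ samples: [Double],
                           window: Int = 20,
                           threshold: Double = 1.3) -> [Double] {
    guard samples.count > window else { return [] }
    return samples.indices.dropFirst(window).compactMap { i in
        let baseline = samples[(i - window)..<i].reduce(0, +) / Double(window)
        return samples[i] > baseline * threshold ? samples[i] : nil
    }
}

// A steady stretch of readings, then a spike like the one in the tweet above.
let readings: [Double] = [62, 64, 61, 63, 65, 62, 60, 64, 63, 62,
                          61, 63, 64, 62, 65, 63, 61, 62, 64, 63, 118]
let flagged = flagIrregularReadings(readings)
if !flagged.isEmpty {
    // This is where "Nurse Siri" would step in: notify the user and
    // offer to schedule time with their doctor.
    print("Unusual heart rate readings detected: \(flagged)")
}
```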


-Thanks for Reading-

Dave

hearables, News Updates, Podcasts, Smart assistants, VoiceFirst

This Week in Voice – First Podcast Experience

 


This Thursday, I was fortunate to be invited by Bradley Metrock, host of the podcast “This Week in Voice,” to sit down with him and discuss the week’s top stories in Voice technology. I was joined by fellow panelist Sarah Storm, head of the cloud studio SpokenLayer, and the three of us went back and forth on what’s new in the VoiceFirst world.

The great thing about this podcast is that Bradley brings on a wide variety of people with different backgrounds, so each week you get a different perspective on the stories of the week. This week, we talked about the following five stories:

  1. New York Times: Why We May Soon Be Living in Alexa’s World
    This story serves as a revelation of sorts: it’s the realization that Alexa and the other smart assistants are not merely new gadgets, but represent a shift in how we communicate with computers as a whole.
  2. VoiceBot.ai: Spotify Working on New Smart Speaker? 
    The fact that Spotify posted two separate job openings for senior positions around a new hardware division turned a lot of heads. This is particularly interesting given the impending IPO, as Spotify might be looking to make some pretty dramatic moves prior to going public. Would Spotify be better off vertically integrating itself via partnerships/acquisitions, or is it possible for them to create a hardware division from scratch?
  3. Forbes: Meet the Voice Marketer 
    Voice represents an entirely new opportunity for brands to market themselves, but the question is how best do you use this new medium? With more personal data than ever at many of these brands’ disposal, it will be a challenge to balance the “creepy” with the truly proactive and engaging.
  4. Satire (I think): Local Man to Marry Alexa
     

  5. The Voice of Healthcare Summit
    Held at the Martin Conference Center at Harvard Medical School in Boston this August, this summit promises to be one of the best opportunities to gather with fellow Voice enthusiasts and healthcare professionals to collaborate and learn about applying Voice to healthcare. It will be an awesome event, and I encourage anyone who thinks it might be up their alley to go!

This was a great experience getting to sit in on this podcast and chat with Bradley and Sarah. I hope you enjoy this episode and cheers to more in the future!

Listen via:

Apple Podcasts; Google Play Music; Overcast; SoundCloud; Stitcher Radio; TuneIn

-Thanks for Reading-

Dave

Biometrics, hearables, News Updates, Smart assistants

5 Hearables and Voice First Takeaways from CES 2018


The annual Consumer Electronics Show (CES) took place this past week in Las Vegas, bringing together 184,000 attendees and a whole host of vendors in the consumer electronics space to showcase all of the new, innovative things they’re working on. Once again, smart assistants stole the show, making this the third year in a row that they’ve dominated the show’s overall theme. Along with the Alexa-fication of everything, there were a number of significant hearable announcements, each in some way incrementally improving and expanding on our mini ear-computers. Although I was not in attendance, these are my five takeaways from CES 2018:

1. The Alexa-fication of Everything


It seemed like just about every story coming out of this year’s show was in some way tied to an Alexa (or Google…but mainly Alexa) integration. We saw Kohler introduce the “connected bathroom” complete with a line of smart, Alexa-enabled mirrors, showers, toilets (yes, toilets), bathtubs and faucets. First Alert debuted its new Onelink Safe & Sound carbon monoxide and smoke detector with Alexa built-in. Harman revealed an Echo Show competitor, the JBL LINK View, powered by Google’s assistant.

My personal favorite of the smart-assistant integrations around the home was the inconspicuous smart light switch, the Instinct, by iDevices. Converting your home’s standard light switches to the Instinct greatly enhances their utility, adding motion-detection lighting, energy savings, and all the benefits of Alexa built right into your walls.

iDevices Instinct (per the iDevices website)

And that’s just the home; the car became another focus of smart assistant integration at this year’s show. Toyota announced that it would be adding Alexa to a number of its Toyota and Lexus cars starting this year. Kia partnered with Google Assistant to begin rolling that feature out this year too. Add those to the list of integrations already announced by Ford, BMW and Nissan. Mercedes decided it doesn’t need Google or Amazon and unveiled its own assistant. And finally, Anker debuted a Bluetooth smart charger, the Roav Viva, that brings Alexa into whatever car you’re in for only $50.

Alexa, Google and the other smart assistants are showing no sign of slowing down in their quest to enter every area of our lives.

2. Bragi Announces “Project Ears”

Bragi’s “Project Ears” is a combination of tinnitus relief and personalized hearing enhancement. This announcement was exciting for two reasons.

What’s particularly interesting about Bragi is its partnership with “Big 6” hearing aid manufacturer Starkey, and the byproducts of that partnership that we’re beginning to see. Last week, I wrote about Starkey’s announcement of the “world’s first hearing aid with inertial sensors” and how that was likely a byproduct of the Bragi partnership, as Bragi has been at the forefront of embedding sensors into small, ear-worn devices. Fast-forward one week to CES, and we see Bragi’s Project Ears initiative, which includes “tinnitus relief” by embedding tinnitus masking into the device to help relieve the ringing in one’s ears. So, we see Bragi incorporating elements of hearing aids into its devices, just as we saw Starkey incorporating elements of hearable technology into its hearing aids. The two seem to be leveraging each other’s expertise to further differentiate in their respective markets.

Project Ears (from Bragi’s website)

The second aspect of this announcement stems from Bragi’s newly announced partnership with Mimi Hearing Technologies. Mimi specializes in “personalized hearing and sound personalization,” and as a result, Bragi’s app will include a “scientific hearing test to measure your unique Earprint™.” In other words, the hearing test issued by Bragi’s app will be iterated on and improved through the Mimi partnership. Bragi wants to match you as accurately as possible to your own hearing profile, and this announcement shows it’s continuing to make progress in doing so.

3. Nuheara Unveils New Products & Utilization of NAL-NL2

Nuheara’s new products (from Nuheara’s press release)

Nuheara, the hearable startup from Down Under, introduced two new products at this year’s show. The first was the LiveIQ, a pair of wireless earbuds priced under $200. These earbuds will use some of the same technology as Nuheara’s flagship hearable, the IQBuds, and also provide active noise cancellation.

The second device introduced was the IQBuds Boost, which will essentially serve as an upgrade to the current IQBuds. The IQBuds Boost will use what Nuheara has dubbed “EarID™,” which will provide a more “personalized experience unique to the user’s sound profile.” Sounds familiar, right? Bragi’s “Earprint™” and Nuheara’s “EarID™” both aim to let the user further personalize their experience via each company’s companion app.
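Strip away the branding, and a personalized hearing profile boils down to something like per-frequency-band gain derived from your measured hearing thresholds. Here’s a hypothetical sketch of that idea using the classic “half-gain rule” from hearing aid fitting; it is not Bragi’s or Nuheara’s actual implementation:

```swift
// Hypothetical hearing profile: an in-app hearing test measures how loud
// each frequency band must be before you hear it (your threshold), and
// playback gain is boosted where those thresholds are elevated.
struct HearingProfile {
    // Measured thresholds in dB HL, keyed by band center frequency in Hz.
    let thresholds: [Double: Double]

    // Classic "half-gain rule": apply gain equal to half the hearing loss.
    // Prescriptive formulas like NAL-NL2 are far more sophisticated.
    func gain(forBand frequency: Double) -> Double {
        return (thresholds[frequency] ?? 0) / 2
    }
}

let myEarProfile = HearingProfile(thresholds: [500: 10, 1000: 15, 2000: 30, 4000: 45])
for band in [500.0, 1000.0, 2000.0, 4000.0] {
    print("Boost \(Int(band)) Hz by \(myEarProfile.gain(forBand: band)) dB")
}
```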

In addition to the new product announcements, Nuheara also announced a partnership with the National Acoustic Lab (NAL), “to license its international, industry-recognized NAL-NL2 prescription procedure, becoming the only hearable company globally to do this.”

Here’s what Oaktree Products’ in-house PhD audiologist, AU Bankaitis, had to say about the significance of this announcement:

“Kudos to NuHeara for partnering with the National Acoustic Lab (NAL), the research arm of a leading rehabilitation research facility that developed the NAL-NL2 prescriptive formula commonly applied to hearing instruments. It will be interesting to see how this partnership will influence future IQBud upgrades. Whether or not this approach will result in a competitive advantage to other hearables remains to be seen. Research has clearly shown that relying on a fitting algorithm without applying objective verification with probe-mic measurements often times results in missing desired targets for inputs and frequencies most critical for speech. “

4. Qualcomm Introduces New Chipset for Hearables

Some of the most exciting innovation happening in the wearable market, and in the hearable sub-market in particular, is taking place under the hood of the devices. Qualcomm’s new chipset, the QCC5100, is a good example: these chips will reduce power consumption by 65%, allowing for increased battery life. Per Qualcomm’s SVP of Voice & Music, Andy Murray:

“This breakthrough single-chip solution is designed to dramatically reduce power consumption and offers enhanced processing capabilities to help our customers build new life-enhancing, feature-rich devices. This will open new possibilities for extended-use hearable applications including virtual assistants, augmented hearing and enhanced listening.”

It’s wild to think that it was only back in 2016 (pre-AirPods) when battery life and connectivity stood as major barriers to entry for hearable technology. AirPods’ W1 chip dramatically improved both, and now we see other chip makers rolling out incremental improvements, further reducing those initial roadblocks.

5. Oticon Wins Innovation Award for Its Hearing Fitness App

Oticon’s upcoming “hearing fitness app,” to be used in conjunction with Oticon’s Opn hearing aids, illustrates the potential of this new generation of hearing aids to harness the power of user data. The app gathers data on your hearing aid usage and displays it in readouts somewhat similar to Fitbit’s slick dashboards. That usage data can then be used to further enhance the experience based on the listening environments the user encounters. So, not only will this empower users, it will also serve as a great tool for audiologists to further customize the device for their patients using real data.

Oticon’s Hearing Fitness App wins a CES 2018 Innovation Award

Furthermore, the app can integrate data from other wearable devices, so that all of the data is housed together in one place. It’s another step toward the idea that hearing aids are undergoing a makeover into multi-function devices, including “biometric data harvesting” that provides actionable insight into one’s data. For example, if my hearing aids are recording my biometric data, and my app notifies me that my heart rate is acting funky or my vitals are going sideways, I can send that data to my doctor and see what she recommends. That’s what this type of app could ultimately become, beyond measuring one’s “hearing fitness.”

What were your favorite takeaways from this year’s show? Feel free to comment or share on Twitter!

I will be traveling to the Alexa Conference this week in Chattanooga, Tennessee and will surely walk away from there with a number of exciting takeaways from #VoiceFirst land, so be sure to check back in for another rundown next week.

-Thanks for Reading-

Dave

Biometrics, hearables, Live-Language Translation, News Updates, Smart assistants

2018 Starts with a Bang

Editor’s Note: In my initial post, I mentioned that along with the long-form assessments I’ve been publishing, I’d also be doing short, topical updates. This is the first of those updates.

In the first week of 2018, we saw a handful of significant updates that pertain to various trends converging around ears. Here’s a rundown of what you need to know:

Amazon introduces the Amazon Mobile Accessory Kit (AMAK)


As Voicebot.ai reported from an Amazon blog post, Amazon’s new Mobile Accessory Kit will allow for much easier (and cheaper) Alexa integration into OEMs’ devices, such as hearables. It has been possible to integrate Alexa into third-party devices before, but this kit offers a much simpler process for turning Bluetooth audio hardware into Alexa-integrated hardware. This is great news, as it will surely put Alexa in more and more of our ear-worn devices.

Per Amazon’s senior product manager, Gagan Luthara:

“With the Alexa Mobile Accessory Kit, OEM development teams no longer need to perform the bulk of the coding for their Alexa integration. Bluetooth audio-capable devices built with this new kit can connect directly to the Alexa Voice Service (AVS) via the Amazon Alexa App (for Android and iOS) on the customer’s mobile device.”

Starkey Announces Exciting Additions to Next Generation Hearing Aids

There were a number of exciting revelations at Starkey’s Biennial Expo, but two announcements really intrigued me. The first was the inclusion of “fall detection” sensors in Starkey’s next generation of hearing aids, making it the first hearing aid with inertial sensors.


On the surface, this is really great, as every 11 seconds an older adult is treated in an emergency room for a serious fall. The purpose of these sensors is to detect those types of falls so that the user can get immediate help. What’s even more intriguing is that we’re now beginning to see advanced sensors being built into this new wave of hearing aids. As I will write about soon, the preventative health benefits, combined with smart assistants, offer some very exciting possibilities and another promising use case for our ear-worn devices.
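Conceptually, fall detection from an inertial sensor watches the acceleration signal for the signature of a fall: a sharp impact spike followed by stillness. Here’s a toy sketch of that idea; Starkey’s actual algorithm is unpublished and certainly far more robust:

```swift
// Toy fall detector over accelerometer magnitude samples (in g).
// Real products use far more robust classifiers; this is just the shape
// of the idea: a hard impact spike followed by little or no movement.
func looksLikeFall(magnitudes: [Double],
                   impactThreshold: Double = 2.5,   // spike from hitting the ground
                   stillThreshold: Double = 1.1) -> Bool {  // ~1g afterwards = lying still
    guard let impactIndex = magnitudes.firstIndex(where: { $0 > impactThreshold }) else {
        return false
    }
    let afterImpact = magnitudes[(impactIndex + 1)...]
    return !afterImpact.isEmpty && afterImpact.allSatisfy { $0 < stillThreshold }
}

// Normal movement, a hard impact, then stillness.
let samples: [Double] = [1.0, 1.1, 0.9, 1.2, 3.4, 1.0, 1.0, 1.0, 1.0]
if looksLikeFall(magnitudes: samples) {
    print("Possible fall detected; alerting an emergency contact.")
}
```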


The second announcement was the upcoming live-language translation feature to be added to this same next generation of Starkey hearing aids. This stems from Starkey’s partnership with hearable manufacturer Bragi, which offers this feature on its Bragi Dash Pro. The live-language translation is not Bragi’s proprietary software; Bragi currently uses the third-party application iTranslate to power the feature on its device. Although it has not been announced formally, I expect that Starkey’s live-language translation feature will also be powered by iTranslate. Expect features like this to become more widespread across our connected devices as more manufacturers support this type of integration.

As we move into week two of 2018, expect another wave of exciting announcements coming out of CES. Check back here next week as I will be doing a rundown of the most important takeaways coming out of Vegas this week.

-Thanks for Reading-

Dave