Biometrics, Daily Updates, Future Ear Radio, hearables, wearables

Wearables Grow Up (Future Ear Daily Update 9-19-19)


One of my favorite podcasts, a16z, recently created a secondary, news-oriented show, “16 Minutes.” 16 Minutes is great because the host, Sonal Chokshi (who also hosts the a16z Podcast), brings on various in-house experts from the a16z venture capital firm to provide insight into each week’s news topics. This week, Sonal brought on general partner Vijay Pande to discuss the current state of wearable computing. For today’s update, I want to highlight this eight-minute conversation (it was one of two topics covered on this week’s episode – fast-forward to 7:45) and build on some of the points Sonal and Vijay make during their chat.

The conversation begins by covering a recent deal struck between the government of Singapore and Fitbit. Singaporeans will be able to register to receive a Fitbit Inspire band for free if they commit to paying $10 a month for a year of the company’s premium coaching service. This is part of Fitbit’s pivot toward a SaaS business and a stronger focus on informing users about what the data being gathered actually means. Singapore’s Health Promotion Board will therefore have a sizeable portion of its population (Fitbit’s CEO projects 1 million of the country’s 5.6 million citizens will sign up) consistently monitoring their data via wearable devices that can be tied to each citizen’s broader medical records.

This then leads to a broader conversation about the ways in which wearables have been maturing and, in many ways, growing up. To Vijay’s point, we’re moving way beyond step-counting into much more clinically relevant, measurable data. Consumer wearables are increasingly being outfitted with more sophisticated, medical-grade sensors, and combined with the longitudinal data that can be gathered because they’re worn all day, that creates a combination we haven’t seen before. Previously, we’ve been limited to sporadic data that’s really only gathered when we’re in the doctor’s office. Now, we’re gathering some of the same types of data by the minute, and at the scale of millions and millions of people.

Ryan Kraudel, VP of Marketing at biometric sensor manufacturer Valencell, made me aware of this podcast episode (thanks, Ryan) and added some really good points on Twitter about what he’s been observing these past few years. A big part of what’s different between today’s wearables and the first-generation devices is the combination of more mature sensors proliferating at scale and the increasingly sophisticated machine learning and AI layer being overlaid on top to assess what the data is telling us.

To Sonal’s point, we’ve historically benchmarked our data against the collective averages of the population, rather than against our own personal data, because we haven’t had the ability to gather personal data in the ways we can now. When you record longitudinal data over a long period of time, such as multiple years, you start to get really accurate baseline measurements unique to each individual.

This enables a level of personalization that will open the door to preventative health use cases. This is an application I’ve been harping on for a while: the ability to have AI/ML constantly assess your wearable data and help identify risks, based on your own historical cache of data that’s years and years old. The user can then be notified of emerging threats to their health. To Vijay’s point at the end, in the near future our day-to-day will not be that different, but what we’re learning will be radically different, as you’ll be measuring certain metrics multiple times per day rather than once a year during your check-up.
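To make that concrete, here’s a rough sketch (in Python, with made-up numbers) of what benchmarking against your own baseline could look like in practice: build a rolling personal baseline from longitudinal readings and flag anything that deviates sharply from it. The metric, window size, and threshold are illustrative assumptions, not any vendor’s actual algorithm.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=60, z_threshold=3.0):
    """Flag readings that deviate sharply from a user's own rolling baseline.

    readings: chronological values for one metric (e.g., daily resting heart rate).
    window: how many prior readings make up the personal baseline.
    z_threshold: how many standard deviations from baseline counts as a risk signal.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # no variation in the baseline window; skip rather than divide by zero
        z = (readings[i] - mu) / sigma
        if abs(z) >= z_threshold:
            alerts.append((i, readings[i], round(z, 1)))
    return alerts

# Hypothetical example: roughly a year of steady daily readings, then a late spike.
history = [62 + (i % 3) for i in range(360)] + [63, 78, 81]
print(flag_anomalies(history))  # flags the readings that break the personal baseline
```

The point of the sketch is the shift in reference frame: the alert threshold is derived from that individual’s own history rather than from a population average.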

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

audiology, Daily Updates, Future Ear Radio, hearables, hearing aids

Signia’s Acoustic-Motion Sensors (Future Ear Daily Update 9-18-19)

Acoustic Motion Sensor 2.jpg

Much of what I am excited about right now in the world of consumer technology broadly, and in wearables/hearables/hearing aids more narrowly, is the innovation happening at the component level inside the devices. I’m still reeling a bit from Apple’s U1 chip embedded in the iPhone 11 and its implications, which I wrote about here. New chips, new wireless technologies, new sensors, new ways to do cool things. Now we can add another one to the list: acoustic-motion sensors, which will be included in Signia’s new line of hearing aids, Xperience.

Whereas video and camera systems rely on optical motion detection, Signia’s hearing aids will use their mics and sensors to assess changes in the acoustic environment. For example, if you move from sitting at a table speaking face to face with one person to a group setting where you’re standing around a bar, the idea is that the motion sensors would react to the new acoustic setting and automatically adjust the mics accordingly, from directional to omnidirectional settings and balances in between.

These acoustic-motion sensors are part of a broader platform that simultaneously uses two processors, Dynamic Soundscape Processing and Own Voice Processing. The Own Voice processor is really clever. It’s “trained” for a few seconds to identify the user’s voice and differentiate it from other people’s voices that will inevitably be picked up through the hearing aid. This is important, as multiple studies have found that a high number of hearing aid wearers are dissatisfied with the way their own voice sounds through their hearing aids. Signia’s Own Voice processor was designed specifically to alleviate that effect.

Now, with acoustic-motion sensors constantly monitoring changes in the acoustic setting, the Dynamic Soundscape processor will be alerted by the sensors to adjust on the fly and provide a more natural-sounding experience. The hearing aid’s two processors will then communicate with one another to determine which processor each sound should feed into. If you ask me, that’s a lot of really impressive functionality and moving pieces for a device as small as a hearing aid to handle, but it’s a testament to how sophisticated hearing aids are rapidly becoming.
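Signia hasn’t published the decision logic that ties the sensor data to the mic settings, but conceptually it resembles a small state machine. Here’s a toy sketch, purely illustrative and not Signia’s actual algorithm, of how motion and acoustic inputs could drive the choice of microphone mode:

```python
def choose_mic_mode(is_moving: bool, talkers_detected: int, noise_level_db: float) -> str:
    """Toy decision logic for switching microphone directionality.

    The inputs and thresholds are assumptions for illustration,
    not Signia's published behavior.
    """
    if is_moving:
        # A wearer on the move needs awareness of sound from every direction.
        return "omnidirectional"
    if talkers_detected <= 1 and noise_level_db > 65:
        # Seated with one conversation partner in a noisy room: narrow the beam.
        return "directional"
    if talkers_detected > 1:
        # Standing around a bar with a group: open up, with some noise management.
        return "omnidirectional-with-noise-reduction"
    return "balanced"

print(choose_mic_mode(is_moving=False, talkers_detected=1, noise_level_db=72))  # directional
```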

I’ve written extensively about the innovation happening inside the devices, and what’s most exciting is that the more I learn about what’s happening, the more I realize we’re really only getting started. A quote that still stands out to me from Brian Roemmele’s U1 chip write-up is this:

“The accelerometer systems, GPS systems and IR proximity sensors of the first iPhone helped define the last generation of products. The Apple U1 Chip will be a material part of defining the next generation of Apple products.” – Brian Roemmele

To build on Brian’s point, it’s not just the U1 chip; it’s all of the fundamental building blocks being introduced that are enabling this new generation of functionality. Wearable devices in particular are poised to explode in capability because the core pieces required for all of the really exciting stuff that’s starting to surface are maturing to the point where it’s feasible to begin implementing them in devices as small as hearing aids. There is so much more to come with wearable devices as the components inside them continue to be innovated on, which will then manifest in cool new capabilities, better products, and ultimately, better experiences.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, Podcasts

NPR’s Booming Podcast Business (Future Ear Daily Update 9-17-19)

NPR Podcast Business 3

A report from Current surfaced last week that NPR is projecting its podcast revenue to surpass its radio broadcast revenue for the first time next year. The information comes from NPR’s membership meeting held on September 5th, where NPR CFO Deborah Cowan announced that corporate sponsorships for podcasts are projected to grow to $55 million next year. If projections are met, broadcast and podcast revenue would combine to $114.5 million, double NPR’s sponsorship revenue from just three years ago.

If you recall, NPR helped drive the initial boom of podcasting with one of the first mainstream podcast hits, Serial. In addition to Serial, NPR tends to rank highly in the podcast charts across a multitude of categories with shows like Planet Money, How I Built This, Fresh Air, and This American Life. NPR has rather brilliantly augmented itself with a whole array of on-demand podcasts, a mix of new shows made specifically to be podcasts and popular broadcast shows that have been converted into on-demand podcasts.

NPR’s ability to cater to traditional methods of consumption while reinvesting its profits into new ways to consume its content is starting to show, beyond just the increasing podcasting revenue. I recently wrote about NPR’s process of making its Morning Edition segment available to Alexa users and feeding the local broadcast to each user based on their location. The project lead, who detailed the process in a Medium post, really helped to illustrate how forward-thinking NPR is about pushing the boundaries of how listeners are able to access its content. It’s impressive stuff.

In that same update, I wrote about how NPR might be in a great position to follow in the BBC’s footsteps by launching its own “mini-assistant.” The idea goes that if an entity hosts a certain level of content, it might make more sense to go the way of a mini-assistant, which would be a home for multiple skills. In that scenario, NPR would be able to use its assistant to be the master of its domain and more intelligently and accurately connect listeners with the type of content they’re looking for, just as the BBC will be attempting to do with “Beeb.”

With it becoming apparent that NPR’s podcasting business is booming and eclipsing traditional revenue, it will be interesting to see how heavily it pushes the boundaries of its content distribution and facilitation to further increase its on-demand listenership and continue growing podcast revenue. Might we see NPR take its podcasting ambitions a step further and launch its own mini-assistant to house its content, serve as the operator that redirects listeners to what they’re looking for, and interface with master assistants like Alexa and Google Assistant in the background? We shall see, but there’s a growing financial incentive to do just that.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Apple, Daily Updates, Future Ear Radio, VoiceFirst

Apple’s U1 Chip (Future Ear Daily Update 9-13-19)

Apple U1 Chip

Perhaps the most interesting revelation from Apple’s product event Tuesday was something that was never announced during the presentation: Apple’s U1 chip, which will be embedded in the iPhone 11 and iPhone 11 Pro. The only reference to the chip was an image of it on one of Phil Schiller’s slides. For today’s update, I want to share what I have learned over the past few days about the implications of the U1 chip from people a lot more knowledgeable about this technology than me.

Image from Brian Roemmele’s Quora post regarding Apple’s September 10th, 2019 event and the introduction of the U1 chip

First of all, I’ll be the first to admit that I was unaware that Apple was going to be bringing this type of chip to market. The person who really turned me onto understanding what’s going on here was none other than Brian Roemmele. Brian wrote an incredible Quora post that completely breaks down what’s going on with this chip. Much of his analysis stems from the 35+ years of patent analysis he’s been doing, largely centered around Apple’s patents. His Quora post now serves as the foundation for my understanding of the U1 chip, much like how his post on the Apple Card helped to shape my thinking around the trajectory of Apple’s finance goals. Both are well worth your time and offer way more depth than what you’ll find here.

The U1 chip is powered by Ultra Wideband (UWB) radio technology, which uses very low energy levels for short-range, high-bandwidth communications. This type of wireless technology is perfect for spatial awareness and precise positioning. It’s basically a competitor to Bluetooth, but more accurate: it’s able to locate an object to within roughly 10 centimeters, as opposed to the current version of Bluetooth, which is accurate to roughly a meter. It’s also currently about four times faster than Bluetooth. Brian suggests that the power requirements for UWB are so low that you could likely power a UWB device with a coin cell the size of a hearing aid battery, and it would last upwards of a year.
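To see why the 10-centimeter figure is plausible, it helps to run the time-of-flight arithmetic: radio waves travel roughly 30 centimeters per nanosecond, so centimeter-level ranging demands sub-nanosecond timing, which is exactly what UWB’s short, wide-bandwidth pulses make practical. A quick sketch of that math (illustrative numbers, not Apple’s specs):

```python
SPEED_OF_LIGHT = 3.0e8  # meters per second

def distance_error(timing_error_seconds: float) -> float:
    """Distance uncertainty introduced by a given time-of-flight timing error."""
    return SPEED_OF_LIGHT * timing_error_seconds

# Every nanosecond of timing error translates into roughly 30 cm of position error,
# so resolving an object to ~10 cm requires sub-nanosecond timing precision.
for ns in (1.0, 0.33, 0.1):
    print(f"{ns:>4} ns timing error -> ~{distance_error(ns * 1e-9) * 100:.0f} cm position error")
```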

Although UWB has been around for decades, Bluetooth has ultimately won out to this point because it has historically been cheaper to implement. As a result, we have all types of legacy infrastructure and systems built around Bluetooth, so even though UWB might be more technologically advanced and capable, the incumbent and current standard, Bluetooth, is going to be tough to unseat. There are only a handful of companies that can truly influence something such as the preferred method of connectivity and wireless communication, and Apple is one of them.

The first application of this chip will be the ability to point your phone at any fellow U1 chip user and quickly AirDrop them files. However, there’s much, much more to this. Brian, as he is in so many instances, was way out ahead of this announcement. As the event approached, he was cryptically tweeting about “following the balloons.” I now know that Brian was referring to Apple using the U1 chip in an upcoming Tile-like product, which would allow the user to hold up their iPhone camera and see an AR overlay with a red balloon signifying the location of said product.

So, an obvious application of this chip would be to locate missing iOS devices embedded with the chip, including the Tile-like product, or friends’ and family members’ precise locations via the “Find My Friends” app (so long as they’re carrying an iOS device with the U1 chip on their person). Brian takes it a step further by suggesting that we’ll eventually be able to have Siri field questions based on data from the U1 chip too.

If we’re able to precisely locate objects using the U1 chip and create AR overlays to display their locations, then it’s conceivable that we’ll see this expand considerably. Here’s how Brian suggests this might evolve:

“The use case will allow for you to find a product like you would on a website with a whimsical Balloon, also used in the Find My app, to direct you to the precise location of the Apple product. With FaceID and Apple Pay you just look at your phone and confirm and leave. It is not hard to imagine many retail businesses adopting the system. It is also not hard to imagine AppleLocate used in industrial locations and medical locations.” – Brian Roemmele

Many astute folks on Twitter (like the ones whose tweets I’ve embedded in this post) are pointing out ways in which this type of chip might impact the trajectory of these emerging technologies. Beyond geo-location, it would seem that UWB will be foundational to many upcoming technologies expected in the 2020s, from AR/MR to crypto to medical biometrics to autonomous vehicles.

So, while the Apple event might have been a bit underwhelming, I think what we’re really witnessing is an interim period where Apple is rolling out all of the building blocks required for the products it will be releasing over the next decade. Brian very eloquently describes this period:

“The accelerometer systems, GPS systems and IR proximity sensors of the first iPhone helped define the last generation of products. The Apple U1 Chip will be a material part of defining the next generation of Apple products.” – Brian Roemmele

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Apple, Daily Updates, Future Ear Radio, hearables

Apple Hearing Study (Future Ear Daily Update 9-11-19)

9-11-19 - Apple Hearing Study.jpg

Yesterday, Apple hosted its annual September event to show off the new products that will go on sale as we enter the last quarter of the year. Although the bulk of the announcements focused on the new iPhones and all the upgrades to the phones’ cameras and processors, there were a few other announcements that I thought were interesting and worth writing updates about. Today, I am writing about the Apple Hearing Study that was announced.

In the upcoming watchOS 6 update, due out September 19th, there will be a new sound level feature that the user can configure to appear as one of the readouts on the watch’s display. Apple will use the microphones on the watch in a low-power mode to continuously sample your environment’s decibel level, which will then be visualized on the watch and displayed as green, yellow, or red based on the volume of the noise. The knee-jerk reaction might be to say, “wait, they’re always recording me?” but, no, Apple has stated that it’s not going to save any of the audio; it will only save the sound levels.

Image from Apple’s September 10th, 2019 keynote
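Apple hasn’t published the exact cutoffs behind the green/yellow/red readout, but the general shape of the logic is easy to sketch. Here’s an illustrative version in Python; the thresholds are loosely based on common hearing-health guidance and are my assumptions, not Apple’s published values:

```python
def classify_sound_level(db: float) -> str:
    """Map an environmental sound level to a simple traffic-light indicator.

    Thresholds are illustrative, loosely based on common hearing-health guidance,
    not Apple's published values.
    """
    if db < 70:
        return "green"   # typical conversation and quieter: low risk
    if db < 85:
        return "yellow"  # busy traffic, loud restaurants: risky over long exposures
    return "red"         # concerts, power tools: damaging within minutes to hours

for level in (55, 78, 96):
    print(f"{level} dB -> {classify_sound_level(level)}")
```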

The newly announced Apple Watch Series 5 will feature an “always-on display,” implying that future generations will feature one as well. Therefore, users who have configured their Apple Watch’s display to show the sound level meter will constantly be able to assess how dangerous the sound levels in their environment are.

In my opinion, this is a bigger deal than it might appear, because people tend to lose their hearing gradually. One of the big reasons why is that they’re completely unaware they’re exposing their ears to dangerous levels of sound for prolonged periods of time. As an Apple Watch user myself, I find the ability to quickly glance at my watch and assess how loud my environment is really appealing. An always-on display will just make this effect more pronounced, hopefully leading more people to consider keeping hearing protection, like high-quality earplugs, on them at all times. It can’t be overstated how powerful the effect on people’s psyche will be of constantly seeing that sound level bar flicker or linger in the red.

So, as this new feature becomes available to all Apple Watches running watchOS 6, Apple will overnight have an army of users who can gather data on its behalf, which brings us to the Hearing Study that Apple will be conducting in conjunction with the University of Michigan and the World Health Organization. Here’s Michigan professor Rick Neitzel, who will lead the study, describing its purpose:

“This unique dataset will allow us to create something the United States has never had—national-level estimates of exposures to music and environmental sound. Collectively, this information will help give us a clearer picture of hearing health in America and will increase our knowledge about the impacts of our daily exposures to music and noise. We’ve never had a good tool to measure these exposures. It’s largely been guesswork, so to take that guesswork out of the equation is a huge step forward.”

Users will be able to opt into this study, or the other two studies announced at the event, through a new Apple Research app. As I wrote about in August, Apple is slowly inserting itself further and further into the healthcare space by being the ultimate health data collector and facilitator. This is just another example of Apple leveraging its massive user base to gather data at scale from the various sensors embedded in its devices and offer it to researchers. Creating a dedicated app to facilitate this data transfer, with explicit user opt-in, will shield Apple from scrutiny around the privacy and security of sensitive data.

Apple’s wearables are increasingly shaping up to be preventative health tools, or, as Apple has described them, “guardians of health.” The introduction of a decibel level readout on the Watch’s display is another incremental step toward becoming said “guardian of health,” as it will help proactively notify users of another danger to their health: gradual hearing loss. It’s not hard to imagine future generations of AirPods supporting the same feature, using mics to sense sound levels, but instead of a notification, perhaps they’ll activate noise cancellation to protect one’s ears. One can hope!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio, hearables, VoiceFirst

Hearables Panel Video (Future Ear Daily Update 9-10-19)

9-10-19 - Hearables Panel Video

Back in July, I published an update recapping the hearables panel that I participated in at the Voice Summit. One of my fellow panelists, Eric Seay, had a friend in the audience who shot a video of the panel, so for today’s update, I’m sharing the video. I had remembered the panel being really insightful, but upon watching it again a few months later, I’m reminded of what an awesome panel it really was. The collection of backgrounds and expertise that we each brought to the table fostered a really interesting discussion. Hope you enjoy!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Daily Updates, Future Ear Radio, Google Assistant, Voicebot, VoiceFirst

New Voicebot Post: Google Assistant Hearing Aids (Future Ear Daily Update 9-9-19)

9-9-19 - Voicebot Post

I published an article in Voicebot this weekend highlighting the importance of Android 10 to the world of hearing aids. As I outlined in the piece, prior to Android 10 and the low-energy Bluetooth protocol included in the update, the vast majority of hearing aids were limited to pairing with iPhones. This was due to the fact that Apple introduced its own low-power Bluetooth protocol to the market in 2014, which allowed for “Made for iPhone” hearing aids. With an open-source protocol coming from the Android side of the market, we now have a clear line of sight to a future where any compatible Bluetooth hearing aid can pair with an iPhone or with any phone running Android 10. In other words, we’re nearing the point of universal connectivity between smartphones and hearing aids.

Beyond the universal streaming aspect of this new protocol, we’re also going to see direct Google Assistant access become much more widely available on hearing aids. This is really significant, as Google Assistant is already considerably more sophisticated than Siri, which is the primary assistant hearing aid wearers have had access to up until now. With an even faster and more intelligent version of Google Assistant due out later this year, hearing aid wearers are in for a treat as they begin to be exposed to the benefits and possibilities of having an intelligent assistant residing right in their hearing aids. Check out the post to get a full breakdown of this very important milestone.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Alexa, Daily Updates, Future Ear Radio, VoiceFirst

The NPR Mini Assistant (Future Ear Daily Update 9-5-19)

9-5-19 - The NPR Mini Assistant.jpg

I came across a very intriguing Medium post yesterday from Nara Kasbergen, the technical lead for voice and emerging platforms at NPR. In the post, Nara details the process behind giving Alexa users the ability to stream NPR’s Morning Edition through their Alexa-enabled devices. For today’s update, I’m covering the challenges that Nara’s team had to overcome, the finished product, and how this ultimately builds on the theory that while Alexa, Google Assistant, and Siri all represent “master assistants” or “routers,” there are lots of signs that we might start seeing a wide variety of “mini-assistants” emerge.

One of the biggest challenges Nara mentions with bringing Morning Edition to Alexa is that Morning Edition is a mix of local and national stories, resulting in 300 versions being aired in different locations around the country. To solve this, the team turned to a new SaaS solution from Triton Digital that allowed NPR to automate recordings of member stations’ live streams, which are then all funneled into one big RSS feed. Five minutes after each show airs, an MP3 file becomes available within the RSS feed for the team to serve through Alexa’s AudioPlayer interface. So, they solved their first big hurdle of making the content available for on-demand streaming via Alexa, but the next hurdle was to properly match each Alexa user to their local station.
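To make the pipeline a bit more concrete, here’s a small sketch of what the consuming end of that workflow could look like: poll the RSS feed, find the newest MP3 enclosure for a given member station, and hand its URL to whatever serves Alexa’s AudioPlayer. The feed URL, station tagging, and field choices are placeholders on my part; NPR’s actual feed structure isn’t documented in the post.

```python
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example.org/morning-edition/stations.rss"  # placeholder, not NPR's real feed

def latest_audio_url(feed_url: str, station_id: str):
    """Return the newest MP3 enclosure URL for a given member station, if any.

    Assumes each feed entry is tagged with a station identifier in its title;
    the real feed's structure isn't documented in the post, so this is a sketch.
    """
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:  # most feeds list entries newest-first
        if station_id in entry.get("title", ""):
            for enclosure in entry.get("enclosures", []):
                if enclosure.get("type") == "audio/mpeg":
                    return enclosure.get("href")
    return None

mp3_url = latest_audio_url(FEED_URL, station_id="KWMU")
print(mp3_url)  # this URL is what gets handed to Alexa's AudioPlayer interface
```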

This is where things get really interesting. The team initially thought they needed to create a new standalone skill specifically for Morning Edition, which would have resulted in an increased maintenance burden. Upon researching their options, the team realized that the best route was actually to build Morning Edition into the existing NPR skill. Here’s Nara describing how this works in her own words:

“Thanks to a collaboration with Amazon, the invocation “Alexa, play Morning Edition” is just an alias for “Alexa, ask NPR to play Morning Edition”, which is powered by a PlayMorningEditionIntent in our existing NPR skill. Both station streaming and the on-demand Morning Edition experience live in a single codebase, significantly reducing the maintenance burden for our team. And our users benefit because whether they say “Alexa, play NPR” or “Alexa, play Morning Edition”, they only need to choose a member station once, after which that setting will be persisted across both experiences. In fact, anyone who already used the NPR skill to listen to a live stream in the past won’t have to choose a station at all (as long as their member station broadcasts Morning Edition and has opted-in to making it available on voice platforms); they can start listening to Morning Edition right away.”- Nara Kasbergen

There are a number of things that stand out to me here. First of all, the fact that the user only needs to identify their member station once, whether for Morning Edition or one of the NPR broadcasts contained in the skill, is absolutely huge for the user experience. That type of UX is habit-forming.

Second of all, creating aliases and masquerading intents is perhaps the key to opening up the world of mini-assistants. While some users might be aware that Morning Edition actually lives inside the broader NPR skill, others will likely be oblivious. Therefore, intents that communicate to Alexa that you want a specific piece of content within a skill, without the user having to actually say, “Alexa, tell NPR to play Morning Edition,” are absolutely critical. The easier it is for the user to call up what they want, the better.
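For a sense of what that looks like on the skill side, here’s a minimal sketch of such an intent handler using the Alexa Skills Kit SDK for Python. The intent name comes from Nara’s post, but the stream URL, token, and response wording are placeholders of mine:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response
from ask_sdk_model.interfaces.audioplayer import (
    AudioItem, PlayBehavior, PlayDirective, Stream,
)

class PlayMorningEditionIntentHandler(AbstractRequestHandler):
    """Handles both "Alexa, ask NPR to play Morning Edition" and the aliased
    "Alexa, play Morning Edition" invocation, which route to the same intent."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("PlayMorningEditionIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        # In the real skill, the MP3 URL would come from the user's saved
        # member-station preference; this value is a placeholder.
        mp3_url = "https://example.org/morning-edition/kwmu-latest.mp3"
        directive = PlayDirective(
            play_behavior=PlayBehavior.REPLACE_ALL,
            audio_item=AudioItem(stream=Stream(token="morning-edition", url=mp3_url)),
        )
        return (
            handler_input.response_builder
            .speak("Here's Morning Edition from your member station.")
            .add_directive(directive)
            .set_should_end_session(True)
            .response
        )
```

The key design point is that the alias costs nothing extra here: whichever phrase the user says, Alexa routes it to the same handler inside the existing skill, which is why the maintenance burden stays flat.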

Last week, we saw the BBC announce a forthcoming “Beeb” assistant, which will serve as a central repository for all of the BBC’s content. Fellow content hubs like NPR are seemingly building toward a similar offering, as evidenced by what Nara’s team was able to accomplish in bringing Morning Edition to the NPR skill. At a certain point, the skill will be so large and so full of options that it might make sense for the NPR skill to function more like a mini-assistant that helps route the user to what they want inside the skill. In addition, clever uses of aliased intents will allow Alexa to serve as the “router” as it communicates with the mini-assistant in the background. It’s a win-win: the user gets to communicate with the mini-assistant, whether they know they are or not.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Daily Updates, Future Ear Radio

Media as a Main Driver of Voice Technology Adoption (Future Ear Daily Update 9-3-19)

9-3-19 - Media as on the main drivers

One of my favorite voice technology podcasts, This Week in Voice, returned this week to kick off season four. Bradley Metrock, host of the podcast, welcomed Voicebot founder and CEO Bret Kinsella on to discuss the state of voice technology and what to expect heading into the fall of 2019. As two of the most knowledgeable and well-connected people in the industry, they make this episode a great way to kick off the work week coming back from Labor Day weekend.

The first half of the episode focuses on whether or not we’re heading into a “cold winter” with the technology, aka the “Trough of Disillusionment” from Gartner’s Hype Cycle. I thought Bret did a good job of putting things into perspective by pointing out the parallels with the iPhone’s early years. When the iPhone came out, it was deemed a toy by competitors and many in the media, but over time, as the App Store matured, the hardware kinks were ironed out, and the Android ecosystem emerged, people shifted quickly toward mobile computing. In hindsight, much of the negative sentiment and debate around mobile computing ultimately did not affect the adoption of the technology.

One of the key points Bret made during the episode was the importance of media for the early adoption of voice assistants and the affiliated hardware that houses said assistants. This is something I feel strongly about too, as I believe it’s one of the areas with the most low-hanging fruit for voice assistants. During the conversation, they referenced a new Ghostbusters skill built to generate buzz around the film coming out in 2020. To Bret’s point, it’s better to engage fans through quick, companion-type experiences via a smart speaker because of how much less friction (time and effort) it takes to begin the experience compared to, say, an app (download the app, register, etc.).

“For someone who’s doing something that might be this more transactional or promotional experience, or just doing some fan service, where the user can just ask for it and they can engage with it – that’s actually a lower price of entry for the consumer to engage with and it’s one of those things that’s really good for that quick hit where you’re trying to get someone really quick awareness, some type of response, maybe a call to action, and then it sort of doesn’t really matter after that.” – Bret Kinsella

This is just one of many examples of how media can be leveraged through things like smart speakers. Voicebot reported on a new skill from StatMuse for getting expert NFL fantasy football info and recommendations from ESPN’s fantasy football analyst, Matthew Berry. Every single subject matter expert could (and probably should) have some type of content available through a smart speaker, whether in the form of a skill, a flash briefing, or even podcast content.

In my opinion, one of the biggest culprits behind the growing sentiment that voice is entering into a “cold winter” is the idea that building a voice experience is the entire battle. As Bob Stolzberg, CEO of VoiceXP, mentioned to me when I was writing an article for the Harvard Business Review, “Having a voice experience is only half the battle – successfully marketing your voice experience is the other half of the battle.” If you build it, they won’t necessarily come. Companies need to treat voice experiences the same way they have with any other new digital experience by leveraging their legacy communication channels to make people aware of their voice experiences.

So, while voice technology is definitely going through some growing pains as it matures, there are still many opportunities for companies to take advantage of the 26%+ of Americans who own smart speakers and the hundreds of millions of people who use voice assistants on their phones. Media that’s conducive to voice assistants, in all its forms, from daily flash briefings to companion skills designed to promote or engage with fans, represents a good place for any entity to start migrating its presence to the voice web.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Apple, Biometrics, Daily Updates, Future Ear Radio

Follow the Integration (Future Ear Daily Update 8-28-19)

8-28-19 - Follow the Integration

There’s been a lot of chatter, analysis, and overall hot takes online over the past few weeks about Apple’s new credit card, the Apple Card. One of the best pieces I have read on the subject came from longtime Apple analyst Horace Dediu (@asymco). Horace begins by tearing down the fallacies in the tired, old cliché, “we were promised flying cars and we got x,” that is often used to trivialize much of the innovation that’s occurred in the past few decades. While flying cars represent “extrapolated technologies,” innovations like the Apple Card represent “market-creating” technologies that are often much more ubiquitous, popular, and behavior-changing (read his piece to fully understand this point).

There was one point he made in the article that I really want to home in on today, as I think there’s a direct parallel that pertains to FuturEar:

“Here’s the thing: follow the integration. First, Apple Card comes after Apple Pay, more than 4 years ago. Apple Card builds on the ability to transact using a phone, watch and has the support of over 5000 banks. Over 10 billion transactions have been made with Apple Cash. Over 40 countries are represented.”

“Follow the integration.” That’s the best way to really understand where Apple is headed. As I have written about before, Apple tends to work its way incrementally into new verticals and offerings, and if you follow the acquisitions and the product development – the integration – you start to get a sense of what’s to come with future product and service offerings.

A good example of this is the burgeoning Apple Health ecosystem. There are two separate areas to focus on: the software and services, and the hardware. In 2014, Apple began the formation of said ecosystem by introducing the Apple Health app and its HealthKit software development kit (SDK), a year before the Apple Watch. This might have caused some head-scratching, as there wasn’t a whole lot of hardware on the market prior to the Apple Watch that could feed data into Apple Health (aside from the basic inertial data from the phone).

A year later, in 2015, the Apple Watch came out and became the main source of data populating the Apple Health app. Flash forward to today, and Apple has rolled out two more SDKs and iterated on the Apple Watch four times to create a much more sophisticated biometric data collector. On the SDK front, CareKit allows third-party developers to create consumer-focused applications around data collected by the Apple Watch or within the apps themselves, such as apps centered around Parkinson’s, diabetes, and depression. ResearchKit helps facilitate large-scale studies for researchers, all centered around Apple’s health ecosystem.

Five years after kicking off its health ecosystem, Apple has laid the groundwork to move deeper and deeper into healthcare. In 2018, the company announced AC Wellness, a set of medical clinics designed to “deliver the world’s best health care experience to its employees.” It’s not hard to imagine Apple using these clinics as guinea pigs and then rolling the model out beyond its own employees. In August of 2018, Apple added the 75th health institution supporting personal health records on the iPhone.

Just as there were years of innovation and incremental pieces of progress leading to the Apple Card, the same can be said for Apple Health. Follow the integration and you’ll start to get a sense of where Apple is headed.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”