
Podcast: Katherine Prescott – Multimodal Use Cases (Future Ear Daily Update 11-21-19)

Last post, I announced the launch of the Future Ear podcast, with the first episode featuring Katherine Prescott, founder and editor of VoiceBrew. In that initial discussion, we spoke about the booming smart display market and how these multimodal smart speakers are setting the stage for a whole host of new use cases for our voice assistants.

For the second episode of the Future Ear podcast, Katherine and I continue our discussion by beginning to explore some of the new use cases brought forth by devices like Amazon’s Echo Show 5. The first area we both agreed is ripe for interesting new usage is the kitchen.

As Katherine points out at the top of the podcast, there’s a certain “chicken and egg” dynamic at play here, because in a two-sided market (production & consumption), each side’s growth incentivizes the other. So, what we’re seeing is the consumption side starting to really grow (all the people buying multimodal smart speakers; 1.65 million Echo Show 5s sold last quarter), which can serve as the first push to get the network-effects flywheel spinning. More users means more incentive to build “apps” specific to these new devices. More stuff to do with these devices means further incentive to own one. And on and on the flywheel spins.

One way to speed things up is for the major voice assistant providers (Amazon, Apple, Google, Samsung) to leverage their deep pockets to custom-build experiences that are deeply integrated with their own assistants and platforms. One really interesting example of this is Amazon’s partnership with the Food Network to create Food Network Kitchen.

As I’ve written about previously, Food Network Kitchen represents one of the most interesting new applications built specifically with smart displays in mind. It’s a “freemium” app, accessible on mobile devices and Echo Show devices, that allows users to call up 80,000 recipes and watch how-to videos and step-by-step tutorials. There is also the option to subscribe to the service, which gives the user access to more than 800 on-demand cooking classes and the ability to tune into live classes and cook alongside professional chefs like Bobby Flay (a service similar to Peloton’s live workouts).

Food Network Kitchen represents one of the most novel ways to fuse together what makes the smart display so unique: voice commands plus a visual display. Users can watch tutorial videos and easily follow along, pausing or rewinding with their voice. They can also ask questions pertinent to the video, such as, “how many pounds of chicken am I supposed to be cooking?”, and the answer comes back as a small pop-up window while the video continues playing. This combination opens up entirely new contexts that we’ve never really seen before.

As we discuss in the episode, we believe a service this compelling will help spur the network effects of multimodal voice-first devices and may set the standard for what a multimodal voice-first experience should look and feel like.

-Thanks for Reading-

To subscribe to the podcast:

To add to your flash briefing, click here – then say, “Alexa, play news”

To listen on your Google Assistant device, enable the Google Action here
