A few months ago, I wrote a two-part article for Voicebot titled “The Cambrian Explosion of Audio Content” (part 1, part 2). In those articles, I laid out how all the necessary “ingredients” are coming together to create this explosion. We’re seeing significant movement in the financial markets around podcasting, largely fueled by the slew of podcast-centric acquisitions Spotify has made this year. AirPods are increasingly at the core of Apple’s growth strategy, and Voicebot just published an article announcing that the smart speaker install base has reached 76 million. Hardware tailored to audio content consumption continues to proliferate at scale. Tools designed to create audio content continue to emerge and mature as well, steadily lowering the barrier to entry for content creators.
One of the remaining ingredients needed to make this explosion go atomic is intelligent search and discovery. Voicebot reported yesterday that Google will begin adding podcasts to its search results. This marks the beginning of one of the last pieces of the audio content puzzle falling into place. Initially, Google’s foray into podcast search will be no different from the way it displays search results for the variety of other content it surfaces. Where this appears to be headed, however, is where things start to get very interesting.
In the blog post Google published announcing this new addition to its search engine, it mentioned that the feature will be coming to Google Assistant later this year. This is a really big deal, as the implications go beyond “searching” for podcasts: it sets the stage for Google Assistant to eventually work on the user’s behalf to intelligently surface podcast recommendations. In the two-part piece, I described this as the long-term hope for podcast discovery:
This is the same type of paradox that we’re facing more broadly with smart assistants. Yes, we can access 80,000 Alexa skills, but how many people use more than a handful? It’s not a matter of utility but of discoverability; therefore, we need our smart assistants’ help. The answer to the problem would seem to lie in a personalized smart assistant with a contextual understanding of what we want. In the context of audio consumption, smart assistants would need to learn from our listening habits and behavior what it is that we like, based on the context they can infer from the various data available. These data points would include signals such as the time (post-work hours; work hours), geo-location (airport; office), peer behavior (what our friends are listening to), past listening habits, and any other information our assistants can glean from our behavior.
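To make that idea concrete, here’s a minimal sketch, in Python, of how contextual signals like these might be combined into a recommendation score. Everything in it is hypothetical: the signal names, the weights, and the scoring logic are illustrative assumptions on my part, not any actual assistant’s API, and a real system would learn its weights from observed behavior rather than hard-code them.

```python
from dataclasses import dataclass, field

# Hypothetical contextual signals an assistant might infer about a listener.
@dataclass
class ListeningContext:
    hour: int                                         # local time of day, 0-23
    location: str                                     # e.g., "airport", "office"
    friend_listens: set = field(default_factory=set)  # shows friends play
    past_listens: dict = field(default_factory=dict)  # show -> play count

def score_podcast(show: str, ctx: ListeningContext) -> float:
    """Combine context signals into a naive recommendation score.

    The weights are illustrative only; a learned model would replace them.
    """
    score = 0.0
    # Past listening habits: favor shows the user already returns to.
    score += 0.5 * ctx.past_listens.get(show, 0)
    # Peer behavior: small boost if friends listen to the show.
    if show in ctx.friend_listens:
        score += 1.0
    # Time of day: e.g., favor non-news shows in post-work hours.
    if ctx.hour >= 18 and "news" not in show.lower():
        score += 0.5
    return score

# Example: rank a candidate list for an evening commute.
ctx = ListeningContext(
    hour=19,
    location="train",
    friend_listens={"Future Ear Radio"},
    past_listens={"Future Ear Radio": 4, "Daily News Brief": 10},
)
candidates = ["Future Ear Radio", "Daily News Brief", "History Hour"]
ranked = sorted(candidates, key=lambda s: score_podcast(s, ctx), reverse=True)
print(ranked)
```

Even in a toy like this, you can see the design tension: raw listening history still ranks the news show highest, while the peer and time-of-day signals pull an evening recommendation toward other shows. Tuning that balance automatically, per user, is exactly the kind of contextual understanding the quote above is asking for.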
Say what you want about Google’s privacy position, but Google is leaning into the fact that it knows so much about its users (e.g., Duplex on the Web). Obviously, not everyone will be gung-ho about Google holding that much data on its users and about the ways in which it will be used. That said, I’m not sure there is a voice assistant on the market today capable of the level of contextual understanding required for these intelligent, context-rich applications, such as podcast recommendations based on learned behavior. Time will tell, but Google just took a sizable step toward enabling this type of future.
-Thanks for Reading-
Dave
To listen to the broadcast on your Alexa device, enable the skill here
To add to your flash briefing, click here
To listen on your Google Assistant device, enable the skill here
and then say, “Alexa/Ok Google, launch Future Ear Radio.”