Daily Updates, Flash Briefings, Future Ear Radio

The Flash Briefing Cadence Conundrum (Future Ear Daily Update 8-27-19)


One of the dilemmas flash briefing creators have encountered is how often to publish a new briefing. Opinions differ, but one argument is that it needs to be done daily, or as close to daily as possible, to prevent listeners from constantly having to tell Alexa, “next.” While this is understandable from the listener’s standpoint, the notion that creators must abide by a daily publishing standard is one of the biggest deterrents to getting people to start their own flash briefing. That’s a really daunting expectation to meet.

A potential solution to the flash briefing cadence conundrum would be for Amazon to create a “play new updates only” option. This would not only make the listening experience more enjoyable for the listener, but it would also change the nature of production, as there would no longer be a perceived need to create more content than necessary. As a result, and with the emergence of easy-to-use production tools, I believe much more granular levels of personalization would be enabled. After all, a flash briefing is intended to be each specific user’s personal news feed.
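To make the idea concrete, here’s a minimal sketch, in Python, of how a “play new updates only” filter might work on the player’s side. The feed structure and function names are hypothetical, since Amazon exposes no such option today:

```python
from datetime import datetime

def new_items_only(feed_items, last_played_at):
    """Return only the briefings published since the listener last tuned in.

    feed_items: list of dicts with "title" and "published_at" (datetime).
    last_played_at: when the listener finished their previous session.
    """
    return [item for item in feed_items
            if item["published_at"] > last_played_at]

# Example: a feed with one stale and one fresh update.
feed = [
    {"title": "Update 8-26-19", "published_at": datetime(2019, 8, 26)},
    {"title": "Update 8-27-19", "published_at": datetime(2019, 8, 27)},
]

# Only the 8-27 briefing plays; no need to tell Alexa "next."
print(new_items_only(feed, last_played_at=datetime(2019, 8, 26)))
```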

Think of a flash briefing in levels of personalization, with the top level composed of the least personal, most mass-appeal sources of info, down to the bottom with hyper-niche, uber-personalized info. At the top, you might have the Wall Street Journal, Reuters, Fox News, CNN, NPR, and all the other nationally syndicated news sources. A level below, you might start to get into industry-specific sources or sources that pertain to some of your hobbies and interests; things that apply to a smaller, more niche audience. At the bottom, you might have content that is specific to you and a few others.

[Image: Flash briefing pyramid]

Let’s say that my flash briefing starts at the top of the pyramid and works its way down. The top is likely to be filled with info that’s updated daily – it’s the news, after all. I then get updates on what’s happening in the voice, hearables, and audiology industries, as well as what’s going on with my sports teams or other information pertinent to my interests outside of work. Some of the sources in level two might update their content daily, while others update sporadically. It doesn’t really matter to me – they’re in my flash briefing because I like the info provided, regardless of how often it’s updated.

At the bottom is where I start to get updates from my personal life. If a family is spread all around the country, maybe each member has a little flash briefing where they post occasional updates for the rest of the family to hear. If I’m an orthopedic doctor working in a medical setting, maybe I want to know who’s on call today (thanks to Sirish Kondabolu for this suggestion). If someone’s kids play on a youth soccer team, maybe they want updates on the week ahead, with practice and game times and locations. The possibilities here are limitless, as we all have certain information that’s important for us to stay on top of.
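As a rough illustration of the pyramid, here’s a small Python sketch that tags each source with a personalization tier and plays the briefing top-down. The tier scheme and source names are invented for the example, not a real feed format:

```python
# Sources tagged with a personalization tier: 1 = mass appeal (top of the
# pyramid), 2 = industry/interest-specific, 3 = hyper-niche (bottom).
sources = [
    {"name": "Family updates",          "tier": 3},
    {"name": "Voice industry briefing", "tier": 2},
    {"name": "NPR News Now",            "tier": 1},
]

def briefing_order(sources):
    """Play the flash briefing from least to most personal."""
    return sorted(sources, key=lambda s: s["tier"])

for source in briefing_order(sources):
    print(source["name"])  # NPR News Now, then industry, then family
```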

Flash briefings represent one of the most exciting new ways to consume content, as they combine the on-demand nature of voice assistants with an RSS feed of audio content that can be passively consumed while you navigate your busy life. Without a “play new updates only” feature, however, the feed is exposed to becoming way too redundant, which deters creators of all types and levels from making content. Here’s to hoping for this type of option, or something similar, to really open this new medium up.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Daily Updates, Future Ear Radio, hearables

Voice of Healthcare Podcast Ep 21 – Dr. Eric Topol (Future Ear Daily Update 8-26-19)


Back in June, I wrote an update about an a16z podcast interview with Dr. Eric Topol. Dr. Topol is a longtime cardiologist and the chair of innovative medicine at Scripps Research. He’s been practicing medicine for 40 years and seems to be regarded in the medical community as one of its most forward-thinking, open-minded doctors. His work focuses on genomics, big data, and the underlying technologies enabling personalized medicine. He’s published 1,100 peer-reviewed articles, has more than 200,000 citations to his credit, and is among the top ten most cited researchers in medicine.

Today, I got a notification that Matt Cybulsky’s Voice of Healthcare podcast had a new episode out, and I was excited to see the guest was Dr. Topol (what a huge guest to land!). This episode does a really good job of building on what Dr. Topol spoke about with a16z’s Vijay Pande, and really hones in on Dr. Topol’s view of how voice-specific applications will shape the healthcare industry as voice technology continues to migrate into the medical setting and its impact grows.

One of the topics Matt and Eric spoke about that resonated with me was the idea that our medical professionals today are stretched too thin on time, and as a result, they’re not able to spend the kind of time that’s beneficial to both the patient and the physician. They’re too burdened with clerical work and drowning in administrative tasks. For the patient, this means less one-on-one time with the doctor, which is obviously a bad thing. But as Dr. Topol mentioned, we’re also seeing historic highs of burnout, clinical depression, and suicide among doctors, which he believes stems from doctors being too detached from what motivated them to get into medicine in the first place. It was not to do endless amounts of administrative work; it was to help people. Dr. Topol believes that if we can unburden doctors and free up their time to spend with patients, it will help solve what’s happening.

With the rising accuracy of voice dictation and natural language processing, we’re moving closer and closer to a point where certain drudge work, like note-taking, can be offloaded through voice dictation. To me, it’s becoming increasingly clear that some of the most impactful early applications of voice technology in the medical setting will be aiding professionals in reducing and offloading clerical work.
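As a rough sketch of that workflow, here’s how dictated audio might become a structured chart entry. The transcribe() function is a placeholder standing in for any speech-to-text service, not a real API, and the field names are invented for illustration:

```python
from datetime import datetime

def transcribe(audio_path: str) -> str:
    """Placeholder: swap in any speech-to-text service here."""
    return "Patient reports knee pain improving; continue PT twice weekly."

def dictate_note(audio_path: str, patient_id: str) -> dict:
    """Turn a clinician's dictation into a timestamped chart entry."""
    return {
        "patient_id": patient_id,
        "recorded_at": datetime.now().isoformat(),
        "note": transcribe(audio_path),
    }

print(dictate_note("visit_recording.wav", patient_id="12345"))
```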

Another topic discussed that piqued my interest was the idea of virtual coaching. I think this will become a major use case for hearables as time goes on, where patients with chronic conditions can communicate with their virtual coach about their diet, sleeping patterns, stress levels, and other activities. Some of this might be communicated through sensor data, while other parts might be images posted to a companion app, or even conversations with the assistant. As Eric mentioned, this would be a huge boon for folks with diseases such as diabetes, who might struggle to manage their glucose levels and could be aided by a virtual coach that educates them on what’s working based on all the data the coach (or voice assistant) has to work with.
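Here’s a deliberately simplified Python sketch of the virtual-coaching idea, combining glucose readings and sleep data into a spoken-style tip. The thresholds and rules are invented for illustration only, not medical guidance:

```python
def glucose_coach(readings_mg_dl, slept_hours):
    """Return a coaching tip from recent glucose readings and sleep data."""
    avg = sum(readings_mg_dl) / len(readings_mg_dl)
    if avg > 180:  # invented threshold, for illustration only
        return "Your glucose has been running high; let's review yesterday's meals."
    if slept_hours < 6:
        return "Short sleep can affect glucose control; aim for an earlier night."
    return "Nice work. Your levels have stayed in range."

# Average of these readings is ~183 mg/dL, so the high-glucose tip fires.
print(glucose_coach([150, 190, 210], slept_hours=5.5))
```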

If you’re working in or around the voice space or in some type of medical profession, you should check out the episode. Subscribe and listen to The Voice of Healthcare Podcast at http://bit.ly/VoiceHealthcare, and read the full transcript of the interview here: http://bit.ly/DrEricTopolVoiceofHealthcare

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

audiology, Daily Updates, Future Ear Radio, hearables

Sonova’s SWORD Chip (Future Ear Daily Update 8-22-19)


Last May, I wrote a long essay about how the most interesting innovation occurring in the world of hearing aids and hearables is the stuff happening inside the devices. Nearly all the cool new features and capabilities emerging with our smart ear-worn computers are by and large derived from the breakthroughs transpiring underneath the hood. As I wrote in that essay, the core of the innovation is based around what 3D Robotics CEO Chris Anderson describes as the “Peace Dividends of the Smartphone Wars.”

It’s estimated that more than 7 billion smartphones have been produced since 2007, which means tens or hundreds of billions of the components that make up the guts of those devices have been produced as well. The components in the smartphone supply chain are the same ones housed in the vast majority of consumer electronic devices, ranging from drones, to Roombas, to TVs, to smart speakers, to Apple Watches, to hearables. Components such as microphones, receivers, antennas, sensors, and DSP chips have become dramatically cheaper and more accessible for OEMs, as they represent the aforementioned “peace dividends.”

A good example of this phenomenon is Sonova’s SWORD (Sonova Wireless One Radio Digital) chip. When hearing aids began to become Bluetooth-enabled, the hearing aid manufacturers initially opted to work directly with Apple and use its proprietary 2.4 GHz Bluetooth Low Energy protocol, creating a new class of Made for iPhone (MFi) hearing aids. The upside of using this protocol, rather than Bluetooth Classic, was that it represented a battery-efficient solution that paired very well binaurally, so long as the hearing aids were paired to an iPhone. That’s Apple in a nutshell: if you’re part of its ecosystem, it works great; if not, don’t bother.

So, in 2018, when all of Sonova’s competitors had their MFi hearing aids on the market, some for years, the market expected that Sonova would soon release its own long-awaited line of MFi hearing aids. Instead, Sonova released the Audeo Marvel line and incorporated a new chip, SWORD, which supported five Bluetooth protocols, allowing users to pair with iPhones using the MFi protocol (Bluetooth Low Energy) or with Android handsets using Bluetooth Classic.

One of the reasons the MFi protocol was initially more attractive than BT Classic is that Bluetooth Low Energy is inherently more power-efficient and capable of streaming binaurally. Sonova’s SWORD solved the power dilemma with a new chip design that included a power management system relying on voltage converters. It solved the binaural issue by allocating one of the five BT protocols to its own proprietary Binaural VoiceStream Technology (BVST).

This week, Sonova took it a step further by ushering in Marvel 2.0 and enabling RogerDirect connectivity. This allows the various Roger microphone transmitters to pair directly with the Roger receiver built into the Marvel hearing aids, accomplished by allocating one of the five BT protocols to the Roger system. Abram Bailey at Hearing Tracker wrote a great recap of this new firmware update, which will soon become available to all Marvel owners through their hearing care provider. You can also check out Dr. Cliff Olson’s video on the update below.

All of the innovation occurring inside the devices might not be the most glamorous (we’re talking about computer chips, after all), but it’s what all the cool new features of today’s hearing aids and hearables are predicated on. That’s why I am so intrigued by what Sonova is doing with SWORD – it makes the device so much more capable. If you’re curious about which features are most likely to appear next with our little ear-worn computers, start looking at what’s going on underneath the hood of the devices and you’ll start to get an idea of what’s feasible based on the component innovation.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Daily Updates, Future Ear Radio

NPR Marketplace Appearance (Future Ear Daily Update 8-21-19)


Last week, as I was heading to my hotel for a work trip, I received an email from a correspondent at NPR asking to speak with me about live-language translation. I had recently been quoted in an article by Chris Smith in Wareable on the topic of hearables and language translation, which is where I assume Andy, the correspondent, came across my ideas and work. Needless to say, I was completely caught off guard, and when I realized he wanted to interview me for NPR’s show, Marketplace, I was pretty elated.

We chatted for about ten minutes on the phone, and I ultimately ended up contributing about 30 seconds to the total story, but regardless, it was one of the coolest moments of my career! Skip ahead to the 19-minute mark and you’ll hear the story begin. Enjoy!

(Side note – in college, for my senior capstone course, we were split into teams of seven and given a client for which to create a comprehensive marketing and advertising campaign. My client was NPR – talk about life coming full circle!)

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Webinar: Podcasting in the Age of Voice (Future Ear Daily Update 8-20-19)


Today, I will be joining a group of podcasters and micro-casters to present a webinar titled, “Podcasting in the Age of Voice.” The premise of the webinar is to shed light on how voice assistants, and the hardware they’re housed in, add a new dynamic to audio content creation and consumption. Smart speakers and displays, connected cars, and hearables all provide new avenues to reach one’s target audience, while podcasts and microcasts (or flash briefings) present the perfect type of content for these audio-based devices.

[Image: The hosts of the webinar: Brielle, Susan, and Scot]

The panel is being co-hosted by the folks at Witlingo and Scot and Susan Westwater of Pragmatic Digital. I will be joined by fellow panelists Dr. Sirish Kondabolu and Gordon Collier. Sirish is an orthopaedic surgeon and co-founder of Medicine ReMixed, a digital media company focused on podcasting and voice technology. He’s currently working on patient-education applications for Amazon Alexa and Google Home. Gordon is the CEO of Pipeline Search Solutions, an executive search firm based in Richmond, VA. He’s also the founder and host of the My Career Fit podcast, which is now the first job search assistant available on Amazon Alexa and Google Home.

[Image: The panelists: myself, Gordon, and Sirish]

During the webinar, we’ll go through the ways we’re using tools like Witlingo’s Castlingo and Buildlingo to support the production side of our new endeavors. I’ll be touching on how I use my flash briefing as a companion promotional vehicle for my daily blog updates, while the other two panelists will illustrate how they’re utilizing their Alexa skills, Google actions, podcasts, and/or microcasts. Ultimately, we’ll be sharing everything we’ve learned from experimenting with these new mediums.

To sign up for the webinar (noon EST), click here. It should be a great hour-long presentation, and if you can’t make it live today, no worries; we’ll send out a link to the recording later on.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio

Gary Vaynerchuk: How to Tell a Story on Social Media (Future Ear Daily Update 8-19-19)


At the Voice Summit this year, I discovered that I shared a commonality with many of the folks I met there: we’re all big fans of Gary Vaynerchuk. Many people even told me that the reason they were at the Voice Summit and interested in the voice technology space was largely influenced by Gary and the content he creates (see Scot Westwater‘s tweet below, which Gary retweeted).

For those who might not be aware, Gary owns a digital media company, VaynerMedia, and is a content creation machine, publishing a wide variety of digital content daily. More than a decade ago, he was also very publicly transparent about his bullish sentiment around social media’s rise and future importance. He’s now a big proponent of voice technology, predicting that it will be a transformative technology into the 2020s and give way to companies the size of Instagram and Facebook.

[Embedded tweet from Scot Westwater]

For today’s update, I thought I would share the piece from Gary’s content arsenal that has resonated with me most: an article titled, “How to Tell a Story on Social Media.” Here’s how the article begins:

For me, I’m only interested in one thing. The thing that binds us all together. No matter who you are or what your profession is – whether you’re an entrepreneur or in sales or a designer or a developer – no matter what you do, your job is to tell a story.

That is never going to change. The way you build your business and the way you make real impact is by great storytelling.

This is REALLY important.

We all need to reverse engineer what’s actually happening in our world to win. My whole career has been predicated on reverse engineering – Understanding what I think is going to happen in the next 24-36 months and then figuring out how to work backwards from there to map out the path to capitalize on it.

My biggest problem right now, in general, is that I feel that the far majority of people in business organizations and media companies all across the board are storytelling in 2019 like it’s 2009. It’s all I think about.

The article is truly a masterclass on the art of telling a story in 2019 using the tools available to us. Gary walks the reader through today’s “hyper ADD” world and the challenges that come with every type of storyteller competing for our attention. To cut through the noise, you need to reverse-engineer how people consume content today and produce content that caters to those consumption habits.

Through this piece, Gary helped confirm in my mind that I needed to move my blog to a daily cadence in order to create an ongoing narrative, which is what I did shortly after reading it. This was the motivation behind starting my flash briefing too – another way for me to tell the story.

One of my biggest takeaways since moving to a daily cadence is that it allows for better storytelling, because I’m constantly building on the overall narrative and giving myself more opportunities to extrapolate from what’s going on in the news, broadly or industry-specific, and regularly tie those key points back to the overall story. It’s not that what Gary is saying is novel; what resonates is that he communicates the message through the lens of storytelling in 2019.

“Attention changes. Tools change. The mediums in which we storytell continue to change.”

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables

The Hearables & Hearing Aid Convergence (Future Ear Daily Update 8-14-19)

[Image: Screenshot of Andy Bellavia’s Twitter thread]

For today’s update, I want to hone in on and expand on this short but powerful Twitter thread by Andy Bellavia of Knowles Corp pertaining to the convergence of hearing aids and hearables. I think Andy is right on the money with his analysis, as the worlds of hearing aids and consumer hearables are beginning to converge and blend together. As he points out, the two each have their own sets of advantages and can borrow innovations from one another, which might ultimately make them resemble each other.

For me, the fact of the matter is that nearly all ear-worn devices will ultimately transform into hearables, which really just means that all these devices will become progressively more computerized over time (remember, a hearable is an ear-worn computer). However, just because they’ll all be computerized doesn’t mean there won’t be room for the devices to specialize.

For example, there are a variety of tablets tailored to different end users. I might want an iPad Pro because I need the professional software baked into the device, while my dad wants an Amazon Fire tablet to read and watch media, and my sister wants a LeapFrog kids tablet for her daughter. All of these are tablets with similar form factors, user interfaces, and baseline functionality, but they’re also differentiated by aspects of the functionality, hardware, and software tailored to the targeted end user.

In the same vein, we’ll likely see a similar pattern with hearables. My mom might want sophisticated Bluetooth hearing aids, I might want to wear AirPods, and my brother might want to wear Bose QC35 IIs. Today, all of these can stream audio from a smartphone and serve as a hub for communicating with a smart assistant, while differentiating around things like amplification, form factor, and active noise cancellation. It’s early days for hearables, so we should expect some aspects to become universal and ubiquitous across all devices, while specialized aspects of the hardware and software simultaneously emerge or expand.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Podcasts, VoiceFirst

Beetle Moment Marketing Podcast Appearance (Future Ear Daily Update 8-13-19)


One of the best things about attending the Voice Summit was meeting so many sharp people working in and around the voice space. One of the people I was fortunate to meet and spend some time with was Emily Binder, founder of Beetle Moment Marketing. Emily’s marketing firm specializes in helping brands differentiate by leveraging emerging technologies, including all things voice.

She has an impressive portfolio of work, which includes creating and managing Josh Brown and Ritholtz Wealth Management’s flash briefing, “Market Moment,” and Alexa skill/mini podcast, “The Compound Show.” (Josh, aka The Reformed Broker, is one of the most popular internet figures in the finance world, with over a million Twitter followers.) This is just one example of the type of work she does on a regular basis for all types of clients.

Emily approached me at the Voice Summit about coming on her podcast to record an episode centered on hearables, which we did last week. It was a quick, 18-minute discussion about the evolution of the hearables landscape, where the technology is going, and some of the challenges that have to be navigated to get there. We also touched on flash briefings and shared the same sentiment: there’s enormous potential in this new medium, but it’s a bit disappointing that Amazon hasn’t given the Alexa-specific feature more prominence (it should be the star of Amazon’s smart speakers!).

Check out the episode, and be sure to engage on Twitter to let me know what you think!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Live-Language Translation

Hearables & Live-Language Translation (Future Ear Daily Update 8-12-19)


Last Friday, Chris Smith published an article in Wareable breaking down the state of live-language translation. Chris reached out to me a month ago to gather my thoughts on this exciting, hearables-specific use case, and I thought he did an awesome job weaving some of my thoughts into the broader piece. It was also very cool to see my buddy Andy Bellavia interviewed, providing perspective on the hardware side of things and what’s necessary there to push this all forward. Give it a read to get a better idea of how far along this use case has progressed and what’s required to bring the Babel fish to life.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Smart assistants

Podcasting + Search (Future Ear Daily Update 8-9-19)


A few months ago, I wrote a two-part article for Voicebot titled, “The Cambrian Explosion of Audio Content” (part 1, 2). In the articles, I laid out the development of all the necessary “ingredients” that need to combine to create this explosion. We’re seeing significant movement in the financial markets around podcasting, largely fueled by Spotify’s slew of podcast-centric acquisitions this year. AirPods are increasingly at the core of Apple’s growth strategy, and Voicebot just released an article announcing that the smart speaker install base has now reached 76 million. Hardware tailored to audio content consumption continues to proliferate at scale. Tools designed to create audio content continue to emerge and mature as well, continually lowering the barrier to entry for content creators.

One of the remaining ingredients needed to make this explosion go atomic is intelligent search and discovery. Voicebot reported yesterday that Google will begin adding podcasts to its search results. This is the beginning of one of the last pieces of the audio content puzzle falling into place. Initially, Google’s foray into podcast search will be no different from the way it displays results for the variety of other content types it surfaces. Where this appears to be headed, however, is where things start to get very interesting.

In the blog post announcing this new aspect of its search engine, Google mentioned that later this year the feature will be coming to Google Assistant. This is a really big deal, as the implications go beyond “searching” for podcasts; it sets the stage for Google Assistant to eventually work on the user’s behalf to intelligently surface podcast recommendations. In the two-part piece I wrote, I mentioned this as the long-term hope for podcast discovery:

This is the same type of paradox that we’re facing more broadly with smart assistants. Yes, we can access 80,000 Alexa skills, but how many people use more than a handful? It’s not a matter of utility but discoverability; therefore, we need our smart assistant’s help. The answer to the problem would seem to lie in a personalized smart assistant having a contextual understanding of what we want. In the context of audio consumption, smart assistants would need to learn from our listening habits and behavior what it is that we like based on the context that it can infer from the various data available. These data points would include signals such as the time (post-work hours; work hours), geo-location (airport; office), peer behavior (what our friends are listening to), past listening habits, and any other information our assistants can glean from our behavior.
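To illustrate how those signals could combine, here’s a hypothetical Python sketch that scores an episode against the listener’s current context. The feature names and weights are invented for the example; a real assistant would learn them from behavior rather than hard-coding them:

```python
def score_episode(episode, context, weights):
    """Linear score: how well an episode fits the listener's current context."""
    signals = {
        "matches_time_slot": episode["time_slot"] == context["time_slot"],
        "matches_location":  episode["location_tag"] == context["location"],
        "liked_by_peers":    episode["peer_listens"] > 0,
        "matches_history":   episode["topic"] in context["past_topics"],
    }
    return sum(weights[name] for name, hit in signals.items() if hit)

weights = {"matches_time_slot": 1.0, "matches_location": 0.5,
           "liked_by_peers": 0.8, "matches_history": 1.5}

episode = {"time_slot": "commute", "location_tag": "airport",
           "peer_listens": 3, "topic": "voice tech"}
context = {"time_slot": "commute", "location": "office",
           "past_topics": ["voice tech", "hearables"]}

# Time, peers, and history match; location doesn't: 1.0 + 0.8 + 1.5 = 3.3
print(score_episode(episode, context, weights))
```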

Say what you want about Google’s privacy position, but Google is leaning into the fact that it knows so much about its users (i.e., Duplex on the Web). Obviously, not everyone will be gung-ho about Google holding so much user data and the ways in which it will use that data. That being said, I’m not sure any voice assistant on the market today is capable of the level of contextual understanding required for these intelligent, context-rich applications, such as podcast recommendations through learned behavior. Time will tell, but Google just took a sizable step toward enabling this type of future.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”