Biometrics, Daily Updates, Future Ear Radio, hearables, wearables

Wearables Grow Up (Future Ear Daily Update 9-19-19)

Wearables Grow Up

One of my favorite podcasts, a16z, recently launched a secondary, news-oriented show, “16 Minutes.” 16 Minutes is great because the host, Sonal Chokshi (who also hosts the a16z Podcast), brings on in-house experts from the venture capital firm to provide insight into each week’s news topics. This week, Sonal brought on general partner Vijay Pande to discuss the current state of wearable computing. For today’s update, I want to highlight this eight-minute conversation (it was one of two topics covered on this week’s episode; fast-forward to 7:45) and build on some of the points Sonal and Vijay make during their chat.

The conversation begins with a recent deal struck between the government of Singapore and Fitbit. Singaporeans will be able to register to receive a Fitbit Inspire band for free if they commit to paying $10 a month for a year of the company’s premium coaching service. This is part of Fitbit’s pivot toward a SaaS business and a stronger focus on informing users about what the data being gathered actually means. Singapore’s Health Promotion Board will therefore have a sizeable portion of its population (Fitbit’s CEO projects 1 million of the country’s 5.6 million citizens will sign up) monitoring their data consistently via wearable devices that can be tied to each citizen’s broader medical records.

This leads into a broader conversation about the ways in which wearables are maturing; in many ways, wearables are growing up. To Vijay’s point, we’re moving well beyond step-counting into much more clinically relevant, measurable data. Consumer wearables are increasingly being outfitted with more sophisticated, medical-grade sensors, and because they’re worn all day, they can gather longitudinal data, a combination we haven’t seen before. Previously, we were limited to sporadic data gathered only in the doctor’s office. Now, we’re gathering some of the same types of data by the minute, at the scale of millions and millions of people.

Ryan Kraudel, VP of Marketing at biometric sensor manufacturer Valencell, made me aware of this podcast episode (thanks, Ryan) and added some really good points on Twitter about what he’s been observing these past few years. A big part of what separates today’s wearables from first-generation devices is the combination of more mature sensors proliferating at scale and the increasingly sophisticated machine learning and AI layer being overlaid on top to assess what the data is telling us.

To Sonal’s point, we’ve historically benchmarked our data against the collective averages of the population rather than against our own personal data, because we haven’t had the ability to gather personal data in the ways that we can now. When you record longitudinal data over multiple years, you start to get really accurate baseline measurements unique to each individual.

This enables a level of personalization that will open the door to preventative health use cases. This is an application I’ve been harping on for a while: the ability to have AI/ML constantly assess your wearable data and help identify risks, based on your own historical cache of data that stretches back years. That way, the user can be notified of emerging threats to their health. To Vijay’s point at the end, in the near future our day-to-day will not be that different, but what we’re learning will be radically different, as you’ll be measuring certain metrics multiple times per day rather than once a year during your checkup.
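To make this concrete, here is a minimal sketch of the idea, using hypothetical data and an illustrative threshold rather than anything a wearable vendor actually ships: a new reading gets flagged against the user’s own longitudinal baseline instead of a population average.

```python
from statistics import mean, stdev

def personal_baseline(history):
    """Summarize one person's longitudinal readings (e.g., resting heart rate)."""
    return mean(history), stdev(history)

def flag_anomaly(reading, history, z_threshold=3.0):
    """Flag a reading that deviates strongly from this user's own baseline.

    The z_threshold is an illustrative choice, not a clinically validated cutoff.
    """
    baseline, spread = personal_baseline(history)
    z_score = (reading - baseline) / spread if spread else 0.0
    return abs(z_score) > z_threshold, z_score

# Hypothetical multi-year history of resting heart rate samples for one individual
history = [62, 63, 61, 64, 62, 63, 65, 62, 61, 63]
is_anomalous, z = flag_anomaly(78, history)
print(is_anomalous, round(z, 1))  # 78 bpm sits far outside this user's personal norm
```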

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

audiology, Daily Updates, Future Ear Radio, hearables, hearing aids

Signia’s Acoustic-Motion Sensors (Future Ear Daily Update 9-18-19)


Much of what I am excited about right now in the world of consumer technology broadly, and in wearables/hearables/hearing aids more narrowly, is the innovation happening at the component level inside the devices. I’m still reeling a bit from Apple’s U1 chip embedded in the iPhone 11 and the implications of it that I wrote about here. New chips, new wireless technologies, new sensors, new ways to do cool things. Now we can add another one to the list: acoustic-motion sensors, which will be included in Signia’s new line of hearing aids, Xperience.

Whereas video and camera systems rely on optical motion detection, Signia’s hearing aids will use their mics and sensors to assess changes in the acoustic environment. For example, if you move from sitting at a table speaking face to face with one person to a group setting where you’re standing around a bar, the idea is that the motion sensors will react to the new acoustic setting and automatically adjust the mics accordingly, from directional to omnidirectional settings and the balance in between.

These acoustic-motion sensors are part of a broader platform that simultaneously uses two processing systems, Dynamic Soundscape Processing and Own Voice Processing. The Own Voice processor is really clever. It’s “trained” for a few seconds to identify the user’s voice and differentiate it from the other people’s voices that will inevitably be picked up through the hearing aid. This is important, as multiple studies have found that a high number of hearing aid wearers are dissatisfied with the way their own voice sounds through their hearing aids. Signia’s Own Voice processor was designed specifically to alleviate that effect.

Now, with acoustic-motion sensors constantly monitoring changes in the acoustic setting, the Dynamic Soundscape processor will be alerted by the sensors to adjust on the fly and provide a more natural-sounding experience. The hearing aid’s two processing systems then communicate with one another to determine which one each sound should feed into. If you ask me, that’s a lot of impressive functionality and moving pieces for a device as small as a hearing aid to handle, but it’s a testament to how sophisticated hearing aids are rapidly becoming.
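As a purely conceptual illustration (my own toy framing, not Signia’s actual signal processing), the control logic might look roughly like this, with motion and talker cues driving the microphone mode and an own-voice check deciding which processing path a sound feeds into:

```python
def choose_mic_mode(wearer_is_moving, talker_directions):
    """Pick a microphone pattern from simple acoustic-motion cues.

    wearer_is_moving: bool inferred from the motion/acoustic sensors
    talker_directions: estimated angles (degrees) of detected voices
    Illustrative thresholds only; real hearing aids blend modes continuously.
    """
    if wearer_is_moving or len(talker_directions) > 2:
        return "omnidirectional"   # on the move or in a group: open up
    if len(talker_directions) == 1:
        return "directional"       # one-on-one conversation: focus forward
    return "blended"               # ambiguous scene: balance in between

def route_sound(is_own_voice, mic_mode):
    """Route each detected sound to the appropriate processing path."""
    return "own-voice processing" if is_own_voice else f"soundscape processing ({mic_mode})"

mode = choose_mic_mode(wearer_is_moving=False, talker_directions=[0])
print(route_sound(is_own_voice=False, mic_mode=mode))  # soundscape processing (directional)
```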

I’ve written extensively about the innovation happening inside the devices, and what’s most exciting is that the more I learn, the more I realize we’re really only getting started. A quote that still stands out to me from Brian Roemmele’s U1 chip write-up is this:

“The accelerometer systems, GPS systems and IR proximity sensors of the first iPhone helped define the last generation of products. The Apple U1 Chip will be a material part of defining the next generation of Apple products.” – Brian Roemmele

To build on Brian’s point, it’s not just the U1 chip; it’s all of the fundamental building blocks being introduced that are enabling this new generation of functionality. Wearable devices in particular are poised to explode in capability because the core pieces required for all of the exciting stuff that’s starting to surface are maturing to the point where it’s feasible to implement them in devices as small as hearing aids. There is so much more to come with wearable devices as the components inside them continue to be innovated on, which will manifest in cool new capabilities, better products, and ultimately, better experiences.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Apple, Daily Updates, Future Ear Radio, hearables

Apple Hearing Study (Future Ear Daily Update 9-11-19)


Yesterday, Apple hosted its annual September event to show off the new products that will go on sale as we enter the last quarter of the year. Although the bulk of the announcements focused on the new iPhones and the upgrades to their cameras and processors, there were a few other announcements that I thought were interesting and worth writing updates about. Today, I am writing about the Apple Hearing Study that was announced.

In the upcoming Apple watchOS 6 update, due out September 19th, there will be a new sound level feature that the user can configure to appear as one of the readouts on the watch’s display. Apple will use the microphones on the watch in a low-power mode to continuously measure the decibel level of your environment, which will then be visualized on the watch and displayed as green, yellow, or red based on the volume of the noise. The knee-jerk reaction might be to say, “wait, they’re always recording me?” but no, Apple has stated that it’s not going to save any of the audio; it will only save the sound levels.

From Apple’s September 10th, 2019 keynote

The Apple Watch Series 5 that was announced will feature an “always-on display,” implying that future generations will feature always-on displays as well. Therefore, users who have configured their Apple Watch display to show the sound level meter will constantly be able to assess how dangerous the sound levels in their environment are.

In my opinion, this is a bigger deal than it might appear, because people tend to lose their hearing gradually. One of the big reasons is that they’re completely unaware they’re exposing their ears to dangerous levels of sound for prolonged periods of time. As an Apple Watch user myself, being able to quickly glance at my watch to assess how loud my environment is really appeals to me. An always-on display will only make this effect more pronounced, hopefully leading more people to consider keeping hearing protection, like high-quality earplugs, on them at all times. It can’t be overstated how powerful the effect will be on people’s psyches to constantly see that sound level bar flicker or linger in the red.
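To make the color banding concrete, here is a rough sketch; the cutoff values are my own illustrative assumptions, loosely based on common occupational guidance that sustained exposure above roughly 80-85 dB can damage hearing over time, not Apple’s published thresholds.

```python
def sound_level_band(decibels):
    """Map a measured sound level to a simple color band (illustrative cutoffs)."""
    if decibels < 80:
        return "green"    # generally safe for prolonged listening
    if decibels < 90:
        return "yellow"   # caution: limit time at this level
    return "red"          # risky: consider hearing protection

for level in (55, 84, 98):
    print(level, "dB ->", sound_level_band(level))
```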

So, as this new feature becomes available to all Apple Watches running watchOS 6, Apple will overnight have an army of users who can gather data on its behalf. Which brings us to the Hearing Study that Apple will be conducting in conjunction with the University of Michigan and the World Health Organization. Here’s Michigan professor Rick Neitzel, who will lead the study, describing its purpose:

“This unique dataset will allow us to create something the United States has never had—national-level estimates of exposures to music and environmental sound. Collectively, this information will help give us a clearer picture of hearing health in America and will increase our knowledge about the impacts of our daily exposures to music and noise. We’ve never had a good tool to measure these exposures. It’s largely been guesswork, so to take that guesswork out of the equation is a huge step forward.”

Users will be able to opt into this study, or the other two studies announced at the event, through a new Apple Research app. As I wrote about in August, Apple is slowly inserting itself further and further into the healthcare space by becoming the ultimate health data collector and facilitator. This is just another example of Apple leveraging its massive user base to gather data at scale from the various sensors embedded in its devices and offer it to researchers. Creating a dedicated app to facilitate this data transfer, with explicit user opt-in, will shield Apple from scrutiny around the privacy and security of sensitive data.

Apple’s wearables are increasingly shaping up to be preventative health tools, or, as Apple has described them, “guardians of health.” The introduction of a decibel level readout on the Watch’s display is another incremental step toward becoming said guardian, as it will help proactively notify the user of another danger to their health: gradual hearing loss. It’s not hard to imagine future generations of AirPods supporting the same feature, using their mics to sense sound levels, but instead of a notification, perhaps they’ll activate noise cancellation to protect one’s ears. One can hope!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Biometrics, Daily Updates, Future Ear Radio, hearables, VoiceFirst

Hearables Panel Video (Future Ear Daily Update 9-10-19)


Back in July, I published an update recapping the hearables panel I participated in at the Voice Summit. One of my fellow panelists, Eric Seay, had a friend in the audience who shot a video of the panel, so for today’s update, I’m sharing the video. I remembered the panel being really insightful, but upon watching it again a few months later, I’m reminded what an awesome panel it really was. The collection of backgrounds and expertise that we each brought to the table fostered a really interesting discussion. Hope you enjoy!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Daily Updates, Future Ear Radio, hearables

Voice of Healthcare Podcast Ep 21 – Dr. Eric Topol (Future Ear Daily Update 8-26-19)


Back in June, I wrote an update about an a16z podcast interview with Dr. Eric Topol. Dr. Topol is a longtime cardiologist and the chair of innovative medicine at Scripps Research. He’s been practicing medicine for 40 years and seems to be regarded in the medical community as one of its most forward-thinking, open-minded doctors. His work focuses on genomics, big data, and the underlying technologies enabling personalized medicine. He has published 1,100 peer-reviewed articles, has more than 200,000 citations to his credit, and is among the top ten most cited researchers in medicine.

Today, I got a notification that Matt Cybulsky’s Voice of Healthcare podcast had a new episode out, and I was excited to see the guest was Dr. Topol (what a huge guest to land!). This episode does a really good job of building on what Dr. Topol discussed with a16z’s Vijay Pande, and hones in on Dr. Topol’s view of how voice-specific applications will shape the healthcare industry as voice technology continues to migrate into the medical setting and its impact grows.

One of the topics Matt and Eric spoke about that resonated with me was the idea that our medical professionals today are stretched too thin on time, and as a result they’re not able to spend the kind of time that benefits both the patient and the physician. They’re too burdened with clerical work and drowning in administrative tasks. For the patient, this means less one-on-one time with the doctor, which is obviously a bad thing. But as Dr. Topol mentioned, we’re seeing historic highs of burnout, clinical depression, and suicide among doctors, which he believes is a result of doctors being too detached from what motivated them to get into medicine in the first place. It was not to do endless amounts of administrative work; it was to help people. Dr. Topol believes that if we can unburden doctors and free up their time to spend with patients, it will help solve what’s happening.

With the rising accuracy of voice dictation and natural language processing, we’re moving closer and closer to a point where certain drudge work, like note taking, can be offloaded through voice dictation. To me, it’s becoming increasingly clear that some of the most impactful early applications of voice technology in the medical setting will be aiding professionals in reducing and offloading clerical work.

Another topic discussed that piqued my interest was the idea of virtual coaching. I think this will be a major use case for hearables as time goes on: patients with chronic conditions could communicate with their virtual coach about their diet, sleeping patterns, stress levels, and other activities. Some of this might be communicated through sensor data, while other parts might be images posted to a companion app, or even conversations with the assistant. As Eric mentioned, this would be a huge boon for people with diseases such as diabetes who struggle to manage their glucose levels; a virtual coach could help educate them on what’s working for them based on all the data the coach (or voice assistant) has to work with.

If you’re working in or around either the voice space or in some type of medical profession, you should check the episode out. If you’re interested, subscribe and listen to The Voice of Healthcare Podcast @ http://bit.ly/VoiceHealthcare and you can read the full transcript of this interview here: http://bit.ly/DrEricTopolVoiceofHealthcare

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

audiology, Daily Updates, Future Ear Radio, hearables

Sonova’s SWORD Chip (Future Ear Daily Update 8-22-19)


Last May, I wrote a long essay about how the most interesting innovation occurring in the world of hearing aids and hearables is the stuff happening inside the devices. Nearly all the cool new features and capabilities emerging with our smart ear-worn computers are by and large derived from the breakthroughs transpiring under the hood. As I wrote in that essay, the core of the innovation is what 3D Robotics’ CEO, Chris Anderson, describes as the “peace dividends of the smartphone wars.”

It’s estimated that more than 7 billion smartphones have been produced since 2007, which means tens or hundreds of billions of the components comprising the guts of those devices have been produced as well. The components in the smartphone supply chain are the same components housed in the vast majority of consumer electronic devices, ranging from drones, to Roombas, to TVs, to smart speakers, to Apple Watches, to hearables. Components such as microphones, receivers, antennas, sensors, and DSP chips have become dramatically cheaper and more accessible for OEMs; they are the aforementioned “peace dividends.”

A good example of this phenomenon is Sonova’s SWORD (Sonova Wireless One Radio Digital) chip. When hearing aids began to become Bluetooth enabled, the hearing aid manufacturers initially opted to work directly with Apple and use its proprietary 2.4 GHz Bluetooth Low Energy protocol, creating a new class of Made for iPhone (MFi) hearing aids. The upside for hearing aid manufacturers to use this protocol rather than Bluetooth Classic was that it was a battery-efficient solution that paired very well binaurally, so long as the hearing aids were paired to an iPhone. That’s Apple in a nutshell: if you’re part of its ecosystem, it works great; if not, don’t bother.

So, in 2018, when all of Sonova’s competitors had their MFi hearing aids on the market, some for years, the market expected that Sonova would soon release its own long-awaited line of MFi hearing aids. Instead, Sonova released the Audeo Marvel line and incorporated a new chip, SWORD, which supported five Bluetooth protocols, allowing users to pair with iPhones using the MFi protocol (Bluetooth Low Energy) or with Android handsets using Bluetooth Classic.

One of the reasons the MFi protocol was initially more attractive than BT Classic is that Bluetooth Low Energy is inherently more power efficient and capable of streaming binaurally. Sonova’s SWORD solved the power dilemma with a new chip design that included a new power management system relying on voltage converters. It solved the binaural issue by allocating one of the five BT protocols to its own proprietary Binaural VoiceStream Technology (BVST).

This week, Sonova took it a step further by ushering in Marvel 2.0 and allowing for RogerDirect connectivity. This lets the various Roger microphone transmitters pair directly with the Roger receiver built into the Marvel hearing aids, again by allocating one of the five BT protocols, this time to the Roger system. Abram Bailey at Hearing Tracker wrote a great recap of this new firmware update, which will soon become available to all Marvel owners through their hearing care provider. You can also check out Dr. Cliff Olson’s video on the update below.
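As a purely conceptual sketch of what “allocating protocols” means (my framing, not Sonova documentation), you can picture the radio routing each audio source to one of its protocol slots; only the four roles named above are shown.

```python
# Conceptual model only; not Sonova's firmware. Four of SWORD's five slots are
# sketched here, matching the roles described above.
PROTOCOL_SLOTS = {
    "mfi_ble": "audio streaming from iPhones (Made for iPhone, Bluetooth Low Energy)",
    "bt_classic": "audio streaming from Android and other Bluetooth Classic sources",
    "bvst": "ear-to-ear streaming via Binaural VoiceStream Technology",
    "roger": "direct reception from Roger remote microphones",
}

def pick_protocol(source):
    """Pick which slot handles a given audio source (illustrative routing)."""
    routing = {
        "iphone": "mfi_ble",
        "android": "bt_classic",
        "other_ear": "bvst",
        "roger_mic": "roger",
    }
    return routing.get(source, "bt_classic")

slot = pick_protocol("roger_mic")
print(slot, "->", PROTOCOL_SLOTS[slot])
```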

All of the innovation occurring inside the devices might not be the most glamorous, we’re talking about computer chips after all, but it’s what all the cool new features of today’s hearing aids and hearables are predicated on. That’s why I am so intrigued by what Sonova is doing with SWORD: it makes the device so much more capable. If you’re curious about which features are most likely to appear next in our little ear-worn computers, start looking at what’s going on underneath the hood of the devices and you’ll start to get an idea of what’s feasible based on the component innovation.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 

Daily Updates, Future Ear Radio, hearables

The Hearables & Hearing Aid Convergence (Future Ear Daily Update 8-14-19)


For today’s update, I want to hone in on and expand on this short but powerful Twitter thread by Andy Bellavia of Knowles Corp pertaining to the convergence of hearing aids and hearables. I think Andy is right on the money with his analysis, as the two worlds of hearing aids and consumer hearables are beginning to converge and blend together. As he points out, each has its own set of advantages, and each can borrow innovation from the other, which might ultimately make them resemble each other.

For me, the fact of the matter is that nearly all ear-worn devices will ultimately transform into hearables, which really just means that all these devices will become progressively more computerized over time (remember, a hearable is an ear-worn computer). However, just because they’ll all be computerized doesn’t mean there won’t be room for the devices to specialize.

For example, there are a variety of tablets tailored to different end users. I might want an iPad Pro because I need the professional software baked into the device, while my dad wants an Amazon Fire tablet to read and watch media, and my sister wants a LeapFrog kids’ tablet for her daughter. All of these are tablets with similar form factors, user interfaces, and baseline functionality, but they’re also differentiated by aspects of the functionality, hardware, and software tailored to the targeted end user.

In the same vein, we’ll likely see a similar pattern with hearables. My mom might want sophisticated Bluetooth hearing aids, I might want to wear AirPods, and my brother might want to wear the Bose QC35 II. Today, all of these can stream audio from a smartphone and serve as a hub to communicate with a smart assistant, while differentiating around things like amplification, form factor, and active noise cancellation. It’s early days for hearables, so we should expect some aspects to become universal and ubiquitous among all devices, while specialized aspects of the hardware and software simultaneously emerge or are expanded upon.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Podcasts, VoiceFirst

Beetle Moment Marketing Podcast Appearance (Future Ear Daily Update 8-13-19)


One of the best things about attending the Voice Summit was meeting so many sharp people working in and around the voice space. One of the people I was fortunate to meet and spend some time with was Emily Binder, founder of Beetle Moment Marketing. Emily’s marketing firm specializes in helping brands differentiate by leveraging emerging technologies, which includes all things voice.

She has an impressive portfolio of work, which includes the creation and management of Josh Brown and Ritholtz Wealth Management’s flash briefing, “Market Moment,” and Alexa skill/mini podcast, “The Compound Show.” (Josh, aka The Reformed Broker, is one of the most popular internet figures in the finance world, with over a million Twitter followers.) This is just one example of the type of work she does on a regular basis for all types of clients.

Emily approached me at the Voice Summit about coming on her podcast to record an episode centered on hearables, which we recorded last week. It was a quick, 18-minute discussion about the evolution of the hearables landscape, where the technology is going, and some of the challenges that have to be navigated to get there. We also touched on flash briefings and shared the same sentiment about how much potential there is in this new medium, while simultaneously being a bit disappointed that Amazon hasn’t given the Alexa-specific feature more prominence (it should be the star of Amazon’s smart speakers!).

Check out the episode, and be sure to engage on Twitter to let me know what you think!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Live-Language Translation

Hearables & Live-Language Translation (Future Ear Daily Update 8-12-19)


Last Friday, Chris Smith published an article in Wareable breaking down the state of live-language translation. Chris reached out to me a month ago to gather my thoughts on this exciting, hearables-specific use case, and I thought he did an awesome job weaving some of my thoughts into the broader piece. It was also very cool to see my buddy Andy Bellavia interviewed, providing some perspective on the hardware side of things and what’s necessary there to push this all forward. Give it a read to get a better idea of how far along this use case has progressed and what’s required to bring the Babel fish to life.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

Daily Updates, Future Ear Radio, hearables, Smart assistants

Podcasting + Search (Future Ear Daily Update 8-9-19)


A few months ago, I wrote a two-part article for Voicebot titled “The Cambrian Explosion of Audio Content” (part 1, 2). In the articles, I laid out the development of all the necessary “ingredients” that need to combine to create this explosion. We’re seeing significant movement in the financial markets around podcasting, largely fueled by Spotify’s slew of podcast-centric acquisitions this year. AirPods are increasingly at the core of Apple’s growth strategy, and Voicebot just released an article announcing that the smart speaker install base has now reached 76 million. Hardware tailored to audio content consumption continues to proliferate at scale. Tools designed to create audio content continue to emerge and mature as well, continually lowering the barrier to entry for content creators.

One of the remaining ingredients needed to make this explosion go atomic is intelligent search and discovery. Voicebot reported yesterday that Google will begin adding podcasts to its search results. This is the beginning of one of the last pieces of the audio content puzzle falling into place. Initially, Google’s foray into podcast search will be no different from the way it displays search results for the variety of other types of content it surfaces. Where this appears to be headed, however, is where things start to get very interesting.

In the blog post Google published announcing this new aspect of its search engine, it mentioned that later this year the feature will be coming to Google Assistant. This is a really big deal, as the implications go beyond “searching” for podcasts; it sets the stage for Google Assistant to eventually work on the user’s behalf to intelligently surface podcast recommendations. In the two-part piece I wrote, I mentioned this as the long-term hope for podcast discovery:

This is the same type of paradox that we’re facing more broadly with smart assistants. Yes, we can access 80,000 Alexa skills, but how many people use more than a handful? It’s not a matter of utility but discoverability; therefore, we need our smart assistant’s help. The answer to the problem would seem to lie in a personalized smart assistant having a contextual understanding of what we want. In the context of audio consumption, smart assistants would need to learn from our listening habits and behavior what it is that we like based on the context that it can infer from the various data available. These data points would include signals such as, the time (post-work hours; work hours), geo-location (airport; office), peer behavior (what our friends are listening to), past listening habits, and any other information our assistants can glean from our behavior.

Say what you want about Google’s privacy position, but Google is leaning into the fact that it knows so much about its users (e.g., Duplex on the Web). Obviously, not everyone will be gung-ho about Google having so much data on its users and the ways in which it will use said data. That being said, I’m not sure there is a voice assistant on the market today that is capable of the level of contextual understanding required for these intelligent, context-rich applications, such as podcast recommendations through learned behavior. Time will tell, but Google just took a sizable step toward enabling this type of future.
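To sketch how the signals listed in the excerpt above (time of day, location, peer behavior, past listening habits) could combine in practice, here is a toy scoring function; the signal names, weights, and affinity values are hypothetical and are not how Google Assistant actually ranks anything.

```python
# Hypothetical weights over the context signals mentioned above.
WEIGHTS = {"time_of_day": 0.2, "location": 0.2, "peer_listening": 0.25, "past_habits": 0.35}

def score_episode(episode, context):
    """Score an episode against the listener's current context.

    episode: per-signal affinities in [0, 1] (e.g., how well the show matches
             this user's commute-time listening history)
    context: per-signal confidences in [0, 1] (how reliable each signal is right now)
    """
    return sum(
        WEIGHTS[signal] * episode.get(signal, 0.0) * context.get(signal, 1.0)
        for signal in WEIGHTS
    )

episodes = {
    "news_brief": {"time_of_day": 0.9, "location": 0.6, "peer_listening": 0.3, "past_habits": 0.8},
    "long_interview": {"time_of_day": 0.2, "location": 0.9, "peer_listening": 0.7, "past_habits": 0.5},
}
context = {"time_of_day": 1.0, "location": 0.8, "peer_listening": 0.5, "past_habits": 1.0}
print(max(episodes, key=lambda name: score_episode(episodes[name], context)))  # news_brief
```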

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here

To add to your flash briefing, click here

To listen on your Google Assistant device, enable the skill here 

and then say, “Alexa/Ok Google, launch Future Ear Radio.”