
Future Ear Daily Update: 4-30-19

Designing for Voice

Image: Voicebot.ai

There has been a lot of really great content and discussion lately from the #VoiceFirst community around designing for voice. Until recently, the emphasis in computer design has largely centered on visuals and digital branding. As we move into a more ambient setting when interacting with our computers, the emphasis begins to shift from our eyes to our ears, and therefore the designers’ role in shaping how we interact and communicate with our computers in this epoch is paramount.

The first piece I read recently that really made me think hard about the challenge designers are tasked with was Mark Webster’s piece in Fast Company. Mark founded the design startup Sayspring, which was acquired by Adobe last year. He now leads voice user experience (VUX) work at Adobe, and in the article he lays out his top four suggestions for anyone wading into the waters of voice design. I particularly liked this quote:

For voice to achieve its potential and truly change how users interact with, and move throughout, the world, it needs designers. After all, voice assistants aren’t people. To be effective, a voice interface needs to be intentionally designed. And so it falls to the creative community to take the reins of a voice-first future. – Mark Webster

I appreciate this sentiment because as important as the technical work is to building this new generation of computing, it’s equally important to have creative input shaping the field too. Then I started to see a lot of discussion around multi-modality:

[Embedded tweet: Scot Westwater on multi-modal design]

With the surge in smart displays, the discussion around multi-modality appears to have taken center stage, and it’s interesting to see how adding new modalities, like screens, impacts the thinking of designers like Scot Westwater. Fellow VUX designer Brielle Nickoloff piggy-backed off Scot’s line of thinking with a thread of her own:

[Embedded tweets: Brielle Nickoloff’s thread on multi-modal design]

I’m not a designer, but to me, Brielle’s point about “the decision being rooted in the use case” makes a whole lot of sense. This transitional period between past computing paradigms and what lies ahead looks like it will be incredibly nuanced. During this period, it will be the designers’ job to balance new modalities with legacy modalities – audio and visuals – and determine on an experience-by-experience basis whether an experience is better suited to be primarily driven by an audio interface, with the screen playing a complementary role, or vice-versa.

There are two sides to building out this new paradigm, the technical and the creative, the yin and the yang. As Mark mentioned in his article, voice design skills will be as critical to design jobs as digital branding is today. Fortunately, a lot of really smart designers are taking the lessons they learned from designing for the web and mobile, thinking about how those lessons apply to an audio-based modality, and sharing with each other what’s successful and what’s not as they learn from trial and error. It’s exciting to see the role and the impact that the creative community will have on the burgeoning voice computing ecosystem by designing how we interact with our technology as computing becomes more ambient.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Future Ear Daily Update: 4-29-19

Apple’s Biggest Threat

Image: Microsoft’s 2019 Voice Report

Voicebot featured an article today breaking down some of the findings from Microsoft’s recently published white paper, the 2019 Voice Report. One of the key findings from Microsoft’s research was that Siri and Google Assistant are the most widely used smart assistants, which makes sense considering the large user bases of smartphones, a luxury Amazon is not afforded. That market-share parity between Google Assistant and Siri should be flagged inside Apple as the single largest threat the company has faced since releasing the iPhone. Why? Because Google Assistant offers the most compelling reason yet to leave the Apple ecosystem. There are three reasons why Google Assistant represents a new type of threat, unlike any Apple has seen before.

Compounding Value

The first reason concerns the type of value Google Assistant provides. Throughout the first decade of the iPhone, Apple’s defense against the various Android handset competitors was to more-or-less match the hardware improvements being rolled out by Samsung, HTC, Sony, etc. Whether it was the camera, the battery life, waterproofing, or any other incremental hardware upgrade ushered in by its competitors, Apple was able to consistently appease its user base enough to avoid any mass switching to Android. Google Assistant, on the other hand, is such a problem for Apple because, unlike those incremental hardware improvements that offered static value, Google Assistant compounds in value. The more you use Google Assistant, the better the service gets, as it integrates into the full Google suite and learns from you.

Changes How You Use the Device

On top of the compounding nature of the value, there’s another aspect to Google Assistant: it redefines how you use your device. Whether you were using a top-of-the-line Samsung smartphone with a better battery or an older iPhone, the way you used the two devices was largely the same – an OS based around tap/touch/swipe and an app economy.

If Apple continues to neglect Siri and doesn’t move toward something similar to what Google is doing with Google Assistant, such as a SiriOS, Apple risks an increasing disparity between the experience of using an iPhone and an Android phone. This would ultimately mean that potential buyers wouldn’t be comparing the two on a feature-by-feature basis, but instead on whether they want a smartphone that is increasingly AI-oriented or not. That’s different from the era when the devices worked similarly enough that people’s buying decisions were generally based on brand, features and price.

Exposure to Google Assistant

The last reason this threat is different from threats in the past is exposure. In the past, unless you switched from iPhone to an Android device, or vice-versa, you wouldn’t really know what you were missing. Google Assistant, on the other hand, can be accessed in a number of different ways: through new hardware, such as smart speakers and smart displays, and even through Apple products (you can download the Google Assistant app on your iPhone). Again, this is a problem for Apple if it does not put more emphasis on Siri, as the gap between Siri and Google Assistant will continue to grow. The more that Apple users are exposed to that gap, the more they might think to switch their handset and accessory hardware to Google’s ecosystem, where Google Assistant is natively baked in.

It’s hard to believe that Apple introduced us to Siri in 2011 yet today holds the same smart assistant share as Google Assistant, which only debuted in May 2016. I feel like I’m becoming a bit of a broken record here, but I continue to maintain that Apple needs to throw its weight and attention squarely behind Siri as we head into this year’s Apple developer conference, WWDC. The iPhone is the company’s cash cow, comprising more than 62% of total revenue, but more than that, it serves as the “mothership” for all its offerings and is crucial to the two largest growth segments of the company, wearables and services. Google Assistant is the most formidable threat the Apple ecosystem has encountered in the past ten years and should be handled and viewed by Apple as such.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Future Ear Daily Update: 4-26-19

My Wish List of Flash Briefings


When I wrote my two-part Voicebot piece on the “Cambrian Explosion of Audio Content,” one of the conclusions I kept coming back to was that the building blocks were starting to mesh for something akin to an “audio-based Twitter.” The more I’ve thought about this concept, the more I believe that Flash Briefings are the closest representation of this idea that we currently have.
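Part of what makes flash briefings feel so Twitter-like is how simple the plumbing is: a briefing is ultimately just a feed that Amazon’s Flash Briefing Skill API polls. Here’s a rough Python sketch of generating a feed item, based on my understanding of the documented JSON format – the helper function and URLs are placeholders of my own:

```python
import json
from datetime import datetime, timezone

def briefing_item(title, text, audio_url=None):
    """Build one Flash Briefing feed item (JSON feed, newest item first)."""
    now = datetime.now(timezone.utc)
    item = {
        # Unique ID so Alexa knows which items a listener has already heard
        "uid": "urn:uid:" + now.strftime("%Y%m%d%H%M%S"),
        # Timestamp format Amazon documents: yyyy-MM-dd'T'HH:mm:ss'.0Z'
        "updateDate": now.strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": text,          # read aloud via TTS if no audio is given
        "redirectionUrl": "https://example.com",  # placeholder link
    }
    if audio_url:
        item["streamUrl"] = audio_url  # pre-recorded audio instead of TTS
    return item

feed = [briefing_item(
    "Future Ear Daily Update",
    "Today: my wish list of flash briefings.",
    audio_url="https://example.com/audio/4-26-19.mp3",  # hypothetical URL
)]
print(json.dumps(feed, indent=2))
```

Host that JSON somewhere public, point a Flash Briefing skill at it, and you’re broadcasting – which is exactly why I think this format is so wide open.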

So, today I wanted to have some fun and throw out ideas for how I could see certain people, brands or entities approaching a Flash Briefing. I’d love it if you’d respond on Twitter with some ideas of your own, and if you already have a flash briefing, definitely share it so we can all start to follow along. Hey, it’s Friday – let’s have some fun with this!

  • Ben Thompson – Stratechery (high-frequency bloggers/commentators): Ben would be a phenomenal addition to any FB, as he could share some quick, high-level insight on the daily update he writes. If you follow Ben and Stratechery, it’s one ongoing narrative, and a quick flash briefing would let you keep up with that narrative, so that even for the updates you don’t get around to reading, you’d still know the gist.
  • Michael Batnick & Ben Carlson – Animal Spirits (podcasters): As someone not working in finance, I absolutely love this podcast from Michael and Ben of Ritholtz Wealth Management. I could see the two alternating days on who does the FB, sharing an ongoing narrative, or anecdotal stories, around the broader themes they discuss regarding the world of finance. Anyone with a podcast could do something like this.
  • St. Louis Blues/Cardinals (sports teams): I’m a huge STL sports fan. I think it would be so cool if teams across all leagues started to get into FB. You could go in soooo many different directions here. Having former players share commentary around last night’s game or the upcoming game today. Lineups. Random stats. “On this day.” The list goes on and on, and I think this would be an awesome way for teams to engage fans.
  • Adam Schefter – ESPN (sports insiders): I’m surprised Schefty doesn’t already have a Flash. With as much NFL news as he breaks around trades, free agents and all-around league scoop, FB would be perfect for any sports insider like Woj, Schefty, Rovell, etc.
  • Taylor Lorenz – The Atlantic (any journalist): I picked Taylor because I’ve read like ten of her articles recently about the evolution of the internet and meme culture. Any journalist, writing on any subject, could share or tease all of their stories, as well as provide background info or examples of how the themes they write about are manifesting.
  • Voice Designers – Cathy Pearl, Mark Webster, Chris Geison (subject matter experts): I do not have a background in design, but I do understand the importance of properly designing for this new medium. For designers shifting toward designing for voice, it would be awesome to have ongoing tips and advice from the people who are at the forefront of design in this space, such as Cathy, Mark and Chris. The same could be done by any type of subject-matter expert who uses an FB to help share their knowledge.
  • Music Artists – It would be awesome to be able to hear from your favorite artists/bands each day. Where are they on tour? What was last night’s show like? Did they play a song they don’t typically play? Any new music coming down the pipeline? Like sports, there are a million different directions you could go with this.
  • Michael Lewis – The Big Short, Flash Boys, Moneyball, etc. (non-fiction authors): Authors like Michael Lewis who write on so many topics could use FBs as a tool to provide ongoing commentary on what’s happening in the world and relate it back to themes from their books. For example, Lewis could comment on what’s happening in the stock market and tie it to Flash Boys or The Big Short, or on how a team like the Tampa Bay Rays or the Oakland A’s continues to prove that Moneyball, or the evolution of Moneyball, works.
  • Joss Whedon – Marvel (movie directors): Like sports and music, movies are another area that is wide open with what you could do here. Stories of what’s happening behind the scenes with the cast. Homing in on certain scenes from movies to talk about the motivation behind them, etc.
  • George R.R. Martin – Game of Thrones (fiction authors): I actually don’t want George doing an FB because I want him to finish The Winds of Winter! (Yes, I’m a big GOT book nerd.) Any author who has developed a world as dense as Game of Thrones, or any of the many other fantasy, sci-fi and other fiction series, could provide a daily update on a random aspect of their world. Again, this is another area that is so wide open.
  • Ninja – Gaming: Not a big gamer, but I know who Ninja is, so I’m using him somewhat generically here to say that top gamers could share tips and ways that noobs like me can improve their skills. E-sports seems to be growing pretty fast, so just as any celebrity could use an FB, I think FBs would work pretty well for gamers.
  • Politics – any candidate running for really any level of office would be wise to start a Flash. Use it to communicate your message, share where you are on the campaign trail, and weigh in on news around the world, the country, and the local community.
  • Dave Chappelle (comedians) – can you imagine how awesome it would be to get a few minutes of new Chappelle material every day? Just something as simple as joking about the way the world is turning, or going into one of his characters. This is my dream.

I could go on and on here, but I think you get the point! As we enter an era where we’re constantly pairing our ears to the internet via AirPods (and all their incoming competitors), smart speakers/displays, and other hardware built for audio, consuming our flash briefings will become more and more seamless. I’ve been enjoying curating my flash briefing and am looking forward to seeing more and more people, brands and entities start a flash of their own.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Future Ear Daily Update: 4-25-19

Nuheara Is Still Standing

I’ve recently written quite a bit about the changing landscape of hearables and how we’re seeing companies like Samsung and Amazon offering, or poised to start offering, in-the-ear devices, while former players like Bragi have exited from selling hardware. In the midst of all this change, it should not be forgotten that one of the original hearables companies, Australia-based Nuheara, is alive and well, and I wanted to use today’s update to highlight its intended path to success.

I caught up with the company’s co-founder, David Cannington, for this week’s episode of Oaktree TV (the weekly video series I produce for my company) to talk about how Nuheara is navigating hearables waters that are seeing new predators enter and old foes go belly up. David told me that the company started out as a consumer electronics company, but has since entered the hearing healthcare space, as that proved to be a sweet spot for the company’s technology.

This is an interesting shift, as it moves Nuheara’s products further away from offerings like AirPods/Galaxy Buds/Pixel Buds/”Alexa Pods” and more toward med-tech, like hearing aids. While hearing aids cater to the full spectrum of hearing loss, a significant portion of people with mild to moderate hearing loss do not opt to purchase and wear hearing aids, typically due to the cost or stigma associated with them. This mild-to-moderate portion of the hearing loss spectrum is what Nuheara aims to serve with its IQBuds Boost and the forthcoming IQBuds Max.

Nuheara’s IQBuds Boost use proprietary calibration software called “EarID,” which the company partnered with the National Acoustic Laboratories to create. The idea behind EarID is that when you set up your IQBuds Boost, you start by establishing your “EarID” through a hearing test in the devices’ companion app. Hearing loss is incredibly variable from person to person, hence the name EarID. Some people might hear low frequencies perfectly fine but have a hard time with high or mid-range frequencies; for others it might be the opposite. So, by establishing your EarID, you identify how the device needs to amplify sound specifically for you.
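Nuheara hasn’t published the internals of EarID, but conceptually it resembles a classic hearing aid fitting: measure your threshold at each frequency, then boost the bands where you need help. Purely for illustration, here’s a toy Python sketch along those lines using the old “half-gain rule” fitting heuristic from audiology – none of this is Nuheara’s actual algorithm:

```python
import numpy as np

# Hypothetical hearing-test result: threshold in dB HL per frequency band.
# Roughly 0-20 dB HL is normal hearing; higher numbers mean more loss.
audiogram = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 50, 8000: 55}

def fitting_gains(audiogram):
    """Half-gain rule: amplify each band by ~half the measured loss (in dB)."""
    return {freq: max(0.0, loss / 2) for freq, loss in audiogram.items()}

def apply_gains(samples, rate, gains):
    """Apply per-band gains in the frequency domain (a crude FFT equalizer)."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    edges = sorted(gains)
    for lo, hi in zip([0] + edges, edges):
        band = (freqs > lo) & (freqs <= hi)
        spectrum[band] *= 10 ** (gains[hi] / 20)   # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))

gains = fitting_gains(audiogram)        # e.g. the 4 kHz band gets +25 dB
rate = 16_000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 4000 * t)     # 1 second of a 4 kHz test tone
boosted = apply_gains(tone, rate, gains)
print(f"RMS before: {np.sqrt(np.mean(tone**2)):.2f}, "
      f"after: {np.sqrt(np.mean(boosted**2)):.2f}")
```

A real product does this with far more sophistication (compression, channel-specific processing, etc.), but the core idea – amplification shaped by your personal test results – is the same.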

EarID is at the heart of the ecosystem Nuheara is beginning to build out, starting with a proprietary TV streaming adapter that pairs with the IQBuds Boost so you can listen to your TV through your EarID profile. The idea is that Nuheara will roll out a slew of companion devices that pair with your Boost and stream audio that is then filtered through your EarID. The sounds of life, tailored specifically to you.

As Nuheara moves toward the intersection of consumer electronics and med-tech, it will be interesting to see if it’s successful in capturing the market in the middle. As we move toward a future where everyone tethers their ears to the internet for prolonged periods of time, it stands to reason that a wide variety of in-the-ear device options will emerge, focused on different features and functionality. Nuheara is targeting the folks who might need hearing aids but don’t necessarily want “hearing aids,” and instead want a form factor and price point more aligned with the consumer electronics space.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Future Ear Daily Update: 4-24-19

Does Apple Know What It Has On Its Hands With AirPods?

From Ming-Chi Kuo’s most recent investor note

CNBC reported today that one of the most prominent Apple analysts, Ming-Chi Kuo of TF International Securities, is predicting two new versions of AirPods due for production in late 2019. TF International Securities is a financial services group based in Taiwan, and what makes Kuo’s insight so unique and valuable is that he has sources among the Asian suppliers in Apple’s supply chain. So, the predictions of these new versions of AirPods come from contacts inside the supply chain who are privy to changes or updates to the components housed in the various Apple products.

Of the two new versions, Kuo believes that one will be an entirely new form factor with a new internal design called “SiP,” or system-in-package, which will increase the processing power and battery life. As Brian Roemmele pointed out in the tweet below, Apple appears to be in the process of developing a true mini ear-computer, or hearable.

[Embedded tweet: Brian Roemmele on AirPods becoming a true hearable]

Which brings me to the question, “Does Apple fully realize what it has on its hands with AirPods?” I don’t mean to ask this question facetiously – I’m serious. Although Apple doesn’t break out unit sales for AirPods, Kuo estimates that Apple will sell 52 million AirPods in 2019 and ~75-85 million in 2020. Additionally, AirPods have a 98% consumer satisfaction score, so I think it’s fair to say those AirPods owners will upgrade into new versions, the same way many do with their iPhones. That’s a foundation of 100+ million users (52 million plus another 75-85 million across two years) that Apple should be able to move up its AirPods iteration ladder.

In other words, Apple could be sitting on a massive host of Trojan horses smuggling in the next era of computing, one it doesn’t even need to convince users to adopt. While everyone initially adopted AirPods because of their truly wireless form factor and simplicity, the future for AirPods, in my opinion, is SiriOS. Apple’s ecosystem of phone, wearables and services needs an interface that’s conducive to all three, and mobile’s touch, tap and swipe is not the answer. I’m not saying that Apple should forego iOS for SiriOS; I’m saying that Apple needs both.

With the incoming “Alexa-pods” and the advent of Galaxy Buds, it’s becoming increasingly obvious that one of the new battlegrounds for smart assistants is the ear (I would imagine Google is revamping Pixel Buds). Apple has been largely successful with AirPods because there hasn’t been much competition among its tech titan peers, but that’s very quickly about to change.

The last thing that Apple needs is for AirPods owners, who own Alexa smart speakers, to look at “Alexa-pods” and think… “hmm those are cheaper, just as good in terms of quality/pairing/battery life, and Alexa blows Siri out of the water.” This should be incentive enough for Apple to begin allocating its massive resources toward Siri to ensure that no piece of its ecosystem gets poached by a competitor.

It’s times like today when I feel like maybe Apple is turning a corner in its approach toward voice and Siri. I find myself saying things like, “maybe there’s finally someone at Apple, like John Giannandrea, who has convinced Tim Cook that Siri is critical to the company’s future.” To be sure, there have been moments across the past few years where I had this same line of thought, only to be let down by the lack of attention Siri is given. I continue to hold out hope that “Apple will Apple” and enter the voice space last but in a meaningful way, and evolving AirPods into a true hearable adds fuel to that fire for me.

That said, it’s SiriOS or bust at this year’s WWDC for me. While AirPods offer Apple the best opportunity to succeed in a major way in the next epoch of computing, they’re also one of the pieces of the Apple ecosystem most susceptible to being supplanted. If Apple doesn’t focus on Siri to ward off AirPods’ competitors, starting with this year’s WWDC and then with the introduction of a true SiriOS hearable, I’m not sure where Apple plans to succeed in a world that’s rapidly moving toward ambient and voice-based computing.

-Thanks for Reading-

Dave

 


Future Ear Daily Update: 4-23-19

Pondering Enterprise + Voice

From Bret Kinsella’s Presentation at the Alexa Conference

At the Alexa Conference this year, Bret Kinsella kicked the event off with a talk about how we’re entering the second phase of voice technology. The first phase was all about exposing people to the technology, growing the user base and getting people comfortable using their smart assistants in their homes. The second phase, as Bret described, is all about our smart assistants moving onto new devices and into new environments. This phase will be defined by specialization and habituation.

There are a number of environments where we’re seeing momentum build as smart assistants enter the space. In healthcare, Amazon announced its HIPAA-compliant developer kit and an initial, limited roll-out, giving us a glimpse of what’s to come. The Voice of the Car Summit, which brought together a large number of folks working at the intersection of voice technology and auto, just wrapped up. As Mark Cuban mentioned on This Week in Voice, the car is basically a second home, so it makes a lot of sense that the area voice technology is extending into most quickly is the car.

The area that has been a bit quiet lately, but that I feel voice is poised to enter in a huge way, is the enterprise. Back in late 2017, Amazon announced “Alexa for Business” with the intention of integrating Alexa into conference rooms, co-working sites like WeWork, and employees’ desks to provide control and enhance communication. While I think Alexa serves a purpose in the office, the enterprise is still Microsoft’s for the taking.

Microsoft could greatly enhance all of its software offerings by layering Cortana into the whole Microsoft Office suite. Just think of the efficiencies that could be had if Cortana were baked into all the various Microsoft offerings:

  • Word: a Cortana-based feature that allows for voice recording and speech-to-text transcription, so that you can write any document you want, just like Farhad Manjoo, right into Word.
  • PowerPoint: “Cortana, space all of these images on this slide out evenly.” “Cortana, add a transition here.” “Cortana, bring the bottom row of images to the front.” And so on…
  • Excel: “Cortana, pivot all this data for me.” “Cortana, give me the sum of column J.” “Cortana, run a formula for me…” (see the sketch after this list)
  • Outlook: “Cortana, when is the next time all the marketing team members are available for a 30-minute meeting? Ok, send out a calendar invite to everyone on the team.” “Cortana, which emails do I need to respond to first?” “Cortana, find Larry Thomas’ phone number in his email signature.”
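To be clear, none of these Cortana commands exist today; they’re hypothetical. But the layer between an utterance and a spreadsheet operation can be surprisingly thin. Here’s a toy Python sketch of parsing the Excel example above, with a made-up grammar and an in-memory “sheet” of my own:

```python
import re

# Toy in-memory "spreadsheet": column letter -> list of cell values.
sheet = {"J": [120.0, 80.5, 99.25], "K": [1.0, 2.0, 3.0]}

def handle_utterance(utterance):
    """Map a spoken command to a spreadsheet operation.
    The grammar is invented - just enough to show the intent-matching idea."""
    match = re.search(r"\b(sum|average)\s+of\s+column\s+([A-Z])\b",
                      utterance, re.IGNORECASE)
    if not match:
        return "Sorry, I didn't catch that."
    op, col = match.group(1).lower(), match.group(2).upper()
    values = sheet.get(col)
    if not values:
        return f"Column {col} is empty."
    result = sum(values) if op == "sum" else sum(values) / len(values)
    return f"The {op} of column {col} is {result:g}."

print(handle_utterance("Cortana, give me the sum of column J"))
# -> The sum of column J is 299.75.
```

The hard part, of course, isn’t the arithmetic – it’s the speech recognition and the thousands of grammars like this one that a real assistant would need to cover.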

It would also be awesome to control Adobe products like Premiere Pro, Photoshop and InDesign with voice, and the same goes for Apple products like Final Cut. I’m curious what you think here – what other enterprise applications do you see voice and smart assistants being impactful in? Share them with me on Twitter!

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”

 


Future Ear Daily Update: 4-22-19

“You Get Into These Simplification Habits That You Know Make Things Better, And Now You Want Them Everywhere” – Mark Cuban

Last week, Bradley Metrock interviewed Mark Cuban on Bradley’s podcast, This Week in Voice (TWiV). This was a really cool moment for the #VoiceFirst community as a whole, as I think it signals that voice technology is starting to be recognized as one of “the next big things” in tech by highly regarded investors like Mark Cuban. It was a fascinating discussion, and it’s really interesting to hear how someone like Mark views the business and investment opportunities in this space, as well as where the dead-ends are.

A number of topics were discussed throughout the course of the podcast, and rather than recap all the interesting things said, I want to home in on one specific portion of the discussion around continuity (around the 31-minute mark). They’re talking about the concept of having continuity any time you move from one environment to another (i.e. from your home into your car). Mark summarized the concept:

“You get into habits, right I mean, I get into places and I say, ‘Alexa, what’s the weather?’ and you realize there’s no Alexa around. You get into these simplification habits that you know make things better and you want them everywhere.”

This is effectively my thesis on why I believe hearables are a critical component of the continuity equation. In a piece back in July 2018, I wrote this:

“I’m of the mind that as we depend on our smart assistants more and more, we’ll want access to our assistants at all times. Therefore, I believe that we’ll engage with smart assistants across multiple different devices, but with continuity, all throughout the day. I may be conversing with my assistants in my home via smart speakers or IoT devices, in my car on the way to work, and in my smart-assistant integrated hearables or hearing aids throughout the course of my day while I’m on-the-go.”

It’s not as if I was the first to come up with this line of thinking; it’s something many people likely arrive at as they begin to move day-to-day tasks to Alexa or one of the other smart assistants. This is why smart speakers are so important: as Bret Kinsella has pointed out through his research, smart speakers are the “training wheels” of the voice interface. Smart speakers train you to use this new method of computing, and once you begin to get comfortable with it, you start to think like Mark with his “simplification habits,” where you just want this type of functionality around you all the time.

This idea of “simplification habits” ultimately ties into the reduction of friction, which is at the very core of the adoption of the voice user interface and smart assistants. We spend less time tapping, typing and swiping through apps on our phones doing the mechanical work of “issuing commands” and “executing jobs.” As all the things we depend on our smartphones for become more simplified through voice, the more compelling this interface becomes. In essence, the better voice computing gets, the more important continuity and always-available, ambient or in-the-ear smart assistant access become for the user.

As Andy’s video shows, nearly invisible hearables that serve as a home to our smart assistants are already here.

-Thanks for Reading-

Dave


Future Ear Daily Update: 4-18-19

The Future of Wearable Tech Is Called a Hearing Aid

Image: Pippa Drummond for Bloomberg Businessweek

Today, Bloomberg published a lengthy article on American hearing aid manufacturer Starkey and the way it has reinvented itself following a rather nasty trial in 2016. The former leadership team was largely removed, members either convicted in the trial (lots of embezzlement) or fired, and replaced by a number of industry outsiders. One of the most notable hires was Achin Bhowmik, formerly of Intel’s Perceptual Computing group, who came on board as Chief Technology Officer.

Achin seems to be the brains behind Starkey’s new flagship hearing aid (which the company has dubbed a “healthable”), Livio AI, which in my opinion is the truest version of an “ear-computer” we’ve seen to date. I’ve written a lot about Livio AI and won’t rehash here why I think it’s such an important device, not just for the hearing aid industry but for the entire tech industry. Instead, what I want to write about today is an excerpt from near the end of the article that is very revealing about true hearable adoption (emphasis mine):

“The next day, the audiologists go back to their practices to begin selling the Livio AI to patients. Which isn’t hard. Within just four months, the device will account for 50 percent of all product sales worldwide at Starkey. For 2019, the projection is 80 percent.”

Since Starkey is a privately held company, we don’t see the type of public disclosures that reveal unit sales. So, to see that Livio AI is driving such a massive chunk of the company’s revenue is incredibly revealing. As bullish as I have been on hearables and the trend that we’ll all be adopting mini ear-computers across the next 5 years, I had absolutely no idea that Livio AI was thriving and penetrating the market as quickly as it is. This is evidence that the market is hungry and ready for sophisticated in-the-ear wearable devices.

The title of this article might seem odd and off-base, but as counter-intuitive as it sounds, hearing aids really are on the bleeding edge of wearable innovation. We’re talking about a nearly invisible device that offers 45 hours of battery life, a tandem of smart assistants (Thrive Assistant for local queries; Ok Google for general, cloud-based queries), live language translation in 27 languages, embedded inertial and heart-rate sensors, and a companion app to support and visualize all these features and data. What other device on the market is capable of all that?

I understand that the price point is considerably high today, but in light of the sales volume of Livio AI, I would guess that the consumer market and fellow hearing aid manufacturers are going to follow the path that Achin and Starkey are blazing. My hope is that competition will lead to lower costs and make this type of technology accessible to the masses. That being said, I would imagine Starkey is going to continue to double down on Livio AI and keep transforming in-the-ear devices into computers, the same way the iPhone transformed the phone into a pocket computer.

Who would have guessed that the humble hearing aid would one day be the poster child for body-worn technology and computing?

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Future Ear Daily Update: 4-17-19

The Significance of the Alexa HIPAA-Compliance News

Earlier this week, Teri Fisher published an episode of his Voice First Health podcast that compiled eight different perspectives (plus Teri’s) on what the news of Alexa becoming HIPAA-compliant meant to them. The folks who contributed their input to Teri’s podcast were:

  1. Nate Treloar – President of Orbita
  2. Bianca Phillips – Founder of Electronic Health Consulting Group
  3. Yours Truly
  4. Heidi Culbertson – Founder/CEO of Ask Marvee
  5. Dr. Neel Desai – Co-founder of MedFlash Go
  6. Stuart Patterson – CEO of LifePod Solutions
  7. Timon LeDain – Director of Emerging Technologies at Macadamian Technologies
  8. Dave Isbitski – Chief Evangelist of Amazon Alexa
  9. Dr. Teri Fisher – Creator of the Alexa in Canada and Voice First Health podcasts and websites

Everyone had something unique and interesting to say. Some, like Nate Treloar, mentioned how significant this development is to their business (Orbita offers a secure platform for creating voice-enabled virtual assistants for the healthcare industry). Others, like Stuart Patterson, offered a different take, stating that it wasn’t as significant a development as it would appear, considering that Nuance has offered HIPAA-compliant voice technology for years.

Some of my favorite comments from this podcast:

Heidi Culbertson: “I think this move by Amazon is one small step for voice, and one large step for making life better. I think it’s a huge opportunity for innovation and partnership among health organizations and third-party development and design shops. Innovation happens and this is a really exciting time.”

Dr. Neel Desai: “This will help to cut down on all the telephone calls back and forth between the patients and the doctors. This is a great short cut that will improve communication and save time.”

Dr. Teri Fisher: “One of the things that I’m very excited about, even before the HIPAA announcement, is the idea that voice technology allows us to communicate with computers in the most natural way we know. I’m of the opinion that in the future we’re each going to have mini clinics in our home that are run by a voice-first device, and it’s going to create a personalized, decentralized approach to healthcare. I believe this is the big next step in doing that.”

There’s clearly a lot of excitement around this development in the voice technology space, and it’s going to be fascinating to watch how this all unfolds at the intersection of healthcare and voice. This was an important first step in making it all a reality, and as the gates open to the developer and design communities, we’re sure to see a lot of new applications and use cases flourish.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”


Future Ear Daily Update: 4-16-19

“I Didn’t Write this Column. I Spoke It.”

Image: Jim Wilson/ The New York Times

Last week, New York Times columnist Farhad Manjoo wrote a piece titled, “I Didn’t Write This Column. I Spoke It.” It’s an interesting look at how Farhad writes his articles through a combination of AirPods, the voice memo app RecUp, and the transcription app Descript. He’s able to walk around the city and speak his columns into life, producing a draft to work from when he transitions to the traditional computer and keyboard to finalize and publish his pieces.

Here’s what Farhad had to say about his experience:

“Writing by speaking has quietly revolutionized how I work. It has made my writing more conversational and less precious. More amazingly, it has expanded my canvas: I can now write the way street photographers shoot — out in the world, whenever the muse strikes me (or more likely, when I’m loafing around, procrastinating on some other piece of writing). Most of my recent columns, including large portions of this one, were written this way: first by mouth, not fingers.”

As he points out in the piece, “there is something more interesting here than a newspaper columnist’s life hack.” He refers to this new phenomenon as the “screenless internet” (more-or-less adjacent to the term voice-first, which I tend to use). The internet of tomorrow is shaping up to be much more ambient and multi-modal, meaning more device types (think every device being connected and part of the network), with voice at the center of it all as the core user interface (UI).

Voice as a UI has been made viable in recent years thanks to advancements in natural language understanding (NLU) and speech-to-text processing, more connected and powerful cloud computing, and hardware designed for communicating with our smart assistants (AirPods; smart speakers and smart displays). All of this equates to more intelligent smart assistants that don’t constantly ask you to repeat yourself (this still happens, but with decreasing frequency).
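To make the “speak first, edit later” workflow concrete, here’s a minimal Python sketch of the transcription half. RecUp and Descript don’t expose public APIs that I know of, so this leans on the open-source SpeechRecognition package as a stand-in:

```python
# pip install SpeechRecognition
import speech_recognition as sr

def transcribe_memo(path):
    """Turn a recorded voice memo (WAV/AIFF/FLAC) into rough draft text."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:
        # Adapt to background noise first - dictation often happens outdoors
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.record(source)
    try:
        # Free Google Web Speech endpoint; the library supports other
        # engines too (e.g. CMU Sphinx for fully offline transcription)
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return "[inaudible]"
    except sr.RequestError as err:
        return f"[transcription service error: {err}]"

draft = transcribe_memo("memo.wav")  # hypothetical file from a voice memo app
print(draft)  # rough draft, ready for editing at a real keyboard
```

The output is exactly what Farhad describes: not a finished column, but a raw draft to shape later at the keyboard.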

Farhad’s experience is a great example of the shift toward the internet of tomorrow. As he points out, it’s not that the keyboard-and-computer method of writing his articles is no longer relevant; rather, the “screenless internet” augments his writing process. That’s the key point about moving into this new era of computing – it’s an additional layer and UI to begin leveraging, so that you’re less dependent on legacy interfaces if you choose to be. The choice now exists and will only become more viable as the underlying technology powering our smart assistants and the voice UI continues to mature.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”