This week on the Future Ear Radio podcast, I’m joined by Giles Tongue, CEO of Chatable Apps, and Andy Bellavia, Director of Market Development at Knowles Corp. By now, everyone should know Andy, as this is something like his 20th guest appearance, and since this is Giles’s second appearance on the podcast, I hope you’re becoming a bit more familiar with him too. These are two of the most knowledgeable people I know when it comes to the emerging technology side of the hearables market, so I wanted to bring them on to help me tackle a topic that I think is going to become increasingly relevant over the next few years: latency.
For a long time, one of the most pervasive issues plaguing the hearing loss population has been the difficulty of deciphering speech-in-noise. I’d venture to say that the vast majority of people with mild and moderate hearing losses primarily struggle to hear conversations in noisy settings. Unfortunately, as prevalent an issue as speech-in-noise is, there haven’t been all that many solutions designed to solve it, especially affordable ones. And by and large, even hearing aids don’t fully solve the issue.
The answer might ultimately lie in the advanced processing capabilities being ushered in by AI-based products. However, in the current state of technology, all the number crunching done by these AI engines requires tremendous computing power relative to the size of the device, and without enough of it on board, a latency effect is created. So how do we solve the latency inherent to AI-based solutions? That was the basis for this week’s conversation.
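To make that concrete, here’s a rough back-of-envelope sketch of where the delay comes from in block-based, on-device speech enhancement. Every number in it is an illustrative assumption of mine, not a spec from Chatable or Knowles; the only figure borrowed from the episode is the roughly six-millisecond bar the guests cite as the point where delay stops being perceptible.

```python
# Back-of-envelope latency arithmetic for on-device AI speech enhancement.
# All figures are illustrative assumptions, not specs of any real product.

SAMPLE_RATE_HZ = 16_000              # a common sample rate for speech processing
FRAME_SAMPLES = 128                  # the model must buffer a full frame before it can act
OPS_PER_FRAME = 5_000_000            # multiply-accumulates a small neural enhancer might need
CHIP_OPS_PER_SECOND = 2_000_000_000  # throughput of a hypothetical low-power audio DSP

buffering_ms = 1000 * FRAME_SAMPLES / SAMPLE_RATE_HZ     # 8.0 ms spent just collecting audio
compute_ms = 1000 * OPS_PER_FRAME / CHIP_OPS_PER_SECOND  # 2.5 ms spent running the model

print(f"buffering {buffering_ms:.1f} ms + inference {compute_ms:.1f} ms "
      f"= {buffering_ms + compute_ms:.1f} ms per frame")  # ~10.5 ms, already past the ~6 ms bar
```

The point is simply that the buffering alone can blow the budget, which is why shrinking the model is necessary but not sufficient.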
Takeaways:
- Shortly after recording this episode, Chatable announced a partnership with fellow hearing solution startup Noopl. Our conversation today should help illuminate why this partnership makes sense. Chatable’s AI engine, with its advanced latency reduction capabilities, can be housed inside Noopl’s iPhone dongle. So the solution here is Noopl with Chatable baked into the dongle + some type of earbuds (such as AirPods). This is flying under the radar, but could potentially be a killer combo.
- I see there being two different challenges to tackle when it comes to situational amplification solutions: the technical side and the societal acceptance side. It’s the latter that I see being the bigger challenge of the two. In order for situational amplification devices to really catch on (i.e. the actual act of wearing something in public to augment your hearing), there needs to be a collective acceptance of that behavior change. As I mention in the episode, this is perhaps the single biggest contribution AirPods have had, and will continue to have, toward helping to combat hearing loss at scale. If we as a society can normalize wearing situational amplification devices in public venues like noisy bars and restaurants, in the same way that glasses are completely normalized, that will be a gigantic win for us all.
- I love the comparison of struggling to understand speech in a noisy setting to squinting with your eyes. As Andy mentioned, one of the biggest problems with having hearing loss is the sheer strain you’re putting on your brain. This results in fatigue and as he described, his life post-hearing aids has been way less fatiguing.
- AI is the current king of the buzzwords, just like “big data” was before it. It’s overused, and oftentimes it isn’t even an accurate description of the underlying technology. What was nice about this conversation is that we actually tried to describe in simple terms what the AI engine is being used for and how it manifests in the market (solving latency).
- The recurring theme lately on the podcast has largely revolved around new solutions, hardware and software, representing new tools for the hearing care provider’s tool belt. I continue to believe that as the professional continues to decouple from the widget and moves toward a business model tied to their time, these new solutions present added opportunities to exhibit the provider’s core value: their expertise and education.
-Thanks for Reading-
Dave
EPISODE TRANSCRIPT
Dave:
Okay. So we are joined here today by two recurring guests. We’ve got Andy Bellavia and Giles Tongue. So let’s go around real quick, introduce ourselves, starting with you, Andy.
Andy Bellavia:
Thanks, Dave. Always a pleasure to be on the show. And I think by now most of your listeners know who I am, but I’m the director of market development for Knowles. I’m responsible for all the in-ear devices which are not actually regulated hearing aids. So music earphones, in-ear monitors for musicians, communications earpieces, and hearable devices, including hearing-focused hearables. And I’m also a connected hearing aid wearer, so I get to experience it from the customer point of view as well as the supplier’s.
Dave:
Great to have you here. And Giles.
Giles Tongue:
Thanks for having me back on, Dave. So I’m Giles, I’m CEO of ChatableApps. We’ve created the world’s first AI for speech enhancement with zero latency that can fit onto a hearing device or hearable. Our approach is neuroscience-led AI, where we’ve reverse engineered how the brain processes speech and noise and utilized AI to put this on a chip without any latency. And because we’re targeting the brain, it also works for autism and ADHD.
Dave:
Love it. Well, great to have you two here. For those that maybe recall, Giles was on about 30 episodes ago; I think I had him on with Geoff Cooling, and I really think he’s got a cool company in ChatableApps. We’re going to talk a lot about it today, so I won’t go too much into that past episode, but really, it’s a novel solution that stems from the world of neuroscience. So we’ll get into that as this conversation goes on, but really I wanted to have these two on today because there’s been just so much momentum building from a lot of the big players in the tech space.
Dave:
You look at Facebook with a lot of the work that they’ve been doing with their Reality Labs, in particular around things like solving speech-in-noise and really trying to attack a lot of the commonly cited problems in the hearing health space. So you have Facebook, whose sights are now on the hearing health market. Just recently you saw Google and their Project Wolverine, another type of hearing health moonshot project that, again, is leveraging all of their AI prowess and all of their cloud computing and all the different buzzwords that we hear today. But these are the companies that are very, very well suited to throw all of that weight at these problems. And I think it’s just creating a really interesting scenario here that we’re watching unfold in real time.
Dave:
And then the third head of the beast would be Apple. Apple has been slowly moving into this market with the AirPods Pro and the headphone accommodations. But I think what’s really interesting about Apple, and we’ll probably talk about this throughout the episode, is ResearchKit. This is something that kind of flies under the radar. ResearchKit allows them to partner with a lot of different universities, and because of their giant user base, they can quickly assemble huge studies of people. And so they’ve been working on a study that they recently published results from, all around hearing loss, with findings around the prevalence of hearing loss and the prevalence of noise that people encounter every single day.
Dave:
And I think that it’s just a really kind of roundabout way that allows Apple to solidify themselves in the academic community as a facilitator of a lot of this research. And, oh, by the way, they happen to have this product that a ton of people use, so I think they’re lending weight to the argument that there are beneficial ways you can leverage AirPods and the like into the future. And so for this conversation, what I really wanted to do was bring Giles on in particular, who I think has a lot of really interesting thoughts as to why we’re now suddenly seeing a lot of this interest from these major tech titans and what they might ultimately be able to bring to the table.
Dave:
And then, as someone that’s operating very much down in the trenches in this space, what Chatable is doing and how Chatable is actually working to solve some of the different things that he believes these major tech titans are encroaching on. So with that, Giles, why don’t I kick it over to you and let you expand on this a little bit and share why you think there is this newfound interest from these major companies in solving some of these different problems that they’re attacking.
Giles Tongue:
Sure. Thanks, Dave. So if we think about the Apple study recently, they fed into the WHO numbers, where we used to talk about 466 million people with disabling hearing loss. Now the headline number is one and a half billion people with hearing loss. So we’ve now moved to one in five people with hearing loss. And we know from industry data that something like 80% plus of that is going to be mild to moderate. The World Health Organization number still doesn’t, to my mind, include autism, ADHD, and APD, other groups who might have hearing-related or listening-related issues.
Giles Tongue:
So I tend to talk about one in four people who struggle with speech in noise. So they’re helping to broaden our outlook, really, on the problem and the issue that we’re trying to address here. And we talked about Google and the moonshot just now. I think an interesting thought experiment is: if we forgot everything else that we have now and started from scratch today, and we were Google or Apple or one of these others, and looked at those statistics and thought, what is the problem that we’re trying to solve? How can we address the most people, et cetera? Then they’d probably focus on speech in noise, and they’d probably be turning to things like AI and machine learning to try and solve these problems.
Giles Tongue:
Why is it a moonshot? Well, at the moment, it’s a moonshot presumably to these companies because they think it’s a crazy idea with near to zero chance of success. Of course, I’m here to tell you that we’re already on that journey, and we don’t see it as a moonshot; as we see it, we’re already there, but that’s probably something for us to go into in a minute. But these are companies who have huge resources and huge numbers of people working in the space of machine learning, AI, et cetera. They’ve just got to figure out where the solution is going to come from. And I’ll tell you that [inaudible 00:07:02] visits and auditory neuroscience, and AI, and then with that, they can work with their huge resources on trying to provide a solution for that.
Giles Tongue:
And latency is going to be the big problem that they need to overcome in order to get there. Any kind of AI or machine learning type solution is going to have latency inherent within it, and anyone who’s tried to work on a hearing device will tell you that any kind of latency is going to be problematic for the user. So the battleground, really, once we’ve understood all of the above, is now latency: who’s going to be able to produce a zero latency AI that can work on the buds?
Dave:
Yeah, that’s interesting for a number of reasons. When I think about this whole hearables ecosystem right now, you think back to that first iteration of devices, that first wave: you had Bragi, [inaudible 00:07:54] Doppler, you name it. And the huge obstacles, the huge barriers to entry at the time were largely around battery life, solid pairing, and just more or less identifying solid use cases that could differentiate yourself. And here we are in 2021, about four or five years later, and I think that those problems have all largely been solved. The buying decision behavior around hearables today is largely not even really related to those issues that had presented such problems initially. I think most people now get a device that has pretty good battery life.
Dave:
There’s obviously room for improvement and we’ll continue to see that as time goes on. Most devices are able to pair almost instantaneously now, but that was a really big issue early on, and that’s really what I think AirPods solved, especially around the instantaneous pairing. And so we’re entering into this next phase of hearables, and I really think that we’re graduating up the ladder, and that is then going to present new challenges. And I think this is a really interesting one to think about, which is latency. Because, as we’ll go into throughout this conversation, latency presents a new-age challenge that will need new-age solutions tied to it. And so that’s where my head’s at here, just to further frame this conversation: here we are, and I just think that there’s a new set of problems presenting themselves. And so the question now becomes, okay, how do we solve those different problems? Andy, over to you. What do you think here?
Andy Bellavia:
I’d actually like to take a step back and think about this in a greater context. If you think about hearing devices today, let’s call it hearing version one. Hearing version one is 100 years old. Hearing version one is all about amplification. The first experimental devices had vacuum tubes and you carried them with a handle, and then you had little body packs with a wire running up to your ear. And of course I appreciate the part, as far as it goes, that our founder Hugh Knowles was one of the people who helped get hearing devices up on your face and make it more realistic, and we’ve been driving that innovation ever since, but it’s all about amplification with increasingly finer and more sophisticated levels of control.
Andy Bellavia:
You have selective amplification, you have selective compression, you have beamforming mics, you have automatic mode changing, all these sorts of things. It’s gotten really good, but it’s all about amplification. That’s hearing version one. Now I think we’re at the stage of hearing version two, of which amplification still remains a part, but you think about all of the ways that the brain comes to interpret and recognize sound aside from just the straightforward hearing path. I mean, that’s what Wolverine is after, that’s what Facebook Reality Labs is after. They stated in their mission statement for the device that it would work in conjunction with hearing aids. Hearing aids with AR applied will give you a much finer understanding of the sound and the extraction of the sound you are interested in from the noise that’s around you. And of course that’s where Chatable is working as well.
Andy Bellavia:
And then you have the Neosensory Buzz, from the last podcast that just aired, in which you’re actually working on getting the recognition of sound through the skin rather than through the ears. So all of this collectively I think of as hearing version two, and it’ll really add an entirely new dimension to the hearing space, often in combination, right? Like your guest and her daughter who had very profound hearing loss; she talked about how the combination of the cochlear implants and the Neosensory Buzz together worked much better than either one alone. And so I see all of these things coming together to just take hearing science and hearing loss mitigation, even at very severe and profound levels, to a whole different level than we’re at today. And it’s all very, very intriguing and very promising.
Dave:
Yeah, I love that. And yeah, you were referencing the conversation I had with David Eagleman, in which I also had Jackie Shull, who’s an audiologist and whose 13-year-old daughter is deaf. And just as you mentioned, she’s a bilateral cochlear implant user, and what’s fascinating is that she’s wearing them in conjunction with the Buzz now, and the way they described it was trimodal hearing. So basically, in a combination of lip reading, being able to hear the sound from the cochlear implant, and then, in addition, the Buzz, it’s just so much more context. And as she cited in the podcast episode, her speech discrimination is going through the roof right now because she’s now really able to decipher, okay, this is a three-letter word or a five-letter word, a three-syllable word or a five-syllable word.
Dave:
And so again, I totally love that whole point. I think that’s a brilliant way to put it, is this idea of version two. And I do agree with you. I think that a lot of what is being done right now is taking a lot of the components that we have and assembling them almost like Lego blocks, to come up with an entirely new way that we might be able to get there. And that is why I think Chatable is so interesting is that it provides another building block that you can layer onto this whole thing that I think is going to yield some very, very interesting opportunities.
Giles Tongue:
Yeah, totally. I suppose I can jump in there. I mean, what we have is an auditory neuroscience-based approach, which is then enabled and executed through AI. And we would argue that the fields of neuroscience and AI are merging and going to become one field eventually, but essentially what we’re talking about here is attention. We’re solving the problem of attention, which is the underlying brain-related foundation or problem that exists across all of these different situations that we’ve described: hearing loss, ADHD, autism, et cetera. And that can also be described to some extent as the cocktail party problem, which is, you’re trying to hear the person in front of you talk to you whilst everybody else in the room is also talking.
Giles Tongue:
Your ability to do that, in inverted [inaudible 00:14:36], is a normal functioning brain. That’s your ability to attend and focus on what you want to, and if you lose that ability, you’ve lost your ability to attend to that. So the problem that our AI is really solving is one of attention. So it’s an entirely different school of thought; it’s got nothing to do with amplification, hearing science and so forth. And of course we’ll need an element of that as well, because in many cases there are mechanical, ear-related issues, but as far as the brain and our science is concerned, what we’re focused on is giving people the ability to attend to what they want to listen to, which more often than not is speech.
Andy Bellavia:
Yeah. I’ve come to appreciate that actually, as I’ve gone down my own hearing journey, because there are actually times, if somebody behind me addresses me, I’ll completely miss it. And I’ve had conversations with my audiologist, who says, actually, yeah, that’s your brain’s failure to recognize that someone is speaking to you. So 100% of the time, if I’m looking at somebody, even in a crowded room, where I’m at today, I can still focus in on them and attend to them. But there are times when somebody back there will refer to me or say something to me, and it doesn’t even register.
Andy Bellavia:
It’s all about the brain recognition and not the hearing per se. And the minute I catch on that, all of a sudden I can still talk to them when they’re behind me, but it’s that brain recognition that somebody is talking to you. And what you’re saying is through the techniques you’ve developed, that same sort of thing that you’re talking about in the context of somebody with autism would also apply to a person without autism, but with hearing loss as well. Correct?
Giles Tongue:
Totally, totally. So one day, just as an experiment, I went through all of our testimonial videos and extracted the phrases where somebody was saying either listening effort, focus, or attention, and created a montage. And pretty much without prompting, everybody said something using one of those, if not all of those, phrases. So we’ve got cochlear implant users, we’ve got people who have severe to profound hearing loss, all sorts of different people in these videos, but all of them are referring to this listening effort or focus or attention issue.
Giles Tongue:
So if you were to watch it back, you’ll see it. I mean, in my own experience, I struggle with speech in noise, and you can really feel it: it’s like your brain is doing what your eyes do when you squint. You’re really trying to suck yourself into what that person’s saying, and that’s to do with listening effort and attention. So that is ultimately the problem we’re solving here with this technology.
Dave:
Yeah, that’s actually a really good analogy, the squinting, because I think you’re right, I think that’s kind of what’s going on. And I think that’s what’s so interesting here, and Andy, you might be able to speak to this a little bit: there’s a difference between the all-day-wear hearing aid user who has a problem that really warrants an all-day device, and a lot of other people. Particularly the people that are being cited in these WHO studies, one in five people around the globe, I would imagine that the majority of them are probably falling into this category where they experience it at a certain point throughout their day, but not all day. And so it’s the situational user that could use a boost here, situationally, on demand.
Dave:
And that’s where this is getting interesting because I think that what does that actually look like in practice though? So are you going to have something that maybe you’re at that cocktail party that’s always cited or you’re in that noisy bar? Andy, I’ll just present this to you. So Andy of seven years ago, where maybe you started to detect, okay, I’m struggling in some of these different situations, as somebody that now wears devices all day and you’re pretty immersed in the options that exist, how do you actually see that working in practice? Would you have something in your pocket like AirPods that you would just pop in and feel totally normal doing that if it provided that situational boost? I mean, I think that this is where I’m going with it is to say that it’s one thing to be able to technically do it.
Dave:
It’s kind of like in the podcast I had with David Eagleman, where we were talking about that Lenire project, where they were trying to solve tinnitus and the solution was these little things that you wear on your tongue that zap your tongue. Maybe in research it makes sense, but in practicality, there’s absolutely no way anybody’s going to be actually walking around with that. I just have a hard time imagining it, at least at scale. And so it’s kind of the same thing here: how does this actually manifest in a way that we think would be societally accepted and people would do it?
Andy Bellavia:
Yeah, that’s a really good question because you brought up two points there. One is the personal effect it will have by wearing such a device, and the other is the cultural acceptability. I think the cultural acceptability is coming quickly. I mean, for years I could walk around Silicon Valley and every third person would leave their AirPods in all day long. They would walk in and order food and things like that with them still in. And so I think that’s very quickly coming. And especially as you start to see the larger players putting more attention to it, wait till Apple really tunes up their hearing enhancement and actually starts to advertise it. I have this vision of a commercial. They do the really cool AirPods commercials with a person walking and they’re jamming to their tunes, and the world gets a little unreal around them and people are dancing.
Andy Bellavia:
Well, imagine that kind of commercial, where a person walks into a crowded place and you see somebody over there struggling to hear, and this person is interacting normally. And I think that’s going to very, very quickly become a cultural norm. Now on a personal level, I can tell you, close to seven years ago when I first started having these difficulties, I probably wouldn’t have thought about wearing a device around; it would have been a little too weird back then. But what I always tell people is it’s not just that you have a hard time hearing, it’s the fatigue factor that comes with it. Okay. So you’ve talked about mentally squinting, right?
Dave:
Yeah, totally.
Andy Bellavia:
You do a day like I would do, all days of meetings or conferences, especially in foreign countries (China I’ll name most of all, because I don’t speak Chinese and English is spoken variably with different accents), and you spend a whole day struggling to interpret people talking when you have a hearing loss in the first place, and you are wiped out, you’re drained. And so when you put that in the context of, say, a party that’s four or five hours in a noisy restaurant or bar, you start to kind of check out. You get tired of struggling to hear what’s going on with the people around you. You more or less just zone in on the two people that are in your immediate region. You start to isolate yourself from everything that’s happening and you find you’re more tired afterwards anyway.
Andy Bellavia:
I was absolutely amazed right after I got mine. I did a conference in Shenzhen in which I was both a speaker and a participant and the whole day was done and we had a dinner meeting afterwards. I got back to my hotel at the end of the night and I’m like, I feel great. I mean, the cognitive load that comes with struggling with hearing loss, especially in noisy environments is really something you have to experience. I don’t recommend you put yourself in this position, but if you are, the reduced cognitive load of being able to hear properly is something else.
Andy Bellavia:
And I can understand why that also would apply to people with other issues like autism, because the effort to concentrate in noisy surroundings would just be overwhelming. And so solutions that help you isolate speech from noise, even situationally, both for mildly hearing impaired people and for people with autism and other similar issues, are really an amazing thing coming.
Giles Tongue:
Yeah, I would agree with that. And I’ve heard both of you speak about different aspects of this in the past. And Andy, I know you’re excited about all the different functionality that your hearing aids can bring to your ear. And it can’t be far away, the moment when the person wearing the hearing device is the smartest guy in the room, right? Because they’ve got all this amazing functionality. And the people like Apple and Google and all the rest of them have the marketing power to make that happen. Augmented hearing, augmented reality, augmented life, that’s where this goes eventually.
Giles Tongue:
Which probably takes us into the idea of how TWS is a technology that’s going to help us get to that eventuality for more people. But I always envisaged this kind of really small form factor maybe in-ear form factor device, which is enabling someone to do so much more with their life, whether it’s [inaudible 00:24:06] hearing, taking phone calls, getting the voice UI, et cetera, that to me makes you the smartest person in the room, and no longer do you have the stigma, that [inaudible 00:24:18] can now disappear.
Dave:
Yeah. No, I agree. Here I am, having this conversation with two people that wear glasses, and it’s just like… But the point I’m making is it’s something that doesn’t even register. It’s just so normal. And I do think that that’s kind of the power that Apple in particular has here: they can almost make it so AirPods are like the new glasses. Where maybe you’re wearing them for… I guess there’s not a direct parallel in terms of fashion; I don’t know if anyone would say they’re fashionable, but the equivalent of that, I think, would be any kind of functionality that you’re using them for. And then maybe you’re using them in the same way that, if you have prescription lenses, you’re using them for augmentation, making that sense better, more or less.
Dave:
And so it could be the same thing. And so I think there’s a lot that’s probably going to need to be done in order to make this socially more and more normalized. I would love for us to get to the point as a society where you’re at a noisy bar and it’s no big thing if somebody just pops in one AirPod or pops in something like a Nuheara IQbud or something like that, where it’s just like, oh, okay, yeah, I just need a little bit of an enhancement while I’m in this setting. Because so much of the distractor there is actually going to be, am I comfortable doing that? And that’s why I always try to cite that if you go back three years ago, AirPods were actually something that people were almost embarrassed to wear, because they were, like, dragged when they first came out, and then they became totally normalized.
Dave:
And then they became extremely popular and kind of cool. And so that actually has a huge societal effect, because it’s become behaviorally acceptable to now wear these things for longer periods of time, and that cascades to all these different devices. So I think that’s one part of the equation: will we actually be comfortable wearing things like this in these situational amplification settings? The other side is the technical side, right? And this is where I want to go next, to have you really speak to this, Giles. I mean, this is something that I think you feel passionate about. Can you really help us to understand the state of latency, what that means in terms of the limitations that exist today, why that’s such a limitation, what that would actually manifest as, and then maybe a path forward in what that would look like if you really do solve latency?
Giles Tongue:
Yeah, sure. So if we talk about TWS, because that’s ultimately where we’re probably going when we’re talking about addressing the mild-to-moderate side of things. So what we want to do is address the speech-in-noise problem, which we’ve discussed as a situational thing. With TWS we have the benefit of the full range of sound. So it’s not a limited sound and it doesn’t have a compression, which can make speech in noise more challenging. So then we have a TWS form factor which doesn’t carry the stigma to it, and it can have affordability. So what’s next is an AI to solve the speech-in-noise problem. Now, the chips that we need for running AI exist already; we’re developing on a Knowles chip at the moment, which has both the processing power and the memory that we need for our situation or use case.
Giles Tongue:
With TWS, we have the full bandwidth, and that enables us to create a really attractive proposition for someone who’s used to [inaudible 00:27:57] normal sound. Now, the trouble with using AI generally for this sort of thing is that AI is a complex thing, a lot of calculations and so forth. And with that, there is an inherent latency. And when the Googles of the world are talking about moonshots, this is where the problems are. So you need to have an AI that can work on a chip to begin with, which is uncommon. You need to have the chip, which is available now, but in limited distribution. And then you need to have this AI working at a latency which is going to be imperceptible, which the industry, the hearing industry, will tell us is less than six milliseconds. That’s incredibly fast; that’s virtually nothing.
Giles Tongue:
So those are the challenges that are faced when trying to develop such a thing. Now, because we’re approaching this from a different point of view and a different starting point, and we know some better stuff through Dr. Andy Simpson, [inaudible 00:28:53] one of our founders, whom we’ve talked about before, we’re able to do this. So we have this zero latency AI, whereas typically the problem is that it’s a complex thing, trying to do lots of calculations on limited resources, and therefore there’s a long latency.
Andy Bellavia:
Why does that matter? I think it’s worth explaining to people listening why the latency [inaudible 00:29:15]
Giles Tongue:
Well, that was a curiously convenient moment for your signal to just go there because what I-
Dave:
Let’s try that again. Andy, say that again [inaudible 00:29:24]
Andy Bellavia:
Oh, I’m sorry. I think it’s worth explaining to your listeners why the latency actually matters.
Giles Tongue:
Yeah. So the problem with latency is you end up with a situation where, with a long latency, what you see and what you hear become distant relatives, and in the worst case that becomes unusable. Your brain just gets overwhelmed by the difference between what you see and what you hear. So really what you’re trying to do is crunch that down to the point where there’s an imperceptible difference between the two things. And there are lots of studies in the hearing industry that tell you that anything less than six milliseconds is imperceptible, and anything more than 40 milliseconds is really, really difficult to use. So your target is going to be six milliseconds, but really you don’t want to be anything over 40 milliseconds. That’s a real problem, and the consequence of that? Well, people just won’t use it. That’s when it goes back in the drawer. So that’s the kind of product definition that you want to be working towards.
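To put the thresholds Giles just gave in one place, here is a minimal sketch that classifies a total delay against them. The six- and 40-millisecond figures come from the conversation; the function and the example numbers are mine and purely illustrative.

```python
# Thresholds taken from the conversation above; the rest is an illustrative sketch.
IMPERCEPTIBLE_MS = 6.0   # below this, the sound and the speaker's lips stay in sync
UNUSABLE_MS = 40.0       # above this, audio lags the lips badly enough that people give up

def judge_latency(buffering_ms: float, processing_ms: float) -> str:
    """Classify the total delay between the live voice and the enhanced audio in the ear."""
    total = buffering_ms + processing_ms
    if total <= IMPERCEPTIBLE_MS:
        return f"{total:.1f} ms: imperceptible"
    if total <= UNUSABLE_MS:
        return f"{total:.1f} ms: noticeable, and hard work for the brain"
    return f"{total:.1f} ms: unusable, the device goes back in the drawer"

print(judge_latency(0.0, 2.5))    # the figure Chatable quotes a little later in the episode
print(judge_latency(16.0, 4.0))   # a hypothetical 256-sample buffer at 16 kHz plus inference
```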
Dave:
So I know that you had been alluding to some of the ways that you’re getting down to that six milliseconds. But I’m curious; you say, oh, Google and Facebook, these are moonshots for them. Without spilling the secret sauce here, I’m just curious: what do you think is the crux of why this is so challenging?
Giles Tongue:
It’s the approach. So typically the approach involves things that require a lot of compute power and a lot of calculations. Our approach doesn’t require that, and I’m not going to try and give away too much, but the traditional approach to separating sounds, labeling sounds and so forth, does tend to lead to a requirement of needing a lot of compute power, a lot of calculations. And on the whole, that’s impossible on a small device, but if you could get it to a small device, you’re still dealing with a lot of latency. Inherently within that [inaudible 00:31:25] to work is latency.
Dave:
And so just to put a nice bow on this, the way that this would actually look and feel is: you and I are having this conversation, we’re in a noisy pub, maybe I have a directional setting on where I have it facing you. And then your speech comes in and it is run in real time, at near zero latency, through that AI processor. And then it comes out into my ears, through the device that I’m wearing, as this highly refined piece of audio, more or less. Correct?
Giles Tongue:
Exactly. So you’re sitting there with your TWS on and you’re just listening to me, and it comes in, gets treated by our AI, and is passed through in less than six milliseconds. Well, less than two and a half milliseconds is what we’re working at now. So that is totally imperceptible to you. You wouldn’t know that there’s a calculation of extreme [inaudible 00:32:26] of magnitude going on in that process. It’s just imperceptible.
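As a way of picturing the signal path Giles is describing, here is a minimal streaming-loop sketch. It is not Chatable’s implementation: the frame size, the placeholder enhance() function, and the timing check are all assumptions, and the only idea carried over from the conversation is that each tiny frame of microphone audio has to be processed faster than real time so the total delay stays down at a few milliseconds.

```python
# Illustrative sketch of a real-time "mic in -> AI -> ear out" loop; not production code.
import time

SAMPLE_RATE_HZ = 16_000
FRAME_SAMPLES = 32                               # ~2 ms of audio per frame at 16 kHz
FRAME_BUDGET_S = FRAME_SAMPLES / SAMPLE_RATE_HZ  # each frame must be done before the next arrives

def enhance(frame):
    """Placeholder for the on-chip model that re-synthesizes the voice without the noise."""
    return frame                                 # identity here; a real model would replace this

def stream(mic_frames):
    out = []
    for frame in mic_frames:                     # frames arrive continuously from the microphone
        start = time.perf_counter()
        out.append(enhance(frame))               # enhanced audio goes straight to the receiver
        if time.perf_counter() - start >= FRAME_BUDGET_S:
            raise RuntimeError("model too slow: frames pile up and latency grows without bound")
    return out

played = stream([[0.0] * FRAME_SAMPLES for _ in range(500)])
print(f"played {len(played)} frames, each within a {FRAME_BUDGET_S * 1000:.0f} ms budget")
```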
Andy Bellavia:
I’m also curious about the neuroscientific part of all of this. All right? I realize you’re not the scientist of the crew, but at the same time, you’ve also said that it’s not only simply the way you treat the noise around you, but how you are delivering the sound to the brain in a different way than normal. Can you explain that a little?
Giles Tongue:
Yeah. So the problem that we’re helping with is impaired speech processing in the cortex. So the solution is typically found in the healthy cortex; it’s a brain-related issue, and what we’re doing is replacing that impaired function. So that’s what makes it an artificial intelligence in a sense. We’re artificially doing what the healthy brain would be doing, and how we do that, well, that’s possibly [inaudible 00:33:23] question.
Dave:
I love it. This is, by the way, kind of like a neuroscience podcast now; I just realized that we’ve got a lot of brain talk on here recently, but this is great. I mean, I think it’s really fascinating to hear these different approaches. A lot of these are issues that stem from the brain, and so I think it makes sense that you’re now seeing this attention, and maybe it’s always been here, but I’m just now realizing it, that there are a lot of novel approaches from the world of neuroscience that I think will jibe nicely with the world of audiology. I think there are a lot of really cool synergies that are starting to become apparent. And I love this intersection of the two. I think that there’s just tremendous possibility as the two become more and more blended, more or less.
Giles Tongue:
Yeah, totally. I mean, we’re in a world here with Chatable, of neurons and neuroscience and processing of signals. So it’s quite a fascinating and interesting world, and I should warn you that I’m not the one with the PhD and so I probably will have to draw the line there in how much further I can go in this [inaudible 00:34:36]
Dave:
No, no. You’re fine. This is really interesting though, because like I said at the beginning of the podcast, what I constantly hear people ask is, what do the next three to five years look like in this space? And I think there are two answers to that question. One is what’s going to happen with the software, what’s going to happen off of the notion that we’re all wearing devices like AirPods or hearing aids in higher prevalence, right? And so that’s where I think you really see Clubhouse, Twitter Spaces, social audio, spoken word audio, basically an audio internet, more or less. So I think that there’s going to be a lot built around that notion that we have things in and around our ears for longer periods of time. I’ve also mentioned on this podcast before that I think it’s really fascinating to think that the precursor to augmented reality might really start with the ear, which is a whole tangent of conversation that we could go down.
Dave:
But I do think that the other side of this is the hardware. What’s going to happen with these innovations on the actual hardware front? And this is, I think, a really interesting concept: the approach to speech in noise. We keep hearing about AI applications for hearing aids and AI integration into hearables and all that, but I think this is a really tangible example of what that might actually manifest as in the market. One of the largest problems in hearing health, speech in noise, the thing that’s probably at the root of most mild-to-moderate hearing loss complaints, and the best solution to it might actually be built around an AI engine like Chatable’s. And so then the question again comes back to, A, how do you all do that from a technical standpoint? And then also, from a behavioral standpoint, what does that look like when we have these types of solutions that work? Are they going to be socially acceptable?
Dave:
So I think there are a lot of different threads that are parts of this whole equation. But I do think that with AI in particular, these are the kinds of use cases that will come from it, because you’ll hear a lot about the buzzwords themselves at the top-level kind of application, we’re going to have AI in hearing aids, but there’s not a lot being said about what that actually means in terms of how it manifests in the product set. And so I think this has been a really interesting conversation to hear about the tangible way that we might see this come to market. What do you think about that, Giles?
Giles Tongue:
Yeah, spot on. And I like the mention of AR there; that is a whole other tangent, and we could go down that route, but just to indulge for a second: if we are able to enhance and improve the sound that’s coming in, that effectively is changing your reality. So your reality is what you perceive. Andy, I know you’ve talked about this. Your reality is what your brain is perceiving around you, the signals that you’re sending in and that are being accepted. So if we’re able to improve the signals that your brain and cortex are receiving, then effectively we’re improving your reality, which is pretty interesting. And then what if we add some sounds and layers on top of that?
Giles Tongue:
Then effectively we’re augmenting your reality. And that takes us into a whole new world of [inaudible 00:38:02], which we’ll probably save for another day. But if we could paint the picture of what this looks like: this is a TWS or a hearing aid being worn in a noisy place, and you switch on a mode, it’s a situational thing, you switch on the mode and now that sound is being processed so that, with our technology, we’re recreating the voice. So there’s no background noise on it. So you just get the voice of the person you want to speak to coming through, which is giving you that attention back.
Giles Tongue:
So you’re attending to that person; this is the solving of that cocktail party problem. I want to speak to that person in front of me, so I press the button and now I just get the speech of that person in front of me. Fantastic. And that’s helping people with hearing loss, with autism, ADHD, APD, and all the others where there is a noise, disturbance, and distraction related problem. You see? So now, if we have that technology inside the TWS, all of those people can be helped in those difficult situations.
Dave:
Yeah, I think that’s really spot on. And I just want to say one thing you mentioned earlier. What’s kind of interesting, what’s going on in my mind, is that I look at Andy with his hearing aids and I have almost a little sense of envy. I don’t envy his hearing loss, I don’t envy the reason why he’s wearing them, but there is an element of superpowers to it. And I think that’s actually going to become more and more pronounced. And I do think that ties into what you were saying about augmented reality, where yes, there is a version of augmented reality that’s a visual overlay that we’ve seen in a number of different movies, but there are other aspects to it too, which is maybe it’s just providing you with super hearing: taking a hearing loss, giving you a device that allows you to augment it in such a way that he has abilities, in many ways, that not many others do.
Dave:
And so there is an element of that which I think is a really interesting way to think about this: in many ways, I think that the hearing aids of today, and a lot of the new hearables that are coming onto the market, are actually going beyond just restoring a sense, going a step further and saying, what if we actually give you the ability to do this? So I find that to be interesting as well.
Giles Tongue:
Yeah. We’ve thought often about this, the sense of how does one market such a product, and we’ve always been conscious to not… And I’m thinking about our app, when we were pushing our app, not to come across as, this is a walking stick, this is a kind of disability aid tool type thing. This is something that’s going to really help you have a better time in those situations. So yeah, it comes back to: you’re going to be the smartest guy in the room.
Dave:
Yeah. And I just want to say, we mentioned Geoff Cooling at the beginning of the podcast. Geoff actually wrote a really, really good piece on Hearing Aid Know not long ago, where he was basically saying the same thing, which is: stop positioning hearing aids as this thing that’s associated with older adults and aging and all that. On the contrary, it’s something that can help you live a more vibrant, more youthful life. And I think that’s actually a really interesting way to reposition the whole conversation, this notion of stop with the walking stick and the cane and the stroller and all that, and start talking about it more along the lines of, this is actually a restoration of youth in many ways. Andy, thoughts?
Andy Bellavia:
Yeah. No, I loved that article for just exactly the reason you said, it’s not about mitigating a deficiency, it’s about enhancing your life 100% at any age, at any age.
Dave:
Yeah, totally agree. As we kind of come to the close here, I mean, not to pat myself on the back, but no, I think you two have done a really good job of steering this conversation along very efficiently, because I think we’ve gotten down to what I do think will be a big part of the next wave of hearables and the next wave of hearing aids: this idea of utilizing the new advances in the underlying technology, i.e. AI, to solve some of the traditional challenges that have presented themselves. And seriously, speech in noise has to be up there as the most commonly cited thing I hear people describe as one of the biggest things that plagues the hearing loss population. So I do think that’s probably ripe for some new thinking and some new methods of how we attack it.
Giles Tongue:
Yeah, and I think so. And bringing it back to the moonshot projects and why all these big tech companies are getting involved here: let’s remind ourselves, one and a half billion people with hearing loss, 80% of them are struggling with speech, and where 100% [inaudible 00:43:09] the speech in noise, but 80% of them are mostly struggling with speech in noise, and yet 15 million hearing aids are sold a year. So there’s [inaudible 00:43:19] gap here between what’s happening in today’s solutions and what could be happening. And it’s the availability of new technology such as AI, using the processors that are now available in tiny form factors, which is going to enable this next generation of devices to become useful to that enormous group.
Andy Bellavia:
Yeah. And it’s funny, when I was shoveling mulch and listening to the David Eagleman episode, and then over the course of this conversation, one thing that really occurred to me is how essential the role of the audiologist will become in all this. Dave, you and I have talked about this multiple times, right? What are the threats and what are the opportunities in the audiology profession going forward? And just listening to Jackie describe how she was able to tune up the combination of the cochlear implants and the Neosensory Buzz to get an optimum solution for her daughter. Or you take something like what Chatable is doing, especially when it gets implemented into a true wireless device: how are people going to sort all of those out? I mean, at the end, David talked about how they’re working on a variation for people with high frequency hearing loss, which would be another route to the restaurant problem.
Andy Bellavia:
If you were getting the sensory input through the skin to help you augment your high frequency hearing when your low frequency hearing was still okay, that’s fascinating. I’d love to try that one because that’s the situation I’m in. But if I’m an ordinary person, how do I know what’s the optimum solution? I mean, some really intriguing high level solutions are proliferating, but because they are proliferating, it’s going to be difficult for the ordinary consumer to sort them out. And I think therein lies the opportunity for the audiology profession, once they’ve unbundled from device sales, so that they can provide the whole range of solutions and advice for people with all different kinds of hearing or other hearing sensory issues. So a lot of opportunity there and really a lot of exciting possibilities.
Giles Tongue:
Yeah. And just to bring that around, I mean, I’ve spoken to Jackie myself a while back about technology and how it could be useful to her, but it may be interesting for you to know that in our investment team we have Mark Cuban, who’s a big fan of AI, and we have five of Europe’s leading audiologists as investors as well. So that probably tells you how this is all coming together. And absolutely, it’s the case that audiologists are interested in just trying to help the person who’s in front of them, whatever their need is right now.
Giles Tongue:
And there’ll be lots of people who are coming in to see audiologists looking for a solution for their mild, [inaudible 00:46:16] say, hearing loss, which is particularly a speech-in-noise issue. And I think that devices using our technology to help with speech in noise are going to be something that they’ll be really excited for. I calculated that about a third of the people going into a clinic are currently leaving with nothing, whereas soon they’ll be able to leave with something that’s going to help them just for that situational or infrequent, challenging speech-in-noise environment type of moment in their lives.
Dave:
Yeah, no, I agree with everything that was said here and Andy, just to follow on your point, it’s like the landscape is just becoming more and more populated and therefore it’s becoming more complex. And so there’s actually a huge opportunity that’s being built around all of this, which is to say that the more knowledgeable and familiarized you are with all of these different kinds of solutions and the more tools that you have in your tool belt, the more value that you ultimately can provide. You can match people appropriately to all of these new types of solutions.
Dave:
You can augment them, you can combine them. And what does that do? That allows for you to wow and blow your patients’ minds. So they walk out of there and they say, “I sure am glad that I’m seeing Dr. Dave, or whoever, where I’m able to make sure that I’m getting the optimal type of solution for me.” And so that’s what’s so exciting about all of this to me, is that there are just more and more types of things that you can add to your suite of services, and ultimately what that I think equates to is just a higher perceived value proposition in the patient’s eyes.
Dave:
And that, to me, is going to be at the root of success across the next five to 10 years, because a lot of the pure device sales side is getting commoditized. And so where does the value lie? The value lies in matching people to all kinds of new solutions and stuff like that. So with that, I just want to wrap this one up. It’s been a great conversation with you two; I really appreciate you coming on, and thanks to everybody who tuned in here to the end. We will chat with you next time. Cheers.