Audiology, Daily Updates, Future Ear Radio, Hearing Healthcare, Longevity Economy, Podcasts, Wearables

063 – David Eagleman – Neosensory’s Buzz and The Next Generation of Hearing Science (feat. Jacque Scholl, AuD & Kevin Liebe, AuD)

Top Left to Right: Jacque Scholl, AuD, David Eagleman, Dave Kemp, Kevin Liebe, AuD

This week on the Future Ear Radio podcast, I’m joined by David Eagleman – Stanford Neuroscientist, New York Times best-selling author, and Co-founder/CEO of wearable startup, Neosensory. To accompany me for this interview, I brought on two others. Jacque Scholl, AuD is both an audiologist and a mother of a 13-year-old deaf daughter, Jade, who has been wearing David’s device, The Buzz, for months. Kevin Liebe, AuD is an audiologist, CEO of HearingHealthMatters.org, and a scientific advisor for Neosensory.

Our conversation revolves around Neosensory’s wrist-worn wearable device, The Buzz, which is designed for deaf individuals and people with severe-to-profound hearing loss. The Buzz converts ambient sound into haptic vibrations using an array of vibratory motors. The idea is that as the user continually wears The Buzz for extended periods of time, the user’s brain will naturally begin to re-wire itself to better process and make sense of the subtleties in vibrations and eventually be able to infer meaning from the vibrations, similar to how blind individuals can infer meaning from braille.

Neosensory’s The Buzz

This device stems from the research David has been conducting in his lab for years around sensory substitution, which he details throughout his newest NYT best-selling book, Livewired. In Livewired, David explains how, in the absence of a sense, our brains are capable of rewiring themselves to retrieve, through an alternative channel, the data that would otherwise reach the brain via one of our senses. In essence, modern technology transmits the data traditionally delivered by one of our senses.

The goal for this conversation was to better understand The Buzz, its various uses, and why this device represents a new generation of hearing science. Science fiction is quickly becoming reality.

My Takeaways:

  • I read Livewired in preparation for this interview, and hoo boy is this a doozy of a read. I admittedly know little about neuroscience, but David does a terrific job of distilling really complicated topics into easy-to-read literature. I guess David is referred to as the “Carl Sagan of the Brain” for a reason.
  • I loved the portion of the conversation around “tri-modal” hearing. The Buzz can be used to augment sound and lip-reading, providing even more context through a third modality (haptic vibrations).
  • Jacque’s story of her deaf daughter, Jade, using The Buzz stole the show. This was an incredible story in-and-of-itself, but to hear her share it with David live was something special. You could just tell that these type of stories are extremely motivating for David to keep building.
  • Jacque mentioned that Jade has seen a dramatic improvement in her speech and ability to discriminate speech since she started wearing the Buzz about a month ago. She said that Jade is also much more aware of the sounds around her, such as Jacque calling her name from behind or the dogs barking downstairs.
  • David mentioned that his team at Neosensory is currently working on 20 different applications for The Buzz. The two he specifically cited during the podcast were a new setting, which users would be able to enable, designed for high-frequency hearing losses, and a tinnitus management application, since the device can be used for bi-modal stimulation, similar to the LENIRE tinnitus management method.
  • David mentioned that there have been attempts at sensory substitution going back to the 19th century. Technology has now caught up to the point where you can pack a sensory substitution device into something that looks like a Fitbit.
  • Finally, each day I grow more excited about the emerging technology that’s being designed for deaf individuals and folks with hearing loss. Having someone of David Eagleman’s caliber (+ the whole Neosensory Team) operating in the hearing health industry and building totally ingenious new solutions for hearing professionals and their patients is incredibly cool. I can’t wait to watch this emerging tech continue to mature and proliferate.

-Thanks for Reading-
Dave

EPISODE TRANSCRIPT

Dave:

Okay. So we have an awesome episode today. We are joined here by three great guests. I am joined by Kevin Liebe, Jackie Scholl, and Dr. David Eagleman. So why don’t we go one by one and introduce ourselves? Tell us a little bit about who you are and what you do, starting with you, Jackie, ladies first.

Jackie Scholl:

Oh, thank you, Dave. Well, it’s great to be here. Thank you. I am an audiologist. This is a second career for me and I was in private practice. I’ve worked in ENT practices, primarily focused on cochlear implants. I had my own practice for about 12 years and I sold it about a year and a half ago. So I’m working primarily with my daughter, who is 13 now, Dave, if you can believe that. And she is deaf and we adopted her from China when she was five and a half years old. She had never heard a sound, she had no language. And unbeknownst to me, I thought she had a mild to moderate hearing loss, but actually Jade is the deafest patient I’ve ever had in my life. She has hypoplastic eighth auditory nerves, the right worse than the left. But she hears with two cochlear implants, she has full access, and she’s learning to read and speak and do all the things that… And sass. She’s 13. I did tell you that, right?

Dave:

Well, great. Thanks for being here, Jackie. Kevin?

Kevin Liebe:

Yeah. Dave, thanks. This, I think, is the second time I’ve been on the podcast. So thank you for having me back. I’m also an audiologist, for about the last 12 years or so, mostly in private practice, but I did spend about four years on the industry side of things. So I’ve had a pretty good mix of ENT, private practice, hospital, kind of a broad spectrum of patient experience. But also a lot of people know me through Hearing Health & Technology Matters, which is the website that I own. And it’s opened the doors to a lot of interesting opportunities, which is how I came to meet David Eagleman, who I know is going to introduce himself next, and learn about the Buzz, which I think is a pretty cool technology. So yeah, I guess that’s about it.

Dave:

Awesome. Thanks for being here, Kevin. And last but not least, David Eagleman.

David Eagleman:

Hey, Dave, and Kevin, and Jackie. So yeah, I’m a neuroscientist at Stanford and I run a company called Neosensory. For many years I’ve been studying how signals get to the brain and how we might push signals to the brain in other ways, especially in noninvasive manners. And so that’s how I ended up spinning off this company, which we’ll talk about.

Dave:

Beautiful. Well, if you can’t discern already, this conversation is going to largely pertain to a lot of David’s work. In addition to his company, Neosensory, he’s also written a book called Livewired, which basically details a lot of the science that goes into the Buzz and sensory substitution. So I figure between the four of us, having Kevin provide his clinician’s perspective, and Jackie not only as an audiologist providing that perspective, but as the mother of a deaf child who has been using the Buzz, I think she’ll have a really good perspective on this as well. So why don’t we kick things off, David, and share a little bit about Livewired? Maybe even the definition of what that means, how it might be a little bit more suitable than, say, neuroplasticity, and ultimately the basis of this book and some of the key takeaways from it.

David Eagleman:

Okay, great. So yeah, Livewired is my eighth book. And the reason I wrote this is because when you learn about the brain, we tend to learn about it as a static system, like, okay, here’s the visual part of the brain, here’s the hearing part, the touch and so on. But in fact, the whole system is extremely dynamic and is always moving around and learning, and every memory you have, every new skill you learn and so on represents a change in the brain. Now, technically in the field, we call this neuroplasticity. I happen to not be that impressed with the word plastic, because it was coined 100 years ago to represent the way that we can mold this material, plastic, into place and it will hold that shape. But in fact, you’ve got 86 billion neurons in the brain that are constantly moving and shifting and changing every second of your life.

David Eagleman:

And so it’s a much larger concept than plasticity. So that’s why I coined the term live wiring, but the detail doesn’t matter. It’s all about the flexibility of the brain. One of the things that really got my attention starting about 15 years ago was this issue of sensory substitution, which is this idea of look, we’ve got eyes and ears, nose, and mouth and so on, and we’re used to these sensory devices, but of course, what these are all doing is turning information from the outside world into spikes in the darkness of our brain. And the question is, could you get those spikes in there through a different unusual channel? And so the idea of sensory substitution is that you take something let’s say like hearing, and instead of pushing it into the ears, could you get that same content in some other way?

David Eagleman:

So that’s what I ended up building in my lab: a device. We originally built this as a vest that’s covered in vibratory motors; it captures sound and turns that sound into patterns of vibration on the skin. So it’s essentially doing exactly what the inner ear, the cochlea, does. It’s breaking up sound from high to low frequency and then putting that on different spatial locations on the torso. And I ended up getting venture capital funding for this and spun this off as a company called Neosensory. And under the aegis of that company, we shrunk this down to a wristband. So it’s a Fitbit-sized device that captures sound, it has a very sophisticated processing chip on it, and it breaks the sound up into frequencies from low to high. And so you’re feeling these vibrations on your skin.
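
To make the low-to-high frequency mapping David describes a bit more concrete, here is a rough, illustrative sketch in Python of how a short frame of audio could be split into frequency bands and mapped to motor intensities. This is not Neosensory’s actual processing chain; the frame length, band edges, and four-motor layout are all assumptions made purely for illustration.

```python
import numpy as np

def sound_frame_to_motor_intensities(frame, sample_rate=16000, n_motors=4):
    """Map one short audio frame to intensities for n_motors vibration motors.

    Low-frequency energy drives the first motor and high-frequency energy the
    last, loosely mimicking the cochlea's low-to-high tonotopic layout.
    """
    # Magnitude spectrum of the windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

    # Split roughly 100 Hz up to Nyquist into n_motors log-spaced bands
    edges = np.logspace(np.log10(100), np.log10(sample_rate / 2), n_motors + 1)
    intensities = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        intensities.append(band.mean() if band.size else 0.0)

    # Normalize to 0..1 so each value could drive a motor's vibration amplitude
    intensities = np.array(intensities)
    peak = intensities.max()
    return intensities / peak if peak > 0 else intensities

# Example: a 20 ms frame containing a 300 Hz tone mostly activates the lowest-band motor
t = np.arange(0, 0.02, 1 / 16000)
print(sound_frame_to_motor_intensities(np.sin(2 * np.pi * 300 * t)))
```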

David Eagleman:

Now, the question is: could a person who is deaf come to understand what’s happening in the auditory world via vibrations on the skin? And the answer is yes, it works, because it doesn’t matter how the information gets up to the brain; the brain looks for things that are relevant. It looks for correlations in the world. So if you see the dog’s mouth moving and you feel the barking on your skin, your brain puts that together. And I think the way to understand this is that you didn’t know how to use your ears as a baby. You’re born into the world and you figure it out. You watch your mother’s mouth moving, you’re getting signals in your ears, and you put these things together, and eventually you get good at understanding how to interpret auditory data and what to make of it.

David Eagleman:

And of course, what quickly happens is that you don’t think, oh, I’m hearing Eagleman’s voice, and I’m hearing some high frequencies and some low frequencies; instead, you just feel like you hear my voice. And that’s the same thing that happens with people wearing this wristband called Buzz. On day one, they’re actually better than chance at being able to identify particular sounds. So we present a sound to the wristband and we say, “Hey, was that a doorbell or a dog barking?” And they make their best guess. Then we present another sound. We say, “Was that a car passing or a fire alarm?” Or whatever. And we do lots of these tests. In fact, people are pretty good on day one at being able to understand the sounds of the world through their wrists, but as time goes on, they get better and better at it.

David Eagleman:

And so, to my mind, the key thing about this is it’s totally non-invasive. You just put it on like a Fitbit, and it’s inexpensive. It’s a fraction of the price of a hearing aid and a small fraction of the price of a cochlear implant. And so this is what we’re doing with Buzz.

Dave:

Yeah, I think when I was reading your book, what I really appreciated was some of the different analogies and comparisons that you make to nature. You draw on a lot of different animals and the ways they naturally evolved, like the way a mole, for example, sees the world through all of its little fingers, or the way a bat does with sonar. And so I find this really interesting. You had a quote in your book where you said, “Your brain is locked in a vault of darkness and silence, and all it ever sees are electrochemical signals.” And so I find this to be really interesting, because I think as somebody that isn’t super familiar with the world of neuroscience, you sort of assume that the reason you hear the way you do, or see the world the way you do, is just that that’s the way it is. But in reality, it’s a natural evolution.

Dave:

And so there isn’t necessarily a reason, other than from an evolutionary standpoint, as to why our eyes operate the way they do, because in the grand scheme of things, you can effectively get those same chemical reactions in your brain to happen through your different senses. Is that right?

David Eagleman:

That’s exactly right. Yeah. And the reason we chose touch is because your skin is actually the largest organ of your body, but it goes completely wasted in modern life. We’re not using our skin for much of anything, but we can actually push a lot of information through it. So that’s why we chose that. And as I said, we started with this vest because you can push a lot of information through the torso, but it turns out nobody wants to wear a vest around, even under their clothing. And so that’s why we ended up with this thing that looks like a Fitbit, which is really convenient.

Dave:

Yeah. And one thing I wanted to ask about this too: with the form factor that you’re using now, when you initially developed the vest, was part of that because of the limitations of the technology at the time? Here we are in 2021, and you can pack more into a wearable device today. And I’m curious where your thinking is as to a year or two from now. Is that going to present even more opportunities? Will you be able to do even more with a Fitbit-sized device that five, six years ago was previously unimaginable, given the constraints of the technology?

David Eagleman:

Yeah, that’s exactly right. As tech gets better, we’re looking at different ways of passing this information in. Right now, we have what are called linear resonant actuators, which are these vibratory motors. It’s the same thing that buzzes in your cell phone. But there are new technologies coming out where it’s like a strip of little bubbles, essentially, that get filled with air, and these are even more rapid; you can feel these and you can fit more of them on there. One of the things that we measure and keep careful track of is what’s called two-point discrimination, which is: can your skin distinguish two different points on the skin? Different parts of the body have different thresholds for how far apart the points need to be.

David Eagleman:

But what we’re doing nowadays is we’re using haptic illusions, in other words, illusions of touch that allow us to get even more information in there. So just as an example, if I turn on two of these motors that are right next to each other, you’ll actually feel a virtual single point in the middle. And as I change the amplitude of these two motors, I can move that little virtual point around. So we can actually hit a lot of very fine, specific points, even with the technology we have, but yes, it’s only getting better.
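
As a rough illustration of the “virtual point” idea David describes, here is a small sketch of amplitude-weighted panning between two adjacent motors. The linear cross-fade and the four-motor band are assumptions for illustration only, not Neosensory’s actual haptic-illusion algorithm.

```python
def virtual_point_amplitudes(position, n_motors=4, max_amp=1.0):
    """Drive two adjacent motors so the wearer feels a single 'virtual' point.

    position: 0.0 .. 1.0 along the band (0 = first motor, 1 = last motor).
    Returns one amplitude per motor; only the two motors straddling the target
    position are active, weighted so the perceived point sits between them.
    """
    amps = [0.0] * n_motors
    scaled = position * (n_motors - 1)
    left = min(int(scaled), n_motors - 2)   # index of the motor just below the target
    frac = scaled - left                    # 0 = on the left motor, 1 = on the right motor

    # Simple linear cross-fade between the two motors; a real system might use a
    # power-law weighting instead, which is another assumption, not from the episode.
    amps[left] = max_amp * (1.0 - frac)
    amps[left + 1] = max_amp * frac
    return amps

# A point 60% of the way along a 4-motor band activates motors 2 and 3 unequally
print(virtual_point_amplitudes(0.6))   # roughly [0.0, 0.2, 0.8, 0.0]
```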

Dave:

Yeah. Because I think what’s so interesting about this is this idea of live wiring, rewiring your brain to create this new sense, more or less. And what I’m curious about with, say, the Buzz, and I’m going to be really curious to get some of Jackie’s input here too, as her daughter wears this, is how long it typically takes for that to occur. Because I know in your book, you mentioned that with young children, for example, their brains are maybe more malleable and they just sort of rewire at an even quicker pace. But for, say, an adult that was born deaf, or is profoundly hearing impaired, if they were to be given this device, how long would it take for their brain to start to naturally perceive what signals are being communicated through either the vest or the wrist-worn device?

David Eagleman:

Yeah, I’ll just say we’ve done studies on this and the younger you are, the faster you can learn on it. But that said, even people in their 60s and 70s can learn this. It just takes a little bit longer. It also has to do with how much you’re out in the world. So if you’re running around and seeing things and looking at stuff happening and feeling it on the Buzz, it happens faster. So there’s two things I should mention. One is that the improvement goes up linearly with time. And the reason I mentioned that little detail is because there are some tasks that you can learn where there’s a sudden jump instead of a linear progression. And that indicates that there’s some conscious aha moment, but in our case, it’s not a conscious thing. It’s that your brain is actually learning better and better how to interpret these signals, presumably just the same way your ears did when you were a baby.

David Eagleman:

And the second thing I was going to mention, the main thing, is that what we have found is that at about three months, three to four months, there’s a change that happens where people start perceiving this consciously. So in other words, it’s not that they’re feeling stuff on the wrist and translating and saying, “Oh yeah, that’s a dog barking, that’s a lawnmower, that’s a doorbell, that’s a microwave beeping.” Instead, they’re just having what is called qualia, the subjective internal experience. Just the way that with your ears you say, “Oh, I’m just having an experience of a dog barking, I’m not translating this thing,” it’s the same thing here. So I’ve interviewed people who’ve been wearing the Buzz for a long time, and I say, “Okay, look, when you hear, let’s say, the dog bark, do you feel a buzz on your wrist, and then you think, okay, what was that? Was that a dog?” They say, “No, no, I’m just hearing the dog bark.”

David Eagleman:

And as crazy as that is, it’s precisely what we’d expect, because that’s how all of our organs work. Your brain works to extract patterns and eventually make just a private subjective experience for you.

Dave:

Which is like how braille works too, right?

David Eagleman:

That’s right. When you watch a person reading a novel in braille, they laugh, they cry, they have tears running down their cheeks. It doesn’t matter that the information is getting in there via bumps on the skin, all that matters to them is the content of it.

Kevin Liebe:

Yeah.

Dave:

Yeah. So Jackie, I’m going to kick it over to you here. As the mother of a child, Jade, who’s using this, what’s been your experience like? What’s been your child’s experience like? Very curious to hear about this.

Jackie Scholl:

Sure. Well, keep in mind, Jade’s not your typical cochlear implant recipient. She was implanted very late, at five and a half, never having had sound, and having hypoplastic nerves on top of that. But we moved mountains for her. She has completely full access with her cochlear implants. She has a custom-made device from the company, which actually uses triphasic pulse stimulation versus biphasic, which allows us to give her more without facial stim. So she’s very unique, in that she does a lot of stuff that no one ever anticipated her being able to do, let alone a kid with hypoplastic nerves. But when we got the Buzz, what’s been really, really interesting is the first thing that I noticed, and I have to tell you, I just put it on her. Actually, I didn’t know until recently that I could turn the mics down a little bit. She’s probably been getting overstressed with the dogs and stuff.

Jackie Scholl:

But the thing I noticed first was that even though she has two cochlear implants and she hears at normal levels, she doesn’t always hear. If you’re behind her and you say, “Jade,” let’s say we’re in the grocery store, “Jade, Jade,” sometimes I have to actually go up to her. But with the Buzz, she feels it before I think she hears it sometimes, or before she notices that she hears it. So I get much faster responses from her, because it’s like, oh. And like you said, David, I don’t know that she’s consciously saying, “I feel that,” it’s just signaling her quicker than what she’s getting otherwise. And for the mother of a child who’s very deaf, wow, safety is a huge thing. I mean, you think about people who are deaf and hard of hearing and how very vulnerable they are to any kind of attack or kidnapping, all of that stuff that we are scared about as parents. That was the first thing I noticed.

Jackie Scholl:

The second thing, and one of the things that we’ve worked tirelessly on, is speech, speech, speech, speech. Speech is important. And language; I don’t mean to say one without the other. We do sign support and we talk, because that’s what Jade wants. One thing I’ll tell you is kids know what they want and that’s what they’re going to do, but you can’t always understand her, because everything has one syllable. So with Buzz, it has been a particularly fascinating thing that we’re able to do, because I’ll hold it too, and I’ll go, baseball. There are two syllables in baseball. Do you feel it? It’s two. And so her language development in the last month has just shot off the charts. I just had a conference with her; she does one-on-one language, literally 30 minutes before this, and they’re like, wow, it’s crazy. And with voicing, if we’re trying to get her to voice versus to call, it’s a much stronger feeling, right, David? You’ve designed it to… Yeah.

David Eagleman:

Yes, exactly right.

Jackie Scholl:

And so what we try and do is say, this is how it feels and sounds, aah and then she has to try and make that feel the same way. It’s been amazing.

Dave:

That’s fascinating.

Jackie Scholl:

Yeah, it’s been really… I have to tell you, David and Dave and Kevin, everyone, I am really thrilled for our children, especially our children who have hypoplastic nerves. There have been very few options for them. Most places would not have implanted Jade. Me being in the community, hers was kind of a special case, but this on top of that, I think it’s a game changer for those kids. Thank you, David.

David Eagleman:

Thank you. I’m so glad that’s working.

Dave:

Yeah. I mean, I think what’s fascinating is I knew that there were a lot of opportunities around the alerting features. Like you mentioned at the beginning, you’re at the grocery store, “Jade, Jade,” and she can feel that there’s somebody behind her trying to communicate. But I do find this whole notion really interesting of her having to sort of mimic the sound around her and almost harmonize with it, so that she can see, okay, this is the way that it sounds, from a speech discrimination standpoint. And the ability to discriminate, like I remember you told me earlier, Jackie, about avocado. It never really registered with her that that was a four-syllable word, but now she can feel it.

Dave:

And so that, to me, is where the bigger opportunity is here. And I’m curious, David, when you hear this, is this the bigger picture? Going back to Livewired, if she’s getting these reports that her ability to decipher speech is just going up and up and up, is that in large part due to the rewiring in her brain that’s happening as a by-product of the experience of wearing something like this?

David Eagleman:

That’s exactly right. I mean, everything is about input and output and making these cycles of feedback. So obviously the reason that we are able to articulate and figure out how to speak is because we’re hearing, and babies go through a long babbling phase where they’re trying things out and getting feedback, and some utterances get a big smile from their mother and others get a look of confusion. And yeah, this is how you figure out language. But you need that feedback loop to get established. And that’s why it’s so unbelievably useful when you have an inexpensive thing you can strap on, and then you get that feedback.

Dave:

Yeah. And so, kicking it over to you now, Kevin, as not only a clinician, but also somebody that is very immersed in this industry with Hearing Health & Technology Matters, you cover a lot of the emerging technologies that are going on. And I feel like you probably have one of the best fingers on the pulse of the state of the industry and the wide variety of professionals that encompass it. So just from a business standpoint, how do you see this being applied in hearing healthcare, broadly speaking?

Kevin Liebe:

Yeah. And thanks, Dave. And gosh, Jackie, I mean, that’s incredible. I mean, I’ve heard the story about your daughter thirdhand from Dave, but that’s pretty amazing. It makes me feel good to hear about how much progress she’s made. That’s amazing. But Dave, to your point, if we’re talking at a business level, or at least at the clinical level, about where I see this being positioned, I’ve had a lot of conversations with David and his team about where this technology fits into hearing healthcare, I think, more broadly. As a clinician, honestly, where a lot of this came in for me is, I mean, every single day I’m seeing patients that run the gamut of mild to profound hearing loss. And what you very frequently see, and Jackie, having worked clinically, I mean, you’re going to encounter patients, when they’re in that moderate and more severe territory, who are going to have safety concerns.

Kevin Liebe:

So to me, I don’t infrequently hear patients saying, “Well, I don’t take my hearing aids off at night because I’m worried I’m not going to hear my alarm, or I’m worried I’m not going to hear the door.” And I mean, this is just a really awesome complement to existing technology. It doesn’t have to take the place of a cochlear implant or a hearing aid. But I really see this as: if somebody has a moderate hearing loss or worse, they could really benefit from this, even just from a safety standpoint. But then again, the more significant the hearing loss, the more speech benefit. So to me, just as a clinician, if anybody is telling me they’re concerned that they’re not hearing things at home, and they’re worried about safety, I mean, I’m always going to recommend the Buzz. Because, we didn’t even talk about it, but you can use a smartphone app and you can customize the frequency response, for my audiology nerds out there that would be interested in that.

Kevin Liebe:

But I mean, you can adjust the frequency response or you can adjust the intensity, and you can still adjust the intensity levels on the wrist itself. There are different things you can do with or without the app. But David mentioned earlier about this being a fraction of the cost of a hearing aid or a fraction of the cost of a cochlear implant. I mean, it’s not only the patients I see on a regular basis; this type of technology really is needed not just in the U.S., not just in Western Europe and other countries, but the third world really could benefit from something like this, people that don’t have access to this kind of tech. I mean, I really think that technologies like this not only have the speech discrimination benefits, but also address the safety issues that often don’t get a lot of airtime. We don’t really talk about it in the literature very frequently.

Kevin Liebe:

I mean, it’s kind of always a side conversation of, oh, maybe you need a loud alarm to wake you up, or something like that. But I mean, I’ve had people that have used the Buzz tell me, “I don’t have to pack my giant alarm clock that’s going to wake me up and vibrate in my bag when I go on a trip.”

Dave:

Yeah.

Kevin Liebe:

So you can just have this little Fitbit-sized device, and it’s super easy and they can set it up on their phone. I mean, there are just a number of applications. And really, the intriguing part to me was like, wow, I could really see how this could complement people that wear cochlear implants and that have superpower instruments. I mean, really, everybody that has a cochlear implant, I think, in my personal professional opinion, could benefit from this. If you have a CI or you have superpower hearing aids, this is a great device that could complement even just daily use with your hearing aids. It’s not in spite of your hearing aids, it’s with the hearing aids, because of the context it can give you. You think of people with Meniere’s disease, you think of people that have these really significant word discrimination problems. I mean, they could benefit from this, because they benefit from every extra little bit of context.

Kevin Liebe:

And if I can give somebody 5, 10, 15, 20% improvement on their speech discrim, and their discrim is very low as is, I mean, that’s a big deal. As a clinician, to me, anything I can do to improve somebody’s word discrimination, I’m going to recommend. So I think, even just from a business standpoint, I see it as being a great complement for anybody that has moderate to severe to profound loss. I mean, sure, people that are musicians, or people that really like to integrate different technologies and want to have different sensory experiences, might really be into this too. But just from a clinician standpoint, I really see those moderate to severe to profound cases as where people are probably going to find this to be a really good fit.

Jackie Scholl:

And I’m going to-

David Eagleman:

And on that note this is… Oh, sorry, you go ahead, Jackie, you go.

Jackie Scholl:

No, you go first.

David Eagleman:

Okay. I was just going to say, this is one of the first pieces of feedback we started getting, from people who already had cochlear implants. For example, one person described this as three-dimensional hearing, by which she simply meant she was hearing something through her cochlear implant, she was feeling vibratory information on the Buzz, and she was reading lips. And together, this really sharpened the probability distribution, and she was able to really get what was being said. And I talked with Kevin about this, and he started calling this trimodal hearing. And that’s a thing we’ve heard many times as feedback.

Dave:

I love that.

Jackie Scholl:

What I was going to throw in, on what Kevin was saying, is that this last year has been very challenging in our profession. It’s been challenging for a lot of people. And I think that what we’re seeing is we’re coming back full circle. We are now looking at all the ways, not only that we can fit hearing aids and cochlear implants, but aural rehabilitation: how do we help our patients communicate? Because in the end, that’s really what we want to do. And so I see this, after my experience watching Jade and her immediate response, as exactly that, because just because you have on cochlear implants, and just because you have on hearing aids, you still miss a lot.

Jackie Scholl:

So I see this as an addition. Okay, so you have your hearing aids and they’re fine-tuned, and your cochlear implants. So what is it we always try and tell our patients to do? Get that person’s attention first before you start talking, right? So, “David, David,” they’re going to feel it, they’re going to hear it, they’re going to turn right away. And so then you can engage in conversation. So I think it’s going to become an invaluable tool for us as clinicians. And honestly, had I not seen this firsthand, I don’t know that I would’ve believed it so much, but it’s been incredible, really, to watch my own daughter’s speech and language increase, but more importantly, her ability to know someone’s talking to her right away. The dogs, if they’re barking, she hears them all the way upstairs.

Jackie Scholl:

She didn’t hear that before. She’s not “hearing” it; it’s her trimodal input. Jade’ll come running down the stairs, and my husband sits there and he goes, “How’d she hear that?” I’m like, “Well, I think it was a combination of things.”

David Eagleman:

Yeah, that’s great.

Dave:

Yeah. I mean, I think what’s fascinating is, I love that trimodal standpoint. And again, this comes back to this notion of: your nose and your ears and your taste and your touch, these are just inputs, right, David? I feel like one of the big takeaways I have from your book, the big thing that I’m really walking away with, is that your brain doesn’t really care how it gets those different inputs. At the end of the day, it’s just data. And that’s a really profound insight for me: yes, the way that humans typically receive sound is through our ears, but in reality, there’s no reason why your brain can’t interpret the world around it through our skin, so long as you have a means by which you can communicate that same thing. We see it in nature; different animals use different sensors in order to capture that.

Dave:

And so that, for me, has just been so interesting to think about. And again, in the year 2021, it’s now feasible. That’s another thing that I really took away from everything: you were drawing on all these examples from the 50s and the 60s, right? Where a lot of the limitation was that you had these gigantic helmets that people were wearing and things like that, where the technology just hadn’t really caught up. And I think what’s so exciting is that here we are, and you can pack in a new sensor, you can communicate an entire sense, more or less, to your brain, through a Fitbit-like device that costs $400 or less.

David Eagleman:

Yeah, that’s exactly right. People were doing this even in the 70s, typically for blindness. This has always been a very thin thread in neuroscience, sensory substitution, but it actually reaches back to the 1890s. And I actually didn’t know that when I first started. Many of the concepts that I was coming up with, I thought were completely original, but then I found this paper from 1890, where a guy set up a little photodiode on a blind person’s head and turned that into sound in the ears, and it kind of worked. But exactly as you said, the technology was so terrible. And by the 70s, people were doing stuff, but they had to carry around essentially a computer on their belt with them all the time.

David Eagleman:

And so, yeah, we’ve just now hit this point where we’ve got this really sleek device. Just the fact that we were able to build it as a wristband with nothing else… I mean, often when we’re first showing this to people, and this also plugs into what Jackie just said about how she wouldn’t have even believed it until she saw it, a lot of the people, I’ve noticed, don’t even quite understand what it means or what it is. As one example, sometimes we put this on somebody’s wrist, and what they’ll do is they’ll put their wrist up to their ear and they’ll say, “Oh, am I supposed to be doing it this way?” And we say, “No, no, no, you just walk around with it. It’s coming in through your skin.”

Kevin Liebe:

Yeah.

Dave:

Yeah, that’s really interesting, but that does speak to maybe the novelty of it. And I think that it’s obviously such a foreign concept, because we all have the natural inclination to say, “Well, if I’m deaf, therefore I need to somehow stimulate the inner ear,” or something like that. And so I just think that it’s such a different approach than what we’ve ever really seen before in this industry. Other than maybe the tactile hearing aids that used to exist, or I guess in a way this is kind of like that, I think that it’s really refreshing to have this neuroscience approach coming in, coming at it from a different angle than I think it’s ever really been addressed before, so far as I’ve seen.

David Eagleman:

That’s right. You know what it is, we’re right at this intersection, and I’m happy to live in Silicon Valley especially, because we’re right at this intersection between neuroscience and technology. And we’re actually doing several other things with the same hardware. One of the things that we are finalizing right now, and will release as a product in the near future, is something for high-frequency hearing loss in particular. So it’s when people just aren’t hearing the higher frequencies, and what happens, of course, as you guys know, is it makes it hard to understand conversation, because particular phonemes, particular parts of speech, are just getting mixed or muddled and you can’t quite discriminate them. And so what we’re doing is a completely different algorithm, where instead of capturing all the sound, we’re just listening in real time for Ts, Ss, Fs, Vs, these high-frequency phonemes, and then just signaling when that’s happening. We have different signals on the skin, and people can pretty quickly, within a few days, get really good at understanding which phoneme is being said.
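
To give a flavor of what “listening in real time for Ts, Ss, Fs, Vs” could involve, here is a deliberately simplified sketch that flags audio frames dominated by high-frequency energy. Real phoneme detection is far more sophisticated; the 4 kHz cutoff and the energy-ratio threshold are illustrative assumptions, not Neosensory’s algorithm.

```python
import numpy as np

def high_frequency_cue(frame, sample_rate=16000, cutoff_hz=4000, threshold=0.5):
    """Very crude detector for frames dominated by high-frequency energy
    (roughly the s, f, t, sh region of speech).

    Returns True when most of the frame's energy sits above cutoff_hz, which is
    when a sketch like this would fire a distinct vibration cue on the wrist.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return False
    high_ratio = spectrum[freqs >= cutoff_hz].sum() / total
    return high_ratio > threshold

# Example: a 6 kHz tone triggers the cue, a 200 Hz vowel-like tone does not
t = np.arange(0, 0.02, 1 / 16000)
print(high_frequency_cue(np.sin(2 * np.pi * 6000 * t)))  # True: energy near 6 kHz
print(high_frequency_cue(np.sin(2 * np.pi * 200 * t)))   # False: energy near 200 Hz
```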

David Eagleman:

So I patented this, and I call it cross-sensory boosting, where your ear is doing most of the work, all the low and middle frequency stuff, and your wrist is just taking care of the high-frequency stuff, just clarifying for you: oh, that was a T, that was an S, that was a V. And so we’ve been running tests on that, and that’s been very cool. And we’re also doing a different thing, which is something that Kevin got us started on, probably from reading some literature, and we started doing some testing with tinnitus. So it turns out that there are some papers out of Europe and out of Michigan showing that when people have ringing in the ears, if you do stimulation where you’re playing tones for them and they’re feeling something, this can actually relieve some of the symptoms of tinnitus, the loudness and the aversiveness. And it’s not 100%, but it lowers it.

David Eagleman:

And the original papers on this have this thing where they give you a shock on the tongue, with an electrode paddle on the tongue. And they had an argument for why they think this should work; they said it has to do with the dorsal cochlear nucleus. I’ll skip the details, but that’s where the auditory stream first confronts touch from the head and neck. But we decided to try this with touch from the skin, which would have nothing to do with the dorsal cochlear nucleus. Anyway, the point is, it works. It works exactly as well as it does with the shocking on the tongue. So we’re running some big tests on that now, and we’ll be releasing that very soon as a program for managing tinnitus.

Dave:

Yeah, I saw that. And I think that the… What was that called? Lenire?

David Eagleman:

Yeah.

Dave:

So yeah, I was reading about that and this idea of bi-modal stimulation, and how that might be a really good solution. And the other thing I thought about, too, is it didn’t seem quite feasible that people would walk around with tongue zappers on. And so, again, it’s the feasibility of all this, right? And that’s what I like about your approach: it speaks beyond just the tinnitus application, to the trimodal, augmented ability to hear the sound around you. You’re not asking a lot. The onus isn’t a whole lot, whether it’s the price point or the day-to-day wear of the device. The behavior change, I guess, more or less, that you’re demanding of the user isn’t a whole lot. And that’s part of why I think this is so compelling: the notion that in order to, in some cases dramatically, increase how you hear the world around you and augment your world, all you have to do is wear a Fitbit-type device. That isn’t asking a whole lot.

David Eagleman:

You know, that’s exactly right. I’ve got to tell you, I spent my whole career as an academic until I spun off this company five years ago, and I’ve learned so much, and one of the big lessons has been about really understanding product-market fit. And I have to say, I run into fellow academics all the time who say, “Well, wait, you have a higher density of touch receptors on your fingers. So why don’t you build a glove for people to wear instead of a wristband?” And you know what? That’s a terrible idea. Nobody wants to wear a glove. We’ve actually done research on this. Nobody wants to stick stuff in their mouth, and so on. So there has to be this combination between the academic knowledge and the product-market fit knowledge.

Dave:

Yeah, I fully agree.

Kevin Liebe:

I’ll just jump in there, Dave. So yeah, David, to your point, as an audiologist, I’ll just use my own example: it’s hard for most of us to think beyond the ears, in the sense that you really could get that much benefit from a completely different sense, yet it’s actually improving our ability to hear and decipher sound. I mean, it’s pretty incredible. And most audiologists, when I’ve actually had the opportunity to really explain how this works, they get it once they understand it, and it makes sense and it clicks. And then, when you have patients that can use it, and to Jackie’s point, some of the examples she just talked about with her daughter, I mean, it makes a lot of intuitive sense. But on the face of it, you think, well, this thing’s just going to buzz? This thing is just going to vibrate on my wrist? What is that going to do?

Kevin Liebe:

But when you start realizing the nuances here, and the fact that, in the case of the Buzz, it is actually stimulating different points on the wrist, and that it’s responding to specific frequencies, then it starts to make a little more sense as an audiologist. Because we often think, most of us, not all of us, most of us think in terms of, okay, well, this person has a hearing loss; I can utilize these frequencies to program for that loss, to accommodate their hearing issue. And then I think, okay, well, not only could they get a hearing aid, maybe this person also needs a remote microphone, because they have signal-to-noise problems. Well, me personally, I think of this like a remote microphone: it’s just a different type of assistive device.

Kevin Liebe:

So I like to think of it in those terms, because the same kind of person, or the same type of patient, rather, that needs a remote microphone could almost certainly benefit from the Buzz. So it’s just that we’ve got to get out of the box, so to speak. We’ve got to get out of the booth, we’ve got to think a little differently. But anyway, that’s just kind of my two cents here: I think we’re so familiar with what we’re doing on a day-to-day basis that sometimes we’ve got to look at it from another perspective. And to David’s point, the brain is an amazing thing. I tell patients this every single day: our brain is such an amazing thing that it is adapting to your hearing loss over time.

Kevin Liebe:

And that’s why, when you get hearing aids, all of a sudden you’re hearing all these sounds, it sounds so different, and maybe they don’t like it. Well, hey, the brain just needs a little time. It’s going to get used to that new amplification; same thing with the Buzz. If we use that same concept, your brain is going to adapt, and this is going to be a complement to whatever you’re doing right now. So I just think we have to think of it in those terms: it’s not this or that. It’s a complement to existing tech, and not only for safety, but for speech discrim. I mean, it’s pretty incredible, really, for, like you said, a cost-effective device. It’s a lot cheaper than a remote mic, put it that way.

David Eagleman:

Yeah.

Kevin Liebe:

Or an FM system.

Dave:

Do we have a special guest?

Jackie Scholl:

We do have a special guest. Jade just got finished with her speech. You want to say hello? Hello.

Jade:

Hello.

Dave:

Hello, Jade. I hear you-

Kevin Liebe:

Hi, Jade?

Dave:

I hear you wear the Buzz. Do you like it?

Jackie Scholl:

Do you like the Buzz? Do you like the Buzz?

Jade:

[inaudible 00:42:20].

Jackie Scholl:

Yeah, she does. One of the things I was going to kind of toss in: I’m finding that being a parent versus being a clinician… wow. I really thought I knew a whole lot about hearing loss, and I knew nothing. But my husband is a musician, and so he’s tried to work with Jade, even with speech. I don’t know if you guys follow any of Dr. Nina Kraus’s stuff, but music for the brain. So he’s teaching her the drums, because we figured she can bang around on drums, and we’re using the watch for timing. My husband is like, let’s… I think that if we have some really creative people out there, you’re going to find that there are many different ways to use this thing. And the sky’s the limit. Jade may be a musician, who knows.

Dave:

Well, actually, that’s a perfect way to kind of bring us home here. So I guess, as we wrap, David, I’m curious. I’ve heard you talk about the oomph belt, and I’ve heard you talk about sensory addition and all that, and it’s super fascinating. And I’m curious, does it feel like you’re just kind of scratching the surface here with the Buzz and your work in general? I mean, I think it’s really cool that, in the short amount of time that I’ve been familiar with the Buzz, you’re already looking at it as maybe a good solution for tinnitus, and I just learned now about the whole frequency-range piece and targeting the higher frequencies. But do you feel as if you’re just kind of scratching the surface?

David Eagleman:

Oh, yeah. I mean, the truth is we have 20 projects that are underway in the R&D section of the lab, because we’re doing things for blindness, we’re doing things for balance, we’re doing things for prosthetic legs, and then we’ve got things that are far wackier than that. In fact, we just had our second developer contest at Neosensory, where we asked developers from all over the world to come up with stuff. We had 196 entries in our last contest. And people are doing things like sensing where there are electrical fields, or sensing the carbon dioxide in the room, which tells you about the room’s ventilation, which serves as a proxy for how likely you are to get COVID in that room-

Dave:

Wow.

David Eagleman:

… or picking up on the emotion of the speaker that you’re talking with. And this is for children with autism who understand the meaning of words, but can’t at all read the emotion of the person. Are they angry? Are they happy? Are they being sarcastic? Are they sad? And so the wristband, the Buzz, does those calculations and then buzzes to tell the user what the speaker is feeling. Anyway, there are just a million things that we’re working on, but they all fall under this umbrella of, hey, what kind of information can we pass to the brain via this channel?

Dave:

Via the skin. The skin is like this giant… It’s our largest organ, and it’s almost entirely untapped, right?

Jackie Scholl:

The skin is in.

Dave:

The skin is in.

Jackie Scholl:

The skin is in.

Kevin Liebe:

Love it.

Dave:

Well, anyway, this has been a fantastic conversation. David, I think I speak for the three of us and I speak for the whole industry. We’re really happy that you’re here and that you’re building these really, really cool things. It’s just so refreshing to see these new approaches, and I’m really excited to see where the Buzz goes and how it continues to evolve. And I think that it’s going to be great to hear from people like Jackie that have a loved one that’s actually using it and seeing a lot of value from it, and hearing from Kevin about how is the industry receiving this? And I do think that in time, this is going to become just an awesome, another tool that professionals can add to their tool belt, and help to set themselves apart. So this has been an amazing conversation. I really, really enjoyed this conversation. Thanks, everybody. This has just been a real treat.

David Eagleman:

Thanks for having me.

Kevin Liebe:

Okay. Thanks, Dave.

Dave:

All right. Awesome.

Jackie Scholl:

Thank you.

Dave:

Thanks everybody who tuned in here to the end, and we will chat with you next time. Cheers.
