
The Innovation Happening Inside Hearables and Hearing Aids


The Peace Dividends of the Smartphone War

One of the biggest byproducts of the mass proliferation of smartphones around the planet is that the components inside the devices are becoming more powerful and sophisticated while simultaneously becoming smaller and less expensive. Chris Anderson, the CEO of 3D Robotics, calls this the “peace dividend of the smartphone wars”:

The peace dividend of the smartphone wars, which is to say that the components in a smartphone — the sensors, the GPS, the camera, the ARM core processors, the wireless, the memory, the battery — all that stuff, which is being driven by the incredible economies of scale and innovation machines at Apple, Google, and others, is now available for a few dollars.

The race to outfit the planet with billions of smartphones laid the foundation for consumer drones, self-driving cars, VR headsets, AR glasses, dirt-cheap smart speakers, our wearables and hearables, and so many other consumer technology products that have emerged in the past decade. All of these products directly benefit from the economies of scale and component improvements born of the smartphone supply chain.

Since this blog is focused on innovation occurring around ear-worn technology, let’s examine some of the different peace dividends being reaped by hearing aid and hearables manufacturers, and what those look like from a consumer’s standpoint.

Solving the Connectivity Dilemma

Ever since the debut of the first Made for iPhone hearing aid in 2013 (the ReSound LiNX), each of the major hearing aid manufacturers has followed suit in the pursuit of seamless connectivity to the user’s smartphone. This type of connectivity was limited to iOS until September 2016, when Sonova released its Audeo B hearing aid, which used a different Bluetooth protocol that allowed for universal connectivity to all smartphones. To keep the momentum going, Google just announced that its Pixel and Pixel 2 smartphones will support direct pairing with Bluetooth hearing aids. The hearing aids and the phones are both becoming more compatible with each other, and every year we move closer to universal connectivity between our smartphones and Bluetooth hearing aids.

While connectivity is great and opens up a ton of new opportunities, it also drains the devices’ batteries. This poses a challenge to the manufacturers of these tiny devices: while the majority of the components packed inside have been shrinking, the one key component that doesn’t really shrink is the battery.

Manufacturers are doing a few things to work around this roadblock, based on recent developments largely owed to the smartphone supply chain. The first is rechargeability on the go. In the hearables space, pretty much every device ships with a companion charging case, from AirPods to IQbuds to the Bragi Dash Pro. Hearing aids, which have long been powered by disposable zinc-air batteries (lasting about 4-7 days depending on usage), are now quickly going the rechargeable route as well, and many can be charged in companion cases akin to those used with hearables.

Rechargeability is a good step forward, but it doesn’t solve the underlying problem of rapid battery drain. If we can’t fit a bigger battery into such a small space and battery innovation is currently stagnant, engineers have to look instead at how the power is actually used. Enter computer chips.

Computers are steadily getting cheaper – From Chris Dixon’s What’s Next in Computing

Chip’in In

I’ve written about this before, but the debut of Apple’s W1 chip in 2016 was probably one of the biggest moments for the whole hearables industry. Not only did the chip solve the reliable-pairing issue (it’s responsible for the fast pairing of AirPods), it also uses low-power Bluetooth, ultimately providing 5 hours of listening time before you need to pop the AirPods back in their charging case (15 minutes of charge buys another 3 hours). With this one chip, Apple effectively removed the two biggest barriers to people adopting hearables: battery life and reliable pairing.

Apple has since debuted an updated, improved W2 chip in its Apple Watch, which will likely make its way into the second version of AirPods. Each iteration will likely continue to extend battery life.

Not to be outdone, Qualcomm introduced its new QCC5100 chipset at CES this January. Qualcomm’s SVP of Voice & Music, Anthony Murray, stated:

“This breakthrough single-chip solution is designed to dramatically reduce power consumption and offers enhanced processing capabilities to help our customers build new life-enhancing, feature-rich devices. This will open new possibilities for extended-use hearable applications including virtual assistants, augmented hearing and enhanced listening.”

This is important because Apple tends not to license out its chips, so third-party hearable and hearing aid manufacturers will need to source this type of innovation from a company like Qualcomm to compete with the capabilities that Apple brings to market.

The next one is actually a dividend of a dividend. Smart speakers like Amazon’s Echo are cheap to manufacture thanks to the smartphone supply chain, and as a result they have driven down the price of digital signal processing (DSP) chips to a fraction of what it was. These specialized chips are used to process audio (all those Alexa commands) and have long been used by hearing aid manufacturers. Similar to the W1 chip, they provide a low-power approach that hearable manufacturers can now utilize – more options for third-party manufacturers.
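To make “processing audio” a little more concrete, here’s a minimal sketch of the kind of operation a hearing aid’s DSP performs constantly: boosting only the frequency band a user struggles to hear. The cutoff, gain, and test signal are all invented for illustration, and a real DSP does this in real time on dedicated silicon rather than in Python:

```python
import numpy as np

def amplify_high_band(samples, sample_rate, cutoff_hz=1000.0, gain_db=12.0):
    """Boost frequencies above cutoff_hz by gain_db - a toy version of the
    band-specific amplification a hearing aid's DSP chip performs."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    gain = 10 ** (gain_db / 20.0)            # convert decibels to linear gain
    spectrum[freqs >= cutoff_hz] *= gain     # amplify only the high band
    return np.fft.irfft(spectrum, n=len(samples))

# One second of a 440 Hz tone plus a quieter 4 kHz tone, sampled at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t) + 0.25 * np.sin(2 * np.pi * 4000 * t)
louder_highs = amplify_high_band(signal, 16000)
```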

So, with major tech powerhouses sparring against each other in the innovation ring, hearing aid and hearable manufacturers are able to reap that innovation at a low price, ultimately resulting in better devices for consumers at a declining cost.

Computers are steadily getting smaller – From Chris Dixon’s What’s Next in Computing

Sensory Overload

What’s on the horizon for our ear-computers is where things really start to get exciting. The most obvious example of where things are headed is the sensors being fit into these devices. Starkey announced at its summit this year an upcoming hearing aid that will contain an inertial sensor to detect falls. How can it detect people falling down? Another dividend – the same types of gyroscopes and accelerometers that work in tandem in our phones to detect their orientation. This sensor combo can also track overall motion, so not only can it detect a person falling down, it can also serve as an overall fitness monitor. These sensors are now small enough and cheap enough that virtually any ear-worn device manufacturer can embed them.
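As a rough illustration of the principle (not Starkey’s actual algorithm), accelerometer-based fall detection often boils down to spotting a free-fall dip in total acceleration followed shortly by an impact spike. The thresholds and window below are hypothetical:

```python
import math

FREE_FALL_G = 0.35   # total acceleration near zero g suggests free fall
IMPACT_G = 2.5       # a sharp spike soon afterward suggests an impact
WINDOW = 20          # samples to wait for the impact (~0.4 s at 50 Hz)

def detect_fall(samples):
    """samples: sequence of (x, y, z) accelerometer readings in g.
    Returns True when a free-fall dip is followed by an impact spike."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, mag in enumerate(mags):
        if mag < FREE_FALL_G and any(m > IMPACT_G for m in mags[i:i + WINDOW]):
            return True
    return False
```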

Valencell, a biometric sensor manufacturer, has been paving the way in showing what’s possible when heart rate sensors are embedded in our ear-worn devices. By combining the metrics these sensors record, you can estimate things such as core temperature, which would be great for alerting the user to the risk of heat exhaustion. You can also gather much more precise fitness metrics, such as the intensity of a workout.
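Workout intensity, for example, is commonly derived from heart rate reserve. Here’s a minimal sketch using the Karvonen method and the standard 220-minus-age estimate of maximum heart rate (both well-known formulas, though a real product would calibrate per user):

```python
def intensity_pct(current_hr, resting_hr, age):
    """Workout intensity as a percentage of heart rate reserve
    (the Karvonen method), with max HR estimated as 220 - age."""
    max_hr = 220 - age
    reserve = max_hr - resting_hr
    return 100.0 * (current_hr - resting_hr) / reserve

print(round(intensity_pct(150, 60, 30)))  # -> 69, a moderate-to-hard effort
```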

And then there are the efforts around one day being able to non-invasively monitor glucose levels through a hearing aid or hearable, most likely via some type of biometric sensor or combination of components derived from our smartphones. For the 29 million Americans living with diabetes – some of whom also have hearing loss – a gadget that provides both amplification and glucose monitoring would be much appreciated and compelling.

These types of sensors serve as tools to create new use cases around both preventative health applications, as well as use cases designed for fitness enthusiasts that go beyond what exists today.

The Multi-Function Transformation

One of the reasons that I started this blog was to try and raise awareness around the fact that the gadgets we’re wearing in our ears are on the cusp of transforming from single-function devices, whether that be for audio consumption or amplification, into multi-function devices. All of these disparate innovations make it possible for such a device to emerge without limiting factors such as terrible battery life.

This type of transformation does a number of things. First of all, I believe that it will ultimately kill the negative stigma associated with hearing aids. If we’re all wearing devices in our ears for a multitude of reasons, for increasingly longer periods of time, then who’s to know why you’re even wearing something in your ear, let alone bat an eye at you?

The other major thing I foresee this doing is continuing to compound the network effects of these devices. Much like with our smartphones, when there is a critical mass of users, a virtuous cycle of value creation is spearheaded by developers, meaning there’s more and more you can do with these devices. Back in 2008, no one could have predicted what the smartphone app economy would look like here in 2018. We’re in that same kind of starting period with our ear-computers, where the doors are opening for developers to create all the new functionality. Smart assistants alone represent a massive wave of potential new functionality that I’ve written about extensively, and as of January 2018, hearable and hearing aid manufacturers can easily integrate Alexa into their devices, thanks to Amazon’s Mobile Accessory Kit.

It’s hard to foresee everything we’ll use these devices for, but the conditions for something akin to the app economy to take root and flourish are now in place, thanks to so many of these recent developments birthed by the smartphone supply chain. Challenges remain for those producing our little ear-computers, but the components housed inside these small gadgets are simultaneously getting cheaper, smaller, more durable and more sophisticated. There will be winners and losers as this evolves, but one obvious winner is the consumer.

-Thanks for reading-

Dave



A Journey to the Center of the Ear


The Road Starts Here

If you examine the past 50 years of user interfaces in computing, you’ll see that a new one surfaces every 10 years or so, and each has been an incremental step away from hardware-based interfaces toward ones that are more software-based. From the 1970s through the early 1980s, to “communicate” with a computer and issue your intended command, you’d use punch cards and command lines.

PCs were introduced in the 1980s, and as computers migrated from the military, government and academia into our homes, the graphical user interface began to take hold, as it was far more user-friendly for casual computer users than the command line. This was the preferred user interface until the mid-’90s, when the Internet began to really take off.

As the Internet opened the door to an endless number of new uses for computers, the hypertext interface (embodied in HTML) bloomed, because we needed an interface conducive to web-based functionality, such as hyperlinking and connecting parts of the web together.

Then in 2007, Steve Jobs famously ushered in the mobile computing era with the unveiling of the iPhone. Along with the introduction of our pocket-sized supercomputers came the multi-touch interface, which has gone on to become the most widely preferred interface globally.

So, 10 years after the iPhone debuted, and given the history of new user interfaces surfacing every 10 years or so, the question becomes: “what’s next?” Since this is FuturEar, after all, you’d better believe it will largely center around our ears, our voices and how we naturally communicate.

User interface shift – from Brian Roemmele’s Read Multiplex, 9/27/17

Reducing Friction

There are two underlying factors to consider when looking at why we gravitate toward each evolution in user interfaces. The first is the tendency for users to prefer as little friction as possible. Friction essentially represents the clerical, tedious work you’re required to do in order to fully execute your command. Let’s use maps as an example: the task of getting from point A to point B in an unknown area.

In the past, prior to the PC and internet, you were limited to good, old-fashioned maps or asking for directions. Then, technology enabled you to use the likes of MapQuest which allowed you to print off turn-by-turn directions. Today, in the mobile era, you can simply pull up your favorite map app, punch in your destination, and let your phone guide you. Each progression reduced friction for the user, requiring less time and energy to do what you were trying to do: get from point A to point B.

The second factor to look at is the type of computers being used in conjunction with the user interfaces. When we shrank our computers down to the size of a phone, it wasn’t feasible to use a mouse and keyboard, so we shifted to just using our fingers on the screen. Nor was HTML necessary prior to the internet. The interface adapts as the computers we’re using evolve.

Which brings us to our über-connected world, where we’re bringing everything we possibly can online. Gartner estimates that in this age of the Internet of Things (IoT), we’ve brought 8.4 billion devices online, a figure that will climb to 20.4 billion by 2020. So how, then, do we control all of these connected devices while continuing to reduce friction?


Abra Kadabra

The answer lies in what tech pioneer Brian Roemmele has coined the “Voice First” interface. He hypothesizes that as we move into this next decade, we’ll increasingly shift from issuing commands with our fingers, to issuing them with our voice. Which is great, because speech and language are humans’ most natural form of communicating, meaning there’s no learning curve in adopting this habit. This is an interface that is truly for all ages and levels of sophistication. It’s built to be as simple as conversing with the people around us.

So, what are we actually conversing with? Our smart assistants, which currently live primarily in our smart speakers and phones. Amazon took an early lead in the smart speaker market, but it didn’t take long for Google to introduce its own line of “OK Google” speakers, resulting in some 20 million Alexa speakers and 7 million Google speakers sold thus far. That number will grow significantly before year’s end, as an estimated 20% of US households will be purchasing a smart speaker for the holidays.

You might be asking, “but wait, we’ve had Siri in our iPhones since 2011 – how is this different?” You’re right, but it took recent machine learning breakthroughs to drastically improve speech recognition accuracy. Hence the recent popularity of these smart speakers and voice assistants: there are far fewer “I’m sorry, I didn’t understand that” responses, and the assistants serve an increasingly important role in facilitating our commands to the billions of connected IoT devices we keep bringing online.

So, let’s look at the two criteria that need checking off for this interface to be mass-adopted: the interface must be conducive to the computers we’re using, and it must reduce friction beyond how we interact with those computers today. Voice gives us the ability to quickly control all of our IoT devices with simple spoken commands, beating the finger-tapping and app-toggling of multi-touch. When it works properly, speaking to our assistants should feel like talking to a genie: “Abra Kadabra, your wish is my command.”

  • TV – “Alexa, change the channel to the Kansas basketball game.”
  • Thermostat – “OK Google, bump the temperature up to 72°.”
  • Security cam – “Hey Siri, send the last 10 minutes of my Dropcam footage to my phone.”
  • Groceries – “Alexa, order me all the ingredients for Dave’s Famous Souffle recipe.”
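Under the hood, each of those utterances gets resolved into an intent and routed to a device. Real assistants do this with trained language-understanding models; a toy keyword dispatcher (all device names and handlers invented here) just shows the shape of the flow:

```python
def handle_command(utterance):
    """Map a recognized utterance to a (device, action, argument) triple.
    A real assistant resolves intents with NLU models, not keywords."""
    text = utterance.lower()
    if "channel" in text:
        return ("tv", "set_channel", text.split("to")[-1].strip())
    if "temperature" in text:
        digits = [w.rstrip("°.") for w in text.split() if w.rstrip("°.").isdigit()]
        return ("thermostat", "set_temp", int(digits[0]))
    if "footage" in text:
        return ("camera", "send_clip", "last 10 minutes")
    return (None, "unrecognized", utterance)

print(handle_command("OK Google, bump the temperature up to 72"))
# -> ('thermostat', 'set_temp', 72)
```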


Heading Home

I believe that over the course of the next decade the Voice interface will continue to become more powerful and pervasive in all of our lives. Although we’re in the infancy of this new interface, we’ve quickly begun adopting it. Google confirmed 20% of its mobile searches are already conducted via voice, Pew Research found that 46% of Americans currently use a voice assistant, and Gartner projects that 75% of US households will own at least one smart speaker by 2020.

We’re also seeing smart speakers and voice assistants begin wading into new waters, such as the workplace, cars, and hotel rooms. This will likely open up brand new use cases, continue to increase the public’s exposure to smart assistants, and expand our understanding of how to better utilize this new technology. We’re already seeing an explosion of skills and applications, and as each assistant’s user base grows, so too do the network effects for each assistant’s platform (and the interface as a whole), as developers become increasingly incentivized to build out functionality.

Just as we unloaded our various tasks from PCs to mobile phones and apps, so too will we unload more and more of what we currently depend on our phones for, to our smart assistants. This shift from typing to talking implies that as we increase our dependency on our smart assistants, so too will we increase our demand for an always-available assistant(s).

What better place to house an always-available assistant than our connected audio devices? This isn’t some novel idea: 66% of all hearables already include smart assistant integration (a figure almost entirely driven by Apple’s AirPods). In addition to AirPods, we’ve seen Bose team up with Google to embed OK Google in Bose’s next line of headphones, and Bragi integrate Alexa into the Dash Pro in its most recent update. Rather than placing smart speakers throughout every space we occupy, why not consolidate all of that (or a portion of it) into an ear-worn device that grants you access whenever you want?

I originally surmised that our connected audio devices will give rise to a multitude of new uses that extend well beyond streaming audio. Smart assistants provide one of the first, very visible use cases to emerge. I believe smart assistant integration will become standard in any connected audio device in the near future – be it earbuds, over-the-ear headphones or hearing aids. This will provide a level of control over our environments that we haven’t seen before, as we’ll simply need to whisper our commands for them to be executed.

Our own little personal genie in the bottle – er, the ear… what better way to reduce friction than that?

-Thanks for Reading-

Dave


The Power of Network Effects


Think back to 2007, when the iPhone debuted. The device itself was pretty unique – a multi-touch screen, a completely new user interface, an iconic form factor – but the initial functionality it provided was not all that different from what already existed. Steve Jobs introduced the iPhone as “an iPod, a phone and an internet communicator.” Aside from the iPod/music aspect, the other “smartphones” of the time provided the same combination of phone, email and (limited) internet. The iPhone just couldn’t do much more than what already existed.

The original iPhone was met with a lot of criticism; it was easy to point out the shortcomings and hard to see the potential. We know how this story goes, though: the iPhone went on to be a smashing hit, and many of us use one today. What’s interesting, however, is why the iPhone was so successful. One of the primary reasons was the power of network effects, which Apple leveraged.

Network Effects

One year after the iPhone was released, Apple introduced the App Store with iOS 2.0. In less than two months, iPhone users combined to download 3,000 different apps 100 million times. This caught the attention of the software developer community: 100 million downloads in less than 60 days – a pipe dream come true for anyone with the technical wherewithal to develop mobile software. The gold rush was on, and thus began the virtuous cycle that is network effects.

[Chart: worldwide App Store apps, projected through 2020]

Each new person who bought an iPhone became a potential candidate to download apps. This growing pool of users incentivized developers to create new apps, compete to make existing apps better, and introduce new features that could generate revenue. The more users there were, the more potential customers developers could acquire.

Simultaneously, as this third-party app ecosystem grew, it spurred further adoption of the iPhone because of the constant influx of new apps, or better apps that could be downloaded through the App Store. The more stuff you could do with an iPhone, the more compelling it became to purchase one. The value just kept appreciating.


This is what was so revolutionary about the App Store – it created a marketplace that brought together third-party developers and users. By bridging the two, it allowed for the developers to produce an endless supply of utility, functionality and capabilities to be instantly downloaded and utilized by the users, enhancing the value of the device. It generated entirely new use cases and reasons to use a smartphone.
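To make the feedback loop concrete, here’s a toy simulation of the two-sided dynamic – more apps attract more users, and more users attract more developers. Every coefficient is invented purely for illustration:

```python
# Toy model of two-sided network effects: each side's growth is driven
# by the size of the other side. All growth rates are invented.
users, apps = 1_000_000, 500

for year in range(1, 6):
    new_users = int(0.4 * users + 100 * apps)      # apps pull in users
    new_apps = int(0.2 * apps + users / 50_000)    # users pull in developers
    users, apps = users + new_users, apps + new_apps
    print(f"year {year}: {users:,} users, {apps:,} apps")
```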

Ok, so what? Well, as I pointed out in my previous post, we’re all shifting to using “connected” audio devices. Furthermore, our connected audio devices as a whole represent a quasi-network, as one of the common denominators across these devices is the wireless connectivity to a smartphone. Therefore, these connected, ear-worn devices serve as new delivery mechanisms for software. Network effects can now begin to take hold because we’re using audio devices that can seamlessly access apps from our phones. In other words, we’ve erected additional bridges to allow developers to supply limitless value to our ears and wherever else we’re wearing computers on our bodies.

Thus, the virtuous cycle becomes enabled. As the number of connected audio device users steadily increases, developers become motivated to build apps specifically for those devices, resulting in more incentive to go buy AirPods, Pixel Buds, MFi hearing aids, or one of the many hearables, to take advantage of all the new stuff you can do with them. That’s why the shift to connected devices is so fundamentally significant: it has now become technically feasible and financially motivating for developers to create apps tailored to our little, ever-maturing ear-computers.


We’re at day one of this new phase of software development, yet we’re already seeing applications specifically targeting and catering to this network: smart assistant integration, apps designed to collect your biometric data and provide actionable insights, live language translation, and augmented audio. These are some of the first new applications and use cases for connected audio devices, and they will transform our single-dimensional devices into more sophisticated and capable pieces of hardware, enhancing their value.

That’s why I think it’s so important to point out that regardless of whether your interest lies in hearing aids, AirPods, or hearables, you should be excited about innovation taking place in any one facet of the connected-device network. Over time, software and features tend to become widely available throughout the network, so it shouldn’t really matter where the innovation originated. Sure, some devices will be capable of things that others won’t, but for the most part, you’ll be able to do a lot more with your connected audio devices than you could with previous generations. Just as we learned over the past decade with our smartphones, network effects help accelerate this pace of change.

-Thanks for reading-

Dave


Welcome to FuturEar


Hello and welcome to FuturEar! As the name suggests, the purpose of this blog is to provide an ongoing account of the rapidly evolving audio landscape. My goal is to help make sense of all the trends converging toward the ear and to consider the implications of those progressions. This blog will feature both long-form assessments and short, topical updates on news pertaining to the ear.

The inspiration for this blog was the realization that we’ve quickly begun wirelessly connecting our ears to the internet. For starters, at a broad level, Americans are buying more Bluetooth headphones than non-Bluetooth headphones:

Bluetooth vs. non-Bluetooth headphone sales – from Nick Hunn’s The Market for Hearables 2016–2020

Look more specifically at any one segment of audio devices and you’ll see the trend applies there too. Consider hearing aids: ReSound introduced the first Made for iPhone (MFi) “connected” hearing aid back in 2013 – the LiNX. Flash forward to today, and all six major hearing aid manufacturers sell an MFi hearing aid (Phonak’s Audeo B is actually compatible with Android too). As with headphones, the majority of hearing aids now entering the market are connected devices.

Hearables, everyone’s favorite buzzword, have collectively attracted more than $50 million through crowdfunding on sites like Kickstarter and Indiegogo, and these devices are inherently connected and wireless. Additionally, you have the 800 lb. gorilla in the room – Apple – which introduced AirPods last December. AirPods have accounted for 85% of totally wireless headphone dollar sales in the US since then. Google launched its own flagship headphones, the Pixel Buds, at the beginning of October. So now we have two of the largest tech companies in the world competing and innovating in audio hardware.


This shift to Bluetooth-connected devices represents a fundamental change, as this new generation of devices is able to leverage the power of software. Previous, non-connected devices could be considered “entropic,” meaning their value flat-lines and then depreciates as the hardware deteriorates. The device never creates any new value.

On the flip side, these connected devices are “exotropic,” meaning they appreciate in value for as long as the hardware permits (all hardware eventually craps out). Through over-the-air software and firmware updates, as well as software app integration, new value is constantly being created. In other words, we’ve essentially gone from using headphones and hearing aids that are akin to flip phones to ones that more closely resemble iPhones.
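The mechanics behind that appreciation are simple: the device (or its companion app) periodically asks the vendor whether newer firmware exists and installs it if so. A minimal sketch of that check – the endpoint, manifest format, and version scheme below are all made up for illustration:

```python
import json
import urllib.request

FIRMWARE_URL = "https://example.com/firmware/latest.json"  # hypothetical endpoint
INSTALLED_VERSION = (2, 1, 0)

def check_for_update():
    """Fetch the vendor's firmware manifest and compare versions -- the
    mechanism that lets a connected device gain new features after purchase."""
    with urllib.request.urlopen(FIRMWARE_URL) as resp:
        manifest = json.load(resp)
    latest = tuple(int(part) for part in manifest["version"].split("."))
    if latest > INSTALLED_VERSION:
        return manifest["download_url"]  # caller downloads and flashes it
    return None
```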

 


This blog will explore all of that new value, homing in on specific new use cases, as well as piecing together how a multitude of seemingly disparate trends all relate and ultimately lead to the ear. Just like all software-powered hardware connected to the cloud, these devices will evolve, iterate and advance quickly, shifting in unexpected ways. Exciting times!

-Thanks for reading-

Dave