
Google Ups the Ante at Google I/O: Local Assistant (Future Ear Daily Update: 5-15-19)

Google Assistant 2.0 – Speedier & More Secure

Image: Digital Trends

Google announced a number of very interesting updates to Google Assistant at this year's Google I/O, such as Duplex on the web, which I wrote about last week. Another key revelation was the upgraded assistant, dubbed "Assistant 2.0," which will arrive with the release of Android Q. As you can see in the video below, Assistant 2.0 handles command after command in near real-time.

As Bob O'Donnell wrote in his Tech.pinions article yesterday, the underlying reason for this speed upgrade is that Google has moved the assistant from the cloud onto the device itself. This was made possible by improvements in compressing the speech recognition models that process spoken commands, and Google cited a 10x improvement over processing those commands in the cloud. The end result is near-zero latency, exactly the kind of friction reduction needed to compel users to interact with their devices via voice rather than tap/touch/swipe.
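To make the latency argument concrete, here is a minimal, purely illustrative Python sketch of the two pipelines. The numbers and function names are my own assumptions for the sake of the example, not Google's measurements; the point is simply that the on-device path eliminates the network round-trip entirely.

```python
# Illustrative latency budget for a single voice command. All numbers are
# assumed values for this example, not Google's figures.

NETWORK_ROUND_TRIP_S = 0.150   # assumed uplink + downlink to a cloud speech service
CLOUD_INFERENCE_S = 0.050      # assumed server-side recognition time
ON_DEVICE_INFERENCE_S = 0.020  # assumed local recognition with a compressed model


def cloud_pipeline_latency() -> float:
    """Audio is shipped to the cloud, recognized there, and the result sent back."""
    return NETWORK_ROUND_TRIP_S + CLOUD_INFERENCE_S


def on_device_pipeline_latency() -> float:
    """A compressed model runs locally, so there is no network hop at all."""
    return ON_DEVICE_INFERENCE_S


if __name__ == "__main__":
    cloud = cloud_pipeline_latency()
    local = on_device_pipeline_latency()
    print(f"cloud pipeline:     {cloud * 1000:.0f} ms per command")
    print(f"on-device pipeline: {local * 1000:.0f} ms per command")
    print(f"speedup:            {cloud / local:.1f}x")
```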

The other notable aspect of moving the processing off the cloud and onto the device itself is that it helps alleviate the privacy concerns surrounding voice assistants. As it stands now, when voice commands get sent to the cloud, they are typically logged, stored, and sometimes analyzed by teams inside Amazon and Google to improve their machine learning and natural language processing models. This has caused quite a controversy, as publications like Bloomberg have stoked public fears that Big Brother is listening (although this article by Katherine Prescott does a very good job relaying what's really going on).

Regardless, by localizing the processing to the smartphone, the majority of the commands fielded by the assistant no longer get sent to the cloud and therefore can no longer be reviewed by teams inside the smart assistant providers. For the learning that still involves the cloud, Google is leaning on a technique it highlighted called federated learning: models are trained on-device, and only anonymized model updates, not the raw commands, are sent up and combined with updates from many other users to keep improving the shared models. A toy sketch of the idea follows below.
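For readers curious what federated learning looks like in practice, here is a toy sketch of federated averaging in Python (NumPy). It is my own simplified illustration of the general technique, not Google's implementation: each simulated device takes a gradient step on its private data, and the server only ever sees the averaged model weights, never the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a device's private data
    (plain linear regression with squared error, for simplicity)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad


# Simulate five devices, each holding private data it never uploads.
true_w = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 3))
    devices.append((X, X @ true_w + 0.1 * rng.normal(size=20)))

# The server maintains the shared (global) model.
global_weights = np.zeros(3)
for round_num in range(10):
    # Each device trains locally, starting from the current global model.
    updates = [local_update(global_weights, X, y) for X, y in devices]
    # Only the model updates are aggregated; the raw data stays on-device.
    global_weights = np.mean(updates, axis=0)

print("global model after 10 federated rounds:", global_weights)
print("true underlying weights:               ", true_w)
```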

Ultimately, Google I/O was a shot across Apple's bow. Apple's big theme over the past few years has been, "privacy, privacy, privacy." Well, Google made privacy a focal point of this year's developer conference, with Assistant 2.0 being one of the clearest examples. Additionally, between the massive reduction in Google Assistant's latency and the introduction of Duplex on the web, Google is starting to paint a picture of our assistants as genuine utilities. Apple has yet to show that Siri can do anything close to what Google Assistant is doing on that front.

The past ten years were all about shrinking our jobs-to-be-done into apps on a single, pocket-sized supercomputer: our smartphone. Google is making the case that the next ten years may well be about having our assistants do those jobs for us, interconnecting all the bits of data stored across the smartphone and its apps, so that instead of spending time and effort tapping and swiping, we can simply tell the phone what to do.

Abra Kadabra! Your wish is my command.

-Thanks for Reading-

Dave

To listen to the broadcast on your Alexa device, enable the skill here: https://www.amazon.com/Witlingo-Future-Ear-Radio/dp/B07PL9X5WK/ref=sr_1_1?keywords=future+ear+radio&qid=1552337687&s=digital-skills&sr=1-1-catcorr

To listen on your Google Assistant device, enable the skill here: https://assistant.google.com/services/a/uid/00000059c8644238

and then say, “Alexa/Ok Google, launch Future Ear Radio.”
