Daily Updates, Future Ear Radio

WWDC 2019 Rundown (Future Ear Daily Update 6-3-19)

Image: 9to5 Mac

Yesterday, Apple hosted its annual developer conference, WWDC, to unveil the latest slew of tools and updates for the Apple developer community. Here are all the announcements from the event surrounding the software behind Apple’s wearables and Siri:

Apple watchOS 6

Apple introduced an update to the Watch’s operating system and, along with it, an Apple Watch-specific App Store that can be accessed directly on the watch. This is significant as Apple continues to unbundle the Watch from the iPhone into a standalone product with its own line of independent apps. Previously, all of the apps the Watch used were ported over from the iPhone, so we should begin to see developers building apps and functionality designed specifically for the Watch.

In addition, Apple rolled out a new Health app feature that captures the sound levels in one’s environment to measure and notify the user of potentially dangerous noise levels. A simple glance at one’s watch will indicate how loud the surrounding environment is, and users can take it a step further by reviewing their Health app data at the end of the day to understand in which locations they’re being exposed to potentially harmful sound levels.

Siri & AirPods

Last year, Apple introduced Siri Shortcuts, which I believed to be a major step toward the future of the App Store. The problem, however, was that while Shortcuts represented a great way to link apps together and chain a flow of commands, the feature was ultimately too buried — only 10% of iPhone users had ever attempted to create or even use a shortcut. This year, Apple built Shortcuts right into iOS 13 and is now suggesting shortcuts to people based on their app usage. This has the potential to be huge, as it allows many menial tasks to be automated, and as Brian points out, could be the underpinnings for a full-blown SiriOS down the road (hopefully next year).

Apple also showed off rather significant improvements to Siri’s underlying technology, with an upgraded neural text-to-speech engine. Siri’s speech is now entirely software-generated and uses machine learning to continually improve itself. This allows for a much more natural-sounding Siri that will only get better over time.

One of the most obvious use cases for a more natural-sounding Siri is voice messaging used in conjunction with AirPods. It’s not just iMessage, either, as Siri can relay messages from third-party apps too. As we move into an era where hundreds of millions of people own and use AirPods, it seems likely that voice messaging, powered by Siri, will be a killer use case for those walking around throughout the day with AirPods in their ears.

Apple also rolled out an “Audio Sharing” feature for AirPods, allowing two users to listen to the same audio source — essentially the Bluetooth version of a cable splitter. It’s another subtle feature that makes owning and using AirPods that much more attractive and compounds their already powerful network effects.

Voice Control

One of the most interesting developments unveiled during the conference was Voice Control, which allows complete control of your Mac or iOS device with your voice by “voice tapping and clicking” through a numbered grid. This feature is an awesome addition to Apple’s accessibility suite, and it’s possible we’ll see a near future in which Apple begins to marry Voice Control with Siri, allowing the user to communicate with Siri with more context via the numbered voice grid. For example, I might want to reference something on my phone to Siri, and could more effectively convey what I mean by pointing to it through the numbered grid.

Race for the Future

Siri Shortcuts is arguably one of the most innovative areas within Apple currently, so baking Shortcuts directly into iOS 13 and suggesting pre-made shortcuts will only expose more people to the power and utility of the feature. Moving Siri to neural TTS makes Siri that much more usable and functional, which will be on full display for voice messaging with AirPods. Apple continues to unbundle the Watch from the iPhone and to create use cases built specifically for the Watch, such as all the developments around the Health app. Apple’s Watch strategy and slow unbundling from the iPhone provide a blueprint for AirPods’ trajectory as that device becomes capable of being unbundled as well.

A lot of what we saw from WWDC this year appears to be baby steps toward Apple’s next-generation UI and OS. Many of these improvements are incremental on their own, but combined, they start to accrue into something more meaningful. Apple appears to be warming up to the idea that Siri and voice-as-a-UI will play an important role in the company’s future, but the question remains whether Apple is internally structured and motivated to significantly innovate around Siri and keep pace with the blistering rate of innovation we’re seeing with Alexa and Google Assistant, along with Amazon’s and Google’s build-out of third-party developer networks.

-Thanks for Reading-


