Apple WWDC 2017
Apple held its annual developer conference, WWDC (Worldwide Developers Conference), June 5-9 in San Jose.
While this is a developer conference, there are usually some product announcements thrown in. This year, on the hardware side, there were incremental upgrades to the existing Mac lines and a new 10.5” iPad Pro. Unusually for Apple, there were also announcements of hardware that will not be available until later in the year: the iMac Pro, a monster machine packaged in the same form factor as the current 27” iMac, and the HomePod speaker (more on that below).
On the OS side there are updates to watchOS (4), macOS (High Sierra) and iOS (11), now available in developer preview. While iOS 11 adds some interesting new features like drag and drop, a file system of sorts and person-to-person money transfers, macOS looks like a usual incremental update.
I felt the most interesting things were:
- ARKit - a framework for developing augmented reality applications for iPhone and iPad.
- Core ML - a platform for bringing machine learning, including computer vision and natural language processing, into applications that run on device (rather than remotely in the cloud). Core ML will be available on all Apple operating systems: iOS, macOS, watchOS and tvOS.
- HomePod - a home speaker.
ARKit and Core ML are going to advance on-device applications and open up new areas for developers, but it is the HomePod that will be visible to the general public.
Apple has tried to differentiate the HomePod from the Amazon Echo and Google Home by emphasising its audio quality, positioning it to compete with Sonos speakers instead. Its $349 price tag also places it at the top end of the consumer market.
While the HomePod will initially be a speaker with Apple Music, Siri and HomeKit integration, the more interesting thing in the longer term is the platform it introduces. It has the Apple A8 chip, so it will have processing power similar to the iPhone and iPad lines. I wasn’t clear on what OS it will run, but presumably it will be an iOS variant of some type. This, combined with Core ML, other iOS APIs and the large Apple developer community, opens the device up to wider possibilities. The local OS and processing power will allow more autonomous operation, without relying on back-end cloud services as much as the Amazon Echo and Google Home do. It will be interesting to see where the device and its developers are by the time WWDC 2018 comes around.