Google I/O 2017: Making A.I. mainstream
The search giant's big day is here with the start of its massive Google I/O conference. Unlike previous years, where I/O was used to announce new services, devices, and developer products, this year Google focused strongly on AI (artificial intelligence) and ML (machine learning): how deeply integrated they are across its services and products, and how it is bringing AI and ML to a wider audience beyond just academia. In fact, "Making A.I. Work for Everyone" is the key message of this conference. How Google will uphold this mantra is explained further in the service-specific updates and features below. Without further ado, here is the list of products and services that are either new or enhanced by Google's advances in ML.
Available on over 100 million devices with over 70 smart home partners, Google Assistant gets an additional ML boost with a new feature called Proactive Notifications. Using this, Google Assistant is able to infer context, proactively understand what the end user wants, and respond appropriately. Through ML, Google Assistant is also able to infer questions that require visual responses, automatically displaying results on either a phone or a TV. For example, if you were to ask your Google Home device the whereabouts of a particular building, the built-in Google Assistant will not only convey the information in audio, but it can also show you where it is on your home TV by interacting with your phone to launch Google Maps and connect it to a Chromecast-enabled display! This functionality will be enabled later this year.
Oh and before we forget, Google Assistant is coming to the iPhone and is available for devices running iOS version 9.1 and later.
Through ML and advanced computer vision, Google Lens informs the user about the surrounding environment. One can even follow up with actions. An example would be booking tickets to a band's performance just by pointing Google Lens at a billboard. Point your phone at a flower to find out what it is, point it at a restaurant to see ratings and other relevant information, point it at your router's network SSID/key sticker and automatically set up a local Wi-Fi connection! 'See more' and learn more with the new Google Lens feature. In some ways, it sounds like a more advanced version of Samsung's Bixby Vision, which is still in its infancy.
ML is deployed in Android O in a number of ways, one of which is Smart Text Selection, where an offline, on-device ML model is able to recognize whole entities, such as a shop name, and automatically select the entire entity when the user double-taps on it.
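To make the idea concrete, here is a toy Python sketch of what "expand a double-tap to a whole entity" could look like. This is purely illustrative: the patterns and function below are our own invention, standing in for the trained on-device model that Android O actually uses.

```python
import re

# Toy illustration of the idea behind Smart Text Selection: expand a
# double-tap position to a whole entity instead of a single word.
# Hypothetical sketch -- Android's real feature uses an on-device ML model,
# not hand-written regexes.

# A few hand-written patterns standing in for a trained entity recognizer.
ENTITY_PATTERNS = [
    r"\b\d{3}-\d{3}-\d{4}\b",                          # phone number
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",                    # email address
    r"\b(?:[A-Z][a-z]+ )+(?:Cafe|Bistro|Pizzeria)\b",  # shop-style name
]

def smart_select(text, tap_index):
    """Return the (start, end) span of the entity under tap_index,
    falling back to the single tapped word if no entity matches."""
    for pattern in ENTITY_PATTERNS:
        for m in re.finditer(pattern, text):
            if m.start() <= tap_index < m.end():
                return m.start(), m.end()
    # Fallback: select just the word that was tapped.
    for m in re.finditer(r"\w+", text):
        if m.start() <= tap_index < m.end():
            return m.start(), m.end()
    return tap_index, tap_index

text = "Meet me at Blue Bottle Cafe at 555-123-4567"
start, end = smart_select(text, text.index("Bottle"))
print(text[start:end])  # prints "Blue Bottle Cafe", not just "Bottle"
```

The real system replaces the regex list with a neural model that runs entirely offline, but the selection behavior it exposes is the same: a tap anywhere inside an entity selects the whole thing.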
Extending ML to mobile devices, Google has also announced TensorFlow Lite, a lightweight version of its ML framework designed for mobile devices. More details are expected at a later stage.
In Android Go, Google's endeavor to reach the next billion users, ML is also used for better Android optimizations and for offline features such as typing in different languages in Gboard (Google's keyboard).
Aside from ML aspects, Android O will also see picture-in-picture support and something totally new called Notification Dots, which surface app activity in a new way. Here's a GIF from Google to show what these do:
Wondering who to share your photos with? Google Photos will now use the power of machine learning to analyze one's photos, recognize faces, suggest who to share with, and pick the right photos to share. Say hello to Suggested Sharing. Here's a little clip from Google that shows how it works:
So if you're just back from a holiday with a thousand photos, Google Photos will make short work of it, picking out only the best shots and bunching them together in an album, ripe for sharing. Together with the new Photo Book service, the same smarts are used to select the best photos to be printed in a hardcopy photo book, complete with layout, so that you don't have to spend time fussing over which photos should go where (although you can still go in and adjust them to your preference).
Google Cloud services
With the advancement of Google's ML capabilities, Google has announced a new ML-focused hardware product named Cloud TPUs. These new Cloud TPUs are capable of delivering up to 180 teraflops of computing power, greatly accelerating the computation needed to run machine learning training and inference. Cloud TPUs are being deployed in Google Compute Engine starting today, in alpha.
In addition, Google has announced the TensorFlow Research Cloud, an initiative to make 1,000 Cloud TPUs available to researchers around the world who are willing to share their research.
Google has also announced new VR and AR frameworks and devices.
- VR: Standalone Daydream with WorldSense: a VR headset that needs neither a phone nor a PC. It can also track motion via WorldSense!
- AR: Visual Positioning Service: the Tango platform is integrated into Google Maps to enable Visual Positioning Service, a way of determining one's position indoors, for example, when finding a store in a particularly large mall.
There are many more services announced today, both for consumers and developers, so stay tuned for in-depth coverage of some of these new services!