At Google I/O 2017, Google Assistant’s AI magic is going places (even iOS)!
At today's keynote address, which set the ball rolling for the three-day Google I/O, the company's annual developer conference, CEO Sundar Pichai once again reinforced the search giant's vision of an AI-first future, one that builds on the mobile-first present we are currently living in. And instead of making lofty claims, Pichai backed the vision up with concrete numbers. There are now over 2 billion active Android devices in the world; that counts not only smartphones but also Android TVs, Android Wear watches, Android Auto-enabled cars and more. There are also 800 million people using Google Drive and over 500 million using Google Photos. That is a massive scale to operate at, and as Pichai humbly stated, Google is proud to serve such a vast user base.

And with such scale propping up the AI-first future, Google pushed ahead with bringing its artificial intelligence and machine learning technology to almost every one of its products. After Google Inbox, Gmail will be getting the 'smart replies' feature, which scans an unread email and suggests replies based on its content. Machine learning!
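Google hasn't detailed how smart replies works under the hood, but the shape of the problem (email text in, a few short suggested replies out) can be sketched with a toy stand-in. The keywords and canned replies below are entirely made up for illustration; the real feature uses a trained machine-learning model, not a keyword lookup.

```python
# Toy illustration of the "smart replies" idea: scan an email's text and
# suggest short canned replies based on its content. This hand-rolled
# keyword lookup only shows the input/output shape; Google's real feature
# is driven by a learned model.

SUGGESTIONS = {
    "meeting": ["Sounds good, see you there.", "Can we reschedule?"],
    "attached": ["Thanks, got it.", "I'll take a look."],
    "dinner": ["I'm in!", "Sorry, can't make it."],
}

def suggest_replies(email_body, max_replies=3):
    """Return up to max_replies canned responses matching the email text."""
    text = email_body.lower()
    replies = []
    for keyword, canned in SUGGESTIONS.items():
        if keyword in text:
            replies.extend(canned)
    return replies[:max_replies]

print(suggest_replies("Are we still on for the meeting tomorrow?"))
```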

Pichai announced Google Lens, the company's take on augmented reality: the camera app recognises objects on the screen and offers useful suggestions. Take a photo of a flower, or just point the camera at one, and Google will tell you what flower it is. Hover the camera over a restaurant and it will show you reviews and options to book a table. Scan business cards and receipts, connect to Wi-Fi, book movie tickets by pointing the camera at the posters; do it all. Google Lens will be integrated into the Google Assistant. It is a new recognition engine that can understand both text and images and come up with relevant, smart answers.

Google also opened up the Assistant to developers with the Google Assistant SDK, which will allow third parties to infuse their products and services with the power of Google Assistant. Essentially, nearly every device can now be controlled using either Google Home or the Assistant, provided developers have built support for it. To make things easier for developers, Google even recognised Kotlin, a statically typed programming language, as a first-class language for writing Android apps.

Also, almost unbelievably, Pichai announced that the Google Assistant will be coming to iPhones and iPads. It will exist alongside Siri as a standalone app that can handle the basics, like sending an iMessage or playing a song. But due to API restrictions, it won't be able to do much more; for instance, you won't be able to set an alarm. It is only available in the US for now.

But on Android, Google Assistant is becoming smarter and smarter. You can now ask the Assistant to order you a pizza and even pay for it. Yes, Google has updated its payments processing system so that the Assistant can buy you things online.

The Assistant can now speak and understand more languages (sadly, not any of the Indic languages yet).

In line with the AI push, Google Photos too received a truckload of new features. It could already pick out the best photos from the myriad you'd take on a vacation, make an album for you to share, and even pester you to share it. You can now share your Google Photos libraries (either partially or in their entirety), and Google Lens is being integrated into the app to recognise objects and bring up relevant information about them.

With such improvements to the Google Assistant, how could Google Home be left out? Google's self-designed, always-aware, contextual home speaker, which comes with the Assistant built in, will now work in more languages, support more third-party devices and services, make free phone calls, and send visual responses to the most relevant screen near you (your phone if you are about to head out, or the TV if you need a bigger canvas). Basically, Google is doing away with using our hands and fingers to control our devices. Our voice is just about enough now.

But AI succeeds only if developers can leverage massive amounts of computing power to train AI models. Google unveiled its second-generation Tensor Processing Unit, or TPU, the cloud computing hardware underlying such ambitious AI-powered products. The chip is designed specifically for machine learning and has been propping up the swanky artificial intelligence in Google Translate, Google Photos and the like. The new TPUs, which can be rigged together to form supercomputers, will carry the AI-driven vision forward by letting developers train massive neural nets and even run inference. Basically, everyone can now use Google's AI platform to build AI-driven apps and services.
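Training and inference, the two workloads the new TPUs accelerate, can be illustrated at toy scale. The sketch below fits a one-weight model y = w·x by gradient descent in plain Python (no accelerator, data made up for illustration); real models do the same multiply-accumulate arithmetic across billions of weights, which is exactly what dedicated hardware like a TPU speeds up.

```python
# Minimal illustration of "training" vs "inference". We fit a one-weight
# model y = w * x to data generated from y = 2x using gradient descent,
# then run inference with the learned weight. A TPU accelerates this same
# kind of arithmetic at enormously larger scale.

data = [(x, 2.0 * x) for x in range(1, 6)]  # training pairs (x, y) with y = 2x

w = 0.0    # initial weight
lr = 0.01  # learning rate

for _ in range(200):                  # training loop
    for x, y in data:
        pred = w * x                  # forward pass
        grad = 2 * (pred - y) * x     # d/dw of squared error (pred - y)**2
        w -= lr * grad                # gradient-descent update

print(round(w, 3))       # learned weight, close to 2.0
print(round(w * 10, 1))  # inference: predicted y for x = 10, close to 20.0
```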