AI takes center stage at I/O 2017

Way back in 1998, Google came into being with a product that would change the way we used the internet: Google Search. Search shrank the gigantic world wide web down to a bite-sized search results page. It was a simple interface that did not get in the way of its users, and it made people's lives so much easier that "googling" became an Oxford dictionary-approved verb. Since then, nearly everything Google has released has aimed at providing that kind of simplicity; the mission to make lives simpler through technology shows in every product the company makes. It is no surprise, then, that after helping people get around the web, introducing the world's most popular mobile operating system, and making files accessible anywhere through its cloud storage service, the next logical step would be to put machines and bots at the service of humans.

Nineteen years after introducing Search, Google is now putting its eggs in the AI basket. Almost every Google product has been infused with neural networks that learn the habits of the users they serve and get smarter with use. Vast amounts of data and machine learning have allowed Google to raise the quality of service it provides to over a billion users, and the results are now showing.

AI in all Google products

Have you noticed that your OK Google exchanges now feel more like actual conversations? That is because Google's AI has been hard at work: since July 2016, the word error rate of its speech recognition has dropped from 8.5 per cent to 4.9 per cent. Take Google Maps, too. Tap anywhere on the map, on any street, and Google will show you the address of the place, because the AI inside the Maps app has been trained to analyse Google's Street View photos and locate the addresses of streets and houses. Your YouTube suggestions and recommendations are now tailored more closely to what you have recently watched or what is currently playing: similar music, or another interview with the same person, more of what you want. Google's Pixel camera is a window into what AI can do for photos; the HDR+ mode can clean up grainy images shot in the dark and make them look remarkably sharp. It's everywhere.
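The "word error rate" quoted here is the standard speech-recognition metric: the fraction of words a recogniser substitutes, inserts or deletes relative to a reference transcript. As a rough illustration of what the metric measures (not Google's implementation), here is a minimal sketch computing it with word-level edit distance:

```python
# Minimal word-error-rate (WER) sketch: word-level Levenshtein distance
# between a reference transcript and a recogniser hypothesis.
# Illustrative only; not Google's implementation.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,             # deletion
                           dp[i][j - 1] + 1,             # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four: WER = 0.25
print(wer("ok google navigate home", "ok google navigate phone"))
```

A drop from 8.5 to 4.9 per cent on this metric means the recogniser gets roughly one word in twenty wrong instead of nearly one in twelve.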

Now the AI is getting smarter

And now it is becoming even smarter. Google Assistant showed promise right from the beginning: the AI butler debuted last year on the Google Pixel and was straightaway a threat to every existing voice assistant. Where Assistant leveraged your voice as the medium of input, Google Lens, introduced yesterday, focuses on your vision. Assistant will now see through your camera and augment what it sees with smart suggestions and actions. Assistant is also no longer a tightly controlled space: Pichai introduced the Google Assistant SDK, which allows anyone to integrate Assistant into their own devices and services.
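To give a sense of what the SDK enables, here is a minimal sketch of embedding Assistant into a device, modelled on the Python `google-assistant-library` samples that shipped alongside the SDK. The credentials path is a placeholder, and the exact constructor and event names are assumptions based on the published samples, so treat this as a sketch rather than the definitive API:

```python
# Sketch of embedding Google Assistant in a custom device via the
# google-assistant-library package. Assumes OAuth credentials generated
# with google-oauthlib-tool; the file path below is a placeholder.
import json

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

with open('/path/to/credentials.json') as f:  # hypothetical path
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

with Assistant(credentials) as assistant:
    # assistant.start() yields a stream of events (hotword detected,
    # conversation turn started/finished, and so on).
    for event in assistant.start():
        if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
            print('Listening...')
```

The point of the SDK is exactly this loop: any device with a microphone, a speaker and a network connection can now host Assistant.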

With AI, Google can now not only arrange your photos but also enhance them. In Google Photos, machine learning recognises faces, scenery, moments and objects, and organises your photos around them. Now it can remove obstructions too. Say you take a photo of a lion at the zoo, but the cage wire is in the way; machine learning can automatically remove the wire from the photo, giving you an unobstructed shot.

AI for the masses

We keep saying AI is magic, the stuff of sci-fi novels. But it is not. Making AI available at this scale takes tremendous computational power: neural networks must be trained on enormous datasets before they become intelligent enough to predict your next step or understand context. CPUs and GPUs are not enough for computation at this volume, so last year Google announced a custom chip for machine learning called the Tensor Processing Unit (TPU), which even then was 15 to 30 times faster than the alternatives. Now Google has wired multiple TPUs together into Cloud TPUs, designed to be stacked in data centres, which will power Google's Compute Engine, the infrastructure behind the likes of Search, Gmail and YouTube. Your Google experience is about to become a lot more magical.
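To make "tremendous computational power" concrete, here is a back-of-the-envelope sketch with illustrative numbers of my own choosing, not Google's: a single dense layer in a neural network is essentially one large matrix multiplication, and even a modest one costs billions of floating-point operations per batch.

```python
import numpy as np

# Rough FLOP count for one dense layer: multiplying a (batch x n_in)
# activation matrix by an (n_in x n_out) weight matrix costs about
# 2 * batch * n_in * n_out floating-point operations.
batch, n_in, n_out = 256, 4096, 4096
x = np.random.randn(batch, n_in).astype(np.float32)
w = np.random.randn(n_in, n_out).astype(np.float32)

y = x @ w  # one forward pass through one layer

flops = 2 * batch * n_in * n_out
print(f"{flops / 1e9:.1f} GFLOPs for a single layer, single batch")
# Training multiplies this by the number of layers, the backward pass,
# and millions of training steps, which is why Google built dedicated
# TPU hardware rather than relying on CPUs and GPUs alone.
```

Multiply that single-layer cost across deep networks and billions of examples, and the case for purpose-built silicon writes itself.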

And then Google dropped the bomb: its AI is being opened up for anyone to use. Chemists, biologists, pathologists and other researchers can leverage Google's machine learning expertise to train their own AI models, which could help detect cancer earlier, improve gene sequencing, run high-volume research models and more. Pichai even dropped an Inception joke: neural networks that learn to build neural networks, to build a better neural network.

At the heart of it is AutoML, a system that uses machine learning to help researchers and developers design and train neural networks: a machine that helps machines "learn to learn". All of this will be available at Google's new website, Google.ai, touted as the go-to space for developers who want to infuse a little AI into their products.
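Google's AutoML controller actually uses reinforcement learning to propose architectures; as a toy illustration of the "learning to learn" idea only, here is a sketch of the simplest form of neural architecture search, random search over layer sizes, written with scikit-learn (my assumption for the sake of a runnable example; Google's system is far more sophisticated):

```python
# Toy "architecture search": randomly sample network shapes and keep the
# best performer on a validation set. AutoML replaces this random sampler
# with a learned controller that proposes promising architectures.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = 0.0, None
for _ in range(10):
    # Sample an architecture: 1 to 3 hidden layers of 16, 32 or 64 units.
    arch = tuple(random.choice([16, 32, 64]) for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}, validation accuracy {best_score:.2f}")
```

Swap the random sampler for a neural network that learns from past trials which architectures to try next, and you have the shape of the inception joke.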

AI is now baked inside Android too

AI is at work inside Android too, making the whole experience better. It will speed up your phone's boot time and autofill details based on context. And then there is smart text selection: Android O can recognise the things you most often select, such as names, phone numbers and addresses, so a double tap selects the whole entity and surfaces a relevant action. In time, there will be more such instances of magic.

After yesterday's keynote, it is clear that Google is pivoting towards artificial intelligence. It has all the data it needs, and now it is putting that data to work to reimagine the mobile experience. Welcome to the future, people.