I did, but there's equally a chance that, by releasing in other countries, they would spread their resources even thinner.
I say doubtful. A company with more than $90 billion in cash would be hard-pressed to "spread their resources even thinner" in an area of mission-critical importance to their future.
Why? Here is my takeaway message from yesterday's keynote speakers: Google is trying to create an interface out of artificial intelligence. This is such a huge paradigm shift in the way we interact with information that it seems to me Google is really just getting started here. But in a world where everyone expects everything instantly, this change will take time. This stuff is hard to do.
Historically, and even now, you have mostly needed a keyboard, a mouse, or touch to interact with your source of data or entertainment. Now the distance between input and output has shrunk. Soon you will do away with icons, text input boxes, and swipes on your device. But my point goes even further than just using your voice as input to Google Assistant (GA) or Google Home (GH).
It shows that Google can surface its growing AI and machine learning skills in the user interface so that the result is simply given to you, rather than you having to do anything with touch, a keyboard, an app, etc. And by launching GA on the iPhone too, Google is basically saying that they are bringing voice interaction to everyone. It seems to me that while AI is really hard to implement, Google is laying the groundwork for GA's ubiquity (and hence its functionality) in a very direct way. Saying they are now an AI-first company, not a mobile-first company, is pointing the supertanker in an unmistakable direction.
The signs of their push into AI and machine learning are everywhere at Google I/O. They have reduced their voice recognition error rate dramatically in the last few months, to 4.9%. They have dramatically improved the proactive AI/machine-learning skills of GA, which now notifies you of the messages it thinks you will consider important. They have opened GA to third-party developers. They have increased the number of smart home companies they are working with to more than 70 (e.g., recently LG and GE Appliances). They have determined that they do not need to put a screen on GH (unlike what Amazon plans), because Visual Responses now uses the many screens you already have: phone, TV, etc. They have introduced Suggested Sharing into Google Photos. And with Google Lens, they have determined that their image recognition is better than a human's: you can now point your phone at an object and get more information about it than you already knew.
For enterprises, they are making their Tensor Processing Units (TPUs) available to Google Cloud customers, so companies can run the inference that helps them extract value from their data. TPUs have already dramatically improved performance in Google's own services, including Search, Maps, YouTube, Google Play, etc.
Spreading too thin? I think not. Rather, full charge ahead.