Aye, it is rather odd to say the least; all they've shown and mentioned are FP16 use cases and performance.
If they don't have the FP64 performance to compare with Tesla, it might mean they'll try to compete on price and FP16 alone.
Could be going after the deep learning community with much lower costs than the Tesla GP100. Google did have an agreement with AMD for datacenter machine learning GPUs, but that seems short-lived since Google is very happy with its TensorFlow hardware, which is much cheaper, faster, and lower power than GPUs.