Wasn't aware of a Turing-based Tesla card.
But yes, you are correct: the newly announced 7nm MI60 is only slightly faster (14.7 TFLOPS) than the year-old, last-gen 12nm Tesla V100 at 14 TFLOPS, unless you look at the NVLink variant of course.
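Those peak FP32 numbers follow directly from shader count and clock (each shader does one FMA, i.e. 2 ops, per cycle). A quick sketch, using approximate published boost clocks:

```python
# Peak FP32 throughput = shaders x 2 ops (FMA) x clock.
def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

mi60      = peak_tflops(4096, 1.80)  # Radeon Instinct MI60, ~1.80 GHz peak
v100_pcie = peak_tflops(5120, 1.37)  # Tesla V100 PCIe, ~1.37 GHz boost
v100_sxm2 = peak_tflops(5120, 1.53)  # Tesla V100 SXM2 (NVLink), ~1.53 GHz

print(mi60, v100_pcie, v100_sxm2)   # ~14.7, ~14.0, ~15.7 TFLOPS
```

Which is exactly why the NVLink (SXM2) variant pulls ahead: same silicon, just a higher clock.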
On the last page I posted some matrix-computation benchmarks. All the Nvidia Volta advertising about 100 TFLOPS at FP16 with dedicated Tensor cores doesn't materialize in practice, and the reality is that AMD competes on even footing in the datacenter segment at 14/12nm using the ancient ROCm 1.3. Now with the V20 it is 6-10 months ahead (financial analysts are saying that), and we haven't even seen the improvements ROCm 2.0 brings to the table.
Workstation cards are probably a completely different matter; then again, at the moment you could buy four WX9100s (Vega 10) for the price of a single RTX 6000. True, you could also buy four RTX 6000s if money doesn't matter.
However, consider this: I use my Vega 64 with TensorFlow (on ROCm 1.9.1), and it is a great card to work with.
It is only around 13-18% slower (depending on usage, undervolted and overclocked) than the $9,000 Tesla V100, at 1/20th the price. A bargain not only for me but for many small/medium businesses and especially enthusiasts.
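Put those numbers together and the value argument is hard to ignore. A rough perf-per-dollar sketch (the $450 Vega 64 price is my ballpark assumption for "1/20th"):

```python
# Rough perf-per-dollar comparison; prices are ballpark assumptions.
v100_price   = 9000.0  # Tesla V100 street price cited above
vega64_price = 450.0   # assumed typical Vega 64 retail, ~1/20th

for slowdown in (0.13, 0.18):
    vega_rel_perf = 1.0 - slowdown  # throughput relative to V100
    ratio = (vega_rel_perf / vega64_price) / (1.0 / v100_price)
    print(f"{slowdown:.0%} slower -> {ratio:.1f}x perf per dollar")
```

Even at the pessimistic end of the range, the Vega 64 delivers roughly 16x the performance per dollar of the V100.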
I know a company in York that is buying used V64s even today and using them for work projects (they use MS AI tooling), because the card is great at number crunching and dirt cheap compared to professional cards.
I don't remember if it was AdoredTV or the guy at Anandtech, but around May he said that the majority of Vega 64s were actually grabbed and sent straight to the entertainment industry, not to mining rigs.
You see, you can still use the professional tools on consumer cards, as AMD doesn't require licences and everything is open source.
That is why AMD was again pointing out that there is no licensing and everything is open source.