Worst performance relative to the competition in their history, minor gains even over their own previous efforts, and all while keeping prices nowhere near appealing.
That would depend on the pricing. 6800 XT-ish performance with lower power consumption and better ray tracing capabilities doesn't sound too bad to me if it's priced below the 4070. There's also absolutely no way that RDNA 3 is worse than the Fury/Vega years.
Power consumption gains will be minimal, just like with the 7600, and it will be plagued by the same high idle power consumption as N31. As for better ray tracing, that's mostly smoke: N31 and N33 showed no significant gain there relative to RDNA 2, and whatever gains existed came mostly from N31 simply being bigger (more cores). Really, RDNA 3 = RDNA 2 + the ability to go multi-chip. Fundamentally the changes were there to facilitate that transition and to preserve AMD's profit margins, but at an architectural level the gains in every other respect were mostly pathetic.
And yes, RDNA 3 is 10x worse than the Fury/Vega years, because back then there were at least some differentiating factors that could make you consider AMD; now there are essentially none. Take the 980 Ti vs the Fury X: they were close in performance, there were no real feature advantages for Nvidia like there are today (RT/DLSS etc.), pricing was sometimes more attractive for the Fury, and it had nice undervolting gains. Vega had a definite compute advantage over the 1080, and OC vs OC they were very well matched, with Vega even pulling ahead at 4K, and again there was more or less feature parity. Overall Nvidia still had the better gaming cards, but it was easy for AMD to drop the price a bit to stay competitive, since that was all it took to balance the scales. With RDNA 3 the prices would have to drop a LOT more to make an argument in many scenarios: in RT it's at least two generations behind Nvidia, its upscaling is worse than DLSS and it completely lacks frame generation, its alternative to Reflex is middling, and then there are ancillary things like CUDA (which, unless you want to be an ignoramus in the era of ML/DL, is a real PITA to do without on AMD, ask me how I know...), Nvidia's better media engine, way better efficiency, etc. etc.
Honestly, the clearest sign of how much worse RDNA 3 is than Fury/Vega is the outlier results. I can find non-broken games (and not sub-30 fps scenarios) where Lovelace is >=2x faster than its RDNA 3 equivalent, but I can't recall a single such example against either Fury or Vega. There's simply no way to price your way out of performance deficits like that; they need a whole new architecture to actually compete in those scenarios (and close the feature gap). The crazy thing is they've fallen even further behind than last gen WHILE Nvidia gimped most of its own products even harder for more obscene profit margins. So in reality it's even worse than what we see.