Compared to the RTX 3090 Ti, last generation's flagship, the RX 7900 XTX is 19% faster, wow! The performance increase over the Radeon RX 6900 XT and RX 6800 XT is 47% and 58%, respectively. The mighty GeForce RTX 4090 is 22% faster than the RX 7900 XTX, at more than twice the cost right now. Today AMD has also released the RX 7900 XT; that card is 16% slower than the XTX at 4K, and at lower resolutions the gap is considerably smaller.
It's also possible that the press driver isn't fully optimized for all our games yet. RDNA3 introduces new dual-issue compute units, which require specially optimized code to achieve their theoretical +100% throughput uplift. In briefings, AMD made it clear they have been optimizing the driver for the new units, and I'm sure a lot of work has already gone into the compiler, but I'm just as certain there are cases where hand-optimization can yield further gains. During testing I also encountered crashes in AC:Valhalla and Elden Ring; no doubt these will be fixed soon.
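To make the dual-issue point a bit more concrete, here is a minimal sketch, in plain C rather than actual shader or RDNA3 ISA code, of the kind of instruction-level parallelism these units need: two independent accumulator chains can, in principle, be paired into one dual-issue (VOPD) instruction per cycle, while a single dependent chain cannot. The loop structure and variable names are purely illustrative.

```c
/* Illustrative only: plain C standing in for shader code, to show
 * the instruction-level parallelism RDNA3 dual-issue can exploit. */
#include <stdio.h>

#define N 1024

int main(void)
{
    static float x[N];
    const float a = 1.5f, b = 0.5f;
    for (int i = 0; i < N; i++)
        x[i] = (float)i;

    /* Dual-issue friendly: acc0 and acc1 don't depend on each other,
     * so their multiply-adds can execute side by side. */
    float acc0 = 0.0f, acc1 = 0.0f;
    for (int i = 0; i < N; i += 2) {
        acc0 = acc0 * a + x[i];
        acc1 = acc1 * b + x[i + 1];
    }

    /* Serial chain: every iteration depends on the previous result,
     * leaving the second issue slot empty. */
    float chain = 0.0f;
    for (int i = 0; i < N; i++)
        chain = chain * a + x[i];

    printf("%f %f\n", acc0 + acc1, chain);
    return 0;
}
```

If game shaders mostly look like the second loop, the compiler has little to pair up, which is why per-title hand-tuning in the driver can still pay off.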
With those performance characteristics, the RX 7900 XTX is a formidable choice for gaming at 4K with maximum details, and at 1440p with high refresh rates. You can crank up everything and still run at over 60 FPS. Things are different once you enable ray tracing, though; here the RX 7900 XTX is considerably weaker than what NVIDIA offers. On average (see the new chart in the RT section), the RTX 4080 is around 15% faster than the RX 7900 XTX with ray tracing enabled, which isn't monumental, but definitely more than I would have expected. I think everyone agrees that ray tracing is the future and just disagrees on how quickly that future is arriving. If you're part of the "I want this now" camp, then you should probably consider the RTX 4080 or RTX 4090. On the other hand, if you feel ray tracing is just minor additional eye candy that comes with a huge performance hit, then you can happily grab the RX 7900 XTX. That's not to say AMD's new cards are useless with ray tracing, but once you factor in the differences in price and RT performance, the value proposition of both cards is virtually identical, with the NVIDIA RTX 4080 giving you the higher overall performance.
Power efficiency of the new Radeons is fantastic, clearly much better than the previous-generation RDNA2 and NVIDIA Ampere cards. NVIDIA's GeForce RTX 40 cards are a bit better still, by 10% (RTX 4090) and 16% (RTX 4080). During gaming the RX 7900 XTX draws around 350 W, sitting right at its power limit. While the choice of dual 8-pin connectors makes a lot of sense, it slightly limits how much power the card can use. I also noticed that clocks drop considerably as the card heats up. In our thermal load test, the card starts out running at 2673 MHz and stays there for around 20 seconds, good for a boost in short-running benchmarks, but then clocks fall to 2505 MHz and stay there until the card cools down again at the end of your gaming session. This 6.3% drop is clearly significant and costs AMD against the RTX 4080, which loses only 1% in the same test.
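For reference, that throttling figure is simply the relative drop from the initial boost clock. A quick sanity check, using the clocks measured above:

```c
/* Sanity check on the quoted throttling numbers: relative drop
 * from 2673 MHz (initial boost) to 2505 MHz (sustained). */
#include <stdio.h>

int main(void)
{
    const double boost_mhz = 2673.0;
    const double sustained_mhz = 2505.0;
    double drop_pct = (boost_mhz - sustained_mhz) / boost_mhz * 100.0;
    printf("clock drop: %.1f%%\n", drop_pct); /* prints 6.3% */
    return 0;
}
```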
We measured a shocking power consumption result for multi-monitor and media playback: here the graphics card alone consumes 103 W and 88 W, respectively. This is way too high; the RTX 4080 uses only 20-23 W in the same scenarios, and even the last-generation RDNA2 cards, at around 40 W, drew less than half as much. This can only be some sort of driver bug, because it basically disqualifies the new Radeons for multi-monitor use. Remember, this is idle at the desktop, not gaming. Wasting that much power is simply a big no-no, especially in these times. AMD has a long history of drawing a lot of power in these states, so I'm not 100% convinced this really is so easy to fix. I also find it hard to imagine that nobody at AMD tests multi-monitor power draw, so in some meeting somewhere, someone decided "we will release it like that."
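If you want to check your own card's idle draw, the amdgpu Linux driver reports board power through the standard hwmon sysfs interface. Below is a hedged sketch, assuming a single AMD GPU and a kernel that exposes the power1_average sensor (in microwatts); on some ASIC/kernel combinations the file is named power1_input instead, and the hwmon index varies per system, so the code scans for the node named "amdgpu". Note that this reads the driver's own telemetry, not the physical measurement setup we use for reviews.

```c
/* Sketch: read GPU board power from the amdgpu hwmon interface.
 * Assumes power1_average exists and reports microwatts. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[128], name[64];
    for (int i = 0; i < 16; i++) {
        snprintf(path, sizeof path, "/sys/class/hwmon/hwmon%d/name", i);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(name, sizeof name, f) && strncmp(name, "amdgpu", 6) == 0) {
            fclose(f);
            snprintf(path, sizeof path,
                     "/sys/class/hwmon/hwmon%d/power1_average", i);
            f = fopen(path, "r");
            if (!f)
                break;
            long long uw; /* microwatts */
            if (fscanf(f, "%lld", &uw) == 1)
                printf("GPU board power: %.1f W\n", uw / 1e6);
            fclose(f);
            return 0;
        }
        fclose(f);
    }
    fprintf(stderr, "no amdgpu hwmon power sensor found\n");
    return 1;
}
```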