It's not a great way of looking at performance.
A large amount of the additional performance every generation comes from shrinking the transistor size on the chips, as this typically leads to lower power draw, higher clock speeds and more physical transistors to do calculations with for any given chip area. Nvidia and AMD do not actually fabricate the chips themselves; that's done by the likes of TSMC and Samsung, and it's those fabricators who do the R&D into getting the node size down. Nvidia and AMD simply design what gets etched into that wafer. Generally speaking they're both limited to the same silicon as each other, and while each can have its own optimizations and tricks in software, fundamentally they both have the same performance ceiling every generation, which is the number of transistors available to do calculations with.
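As a back-of-envelope illustration of the density part of that (idealized numbers only, not real foundry specs, since modern marketing node names don't map cleanly to physical feature sizes):

```cpp
// Back-of-envelope illustration only: idealized scaling, not real node specs.
#include <cstdio>

int main() {
    // Hypothetical linear feature sizes in nanometres.
    double old_node = 10.0;
    double new_node = 7.0;

    // In the idealized case, transistor density scales with the inverse
    // square of the linear feature size, so a 10nm -> 7nm shrink fits
    // roughly (10/7)^2 ~ 2x the transistors into the same die area.
    double density_gain = (old_node / new_node) * (old_node / new_node);
    std::printf("Idealized density gain: %.2fx\n", density_gain);
    return 0;
}
```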
A lot of the performance gap being closed this generation has come at the expense of Nvidia spending fewer transistors on improving rasterization performance and more on RT and Tensor cores. These take up physical space on the GPU itself, so they're trading rasterization performance for cores that accelerate other, more specific tasks. RT cores handle ray tracing math, and Tensor cores run the machine learning algorithms behind things like DLSS.
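To make that concrete, here's a rough CPU-side sketch of the two kinds of math those units exist to accelerate. This isn't Nvidia's actual hardware implementation, just the general shape of the work: RT cores churn through ray/triangle intersection tests like the Möller–Trumbore test below, and Tensor cores churn through small matrix multiply-accumulate blocks of the sort neural networks (like the DLSS network) are built from.

```cpp
// Rough CPU-side sketch of the work RT and Tensor cores accelerate.
// Not Nvidia's implementation, just the general shape of the math.
#include <array>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray tracing workload: a Moller-Trumbore ray/triangle intersection test.
// RT cores exist to run enormous numbers of tests like this every frame.
bool rayHitsTriangle(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 h = cross(dir, e2);
    float a = dot(e1, h);
    if (a > -eps && a < eps) return false;   // ray parallel to the triangle
    float f = 1.0f / a;
    Vec3 s = sub(origin, v0);
    float u = f * dot(s, h);
    if (u < 0.0f || u > 1.0f) return false;  // misses the triangle
    Vec3 q = cross(s, e1);
    float v = f * dot(dir, q);
    if (v < 0.0f || u + v > 1.0f) return false;
    return f * dot(e2, q) > eps;             // hit point is in front of the ray origin
}

// Tensor core workload: small matrix multiply-accumulate, D = A * B + C.
// Stacks of these make up the neural network that DLSS evaluates each frame.
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 matMulAdd(const Mat4& A, const Mat4& B, const Mat4& C) {
    Mat4 D{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];
            D[i][j] = acc;
        }
    return D;
}
```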
While AMD do have good rasterization performance versus Nvidia this generation, they've been quiet about both ray tracing and resolution upscaling. Now that 3rd party reviewers have done benchmarks, we know their RT performance is poor, which should surprise no one: they do have RT acceleration units inside each CU, but they're probably spending far less of their transistor budget on them. And there's no Tensor core equivalent, so any kind of upscaling is bound to be worse and eat further into rasterization performance. They have Super Resolution in the works, but we've not seen or heard anything from it yet; my guess is that it won't compare well to DLSS in either performance or quality, but we'll see.
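For comparison, below is roughly what the simplest non-ML spatial upscale (plain bilinear filtering) looks like; it runs fine on ordinary shader cores with no tensor hardware. To be clear, we don't know what Super Resolution will actually do, this is just the baseline any fancier upscaler has to beat.

```cpp
// Illustrative baseline only: a plain bilinear upscale, no ML hardware needed.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Single-channel image as a flat row-major float buffer, for brevity.
struct Image {
    int width;
    int height;
    std::vector<float> pixels; // size == width * height

    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);   // clamp reads at the borders
        y = std::clamp(y, 0, height - 1);
        return pixels[static_cast<std::size_t>(y) * width + x];
    }
};

Image bilinearUpscale(const Image& src, int dstW, int dstH) {
    Image dst{dstW, dstH, std::vector<float>(static_cast<std::size_t>(dstW) * dstH)};
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            // Map the destination pixel centre back into source coordinates.
            float sx = (x + 0.5f) * src.width  / dstW - 0.5f;
            float sy = (y + 0.5f) * src.height / dstH - 0.5f;
            int x0 = static_cast<int>(std::floor(sx));
            int y0 = static_cast<int>(std::floor(sy));
            float fx = sx - x0, fy = sy - y0;
            // Blend the four nearest source pixels.
            float top = src.at(x0, y0)     * (1 - fx) + src.at(x0 + 1, y0)     * fx;
            float bot = src.at(x0, y0 + 1) * (1 - fx) + src.at(x0 + 1, y0 + 1) * fx;
            dst.pixels[static_cast<std::size_t>(y) * dstW + x] = top * (1 - fy) + bot * fy;
        }
    }
    return dst;
}
```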