I like to remind people that NVIDIA's CEO has been saying for a while now that GPUs don't get much faster anymore, and that AI is the future instead. We're close to the physical wall beyond which quantum effects start causing real trouble. Maybe different materials will let us push clocks, heat transfer, power and so on, but transistor size hasn't really changed for years now. Only the construction has changed; the actual size of each transistor has stayed roughly the same since the "10nm" processes (and even "10nm" never meant 10nm per transistor!). Last I saw, the real feature size planned a few generations out (so not even on current top nodes!) is around 13nm. Getting down to an actual 3nm just isn't possible right now, since that would land deep in quantum-weirdness territory. The node names are pure marketing, meaning "we've improved things a bit here and there" without actually shrinking the transistors.
Anyway, all the low-hanging fruit of performance has already been picked, and now it gets harder every generation to find any speed-up. Same with CPUs (see Ryzen 9000 vs 7000: faster clocks, slightly lower power draw, but for gaming it's nearly the same CPU).