AFAIR, the AMD PR person said something more along the lines of: due to competitive pressure they won't leave performance on the table, which to me implies they will once again push clocks in most SKUs. I'm sure they'll have one SKU where the >50% perf/watt claim holds true, but the rest will be clocked as high as possible.
I thought the rumour was AMD won't go balls to the wall, and will leave Nvidia to take the power-hungry crown. But if AIBs want to go all out, there is room and the option for them to do that.
The 2080 Ti had 4,352 CUDA cores and the 3090 Ti has 10,496; that is ~2.4x. It's not 140% faster, though: according to TPU the 3090 Ti is 67% faster (a quick check of those numbers is sketched below).
That's because the Ampere CUDA cores don't have the same shader performance as the Turing CUDA cores. Nvidia increased the number of them, which amounts to more RT and compute capability while taking some space away from the shader portion of the core; it's a rebalancing of the core to better suit the current direction of what an Nvidia GPU is.
AMD also needs much greater RT performance, and one way to get it is the same way Nvidia did, so that 2.4x core count probably isn't going to scale linearly, just like Nvidia's didn't going from Turing to Ampere.
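For what it's worth, here is a back-of-the-envelope check of those figures (core counts are public spec-sheet numbers, the 67% is the TPU figure quoted above; clock differences are ignored for simplicity):

```python
# Quick sanity check of the core-count-vs-performance claim above.
turing_cores = 4352     # RTX 2080 Ti
ampere_cores = 10496    # RTX 3090 Ti
measured_uplift = 0.67  # 3090 Ti vs 2080 Ti, per TPU as quoted above

core_ratio = ampere_cores / turing_cores
print(f"core ratio: {core_ratio:.2f}x")             # ~2.41x
print(f"naive expectation: +{core_ratio - 1:.0%}")  # ~+141% if it scaled linearly
print(f"measured: +{measured_uplift:.0%}")          # +67%
print(f"realised scaling: {(1 + measured_uplift) / core_ratio:.0%} of linear")  # ~69%
```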
Just delivering cards whose performance doesn't fall off a cliff when turning RT on would be nice.
Even Nvidia can't do that though; the 4090 gets 22 fps in Cyberpunk without DLSS.
Ada ain't it: 25 fps at 4K in Cyberpunk.
Yes, was going to say similar.
Okay, but what NV did with Ampere was just double FP32 throughput and call that 2.4x more CUDA cores. That's great if you're running pure FP32 code, but if you have any INT32 in your code you effectively only have half of it: INT32 got just 1.2x more execution units at 1.2x the clock speed, which totals 44% more INT32 throughput.
So Ampere did and didn't have 2.4x the cores, depending on the workload.
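A minimal sketch of that arithmetic, assuming the simplified two-datapath model of the Ampere SM (the 1.2x unit and clock figures are the post's own):

```python
# Turing's second SM datapath was INT32-only; Ampere made it FP32/INT32 capable,
# so FP32 throughput per SM doubled while INT32 throughput per SM stayed flat.
sm_ratio    = 1.2  # ~1.2x more execution units (post's figure)
clock_ratio = 1.2  # ~1.2x higher clock speed (post's figure)

fp32_uplift  = sm_ratio * clock_ratio * 2.0  # both datapaths doing FP32
int32_uplift = sm_ratio * clock_ratio * 1.0  # INT32 limited to one datapath

print(f"pure FP32 workload:  {fp32_uplift:.2f}x")   # 2.88x theoretical
print(f"pure INT32 workload: {int32_uplift:.2f}x")  # 1.44x, i.e. the +44% above
```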
RDNA3 seems to be a flat 2.4x more INT and FP, though, so there is more room for scaling to land closer to the theoretical max. With clock speeds between 30% and 50% higher you are looking at a range of 3.1x to 3.6x the TFLOPS (see the sketch below). Expecting a meagre 50% more performance from that level of uplift is just really pessimistic, unless you are talking about ~300W, where it would satisfy the +50% perf/watt claim.
If TBP is around 420W, though, I do expect 2x or so more performance from the 7950 XT over the 6900 XT, which would put it at 4090 levels or ahead.
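The same style of estimate for the rumoured RDNA3 numbers; every input here is the post's own speculation, so treat the output accordingly:

```python
# Theoretical RDNA3 throughput range from the rumoured figures above.
alu_ratio = 2.4                  # rumoured 2.4x more FP/INT units
clock_lo, clock_hi = 1.30, 1.50  # rumoured +30% to +50% clocks

print(f"throughput uplift: {alu_ratio * clock_lo:.1f}x to {alu_ratio * clock_hi:.1f}x")
# -> 3.1x to 3.6x the TFLOPS, as stated above
```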
I meant relative to Nvidia's own drop in framerates when turning on RT. Nvidia's is bad enough, but AMD's is even worse.
Your post was very clear and accurate, but it applies to both Nvidia and AMD. I think you will agree that it is hardly a win to claim Nvidia are better than AMD based purely on the fact that one drops to 30 fps while the other drops to 20 fps. Neither is playable if 30 fps is an average.
This is the difference between crashing into a brick wall at 150 miles per hour with a helmet on versus without one. Neither will save your life and neither is ideal.
We have now reached a stage where almost £2,000 gets you a so-called "top tier" GPU that has to use compression techniques to get playable frames with RT on. In fact, I can't tell if it is satire that we are being sold image-degrading compression techniques and told they are "better than native".
First of all, I think that was the Uber RT mode that's not yet been released, playing at 4K and not at 1440p, the generally accepted resolution for RT. The consensus is that AMD is going to be manhandled in the RT department this cycle, especially in more reasonable RT games (CP77 Overdrive isn't reasonable, it's a tech demo). I don't know if people have read the Ada whitepaper, but Nvidia has found a nifty recursive hardware acceleration framework for RT, and then there's this opacity mapping block which can do RT alpha testing in hardware. All you need is a game with lots of transparent items like foliage and AMD's RT will start looking like a joke.
If the 7900 & 7950 XT have the chops, AMD are going to want to charge you for it; that means most of their customer base is eyeing up the 4090 right now.
We will know by Wednesday IMHO; if there is still silence, there's your answer: they are just going to slot in alongside Nvidia's overpriced stack at the high end. It may well be, of course, that they are targeting the lower tiers, which commercially makes a good deal of sense I guess.
But I want to temper expectations. I think one of the worst things we can do to AMD is set expectations so high that matching or even beating Nvidia is assumed, because if they fall short, that could sour what might still be a very good card launch in its own right.
Nvidia priced their cards high to sell 3000 series cards. If AMD fancies helping Nvidia sell old stock, then by all means they should price their cards high as well.
Are people still arguing against this despite the many in-depth videos (not just from DF, but also HU, TPU and Gamers Nexus) showing that DLSS and even FSR can often produce better IQ in various areas than native + AA? Of course there are plenty of spots in a game world where one could take screenshots and go "upscaling bad", but there are also plenty where one can take screenshots and/or videos showing more detail being rendered, or, as is most often the case with upscaling tech, considerably less shimmering and better temporal stability, which for many is far worse than a slight softness to the image.
I did the same calcs with the 5700 XT and they were bang on the money, even though plenty of people back then thought N21 would only compete with the 2080 Ti and wouldn't touch the 3080.
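For anyone curious, this is roughly the kind of calculation being described; a sketch using the public CU counts and boost clocks (the ~2x comparison at the end is how reviews broadly placed the 6900 XT against the 5700 XT at 4K):

```python
# The same units-times-clock estimate, applied to RDNA1 -> RDNA2.
cu_5700xt, clock_5700xt = 40, 1905  # RX 5700 XT, boost MHz
cu_navi21, clock_navi21 = 80, 2250  # RX 6900 XT (Navi 21), boost MHz

estimate = (cu_navi21 / cu_5700xt) * (clock_navi21 / clock_5700xt)
print(f"naive estimate: {estimate:.2f}x the 5700 XT")  # ~2.36x theoretical
# Reviews put the real 4K gap at roughly 2x, i.e. the usual sub-linear scaling.
```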