
RDNA 3 rumours Q3/4 2022

The Ryzen 1800X had 95% of Intel's IPC; it was behind in clock speed and gaming, but it had better power efficiency on a brand-new 14nm node compared with Intel's very mature 14nm node, and it matched the 8-core 6900K for performance at half the price.
The Ryzen 3800X had 110% of Intel's IPC; it was still behind in clock speed and gaming but improved on both fronts. With the 3900X and 3950X, AMD had the fastest mainstream CPUs, and AMD's 16-core matched Intel's 18-core HEDT, again at half the power.
The Ryzen 3960X, 3970X and 3990X literally killed Intel's HEDT, stone dead.
The Ryzen 5000 series have a much higher IPC than Intel; they match Intel for clock speed, have better gaming and single-threaded performance, are twice as fast in productivity, and again have much better power consumption. Ryzen 5000 is better than Intel in every conceivable way.

They didn't do it overnight, so to expect them to match or beat Nvidia in one generation is disingenuous at best. AMD match Nvidia for rasterisation performance and do it with less power, in one generation; at least give them credit for that.

I don't think it's the same situation as with Intel, as Nvidia don't appear to have been napping.

I didn't expect AMD to beat Nvidia in one generation. That's why I had a 3080 on pre-order. As far as rasterisation goes, great job AMD, but on a smaller node and with higher clocks, why are you not winning outright in rasterisation? The lack of next-gen tech on a card that looks to compete in the PC space means it was worthless.
 
Nvidia obviously had an idea of what they were up against performance-wise, as it's not uncommon for competitors to have inside information. The 3080 could easily have been turned into the Ti for release and sold for £1k, with the 3070 bumped to the 3080, and no one would have complained at 2080 Ti performance for £650. As you say, Nvidia wants to maximise profits but at the same time give you as little performance as they can get away with, so you need to upgrade again sooner.

I would have been surprised to see a 10GB Ti. So no, Nvidia designed the 3080 as a 3080.

Maybe for you, but for the majority of people rasterisation is the most important factor. How many people would have got excited about Ampere if Jensen had said 'we are giving you the same raster performance as Turing but doubling RT', and AMD had then knocked out a card that is 50% faster in raster?

The majority of people use 60Hz panels.
 
I didn't expect AMD to beat Nvidia in one generation. That's why I had a 3080 on pre-order. As far as rasterisation goes, great job AMD, but on a smaller node and with higher clocks, why are you not winning outright in rasterisation? The lack of next-gen tech on a card that looks to compete in the PC space means it was worthless.

2GHz vs 2.4GHz: why does it matter how they did it? Core for core, Intel was equal with AMD's Zen 2; they did it with higher clock speeds, and it didn't matter, because all that matters is the end result. At the time AMD couldn't achieve those clock speeds; Nvidia can't achieve 2.4 to 2.8GHz.
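To put the "all that matters is the end result" argument in rough numbers, here is a minimal sketch assuming single-thread performance scales roughly as IPC × clock; the IPC and clock values below are made-up round figures for illustration, not measurements of any real chip.

```python
# Rough sketch: single-thread performance scales roughly as IPC x clock,
# so a wider core at a lower clock can land on the same end result as a
# narrower core at a higher clock. Figures are illustrative, not benchmarks.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    """Crude proxy for single-thread throughput (instructions per nanosecond)."""
    return ipc * clock_ghz

chip_a = relative_perf(ipc=1.2, clock_ghz=2.0)  # higher IPC, lower clock
chip_b = relative_perf(ipc=1.0, clock_ghz=2.4)  # lower IPC, higher clock

print(chip_a, chip_b)  # 2.4 vs 2.4 -> same end result via different routes
```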
 
I would have been surprised to see a 10GB Ti. So no, Nvidia designed the 3080 as a 3080.
Why would it have been 10GB? Nvidia would have just given you 12GB like they are doing now. The 3080 PCB even has 2 empty RAM spots.

The 3080 would then have had either 8 or 10GB, but probably just standard GDDR6 and a 104 die.

The majority of people use 60Hz panels.
The majority also don't have raytracing.
 
2GHz vs 2.4GHz: why does it matter how they did it? Core for core, Intel was equal with AMD's Zen 2; they did it with higher clock speeds, and it didn't matter, because all that matters is the end result. At the time AMD couldn't achieve those clock speeds; Nvidia can't achieve 2.4 to 2.8GHz.

So where would AMD GPUs be today if Nvidia had also gone to 7nm? I'd guess anywhere from 20-30% further behind than they already are.

The point being that next gen will most likely be on the same node. Do you see AMD managing to compete then?
 
Why would it have been 10GB? Nvidia would have just given you 12GB like they are doing now. The 3080 PCB even has 2 empty RAM spots.

The 3080 would then have had either 8 or 10GB, but probably just standard GDDR6 and a 104 die.

WTF, really? The bus width dictates the VRAM sizes. The PCB has empty VRAM areas because it is a shared design.
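As a quick sketch of the "bus width dictates the VRAM sizes" point: each GDDR6/GDDR6X package sits on a 32-bit channel, so the chip count is bus width ÷ 32, and capacity is chip count times per-chip density (1GB or 2GB packages at the time). The helper below is purely illustrative, not anything from the thread.

```python
# Why bus width constrains VRAM: each GDDR6/GDDR6X package uses a 32-bit
# channel, so chip count = bus width / 32, and total capacity is chip count
# times per-chip density (1GB or 2GB packages at the time).

BITS_PER_CHIP = 32

def vram_options(bus_width_bits: int, densities_gb=(1, 2)) -> dict:
    """Map per-chip density (GB) -> total VRAM (GB) for a given bus width."""
    chips = bus_width_bits // BITS_PER_CHIP
    return {d: chips * d for d in densities_gb}

print(vram_options(320))  # 320-bit bus (3080 class): {1: 10, 2: 20} GB
print(vram_options(384))  # 384-bit bus (3080 Ti/3090 class): {1: 12, 2: 24} GB
```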

The majority also don't have raytracing.

The majority don't have hardware raytracing. This is why we see hybrid engines. Crytek have done some great work with non-dedicated hardware, and I think Pascal can also run RTX raytracing. Quake 2 RTX uses a full path-tracing engine. I hope Metro Exodus Enhanced is fully raytraced.
 
They achieve that performance on a smaller node, with higher clocks, and by focusing purely on gaming, yet they are still a generation behind with raytracing and so far offer no alternative to DLSS, 5.5 months after launch. Where would AMD GPUs be today if Nvidia had also gone to 7nm?

AMD's first gen of ray tracing is faster than Nvidia's first gen.

As for DLSS, it took Nvidia months to get it into a game, that being Battlefield V, and when it did arrive it was a blurry mess. It took several iterations since then to get it somewhat up to snuff, so I'm not sure why you're going on about it taking long when Nvidia were in exactly the same boat and it took time to get it to a decent state. But of course you know this already and it's just a bit of pathetic point scoring.
 
AMD's first gen of ray tracing is faster than Nvidia's first gen.

As for DLSS, it took Nvidia months to get it into a game, that being Battlefield V, and when it did arrive it was a blurry mess. It took several iterations since then to get it somewhat up to snuff, so I'm not sure why you're going on about it taking long when Nvidia were in exactly the same boat and it took time to get it to a decent state. But of course you know this already and it's just a bit of pathetic point scoring.

We are talking about GPUs today and in the future, not the past, which remains irrelevant.
 
I think he's afraid AMD might just pull it off.

Why? If AMD bring out a better GPU than Nvidia and Intel, I'll be buying AMD. I did my AMD 'fanboying' back when Nvidia locked down PhysX. I started buying Nvidia GPUs with the 980 Ti when I realised it was just me missing out.
 
We are talking about GPUs today and in the future, not the past, which remains irrelevant.

Last I checked, I'm also talking about GPUs today. It was you that said AMD are a gen behind with ray tracing, but their first iteration is a bit faster than Turing, so they're starting at a better point than Nvidia did.
 
WTF, really? The bus width dictates the VRAM sizes. The PCB has empty VRAM areas because it is a shared design.

Maybe it was originally designed with the 3080 Ti in mind, which was going to cost £1k, use a cut-down die with 8704 CUDA cores, and have 12GB of VRAM. But then Nvidia caught wind of AMD's performance with cards like the 6800 XT, which would come in much cheaper, so they scrapped the 80 Ti and instead used the same die with only 10GB of VRAM for the 3080 so they could remain price and performance competitive.

The card that was originally going to be the 3080 with a GA104 was rebadged as the 3070. While the Ampere chip design is not flexible, the SKU naming certainly is.

This seems perfectly plausible, since it's the way Nvidia went with the previous two generations when there was no real competition in the market.

Now we have the odd situation where, once the 3080 Ti is released, Nvidia will have three different cards ranging from £650 to £1000 to £1400 but separated by only 10% in performance.
 
Last I checked, I'm also talking about GPUs today. It was you that said AMD are a gen behind with ray tracing, but their first iteration is a bit faster than Turing, so they're starting at a better point than Nvidia did.

They are starting on a smaller node than Nvidia did. Today they are a generation behind with a smaller node and higher clock frequency.
 