
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.

I thought their marketing overall was very good considering it was from his kitchen. Lol.

Let’s see if AMD’s new marketing team do as well.

AMD is not known for good marketing :P. However, you can see a change in what they've released over the last 1-2 years, so maybe that will change? I just hope it won't end up like some BS marketing machine like the green team's. I cannot stand it. I understand the game. Gotta catch the attention of the sheep, but I personally don't care much for it.
 
Neither were 'Poor Volta', 'Overclockers' dream', etc. Let's wait and see and hope for the best.

Haha :D


AMD is not known for good marketing :p. However, you can see a change in what they've released over the last 1-2 years, so maybe that will change? I just hope it won't end up like some BS marketing machine like the green team's. I cannot stand it. I understand the game. Gotta catch the attention of the sheep, but I personally don't care much for it.
Yeah I understand. But at the end of the day it just works :p
 
The worst part was when he unveiled the 3080 and told us that it was beautifully engineered and would look great in your PC. And that it lights up :p And what's up with all the spatulas?
 
Want MSFS benchies.

I'm playing at 7680x2140. I suspect the 3090 is the card to go for as it chews up VRAM. Should probably wait for AMD's offerings though.
 
And I never said anything to the contrary, but so far Ampere doesn't look like the stratospheric leap we suspected, and actually leaves itself (bar the 3090) a very attainable, nay beatable, target for AMD.

It's looking pretty tough though, in the 3080 space particularly.

So AMD definitely need that 80 CU part now, with 16GB VRAM and performance above a 3080 (not necessarily beating a 3090, but close).
They need to prove RT and DX MR (or whatever it's called), and the caching thing, which to be fair is already present in console tech.
They need to get the specs out soon, like days/weeks, not months.
They need to get volume production out in 2020.

The big advantage AMD have at this point is TSMC node efficiency and a big price gap between the 3080 and 3090, but they have to outperform a 3080 to take advantage of that.

In response Nvidia could release a 20GB 3080, 12GB 3090 or a 16GB 3070, and mess about with prices.
 
but right now RDNA 2's best looks to be 80 CUs and limited to GDDR6 & a 384-bit bus, which puts it somewhere around 1.7x a 5700 XT, or about 20% faster than a 2080 Ti. And that's just a best case on paper
Yeah, that's utter nonsense, and if you've paid even a smattering of attention to what AMD themselves have said and what the consoles have shown, you'd know it was nonsense.

If you were talking about RDNA 1 CUs then yeah, you'd have a point, but we're not. There is a big performance-per-watt increase coming, there is an IPC increase coming; RDNA 2 CUs are not RDNA 1 CUs. The 2080 Ti will be matched, and likely beaten, by 40 CUs. A mere 40 CUs. That leaves 56, 64 and 72 CU SKUs available (the 80 probably won't happen).

AMD are very, very unlikely to get in the 3090's face, but everything else is ripe for the taking, and I'd be very surprised if RDNA 2 doesn't give Ampere a good competitive smacking.
 
It's looking pretty tough though, in the 3080 space particularly.

So AMD definitely need that 80 CU part now, with 16GB VRAM and performance above a 3080 (not necessarily beating a 3090, but close).
They need to prove RT and DX MR (or whatever it's called), and the caching thing, which to be fair is already present in console tech.
They need to get the specs out soon, like days/weeks, not months.
They need to get volume production out in 2020.

The big advantage AMD have at this point is TSMC node efficiency and a big price gap between the 3080 and 3090, but they have to outperform a 3080 to take advantage of that.

In response Nvidia could release a 20GB 3080, 12GB 3090 or a 16GB 3070, and mess about with prices.

I disagree. If they can get on or about 3080 performance, with lower pricing and more VRAM, that'll do.
I suspect that the 3xxx cards run hot, from the look of the hype around their coolers, so AMD may well have a heat/power advantage on their smaller node.
 
So AMD definitely need that 80 CU part now, with 16GB VRAM and performance above a 3080 (not necessarily beating a 3090, but close).
No, they really don't, and it's quite concerning how people seemingly just haven't paid attention to what AMD have said and the consoles have shown.

I've said this time and again, humbug has done similar napkin maths; for argument's sake, let's say AMD achieve the 50% performance per watt uplift. Right there, the 2080 Ti is beaten by about 10-15% using a mere 40 CUs at 225W (5700 XT is our reference point). 40 CUs. Four-zero. At 225W. So what happens when you go up to 56, 64 or even 72 CUs? AMD will need 80 CUs if they really want to get up in the 3090's face, but everything below that is ripe for the taking.

Of course, it's all speculation until the products are real, but speculation like "AMD have an uphill battle" or "AMD really need that 80 CU part just to match the 2080 Ti" is utterly, utterly absurd, especially when this is not waffle in a vacuum; we have literally seen the XSX in action to give us a starting point.
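For what it's worth, that napkin maths can be laid out as a quick sketch. Every input here is an assumption from this thread (the claimed 50% perf/W uplift, and the ~1.35x gap between a 2080 Ti and a 5700 XT that the post implies), not a confirmed spec:

```python
# Napkin maths for the "40 CUs beats a 2080 Ti" argument.
# All inputs are assumptions from this thread, not confirmed specs.

ref_cus = 40                  # 5700 XT: 40 CUs at ~225W (the reference point)
perf_per_watt_uplift = 1.50   # AMD's claimed RDNA 2 uplift over RDNA 1

# Same 40 CUs at the same ~225W, but 50% more performance per watt:
rdna2_40cu = 1.0 * perf_per_watt_uplift   # performance relative to 5700 XT

rtx_2080_ti = 1.35            # 2080 Ti ~= 1.35x a 5700 XT (implied by the post)

margin = rdna2_40cu / rtx_2080_ti - 1
print(f"40 CU RDNA 2 vs 2080 Ti: {margin:+.0%}")   # +11%, inside the post's 10-15%
```

Scaling that same 1.5x figure up to 56, 64 or 72 CUs, even imperfectly, is what makes the "everything below the 3090 is ripe" claim at least arithmetically plausible.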
 
Wrong thread? :p

Was posted in the correct one already ;)

The trouble I'm finding with this card is it may be too close to the price of the 3080. I mean, I will likely buy a 3080 FE, sell the game code, and after my cashback it'll work out around £620. So if they wanted much more than £500, it would be a no from me.

That said, it all depends on whether Nvidia's take on HBCC (RTX IO) works properly. I know I got HBCC to work a few times when I had a Vega and it did help.
 
No, they really don't, and it's quite concerning how people seemingly just haven't paid attention to what AMD have said and the consoles have shown.

I've said this time and again, humbug has done similar napkin maths; for argument's sake, let's say AMD achieve the 50% performance per watt uplift. Right there, the 2080 Ti is beaten by about 10-15% using a mere 40 CUs at 225W (5700 XT is our reference point). 40 CUs. Four-zero. At 225W. So what happens when you go up to 56, 64 or even 72 CUs? AMD will need 80 CUs if they really want to get up in the 3090's face, but everything below that is ripe for the taking.

Of course, it's all speculation until the products are real, but speculation like "AMD have an uphill battle" or "AMD really need that 80 CU part just to match the 2080 Ti" is utterly, utterly absurd, especially when this is not waffle in a vacuum; we have literally seen the XSX in action to give us a starting point.

I am all in with napkin maths, as dogs like Big Navi need food
 
If, with 52 CUs, an Xbox Series X matches if not beats a 2080 Super, an 80 CU RDNA2 GPU should compete with a 3080.

Guys, the 3080 was vs a standard 2080, and the performance difference was 60-80% in Nvidia-controlled benchmarks, so best case. The 3080 is about 35% faster than the 2080 Ti.

The 2080 Ti is 15% faster than the 2080 Super, and 80 CUs is 55% more than 52. Take the clock speeds from the PS5 and add them to that, and you get 330MHz more, or +18%.

An 80 CU RDNA2 dGPU at 2.23GHz is probably 40% or more faster than a 2080 Ti, so at least as fast as the 3080.

The 3090 has 25% more shaders; Nvidia have pushed the 3080 pretty close to the top-tier card, to within 25%. I suspect they did that because they know what's coming.
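As a sanity check, the arithmetic in that post can be run through directly. The 1.90GHz baseline is inferred from the post's "330MHz or +18%" figure, and linear CU scaling is an optimistic assumption, which is presumably why the post haircuts the raw result down to "40% or more":

```python
# Checking the CU/clock napkin maths from the post above.
# All inputs are the thread's rumoured/assumed figures, not confirmed specs.

xsx_cus, big_navi_cus = 52, 80       # Xbox Series X vs rumoured top RDNA 2 part
cu_gain = big_navi_cus / xsx_cus - 1
print(f"CU gain over XSX: {cu_gain:+.0%}")         # +54% (the post rounds to 55%)

base_clock, ps5_clock = 1.90, 2.23   # GHz; 1.90 inferred from the post's "+330MHz"
clock_gain = ps5_clock / base_clock - 1
print(f"Clock gain: {clock_gain:+.0%}")            # +17% (the post says +18%)

ti_over_super = 1.15                 # post: 2080 Ti is 15% faster than 2080 Super
# XSX ~= 2080 Super, so with perfectly linear CU scaling:
vs_2080_ti = (1 + cu_gain) * (1 + clock_gain) / ti_over_super - 1
print(f"80 CU part vs 2080 Ti: {vs_2080_ti:+.0%}") # +57% on paper, before any haircut
```

The on-paper +57% comfortably clears the post's "about 35% faster" figure for the 3080, so the conclusion holds even after discounting the linear-scaling assumption fairly heavily.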
 
That leaves 56, 64 and 72 CU SKUs available (the 80 probably won't happen).

I honestly don't think I'm interested in a sub-80 CU part, and I'm not opposed to buying AMD, but that is what it's going to take to get me on side.

What does interest me is this whole concept of the cores being able to dual-task between shading and rasterization or RT. I'd like to see how that plays out vs the dedicated engines Nvidia have developed. So, for example, can Big Navi show significant gains in raw rasterization when not using RT effects vs its Nvidia counterpart? Because, let's face it, most games don't have RT support and we all play legacy titles.
 