AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

The latest rumours I read this morning on Twitter have Big Navi 21 with twice the cores of the 5700 XT, running at a 2500 MHz boost clock, plus IPC improvements, so the latest estimate is that it will be around 2.6 times faster than the 5700 XT.

Since a 2080 Ti is at best 30% faster than a 5700 XT, wouldn't this put the performance at about double a 2080 Ti?

And since we now know the 3080 is in reality only 20% faster than a 2080 Ti, this should put Big Navi faster than a 3090!

But I think it won't be as good at ray tracing or DLSS (or AMD's equivalent), so there will be winners and losers across games.

It will all come down to price.
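For what it's worth, the rumoured 2.6x is roughly what you get just by multiplying the core and clock ratios; here's that chain as a quick Python sanity check (the 5700 XT figures are public specs, the linear-scaling assumption is mine, not the rumour's):

```python
# Rough sanity check of the rumoured chain of ratios (linear scaling
# with shaders x clock is assumed and NOT guaranteed).
shaders_5700xt, boost_5700xt = 2560, 1905        # MHz, public specs
shaders_navi21, boost_navi21 = 2 * 2560, 2500    # rumoured figures

navi21 = (shaders_navi21 / shaders_5700xt) * (boost_navi21 / boost_5700xt)
print(f"Navi 21 vs 5700 XT: {navi21:.2f}x")      # ~2.62x, matching the rumour

# Chaining the thread's other claims (2080 Ti = 1.3x a 5700 XT,
# 3080 = 1.2x a 2080 Ti; both are the poster's numbers, not mine):
print(f"vs 2080 Ti: {navi21 / 1.3:.2f}x")        # ~2.0x
print(f"vs 3080:    {navi21 / 1.3 / 1.2:.2f}x")  # ~1.7x
```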

I'm not agreeing with your maths, as we are all speculating and nothing is concrete yet, but yes, we expect it to be way faster than the 2080 Ti and to trade blows with the 3080, slower in some games and faster in others. The aggressively clocked and binned variants should be able to tail the 3090, so it offers customers choice (not expecting any Nvidia people to be interested in this).

They will also release some lower-spec cards to compete with the 3070 and 3060, the usual run-of-the-mill generational stack. What we have not heard is any sign of them falling short against the 3080; if they were, we would know by now for sure and everyone would be sniggering at AMD again.
 
I am still confused how 5120 shaders are supposed to compete against 8704. Could someone offer an explanation? Clock speeds alone cannot explain it. I was looking at a few articles suggesting AMD's new cache hierarchy reduces cache misses; has anyone been able to build a theory around this?
 
If Big Navi is faster, no one will bat an eye at the 3070 the day after; everyone will be putting in 6900 XT orders. If it's not faster, 3070s will fly off the shelves the next day.

The 3070 will either match or beat Big Navi, or it's going to be DOA; it's one of the two, and there is no other option given Nvidia's choice of release date.

I disagree; check out the Steam survey stats. People will still buy 3060s and 3070s because Nvidia, reasons.

Nvidia will still sell shedloads of these. Only people who can be bothered to check real stats and don't mind brand-hopping will be aware that the 3070 will suck: no better than a 2080 Ti, limited stock, and once again unable to ray trace properly.
 
The 3070 is dead on arrival; Nvidia knows this already.
You and others will buy Big Navi cards but won't say so here, as Nvidia fans will shame you if you do.

The Big Navis are coming for the gaming leadership.
 
And for balance, 3dfx is making a comeback and will beat them all!!!!!!

See, I can make stuff up as well. :D
 
I am still confused how 5120 shaders are supposed to compete against 8704. Could someone offer an explanation? Clock speeds alone cannot explain it. I was looking at a few articles suggesting AMD's new cache hierarchy reduces cache misses; has anyone been able to build a theory around this?

In short, Nvidia don't really have that kind of shader power in games. Each Ampere CUDA core pair has two FP32 units, but one of them is shared with INT32.

It's a bit like a production line with two workers. One worker only puts graphics cards in boxes, but the other also has to put stickers on as well as boxing cards, so in effect it has less throughput. When INT32 isn't called for, they have twice the FP32 power, as has been shown in other apps, but in games it's different.
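To put the analogy in numbers, here's a toy model of that production line (the sticker rate below is an assumed workload mix, purely illustrative):

```python
# The two-worker production line in numbers (illustrative only).
def cards_boxed(steps: int, stickers_per_card: float) -> float:
    """Cards boxed in `steps` time steps by two workers, where worker A
    only boxes and worker B boxes or stickers (one action per step)."""
    # Every card costs 1 boxing action plus its stickers; the pair has
    # 2 actions per step between them (valid while stickers_per_card <= 1).
    return 2 * steps / (1 + stickers_per_card)

print(cards_boxed(100, 0.0))    # 200.0: no stickers, both workers box
print(cards_boxed(100, 0.36))   # ~147: sticker work costs ~25% of output
```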
 
Because NVIDIA have changed how they count cores going from Turing to Ampere: half of the cores can only do FP32, while the other half can do either FP32 or integer operations, but not both at the same time. Compare the core counts: the 2080 Ti has 4352, whereas the 3080 has 8704, in other words exactly double. Is the 3080 twice as fast as the 2080 Ti? Nope. On average it's about 30% faster.
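Plugging the real core counts into the same shared-datapath picture (a sketch; the 36:100 INT-to-FP mix is an assumption about typical game workloads, not a measured figure):

```python
# Same arithmetic as above with the actual core counts.
INT_PER_FP = 0.36   # assumed INT ops per FP op in a game workload

# Turing 2080 Ti: 4352 FP32 cores plus separate INT32 units, so INT work
# doesn't cost any FP32 throughput.
turing_fp32 = 4352

# Ampere 3080: 4352 FP32-only + 4352 shared FP32/INT32; the shared half
# must absorb all the INT work.
ampere_fp32 = 8704 / (1 + INT_PER_FP)
print(f"effective FP32 'cores': {ampere_fp32:.0f}")       # ~6400
print(f"vs 2080 Ti: {ampere_fp32 / turing_fp32:.2f}x")    # ~1.47x on paper

# ~1.47x on paper vs ~1.3x measured: clocks, bandwidth and everything
# else that doesn't scale with shader count eat the rest.
```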
 
I mean, yeah, he might be aiming a little high, but Big Navi could possibly beat a 3080. Why make an obviously ridiculous statement just to make his sound more stupid?

Have we seen any real hint of Big Navi's potential performance numbers yet?

Do we have a hint at the price?

And that's all she wrote. I hope AMD pull it off, but until we see launched cards running real workloads, and the price, we haven't got a clue, no matter what leakers (BS artists) claim.

Remember the 470/480/490 and then the 500-series rubbish that was flying about. I guess I'm just getting sick of the constant posting of utter rubbish by a few on here of late.
 
Again, we've seen this "meh" before in similar fashion with Vega and the Radeon VII: they were compute cards, or excelled at some other function, and gaming on the 30 series looks like this too. From memory, the Nvidia guys laughed back then and said their proper gaming card was much better. Strange that people can't see the pattern.
 
Have we seen any real hint of Big Navi's potential performance numbers yet?

Do we have a hint at the price?

And that's all she wrote. I hope AMD pull it off, but until we see launched cards running real workloads, and the price, we haven't got a clue, no matter what leakers (BS artists) claim.

Remember the 470/480/490 and then the 500-series rubbish that was flying about. I guess I'm just getting sick of the constant posting of utter rubbish by a few on here of late.

We have a lot of information to *estimate* expected performance, yes. It may end up being completely wrong, but AMD themselves have told us 50% better perf/W over RDNA 1, which means a card with 5700 XT performance would use ~140W with absolutely no other changes. What do you think a 250-300W card might do with even that one single improvement, before any IPC, architecture or clock changes?

Then factor in what we know about the consoles. It is really not all that far-fetched to expect big performance gains from RDNA 2, so stop making it sound absolutely ridiculous.
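Spelling that arithmetic out (a sketch; the 5700 XT board-power figure and the perfectly linear scaling are assumptions):

```python
# The perf/W claim spelled out (assumptions: ~210W typical 5700 XT board
# power and linear scaling; neither is guaranteed).
PERF_PER_WATT_GAIN = 1.5   # AMD's stated +50% for RDNA 2 over RDNA 1
POWER_5700XT = 210         # watts, assumed typical gaming draw

# 5700 XT performance at RDNA 2 efficiency:
print(f"{POWER_5700XT / PERF_PER_WATT_GAIN:.0f} W")   # ~140 W

# Relative performance of a bigger RDNA 2 card at a given board power:
for watts in (250, 300):
    print(f"{watts} W -> ~{PERF_PER_WATT_GAIN * watts / POWER_5700XT:.1f}x a 5700 XT")
# 250 W -> ~1.8x, 300 W -> ~2.1x
```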
 
And if that's the case, I'm expecting them to announce the 3070 Ti at said launch to battle AMD :)

We've already seen the placeholders and AIB patterns pointing to this, which is why some people got the hump when I said in the Ampere thread that faster cards with more memory would appear pretty quickly; they took it the wrong way.
 
We have a lot of information to *estimate* expected performance, yes. It may end up being completely wrong, but AMD themselves have told us 50% better perf/W over RDNA 1, which means a card with 5700 XT performance would use ~140W with absolutely no other changes. What do you think a 250-300W card might do with even that one single improvement, before any IPC, architecture or clock changes?

Then factor in what we know about the consoles. It is really not all that far-fetched to expect big performance gains from RDNA 2, so stop making it sound absolutely ridiculous.

No it's not, but the way some on here have lately veered off from educated guesswork into announcing that AMD is going to blast the doors off the 3080 is getting a bit silly. I posted a week or two ago saying they may be able to build a beast at 300W, BUT they have to make one that big; what happens if they just make it at 150W or even 200W? AMD have been championing perf per watt for a while now. I could be wrong and they could come in with a 300W monster, but until they announce their plans we don't have a clue, and that's before even figuring out how much they'll charge for the thing.

As for using console specs and relative performance, it's hard to do apples to apples because consoles are closed systems with much lower overheads. Yes, we can get a rough idea of what may be possible, but until AMD show a card and specs we don't know.

Hopefully they will go all out and make a true high-end card, but yet again I get this sneaking suspicion it's going to be another mid-level card that's fairly good on perf per watt.
 
So why does doubling the 5700 XT's cores somehow get a "double performance" pass?
Because the Big Navi cores are not the same as old Navi cores: IPC improvements, performance improvements, clock improvements. It will all add up. It won't be double, of course, but it'll be a big leap.
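As a sketch of how those factors might stack without reaching a clean 2x (every number below is a placeholder, not a leak or a spec):

```python
# How "it will all add up" might look (made-up placeholder numbers).
core_ratio   = 2.0    # rumoured shader count vs the 5700 XT
core_scaling = 0.85   # assumed: going wider is never perfectly efficient
clock_ratio  = 1.15   # assumed clock uplift
ipc_ratio    = 1.10   # assumed per-clock gain

print(f"~{core_ratio * core_scaling * clock_ratio * ipc_ratio:.2f}x")  # ~2.15x
```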
 
Because NVIDIA have changed how they count cores going from Turing to Ampere: half of the cores can only do FP32, while the other half can do either FP32 or integer operations, but not both at the same time. Compare the core counts: the 2080 Ti has 4352, whereas the 3080 has 8704, in other words exactly double. Is the 3080 twice as fast as the 2080 Ti? Nope. On average it's about 30% faster.

I believe the 2080 Ti had a separate, dedicated set of INT ALUs, so those core counts are not equivalent. Also, instructions are fed into a pipeline awaiting execution; it can happen that some issued instructions have to be discarded, or that the data they need can only be fetched on the next cycle, and that is surely expected to happen at full utilisation.

In short, Nvidia don't really have that kind of shader power in games. Each Ampere CUDA core pair has two FP32 units, but one of them is shared with INT32.

It's a bit like a production line with two workers. One worker only puts graphics cards in boxes, but the other also has to put stickers on as well as boxing cards, so in effect it has less throughput. When INT32 isn't called for, they have twice the FP32 power, as has been shown in other apps, but in games it's different.
I have thought about that, but I couldn't find any evidence that the RX 5700 has a separate INT path; it might be using those same shaders for INT operations as well.
 
No it's not, but the way some on here have lately veered off from educated guesswork into announcing that AMD is going to blast the doors off the 3080 is getting a bit silly. I posted a week or two ago saying they may be able to build a beast at 300W, BUT they have to make one that big; what happens if they just make it at 150W or even 200W? AMD have been championing perf per watt for a while now. I could be wrong and they could come in with a 300W monster, but until they announce their plans we don't have a clue, and that's before even figuring out how much they'll charge for the thing.

As for using console specs and relative performance, it's hard to do apples to apples because consoles are closed systems with much lower overheads. Yes, we can get a rough idea of what may be possible, but until AMD show a card and specs we don't know.

Hopefully they will go all out and make a true high-end card, but yet again I get this sneaking suspicion it's going to be another mid-level card that's fairly good on perf per watt.

AMD have said several times that they are gunning for the high end, so if they don't announce a big part it would already be a disaster. Your arguments suggest you haven't been reading what AMD themselves have actually stated.

Yes, it is all based on estimates, but we know higher clocks are possible, we know RT is coming, and we know there are more CUs than the 5700 XT had. With 50% better perf/W, performance will absolutely go up, and that is only the stuff we already know.

I am not saying I expect it to beat the 3080 or 3090, because honestly I don't know, but I would love to see it competing at that level.
 
Have we seen any real hint of Big Navi's potential performance numbers yet?
Um... XSX demos, PS5 demos, and specifically the XSX Gears 5 demo showing it outperforming the PC version with identical settings plus the new RDNA 2 features. Add AMD's claim of +50% performance per watt over RDNA 1, and we already know what RDNA 1 can do from the 5700 XT. Don't forget Nvidia's surprising (but still pure marketing) cut to Ampere's MSRP.

So yeah, that's quite a few "real hints" of Big Navi's performance.
 
I posted a week or two ago saying they may be able to build a beast at 300W, BUT they have to make one that big; what happens if they just make it at 150W or even 200W?
That's clearly not going to happen, though. The 5700 XT is a 225W TDP card, and AMD themselves have been hyping up Big Navi as a tier above that. They're hardly going to come out with a 150W mid-range card; they've said again and again that they're targeting the high end. There's just no rational basis for your thinking.

As for its absolute performance... we'll see, I suppose.
 