NVIDIA ‘Ampere’ 8nm Graphics Cards

Going by the way they are talking, I'm expecting AMD cards to have the most memory on them. I won't be surprised to see the bottom end with 6GB, low to mid with 8GB and upper mid to high end with anything up to 16GB.
 
No, it was not even close to being a 7950; it was more akin to a 7850, which has 1.74 TFLOPs. Honestly, do some fact checking before posting. The Xbox One was akin to a 7770. I'm also going easy with these numbers, because they are not even the GHz Edition cards, which were the high end of that generation before the 200 series arrived.

In gaming performance terms, the 760 was ~2 TFLOPs and was around the same as, or a little slower than, the 7950.

As mentioned, there always seems to be a disparity between AMD and Nvidia in how TFLOPs translate to gaming performance.

But you are correct: a better direct comparison, in terms of USD price and TFLOPs, would be a 7850. The 7950 was $449 on release.

Regardless, the PS4 and Xbox One had the equivalent of a mid-range, ~$250 GPU.

On release, the PS5 and Xbox Series X will also have roughly the equivalent of a mid-range GPU (5700 XT / 2070 Super / 2080 / Vega 64, if we draw rough TFLOP comparisons).

All those cards, performance-wise, will be in the mid-range segment at or around the PS5 and Xbox Series X release.

It simply isn't realistic to state that this generation of consoles has some massive difference in GPU power relative to PC GPUs compared with the last generation.

Sure, there are a few months' difference in release timing and a slight difference in TFLOP percentages, but come the end of the year there will be no marked difference (other than the price of Nvidia's top card).
 
That's why you use GCN numbers to compare. Any other architecture, i.e. an Nvidia architecture, is useless for comparison, as Nvidia isn't in the consoles we are talking about.

AnandTech's review of the GTX 760 showed that, out of the box with no settings touched, their card was hitting 2.64 TFLOPs based on the boost clock it was actually achieving. Nvidia's numbers are based on the guaranteed boost clock, which in the case of a GTX 760 was 1033 MHz, but the actual card was boosting to 1149 MHz. Every performance number Nvidia gives you is mostly useless, as every card performs slightly differently due to the silicon lottery, but I have never seen a non-faulty card performing at Nvidia's stated numbers (always above). It's now the same for AMD: they have no fixed clock and give you the theoretical max, so the actual sustained TFLOPs is less than stated. I think I read that the next-gen consoles have fixed GPU clocks, so that is their real performance.

Anyhow, as GCN went on you could never compare to Nvidia and get anywhere close to actual performance using TFLOPs. With Turing and Navi they are a lot closer again, but it's still not the best way. As you said, look at Vega 64 compared to the 5700 XT to see how much more efficient Navi is, and Navi 2 is supposed to have taken on more improvements, so we can't really compare.
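As a rough sketch of the arithmetic behind those figures (the 1152 CUDA core count for the GTX 760 is my own assumption, not something stated above; the clocks are the ones quoted), theoretical FP32 TFLOPs is just shaders × 2 ops per clock × clock speed:

```python
# Rough sketch: theoretical FP32 throughput = shaders * 2 (FMA) * clock.
# 1152 is the GTX 760's CUDA core count (assumed here); clocks are the ones quoted above.

def tflops(shaders: int, clock_mhz: float) -> float:
    """Theoretical single-precision TFLOPs at a given clock."""
    return shaders * 2 * clock_mhz / 1_000_000

rated = tflops(1152, 1033)     # guaranteed boost clock  -> ~2.38 TFLOPs
observed = tflops(1152, 1149)  # boost AnandTech actually saw -> ~2.65 TFLOPs
print(f"rated: {rated:.2f} TFLOPs, observed: {observed:.2f} TFLOPs")
```

Which is why the rated number tends to undersell what a typical card actually sustains.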

What we can compare, though, is history, as the PS4 was GCN 1 and so were the desktop cards until the 200 series came along.
 
I feel we have a good idea of the performance spectrum now (no certainty, of course), but the price to the consumer remains an enigma. Will AMD follow Nvidia's lead on pricing or be true to their word on disruption? I hate to say it, but my one true fear in all this is AMD's appeasement of shareholders; they keep testing consumers and customers on this point, whereas Nvidia clearly don't give a rat's a**: you're there to make those margins, boy.
 
OK, so we can agree the card in the PS5 is roughly similar to a 5700 XT in performance, yes (not taking into account the natural performance advantage consoles have over PCs due to software, as that would have been the case last gen too)?

If so, what segment will 5700 XT-like performance be in when Ampere and the new Navi stuff hit?

It will be shoved into the mid range ~$300 segment.

This means this generation will end up being very similar to the 2013 consoles, where we had a mid-range 270X released for $199, which was a rebranded 7870 (so very similar to the PS4 GPU).
Want to take a guess at what the 5700 and 5700 XT will be rebranded to for the 6000 series :p?

The only tangible differences are that the generation shifts are offset by a few months and AMD are struggling to keep up at the top. Also, prices have inflated: the mid-range segment has gone up a good $100, but then rumours are the PS5 will be RRP'd at $100 more than the PS4, so it's swings and roundabouts there. Add the weak pound compared with 2013 and I can see why people perceive there to be a change, but it is all very similar really.
 
No, the actual TFLOPs of the 5700 XT is around 9.2, as most cards average out around 1800 MHz out of the box. AMD's numbers are based on the boost clock (1905 MHz), hence why they say peak SP compute of up to 9.7 TFLOPs. From the reviews and user reports, the actual clock hovers around 1800 MHz out of the box, which sits between the game clock (1755 MHz) and the boost clock (1905 MHz). Interesting stuff, I know, but the PS5 is quoted at 10.28 TFLOPs peak, so it's clearly faster.

I can't answer that, as the next gen haven't released yet and both teams' GPUs could be really disappointing. What I can tell you is that, compared to the 290X/GTX 780 Ti and the PS4, the new consoles' performance will most certainly be closer to what gets released, unless you think AMD and Nvidia are bringing 30+ TFLOP cards to market.

Pricing, again, I have no clue about, but when the PS4 came out you were lucky if the 7850 was $150, so progress has been made if this time round we get $300-400 GPU performance. The PS4 released at $399 and we don't have a PS5 price yet, but at least in dollar terms we are going to be getting a lot more GPU-wise.

Even if we look at what Nvidia were offering then and now, what you are saying does not stack up. At the moment Nvidia only have two mainstream cards that are definitely faster than what's in the PS5, the 2080 Super and the 2080 Ti, and AMD have nothing. If you go up to the Series X, Nvidia only have the 2080 Ti as a certainty.

Here is a chart that should clear it up for you. This is from two months before the PS4 and Xbox One release date. Have a look at the 7850 (PS4) and 7770 (Xbox One) to see where they sat in the pecking order:

https://www.techpowerup.com/review/msi-gtx-660-gaming/26.html

Now have a look at the 2070 Super for a PS5-type baseline and a 2080 Super for the Xbox Series X. They are still maybe 4-5 months away from launch, compared to two in the article above, but close enough. Some of the Nvidia 700 series had also launched by then, but not the 200 series from AMD. Only the 780 and Titan were new chips anyhow, but you can disregard the 700 series if you want to make it fairer.

https://www.techpowerup.com/review/nvida-geforce-rtx-2070-super/27.html

I could be being unfair to the PS5 GPU here, because at 10.28 TFLOPs peak I believe it would be faster than the 2070 Super, and here is why. The average stock 5700 XT will have around 9.2 TFLOPs, based on the ~1800 MHz clock they all seem to run at. The average Founders Edition 2070 Super runs at around 1875-1900 MHz with nothing touched, so has around 9.6 TFLOPs. In games, the difference between the 5700 XT and the 2070 Super varies by review site but is around 5-10%. With the difference in TFLOPs being just shy of 5%, the architectures seem to be around the same efficiency in games. So if AMD have made a stride in IPC with Navi 2, a 10.28 TFLOP PS5 GPU would be a bit faster than the Super, so for me it's not looking too shabby.
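To put rough numbers on that comparison (again just a sketch: the 2560 shader count for both the 5700 XT and the 2070 Super is my own assumption, the clocks are the ballpark figures above, and 10.28 TFLOPs is Sony's quoted peak for the PS5):

```python
# Sketch of the comparison above; shader counts (2560 for both cards) and clocks
# are ballpark figures, not measurements.

def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1_000_000

rx_5700_xt = tflops(2560, 1800)  # ~9.2 TFLOPs at the ~1800 MHz most cards settle at
rtx_2070s  = tflops(2560, 1875)  # ~9.6 TFLOPs at the ~1875 MHz a Founders Edition runs
ps5        = 10.28               # Sony's quoted peak figure

print(f"2070 Super over 5700 XT: {100 * (rtx_2070s / rx_5700_xt - 1):.1f}%")  # ~4.2%
print(f"PS5 over 5700 XT:        {100 * (ps5 / rx_5700_xt - 1):.1f}%")        # ~11.5%
```

So on raw TFLOPs alone, if Navi 2 keeps or improves on Navi's per-TFLOP efficiency, the PS5 GPU should land a little ahead of the 2070 Super.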
 
I really want 16GB; I hope it's not too expensive.

What are the odds of Nvidia lowballing on memory except on their flagship card?
 
Well, if the speculation is close, I don't think there will be any free lunches; I'm expecting top SKUs to be around £1,000. I think you'll be spot on with memory: I'm expecting AMD to be slightly ahead on RAM but behind on ray-tracing FPS.
 
Why is it 'lowballing' if they provide the VRAM that's needed for 4K+ resolutions, even on a 3090? Will you be running games above 4K, or is there any specific reason you need 16GB of VRAM? I'm just curious why you are fixated on this figure.
 
You could be right: Nvidia cards may not need as much VRAM, perhaps using more GPU grunt, whereas AMD may be going down a different route that requires more VRAM. By all accounts, the people modding games at 4K seem to use a lot of VRAM.
 
Article here: https://www.overclock3d.net/news/gp...res_leak_-_rtx_speed_boost_nvcache_and_more/1

Another feature that's reportedly coming to Ampere is NVCache, a new technology that's designed to allow Ampere graphics cards to better utilise data in system memory and storage to speed up memory-constrained workloads. In effect, Nvidia has created an alternative to AMD's HBCC (High Bandwidth Cache Controller), which allows AMD to utilise system memory or fast storage to overcome the limitations of frame buffer sizes. In effect, HBCC allows AMD to use system memory and storage as more VRAM, which is something that Nvidia hopes to replicate with Ampere.

Memory-wise, Nvidia appears to be focusing on improvements in memory compression to deliver increased effective memory bandwidth with Ampere. This allows Nvidia to increase its memory performance without increasing the VRAM capacities, and build costs, of its next-generation graphics cards significantly. A new technology called Tensor Accelerated VRAM Compression is also said to be in the works.

Nvidia, with all of their engineering prowess, are not going to make it so their high-end cards do not have enough VRAM to function at high resolutions during their lifecycle, especially not during this critical time when competition is heating up enough to threaten their dominance. That would make no logical sense.
 
I don't think anyone disputes that Nvidia will put plenty of memory on their high-end cards. It's the rest of them, artificially segmented and designed to upsell you to the high end to get more VRAM, that's the issue. Maybe they'll be more generous this time? ;)
 
Less powerful cards = lower resolutions = you need less VRAM. 6-8GB of VRAM, combined with all of their memory efficiency tech, is likely (I'll be interested to confirm it either way) suitable for 1440p. I would be interested to see Gamers Nexus run a detailed article on this, and I may shoot them an email to consider it, as I could find no good comparisons of VRAM usage from this year on the interwebs when I searched. :)
 