NVIDIA ‘Ampere’ 8nm Graphics Cards

Don't agree, as there is a clear distinction between the original dud release and the Super range. It's as good as a new gen on its own.
No one with a 2070 was eyeing an upgrade though, same with the 2080. They look like a great upgrade to 10-series owners, but all of 7, 9, 10 are on the same lines, so 20 should be too, otherwise you're covering for the dud release. The up-and-down crazy line will show it, it'll highlight the ****ery
 

I think showing it this way highlights that Turing 1 was a dud release - it's not covering it up at all. Then it shows that they fixed it with Turing 2, which I think is a fair assessment. Yes, benefit of hindsight and all that.

Here is my prediction of where the Ampere series will land. No science to it, just lined it up visually by eye.

[Image: IqG4dXn.png - predicted Ampere placement chart]
 
I'm talking more about the optics. Of course Nvidia can follow suit and drop prices as well.

That would ruin their new lineup in price to performance. If they dropped the 2070S to £329 to compete with a £279 5700 XT, then the new 3060 at £499 with a 10% bump over the 2070S looks like an idiot buy, right? I mean, who'd get the 3060? £170 less for a loss of 10% is a no-brainer. Optics or not, they can't just drop the prices or they cannibalise their own sales of the new parts, and once those parts run out AMD will mop up the sales, or Nvidia need to then drop the 3060 to a sensible price.
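To put rough numbers on it, here's a quick perf-per-pound sketch - the prices are the hypothetical ones above and the performance figures are a made-up index, not benchmark results:

```python
# Rough perf-per-pound comparison using the hypothetical prices from the post above.
# Performance is an arbitrary index (2070S = 100), not real benchmark data.
cards = {
    "5700 XT":           {"price": 279, "perf": 95},   # assumed to sit just under the 2070S
    "2070S (price cut)": {"price": 329, "perf": 100},  # hypothetical cut to £329
    "3060 (rumoured)":   {"price": 499, "perf": 110},  # rumoured price, ~10% over the 2070S
}

for name, c in cards.items():
    print(f"{name:18s} £{c['price']:>3}  perf {c['perf']:>3}  perf/£ {c['perf'] / c['price']:.3f}")
```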
 
The 2060S and 2070S look like what Turing *should have* been at launch. The 2080S still would have been a pretty weak upgrade along that row. The 2080 Ti should have been brought down to around the 2080S price point, and then Turing would have been decent overall. (Albeit late at that point)
 
If an Intel motherboard running a Titan RTX at x8 is not saturated, then there is no chance a 3XXX card will saturate x16.

The other week I had to upgrade my 24/7 PC. I did look at AMD but in the end went for an Intel setup, as the AMD CPUs don't overclock that high and would really throttle a couple of Titans. :eek:

"Horizon: Zero Dawn on PC shows significant performance difference between 8x and 16x PCIe 3.0"

https://www.reddit.com/r/hardware/comments/i46qtf/horizon_zero_dawn_on_pc_shows_significant/
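For reference, the theoretical per-direction bandwidth behind the x8 vs x16 argument - a quick sketch using the usual published per-lane rates (real-world throughput is a bit lower):

```python
# Theoretical per-direction PCIe bandwidth by generation and link width (GB/s).
# Per-lane figures already account for encoding overhead (128b/130b on gen 3/4).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

for gen in (3, 4):
    for lanes in (8, 16):
        print(f"PCIe {gen}.0 x{lanes}: {PER_LANE_GBPS[gen] * lanes:.1f} GB/s")

# Note: PCIe 3.0 x16 (~15.8 GB/s) and PCIe 4.0 x8 (~15.8 GB/s) work out to the same bandwidth.
```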
 
Just a little note, but not that important: perhaps Firestrike (Ultra) would've been slightly better to use for generation-to-generation comparison. Timespy uses DX12, which works better on the newest cards, whereas the older cards predate DX12 going mainstream, so they don't perform as well in Timespy. But most games are DX12 now, so Timespy is a good indicator of real-world perf, I guess.
 
Welcome to your new full time job.:p
 
If you can link me to a ranking, I'll see what it looks like.

I think, because we're doing a comparative look across generations, it doesn't really matter what benchmark we use - we'll see the same overall shape to the chart. The purpose isn't to be tied down to 1% accuracy on a performance increase, just to visually see the shift across time.
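Something like this is all the chart is doing - the scores here are made-up placeholders for two hypothetical benchmarks, the point being that the generation-to-generation uplift "shape" comes out much the same whichever benchmark the raw scores come from:

```python
# Illustrative only: made-up scores per generation for two hypothetical benchmarks.
# Normalising each generation against the previous one gives the uplift "shape" the chart shows.
scores = {
    "benchmark_a": {"10-series": 100, "20-series": 115, "20 Super": 135},
    "benchmark_b": {"10-series": 5000, "20-series": 5700, "20 Super": 6800},
}

for bench, gens in scores.items():
    names = list(gens)
    uplifts = [gens[b] / gens[a] - 1 for a, b in zip(names, names[1:])]
    print(bench, [f"{u:+.0%}" for u in uplifts])
```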
 
Rumour mill time - this is coming from motherboard manufacturers.

Everyone who owns a PCIe 4 AMD motherboard is going to be happy; Intel owners who do not are going to be sad face.

So apparently some of the new RTX3000 cards are able to saturate PCIe 3.0 x16. Now, because Intel currently has no desktop CPU that supports PCIe 4, this means RTX3000 GPUs will run faster on AMD X570 and B550 systems.

Rocket Lake S from Intel was supposed to launch at the end of this year so Intel owners could get maximum benefit from RTX3000, but now they cannot - they will have to wait, while AMD systems will have higher GPU performance.

I wouldn't have thought so. Even something like the 2080Ti doesn't saturate PCIe 3.0 x8, never mind x16. So the 3090, or whatever it is, will need more than twice the bandwidth of the 2080Ti, which I think is unlikely.

And if, say, a 3090 saturated PCIe 3.0 x16, the only multi-GPU option is to go Threadripper, as PCIe 4.0 x8 on a normal X570 will not be good enough for maximum performance - X570 doesn't have enough lanes to run two cards at full x16 bandwidth.

I'd be interested in seeing if there was a difference, but I think by the time it may be beneficial we will have things like DDR5 etc., so most may be wanting to upgrade anyway.

Obviously having PCIe 4.0 now is going to be more futureproof. But I can't see it being all that beneficial just yet. More marketing at the minute.
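On the multi-GPU point, for what it's worth: on a typical AM4 board the sixteen CPU lanes split x8/x8 with two cards fitted, so here's a rough sketch of what each card actually gets (same theoretical figures as before):

```python
# Per-direction bandwidth a GPU sees on a full x16 link vs when the 16 CPU lanes are
# split x8/x8 across two cards (typical mainstream-board bifurcation). Theoretical figures.
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

for gen in (3, 4):
    print(f"PCIe {gen}.0: one card at x16 = {PER_LANE_GBPS[gen] * 16:.1f} GB/s, "
          f"each of two cards at x8 = {PER_LANE_GBPS[gen] * 8:.1f} GB/s")
```

So each card in a dual-GPU X570 setup at PCIe 4.0 x8 still gets the same bandwidth a single card would get on PCIe 3.0 x16.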
 
More marketing at the minute.

Unless they produce non-consumer cards electrically wired for x8 PCIe 4.0 lanes, that is a huge benefit for lower-end non-workstation systems that might want more lanes for 2x GPUs or more NVMe storage.

As PCIe 4.0 x8 on a normal X570 will not be good enough for maximum performance.
Whut?
 