NVIDIA ‘Ampere’ 8nm Graphics Cards

What he's describing is a form of tacit collusion. It can't be proved.

It doesn't need proof of collusion; quite a few years ago the big electrical retailers got prosecuted for this and lost. Similar pricing, deals and offers, all at relatively the same time. It was price fixing with a wink and a nod rather than actual co-operation, and it's still illegal.
 
I am reminded of Jensen Huang's quote in the NVIDIA 3080 reveal video: 'Now is the time to upgrade that 1080ti,' he said...

What he should have said: 'Start saving up for the steadily increasing cost of our more powerful 3080, because by the time it is in stock, it will be halfway to the 4000 series and close to £1,000.'

Disappointed, but I was expecting this; no real surprise. Maybe next year I'll upgrade my 1080ti to a 3080... IF it still costs close to RRP, and IF anyone has them in stock!

At least my depleted bank account is happy, it's been on a serious diet since March!!!
 
3070s are creeping up as well - gonna be near if not past 3080 "launch" price at this rate.
 
I've seen a few people suggest that, with the architecture changes and extra CUDA cores, current games are unable to take full advantage of Ampere, but future games will, and so Ampere will age like fine wine.

Is there any truth to this, or is it just polony?

 
Ask yourself... are current games the best example of RT/DLSS implementation and performance, or will they likely get more sophisticated and efficient over the coming year or two, allowing Ampere to better realise its benefits?

Next year, IMO, we will almost certainly have Super editions out at some point with higher specs too.
 
Are developers really going to invest that much time into Nvidia-specific optimisations when both next-gen consoles are AMD-based?

Perhaps in a few isolated instances, but I suspect AMD cards will age better, assuming they are at least competitive.
 
People say this a lot, but console optimisations often don't carry over to the PC version of the game, and/or result in poor performance for both AMD and Nvidia on PC. And where they do optimise the PC version, they are going to do so for the dominant platform.
 
I'm really not sure what you are trying to say... that the PC gaming industry turns no worthwhile profit for devs to put any effort into? Or that Nvidia won't keep up their considerable pressure and presence to bring their specific features (like DLSS) to market? Saying 'AMD cards will age better' is purely speculative, subjective twaddle...
 
I've always considered the 20 series to be a beta, so I agree with the 'fine wine' take. Though don't go thinking Ampere is the last GPU you will need. I've also been saying that by the time you need more than 10GB, you will also need a better GPU, well, for games anyway.
 
I don't see how... the AI stuff, in plain mathematical lingo, is the ability to do matrix operations with a single instruction (convolutions are another area, but I haven't seen much development there)... the guy clearly has no clue about the complexity of the graphics rendering pipeline... RT, on the other hand, is a donkeywork special-effects algorithm. If I'm doing armchair speculation, I would say that in future AI could be used to guess rendering outcomes well in advance to avoid traditional pipeline moves, thus saving effort and unlocking performance, but we are nowhere close to that... not from a hardware perspective, and not from a developer standpoint either... it would require additional effort on their behalf to train these predictive models offline.
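(For what it's worth, the "matrix operations" being referred to are small fused multiply-accumulate blocks of the form D = A×B + C. A rough NumPy sketch of that arithmetic, purely illustrative, with made-up 4x4 sizes and not the hardware instruction itself:)

```python
# Illustrative sketch of the arithmetic that dedicated matrix hardware
# accelerates: a fused multiply-accumulate over small tiles, D = A @ B + C.
# Matrix sizes here are arbitrary and just for illustration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float32)
B = rng.standard_normal((4, 4)).astype(np.float32)
C = rng.standard_normal((4, 4)).astype(np.float32)

# On a scalar pipeline this is many separate multiplies and adds;
# the point of matrix units is to issue the whole tile-sized
# D = A*B + C as effectively one operation.
D = A @ B + C
print(D)
```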
 
A bit beyond my hands-on working knowledge, but in that context it's probably better to use the AI to analyse the scene structure (BVH, etc.) and make decisions about where, and in what volume, to do traces, so as to make as efficient use of the ray-count budget as possible.
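Purely as an armchair sketch of that idea (the regions, weights and proportional split below are all invented for illustration, not anything Nvidia actually does), budgeting traces by estimated importance might look something like this:

```python
# Hypothetical illustration: split a fixed per-frame ray budget across
# screen regions in proportion to an estimated "importance" score,
# so shiny or high-variance areas get more rays and flat diffuse areas fewer.
# The regions and weights are made up for the example.

def allocate_rays(importance, ray_budget):
    total = sum(importance.values())
    return {region: int(ray_budget * weight / total)
            for region, weight in importance.items()}

importance = {"mirror": 5.0, "wet_road": 3.0, "matte_wall": 1.0, "sky": 0.5}
print(allocate_rays(importance, ray_budget=1_000_000))
# Most of the budget goes to the mirror and the wet road, very little to the sky.
```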
 
I haven't seen Nvidia's white papers, but there's good reason to believe there's already some optimisation heuristic behind BVH generation. AI, on the other hand, has the potential to take in the positions of game objects and guess the final RT output, which could be applied in a single pass, at least theoretically, thus obviating the need for dedicated RT altogether.

I am just a maths guy, not much of a specialist in graphics, but I have looked at a few course materials offline and come to appreciate the ingenuity and complexity of the traditional raster approach.
 
Just watched this and it is spot on:


It's not. FPS is not a linear measurement of performance. You can't simply compare frame rate differences at different base frame rates; it becomes harder to create the same fps delta at higher frame rates.

Besides, they're also looking at raster vs RT perf on the same card and giving a % jump, which is dumb, because the 3080 is not only faster at RT but also at rasterization.

You have to be really careful when comparing the perf impact of features. Game studios and devs often prefer to look at a linear scale, e.g. maybe RT in scenario A incurs a 4ms penalty to render each frame where scenario B is 2ms, so the card in scenario B is twice as fast running that feature. And that ms impact is true no matter what else the card or the frame rate is doing.
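To put illustrative numbers on that (hypothetical frame times, not measurements from any particular card or game), here's a quick Python sketch of why the same fixed millisecond cost looks like a very different fps and percentage drop depending on the base frame rate:

```python
# Illustrative only: hypothetical frame times, not measured data.
# A fixed per-frame RT cost in milliseconds looks like a much bigger
# fps loss and percentage drop when the base frame rate is higher.

def fps(frame_time_ms):
    """Convert a per-frame render time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

def show_impact(base_frame_time_ms, rt_cost_ms):
    base_fps = fps(base_frame_time_ms)
    rt_fps = fps(base_frame_time_ms + rt_cost_ms)
    drop_pct = 100.0 * (base_fps - rt_fps) / base_fps
    print(f"{base_fps:5.1f} fps -> {rt_fps:5.1f} fps with RT "
          f"({drop_pct:4.1f}% drop from a fixed +{rt_cost_ms} ms per frame)")

# The same hypothetical 4 ms RT cost at two different base frame rates:
show_impact(16.7, 4.0)   # ~60 fps base: drops to ~48 fps, roughly a 19% hit
show_impact(8.3, 4.0)    # ~120 fps base: drops to ~81 fps, roughly a 32% hit
```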
 
The conclusion is spot on; it's disappointing to me ;)

Enabling RT should not have such a huge impact on fps in games currently released. I was expecting it to be good enough that the impact would be little to none. Hence disappointing.
 
"Should" by what standards? Your expectations? Maybe your expectations are just wrong? Real time ray tracing is extraordinarily more computationally expensive than rasterization, it not just an effect you turn on, it's whole new type of rendering to begin with, the fact we can do any at all in real time even with today's hardware is a crazy feat of engineering.
 
"Should" by what standards? Your expectations? Maybe your expectations are just wrong? Real time ray tracing is extraordinarily more computationally expensive than rasterization, it not just an effect you turn on, it's whole new type of rendering to begin with, the fact we can do any at all in real time even with today's hardware is a crazy feat of engineering.
Lol. Did you read my posts at all? Go read them all and come back :)
 
I'm 5 minutes and 30 seconds in, and it seems like he thinks DLSS is the be-all and end-all of the graphics pipeline.
His entire argument at this point seems to be built around DLSS.

Does he bring up any other points later on in the video?
 