Blackwell GPUs

IIRC, weren't Ada and Ampere pretty much sealed tight, with nothing really being revealed until the day of the reveal? A few leaks trickled in maybe a couple of hours before, but that was about it.
 
Interesting interview with Nvidia's CEO:

One big takeaway - as he said, general computing speed-ups each generation are pretty much over with current tech. The only way to speed things up is AI. This very much includes GPUs, gaming ones too. That might be a hint about the performance of the coming 5000 series - AI speed-ups that games don't really use yet (plus previews show these will be used for conversations and the like, not to speed graphics up), and nothing much in raster or actual graphics-rendering performance. DLSS barely uses the power of the current tensor cores as it is. To me it sounds like loads of marketing BS to sell things gamers can't really use yet, whilst trying to explain why we shouldn't expect big performance upgrades anymore. It's rather clear to me they're pushing these chips enterprise first and gaming second.
 
It's been quite obvious from how TDPs have been moving that rasterization isn't looking like a very efficient way to scale. I think this path is pretty promising, especially when you consider the headroom in graphics pipeline management: the GPU often shades way too many pixels that never end up displayed on screen, a huge waste of resources, and AI models can address this issue in various ways.
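To put rough numbers on the wasted-pixels point, here's a toy sketch (made-up layered geometry, nothing vendor-specific): splat a few overlapping rectangles into a small framebuffer, count fragment writes per pixel, and everything beyond 1.0x per visible pixel is work that got shaded and then thrown away.

```cpp
#include <cstdio>
#include <vector>

// Toy overdraw counter: "shade" axis-aligned rects back to front into a
// tiny framebuffer and count how many fragments land on each pixel.
// Real GPUs rasterize triangles, but the waste mechanism is the same.
struct Rect { int x0, y0, x1, y1; };

int main() {
    const int W = 64, H = 64;
    std::vector<int> hits(W * H, 0);          // fragments shaded per pixel

    // Three overlapping layers, as in a typical scene with a background,
    // midground, and foreground object (hypothetical example geometry).
    const Rect layers[] = { {0, 0, 64, 64}, {8, 8, 48, 48}, {16, 16, 40, 40} };

    for (const Rect& r : layers)
        for (int y = r.y0; y < r.y1; ++y)
            for (int x = r.x0; x < r.x1; ++x)
                ++hits[y * W + x];            // one fragment shaded here

    long fragments = 0, visible = 0;
    for (int h : hits) { fragments += h; visible += (h > 0); }

    // Every fragment beyond 1.0x per visible pixel was shaded, then
    // overwritten by a nearer surface -- pure wasted work.
    std::printf("overdraw factor: %.2fx\n", (double)fragments / (double)visible);
    return 0;
}
```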
 
Raster is still the base for most games. They didn't say anything about RT being improved, as the focus seems to be on AI now. Current AI is still very primitive and mostly used in LLMs, which have nothing to do with graphics in games. Unless they come up with something else that's actually good for graphics in games, I wouldn't expect any miracles - maybe FG with more frames, but that's not really improving performance.
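To be clear about why generated frames aren't "performance": an interpolated frame depends on two already-rendered frames, so it can only appear after the second one is finished. A minimal sketch of the degenerate case (plain blending - real FG uses motion vectors and an AI model, but the data dependency is the same):

```cpp
#include <cstdio>
#include <vector>

// Naive frame interpolation. DLSS FG uses optical flow plus an AI model;
// this is only the degenerate blend case, to show the data dependency.
using Frame = std::vector<float>;  // grayscale pixels, hypothetical format

Frame interpolate(const Frame& a, const Frame& b, float t) {
    Frame out(a.size());                   // assumes equal-sized frames
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = (1.0f - t) * a[i] + t * b[i];
    return out;
}

int main() {
    Frame f0(4, 0.0f), f1(4, 1.0f);
    // The midpoint frame needs BOTH real frames to exist first, so it can
    // only be shown after f1 is done: more frames on screen, same (or
    // slightly worse) input-to-photon latency.
    Frame mid = interpolate(f0, f1, 0.5f);
    std::printf("mid pixel: %.2f\n", mid[0]);
    return 0;
}
```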
 

Looks like the RTX 5080 and RTX 5090 will launch next month, in October 2024.
I doubt that. Just because manufacture of the 4090 will cease in October does not mean that the 5090 will launch then. Indeed, it could be a sign that manufacturing on the 5090 is just getting underway. If we were that close to an actual launch, I think we'd have seen more leaks by now from the supply and distribution chain.
 
Raster is still the base for most games. They didn't say anything about RT being improved, as the focus seems to be on AI now. Current AI is still very primitive and mostly used in LLMs, which have nothing to do with graphics in games. Unless they come up with something else that's actually good for graphics in games, I wouldn't expect any miracles - maybe FG with more frames, but that's not really improving performance.

Stuff like Ray Reconstruction, Random-Access Neural Compression of Material Textures, maybe even high-poly assets generated from low-poly geometry, complex physics effects, etc.
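On the texture one, the "random access" part is the key property: block codecs already let you decode a single texel from a tiny fixed-size record, and the neural version aims to keep that while swapping the fixed decode formula for a small learned decoder. A much-simplified BC1-style sketch (not the real bit layout):

```cpp
#include <cstdio>
#include <cstdint>

// Why "random access" matters for texture compression: with block codecs
// (a BC1-like scheme here, much simplified), any single texel can be
// decoded from a tiny fixed-size record without touching the rest of the
// texture. Neural texture compression keeps that property but replaces
// the fixed decode formula with a small learned decoder.
struct Block {                 // one 4x4 texel block
    uint8_t c0[3], c1[3];      // two endpoint colors
    uint32_t idx;              // 2 bits per texel: blend weight selector
};

void decode_texel(const Block& b, int tx, int ty, uint8_t out[3]) {
    int sel = (b.idx >> (2 * (ty * 4 + tx))) & 3;   // 0..3
    for (int c = 0; c < 3; ++c)                     // lerp the endpoints
        out[c] = (uint8_t)((b.c0[c] * (3 - sel) + b.c1[c] * sel) / 3);
}

int main() {
    Block b = { {255, 0, 0}, {0, 0, 255}, 0x1B1B1B1B };  // red..blue ramp
    uint8_t rgb[3];
    decode_texel(b, 2, 1, rgb);  // fetch ONE texel, no full decompression
    std::printf("texel(2,1) = %u,%u,%u\n", rgb[0], rgb[1], rgb[2]);
    return 0;
}
```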
 

Looks like the RTX 5080 and RTX 5090 will launch next month, in October 2024.
I always look at that site URL and it makes me think 'wooftech', which is pretty appropriate given the amount of rubbish they often bark.
 
Stuff like Ray Reconstruction, Random-Access Neural Compression of Material Textures, maybe even high-poly assets generated from low-poly geometry, complex physics effects, etc.
All of the existing ones work just fine even on a 2060 (with power to spare), on old tensor cores - aside from the artificially locked FG (which AMD proved can work just fine on any GPU). The 4000 series has much-improved tensor cores, but these features can't even use the power that's already there. Physics is CPU-only; if you use the GPU for it, you'll lose FPS in games (it's either physics or graphics computation). AI has nothing to do with physics currently. Generating a high-poly model from a low-poly one is called tessellation, and that has existed for many years - nothing to do with AI or tensor cores.
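For reference, here's roughly what that fixed-function path does - one midpoint-subdivision pass, no AI involved, each pass quadrupling the triangle count (a toy sketch; real tessellators work in hardware on patches):

```cpp
#include <array>
#include <cstdio>
#include <vector>

// One step of classic (non-AI) tessellation: split every triangle into
// four by inserting edge midpoints. Each pass quadruples the poly count.
struct V3 { float x, y, z; };
using Tri = std::array<V3, 3>;

static V3 mid(const V3& a, const V3& b) {
    return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
}

std::vector<Tri> subdivide(const std::vector<Tri>& in) {
    std::vector<Tri> out;
    out.reserve(in.size() * 4);
    for (const Tri& t : in) {
        V3 m01 = mid(t[0], t[1]), m12 = mid(t[1], t[2]), m20 = mid(t[2], t[0]);
        out.push_back(Tri{t[0], m01, m20});   // three corner triangles...
        out.push_back(Tri{m01, t[1], m12});
        out.push_back(Tri{m20, m12, t[2]});
        out.push_back(Tri{m01, m12, m20});    // ...plus the center one
    }
    return out;
}

int main() {
    std::vector<Tri> mesh = { Tri{{ {0,0,0}, {1,0,0}, {0,1,0} }} };
    for (int i = 0; i < 3; ++i) mesh = subdivide(mesh);
    std::printf("triangles after 3 passes: %zu\n", mesh.size());  // 64
    return 0;
}
```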
 
Physics can be offloaded to the GPU; in fact, physics involves a lot of matrix operations and is a great candidate for shifting workloads to the GPU.
The end objective doesn't change, but the process does. Ultimately it's just about making sure the game is rendered on screen - that will never change - but the methods can be made more efficient.
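A toy example makes the "great candidate" point: in a simple explicit-Euler step, every particle updates independently, so the loop body maps one-to-one onto a GPU thread (my own sketch, not PhysX):

```cpp
#include <cstdio>
#include <vector>

// Why physics maps well to GPUs: a simple explicit-Euler step where every
// particle is updated independently. On a GPU this loop body becomes one
// thread per particle (e.g. a compute shader or CUDA kernel); nothing
// here is specific to any vendor's API.
struct Particle { float px, py, pz, vx, vy, vz; };

void step(std::vector<Particle>& ps, float dt) {
    const float g = -9.81f;                 // gravity, m/s^2
    for (Particle& p : ps) {                // independent per particle ->
        p.vz += g * dt;                     // trivially parallel
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
        if (p.pz < 0.0f) { p.pz = 0.0f; p.vz *= -0.5f; }  // bounce on floor
    }
}

int main() {
    std::vector<Particle> ps(100000, {0, 0, 10, 1, 0, 0});
    for (int i = 0; i < 600; ++i) step(ps, 1.0f / 60.0f);  // 10 s at 60 Hz
    std::printf("p0 after 10s: x=%.1f z=%.2f\n", ps[0].px, ps[0].pz);
    return 0;
}
```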
 
I didn't say it can't be; however, so far it's been very inefficient. There's a reason that back in the days of GPU-accelerated PhysX, Nvidia advised people to get two GPUs: one for graphics and one dedicated to physics. Otherwise there was loads of stuttering and bad performance, which is one of the reasons it shifted back to the CPU, as nobody was buying another GPU just for PhysX. It could be better with modern GPUs, but I doubt it. Even on a 4090, the few older games I checked didn't work great with GPU-calculated PhysX, and those are older than coal now.
 
Hmm... physics is pretty much a problem tailor-made for tensor cores; the throughput can exceed a CPU's by an order of magnitude - hardly inefficient, I would say.
And there are many other ways AI can be used to predict workloads for more efficient execution. Tessellation, for example, can be staged behind a more efficient AI model that decides whether and how much to tessellate.
The use cases are abundant.
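A sketch of that staging, with a plain distance/coverage heuristic standing in for the model (pick_level() and its constants are invented for illustration) - cheap decision first, expensive subdivision only where it pays off:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// "Decide how much to tessellate before doing it." A distance/coverage
// heuristic stands in for the learned model the post imagines; a real
// system would replace pick_level() with an inference call, but the
// staging is the same: cheap decision first, expensive subdivision only
// where it pays off.
int pick_level(float distance_m, float screen_coverage) {
    // More subdivision when the mesh is close and fills more of the screen.
    float score = screen_coverage / std::max(distance_m, 1.0f);
    int level = (int)std::round(std::log2(1.0f + 1000.0f * score));
    return std::clamp(level, 0, 6);  // 0 = leave the low-poly mesh alone
}

int main() {
    struct Case { float dist, cov; } cases[] = {
        {2.0f, 0.40f},   // hero object right in front of the camera
        {25.0f, 0.05f},  // mid-distance prop
        {300.0f, 0.01f}, // far background mesh
    };
    for (const Case& c : cases)
        std::printf("dist=%5.1fm coverage=%.2f -> tessellation level %d\n",
                    c.dist, c.cov, pick_level(c.dist, c.cov));
    return 0;
}
```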
 
But reading Nvidia, all I can see is them saying AI is the only way forward. Physics isn't mentioned at all; they've even stopped mentioning RT. The fact is the tensor cores are still largely underutilised, so if they can find sensible workloads that actually matter in games and persuade devs to use them - I'm all for it. But it can't be a full replacement for the parts that actually generate graphics.
 
All of the existing ones work just fine even on a 2060 (with power to spare), on old tensor cores - aside from the artificially locked FG (which AMD proved can work just fine on any GPU). The 4000 series has much-improved tensor cores, but these features can't even use the power that's already there. Physics is CPU-only; if you use the GPU for it, you'll lose FPS in games (it's either physics or graphics computation). AI has nothing to do with physics currently. Generating a high-poly model from a low-poly one is called tessellation, and that has existed for many years - nothing to do with AI or tensor cores.
I'm talking about a model able to produce high-detail graphics from a lower-detail asset, better than what tessellation does. Akin to UE5.

Also, CPUs can't do advanced physics at the same speed as GPUs. Probably AI-driven NPCs in great numbers as well.

With physics, more complex effects, up to the level of offline rendering, but done in real time in games - where a "close enough" estimate is plenty if it looks decent.
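That "close enough" trade has a long history in games - the classic example being Quake III's fast inverse square root, an approximation that was plenty accurate for shading math:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdint>
#include <cstring>

// Games have always traded exactness for speed when "close enough" looks
// fine - the classic case being Quake III's fast inverse square root.
// AI-approximated physics and lighting is the same trade at a bigger scale.
float fast_rsqrt(float x) {
    uint32_t i;
    std::memcpy(&i, &x, sizeof i);          // reinterpret float bits
    i = 0x5f3759df - (i >> 1);              // magic initial guess
    float y;
    std::memcpy(&y, &i, sizeof y);
    y = y * (1.5f - 0.5f * x * y * y);      // one Newton refinement step
    return y;
}

int main() {
    float x = 10.0f;
    std::printf("fast: %.6f exact: %.6f\n", fast_rsqrt(x), 1.0f / std::sqrt(x));
    return 0;
}
```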
 