
Blackwell GPUs

I'm talking about a model able to produce high-detail graphics from a lower-detail asset, better than what tessellation does. Akin to UE 5.
If that's done live with current generative AI, you will get a different final model every time, on every machine. That's not a good idea, especially in online games. Again, current AI is mostly generative chaos and way overblown by marketing; it's the easiest type of AI to build and really not very useful for the things you suggest, IMHO. And other types of AI are much harder to pull off, which means we are far from wide retail use of them.
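The "different result on every machine" point is easy to demonstrate: sampling from the same distribution with a different seed picks a different sequence. A toy Python sketch, with a made-up vocabulary and weights (purely illustrative, not any real asset pipeline):

```python
# Toy illustration of why live generative output differs per machine/run:
# sampling from the same probability distribution with different seeds
# picks different tokens, so two clients can build different "models".
import random

def sample_sequence(seed, vocab=("cube", "sphere", "spike", "blob"), n=5):
    rng = random.Random(seed)
    # Same fixed distribution on every machine; only the seed differs.
    weights = [0.4, 0.3, 0.2, 0.1]
    return [rng.choices(vocab, weights)[0] for _ in range(n)]

print(sample_sequence(seed=1))
print(sample_sequence(seed=2))  # usually a different sequence
```

Same seed, same output; any difference in seeding (or in floating-point behaviour across hardware, for a real model) and the generated "asset" diverges between clients.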

Also, CPUs can't do advanced physics at the same speed as GPUs.
They are plenty fast these days with high core counts, unless you want to do very advanced things like fluid or smoke simulation, but that's not what players have been asking for. Destruction physics has been done and dusted on CPUs for many years now (on CPUs far slower than modern ones); lighting was the issue with raster and prebaked lights. With fully dynamic RT it should be easy to do, and you don't need the GPU for physics at all then, just for lighting it properly.
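As a rough illustration of how cheap basic destruction physics is on a CPU, here is a toy Python integrator stepping falling debris chunks (all numbers invented for illustration; a real engine would use full rigid bodies and collision meshes):

```python
# Toy CPU "destruction" step: semi-implicit Euler for debris chunks.
# Thousands of chunks per frame is trivial for a modern multi-core CPU;
# the values here are made up for illustration.
GRAVITY = -9.81
DT = 1.0 / 60.0

def step(chunks):
    """chunks: list of dicts with 'pos' and 'vel' (vertical axis only, metres)."""
    for c in chunks:
        c["vel"] += GRAVITY * DT
        c["pos"] += c["vel"] * DT
        if c["pos"] < 0.0:          # hit the ground: bounce with damping
            c["pos"] = 0.0
            c["vel"] = -c["vel"] * 0.3
    return chunks

debris = [{"pos": 5.0, "vel": 0.0} for _ in range(1000)]
for _ in range(60):                 # simulate one second at 60 Hz
    step(debris)
print(debris[0]["pos"] < 5.0)       # chunks have fallen
```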

Probably AI (NPCs) in great numbers as well.
Again, generative AI models are fine for creating NPCs with conversations etc., but that is pretty much the limit. And even that usually isn't as good as dialogue written manually by pro writers, and it's prone to hallucinations and other abnormalities.

With physics, more complex effects, to the level of offline rendering, but done in real time in games, where a "close enough" estimate is plenty if it looks decent.
It can't be done with current AI tech. Maybe in the future, but we are not there yet, unless you are happy with really weird-looking games, different every time and more like horror dreams than reality.

To summarise, I would advise not listening too much to the marketing rubbish about AI. It has its uses, but not in the way you seem to wish.
 
DLSS and Ray Reconstruction can do a pretty good job of rebuilding the image and adding detail where it's missing. I'm thinking more towards that area, rather than a ChatGPT/DALL-E model, which is indeed something else.

Fluid simulation, of course, but also complex stuff such as metal bending accordingly, concrete breaking, etc. Not the "simple" Red Faction type where you destroy stuff and the rubble just... disappears! A lot more than that.

For AI/NPCs I was thinking primarily about something similar to crowd simulation, tech proven by AMD about... 16 years ago.
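Crowd simulation of that sort boils down to simple per-agent steering rules evaluated in parallel. A toy Python sketch (goal attraction plus neighbour separation; all parameters invented for illustration, nothing from the AMD demo):

```python
# Toy crowd-steering step: each agent moves toward a shared goal while
# being pushed away from close neighbours (separation). GPU crowd demos
# run rules like this for one agent per thread; plain Python here.
import math

def step_agents(agents, goal=(0.0, 0.0), speed=1.0, radius=1.0, dt=0.1):
    """agents: list of [x, y] positions, updated in place."""
    new = []
    for i, (x, y) in enumerate(agents):
        # Attraction toward the shared goal (unit vector).
        gx, gy = goal[0] - x, goal[1] - y
        d = math.hypot(gx, gy) or 1.0
        vx, vy = gx / d, gy / d
        # Separation: push away from any neighbour closer than `radius`.
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            dist = math.hypot(dx, dy)
            if 0 < dist < radius:
                vx += dx / dist
                vy += dy / dist
        new.append([x + vx * speed * dt, y + vy * speed * dt])
    agents[:] = new

crowd = [[5.0, 0.0], [6.0, 0.0], [0.0, 7.0]]
for _ in range(20):
    step_agents(crowd)   # agents drift toward the goal without clumping
```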

For dialogue with NPCs, as per ChatGPT: it uses about 1-10 TFLOPS for normal text-to-text and another 2-10 TFLOPS for understanding and generating speech, at FP16. The 4080 is close to 50 TFLOPS FP16. Next-gen cards should do more and, if needed, could perhaps decouple the 1:1 ratio of FP16 to FP32, if that would increase performance.
I had a role-play session with the original ChatGPT 4 a while back, set in the Fallout 4 universe. It was surprisingly good! I think what would really push things is to use it as a game master for an open-world game, where even hallucinations could (potentially) be fun!
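Taking the rough figures above at face value (they are the post's estimates, not benchmarks), the budget arithmetic looks like this:

```python
# Back-of-envelope budget check using the rough figures from the post
# (the TFLOPS numbers are estimates, not measurements).
GPU_FP16_TFLOPS = 50.0       # approx. RTX 4080 FP16 throughput

def remaining_budget(text_tf, speech_tf, gpu_tf=GPU_FP16_TFLOPS):
    """Fraction of GPU FP16 throughput left over for rendering."""
    used = text_tf + speech_tf
    return (gpu_tf - used) / gpu_tf

# Worst case from the post: 10 TFLOPS text + 10 TFLOPS speech.
print(f"{remaining_budget(10, 10):.0%} of FP16 budget left")  # 60% left
```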

Anyway, with the exception of GPU-accelerated NPCs (the AMD demo) and physics (PhysX), which are already proven tech, everything else is more like "what it could be". What it will be... well, time will tell. :)
 
DLSS and Ray Reconstruction can do a pretty good job of rebuilding the image and adding detail where it's missing. I'm thinking more towards that area, rather than a ChatGPT/DALL-E model, which is indeed something else.
Those are relatively simple, small algorithms that barely use the tensor cores, since they only do inference. Training is another matter, but Nvidia deals with that on their side of things. I highly doubt anything like that will be used to replace tessellation, for example.

Fluid simulation, of course, but also complex stuff such as metal bending accordingly, concrete breaking, etc. Not the "simple" Red Faction type where you destroy stuff and the rubble just... disappears! A lot more than that.
There are already games doing metal-bending etc. physics on the CPU with zero issues, like BeamNG. AMD has their own CPU-based physics engine (introduced many years ago) for exactly the things you mention, but it's not being used by any games as far as I'm aware; devs just weren't interested: https://gpuopen.com/femfx/
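The core of "metal bending" in engines like FEMFX is elastic-plastic deformation: below a yield stress the material springs back, above it the deformation becomes permanent. A hypothetical 1D sketch in Python (made-up stiffness and yield values; FEMFX does this in 3D with finite elements):

```python
# Toy 1D elastic-plastic spring: below the yield stress the material
# springs back; above it, the rest length shifts permanently, which is
# what "metal bending" means mechanically. Numbers are invented.
def load_and_release(strain, stiffness=100.0, yield_stress=50.0):
    """Return permanent strain after loading to `strain` and releasing."""
    stress = stiffness * strain
    if stress <= yield_stress:
        return 0.0                       # purely elastic: springs back
    # Plastic part: everything past the yield point stays deformed.
    elastic_strain = yield_stress / stiffness
    return strain - elastic_strain

print(load_and_release(0.3))   # light load -> 0.0 (no dent)
print(load_and_release(1.0))   # heavy load -> 0.5 permanent strain
```

Even a few thousand such elements per frame is well within modern CPU budgets, which backs up the point that the bottleneck was never raw physics throughput.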
For AI/NPCs I was thinking primarily about something similar to crowd simulation, tech proven by AMD about... 16 years ago.
Again, old tech, but devs just aren't interested so far, it seems.
For dialogue with NPCs, as per ChatGPT: it uses about 1-10 TFLOPS for normal text-to-text and another 2-10 TFLOPS for understanding and generating speech, at FP16. The 4080 is close to 50 TFLOPS FP16. Next-gen cards should do more and, if needed, could perhaps decouple the 1:1 ratio of FP16 to FP32, if that would increase performance.
I had a role-play session with the original ChatGPT 4 a while back, set in the Fallout 4 universe. It was surprisingly good! I think what would really push things is to use it as a game master for an open-world game, where even hallucinations could (potentially) be fun!
Nvidia already has their own tech for that, called ACE, which they are pushing hard with devs now. I reckon we'll see it in some games soon enough. However, it's a cloud solution, and that can't and won't be free; a subscription to keep it active in games will be a must, I reckon. That most likely won't gain popularity.

Anyway, with the exception of GPU-accelerated NPCs (the AMD demo) and physics (PhysX), which are already proven tech, everything else is more like "what it could be". What it will be... well, time will tell. :)
None of that accelerates graphics generation in games, though, and that's what the Nvidia CEO keeps blabbering about recently: computational power increases are over, AI is the future of graphics, etc. Just zero actual details given on what that really means.
 
Yes, it seems like GPUs are selling for $30k each, so I won't rule out Nvidia selling GPUs exclusively as a subscription service (maybe starting 2034). Or you could buy one for $29,990 if you know Jensen personally.
 
It would be great to get some more information. I want to build an AI PC with multiple cards, and I would much prefer to have 3-4 cards, each with far more VRAM than the 4090.

If it would realistically be announced and available by the end of the year, I'd stick with smaller models for now and buy a few cards then... but if it's going to be Q1, I might as well just buy multiple 4090s, which is annoying, as they are end-of-line and the price is still the same!
 
At this point, I'd just like an official reveal to stop the rumour mill. The latest scuttlebutt/baseless speculation is that the 5090 will need two 16-pin power connectors.
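For scale: assuming the standard 600 W rating per 16-pin (12V-2x6) connector plus 75 W from the PCIe slot, the dual-connector rumour would imply this theoretical ceiling:

```python
# Rumoured dual 16-pin board: theoretical power ceiling.
# 600 W per 12V-2x6 connector and 75 W from the PCIe slot are the
# standard ratings; the dual-connector claim itself is just a rumour.
CONNECTOR_W = 600
PCIE_SLOT_W = 75
ceiling = 2 * CONNECTOR_W + PCIE_SLOT_W
print(ceiling)  # 1275
```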


I mean... seriously?
 
I really struggle to see who would want to buy these super-expensive cards unless you need them for professional or commercial use; it's not as if the games industry has blessed us with a plethora of stunning AAA "must play" titles over the last 4 years.
I think GPUs can sometimes be like mobile phones, where we just want the latest, not because we really need it.
 