
8GB VRAM defo is not enough for Gaussian Splatting

mrk
Randomly the tube recommended this, which then led me down a rabbit hole of sorts. This method has been around for a while, but it's so heavy that a lot of VRAM is apparently required. The results speak for themselves though: it doesn't need ray tracing or path tracing to produce what appear to be the same results, the scenes look like UE5 demos, yet render at 124fps.


Takeaway comment:

This technique is an evolution, one could say, of point clouds. The thing most of the analysis I've seen/read misses is that the main reason this exists now is that we finally have GPUs fast enough to do it. It's not like they're the first people to look at point clouds and think "hey, why can't we fill the spaces between the points?" EDIT: I thought I'd watched to the end of the video, but I hadn't, the author addresses this at the end :) It's not just VRAM though! It's rasterization + alpha blending performance too.
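The "rasterization + alpha blending" cost that comment mentions can be illustrated with a toy sketch (hypothetical, nothing like the real CUDA renderer): once the splats covering a pixel are depth-sorted, the renderer composites them front to back, accumulating colour until the pixel is effectively opaque. This per-pixel blend over thousands of overlapping splats is where a lot of the frame time goes.

```python
# Toy sketch of front-to-back alpha compositing for one pixel.
# Each splat contributes (color, alpha); the list is assumed to be
# sorted nearest-first, as a splatting rasterizer would arrange it.
def composite_pixel(splats):
    """splats: list of ((r, g, b), alpha) tuples, nearest first."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still passing through
    for (r, g, b), alpha in splats:
        weight = transmittance * alpha
        color[0] += weight * r
        color[1] += weight * g
        color[2] += weight * b
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return color
```

For example, a half-transparent red splat in front of an opaque green one blends to an even red/green mix; a real renderer does this for every pixel, for every overlapping splat, every frame, which is why raw blend throughput matters as much as VRAM.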

Another video:


So essentially it employs a "neural radiance field technique" - and this is similar to what Nvidia has pointed at but not yet unlocked in the driver to speed up PT rendering (source). The difference is that while Gaussian splatting relies on real-time training against a static scene, NRC works in fully dynamic scenes. Splatting is likely many years away from handling fully dynamic scenes, but when it does, it could negate the need for powerful ray-tracing hardware and the performance penalty that comes with path tracing.

Looks like now that GPUs are finally powerful enough and have enough VRAM (well, some anyway :p) - This sort of tech could well be the next big thing over the next few years.
 
Aren't NeRFs the thing that produced the really highly detailed Unreal Engine demo of the factory unit for a shooter? I think it was shown about six months ago.

 
That was done using an Epic store asset built with photogrammetry, was it not?
 
It was, but it looks like Gaussian splats are an evolution of NeRFs, or am I getting the wrong end of the stick from the videos?

Edit: OK, so it looks like NeRFs query a 3D volume per ray while splats get rasterized as 2D ellipses on screen. To me they seem derived from similar tech (photogrammetry-style capture) but used in a different way.

Either way very exciting.
 
AFAIK for real-time lighting and dynamic scenes you're still going to need a form of ray tracing, and it won't look as good as the original source then.

You also need about 8x the source information for a real-world scene, or about 4x if built from an ultra-high-quality rendered 3D model, to re-light correctly in real time.
 


The demo shown uses photos of real life from a camera to make a 3D image.

So how do you do this with a game? You'd need to render the image first with ray tracing and all the graphics, then take photos of it to feed into the Gaussian thing?

And you say it uses 8GB VRAM or whatever, and that was to render a simple room or static image - how much VRAM is needed to render a city? Because that's what you need to make a game. Is it 1TB of VRAM?
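For a rough feel of that VRAM question, some back-of-envelope arithmetic helps. The per-splat parameter counts below (position, scale, rotation, opacity, degree-3 spherical-harmonic colour) are typical figures from published implementations, not exact, and real scenes vary a lot in splat count:

```python
# Back-of-envelope splat memory estimate (illustrative sizes only):
# position (3) + scale (3) + rotation quaternion (4) + opacity (1)
# + spherical-harmonic colour coefficients (48), stored as float32.
floats_per_splat = 3 + 3 + 4 + 1 + 48   # = 59 floats
bytes_per_splat = floats_per_splat * 4  # float32 = 4 bytes -> 236 bytes

def vram_gb(n_splats):
    """VRAM in GiB for n_splats raw (uncompressed) Gaussians."""
    return n_splats * bytes_per_splat / 1024**3

# A room-scale scene of roughly 5 million splats:
print(round(vram_gb(5_000_000), 2))  # ~1.1 GB just for the splat data
```

So a single room-scale scene is on the order of a gigabyte before any framebuffers, sorting buffers or training overhead; a city would be tens of millions of splats or more, though nowhere near 1TB, and compression/level-of-detail schemes shrink this considerably.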
 
Yes, the current versions being shown are all static scenes with baked lighting etc. - it's most likely years before dynamic scenes can be rendered this way.
 
And just like clockwork, YouTube pushed this into my feed, and it was actually quite interesting. I've seen this guy's videos before and he's pretty good too.

 

I’m still struggling to see how it can be applied to games as it seems to rely heavily on taking pictures. I’m also not sure it would be compatible with destructible/interactive environments.
 


If you take enough images it can be processed to produce geometry/data compatible with level design, same as photogrammetry, but you need a few more data points for this kind of use. You'd have to split things up at the point cloud level, then recombine them in real time to produce the 3D Gaussian splatting data. You'd also need a lot more reference information (probably AI-assisted) so as to have the right diffuse colour, specular, transparency, etc. for re-lighting the scene, which would necessitate a lot more memory, both system and VRAM.

To be honest I'm not sure I'm convinced of the merits when it comes to games - you're still going to have to rely on ray tracing etc. once you add in dynamic elements. Most of this can be artistically reproduced with a reasonably high polygon count and a very high-resolution megatexture-style ability to paint anywhere in the scene, removing things like seams and repetitive tiling that break the illusion in most games, so that details run across surfaces and objects.
 