
What do gamers actually think about Ray-Tracing?

Does VRAM matter a lot to RT?

It does require a bit more, yeah, but not much as I recall. The one that seems to need even more is frame generation. Thus far the only title where I've noticed 12GB struggling is Hogwarts when enabling FG.

Everything else has been fine with 12GB for me. That said, it's now time for a minimum of 16GB in the mid-range.
 
People need to stop bringing Hogwarts into the discussion; it was a bugged, poorly optimised game at launch and remains so to this very day, with both bad VRAM and system RAM optimisation as well as ongoing shader and traversal stuttering. They only just announced a remaster is in the works, so it's clear even they can't fix it and are remastering it instead.

There are a handful of games just like that which remain in a state of runaway memory use, and reviewers and gamers alike seem to file them under "this will EAT your VRAM!" - when it's purely a case of a developer not optimising their game, then years later generating a new cash grab as a remaster or whatever.

Yes, RT requires more VRAM.

DLSS 3 Frame Gen was released to increase fluidity, mainly for RT, at a further cost to VRAM.

RT does not require a whole lot more VRAM and FG doesn't come into it either. Look at the latest titles like Silent Hill 2 Remake, using the most popular game engine right now and for the foreseeable future:

It's almost as if people forget that RTSS exists where we can deep dive into exactly how much VRAM is being used in any rendering mode:

Software Lumen at native 4K:
(screenshot)

Software Lumen at DLSS 4K Performance:
(screenshot)

Hardware Lumen at native 4K:
(screenshot)

Hardware Lumen at DLSS 4K Performance:
(screenshot)


So between upscaling and native, and HW vs SW RT respectively, there is a mere <1GB difference in actual VRAM use in UE5, and it's still well below 12GB at 4K. The thing to note here is that modern games state an SSD is required and assets are streamed in and out on the fly, so if a game is optimised well for memory, resident memory isn't clogged up with everything; assets are streamed in and out quickly enough that it shouldn't matter as long as a safe baseline of VRAM is available. In modern games without memory optimisation issues, what I'm seeing is that this safe baseline is 10-12GB minimum for a game running at 4K, whether upscaled or not. We should not be normalising requirements based on badly optimised games, as that tells the devs we don't care about optimisation.
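For anyone who'd rather log these numbers than eyeball an overlay, here's a rough sketch of pulling the same dedicated VRAM figure via NVML in Python. The package choice, sampling interval and device index are my own assumptions here, not anything RTSS itself uses:

```python
# Minimal sketch: log overall dedicated VRAM use while a game runs, via NVML.
# Requires the nvidia-ml-py package (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(60):  # sample once per second for a minute
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 1024**3:.2f} GiB "
              f"of {mem.total / 1024**3:.2f} GiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

Run it alongside the game and you get a plain text log of total VRAM allocation over time, which is roughly the same figure the overlay readout shows.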

To be fair to Bloober Team, Silent Hill 2 is one of the examples of a game where memory use is excellent; it's just a shame they didn't bother to optimise the frametime performance. There's an inherent timing bug in this game, similar to Jedi Survivor, that results in traversal stutter, and there's no clear reason as to why.
 
It does require a bit more, yeah, but not much as I recall. The one that seems to need even more is frame generation. Thus far the only title where I've noticed 12GB struggling is Hogwarts when enabling FG.

Everything else has been fine with 12GB for me. That said, it's now time for a minimum of 16GB in the mid-range.
If that's the case, the new 5080 will struggle with its weak 16GB.
 
@mrk

Your above example - both are running RT, and you're even showing an increase when you use HW RT: VRAM increases.

'A whole lot more' - you've managed to add context no one used, while also showing VRAM increasing when enabling HW RT over software. Maybe try using an example of RT off vs RT on to give a better indication of the on vs off VRAM impact.

What you are failing to realise/acknowledge is that I'm generally talking about mainstream RT usage, always have been. You are in the minority use case running a 4090; it will plough through anything with good playable framerates. :thumbsup:
Never knew DLSS used it, I thought it ran on the tensor cores.
It's supposed to be using the tensor cores, but there's 100% an impact on VRAM with DLSS 3. There are two users with 12GB 4070-series cards explaining the VRAM impact on RTX, and plenty of tech vids out there showing it too.
 
What you are failing to realise/acknowledge is that I'm generally talking about mainstream RT usage, always have been. You are in the minority use case running a 4090; it will plough through anything with good playable framerates. :thumbsup:
That's not really relevant though; my post references actual VRAM and RAM use, and the fact I have a 4090 makes zero difference. If it did, then much more VRAM would have been used, if what you're alluding to were the norm. "Mainstream RT" - what does that even mean? In 2024 UE5 is the only mainstream RT, because there is no option in a UE5 game without RT; Lumen is RT, as a reminder in case you had forgotten.

Just because I'm showing numbers from a 4090 doesn't change the fact that the memory numbers remain the same. The above is one quick and easy example; I could do exactly the same for other recent games and show the same scaling of results with and without RT/upscaling/frame gen enabled etc. In fact I have done, in those respective game threads. Hell, I even saw the same sort of memory scaling back with the 3080 Ti FE, which I also documented in detail at the time for games launched around then.


Edit*
The only exception I can think of is if a GPU isn't fast enough to handle the kind of settings a user is applying, in which case, yes, if the GPU can't process memory loads effectively then more data will sit in VRAM for longer, which could see a bigger load on said memory. Having more VRAM won't suddenly improve performance, as the bottleneck is in the GPU itself, not memory usage. That's obviously assuming the minimum baseline mentioned above is being met.

The solution to that issue is simple: lower settings to align with that GPU's ability, or buy a better GPU. 4070s are being used by pockets of gamers in order to utilise FG, but it's false marketing, since a 4070 isn't powerful enough to hit the minimum of 60fps in many modern games at 1440p that would then give a great result once FG is enabled. This goes back to the above paragraph: the GPU needs to be capable enough to shift that data through the pipeline effectively, and throwing more VRAM at the card won't make a big difference if the baseline is already being met (12GB, let's say).

It's a song and dance dirtied by false marketing, with Nvidia plastering frame gen across every 40 series card when the tech is only properly realised on the 4080 and above, at least when we are talking 1440p+ resolutions and higher settings. I believe AMD and the Lossless Scaling devs are the only ones that publicly state 60fps should be the baseline before enabling FG, whilst Nvidia just post the split-screen videos showing 91fps or whatever after FG is enabled, without any fine-print detail.
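Just to illustrate that 60fps baseline point with rough numbers - this is purely hypothetical arithmetic, and the ~1.9x factor is my own assumption rather than any published Nvidia figure:

```python
# Illustrative only: sanity-check a base framerate before enabling frame gen.
# The 60 fps floor is the guidance mentioned above; the scaling factor is a guess.
def frame_gen_estimate(base_fps: float, scaling: float = 1.9) -> str:
    if base_fps < 60:
        return (f"{base_fps:.0f} fps base is below the 60 fps floor; "
                f"FG would show ~{base_fps * scaling:.0f} fps, but latency and "
                f"artifacts will still reflect the low base rate.")
    return f"{base_fps:.0f} fps base -> roughly {base_fps * scaling:.0f} fps with FG."

print(frame_gen_estimate(45))  # the 4070-at-1440p-max-settings sort of scenario
print(frame_gen_estimate(70))  # a comfortable baseline
```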
 
Thread doesn't need the to-and-fro posts of 2 dudes talking about apparently different things.

It's the back end of 2024; almost everyone running a mainstream 40 series card (NOT running a 4090/80/70Sti) knows RT/DLSS 3 has an impact on VRAM - the 60/70 class GPUs can at times outrun their VRAM. And if you disagree, using your 4090 as your case in point, that's fine too. :thumbsup:
 
Does VRAM matter a lot to RT?

Apart from the usual "it depends", there are many features that actually get affected by it, or should I say the lack of it. A good place to have a nosey is some of the benchmarking on the 4060 Ti. The smooth frametime graph of the larger 16GB version paints its own picture.
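If anyone wants to turn those frametime graphs into numbers themselves, here's a rough sketch of working out average fps and 1% lows from a captured frametime log. The file name and column name are placeholders (whichever capture tool you use will name them differently), and note some reviewers define 1% lows slightly differently:

```python
# Hedged sketch: average fps and 1% lows from a CSV of per-frame times in ms.
import csv

def one_percent_low(frametimes_ms: list[float]) -> float:
    """Here defined as the fps of the slowest 1% of frames (average of that slice)."""
    worst = sorted(frametimes_ms, reverse=True)
    slice_ = worst[:max(1, len(worst) // 100)]
    return 1000.0 / (sum(slice_) / len(slice_))

with open("capture.csv", newline="") as f:  # placeholder file name
    frametimes = [float(row["frametime_ms"]) for row in csv.DictReader(f)]

avg_fps = 1000.0 / (sum(frametimes) / len(frametimes))
print(f"avg: {avg_fps:.1f} fps, 1% low: {one_percent_low(frametimes):.1f} fps")
```

A card that's running out of VRAM tends to show up far more in the 1% low figure than in the average, which is exactly what the 8GB vs 16GB 4060 Ti comparisons illustrate.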



But the irony is that both frame generation and ray tracing require more VRAM.

Thanks Steve!

It's no surprise that a 4090 user with a good spec machine has no issues with PT, RT, FG etc. However, you cannot market it like this for all gamers down the stack, where Nvidia will typically gimp the bandwidth/bus and memory size while still commanding that premium.
 
Thread doesn't need the to-and-fro posts of 2 dudes talking about apparently different things.

It's the back end of 2024; almost everyone running a mainstream 40 series card (NOT running a 4090/80/70Sti) knows RT/DLSS 3 has an impact on VRAM - the 60/70 class GPUs can at times outrun their VRAM. And if you disagree, using your 4090 as your case in point, that's fine too. :thumbsup:
That's not what I said at all but ok...
 
4070s are being used by pockets of gamers in order to utilise FG, but it's false marketing, since a 4070 isn't powerful enough to hit the minimum of 60fps in many modern games at 1440p that would then give a great result once FG is enabled. This goes back to the above paragraph: the GPU needs to be capable enough to shift that data through the pipeline effectively, and throwing more VRAM at the card won't make a big difference if the baseline is already being met (12GB, let's say).
This.
Basically only the 4080 and 4090 have enough juice to push 4K with some RT. 12GB of VRAM for 4K is not really a thing, since a card with only that much VRAM should not be used higher than 1440p (or even 1080p).
 
I was thinking 2023 looks better.

Then I noticed it wasn't just the lighting etc; they changed whole ship models to look worse.

(comparison screenshot)

They have updated the models. Those models are physical assets - player-ownable ships with full and fully functional interiors - and the one you're looking at is 450m long; it takes a year to make one of that size. The largest player-ownable ship is the other one with ships flying out of the front: it is 960m long and has an internal tram system.

IMO the bottom one looks way better, much more detail on it, and contrasting detail.
 