10GB VRAM enough for the 3080? Discuss...

If that was the goal, then there would be no "Graphics Settings" section in any game; they're there to be turned down, not up.

Anyone who struggles with that concept is confusing PC gaming with console gaming. :)

That's why I don't get the fixation on vram.

It's like lowered settings are not a problem at all if the GPU itself is too weak, but the same settings and framerates are some sort of gaming apocalypse if caused by vram rather than the GPU.

It's two siblings in the same family where one can do no wrong in the eyes of the parent and the other gets in trouble for the slightest transgression.
 
Agreed.

If you think about it, your example is exactly why we have Graphics Settings. I could take a PC that runs "buttery smooth with everything on max settings" because that PC is driving a 1080p 60Hz monitor and the GPU is basically Zzz. That same PC plugged into my CX48 will then appear to fall flat on its face because it's now driving 4K 120Hz. That's why the settings are there: to be adjusted to suit the absolute myriad of setups available to PC gamers, and the starting point is always everything maxed, then toggle down until satisfied.
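
To put rough numbers on that (back-of-the-envelope arithmetic only, using my CX48 example): going from 1080p 60Hz to 4K 120Hz asks the same GPU for roughly eight times the pixel throughput.

```python
# Back-of-the-envelope pixel throughput comparison (illustrative only).
px_1080p60 = 1920 * 1080 * 60    # pixels per second at 1080p 60Hz
px_4k120   = 3840 * 2160 * 120   # pixels per second at 4K 120Hz

print(f"1080p60: {px_1080p60 / 1e6:.0f} Mpx/s")
print(f"4K120:   {px_4k120 / 1e6:.0f} Mpx/s")
print(f"Ratio:   {px_4k120 / px_1080p60:.1f}x")  # ~8x the work per second
```

Real-world scaling is never perfectly linear in pixel count, but it shows why identical settings can be buttery smooth on one display and fall over on another.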

That's why this VRAM conversation is mainly driven by two types of gamers:
  1. Those who can't get their hands on a 3080 and therefore want to justify that they don't want one anyway, because VRAM.
  2. Those who bought a 3090 and need to justify the purchase premium, with the only real difference between the two cards being VRAM.
Actual 3080 owners can just chuckle at the absurdity of the "debate".
 
Do we have any actual data on real VRAM usage at various settings across 1440p and 4K?

I'm expecting to turn a setting down at some point in the next couple of years. Big deal.

Another year and I'll have a new card with more RAM. Why is this even a question?
 
It's very relevant if you want to compare hardware, because it has a big impact on what you're saying. There are clear benefits and costs involved in that deal for cheap mid-tier hardware, and you can't just accept the benefits and pretend the costs associated with them don't exist or aren't relevant.
You said this:

Spot on, it's no secret to anyone even remotely technical that console generations do not have fast APUs; they're sharing the same die for CPU and GPU functions, and all in all they aren't that fast.
I responded with this:
Considering that the PS5 and the XSX are better than the vast majority of gaming PCs, I think they are doing pretty well for themselves.


The cost of the consoles is irrelevant to whether or not they are fast and whether or not they are faster than a majority of gaming PCs.


I never said they don't know how to build consoles. Sony and MS have the same constraints as Nvidia and AMD do when doing things like deciding memory size. Irrelevant waffle...
What you did say is that they put in too much VRAM. There is no way that Sony and MS are going to arbitrarily put too much VRAM in their consoles, which by your own admission are price sensitive. They would engineer a way around it. Oh wait, they did: that fancy SSD on the PS5 is looking mighty fine. But that's a different conversation.

I think they know what they are doing.

AMD deliberately made a trade-off with their GPU design and created "Infinity Cache", which trades away transistors that could be used for doing calculations on the GPU itself and instead spends them on a much larger amount of local high-speed cache. The upshot of this is that more data is kept on die and the bandwidth requirements on the vRAM are lower. This allows them to target cheaper GDDR6, but at the expense of wasting die space on cache that could be spent on, say, RT cores or something else. This allows them to use the less power-hungry GDDR6 and not have to clock the **** off the memory to hold a slim lead.
FTFY

P.S. you've let the mask slip. Twice now. ;)
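
As a very rough illustration of the trade-off being described (every number below is an assumption for illustration, not an AMD spec): a big on-die cache raises effective bandwidth in proportion to how often requests hit it, which is what lets a narrower, cheaper GDDR6 bus keep up.

```python
# Toy effective-bandwidth model for a large on-die cache.
# All numbers are illustrative assumptions, not AMD specs.
vram_bw_gbs  = 512.0   # e.g. 256-bit GDDR6 at 16 Gbps (assumed)
cache_bw_gbs = 1900.0  # assumed on-die cache bandwidth
hit_rate     = 0.60    # assumed fraction of memory traffic served by the cache

effective_bw = hit_rate * cache_bw_gbs + (1 - hit_rate) * vram_bw_gbs
print(f"Effective bandwidth: {effective_bw:.0f} GB/s")  # ~1345 GB/s with these assumptions
```

Shrink the hit rate, for example at 4K where the working set is larger, and the effective figure falls back towards the raw GDDR6 number; that's the risk side of the same trade-off.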

The Xbox made that speed trade-off for probably the same reason: slower memory is cheaper memory, and if a large chunk of the pool is being used for more system-RAM-like purposes, the whole pool doesn't need to be that fast. The only reason I mention this is because it re-confirms the calculations most of us had done about what % of memory is realistically going to be used by which parts of the system, and how much you can realistically consider the equivalent of vRAM.
Maybe you didn't notice, but I was mocking the silly notion that game developers can't use the slower memory for graphics items. Game developers can use it, and if they need to or want to, they damn well will use it. It doesn't matter what our remotely technical forum experts think.

Whether I'm right in my additional hypothesis about the consoles likely not being able to make good use of 10GB for vRAM purposes remains to be seen; ideally we need tools to measure memory usage on the consoles. But if we take a modern game targeted at the next-gen systems (one that makes use of RT features and pushes the best-in-class PC dGPUs to their limits), a game like WD:L, then so far I've been vindicated on my hypothesis.
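
For anyone wanting to redo those calculations, the publicly described Series X memory layout is roughly this (the OS reservation figure is approximate):

```python
# Xbox Series X memory split as publicly described by Microsoft (approximate figures).
gpu_optimal_gb = 10.0  # the faster pool, addressed at ~560 GB/s
standard_gb    = 6.0   # the slower pool, addressed at ~336 GB/s
os_reserved_gb = 2.5   # roughly, taken from the slower pool

game_total_gb = gpu_optimal_gb + standard_gb - os_reserved_gb
print(f"Total available to games:    {game_total_gb} GB")                 # ~13.5 GB
print(f"Fast pool ('VRAM-like'):     {gpu_optimal_gb} GB")
print(f"Slower pool left for games:  {standard_gb - os_reserved_gb} GB")  # ~3.5 GB
```

Nothing stops a developer spilling graphics data into the slower pool, as pointed out above; the split just shows why ~10GB keeps coming up as the ballpark for the consoles' effective vRAM.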

 
I'll say one thing about Nvidia, at least they've made an effort to get GPUs into the hands of gamers at MSRP - AMD have been completely MIA.
That is true. I have had a 3080 and a 3070 FE, plus helped quite a few others get FE cards. With AMD, prices are just loony. I only entertain FE cards as a result and will likely do so again next gen unless AMD sort it out.
 
If you turn every setting to max and don't understand why they put all those graphics options there, what they do and how to use them, then you probably need to go back to console. They are there for a very wide range of PC hardware. I've seen people buy a 3090 for a 1080p 60Hz monitor.

The only time you see all gfx settings at max is for apples-to-apples card/game reviews. That will give you the performance difference between cards and a good indication of how much extra performance per £££ you get vs another card. It is not how you set up a game to play it and get the most from your card.

Apparently 90% of the world's gamers are still at 1080p, which is why many potato-enhancement gfx options are there; at high resolutions (PC gaming master race) you'd turn them off, freeing up GPU horsepower.

At the end of the day, if you believe that a 3080 runs out of VRAM once you have set the game up (for gaming) and that you need to spend double for a 3090/6900XT, then you crack on.

Even if you can make the VRAM run out to the point of stutter, are you really going to pay twice the price for a card because you don't want to turn down a setting where you can't see the difference between high and ultra anyway, except in stills?

Those with 3080s still have £700+ in their pockets ready for the next round of cards.
 
Let the "but it's DEATHLOOP" posts begin. :D

Even the 6700 XT looks to run a little short of video memory with RT enabled, although it appears fine with RT off.

RT is rubbish at the moment, there is not ONE game where it looks anywhere near as good as a 3dsmax/vray render.
MS/AMD/Nvidia need to up their game; DX12 Ultimate does NOT deserve the "Ultimate" title.
Maybe some REAL RT hardware for the 4090?
 
Ray Tracing cannot compete with Path Tracing

An Nvidia developer did a demonstration of this in Unreal Engine recently; it was very informative, and he had three lighting models that he flicked between in the demo scene: pure rasterised lighting and shadows, Nvidia RTX ray-traced lighting and shadows, and lastly pure path-traced lighting and shadows.

The RT lighting and shadows looked much better than the pure raster version; it was naturally dynamic and objects interacted with the world. But then he switched to full path tracing and, oh damn, once you go to path tracing, ray tracing looks like cheap rubbish.

We won't be able to get super-lifelike images until we move to path tracing; the RT plugins for RTX and DirectX are rubbish when you put them side by side with the Unreal Engine path tracer.

But games are unplayable with path tracing; the hardware just isn't there for real-time 60fps. The main use case for path tracing in Unreal Engine is as a comparison, a source of truth: the path tracer tells you what the game should look like, and then you "try" to configure the RT settings to get as close as you can. The reality is that you can't get it close enough no matter what RT settings the plugin uses at present.
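
A toy way to see the gap being described (purely illustrative, not what the Nvidia demo actually computed): for a diffuse surface with albedo a, each extra bounce adds another factor of a worth of indirect light. A bounce-limited RT effect truncates that series; a converged path tracer sums all of it.

```python
# Toy illustration of bounce-limited ray tracing vs converged path tracing.
# Not a renderer -- just the geometric series behind multi-bounce diffuse light transport.

def bounced_light(albedo: float, max_bounces: int, direct: float = 1.0) -> float:
    """Indirect light gathered after up to `max_bounces` diffuse inter-reflections."""
    return sum(direct * albedo ** k for k in range(1, max_bounces + 1))

albedo = 0.7  # fairly bright diffuse walls (assumed)

one_bounce = bounced_light(albedo, 1)   # RT-effect style: a single indirect bounce
two_bounce = bounced_light(albedo, 2)
converged  = albedo / (1 - albedo)      # limit of the series: the "path traced" answer

print(f"1 bounce:  {one_bounce:.2f}")   # 0.70
print(f"2 bounces: {two_bounce:.2f}")   # 1.19
print(f"converged: {converged:.2f}")    # 2.33 -- most of the indirect light is in later bounces
```

It's a gross simplification of the rendering equation, but it shows why limited-bounce RT effects lose so much indirect light compared with a path tracer that follows the light all the way.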
 
Path tracing is just a way of using ray tracing.

RT is rubbish at the moment, there is not ONE game where it looks anywhere near as good as a 3dsmax/vray render.

Quake 2 RTX's implementation is actually quite close - you have to do a lot of work to avoid noise, and it is missing some features like caustics, or only has fast approximate implementations of them. The geometry and material limitations of the Quake 2 engine also hold it back.
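
On the noise point, a rough rule of thumb (Monte Carlo error scales with 1/sqrt(samples); the sample counts below are assumptions): a real-time budget of a couple of paths per pixel is vastly noisier than an offline render, which is why real-time path tracing leans so heavily on temporal and spatial denoisers.

```python
# Rough Monte Carlo noise scaling: standard error falls as 1/sqrt(samples per pixel).
# Sample counts below are assumptions for illustration.
import math

offline_spp  = 1024  # offline-style sample count
realtime_spp = 2     # assumed real-time path-tracing budget

noise_ratio = math.sqrt(offline_spp / realtime_spp)
print(f"~{noise_ratio:.0f}x more noise per pixel at {realtime_spp} spp than at {offline_spp} spp")
```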
 