
VRAM arguments, 4GB is not enough!

4GB cards will be fine for many years to come.

Nowadays caching can make it look like there's going to be a problem even when there isn't.
I no longer use overlays. In fact, I removed Afterburner long ago due to the bugs when used with Fiji, and I'm better for it.
Now that AMD have introduced a way to turn the power saving feature off in the Crimson software, I do not have any overclocking programs installed and do not monitor anything.
If I am having an issue with how a game runs, I'll turn on either Fraps or Steam's FPS counter to first check the FPS.
I only have a 60 Hz screen anyway, so as long as what I play keeps its minimums in the high 40s or better I don't seem to be able to tell a difference unless I'm monitoring it (there are exceptions, such as Project CARS, which I need to keep slightly higher for it to flow right).
Even at 1080p there are games that can't manage to keep the FPS above the high 40s, but I've long since come to terms with the fact that I can't just max everything out and get a playable experience. I only need to tweak one or two settings to fix it, though.
There are games where 4GB of HBM is not enough and I have to turn settings down even though the actual FPS is still good enough, but in truth the ability to see a visual difference between the two settings is usually beyond me unless I take a screenshot and scrutinise it.

At the end of the day, 4GB cards will be fine for many years to come.
 
Well, the thread is about 4GB being enough for 1080p; you're talking about 4K and a 6GB card. 4K is four times 1080p, so it wouldn't really be that surprising if 6GB wasn't enough in some games at 4K, just like 1.5GB probably wouldn't be enough in some games at 1080p.
Anyway, you haven't actually provided any evidence that lack of VRAM was causing your issue; you're just saying it was.

This is why you don't see any Fiji scores in the ROTTR bench thread.

Single Fury X @2160p bench thread settings.

DX11 [screenshot: loL3bD7.png]

DX12 [screenshot: nUD0ee1.jpg]

Other problems encountered:

The DX12 run had memory artifacts despite being at stock.

DX11 was unable to do a 4-way CF run as the drivers kept crashing.

By comparison, Titan Xs have no problem running 4-way SLI and will average well over 60 FPS @2160p.
 
This.

I was playing Tomb Raider last night at 3440x1440 near max settings on a 980 Ti and the VRAM usage was ~1.5GB; how people can believe 4GB isn't enough for 1080p is incredible.
Some games just use all the VRAM they can, which is not an issue and doesn't mean you need more VRAM...
Even if you were having performance issues and the VRAM was maxed out, that does not mean the lack of VRAM is causing the performance issue(s); the game could simply be caching, and the performance issue(s) could be caused by something else (poor optimisation, driver issue, corrupt game install, etc.). More in-depth testing needs to be done.
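
To make that "more in-depth testing" concrete, here is a minimal sketch of one way to do it: log VRAM usage and frame times while playing (most monitoring tools can export a CSV), then check whether the hitches actually line up with the card being nearly full. The file name and column names (sensor_log.csv, vram_mb, frametime_ms) are hypothetical placeholders, not any real tool's output format.

[CODE]
import csv

# Minimal sketch: check whether frame-time spikes coincide with the card
# being nearly full. "sensor_log.csv" and the columns "vram_mb" and
# "frametime_ms" are hypothetical -- adjust to whatever your tool exports.
CARD_VRAM_MB = 4096            # e.g. a Fury / Fury X
NEAR_FULL_MB = 0.95 * CARD_VRAM_MB
SPIKE_MS = 33.3                # frames slower than ~30 fps count as hitches

spikes_near_full = 0
spikes_with_headroom = 0
with open("sensor_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        vram = float(row["vram_mb"])
        frametime = float(row["frametime_ms"])
        if frametime > SPIKE_MS:
            if vram >= NEAR_FULL_MB:
                spikes_near_full += 1
            else:
                spikes_with_headroom += 1

print(f"hitches with VRAM nearly full:   {spikes_near_full}")
print(f"hitches with plenty of headroom: {spikes_with_headroom}")
# If most hitches happen with headroom to spare, high VRAM usage is probably
# just caching and the stutter is coming from something else.
[/CODE]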

Rise of the Tomb Raider is the most high-profile example.
When you use the highest texture setting, a warning is flashed on screen telling you this setting will need more than 4GB of VRAM.
If you use that setting with a Fury (4GB of HBM) you experience problems with how it runs; even though the FPS is still good, it stutters, sometimes badly. This is at 1080p.
Turning the texture setting down one level solves the problem, and personally I have been unable to tell the difference in how it looks.
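
For anyone who wants to put numbers on that stutter rather than eyeball it, the rough sketch below compares two frame-time logs, one from each texture setting, and reports average FPS alongside 1% lows. It assumes a Fraps-style frametimes CSV (frame index plus a cumulative millisecond timestamp per line); the file names are placeholders, and you should check your own log's format before trusting the output.

[CODE]
# Rough sketch: quantify the stutter at two texture settings instead of
# eyeballing it. Assumes Fraps-style "frametimes" CSVs where each line after
# the header is "frame_index, cumulative_ms"; the file names are placeholders.

def frame_times_ms(path):
    with open(path) as f:
        next(f)                                   # skip the header line
        stamps = [float(line.split(",")[1]) for line in f if line.strip()]
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarise(label, ft):
    worst = sorted(ft, reverse=True)[: max(1, len(ft) // 100)]  # slowest 1%
    avg_fps = 1000 * len(ft) / sum(ft)
    low_1pct_fps = 1000 / (sum(worst) / len(worst))
    print(f"{label}: avg {avg_fps:.1f} fps, 1% low {low_1pct_fps:.1f} fps")

summarise("Very High textures", frame_times_ms("frametimes_veryhigh.csv"))
summarise("High textures", frame_times_ms("frametimes_high.csv"))
# Similar averages but much worse 1% lows on the higher setting is the classic
# signature of running out of VRAM rather than raw GPU grunt.
[/CODE]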
 
Nowadays caching can make it look like there's going to be a problem even when there isn't.
I no longer use overlays. In fact, I removed Afterburner long ago due to the bugs when used with Fiji, and I'm better for it.
Now that AMD have introduced a way to turn the power saving feature off in the Crimson software, I do not have any overclocking programs installed and do not monitor anything.
If I am having an issue with how a game runs, I'll turn on either Fraps or Steam's FPS counter to first check the FPS.
I only have a 60 Hz screen anyway, so as long as what I play keeps its minimums in the high 40s or better I don't seem to be able to tell a difference unless I'm monitoring it (there are exceptions, such as Project CARS, which I need to keep slightly higher for it to flow right).
Even at 1080p there are games that can't manage to keep the FPS above the high 40s, but I've long since come to terms with the fact that I can't just max everything out and get a playable experience. I only need to tweak one or two settings to fix it, though.
There are games where 4GB of HBM is not enough and I have to turn settings down even though the actual FPS is still good enough, but in truth the ability to see a visual difference between the two settings is usually beyond me unless I take a screenshot and scrutinise it.

At the end of the day, 4GB cards will be fine for many years to come.

Same, I stopped obsessing long ago as I wasn't enjoying gaming; I was constantly monitoring and looking for problems that don't exist.

Now I just use the Steam FPS counter, and the PerfOverlay command in BF4 if needed; if there is a problem I turn settings down and enjoy the game, simple as that.

All I care about is that my next card can push 1440p at higher FPS than my 290X whilst being cool and quiet. If 4GB can do that then fine; if it can't then I will rely on trusted review sites to point this out when cards are released, not people on forums thinking the way a game is coded is a problem, i.e. some games will use all the RAM available.
 
Same, I stopped obsessing long ago as I wasn't enjoying gaming; I was constantly monitoring and looking for problems that don't exist.

Now I just use the Steam FPS counter, and the PerfOverlay command in BF4 if needed; if there is a problem I turn settings down and enjoy the game, simple as that.

All I care about is that my next card can push 1440p at higher FPS than my 290X whilst being cool and quiet. If 4GB can do that then fine; if it can't then I will rely on trusted review sites to point this out when cards are released, not people on forums thinking the way a game is coded is a problem, i.e. some games will use all the RAM available.

Same here.
My current Fury Tri-X is both cool and quiet, unlike the 290X Gaming edition it replaced. Not worrying about that, along with how much better the drivers have been for me over the last few months with Crimson, has improved the gaming experience immensely.
I'm planning on going 21:9 at some point soon(ish), so I will aim for a card with more RAM next time, but for now my Fury runs 1440p on a 1080p screen lovely.
 
Well, the thread is about 4GB being enough for 1080p; you're talking about 4K and a 6GB card. 4K is four times 1080p, so it wouldn't really be that surprising if 6GB wasn't enough in some games at 4K, just like 1.5GB probably wouldn't be enough in some games at 1080p.
Anyway, you haven't actually provided any evidence that lack of VRAM was causing your issue; you're just saying it was.

ROTTR 1080p Bench thread settings.

DX11 [screenshot: YRa3dE8.jpg]

DX11 Memory [screenshot: 9Sw0Vzp.jpg]

DX12 [screenshot: Vz4VrRZ.jpg]

DX12 Memory [screenshot: hBUxRgy.jpg]

Some things I noticed:

DX11 performance is better than DX12.

DX11 memory usage is a lot higher, where it is swapping it out to system memory.

DX12 does not swap out memory; this could possibly explain the worse overall performance.
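
If anyone wants to check that swapping behaviour for themselves, here is an illustrative sketch that scans a monitoring log for the peak dedicated VRAM and peak shared (system) memory the GPU touched during a run. The file name and column names (memory_log.csv, dedicated_mb, shared_mb) are assumptions; map them onto whatever your monitoring tool actually exports.

[CODE]
import csv

# Illustrative only: look for signs that a run is spilling out of dedicated
# VRAM into system memory. "memory_log.csv" and the columns "dedicated_mb" /
# "shared_mb" are assumptions -- map them to whatever your tool logs.
with open("memory_log.csv", newline="") as f:
    rows = list(csv.DictReader(f))

peak_dedicated = max(float(r["dedicated_mb"]) for r in rows)
peak_shared = max(float(r["shared_mb"]) for r in rows)

print(f"peak dedicated VRAM used:            {peak_dedicated:.0f} MB")
print(f"peak shared (system) memory for GPU: {peak_shared:.0f} MB")
if peak_shared > 500:   # arbitrary threshold for a "significant" spill-over
    print("This run is leaning on system memory -- expect hitching while it swaps.")
[/CODE]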
 
This is why you don't see any Fiji scores in the ROTTR bench thread.

Single Fury X @2160p bench thread settings.

DX11 [screenshot: loL3bD7.png]

DX12 [screenshot: nUD0ee1.jpg]

Other problems encountered:

The DX12 run had memory artifacts despite being at stock.

DX11 was unable to do a 4-way CF run as the drivers kept crashing.

By comparison, Titan Xs have no problem running 4-way SLI and will average well over 60 FPS @2160p.

It would be hardly surprising if a 4GB card struggled at 4K, especially given the settings used in the ROTTR bench thread.

Rise of the Tomb Raider is the most high-profile example.
When you use the highest texture setting, a warning is flashed on screen telling you this setting will need more than 4GB of VRAM.
If you use that setting with a Fury (4GB of HBM) you experience problems with how it runs; even though the FPS is still good, it stutters, sometimes badly. This is at 1080p.
Turning the texture setting down one level solves the problem, and personally I have been unable to tell the difference in how it looks.


Could the highest texture setting be using uncompressed textures, which could cause more than 4GB to be used?

What happens with a GTX 980 on very high texture settings?
 
That is why I have posted 1080p runs as well.

The Fury X struggles @1080p as well on max settings.

Having a quick read through the bench thread shows that people with Nvidia 4GB cards don't have the same problem as Fury / Fury X users, suggesting that the problem lies elsewhere (a memory management issue or a driver issue, perhaps).

Don't you find that odd?
I do, unless Nvidia cards have much better memory management (even though they are using GDDR5 compared to HBM)?!
 
Same here.
My current Fury Tri-X is both cool and quiet, unlike the 290X Gaming edition it replaced. Not worrying about that, along with how much better the drivers have been for me over the last few months with Crimson, has improved the gaming experience immensely.
I'm planning on going 21:9 at some point soon(ish), so I will aim for a card with more RAM next time, but for now my Fury runs 1440p on a 1080p screen lovely.

My favorite game at the moment is the new Master of Orion game.

It uses a whopping 1GB of memory maxed out @2160p. :D

What people sometimes forget is that ultimately it is about the game, not the graphics. :)
 
Having a quick read through the bench thread shows that people with Nvidia 4GB cards don't have the same problem as Fury / Fury X users, suggesting that the problem lies elsewhere (a memory management issue or a driver issue, perhaps).

Don't you find that odd?
I do, unless Nvidia cards have much better memory management (even though they are using GDDR5 compared to HBM)?!

6GB 980 Tis sometimes have problems on the bench at the higher resolutions.

4GB cards with GDDR5 do seem to work a bit better than HBM1-equipped cards, but that also highlights another point: HBM1 has its weaknesses too.

Comparing the 4GB 980s to the 980 Tis does show that the 980s are losing performance @1080p too.

As to memory management on 4GB cards, I have found (only my personal view) that AMD have done better, comparing 290Xs to 980s, when it comes to heavy workloads.
 
Having a quick read through the bench thread shows that people with Nvidia 4GB cards don't have the same problem as Fury / Fury X users, suggesting that the problem lies elsewhere (a memory management issue or a driver issue, perhaps).

Don't you find that odd?
I do, unless Nvidia cards have much better memory management (even though they are using GDDR5 compared to HBM)?!
Or, as per some examples in the past, could it be that despite being the "same graphics settings", the graphical detail that Nvidia actually renders and outputs is less than AMD's? ;)
 
6GB 980 Tis sometimes have problems on the bench at the higher resolutions.

4GB cards with GDDR5 do seem to work a bit better than HBM1-equipped cards, but that also highlights another point: HBM1 has its weaknesses too.

Comparing the 4GB 980s to the 980 Tis does show that the 980s are losing performance @1080p too.

As to memory management on 4GB cards, I have found (only my personal view) that AMD have done better, comparing 290Xs to 980s, when it comes to heavy workloads.

The GTX 980s are doing just fine; in fact an overclocked 980 easily beats a Fury X at the same settings, which is obviously not what you would expect.

If anything, shouldn't the HBM cards have an advantage, given the quicker transfer rates / higher bandwidth, with regard to swapping textures in and out?

Or, as per some examples in the past, could it be that despite being the "same graphics settings", the graphical detail that Nvidia actually renders and outputs is less than AMD's? ;)

Slander! :o:p
 
I was playing ROTTR @2160p on a Kingpin 980 Ti and the lack of memory was causing a drop in performance.

I'm pretty sure if the card had 12GB of RAM it would still struggle; it's the POWER the card has, not the RAM, that's causing it to be slow at 4K.
 
I'm pretty sure if the card had 12GB of RAM it would still struggle; it's the POWER the card has, not the RAM, that's causing it to be slow at 4K.

Kingpin 980 Ti v TX

My 980 Ti is slightly faster @1080p and 1440p but the TX wins @2160p.


If you have not got enough memory, SLI becomes pointless; if you do have 12GB available then SLI gives good FPS @2160p.




4 GPU

  1. Score 66.95, Min 23.06, GPU TitanX @1430/1952, CPU 5960X @4.0, Kaapstad Link DX11 364.51 Drivers
 
This debate is beyond long in the tooth. What Drunkenmaster is saying is true, though. OSD monitoring tools can only tell you the amount of vbuffer that has been requested by the GPU, not what is actually being used. In effect, the OSD number is less than meaningless. You aren't able to see exactly how the game is handling discarded buffers, or whether the developer is using the most efficient buffering method for the desired usage type (particle systems, props, etc.).

Essentially, for a given setting games will require what is needed to perform without hitching, and that is what matters. Throwing around numbers seen on screen with no understanding is the stuff of fairy dust. A rule of thumb is to scrap heavy post-processing and multi-sampling; these are resource-intensive and, in most instances, really only act as pub talk for being able to push them. With all that said, I'd still rather have more vbuffer than less.
 
This debate is beyond long in the tooth. What Drunkenmaster is saying is true, though. OSD monitoring tools can only tell you the amount of vbuffer that has been requested by the GPU, not what is actually being used. In effect, the OSD number is less than meaningless. You aren't able to see exactly how the game is handling discarded buffers, or whether the developer is using the most efficient buffering method for the desired usage type (particle systems, props, etc.).

Essentially, for a given setting games will require what is needed to perform without hitching, and that is what matters. Throwing around numbers seen on screen with no understanding is the stuff of fairy dust. A rule of thumb is to scrap heavy post-processing and multi-sampling; these are resource-intensive and, in most instances, really only act as pub talk for being able to push them. With all that said, I'd still rather have more vbuffer than less.


I would almost agree, but the numbers on screen (or in monitoring software) show what is being 'used', not what is 'needed', and that is a big difference. If a program/game decides to fill the extra memory a card has with textures, then it is using that memory; if, on the other hand, it won't play without stutter/frame drops unless it has those textures, then it needs them. But you are quite right: these on-screen memory counters and other monitoring software generally count what is used rather than what is needed.
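
That 'used vs needed' distinction is easy to show with a toy model. The sketch below is purely illustrative (no real graphics API involved): a streaming-style texture cache keeps old textures resident as long as there is spare VRAM, so the counter an overlay would report creeps up towards the full 4GB even though only a small working set is needed to draw any given frame.

[CODE]
from collections import OrderedDict

# Toy model of "used vs needed". A game-style streaming cache happily fills
# whatever VRAM exists with textures it *might* reuse; the amount it actually
# needs each frame (the working set) is far smaller.
class TextureCache:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.resident = OrderedDict()        # texture_id -> size_mb (LRU order)
        self.used_mb = 0

    def touch(self, tex_id, size_mb):
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)
            return
        # Keep old textures around as long as there is spare VRAM (caching),
        # only evicting when the card is genuinely full.
        while self.used_mb + size_mb > self.capacity_mb and self.resident:
            _, freed = self.resident.popitem(last=False)
            self.used_mb -= freed
        self.resident[tex_id] = size_mb
        self.used_mb += size_mb

cache = TextureCache(capacity_mb=4096)
for frame in range(1000):
    # ~50 textures (8 MB each) are visible per frame; the set shifts over time.
    working_set = [(frame // 100) * 50 + i for i in range(50)]
    for tex in working_set:
        cache.touch(tex, size_mb=8)

print(f"'used' (what an OSD reports):      {cache.used_mb} MB")
print(f"'needed' this frame (working set): {50 * 8} MB")
# The counter reads near 4 GB even though only ~400 MB is needed per frame.
[/CODE]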
 