
VRAM use in Crossfire X Setups

Hello folks.

I currently have a 4870 512MB card, and was considering buying another for Crossfire.

I have heard in various places that the accessible VRAM in such a setup would only be that of one card, resulting in a total of 512MB (and in the case of pairing a 1GB card with a 512MB card, only 512MB would be used).

Could anyone explain to me: A) is this correct, and B) why does this happen?

I would like to have 1GB of VRAM available to me for games. If I cannot achieve this by buying another 512MB card, then I will sell my current one and purchase a 1GB card.

Cheers.
 
No idea of the hows and whys, but it's unfortunately true. Not only should it use both, but it should use them as DDR for extra speed :p
 
Indeed they should! I am wondering if this behaviour is a hardware limitation or something that may be fixed with future driver updates.

Just speculating here, but it may be that there is not enough bandwidth between the cards to use both sets of memory.
 
It's not something that's going to change...

If you want 1GB of VRAM usable by games then you'll have to sell your current card and buy a new setup... which I would advise, as you're going to make the most off your current card selling it now, and 512MB is starting to get a little short if you play at high res/settings on some newer games.
 
Unfortunately it's not something that's going to change any time soon.
The reason for this is that both cards need access to the same texture information, so it has to be stored on both cards. If it were kept on one card only you'd have a lot of swapping and slowdown, and the process would be far too 'laggy'. It's nothing to do with bandwidth between the cards, but more to do with latency and memory addressing. It's not ideal as far as the consumer goes, but having to swap individual textures to and fro across the cards would be even worse for smooth gameplay. Imagine an old computer without enough RAM having to swap to and from the hard disc, and you're about there!
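To put the mirroring point in concrete terms, here's a minimal Python sketch of how the usable pool works out under that scheme - the card sizes are just example figures, not measurements from any driver:

    # Each GPU keeps its own full copy of textures and buffers under AFR-style
    # CrossFire/SLI, so the usable pool is capped by the smallest card, not the sum.
    def effective_vram_mb(cards_mb):
        return min(cards_mb)

    print(effective_vram_mb([512, 512]))    # two 512MB 4870s -> 512
    print(effective_vram_mb([1024, 512]))   # a 1GB card paired with a 512MB card -> 512
    print(effective_vram_mb([1024]))        # a single 1GB card -> 1024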
 
Unfortunately it's not something that's going to change.

This.

Although I thought both nVidia and ATi were looking at ways around this?
 
They are, although if they'd made any breakthroughs we'd have heard about it - as marketing if nothing else - so it's safe to assume it's not going to be solved any time soon. My first line ideally should have been 'going to change anytime soon'. :)

There were some rumours that this is partially what sideport on the ATI cards was for, but who knows. Either way it's not likely to change soon.
 

AFAIK this was one of the reasons for sideport (that, and a potential Hydra-style multi-GPU rendering alternative to AFR/SFR), and they've evidently decided it wasn't worth it.
 
OK thanks for the info there, I understand it now.
Looks like I will be upgrading the card. Tell me, will 1GB cards cut it for the foreseeable future? I notice the X2 cards have 2GB on them. Are there any games out there that have textures running up to that size? Also, I game at 1900x1200.

Cheers
 
So where does Crossfire X come into it - you could have a 4870 (1GB) and onboard graphics with only 128MB. Surely the whole thing wouldn't be reduced to only 128MB? I know the onboard doesn't give much of a boost, but the marketing claims are that it does help a bit?
 
The X2 cards count as 1GB cards, dreadhead - same as with dual cards, except they're in one package, so to speak. There are games that will use 500MB+ even without AA. I know Stalker with the Complete mod and all settings up uses a bit over 500MB, as I do get texture stuttering sometimes (the in-game console actually lists it as 800MB used sometimes, but ATI texture compression is more efficient than NVidia's, so I'd guess it's more like 600-700MB in reality). I'm considering a minor upgrade myself for when more texture-heavy games come along (because they do look better).
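For a rough idea of how a few hundred MB of textures adds up (and why compression muddies the reported figures), here's a back-of-envelope Python sketch - the texture size and count are assumptions for illustration, not Stalker's actual assets:

    # Uncompressed RGBA8 versus DXT5/BC3 block compression (16 bytes per 4x4 block,
    # i.e. 1 byte per texel). The mipmap chain adds roughly a third on top.
    def texture_mb(width, height, bytes_per_texel, mip_factor=4/3):
        return width * height * bytes_per_texel * mip_factor / (1024 ** 2)

    rgba8 = texture_mb(2048, 2048, 4)   # about 21 MB per texture
    dxt5 = texture_mb(2048, 2048, 1)    # about 5 MB per texture
    print(f"150 textures: {150 * rgba8:.0f} MB uncompressed vs {150 * dxt5:.0f} MB in DXT5")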
 
So where does Crossfire X come into it - you could have a 4870 (1GB) and onboard graphics with only 128MB. Surely the whole thing wouldn't be reduced to only 128MB?

Unfortunately I'm not sure how it works in those circumstances. Because the additional chip contributes so little and is only used at low resolutions, they may have some bandwidth-heavy temporary measure that doesn't work for the high-end cards, which require a hell of a lot more bandwidth and lower latencies.
 
CrossfireX with an onboard card with 128MB would be sheer madness - assuming you could even do it - 99.999% of its workload would be discarded as it couldn't keep up, and it would limit you in so many other ways.
 
It's nothing to do with bandwidth between the cards, but more to do with latency and memory addressing.

What happens with different RAM sticks in a computer then?
 
They're all the same distance from the memory controller - whether that's in the CPU or the northbridge. So latency from stick to stick is identical no matter which one you read data from or write to.

Now in a GPU, we have one GPU and memory on one card, and the same on another. The two frame buffers are both the same distance, and therefore the same latency, from their GPU.

If the data in the two frame buffers was different and each GPU only read from its own card, it would be fine. But if the other GPU needed that data, it would have to be read and transmitted, either across the PCIe bus or the CrossFire/SLI bridge.

This would be horrifically slow compared to taking data from the same card, so you'd get a massive pause while the GPU waits for the information it requested.

Plus the PCIe bus isn't wide enough to act as a good path for memory read/writes, so you'd get a nasty penalty there too.
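As a ballpark illustration of why the remote path hurts, here's a quick Python sketch - the bandwidth figures are rough 2009-era numbers (GDDR5 on a 4870 versus PCIe 2.0 x16), not benchmarks:

    # Time to move one 8MB texture from local VRAM versus across the PCIe bus.
    def transfer_ms(size_mb, bandwidth_gb_s):
        return size_mb / 1024 / bandwidth_gb_s * 1000

    local_gddr5 = 115.0   # roughly 115 GB/s of local memory bandwidth on a 4870
    pcie2_x16 = 8.0       # roughly 8 GB/s theoretical for PCIe 2.0 x16, less in practice

    print(f"local VRAM: {transfer_ms(8, local_gddr5):.2f} ms")
    print(f"over PCIe : {transfer_ms(8, pcie2_x16):.2f} ms")
    # Over an order of magnitude slower, before even counting the latency of the
    # round trip to the other GPU - hence mirroring instead of sharing.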
 