GPU Memory Total Usable?

I'm just wondering if anyone knows if and when the total amount of GPU memory on a multi-GPU setup will stop being, let's say, 4+4=4GB and instead become 4+4=8GB?

I remember seeing something about this some time ago but haven't seen anything of it since. It would be quite interesting to see.
 
I seem to recall seeing something ages ago about AMD (?) working on a system that would allow a GPU access to both system RAM and GPU RAM and basically tie everything together.

I don't see how two 4GB cards could become 8GB, however, as essentially they each need their own 4GB of memory allocated to the work each card is doing.
 
A unified memory pool was rumoured to be a future Mantle feature. I'm not sure if there are any working examples or whether it's pure hypothesis.

But I guess we'll be waiting a while if it is ever going to happen.
 
It was touted as one of the CrossFire modes for Mantle; however, the leading performance mode for CrossFire is still alternate frame rendering (AFR), which requires each card to have its own copy of everything.

Basically, you could never pool the VRAM of two cards and use it to render a single frame, as the latency over PCIe would be huge. However you use the VRAM, it would still be independent of the other card, so the second card would be doing something other than rendering. With AFR the two cards have to hold an exact duplicate of the VRAM contents; with other modes you could uncouple the VRAM, freeing you from being limited to only 4GB, but you couldn't use all 8GB to render a single frame, if that makes sense.
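As a rough sketch of that distinction (plain Python; the numbers and mode names are just for illustration, not anything from Mantle itself):

```python
# Illustrative only: how much VRAM a game can actually address
# under AFR versus a hypothetical pooled mode.
CARD_VRAM_GB = 4
NUM_CARDS = 2

def effective_vram(mode: str) -> int:
    if mode == "AFR":
        # Alternate frame rendering: every card holds a full copy of
        # all textures and buffers, so the capacities don't add up.
        return CARD_VRAM_GB
    if mode == "pooled":
        # Hypothetical pooled mode: each card holds different data, so
        # capacity sums, but any cross-card access has to cross PCIe.
        return CARD_VRAM_GB * NUM_CARDS
    raise ValueError(f"unknown mode: {mode}")

print("AFR:   ", effective_vram("AFR"), "GB")     # 4 GB
print("pooled:", effective_vram("pooled"), "GB")  # 8 GB
```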

Unless AMD reintroduce a type of direct cable connection that gives direct PCB access to both cards' VRAM, in effect turning two single cards into a hybrid single card (even dual-GPU cards would need some modification to allow direct access to the full 8GB by both GPUs).
 
It's a bit like RAID.

When going to two GPUs, you are generally doing it to (hopefully) double your performance.

Two GPUs means twice the processing power, but each GPU keeps a complete copy of all the data, so that it can use all of its own memory bandwidth for itself, without having to worry about its partner.

If we moved to the model you are suggesting, 4+4=8, it implies no duplication of data for textures etc. So, GPU A needs as much access to GPU B's VRAM as its own, and vice versa. This data would have to be transferred over the PCIe bus. Let's take the 290X, with 320GB/s of memory bandwidth; a PCIe 3.0 x16 connector has only 16GB/s in each direction.

So, in the time this GPU can pull 1GB from the other GPU's VRAM, it can pull 20GB from its own. Given that with an 1155 board you are talking PCIe 3.0 x8 for each card, it's now a 1:40 ratio; that is, it's 40 times quicker to read from local VRAM than from the remote VRAM, assuming the PCIe bus is used for NOTHING ELSE.
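Just to lay that arithmetic out (a quick Python sketch using the figures quoted above; these are nominal bandwidth numbers, real-world throughput will be lower):

```python
# Back-of-the-envelope ratio of local VRAM bandwidth to PCIe bandwidth.
LOCAL_VRAM_GBPS = 320.0             # 290X memory bandwidth (GB/s)
PCIE3_X16_GBPS = 16.0               # PCIe 3.0 x16, per direction (GB/s)
PCIE3_X8_GBPS = PCIE3_X16_GBPS / 2  # each card gets x8 on an 1155 board

print(f"x16: local VRAM is {LOCAL_VRAM_GBPS / PCIE3_X16_GBPS:.0f}x faster")  # 20x
print(f"x8:  local VRAM is {LOCAL_VRAM_GBPS / PCIE3_X8_GBPS:.0f}x faster")   # 40x
```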

Given this, you'd slow the system down so much by doing this that it's only practical for each GPU to have its own copy of all the data.

All of this also applies to dual-GPU cards such as the 295X2, as they have a PCIe bridge on the card and the two GPUs still communicate over PCIe.

I don't foresee it ever being cheaper to build a replacement interface for PCIe that's as fast as the GPU memory bus than it is to just add more VRAM, so I don't think this will ever happen.

In some GPGPU workloads the memory bandwidth isn't as crucial, and this approach would work.
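For example, a compute job can be split so each GPU only ever touches its own half of the data and just the small results cross the bus. A minimal sketch of that partitioning idea (plain Python lists standing in for per-GPU buffers; no real GPU API is used here):

```python
# Data-parallel partitioning: each "device" holds only its half of the
# dataset, so 4GB + 4GB of VRAM really does behave like 8GB.
data = list(range(1_000_000))     # the full dataset

half = len(data) // 2
gpu0_buffer = data[:half]         # would live in GPU 0's VRAM
gpu1_buffer = data[half:]         # would live in GPU 1's VRAM

# Each device works on its own chunk independently...
partial0 = sum(x * x for x in gpu0_buffer)
partial1 = sum(x * x for x in gpu1_buffer)

# ...and only the tiny partial results need to cross the PCIe bus.
total = partial0 + partial1
print(total)
```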
 