
We upgrade RAM but not VRAM; why not?

Simple question really - everyone needs a different amount of VRAM, whether because of the placebo of "more is better" or because higher screen resolutions and applications genuinely make use of more; so why don't GPU manufacturers make a top-end card with VRAM modules you can simply "upgrade", the same as you would if you needed more RAM in the base system?

I'm pretty ignorant of the manufacturing processes, so maybe that has a bearing on what's possible, but is it simply a case of being too expensive, or are GPU manufacturers just unwilling to make what seems like a leap in the way GPUs are released (maybe read as: it would take away an "entire" card sale when upgrading within the same generation of technology)?
 
^ This. I'm sure it 'could' be done if they wanted to.

But GPUs have a fairly short useful life these days in terms of the overall grunt (aside from VRAM amount) required by new, more taxing games.

As a personal example, I upgrade my GPU(s) every two years or so, but my motherboard, CPU and RAM only about every five years.
 
And not providing this feature allows Nvidia to stiff Titan X owners for £400 for twenty quid's worth of GDDR5 over the 980 Ti.
 
It would probably make the cards more expensive to produce, and the market for VRAM modules would be very small, so the modules themselves would be expensive to buy.
 
My S3 ViRGE had an empty memory chip socket which I did fill to double the RAM, but that was in the MB range, not GB (late 1990s).

64-bit DRAM or VRAM (VX) memory interface; 2, 4 and 8 (VX) MiB video memory; single-cycle EDO operation
 
But GPUs have a fairly short useful life these days in terms of the overall grunt (aside from VRAM amount) required by new, more taxing games.

Oh I don't know, my 1.5GB 580s in SLI still rock along for me despite running 5760x1080 surround; I can't help thinking that if I could up them to, say, 4GB they'd have a fair bit more life left in them.
 
Bunch of factors. Cost is a big one, as well as connection quality (soldered beats socketed), but also support - GPUs can only address certain capacities depending on their bus.
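
Rough illustration of the addressing point, using assumed (not card-specific) figures: capacity is basically channels on the bus × chips per channel × chip density, and the memory controller has to be designed up front for whichever densities it will accept.

# Back-of-envelope sketch of how bus width bounds VRAM capacity.
# All figures are illustrative assumptions, not a specific card's spec.
bus_width_bits = 256          # e.g. a typical upper-mid-range card
channel_width_bits = 32       # GDDR5 devices present a 32-bit interface
channels = bus_width_bits // channel_width_bits   # -> 8 channels

chip_density_gbit = 4         # 4 Gbit (512 MB) parts assumed here
chips_per_channel = 2         # "clamshell" mode doubles capacity, not bandwidth

capacity_gb = channels * chips_per_channel * chip_density_gbit / 8
print(f"{channels} channels x {chips_per_channel} chips x {chip_density_gbit} Gbit = {capacity_gb:.0f} GB")
# -> 8 GB; a user-fitted module outside the densities the controller was
#    designed and validated for simply wouldn't be addressable.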
 
Bunch of factors. Cost is a big one, as well as connection quality (soldered beats socketed), but also support - GPUs can only address certain capacities depending on their bus.

The same technically applies to motherboards, too.

There is nothing stopping the industry from adopting a standard VRAM slot on graphics cards. GPUs could be designed to support anything from 4GB to 16GB in that slot, though it would be very expensive to design for and cater to a variable VRAM amount in the GPU architecture.
 
Size, cost, power and communication inefficiency - basically all the reasons that have chip designers bringing more features on-die or gradually moving to tech such as HBM/HMC.

Don't forget the unnecessary complexity and challenges of setting up a supply chain when a suitable and superior alternative exists, and the complexity for customers/users themselves. Customer support would be a nightmare, I'm sure. Imagine supporting different generations of GPU hardware, different generations of memory module and memory technology, and different memory speeds as faster chips enter the market - and the effect on GPU performance because a user wanted to reuse an old high-capacity memory module and carry it over to their newly purchased GPU.
 
Oh I don't know, my 1.5GB 580s in SLI still rock along for me despite running 5760x1080 surround; I can't help thinking that if I could up them to, say, 4GB they'd have a fair bit more life left in them.

You clearly don't play any new big games then. Fair enough if that's what you like :)
 
1) The trace lengths from the GPU to the memory are short; extending them to allow socketed memory would make the board much more expensive.
2) Putting DIMM-like slots on the card would destroy the airflow across the card unless you went triple-slot.
3) No one manufactures DIMMs or SODIMMs with GDDR5 on them.
4) If they did, each one would need qualifying and would not be cheap, as it would be a boutique item (not enough demand).
5) Warranty nightmares.
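
To put rough, assumed numbers on point 1 - GDDR5 runs several times faster per pin than a desktop DDR3 DIMM, which is exactly why the traces have to stay short and tightly matched:

# Why trace length matters: per-pin data rates (illustrative figures only).
gddr5_rate_gbps_per_pin = 7.0    # ~7 Gb/s per pin on a 2015-era high-end card
ddr3_rate_gbps_per_pin = 1.6     # DDR3-1600 on a typical desktop DIMM

bus_width_bits = 384             # e.g. a 384-bit high-end card
bandwidth_gbs = gddr5_rate_gbps_per_pin * bus_width_bits / 8
print(f"GDDR5 @ {gddr5_rate_gbps_per_pin} Gb/s/pin x {bus_width_bits}-bit = {bandwidth_gbs:.0f} GB/s")
print(f"Per-pin rate vs DDR3-1600: {gddr5_rate_gbps_per_pin / ddr3_rate_gbps_per_pin:.1f}x")
# -> 336 GB/s; a socket adds stubs and length mismatch that the much slower,
#    more tolerant DDR3 DIMM interface can absorb but GDDR5 signalling can't.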
 
1) The trace lengths from the GPU to the memory are short; extending them to allow socketed memory would make the board much more expensive.
2) Putting DIMM-like slots on the card would destroy the airflow across the card unless you went triple-slot.
3) No one manufactures DIMMs or SODIMMs with GDDR5 on them.
4) If they did, each one would need qualifying and would not be cheap, as it would be a boutique item (not enough demand).
5) Warranty nightmares.

Bah. Killjoy. :D
 
You clearly don't play any new big games then. Fair enough if that's what you like :)

GTA V runs well on a single screen at high-ish settings; I only have to drop it to low once I enable surround, which points to a lack of VRAM in my book.

Tbh I find the same issue with most of my titles. Battlefront is another example: it runs great at high settings on one screen (despite its 2GB minimum requirement) but struggles with surround enabled.
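
For what it's worth, the raw render-target maths backs that up. A minimal sketch, assuming 32-bit colour and a handful of full-resolution buffers (real engines use more buffers and varied formats):

# Rough framebuffer/render-target footprint at single-screen vs surround.
# Assumes 4 bytes per pixel and 6 screen-sized buffers (G-buffer, depth,
# post-processing targets); real games vary widely.
def target_mb(width, height, buffers=6, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * buffers / 1024**2

print(f"1920x1080 : {target_mb(1920, 1080):.0f} MB of render targets")
print(f"5760x1080 : {target_mb(5760, 1080):.0f} MB of render targets")
# Surround triples every screen-sized buffer before textures are even
# counted - easy to see how 1.5 GB cards start to run out.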
 
1) The trace lengths from the GPU to the memory are short; extending them to allow socketed memory would make the board much more expensive.
2) Putting DIMM-like slots on the card would destroy the airflow across the card unless you went triple-slot.
3) No one manufactures DIMMs or SODIMMs with GDDR5 on them.
4) If they did, each one would need qualifying and would not be cheap, as it would be a boutique item (not enough demand).
5) Warranty nightmares.

Seems like a waste of space really; I meant more along the lines of each actual memory module footprint on the board surface being a mini flat socket, as opposed to a traditional DIMM/SODIMM slot on a PCB - buy and upgrade each socket as you go along / as you require (re-attaching the cooler locks them in place, with blanks over any unused sockets).

Totally get it being a niche-market item, and all the other reasons surrounding such a hypothetical product (that, and you're limited by the grunt of the card as to how much extra VRAM could actually be made use of before the next series came along).
 
GPUs are designed around bandwidth requirements. CPUs for general consumer use are relatively insensitive to memory bandwidth these days, but GPUs are not. For example, one of the ways GPU products are segmented is by hobbling bandwidth. How can the manufacturer/designer guarantee a uniformly performing product for the end customer if the customer can end up installing any number of memory modules (which influences the GPU's bus width) at any historically sold speed and operating voltage? It would cause more problems than it would solve.
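
To illustrate the segmentation point with assumed figures (not quoting any particular card): the same GPU sold with fewer active memory channels loses bandwidth in direct proportion, and that's exactly the knob user-fitted modules would be turning at random.

# How many populated/active memory channels set bandwidth (illustrative numbers).
def bandwidth_gbs(active_channels, rate_gbps_per_pin=7.0, channel_width_bits=32):
    return active_channels * channel_width_bits * rate_gbps_per_pin / 8

for channels in (8, 6, 4):   # e.g. 256-, 192- and 128-bit configurations
    print(f"{channels * 32}-bit bus: {bandwidth_gbs(channels):.0f} GB/s")
# A card that might be populated with anywhere from 4 to 8 channels of
# memory of unknown speed couldn't be sold with one performance figure.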
 