
GDDR7 announced, 1.4x over GDDR6.

GDDR7 achieves an impressive bandwidth of 1.5 terabytes per second (TBps), 1.4 times GDDR6's 1.1TBps, and features a boosted per-pin speed of up to 32Gbps.
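The quoted 1.5TBps follows directly from the per-pin speed. Here's a minimal sketch of the arithmetic; the 384-bit bus width is my assumption for illustration (typical of a high-end card), not part of the announcement:

```python
# Peak bandwidth = per-pin speed * bus width / 8 bits per byte.
# 384-bit bus width is an assumed example, not an announced spec.

def peak_bandwidth_tbps(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in terabytes per second."""
    return gbps_per_pin * bus_width_bits / 8 / 1000

# 32 Gbps/pin on a 384-bit bus -> 1.536 TB/s, matching the quoted ~1.5 TBps
print(f"{peak_bandwidth_tbps(32, 384):.3f} TB/s")
```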

linky

You forgot to mention that GDDR7 also does this while using 20% lower voltages, and it achieves this faster speed and lower power draw on the same process node. Memory transistors no longer scale well with newer nodes, so there isn't much cost-benefit to using smaller ones; GDDR7 chips are on the same node as GDDR6, which makes this a very nice gain.
 
So what happened to HBM which some were insisting was going to render GDDR obsolete...
 
So what happened to HBM which some were insisting was going to render GDDR obsolete...

HBM is certainly faster, but it's also much more expensive to use, so it's reserved for top cards only.

Nvidia's H100 GPU uses HBM2e VRAM, and its bandwidth is over 2TBps, far more than GDDR7 can do.
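HBM gets there by aggregating several very wide stacks on the package. A rough sketch of that arithmetic; the stack count and per-pin speed below are illustrative assumptions, not official H100 specs:

```python
# HBM aggregate bandwidth: each stack exposes a 1024-bit interface, and a
# GPU package carries several stacks side by side. Stack count and per-pin
# speed here are assumed for illustration, not taken from an Nvidia spec.

def hbm_bandwidth_tbps(stacks: int, pins_per_stack: int, gbps_per_pin: float) -> float:
    """Return aggregate HBM bandwidth in terabytes per second."""
    return stacks * pins_per_stack * gbps_per_pin / 8 / 1000

# 5 stacks * 1024 bits * ~3.2 Gbps/pin ~= 2.05 TB/s, in line with "over 2TBps"
print(f"{hbm_bandwidth_tbps(5, 1024, 3.2):.2f} TB/s")
```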

Also, future cards will use HBM3, which has a theoretical bandwidth of as much as 5TBps.

If you need as much bandwidth as you can get your hands on, HBM is the only choice. GDDR7 doesn't compete in absolute performance, but the cost difference is very big. HBM is a Bugatti Chiron and GDDR is a BMW M2: in most cases the M2 is going to be fast enough for the job, but if you really must have the highest performance at any cost, the Bugatti is the only choice.
 
Product segmentation. It's on all the high end compute GPUs.

It was a somewhat rhetorical question - there were a few posters on here years back insistent that HBM would take over the world, who rubbished me for saying GDDR would continue for a while yet on consumer cards.

EDIT: Though to be fair I did think we'd see some form of stacked memory on the interposer on the halo cards as a regular thing by this point.
 
I still have an AMD Fury X and it has the same bandwidth as my 6900XT which is 6+ years newer. HBM3 was supposed to be the lower cost mainstream HBM version but that never happened.
 
I still have an AMD Fury X and it has the same bandwidth as my 6900XT which is 6+ years newer. HBM3 was supposed to be the lower cost mainstream HBM version but that never happened.


HBM memory currently costs 5 times more than GDDR. At the end of last year HBM was 3 times more expensive, but due to demand for AI GPUs, suppliers have increased prices and HBM now costs 5 times more.

Therefore, if Nvidia had used HBM instead of GDDR for the RTX 4090, for example, it would have increased the cost of production by $300 USD.

And maybe RTX 4090 owners would have been OK paying $300 more for faster memory, but the question is whether that memory would have actually improved gaming performance.
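For what it's worth, the $300 figure is consistent with rough back-of-envelope math. The per-GB GDDR price below is purely my assumption; only the 5x multiplier comes from the post above:

```python
# Back-of-envelope cost arithmetic. GDDR_PRICE_PER_GB is an illustrative
# assumption; the 5x HBM multiplier is the figure quoted in the thread.

VRAM_GB = 24                 # RTX 4090 memory capacity
GDDR_PRICE_PER_GB = 3.0      # assumed price, USD per GB
HBM_MULTIPLIER = 5           # HBM ~5x the price of GDDR (per the post)

gddr_cost = VRAM_GB * GDDR_PRICE_PER_GB
hbm_cost = gddr_cost * HBM_MULTIPLIER

# Difference lands near the quoted ~$300 extra production cost
print(f"extra cost: ${hbm_cost - gddr_cost:.0f}")
```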

 

Nvidia is currently sampling GDDR7 for next-generation Blackwell RTX 5000 GPUs, one month after Samsung's announcement.

So what happened to GDDR6W, Samsung's answer to HBM2, announced back in Nov 2022? Oddly, it seems nobody sampled GDDR6W for consumer, AI or HPC GPUs.
 

Micron revealed its GDDR7 memory roadmap.

GDDR7 and GDDR7X will surpass HBM3E bandwidth and render the future HBM4 and HBM4E obsolete.

Dunno where you read that, but it's not true. HBM is way ahead of GDDR in terms of raw bandwidth.

The Nvidia H100 GPU uses HBM2 memory and already has over 2TB/s bandwidth, so why are you saying GDDR7 will beat HBM4, when its theoretical max performance in 2026 only matches HBM2 memory technology from 2019?

I'm going to assume you don't understand the data, because your link only provides GPU memory bandwidth numbers for GDDR7, not HBM. If you do the math, you'll see that by the time GDDR7 is doing 2TB/s, HBM4E GPUs will be doing 10TB/s ;)
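The gap comes from interface width: even a very wide GDDR bus is a single few-hundred-bit interface, while an HBM package aggregates multiple 2048-bit stacks. A sketch under assumed future per-pin speeds and stack counts (none of these are announced specs):

```python
# Peak bandwidth of one wide GDDR7 bus vs a multi-stack HBM4-class package.
# Per-pin speeds and the stack count are illustrative assumptions about
# future parts, not announced specifications.

def bandwidth_tbps(interfaces: int, bits_per_interface: int, gbps_per_pin: float) -> float:
    """Return aggregate bandwidth in terabytes per second."""
    return interfaces * bits_per_interface * gbps_per_pin / 8 / 1000

gddr7_fast = bandwidth_tbps(1, 384, 40)    # ~1.92 TB/s on a 384-bit bus
hbm4_pkg = bandwidth_tbps(6, 2048, 6.4)    # ~9.8 TB/s with six 2048-bit stacks

print(f"GDDR7: {gddr7_fast:.2f} TB/s vs HBM4-class: {hbm4_pkg:.2f} TB/s")
```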
 