
The next GeForce doesn't use HBM2



http://www.fudzilla.com/news/graphics/43873-next-geforce-doesn-t-use-hbm-2
 
I guessed as much. GDDR5 on low and mid range, and high-end products will be GDDR5X again. Then Nvidia will move to GDDR6 when it's out in 2018.
 
So the GeForce Titan XVX BLACK X will use GDDR6 then?

Yeah, I mean 16 Gb/s on a 384-bit bus is 768 GB/s, so it looks like HBM isn't needed yet at these performance levels.

Going forward though I imagine it'll be needed, especially for 4K and beyond.

My guess is HBM will become massively relevant after ~20 Tflops of compute power. You'll need 1 TB/s+ bandwidth at that point.

And of course there are other use cases where HBM is clearly useful, like as cache/memory for iGPUs. I think it's almost a certainty that proper next-gen consoles will have HBM instead of GDDR (maybe they'll do Zen3 + Navi cores in 2020?).
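
For anyone wanting to sanity-check the 16 Gb/s x 384-bit figure above, a minimal Python sketch (the helper name is just illustrative):

```python
def bus_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# 16 Gb/s GDDR on a 384-bit bus, as discussed above
print(bus_bandwidth_gbs(16, 384))  # 768.0 GB/s
```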
 
Best case scenario for Vega is 512 GB/s, more likely in the 400-470 GB/s range going by later rumours.

The Volta Titan will have 768 GB/s of bandwidth using GDDR6, and will be cheaper and easier to manufacture.

Not hard to see what Nvidia will go with.
 

I guess it also depends how much memory they want to go with, and what power consumption they want.

When you're talking about a top-end card like the Titan Volta, it could make sense for them to use HBM for that.

They could do 4 stacks of 4GB to get 16GB and 800-1024 GB/s bandwidth (depending on the speed of HBM they used) while also using less power than GDDR6, so they could clock the core higher.

But certainly for the 1080-level cards, and below, it doesn't make sense.
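
A rough check of the 4-stack idea above; the HBM2 speed grades (1.6-2.0 Gb/s per pin over an assumed 1024-bit interface per stack) are my assumption, not something stated in the post:

```python
STACKS = 4
GB_PER_STACK = 4          # 4 x 4 GB = 16 GB, as suggested above
INTERFACE_BITS = 1024     # assumed HBM2 interface width per stack

for pin_rate_gbps in (1.6, 2.0):  # assumed HBM2 speed grades
    per_stack_gbs = pin_rate_gbps * INTERFACE_BITS / 8
    total_gbs = STACKS * per_stack_gbs
    print(f"{STACKS * GB_PER_STACK} GB at {total_gbs:.0f} GB/s ({per_stack_gbs:.0f} GB/s per stack)")
# ~819-1024 GB/s, roughly the 800-1024 GB/s range mentioned above
```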
 
Is this the same Volta NVidia is struggling to make more than one chip per wafer with, so it's made to order only? :p
 

That's just for the top-end Tesla FP64 and deep learning chip, because it's over 800 mm^2.

And it is a mightily impressive chip for the tasks it's designed for.
 
My guess is HBM will become massively relevant after ~20 Tflops of compute power. You'll need 1 TB/s+ bandwidth at that point.

At some point GDDR will get a node shrink which will significantly improve potential power, heat and/or performance.

HBM is increasingly looking like going the way of Rambus.
 

Remains to be seen really. It's superior in every way other than cost.

Also there's talk of HBM3 being a smaller and cheaper version of HBM2.

So it could be the progression ends up like:

  • HBM1 - establishes the tech; too small capacity-wise, too expensive, OK speed for its time
  • HBM2 - still too expensive, physically too big, solves the capacity problem, very good speed
  • HBM2.5 - HBM2 but cheaper and physically smaller. Also slightly slower at 200 GB/s per stack, but 16GB at 800 GB/s becomes possible at low cost. First generation that makes mainstream sense.
  • HBM3 - not as cheap as HBM2.5 but ridiculously fast, high density and low power. Max capability 64GB at 2.05 TB/s (4 stacks) at lower power than HBM2 (see the sketch below).

EDITED ABOVE: To clarify between HBM2.5 (or 'low cost HBM') and HBM3
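
Those max-config numbers follow from the per-stack figures in the list; a quick sketch (4 GB per HBM2.5 stack is inferred from the 16 GB / 4-stack figure, not stated explicitly):

```python
# Per-stack figures from the list above, multiplied out to 4-stack maximums
configs = {
    "HBM2.5": {"stacks": 4, "gb_per_stack": 4,  "gbs_per_stack": 200},  # 4 GB/stack inferred
    "HBM3":   {"stacks": 4, "gb_per_stack": 16, "gbs_per_stack": 512},
}
for name, c in configs.items():
    print(f"{name}: {c['stacks'] * c['gb_per_stack']} GB at {c['stacks'] * c['gbs_per_stack']} GB/s")
# HBM2.5: 16 GB at 800 GB/s; HBM3: 64 GB at 2048 GB/s (~2.05 TB/s)
```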
 
I think a big advantage of GDDR5(X) for a gaming card is that, even though it uses more power and therefore generates more heat than HBM, that power and heat isn't packed in around the GPU and so is less of a problem to remove.

Gaming cards like to run high clock speeds; we don't want them throttling because of extra heat around the GPU.
 
Remains to be seen really. It's superior in every way other than cost.

Also there's talk of HBM3 being a smaller and cheaper version of HBM2.

So it could be the progression ends up like:

  • HBM1 - establishes the tech; too small capacity-wise, too expensive, OK speed for its time
  • HBM2 - still too expensive, physically too big, solves the capacity problem, very good speed
  • HBM3 - HBM2 but cheaper and physically smaller, first generation that makes mainstream sense


HBM3 is also designed to be slower with higher latencies - that is where the cost saving comes from. You might only get 300 GB/s from a 2-stack HBM3 solution.

The problem with HBM2 is it already doesn't really provide competitive bandwidth without going for 4 stacks, by which point the costs just don't work for even a halo gaming card and a lot of the power savings are lost.
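
For context on the "needs 4 stacks" point, a rough comparison against a 384-bit GDDR6 setup (the 256 GB/s best-case HBM2 stack, i.e. 2.0 Gb/s per pin over an assumed 1024-bit interface, is my assumption):

```python
HBM2_STACK_GBS = 2.0 * 1024 / 8   # assumed best-case HBM2 stack: 256 GB/s
GDDR6_384BIT_GBS = 16 * 384 / 8   # 768 GB/s, as earlier in the thread

for stacks in (2, 4):
    hbm_total = stacks * HBM2_STACK_GBS
    print(f"{stacks}-stack HBM2: {hbm_total:.0f} GB/s vs 384-bit GDDR6: {GDDR6_384BIT_GBS:.0f} GB/s")
# 2 stacks (512 GB/s) falls short of 768 GB/s; only 4 stacks (1024 GB/s) clearly exceeds it
```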
 

I actually double-checked, and that's HBM2.5 (or 'low cost' HBM); HBM3 is a different thing.

So HBM2.5 runs at 3 Gb/s per pin, but over a narrower interface (fewer data pins per stack). You get around 200 GB/s per stack, but it's supposed to be substantially cheaper, smaller and lower power.

So in theory you could get 16GB (or 8GB) with 4 stacks, at 800 GB/s, for a similar cost to GDDR6, but with lower power too.

Then HBM3 is mad. It runs at 4 Gb/s per pin, is smaller and cheaper than HBM2 (but not as cheap as 2.5), and doubles the density too. So 512 GB/s PER STACK, and up to 16GB per stack.

HBM3 max capability is 64 GB at 2.05 TB/s, with smaller footprint and lower power than HBM2.
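
Working the per-stack numbers back from the quoted pin rates; the interface widths (1024-bit for HBM2/HBM3, 512-bit for the low-cost HBM2.5 variant) and HBM2's 2.0 Gb/s pin rate are my assumptions, not from the post:

```python
def stack_bandwidth_gbs(pin_rate_gbps: float, interface_bits: int) -> float:
    """Per-stack bandwidth in GB/s from per-pin data rate and interface width."""
    return pin_rate_gbps * interface_bits / 8

print("HBM2  :", stack_bandwidth_gbs(2.0, 1024))  # 256.0 GB/s per stack (assumed pin rate)
print("HBM2.5:", stack_bandwidth_gbs(3.0, 512))   # 192.0 GB/s, roughly the 200 GB/s figure
print("HBM3  :", stack_bandwidth_gbs(4.0, 1024))  # 512.0 GB/s per stack
# Four 16 GB HBM3 stacks: 64 GB at 4 x 512 = 2048 GB/s, i.e. the ~2.05 TB/s maximum above
```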
 