AMD Locks Down HBM Frequency on Fiji Cards

We don't really have the data to support that one way or the other, and neither can we really see how much impact, if any, architectural changes have on how useful overclocking HBM would be. That might change in the future, but I don't see big enough changes compared to current GPUs to significantly alter the story there. By and large, outside of a limited number of synthetic benchmarks, VRAM overclocking doesn't generally yield much unless you're trying to get that last 0.1% for a benchmark world record.

This is a very crude way of doing it, but if you work on cores and active transistors:

The Fury X is about 45% bigger than a 290X

The 980 Ti is about 37.5% bigger than a 980

If you compare how the 980 and 290X trade blows @2160p and scale it up using the above figures, adding HBM into the equation seems to have made very little difference.

I know the above is a very crude way of doing it and it is best to wait for actual proper benches, but it still raises some questions.
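As a very rough sketch of that scaling argument (the core counts are public specs, but the 980-vs-290X baseline below is a placeholder assumption rather than measured data, so treat it as illustrative only):

Code:
# Back-of-envelope version of the "cores and active transistors" scaling above.
# Core counts are public specs; the 980 vs 290X baseline is a placeholder
# assumption - substitute real 2160p benchmark data to make it meaningful.
cores = {
    "290X":   2816,  # Hawaii stream processors
    "Fury X": 4096,  # Fiji stream processors (~45% more than the 290X)
    "980":    2048,  # GM204 CUDA cores
    "980 Ti": 2816,  # GM200 CUDA cores (37.5% more than the 980)
}

baseline_980_vs_290x = 1.00  # hypothetical: 980 and 290X trading blows at 2160p

scale_fury = cores["Fury X"] / cores["290X"]   # ~1.45
scale_ti   = cores["980 Ti"] / cores["980"]    # 1.375

predicted = scale_fury / (baseline_980_vs_290x * scale_ti)
print(f"Naive core-scaled Fury X vs 980 Ti: {predicted:.2f}x")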
 
This is what I have been thinking about: in theory they will dominate laptop gaming with an APU combined with HBM. The next gen of consoles will be interesting to look at too.

Even if you couldn't upgrade the RAM, I would love a CPU/APU with 8 - 16GB of HBM.

It would be interesting to see if AMD release any Zen CPUs with on-die HBM and dual memory controllers, so you could then expand further with DDR4 etc.

It would also be interesting to see if AMD do something like slotted HBM behind the CPU and bring out a new AT standard for it.

Also on the console side, seeing the size of the Rage Fury Nano, the next gen of consoles with HBM could be tiny, especially looking at the AMD Quantum PC.
 
This is why one should be wary of making any comparison: the two technologies are fundamentally very different. If it were simply a case of manufacturing faster ICs across wider buses on bigger footprints, GDDR5 could stay around forever.

HBM counters a lot more than just the bandwidth restrictions presented by GDDR5, which is why I made it clear in the OP that there are likely very genuine reasons for locking overclocking out. The one thing that does stand, however, is the product placement and maybe a poor choice of words regarding the product. But we know that a lot of things tend to get lost in translation from one department to the next.
 
Controversial post. ;)

HBM is in fact clocked far slower than GDDR5: 500MHz vs 6000MHz.

Of course, the fact that HBM operates over a 4096-bit bus gives it tremendous bandwidth compared with the paltry 512-bit bus on the 390X.
 

1000MHz vs 6000MHz ;)

But RAM has for a long time had a wider interface internally than externally. When we go from DDR2 to DDR3, then DDR4, we are actually doubling the internal on-chip prefetch buffer, effectively increasing the internal width while keeping the internal frequency and the external width the same, and doubling the external clock rate. It's why latencies (in clock cycles) roughly double each generation.

It's just that laying 4096 equal-length traces on a PCB is impossible, so for the link from GDDR die to GPU die you increase the frequency and decrease the width.

With HBM you are laying your traces on a silicon interposer, not a PCB, so it's much easier to run a nice wide bus, and you get rid of the abstraction you had before.
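To put rough peak numbers on that width-versus-clock trade-off (a minimal sketch using the figures quoted in this thread: a 4096-bit HBM bus at 500MHz double data rate, i.e. 1Gbps per pin, against a 512-bit GDDR5 bus at 6Gbps effective):

Code:
# Peak theoretical bandwidth = bus width (bits) x effective data rate per pin / 8.
def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8  # GB/s

hbm1  = peak_bandwidth_gbs(4096, 1.0)  # Fury X: 4096-bit, 500MHz DDR -> 512 GB/s
gddr5 = peak_bandwidth_gbs(512, 6.0)   # 390X: 512-bit, 6 Gbps effective -> 384 GB/s
print(f"HBM1: {hbm1:.0f} GB/s vs GDDR5: {gddr5:.0f} GB/s")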
 
To be fair, on most cards I don't bother overclocking the memory, just the core; it seems to cause more problems for little gain. Whilst it's a shame for enthusiasts not to be able to tinker, I don't think it's the end of the world.

If they turned around and said the entire card was locked down that would be a different matter.
 
Please give me one proper source that supports this. Since when is there some massive overhead in accessing VRAM?

Power efficiency is much greater with HBM, short signal traces etc. It's what has allowed AMD to allocate more power budget to compute and pack in 4096 shaders.

I'm not sure where you have invented this "bandwidth efficiency" stuff from.

He's not making up the concept, although perhaps the exact figures used to make his point. I think it's called data bus utilisation, but don't quote me (I am a biologist, not a chip engineer!). What DM is referring to is the difference between peak theoretical bandwidth and what you can actually get from the hardware in real life, even if you are hammering the memory for as much data as you can: random rather than synthetic, orchestrated access. You won't have data on every clock edge.
Single-bank refresh is one specific HBM vs GDDR5 difference that could improve bus utilisation or efficiency. http://www.google.co.uk/url?sa=t&rc...TsBDT3Idwn1qdvwrEVEB7KQ&bvm=bv.96042044,d.ZGU There is a claim there that current DRAM can consume 5-10% of bandwidth due to all-bank refreshing.
I don't know whether HBM allows better arbitration logic in the controller; perhaps so. I am hoping a nice proper in-depth analysis is done soon.
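To put a rough number on that utilisation point (a back-of-envelope sketch only; the 5-10% range is the all-bank-refresh overhead claimed in the linked paper, and real-world utilisation depends heavily on access patterns):

Code:
# Effective bandwidth after knocking off an assumed refresh overhead.
# The 5-10% range is the all-bank-refresh figure quoted in the linked paper;
# everything else here is illustrative rather than measured.
peak_gddr5 = 384.0  # GB/s, 512-bit @ 6 Gbps (390X)
peak_hbm1  = 512.0  # GB/s, 4096-bit @ 1 Gbps (Fury X)

for overhead in (0.05, 0.10):
    effective = peak_gddr5 * (1 - overhead)
    print(f"GDDR5 losing {overhead:.0%} to all-bank refresh: {effective:.0f} GB/s")

# HBM's single-bank refresh could in principle claw some of that back,
# but by how much needs the proper in-depth analysis mentioned above.
print(f"HBM1 peak for comparison: {peak_hbm1:.0f} GB/s")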
 
What are the chances hacked drivers/BIOS/AB will allow you to OC HBM?

Do you even want to? It seems dangerous and pointless: there is already more than enough bandwidth, and the chips are microns apart, which has to be a nightmare for managing heat within the stack.
 

I was looking at Zen rumours before and they are stating up to 16GB of HBM (must be HBM 2.0?). I'm thinking that can't possibly be for graphics alone. From the silicon diagrams I looked at, though, there are only two stacks of HBM, unless they just simplified the diagram. With 16GB that seems like it would be rather high; it could make for an interesting CPU shim, as they often seem to mess up the internal height without multiple levels in the mix as it is, ha. It could be a massive internal L3 cache I guess; I think that would work as RAM to some extent. I'm not that technical though, so it wouldn't surprise me if that's totally wrong.
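For what it's worth, two stacks and 16GB does add up if the rumour means HBM2. A minimal sketch assuming the HBM2 spec figures (8Gb dies, stacks up to 8-high); the Zen details themselves are only rumour:

Code:
# Capacity per HBM2 stack = die density (Gb) x dies per stack / 8 bits-per-byte.
die_density_gbit = 8   # Gb per DRAM die (HBM2)
dies_per_stack   = 8   # maximum stack height in the HBM2 spec
stacks           = 2   # as shown in the rumoured Zen diagram

gb_per_stack = die_density_gbit * dies_per_stack / 8  # 8 GB
total_gb     = gb_per_stack * stacks                  # 16 GB
print(f"{gb_per_stack:.0f} GB per stack x {stacks} stacks = {total_gb:.0f} GB")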
 
I never seem to get anything from memory overclocking when I have played around with it in the past, and as a rule I don't overclock anything I use if I can get by without doing it; I prefer to buy something fast enough to do the job without overclocking, hence the stock-clocked 4790K and 290X I currently run. I'm moving to Fiji and I'm fine about it; I won't need to overclock anything, hopefully.
 
Maybe AMD are concerned about the amount of heat generated even though the card is watercooled. There may be hot spots around the memory chips that could cause reliability problems if overclocked.

Highly doubtful, given the tiny amount of power draw and voltage.

As others have stated, most likely a case of unknown durability in the case of untested frequencies and increased voltages, coupled with diminishing returns as the GPU probably has all the bandwidth it needs.
 

Even if the memory used no power at all, the four places where the stacks sit up against the core are probably the hottest spots, due to the memory acting like insulation. On top of this, in reality the memory does use power, so it will produce additional heat.
 