
AMD Working On An Entire Range of HBM GPUs To Follow Fiji And Fury Lineup – Has Priority To HBM2 Cap

Access times don't work like that (it's not a one-clock-cycle-to-access system; DDR3 etc. aren't either).

For reference, GDDR5 gets roughly CAS = 10.6 ns, tRCD = 12 ns, tRP = 12 ns, tRAS = 28 ns, tRC = 40 ns. That's actually a bit worse than DDR3, as latency is traded off for bandwidth.

I don't have a data sheet for HBM to compare, unfortunately :( AMD themselves say it's better than GDDR5, but I can't verify that one way or another - likely not dissimilar, as it's still DRAM underneath.

Also, if they did work the way you're suggesting, then 500MHz would give you a time of 2ns, not 2ms :p and the 6000MHz stuff would be at an incredible 0.17ns, nowhere near what it actually is.
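
For anyone checking the arithmetic, the period is just the reciprocal of the clock. Quick sketch (plain maths, not tied to any particular memory spec):

[code]
# Clock period = 1 / frequency; sanity check of the numbers above.
def period_ns(freq_mhz: float) -> float:
    """Clock period in nanoseconds for a frequency in MHz."""
    return 1e3 / freq_mhz

print(period_ns(500))   # 2.0 ns    (500 MHz)
print(period_ns(6000))  # ~0.167 ns (6 GHz effective)
[/code]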

Yeah, typo... And anyway, isn't GDDR5 synchronous? So CAS latency is in clock cycles, not ns.
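
Right - for synchronous DRAM the CAS number is counted in command-clock cycles, so the time in ns depends on the clock you multiply it by. A rough sketch, assuming CL 16 and a 1500 MHz command clock (6 Gbps effective GDDR5), which is roughly where the ~10.6 ns figure quoted earlier comes from:

[code]
# CAS latency is specified in command-clock cycles for synchronous DRAM,
# so the delay in ns depends on the clock it's counted against.
def cas_ns(cas_cycles: int, command_clock_mhz: float) -> float:
    """Convert a CAS latency in cycles to nanoseconds."""
    return cas_cycles * 1e3 / command_clock_mhz

# Assumed values: 6 Gbps GDDR5 runs a 1500 MHz command clock (data rate is
# 4x CK) and CL 16 is a plausible timing at that speed.
print(cas_ns(16, 1500))  # ~10.7 ns, in line with the ~10.6 ns quoted earlier
[/code]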

And as I already said in the thread when someone else picked me up on it, I wasn't talking about access times (which is the latency you're describing), but sequential reads/writes... a wide bus is going to favour larger transfers, and perhaps 1080p is requesting smaller chunks and therefore wasting bus width.

I've never done any active memory management in the applications I've written on PC, but you could end up wasting resources on a PLC in this kind of way, so it makes sense to me - though I'm perfectly willing to admit I'm wrong if an AMD engineer wants to correct me.
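
To put a number on the kind of waste I'm speculating about (purely a toy model - the burst sizes are made up, not real HBM or GDDR5 access granularities): if every transaction fetches a fixed minimum burst, requests smaller than that burst leave part of the fetched data unused.

[code]
# Toy model of burst utilisation: if the memory system always fetches a fixed
# minimum burst per transaction, requests smaller than that waste the rest.
# Burst sizes here are illustrative only, not real HBM/GDDR5 figures.
def utilisation(request_bytes: int, min_burst_bytes: int) -> float:
    """Fraction of the fetched data that the request actually uses."""
    fetched = -(-request_bytes // min_burst_bytes) * min_burst_bytes  # round up
    return request_bytes / fetched

print(utilisation(32, 32))    # 1.00 - request matches the burst exactly
print(utilisation(32, 128))   # 0.25 - a wide burst mostly wasted on a small read
print(utilisation(256, 128))  # 1.00 - large sequential reads use the full width
[/code]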
 
The memory management is better with HBM, as refresh is per bank rather than the entire chip needing to be refreshed, so read and refresh cycles can occur in parallel on different parts of the chip, which reduces latency overall.

Nvidia put a good presentation together about it the other year.

http://www.cs.utah.edu/thememoryforum/mike.pdf
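
As a rough illustration of the per-bank refresh point (a toy timeline with made-up bank counts and timings, not real HBM parameters): with all-bank refresh, every read issued during the refresh window stalls, while per-bank refresh only stalls reads that happen to hit the bank currently being refreshed.

[code]
import random

# Toy comparison of all-bank vs per-bank refresh. Bank count and timings are
# made up for illustration; real HBM/DRAM parameters differ.
BANKS = 8
REFRESH_LEN = 10      # cycles a refresh occupies out of every 100
READS = 10_000

random.seed(0)
stalled_all_bank = 0
stalled_per_bank = 0
refreshing_bank = 0   # bank the per-bank scheme is currently refreshing

for t in range(READS):
    target_bank = random.randrange(BANKS)
    in_refresh_window = (t % 100) < REFRESH_LEN  # refresh active 10% of the time

    if in_refresh_window:
        stalled_all_bank += 1                    # all-bank: every read waits
        if target_bank == refreshing_bank:       # per-bank: only one bank blocked
            stalled_per_bank += 1
    if t % 100 == 0:
        refreshing_bank = (refreshing_bank + 1) % BANKS  # rotate refresh target

print(f"all-bank refresh: {stalled_all_bank / READS:.1%} of reads stalled")
print(f"per-bank refresh: {stalled_per_bank / READS:.1%} of reads stalled")
[/code]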

It's still dependent on a tragically slow, high-latency PCI-E bus when you run out of VRAM, though. Whether Fury having 4GB is a cost or an implementation issue, it will ultimately go down in history as its main downfall (assuming AMD fix the pump issue and reduce pricing :p).
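
For a sense of scale, back-of-envelope: Fiji's HBM is a 4096-bit bus at 500 MHz DDR, around 512 GB/s, while a PCI-E 3.0 x16 link tops out around 15.75 GB/s each way, so spilling out of VRAM costs you roughly 30x in bandwidth before you even count the added latency.

[code]
# Back-of-envelope bandwidth comparison: on-card HBM vs the PCI-E link you
# fall back to once VRAM is exhausted.
hbm_gbs = 4096 * 500e6 * 2 / 8 / 1e9              # 4096-bit bus, 500 MHz, DDR
pcie3_x16_gbs = 16 * 8e9 * (128 / 130) / 8 / 1e9  # 16 lanes, 8 GT/s, 128b/130b

print(f"Fiji HBM:      ~{hbm_gbs:.0f} GB/s")        # ~512 GB/s
print(f"PCI-E 3.0 x16: ~{pcie3_x16_gbs:.2f} GB/s")  # ~15.75 GB/s
print(f"ratio:         ~{hbm_gbs / pcie3_x16_gbs:.0f}x")
[/code]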
 
I think it's more the case that AMD will be guaranteed initial volume due to their involvement with Hynix, but once production is sufficient, it's highly unlikely Hynix would try to burn their bridges with Nvidia, as they are a major player in the discrete graphics card market.
 
It's still dependent on a tragically slow, high-latency PCI-E bus when you run out of VRAM, though. Whether Fury having 4GB is a cost or an implementation issue, it will ultimately go down in history as its main downfall (assuming AMD fix the pump issue and reduce pricing :p).

Yeah, it is a shame that they could only get 4GB on the card, considering all the dual-linking info that started circulating, stating that it could have been possible to get double stacks per chip. But it could have had other issues in regards to price.

But someone had to get the tech out. It might have been better to have put it on a midrange card first.

I can see a revised version coming next year with HBM2 though.
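
The capacity maths line up with that, as far as I understand the specs: first-gen HBM stacks are 4-Hi with 2 Gb dies, so 1 GB per stack and 4 GB across Fiji's four stacks, while HBM2 allows bigger dies and taller stacks, which is where the headroom for a revised card would come from. Rough sketch:

[code]
# Rough capacity comparison between first-gen HBM and HBM2 stacks.
def stack_gb(dies_per_stack: int, die_gbit: int) -> float:
    """Capacity of one HBM stack in GB."""
    return dies_per_stack * die_gbit / 8

hbm1 = stack_gb(4, 2)   # 4-Hi stack of 2 Gb dies = 1 GB per stack
hbm2 = stack_gb(8, 8)   # 8-Hi stack of 8 Gb dies = 8 GB per stack (HBM2 max)

print(f"HBM1, 4 stacks (Fiji): {4 * hbm1:.0f} GB")  # 4 GB
print(f"HBM2, 4 stacks:        {4 * hbm2:.0f} GB")  # 32 GB
[/code]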
 
I thought AMD and NV have a cross-licensing agreement, just like AMD and Intel.

But that won't help NV if AMD really gets most of the HBM2 capacity in the beginning.
 