HBM2 specs for AMD Polaris and Nvidia Pascal from Samsung

Hi guys, AnandTech has a new article on HBM2 from Samsung, well worth a read. These chips will be coming to next-gen graphics cards from AMD and Nvidia at 14/16 nm.

Key points in relation to GPU memory

Samsung's 4-stack HBM2 based on 8 Gb DRAMs:

Total capacity: 16 GB
Bandwidth per pin: 2 Gb/s
Number of chips/stacks: 4
Bandwidth per chip/stack: 256 GB/s
Effective bus width: 4096-bit
Total bandwidth: 1 TB/s
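
As a rough sanity check, here's how those headline figures hang together. This is just my own arithmetic, assuming the standard 1024-bit interface per HBM2 stack and 4-Hi stacks of the 8 Gb dies:

[code]
# Rough check of the headline HBM2 numbers above.
# Assumptions (mine, not from the article): 1024-bit interface per stack,
# 4-Hi stacks built from 8 Gb DRAM dies.
pins_per_stack = 1024      # bus width per HBM2 stack, in bits
gbps_per_pin   = 2         # data rate per pin, Gb/s
stacks         = 4
dies_per_stack = 4         # 4-Hi stack
gbit_per_die   = 8         # 8 Gb dies

per_stack_bw = pins_per_stack * gbps_per_pin / 8             # 256 GB/s per stack
total_bw     = per_stack_bw * stacks                         # 1024 GB/s, i.e. ~1 TB/s
total_bus    = pins_per_stack * stacks                       # 4096-bit effective width
capacity_gb  = stacks * dies_per_stack * gbit_per_die / 8    # 16 GB total

print(per_stack_bw, total_bw, total_bus, capacity_gb)        # 256.0 1024.0 4096 16.0
[/code]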

Here is the link to the full article:

http://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification

Key points:

No hotlinking images, re-host them yourself next time - Davey

It's an interesting read, thought I would share. It looks like 16 GB is going to be the memory size for next-gen flagship cards if a 4-stack is used, whether Samsung is the maker or they use a standard JEDEC layout from another manufacturer.

We may see Hynix or Micron release larger stacks for aftermarket board partners, opening up 32, 48 and 64 GB versions, but it looks like the standard is going to be 16 GB.
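
To show where those bigger numbers could come from, a quick sketch assuming taller 8-Hi stacks of 8 Gb dies (8 GB per stack) become available - that's my assumption, the specs quoted above only cover the 4 GB stacks:

[code]
# Capacity scaling if 8-Hi stacks (8 GB per stack) are used.
gb_per_8hi_stack = 8
for stacks in (2, 4, 6, 8):
    print(stacks, "stacks ->", stacks * gb_per_8hi_stack, "GB")
# 2 stacks -> 16 GB, 4 -> 32 GB, 6 -> 48 GB, 8 -> 64 GB
[/code]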

Please discuss and get your geek on.
 
It says they have X86 design win/s - must be AMD? Intel is HMC.

Also, they need to use 4 stacks to make it worth it over GDDR5/X, since it's 256 GB/s per stack, so that was always a safe bet. Now, based on the pricing data recently posted, it seems the 4-stack of 8 Gb chips is indeed what the top cards will likely use.
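
Rough numbers behind that, for anyone wondering - the GDDR5/GDDR5X figures below are just typical examples for comparison, not from the article:

[code]
# Peak bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8
def peak_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

print(peak_bw_gbs(384, 7))    # 384-bit GDDR5  @ 7 Gb/s  -> 336 GB/s (980 Ti class)
print(peak_bw_gbs(256, 10))   # 256-bit GDDR5X @ 10 Gb/s -> 320 GB/s
print(peak_bw_gbs(1024, 2))   # one HBM2 stack           -> 256 GB/s
print(peak_bw_gbs(4096, 2))   # four HBM2 stacks         -> 1024 GB/s
[/code]

One stack on its own barely moves the needle over a wide GDDR5/X bus; the big jump only comes from using all four.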

Really want to see how it performs @ 1080p vs GDDR5/X.

edit: it seems SiP is the new SoC, trendy term for "chip". Uggghhhhhhh!
 

Surely 1080P type cards will use GDDR5 for a few more years at least...

Only fools bought a 980 Ti/Fury X for 1080P - it's a peasant resolution nowadays after all, not one to use with £500 GPUs. You know when consoles are using a given resolution that it's the worst possible.

I'm interested in the 1440P and 2160P results vs current high end cards.
 
Agreed with most of the above.

1080p cards will use GDDR5 for cost reasons.

A 980 Ti could be used at 1080p for an ultra-long lifespan, BUT you'd probably be better off changing cheaper cards more often.
 
Peasant resolution? Who even says that? People who run 4K could also say the same thing about all resolutions below that, but we all know that would be a silly thing to say.

In case you haven't noticed, there are still some games that require a 980 Ti to stay at 60 fps or above @ 1080p, so it's not a waste in those cases. Even less of a waste for higher-refresh-rate screens.
 
I want to see if it makes any difference compared to HBM1, I suppose that's a better way of putting it. Just talking strictly about the top 1 or 2 cards that will actually have it, of course.
 
Harsh to say that fools bought them for 1080P. A lot of games are very demanding, even on a 980 Ti/Fury X/Titan X, and it's a bit of a silly thing to say.

Like others have said, I do expect only the top-end cards to use HBM2 and can see those taking some time to make, but I do look forward to seeing whether the memory gives any real noticeable gains, even at 1080P.
 
Never saw your thread, mate. Link your info into this one if you have more details; it's always good to have one thread with all the info.
 
I get 85 to 130 FPS in Battlefront on Ultra @ 1080P (avg. 100). For some with 144 Hz 1080P screens that's not enough; only a 980 Ti/Fury X will get you 120+ at maxed settings.
 
AMD last night pretty much said they will be doing HBM2 in their Opteron line of chips, which will be early 2017. Those will be some interesting damn chips and might well claw back a huge amount of server market share. With the A1100 (the ARM-based chips), a great fabric and storage capacity, they have huge IO capability, which will improve further and offer more flexibility when K12 also arrives in 2017. Then you have potentially huge performance, with bandwidth that Intel can't offer yet, and actually can't come close to offering.

I'm not under the impression that Intel will be offering HMC before 2017, which could give AMD a good window to get back in the game. 2017 looks huge for them.

However, as I've suggested before, one of the biggest limits currently is packaging chips based on interposers. Samsung and Hynix aiming to be in production by mid-year certainly suggests the industry has increased capacity coming online sometime during Q3. My worry, as regards graphics, is that server margins and their importance to AMD mean a lot of the available interposer production capacity in the short term could be focused on AMD Opterons, maybe K12, and maybe even more so their server APU products.

Either way, HBM2 and interposers are opening up huge possibilities for CPU performance, core count, bandwidth and power, which is again why I said at the time that Fury X was a monumental milestone for AMD and the industry as a whole.

If AMD start putting HBM on server products, it won't be long before we get desktop chips with it. The cost/volume probably means, I would guess, that the majority of 2016/17 Zen products won't have HBM. Maybe the cost/capacity will be there for some really astonishingly good CPUs/APUs in 2018. I hope AMD get at least one premium, Iris Pro-style, expensive HBM APU model to stick in laptops and provide a level of graphics performance Intel can't match; even if it's small volume and high cost, they need to get people interested in higher-end AMD laptops and show that they can do things Intel can't.

It really can't be overstated what interposers and HBM will do for the performance available in every area: server, desktop, laptop.
 
Kind of true though; even a 290X is capable of playing 90% of games at max settings. A Titan X and Fury X are only really effective if you're going to be going above 1080p.

Unless you include GameWorks/TressFX in there, then maybe you could justify it; that is subjective though.
 
Not really; it's basically silly for someone to state categorically that X card is a waste at X resolution. It can be for an individual person, but 1080p isn't a 'performance level' at all. 1080p @ certain settings @ a certain framerate is a performance level.

The performance required for 1080p at decent settings and 30-60 fps isn't the same as max settings at 1080p and, say, 90 fps minimums.

A 290X can play every game at max settings; it can't play every game at max settings at 60 fps, let alone 120 fps.

My overclocked 290 couldn't achieve 60 fps, certainly not minimums, in Witcher 3 maxed out (tessellation optimised), and there are many other games it can't. I prefer high minimums on a 120 Hz+ screen.

A high-end card is absolutely NOT a waste at 1080p if you want high framerates. Personally I'd take 1080p at 120 fps over 4K at 60 fps or less, though I'm quite liking 1440p at 50-120 fps using CrossFire, depending on the game; some games run great and work with CrossFire perfectly, and on some CrossFire doesn't work at all.

If you're happy with 30 fps minimums then a 7970 is damn good for 1080p; it's completely subjective, but making a statement for everyone and calling people fools because they want something different to you is ignorant and rude.
 
Which is why I said "kind of true". I also said that it was subjective; not sure if you're aiming that at me.

I've only played a couple of games where I couldn't maintain 60 FPS, Witcher 3 and Crysis 3, and even then I only had to turn a couple of settings down a notch.
 
Well done to Davey for moderating, and not wanting to criticise, but shouldn't this thread have been closed/merged, seeing as the other one was posted a day earlier?
 
This is the thing: GameWorks is in most of the AAA games of late and, even at 1080P, they need some grunt, so if you want all the settings maxed, even at 1080P, you will need something big. This is where we need big Pascal/Polaris and probably some fast HBM2 to keep it flowing nicely.

Check out my frames on an EVGA SC Titan X here at 1080P.

So yeah, calling people fools for buying a 980 Ti/Fury X for 1080P is very silly.
 
To be fair though, GameWorks is shockingly badly optimised.
 