
Will HBM be an actual benefit on a consumer GPU anytime soon?

HBM solved a problem for AMD: it allowed lower power consumption, so we've already seen an actual benefit from HBM on consumer cards. Imagine if AMD had used GDDR5 on Vega and drawn an additional 80W, people would lose their ****. If they could've used GDDR5(X) instead, I'm pretty sure they would have, given how expensive HBM is.
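To put some rough numbers on the "wide and slow" trade-off behind that power saving, here is a back-of-envelope bandwidth sketch using the public spec-sheet figures for HBM1 (as on Fiji/Fury X) and a wide GDDR5 setup (the 384-bit bus on the GTX 980 Ti); it's illustrative arithmetic, not a measurement:

```python
# Peak theoretical memory bandwidth: bus width (bits) x per-pin data rate
# (Gbps), divided by 8 to convert bits to bytes.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# HBM1: four 1024-bit stacks running at only 1 Gbps per pin
hbm1 = bandwidth_gbs(4 * 1024, 1.0)   # 512.0 GB/s

# GDDR5: 384-bit bus at 7 Gbps per pin
gddr5 = bandwidth_gbs(384, 7.0)       # 336.0 GB/s

print(f"HBM1:  {hbm1:.0f} GB/s")
print(f"GDDR5: {gddr5:.0f} GB/s")
```

The wide, low-clocked HBM interface delivers more bandwidth at a fraction of the per-pin signalling rate, which is where the power saving comes from.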


But it is like putting a band-aid on a bullet wound. It isn't a long-term solution. It would have been more beneficial if AMD had put the resources into improving the architecture and moving away from GCN, which has just failed to scale.

NV invested in efficiency improvements; they can switch to next-generation HBM, or whatever comes next, if and when GDDR fails to improve efficiency.
 
But it is like putting a band-aid on a bullet wound. It isn't a long-term solution. It would have been more beneficial if AMD had put the resources into improving the architecture and moving away from GCN, which has just failed to scale.

NV invested in efficiency improvements; they can switch to next-generation HBM, or whatever comes next, if and when GDDR fails to improve efficiency.

I agree, it's far from ideal, but you and I really have no idea about the costs and time involved in improving their architecture. It may have been too costly, or it may have delayed the launch by a couple of years; perhaps they were beyond the point of no return when they realised Fiji and Vega were inefficient. I'm sure AMD weighed up the pros and cons of revising their architecture vs. going ahead with it.

Nvidia are in a much more comfortable position. Obviously they're direct competitors, but Nvidia are massive in comparison, have vastly more resources to invest in R&D, and have the huge advantage of time.
 
Yep, which it always has been. I thought the Fury X would be ahead of the 980 Ti by now, as they are both older tech.

The GTX 980 Ti was ahead of the GTX 1070 for a long time. Some tests show the R9 Fury beating the GTX 980 Ti.

The point is the R9 card with 4GB of HBM is performing very well.
 
The GTX 980 Ti was ahead of the GTX 1070 for a long time. Some tests show the R9 Fury beating the GTX 980 Ti.

The point is the R9 card with 4GB of HBM is performing very well.

It always did perform really well. Did AMD really need overpriced HBM memory? Were they that far behind Nvidia that they had to use HBM? I don't think they were; as the RX 580 shows, GDDR is more than adequate on AMD cards.
 
It always did perform really well. Did AMD really need overpriced HBM memory? Were they that far behind Nvidia that they had to use HBM? I don't think they were; as the RX 580 shows, GDDR is more than adequate on AMD cards.

I'm pretty sure AMD were ahead of Nvidia when they started working on HBM. Didn't AMD have a hand in bringing HBM to market?

Who knows if the Fury cards would have performed as well as they do with 4GB of GDDR5, but my feeling is they wouldn't. I know HBM helped AMD overcome the 20nm TSMC node troubles, and it helped reduce the die size.
 
It always did perform really well. Did AMD really need overpriced HBM memory? Were they that far behind Nvidia that they had to use HBM? I don't think they were; as the RX 580 shows, GDDR is more than adequate on AMD cards.
I think they really did need to use it; it's about power (TDP), not performance. I doubt AMD bigwigs were sat around a table and thought, hmmm, should we use GDDR5 or a very expensive new memory type that'll cannibalize our profit margins... HBM it is!
 
I think they really did need to use it; it's about power (TDP), not performance. I doubt AMD bigwigs were sat around a table and thought, hmmm, should we use GDDR5 or a very expensive new memory type that'll cannibalize our profit margins... HBM it is!

AMD have done this a few times, though - jumped on new tech before it was ready, sometimes at significant expense. HBM possibly isn't any different in that regard.

For quite a while they sacrificed quite a bit of die area for tessellation hardware before there was any realistic hope of it being used.

So I don't think it is really much of a guide as to the decision making.
 
AMD have done this a few times, though - jumped on new tech before it was ready, sometimes at significant expense. HBM possibly isn't any different in that regard.

For quite a while they sacrificed quite a bit of die area for tessellation hardware before there was any realistic hope of it being used.

So I don't think it is really much of a guide as to the decision making.

The push on tessellation would have been a design remnant from ATI, and I'm pretty sure AMD had a hand in designing HBM.
 
AMD have done this a few times, though - jumped on new tech before it was ready, sometimes at significant expense. HBM possibly isn't any different in that regard.

For quite a while they sacrificed quite a bit of die area for tessellation hardware before there was any realistic hope of it being used.

So I don't think it is really much of a guide as to the decision making.

No denying that, AMD do jump the gun sometimes with new tech. However, unlike the tessellation example you gave, HBM does actually have a big advantage right now, namely low power consumption, so they had a very valid reason for employing it when they did. Whether or not they should've revised their architecture and used GDDR5 rather than pursue the HBM route is an interesting question; like I said before, we don't know the costs or time involved, so it's impossible for us to say.
 
I think they really did need to use it; it's about power (TDP), not performance. I doubt AMD bigwigs were sat around a table and thought, hmmm, should we use GDDR5 or a very expensive new memory type that'll cannibalize our profit margins... HBM it is!

They used both.

The Hawaii 8GB and 4GB GDDR5 cards seem to perform pretty close to each other.
 
They used both.

The Hawaii 8GB and 4GB GDDR5 cards seem to perform pretty close to each other.

My reply that you quoted was talking about the R9 Fury, with which AMD used HBM.

Not sure what your point is with the Hawaii cards, I never said anything about the memory amount being insufficient or anything like that.
 
My reply that you quoted was talking about the R9 Fury, with which AMD used HBM.

Not sure what your point is with the Hawaii cards, I never said anything about the memory amount being insufficient or anything like that.

My point was that 8GB of GDDR5 isn't showing much gain over 4GB of HBM, so HBM seems to have some merit. And AMD used both HBM and GDDR memory types.
 
My point was that 8GB of GDDR5 isn't showing much gain over 4GB of HBM, so HBM seems to have some merit. And AMD used both HBM and GDDR memory types.

Not sure I understand your logic here. It doesn't necessarily follow that 4GB of HBM has some merit just because an 8GB GDDR5 card isn't showing much gain over it; perhaps the games you looked at don't require more than 4GB of either type of memory. Are you implying that 4GB of HBM is somehow better in terms of capacity than 4GB of GDDR?
 
Not sure I understand your logic here. It doesn't necessarily follow that 4GB of HBM has some merit just because an 8GB GDDR5 card isn't showing much gain over it; perhaps the games you looked at don't require more than 4GB of either type of memory. Are you implying that 4GB of HBM is somehow better in terms of capacity than 4GB of GDDR?

The Fury X with its pitiful 4GB of memory shows that is likely the case.
 
The Fury X with its pitiful 4GB of memory shows that is likely the case.

I have a Fury X and game at 3440 x 1440, and personally haven't noticed any issue with VRAM capacity; the card runs out of grunt before 4GB is a problem, at least in the games I play. Whether that's down to driver witchcraft or an inherent advantage of HBM, I'm not sure, or maybe the games I play just don't need more than 4GB(?)
 