AMD "Greenland" Vega10 Silicon Features 4096 Stream Processors?

So with that in mind, when GDDR5X comes, that must mean GDDR5 sucks, right?

No, just HBM :p

HBM helped AMD keep up with Nvidia, but card for card the 980 Ti is better than the Fury X in performance. One uses GDDR and the other uses HBM. The Fury X was a poor card when you consider it had HBM to help it; then factor in that it only has 4GB!

I really hope both companies use 8GB of GDDR or HBM on higher-end cards, but I feel Nvidia will get the performance out of GDDR on 16nm, so why bother until next year, or whenever AMD release their top HBM2 cards.
 
It will be; it'll be the second, much-improved iteration :confused::confused::confused:

So it starts :p

The only improvements will be more memory per stack, so in essence if four stacks are used then you're going to have more than 4GB, which to me is the improvement that matters most. The other one is more bandwidth. Beyond that there's lower power usage, but from what I can tell the power saving over HBM1 isn't much.

It's like GDDR5 -> GDDR5X, except without the increased capacity per memory stack.
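To put rough numbers on that comparison, here's a quick back-of-the-envelope sketch in Python. The per-stack figures are the commonly cited ones (HBM1: 1GB and ~128GB/s per stack; HBM2: up to 8GB and ~256GB/s per stack), so treat them as assumptions rather than exact spec quotes.

```python
# Back-of-the-envelope HBM1 vs HBM2 totals for a 4-stack card.
# Per-stack figures are commonly cited values, assumed for illustration.

HBM1 = {"gb_per_stack": 1, "gbs_per_stack": 128}   # Fury X-style stacks
HBM2 = {"gb_per_stack": 8, "gbs_per_stack": 256}   # 8-Hi stacks at 2 Gbps/pin

def card_totals(mem, stacks=4):
    """Return (capacity in GB, bandwidth in GB/s) for a given stack count."""
    return mem["gb_per_stack"] * stacks, mem["gbs_per_stack"] * stacks

print("HBM1 x4:", card_totals(HBM1))  # (4, 512)   -> the Fury X numbers
print("HBM2 x4:", card_totals(HBM2))  # (32, 1024) -> more capacity and bandwidth
```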
 
Did I say or insinuate that, Kaap, before you try to put words in my mouth or twist things? Of course there are performance gains from HBM. However, developing a new arch on a new node also brings performance gains. Mid-to-high-tier cards don't need HBM dropped on them. It's like putting HBM on a 380X or something. Ridiculous...

ATM, using HBM or producing cards with HBM is more expensive than using GDDR5/X... HBM has better performance than GDDR5/X, and HBM shines at larger resolutions where more bandwidth is used. Mid-to-high-tier cards are more likely to be focusing on 1080p and possibly some 1440p, and GDDR5/X will be fine for this. Why increase costs and put HBM on the cards? HBM doesn't yield massive performance gains, but it is still better.
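To make the "larger resolutions use more bandwidth" point concrete, here's a toy calculation of raw framebuffer traffic. Real GPUs compress, cache, and move far more than the framebuffer each frame, so this is purely an illustration of how pixel traffic scales with resolution.

```python
# Toy estimate: GB/s of raw framebuffer traffic at a given resolution and fps.
# Ignores textures, geometry, overdraw and compression -- illustration only.

def framebuffer_gb_per_s(width, height, fps, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * fps / 1e9

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "2160p": (3840, 2160)}.items():
    print(f"{name}: {framebuffer_gb_per_s(w, h, 60):.2f} GB/s at 60 fps")
# 2160p pushes 4x the pixel traffic of 1080p -- the gap HBM's bandwidth targets.
```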

So what are AMD users supposed to use for 2160p when these new cards arrive?

Should they stick to the slower 4GB Fury Xs, if those are still available?
 
So it starts :p

The only improvements will be more memory per stack, so in essence if four stacks are used then you're going to have more than 4GB, which to me is the improvement that matters most. The other one is more bandwidth. Beyond that there's lower power usage, but from what I can tell the power saving over HBM1 isn't much.

It's like GDDR5 -> GDDR5X, except without the increased capacity per memory stack.

I think we can all agree that we're looking forward to seeing it in action!! Unfortunately it's looking like next year before we do :(
 
So what are AMD users supposed to use for 2160p when these new cards arrive?

Should they stick to the slower 4GB Fury Xs, if those are still available?

New cards as in the Polaris mid-to-high end? Well, nothing is stopping them from using these cards, but they might as well wait for Vega or Nvidia's equivalent cards for 4K if that's their desired res, like any smart person. And those cards will most likely have HBM2; well, Vega will anyway.

I'm really not sure what you're trying to get at, Kaap, because I know you're a smart guy.
 
New cards as in the Polaris mid-to-high end? Well, nothing is stopping them from using these cards, but they might as well wait for Vega or Nvidia's equivalent cards for 4K if that's their desired res, like any smart person. And those cards will most likely have HBM2; well, Vega will anyway.

I'm really not sure what you're trying to get at, Kaap, because I know you're a smart guy.

AMD will be targeting the mid-range Polaris cards equipped with GDDR5(X) at 2160p.

I also expect they will do well at that resolution.

HBM2 on big Polaris will be even better, but that is a long way off.

GDDR5 may be a bit ancient, but it can still cope with four Titan Xs in SLI churning out very high frame rates at 2160p.

Having said that, I do agree it is about time we moved on to HBM2.
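As a point of reference on the "GDDR5 can still cope" claim, memory bandwidth is just bus width times effective data rate. A quick sketch using the published Titan X (384-bit, 7Gbps GDDR5) and Fury X (4096-bit, 1Gbps HBM1) configurations; the figures come from public spec sheets, so treat them as assumptions.

```python
# Memory bandwidth (GB/s) = bus width in bits / 8 * effective data rate in Gbps.
def bandwidth_gb_per_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_per_s(384, 7))   # Titan X GDDR5: 336 GB/s
print(bandwidth_gb_per_s(4096, 1))  # Fury X HBM1:   512 GB/s
```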
 
New cards as in the Polaris mid-to-high end? Well, nothing is stopping them from using these cards, but they might as well wait for Vega or Nvidia's equivalent cards for 4K if that's their desired res, like any smart person. And those cards will most likely have HBM2; well, Vega will anyway.

I'm really not sure what you're trying to get at, Kaap, because I know you're a smart guy.

Well, I am glad I am not the only one; I don't understand what he is trying to say either.

But I am going to make a stab at it. This is what I gathered from his posts so far in this thread.

AMD's mid-range Polaris card is really AMD's high-end card for this year. High-end cards should use HBM, but since HBM is a failure, AMD aren't using it. And since AMD aren't using it, that means they won't be faster than Nvidia's cards. So they won't sell well. But even worse is that AMD users who have 4K monitors will not be able to use these cards, even if they are faster, as, yes, you guessed it, they have no HBM!! Because HBM is a failure, don't you know?

Yeap, I definitely think this is what he is trying to say :D
 
AMD will be targeting the mid-range Polaris cards equipped with GDDR5(X) at 2160p.

I also expect they will do well at that resolution.

HBM2 on big Polaris will be even better, but that is a long way off.

GDDR5 may be a bit ancient, but it can still cope with four Titan Xs in SLI churning out very high frame rates at 2160p.

Having said that, I do agree it is about time we moved on to HBM2.

Haha, now you write a post that agrees with what everyone else has been saying all along!! :p

It also makes my previous post redundant!! Oh well, it took me ages to write it, so I am leaving it there!! :)
 
I kinda wanna pull 144fps (ish) in Star Wars Battlefront at 1440p on ultra; a Fury manages about 110fps, same with the 980 Ti (I don't think this is a CPU bottleneck, but I may be wrong?). So I'm wondering just how much faster they are than the FX.

I'm also wondering how limiting PCIe 3.0 x8 will be for a CrossFire setup over the next five years or so, with PCIe 4.0 supposedly coming out next year, if I remember correctly. I intended to have a big spend on my system this year, but everything seems a bit delayed right now: the X99 refresh, and the Polaris release only being the equivalent of the 380X performance segment (mid-high). Am I reading that correctly, or is this more the 390X (high end), with the big guy being the equivalent of the Fury X (enthusiast)?

If we get Fury X performance in the £200-£250 mid-high price bracket with 8GB of VRAM, that would be desirable... but this seems unlikely to me. Is this a wait-and-see situation, or do we have a better idea of what's going on?
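On the PCIe question, the raw numbers suggest an x8 link is roughly halved bandwidth rather than a hard wall. A quick sketch using the usual per-lane throughput figures (~0.985GB/s for 3.0 after 128b/130b encoding, doubled for 4.0); these are assumed from public specs.

```python
# Approximate usable PCIe throughput per slot (GB/s), after encoding overhead.
PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969}  # assumed per-lane figures

def slot_gb_per_s(gen, lanes):
    return PER_LANE_GBS[gen] * lanes

print(slot_gb_per_s("3.0", 8))   # ~7.9 GB/s  (x8, the CrossFire split case)
print(slot_gb_per_s("3.0", 16))  # ~15.8 GB/s (full x16 slot)
print(slot_gb_per_s("4.0", 8))   # ~15.8 GB/s (4.0 x8 matches 3.0 x16)
```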
 
Overlag said:
When Nvidia start using HBM, everyone will be saying it's the best thing since sliced bread...

Probably, or they will use the "HBM1 sucked, HBM2 is amazing" line, when they are pretty much the same.

So with that in mind, when GDDR5X comes, that must mean GDDR5 sucks, right?

It did suck, though. It was not enough RAM for the high-end cards; 4GB did not cut it. The claims that it would make up with speed what it lacked in size were not correct, but then again there were plenty of us on the forum saying 4GB is still only 4GB regardless of whether it's HBM or GDDR5, and we turned out to be right. I've got a Fury and it is a great card, but it is only 4GB, and I've been caught out a few times by that limitation. 4GB should never have been released on high-end cards, especially when the range below has double that on a 512-bit bus that is in no way a bottleneck. It was a mistake.
 
It did suck, though. It was not enough RAM for the high-end cards; 4GB did not cut it.

I think your entire premise is wrong, and this has been discussed to death. 4GB has been enough for current games. Firstly, some people look at memory usage and confuse that with memory required. Secondly, in the few cases where you can push a game to actually require more than 4GB, it was through using uncompressed textures (little to no benefit) or a couple of image effects that are widely derided as making things worse.

If you want to make the rest of your argument, you first have to establish that the starting idea above is true, and I don't think it is.
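The usage-vs-required distinction is easy to illustrate with a toy model: an engine that opportunistically caches assets will fill whatever VRAM exists, so reported usage tracks card size rather than the game's true working set. This is a hypothetical sketch, not how any particular engine behaves.

```python
# Toy model of "memory used" vs "memory required" on cards of various sizes.
# An opportunistic asset cache fills spare VRAM, inflating reported usage.

def report_usage(vram_gb, working_set_gb, cacheable_assets_gb):
    required = working_set_gb                                  # truly needed per frame
    used = min(vram_gb, working_set_gb + cacheable_assets_gb)  # what a monitor reports
    return required, used

for vram in (4, 8, 12):
    req, used = report_usage(vram, working_set_gb=3.0, cacheable_assets_gb=20.0)
    print(f"{vram} GB card -> requires {req} GB, reports {used} GB used")
# Every card reports "nearly full", yet only 3 GB is actually required.
```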
 
AMD will be targeting the mid-range Polaris cards equipped with GDDR5(X) at 2160p.

I also expect they will do well at that resolution.

HBM2 on big Polaris will be even better, but that is a long way off.

GDDR5 may be a bit ancient, but it can still cope with four Titan Xs in SLI churning out very high frame rates at 2160p.

Having said that, I do agree it is about time we moved on to HBM2.

Right, I see where you're coming from. Well, personally I don't think AMD will be targeting their mid-range at 2160p. For the high end, they may advertise that it can run and drive 4K games, but that's still up for debate, as we are all expecting the high end to match or slightly beat current flagship cards in performance. As of now one card doesn't cut it, and two cards could probably just handle 4K, where CrossFire and SLI actually work and scale.

This is why I'm certain they won't be using HBM2 or HBM1: they will want their cards to beat Nvidia's offerings, yes, but they will also want them to remain competitive on price. HBM will increase costs and reduce what profit can be made on these cards. Let's be honest, AMD have previously made a lot of money off the mid-to-high end compared to the money made off flagship cards, and they are probably banking on doing the same again, with the hope of coming out on top with their flagship card(s).

Sure, AMD users at 2160p most likely won't have anything to upgrade to yet, but if they are running 4K then they are more than likely not your budget gamers and will most likely be looking at flagship cards. 2017 for that :(
 
It did suck, though. It was not enough RAM for the high-end cards; 4GB did not cut it. The claims that it would make up with speed what it lacked in size were not correct, but then again there were plenty of us on the forum saying 4GB is still only 4GB regardless of whether it's HBM or GDDR5, and we turned out to be right. I've got a Fury and it is a great card, but it is only 4GB, and I've been caught out a few times by that limitation. 4GB should never have been released on high-end cards, especially when the range below has double that on a 512-bit bus that is in no way a bottleneck. It was a mistake.

HBM1 does not have a limitation of 4GB. I'll say it again: the HBM1 memory technology does not have a limitation of just 4GB; will people get that into their heads? Now, yes, AMD went with 4GB, but that's due to different factors. It's like saying GDDR5 sucks because you're limited to 4GB on the 380X.

Now, why would you want more than 4GB on the 380X? Sure, AMD could have put more memory on, but that would have increased costs without really suiting the horsepower. It's slightly different for the Fury lineup: it does have more horsepower, and they could have developed the card with more HBM1 memory on it. But this would have increased development costs, and the card is already expensive.

What I'm saying is that HBM1 doesn't suck; it works the same as HBM2 does, except HBM2 allows more memory per stack, which in essence allows for fewer stacks of memory on a card.
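In stack-count terms, that per-stack difference is the whole story. A minimal sketch, assuming the commonly cited 1GB per HBM1 stack and up to 8GB per HBM2 stack:

```python
# Stacks needed to hit a target capacity, per HBM generation.
# Assumes ~1 GB/stack for HBM1 and up to 8 GB/stack for HBM2 (8-Hi).
import math

def stacks_needed(target_gb, gb_per_stack):
    return math.ceil(target_gb / gb_per_stack)

print(stacks_needed(8, 1))  # HBM1: 8 stacks for 8 GB -- impractical on one interposer
print(stacks_needed(8, 8))  # HBM2: a single stack
```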
 
I think your entire premise is wrong, and this has been discussed to death. 4GB has been enough for current games. Firstly, some people look at memory usage and confuse that with memory required. Secondly, in the few cases where you can push a game to actually require more than 4GB, it was through using uncompressed textures (little to no benefit) or a couple of image effects that are widely derided as making things worse.

If you want to make the rest of your argument, you first have to establish that the starting idea above is true, and I don't think it is.

Regardless, there are enough people (myself included) who simply won't buy a 4GB card that they need to make 8GB the standard.

I expect my next card to last 2-3 years, and I'm not betting on 4GB being enough in 2018.
 
Regardless, there are enough people (myself included) who simply won't buy a 4GB card that they need to make 8GB the standard.

Now that may well be so, but it's a very different thing to what the person I replied to said.
 
Regardless, there are enough people (myself included) who simply won't buy a 4GB card that they need to make 8GB the standard.

I expect my next card to last 2-3 years, and I'm not betting on 4GB being enough in 2018.

4GB isn't enough for 1440p now in many games, especially if you like to have a browser open and some YouTube videos running, or paused, etc. while gaming.
 
I think your entire premise is wrong, and this has been discussed to death. 4GB has been enough for current games. Firstly, some people look at memory usage and confuse that with memory required. Secondly, in the few cases where you can push a game to actually require more than 4GB, it was through using uncompressed textures (little to no benefit) or a couple of image effects that are widely derided as making things worse.

If you want to make the rest of your argument, you first have to establish that the starting idea above is true, and I don't think it is.

Let that go ;) I bought 2 out of 3 Fury Xs nearly a year ago. They have been great, and they still are with everything I throw at them at 4K or 1440p. And they should be enough till Vega comes out at the end of the year/next year. Quite a few games are now coming with DX12 support, and I am hoping they will use some of the memory management features available in that API. But overall, all this gloom-and-doom talk is complete nonsense.
 