
AMD "Greenland" Vega10 Silicon Features 4096 Stream Processors?

It really isn't; there are basically no games with an ultra setting that uses more memory and actually offers higher IQ. XCOM 2 and Shadow of Mordor both have identical image quality with ultra textures and the next setting down. Ultra is just uncompressed textures, nothing more or less, for the express purpose of using more memory, NOT improving IQ.
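As a rough illustration of the memory arithmetic (a minimal sketch in Python; the 4096x4096 texture size and the BC1/BC7 formats are generic assumptions, not measurements from either game):

```python
# Illustrative only: VRAM footprint of one 4096x4096 texture, uncompressed
# RGBA8 vs. common block-compressed formats. Not measured from any game.
width = height = 4096
pixels = width * height

sizes = {
    "RGBA8 (uncompressed, 32bpp)": pixels * 4,   # 4 bytes per pixel
    "BC7 (compressed, 8bpp)":      pixels,       # 1 byte per pixel
    "BC1 (compressed, 4bpp)":      pixels // 2,  # 0.5 bytes per pixel
}
for name, size in sizes.items():
    print(f"{name}: {size / 2**20:.0f} MiB")

# RGBA8: 64 MiB, BC7: 16 MiB, BC1: 8 MiB per texture (mipmaps add ~33%),
# so an uncompressed "ultra" tier can multiply VRAM use by roughly 4-8x.
```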

For the most part the reason to use two cards is to increase framerate, not IQ. A single 290X can get 40-50fps in The Division at 1440p at what I consider highest IQ (so DoF and other crap disabled, because they make the image look worse, not better). If and when my second card starts working properly I will use the same settings (what I consider to be max) and aim for 100fps; I won't randomly use 8xSSAA, because it will make little difference.

Same goes for Shadow of Mordor: I go for a higher framerate and maximum IQ settings, which means not enabling ultra textures. Even with 8GB cards I wouldn't enable ultra textures; using more memory for no reason is pointless, and it means transferring more data, which likely reduces performance slightly for no benefit.

The vast majority of users want increased performance or a higher resolution rather than marginal IQ gains, because for most people the choice is between getting 40-60fps at max IQ on one card, or using extra cards to reach 80-120fps at the same max IQ or to increase resolution.

There are no games I've seen where two 8GB 390s beat 4GB 290Xs or Fury Xs unless you enable settings that don't increase IQ. There will be in the future, but there are none I've seen yet.

Hell, XCOM 2 looks like crap for the performance you get regardless of the settings you choose; that a very basic-looking game can use that much memory... well, let's just say two Nvidia GameWorks games use more than 4GB of memory for no IQ benefit, and one of those games looks terrible for that extra memory usage. It should give you an idea as to why the uncompressed textures were added... when AMD's top-tier cards had 4GB of memory and Nvidia's had 6 or 12GB.
 
I use as many ultra settings as I can because to me they look better.

I don't use supersampling, but if I had a Fury X or 980 Ti I would want to at 1080p, 1200p or 1440p.

I agree with you on the blurry DoF mode, as I always disable it.
 
I am not on a crusade against HBM1, I am just saying it as it is.

Fortunately AMD have realized what the shortcomings of HBM1 are and have been brave enough to go back to GDDR5(X) until HBM2 is available. This is something I fully support and I hope other people do as well.
 
I am not on a crusade against HBM1, I am just saying it as it is.

Fortunately AMD have realized what the shortcomings of HBM1 are and have been brave enough to go back to GDDR5(X) until HBM2 is available. This is something I fully support and I hope other people do as well.

It is about cost, Kaap, nothing to do with the shortcomings of HBM1. They have already said that Polaris is aimed at affordability. Adding HBM1 to a mid-range part would be silly right now with how immature the manufacturing process is.
 
Hell, XCOM 2 looks like crap for the performance you get regardless of the settings you choose; that a very basic-looking game can use that much memory... well, let's just say two Nvidia GameWorks games use more than 4GB of memory for no IQ benefit, and one of those games looks terrible for that extra memory usage. It should give you an idea as to why the uncompressed textures were added... when AMD's top-tier cards had 4GB of memory and Nvidia's had 6 or 12GB.

I did not know you were a Titan X user; I say this as that is the only way you could speak from experience about what XCOM 2 is like at 2160p maxed.

If you post a pic of your new Titan X I will add you to the owners list in that thread. :D
 
I am not on a crusade against HBM1, I am just saying it as it is.

Fortunately AMD have realized what the shortcomings of HBM1 are and have been brave enough to go back to GDDR5(X) until HBM2 is available. This is something I fully support and I hope other people do as well.

If AMD were able to easily put 8GB of HBM1 onto these cards and it was cost-effective, I am sure they would not have gone back to GDDR5, as essentially HBM1 is the better technology.
 
It is about cost, Kaap, nothing to do with the shortcomings of HBM1. They have already said that Polaris is aimed at affordability. Adding HBM1 to a mid-range part would be silly right now with how immature the manufacturing process is.

This does not stack up, as the mid-range Polaris will be replacing the Fury X!!!

These mid-range cards are going to be AMD's flagships for this year. :)
 
If AMD were able to easily put 8GB of HBM1 onto these cards and it was cost-effective, I am sure they would not have gone back to GDDR5, as essentially HBM1 is the better technology.

I don't think HBM1 is a better technology; by the look of it AMD won't be using it on any new cards this year, even ones that only need 4GB.

HBM2 on the other hand does get me very interested, and my next card purchase will probably have it.
 
The GPU is going to be tiny in comparison to Fury X, and hopefully this means it is cheaper to manufacture, and so cheaper for us to purchase.

It won't work like that though.

The GTX 980 is a lot smaller than the GTX 780 Ti but did not cost that much less to buy.

AMD will just replace one flagship card with another flagship card at the same price point.
 
I did not know you were a Titan X user; I say this as that is the only way you could speak from experience about what XCOM 2 is like at 2160p maxed.

If you post a pic of your new Titan X I will add you to the owners list in that thread. :D

Are you saying you believe that you can't enable 'max' settings on a game without more memory than the game will use? You do realise you can enable such settings and then you just get crap performance; fps doesn't affect the actual IQ of the image being delivered. You continue to make these "if you don't have it you can't know what you're talking about" type statements, and they are completely pathetic. Not least because with literally every one of these "I have one and I took the heatsink off, you didn't, so I know better than you..." BS statements you make yourself look more silly. As if physics change because I haven't put custom cooling on a Fury X.

Here is a hint: you owning a card doesn't actually change anything, it just means you own the card.

Also, without ever owning a Titan I can... look at two images comparing ultra and lower settings :o Do you understand that you can save images and compare them without using the graphics card, or even owning the game at all?
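That comparison can even be automated; here is a minimal sketch using Pillow (the screenshot file names are hypothetical placeholders):

```python
# Minimal sketch: check whether two saved screenshots differ at all,
# and by how much. File names are hypothetical placeholders.
from PIL import Image, ImageChops

a = Image.open("ultra_textures.png").convert("RGB")
b = Image.open("high_textures.png").convert("RGB")

diff = ImageChops.difference(a, b)
if diff.getbbox() is None:  # getbbox() is None when every pixel matches
    print("Pixel-identical: the 'ultra' setting changed nothing visible.")
else:
    pixels = list(diff.getdata())
    mean_err = sum(sum(p) for p in pixels) / (len(pixels) * 3)
    print(f"Images differ; mean per-channel error: {mean_err:.2f}/255")
```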
 
This does not stack up, as the mid-range Polaris will be replacing the Fury X!!!

These mid-range cards are going to be AMD's flagships for this year. :)

Once again: the GTX 680 replaced the GTX 580; it WAS FASTER but had a SMALLER MEMORY BUS. The GTX 980 replaced the 780 Ti; it was faster and had a smaller memory bus.

This is what happens every single new generation that I can recall; this is standard... yet you are using a standard occurrence, a smaller core (that is also faster) having a smaller memory bus, as proof that HBM is bad. That is not only illogical, it's purposefully misleading. Nvidia use less bandwidth and a smaller memory bus; is that proof that the 780 Ti's 384-bit GDDR5 memory controller was a failure? By your own logic there can be no other explanation for using less bandwidth on a faster core than a failure in the previous card's memory system.

You once again are ignoring actual evidence disproving your point that the only reason you could possibly drop HBM is that HBM is at fault. It's because you want HBM to fail; you clearly wanted it to suck six months before Fury X launched, from your tirades (and utter lack of knowledge about it) back then, to your insistence that the easily explained lower 1080p performance (which has improved over time with the same memory speed) has to be HBM.

Once again: explain why the 980, which is faster than the 780 Ti, had a smaller memory bus and used much less bandwidth?

If AMD were dropping to GDDR5 because HBM wasn't working, why does it appear that the mid-range Polaris has a 256-bit bus, meaning at best it's going to produce what, maybe 300GB/s of bandwidth... when HBM provided 512GB/s? If the new core truly needed HBM levels of bandwidth, it would absolutely have a 512-bit bus.

If a faster-than-Fury X card MUST have at least Fury X bandwidth, why does it have way, way less bandwidth?
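For reference, the arithmetic behind those figures is just bus width times per-pin data rate (the 256-bit Polaris configuration below is an assumption based on the rumour, not a confirmed spec):

```python
# Peak memory bandwidth (GB/s) = bus width in bits / 8 * data rate per pin (Gbps).
# The Polaris entry is an assumed configuration; the rest match shipping cards.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

cards = [
    ("GTX 780 Ti: 384-bit GDDR5 @ 7Gbps", bandwidth_gbs(384, 7.0)),   # 336 GB/s
    ("GTX 980:    256-bit GDDR5 @ 7Gbps", bandwidth_gbs(256, 7.0)),   # 224 GB/s
    ("Fury X:    4096-bit HBM1  @ 1Gbps", bandwidth_gbs(4096, 1.0)),  # 512 GB/s
    ("Polaris?:   256-bit GDDR5 @ 8Gbps", bandwidth_gbs(256, 8.0)),   # 256 GB/s (assumed)
]
for name, bw in cards:
    print(f"{name} -> {bw:.0f} GB/s")
```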
 
It won't work like that though.

The GTX 980 is a lot smaller than the GTX 780 Ti but did not cost that much less to buy.

AMD will just replace one flagship card with another flagship card at the same price point.

The GTX 980 ($549) cost a good bit less than the GTX 780 Ti ($699) at launch, and at 398mm² it's still a pretty big GPU. The GTX 780 Ti has a die size of 561mm². Polaris 10 is rumoured to be 232mm², which is much smaller than both the GTX 980 and the 596mm² Fury X.

So as you can see, the GTX 980 is not a really good comparison.
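A back-of-the-envelope sketch of why die area matters for cost (the wafer price is a made-up round number, and yield, scribe lines and defects are ignored):

```python
# Rough candidate-dies-per-wafer estimate using the standard approximation:
# dies = pi*r^2/A - pi*d/sqrt(2*A). Wafer cost is an assumed round number;
# defects and yield are ignored, so the output is illustrative only.
import math

def dies_per_wafer(area_mm2: float, wafer_d_mm: float = 300.0) -> int:
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * area_mm2))

WAFER_COST = 5000  # USD, assumed round number
for name, area in [("Polaris 10 (rumoured 232mm2)", 232),
                   ("GM204 / GTX 980 (398mm2)", 398),
                   ("Fiji / Fury X (596mm2)", 596)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies/wafer, ~${WAFER_COST / n:.0f} silicon per die")
```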
 
I am not on a crusade against HBM1, I am just saying it as it is.

Fortunately AMD have realized what the shortcomings of HBM1 are and have been brave enough to go back to GDDR5(X) until HBM2 is available. This is something I fully support and I hope other people do as well.

Argument by Assertion. Evidence is so old-fashioned these days.
 
The GTX 980 ($549) cost a good bit less than the GTX 780 Ti ($699) at launch, and at 398mm² it's still a pretty big GPU. The GTX 780 Ti has a die size of 561mm². Polaris 10 is rumoured to be 232mm², which is much smaller than both the GTX 980 and the 596mm² Fury X.

So as you can see, the GTX 980 is not a really good comparison.

My point is the GTX 980 at launch was anything but cheap.

The new flagship (mid-range Polaris) cards from AMD won't be cheap either.
 
Are you saying you believe that you can't enable 'max' settings on a game without more memory than the game will use? You do realise you can enable such settings and then you just get crap performance; fps doesn't affect the actual IQ of the image being delivered. You continue to make these "if you don't have it you can't know what you're talking about" type statements, and they are completely pathetic. Not least because with literally every one of these "I have one and I took the heatsink off, you didn't, so I know better than you..." BS statements you make yourself look more silly. As if physics change because I haven't put custom cooling on a Fury X.

Here is a hint: you owning a card doesn't actually change anything, it just means you own the card.

Also, without ever owning a Titan I can... look at two images comparing ultra and lower settings :o Do you understand that you can save images and compare them without using the graphics card, or even owning the game at all?

Don't you think you should see a game actually running before making your mind up about image quality? lol
 
Once again: the GTX 680 replaced the GTX 580; it WAS FASTER but had a SMALLER MEMORY BUS. The GTX 980 replaced the 780 Ti; it was faster and had a smaller memory bus.

This is what happens every single new generation that I can recall; this is standard... yet you are using a standard occurrence, a smaller core (that is also faster) having a smaller memory bus, as proof that HBM is bad. That is not only illogical, it's purposefully misleading. Nvidia use less bandwidth and a smaller memory bus; is that proof that the 780 Ti's 384-bit GDDR5 memory controller was a failure? By your own logic there can be no other explanation for using less bandwidth on a faster core than a failure in the previous card's memory system.

You once again are ignoring actual evidence disproving your point that the only reason you could possibly drop HBM is that HBM is at fault. It's because you want HBM to fail; you clearly wanted it to suck six months before Fury X launched, from your tirades (and utter lack of knowledge about it) back then, to your insistence that the easily explained lower 1080p performance (which has improved over time with the same memory speed) has to be HBM.

Once again: explain why the 980, which is faster than the 780 Ti, had a smaller memory bus and used much less bandwidth?

If AMD were dropping to GDDR5 because HBM wasn't working, why does it appear that the mid-range Polaris has a 256-bit bus, meaning at best it's going to produce what, maybe 300GB/s of bandwidth... when HBM provided 512GB/s? If the new core truly needed HBM levels of bandwidth, it would absolutely have a 512-bit bus.

If a faster-than-Fury X card MUST have at least Fury X bandwidth, why does it have way, way less bandwidth?

I never pretended to know anything about HBM1 six months before the launch of the Fury X; it was you who was doing the lecturing (and getting it wrong) six months before the event.

The other thing that springs to mind is that you have consistently failed to offer anything solid in the way of technical knowledge about HBM1; all you have done is engage people in endless waffle. By comparison, AMDMatt has given more technical info about HBM1 in a couple of sentences than you have managed in all your posts.
 
With the high-end having a different codename from the mid/high cards (Vega = Greenland, Polaris = Baffin and Ellesmere), it does pose some questions.

Are they the same uarch? You have to imagine they are, just that Greenland/Vega has the HBM2.

Did all the improvements they already talked about find their way to Polaris? Or are some of the things they talked about only going to find their way into Vega?

Why is Vega not called Polaris? Why does it warrant its separation on the roadmap as its own entity?

Why not Polaris/Ellesmere, Polaris/Baffin and Polaris/Greenland? Each chip already has its own codename (Ellesmere, Baffin, Greenland), so why separate out the high-end chip under the Vega architecture codename?

I can't believe that they will be significantly different uarchs. They already said their new GCN4 was designed to take both HBM and GDDR5, so if they're both GCN4 cards, why use the Vega arch codename at all?

It's baffling to me why they've chosen to do this. It's like nV saying all their cards are Pascal except the Ti and Titan, which are Volta.
 
I don't think HBM1 is a better technology; by the look of it AMD won't be using it on any new cards this year, even ones that only need 4GB.

HBM2 on the other hand does get me very interested, and my next card purchase will probably have it.

You realise that by the same argument you would be saying GDDR5 at 6GHz sucks, awful technology, but GDDR5 at 7GHz, I'm super interested in that and want my next card to have it... it really doesn't stop with you, does it?

There are at least two people on this forum who actually went out and bought AMD cards thinking people would then believe them when they said they weren't biased. You bought some AMD cards, insist that no one knows what they are talking about if they haven't physically touched that card, then trash it at every opportunity, and every time someone points this out you say "but I bought an AMD card, I can't be biased". One would suspect you were being paid by Nvidia, but then they'd hand you reasonable arguments and counterpoints, not the diatribe of gibberish you come out with.

I hate HBM1 (because AMD have it) but I love HBM2, a completely and utterly different technology. :rolleyes::rolleyes:
 