
Radeon RX 480 "Polaris" Launched at $199

AMD already said to reporters after the previous event that the 8GB version of the RX480 was, well, I forget exactly which, $229 or $239, which absolutely does leave room for a different card at $300. It may just mean a recommended max AIB price, though again, adding 30% or so to the cost for a custom cooling solution is... excessive.

If there is going to be a 490, the obvious thing AMD could do is put a pair of chips on the same PCB; these are not power-hungry GPUs, so it could be done quite easily.

RX 490 for a dual card would also work quite well with the way AMD name their cards.
 
Yup, this is the issue. I'm honestly expecting small Vega to give the 1080 a bit of a pasting and I wouldn't be surprised if big vega beat GP102 comfortably as well.

If small vega has HBM2, it will destroy 1080 in overall performance and performance/watt.

Total garbage

You have no idea what the specs of small vega will be and yet you make the above prediction.

As to HBM memory I would caution anyone considering buying a card with it to have a very good look at the reviews before you part with your money.

I don't think it is a coincidence that GDDR5X has become the memory to use instead of HBM.
 
Easy. It offers enough bandwidth for the current chips and is cheaper than HBM, and with the power requirements of the current generation you don't need the power savings from HBM.

AMD made that mistake on the Fury.

There will be chips from both sides which will need HBM2, but these aren't them.

In fact, I am pretty sure the Titan and 1080ti won't use HBM either, despite what people are hoping for.

I could not have put it better.:)

HBM has its uses but it is not a cure-all solution.
 
GDDR5X is used on literally one card... yet it has "become the memory to use", even though HBM2 is on two cards (two supposedly available... but no one has actually seen a working GP100).

This is a fact: if the 1080 used HBM to achieve its bandwidth, it would be in the region of 150W instead of 180W, and it would be dramatically harder for AMD to beat its performance/W with a GDDR5 RX480.

If Nvidia wanted to give the 1080 512GB/s of bandwidth, what with it losing ground to the Fury X and 980ti at 4K compared to 1080p/1440p, it would likely have to increase power by another 20W or so to achieve it; HBM wouldn't take more than an extra 5W to do the same.
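To put rough numbers on that, here is a minimal sketch; every wattage in it is an estimate claimed in this thread, not a measured figure:

```python
# Back-of-envelope sketch of the claimed GTX 1080 power deltas.
# All figures are the thread's estimates, not measurements.

STOCK_TDP = 180       # W: 1080 with GDDR5X at stock bandwidth (claimed)
HBM_SAVING = 30       # W saved if HBM supplied that same bandwidth (claimed)
G5X_TO_512GBS = 20    # W extra for GDDR5X to reach ~512 GB/s (claimed)
HBM_TO_512GBS = 5     # W extra for HBM to reach ~512 GB/s (claimed)

print(f"1080 with HBM, stock bandwidth: ~{STOCK_TDP - HBM_SAVING} W")                  # ~150 W
print(f"1080 with GDDR5X at 512 GB/s:   ~{STOCK_TDP + G5X_TO_512GBS} W")               # ~200 W
print(f"1080 with HBM at 512 GB/s:      ~{STOCK_TDP - HBM_SAVING + HBM_TO_512GBS} W")  # ~155 W
```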

The reason the Nano, using an older architecture, could beat Nvidia's newer architecture on performance/W was purely that it used HBM.

Whatever the power limit on GP102 is, Nvidia will run into the situation where the memory uses a lot more power, so the GPU core gets a lot less power within a reasonable single-card wattage. Again, HBM allowed the Nano to match a newer architecture in performance/W with an older one. Now think Vega, newer than Pascal and realistically with HBM, versus Pascal with GDDR5X, which uses significantly more power at any given level of bandwidth. AMD is going to have a HUGE advantage.

We also both have some supposedly leaked specs. It's funny: when you were using leaked GP102/GP104/anything-Nvidia specs, that was fine, but leaked Vega specs are a no because you say so.

From the GP106 > GP104 die-size gap, from most generations' gaps in die sizes, and from the RX480 die size, it's incredibly reasonable to presume small Vega will be a bigger die than GP104, and based on the supposed architectural efficiency of Polaris without HBM, it's extremely naive to believe a larger Vega core with HBM won't beat a 1080.

Leaks put it as a circa-4000-shader part, and it will probably be 350-400mm^2: because of that shader count, because of the need for a decent gap between it and the RX480 in die size and shader count, and because it needs enough shaders to utilise HBM2 bandwidth. All of that is, errm... an idea of what Vega's specs are, and all of those ideas would have it comfortably ahead of the 1080. What's more, if it uses HBM2 it can probably do that in the same power as the 1080; if it uses GDDR5(X) it will likely use a bit more power but may still have better performance/W.

So I was right in my previous post when I said garbage !!!!

You have absolutely no hard facts whatsoever and yet you are saying small vega will beat the 1080.

Perhaps you can tell us how many graphics points small vega will score on Firestrike Ultra if you are so sure.:D

I myself don't have any idea what the performance of future AMD cards will be which puts me in the same boat as nearly everyone else.
 
Kaap, with more of his ridiculous rhetoric on the demerits of HBM. As many have said before, Fiji's performance was nothing to do with HBM; get over it.

The only reason GDDR5X is being used in place of HBM on GP104 1080s is that it is cheaper to implement, as you do not need to worry about interposers etc. It is nothing to do with HBM's performance.

Indeed NVidia decided that GDDR5X was the best option for a high performance card.

How much faster is the 1080 than any single-GPU Fiji/HBM based card again? Answer: quite a lot.:D
 
As we all know HBM1 performance can be updated to work entirely differently by drivers... so somehow it is still the HBM1 that is the problem.

From what I recall, doesn't the Fury X beat a 980ti at 1080p, and at every resolution and every setting except Hyper (de-optimised memory storage Nvidia are paying devs to use to hurt 4GB cards, both Nvidia and AMD ones) and lowest settings... funnily enough.

But still, you know, HBM1 sucks; that it's beating the 980ti is irrelevant.

Fury is an unoptimised, overly large core built on an architecture not particularly designed for that number of shaders. More than anything it was a test bed for HBM: a way to implement it, get the HBM1 production chain established and ramping up, learn about HBM, and work out how to optimise their next architecture to use it. Vega will bring chips with architecture tweaks designed for both HBM and a higher number of shaders.

Fury X is the reason AMD is able to bring HBM2 to higher volume and cheaper products than Nvidia will manage this generation.

[image: dZ2ap2v.png]

512GB/s via GDDR5 is a split of 35W for the chips and 50W for the PHY layer; via HBM1 it's roughly 17W for the chips and 12W for the PHY. GDDR5X doesn't reduce the PHY-layer power much at all: regardless of bus size or chip power, moving X amount of data off a chip takes Y power without a huge amount of difference, so a 512-bit bus with 6Gbps chips and a 256-bit bus with 12Gbps chips will use about the same PHY power. You'll save maybe 10W at 512GB/s using GDDR5X over GDDR5, as you use half the chips, but the chips themselves run faster and use more power overall.
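To lay that split out (a minimal sketch; the wattages are the figures quoted above for ~512 GB/s, not datasheet numbers):

```python
# Memory subsystem power at ~512 GB/s, per the split quoted above.
# "chip" = the DRAM dies themselves; "PHY" = moving data off-package.
memory_power_w = {
    # tech:   (chip, PHY)
    "GDDR5":  (35, 50),
    "HBM1":   (17, 12),
}

for tech, (chip, phy) in memory_power_w.items():
    print(f"{tech}: {chip} W chips + {phy} W PHY = {chip + phy} W total")

# GDDR5 ~85 W vs HBM1 ~29 W: the roughly 50 W gap argued below, which
# GDDR5X barely dents because the PHY cost stays about the same.
```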

HBM2 makes this comparison even worse as it reduces chip power to achieve 512GB/s.

HBM will always use significantly less power; that ~50W difference can't go away. That means a 250W high-end chip, which will probably want around 512GB/s or more, will spend 75W or so purely on memory, leaving 175W for the chip. AMD, within the same power budget, will use maybe 25W for the memory, leaving 225W for the GPU.
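Worked through as a sketch (the 250W budget and both memory figures are the assumptions above):

```python
# How the claimed ~512 GB/s memory power eats into a 250 W board budget.
BOARD_POWER_W = 250  # assumed high-end single-card power limit

memory_cost_w = {
    "GDDR5/5X": 75,  # claimed memory power at ~512 GB/s
    "HBM2":     25,  # claimed memory power at ~512 GB/s
}

for tech, mem_w in memory_cost_w.items():
    print(f"{tech}: {BOARD_POWER_W - mem_w} W left for the GPU core")
# 175 W core vs 225 W core within the same 250 W budget.
```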

The bandwidth achievable with GDDR5/X is not a problem; it never was. The problem is the power it takes to achieve it. HBM will always use significantly less power than GDDR5.

The sole reason an older GCN architecture could remotely rival Maxwell, a much newer one, in performance/W was purely that HBM used much less power. It saved 50W. With GDDR5 the Fury X would need to use over 300W or, more likely, drop 500-1000 shaders; either way reducing performance/watt massively and making a Nano completely unachievable. Not the size, but the compelling performance in that sized package: it would have been so loud and hot it wouldn't have worked.

HBM2 will do the same and be an even bigger difference vs even higher bandwidth high end chips this generation.

I am more inclined to go with what AMD and NVidia have actually done, which is not use HBM1, as there are better solutions available to them.

One disadvantage to using HBM anywhere near a gaming Pascal core could be heat build-up. Although the 1080 does not use much power, there is still a lot of heat in a very small area, and the last thing the core needs is memory chips stacked right next to it, which could negatively affect performance.

Try thinking outside the box for a change DM instead of defending stuff that even AMD has dropped.

The next card that I buy in the very near future will be an AMD one so please don't even think about calling me biased either.:D
 
Polaris is a price/perf/efficiency based part, HBM is still too expensive to put on a mainstream part.

The only person not thinking outside of his box is you.

And now you list one of the disadvantages of HBM lol.

NVidia have not used HBM on the 1080 either, and that is not a cheap card. Could this be due to HBM's other limitations?
 
Your argument about heat. It just blew my mind and I couldn't treat it as a serious comment given that it's grounded in bull poop.

It is just a theory but quite a reasonable one.

If you have the GPU and memory chips in the same place, with the memory actually working right around the GPU, then yes, there will be a higher heat build-up. Warm-running memory chips will do nothing to make the GPU run cooler now, will they lol.
 
Look Kaap, keep going around in circles; the only person making himself look daft is you. Plenty of people understand the issues with memory capacity for first-gen HBM and where the issues are with Fiji performance, yet you keep up this little act because you have to be right about it when you are not.

And the same again with GP104: it is all about costs that go beyond the part itself, such as sorting out production lines. That is why their HBM2 efforts are going towards GP100, since it is their first HBM-based part, just as Fiji was AMD's HBM/interposer pipe cleaner. Mainly because Nvidia can charge an arm and a leg for it and recover those production setup costs, having had less time to get this done in comparison to AMD, who have had years of R&D time to get it right.

If NVidia have decided not to include it with the GP104 and GP102 cards, it has nothing to do with cost. How much did the Fury X sell for again?

I think I will side with AMD and NVidia on this one and back their reasoning for finding better memory solutions for their cards.
 
Buying an AMD card doesn't make you not biased, it provides you with a platform to say you're not biased and think you're serious.

Pascal is hot, with a very small area and a lot of heat... really? Literally only a few months ago you were saying Fiji ran hotter than Hawaii because you didn't understand the concept of power output relative to die area. You actually attempted to refute the concept and deny basic physics: when it was pointed out that the Fury X had a significantly lower W/mm^2 output than Hawaii, you insisted I was wrong and didn't know what I was talking about.

It's hilarious that back then you were arguing against an incredibly basic physics principle, but now you're using it to defend a 1080.

One, HBM would drop the actual power usage coming from the memory controller by 30-35W at that bandwidth level; and two, the HBM dies are separate, so it wouldn't result in extra heat build-up but would in fact lead to a cooler-running 1080 core, lower power usage and higher sustained clockspeeds within the same power budget.


AMD didn't drop HBM: they used it on a product that is still sold, and their next product uses HBM2. In the same way that Nvidia is using GDDR5 on the 1070, AMD is using GDDR5 on lower-end cards too. There was never any intention to have HBM in every segment instantly, due to cost, and there was never any intention for Nvidia to use GDDR5X in every card, due to cost. AMD have specifically stated the intention to use HBM2 and so have Nvidia; neither has dropped the technology, but both will use the latest version of it.

But when Nvidia launches a GDDR5X card using 12Gbps chips, I'll be sure to spout the ridiculous idea that Nvidia has dropped 10Gbps GDDR5X chips because they are a failed technology and no good... because that is the incredibly ridiculous argument you're making when you claim that neither AMD nor Nvidia will use HBM1 any more.

So, are AMD manufacturing any new cards that use HBM1? No.

Your defence of HBM1 reminds me of a very famous parrot in a video

The HBM1 Parrot

And just to remind you I got into this debate today because you made an outrageous performance claim for Small Vega based on absolutely no facts whatsoever !!!
 
Kaap, HBM is the way forward, end of story. Everything has its shelf life, and GDDR5 is coming to the end of its life at the top end. HBM and technologies like it will be taking over.

You could go pray at the shrine like Flopper and see if you are worthy of receiving such tech in the future if that's what worries you.

Indeed HBM or similar is the way forward but HBM1 is history.
 
Drunkenmaster is right here. On all counts.

Which I dont think I've ever said before!

C'mon Kaap, this isn't a winnable argument, man. I don't think anybody can prove with absolute certainty that your theories (or the theories you've heard) are completely untrue, but there's enough evidence suggesting they most likely are. Meaning that believing them over the more rational explanation is... irrational.

I don't have a problem with HBM2 or higher but HBM1 left a lot to be desired.
 