
AMD Polaris architecture – GCN 4.0

I don't think it's right to say they ****** up if they don't reach a certain level of performance. That's quite a negative statement, tbf.

For all we know these might be cut-down chips while Apple and/or Sony get the best-binned chips. Who knows? There are plenty of potential variables.

We are expecting 390/980 performance at a great price. Anything more is pure speculation, so let's wait to find out.
:-)

Maybe I'm being melodramatic, but the current R9 380 is ~40% slower than the R9 390. If the RX 480 (its replacement) is only ~40% faster than the R9 380 and barely beats the R9 390, then it's simply not a good improvement. Compare that to the ~60-70% improvement (on average) we see from the GTX970 to the GTX1070.
 
Last edited:
Maybe I'm being melodramatic, but the current R9 380 is ~50% slower than the R9 390. If the RX 480 (its replacement) is only ~50% faster, then it's simply not a good improvement compared to the ~70% (on average) we see from the GTX970 to the GTX1070.

Remember there is a price difference to factor in between the 970 and 1070 also.
The 480 may not be as big an improvement as some people expect, but if it performs as AMD says it will at the $200 they state, then that's fine. They will have done a great job at a great price, and they should be commended for lowering the VR barrier to entry as well as offering a great gaming solution at that price.
 
Maybe I'm being melodramatic, but the current R9 380 is ~40% slower than the R9 390. If the RX 480 (its replacement) is only ~40% faster than the R9 380 and barely beats the R9 390, then it's simply not a good improvement. Compare that to the ~60-70% improvement (on average) we see from the GTX970 to the GTX1070.

AMD cut power consumption in half. They got twice the performance per watt, similar to Nvidia. I don't see either as a failure.
 
You are just relieved, or hoping, that the 480 is not Fury-level performance. I think it will come close though when overclocked.
In any case, let's see what the 480X brings, shall we.

What? :confused:

I'm entirely indifferent. The 480 is exactly what the rumours and AMD's marketing indicated it would be. I'm pleasantly surprised at the price; it's great for consumers and will give Nvidia a hard time when it releases the 1060. I think the 480 is going to be a great card and a really good choice for many budget-minded people.
 
Last edited:
Maybe I'm being melodramatic, but the current R9 380 is ~40% slower than the R9 390. If the RX 480 (its replacement) is only ~40% faster than the R9 380 and barely beats the R9 390, then it's simply not a good improvement. Compare that to the ~60-70% improvement (on average) we see from the GTX970 to the GTX1070.

I think the 480 will be a great card for both AMD and many gamers alike.
How much faster than a 380, we really won't know until more tests come out.
 
Right, so they don't want to take away from reviewers. OK, I can understand that.

Don't buy that for a second. It just looks better if every publication shows a screenshot with AMD winning, and then spouts the same line about it only costing $500 and beating a $700 card. Reviewers' clicks come either way; hardly anyone watches these press events, so everyone goes to review sites to get the event updates. Later they go to review sites for benchmarks, because Nvidia's/AMD's benchmarks at a press event are 100% legit, right?
 
Why are we now comparing it to a 380?

It's gone from a 390X to a Fury/Nano to a 380.

What happened in the last 3 hours that it's now compared to a current-gen £160 card?

People need reminding that this benchmark exists, http://www.3dmark.com/3dm11/11263084

If you want to pass off some numbers and comparisons, accompany them with some facts, instead of just making blanket statements and claiming to be more intelligent than everyone else.
 
Last edited:
Don't buy that for a second. It just looks better if every publication shows a screenshot with AMD winning, and then spouts the same line about it only costing $500 and beating a $700 card. Reviewers' clicks come either way; hardly anyone watches these press events, so everyone goes to review sites to get the event updates. Later they go to review sites for benchmarks, because Nvidia's/AMD's benchmarks at a press event are 100% legit, right?

While hardly anyone watches these events, the information from them spreads like wildfire, hence all the articles about the event and the many threads about the information from it.
 
FX are great CPUs in a time filled with crappy APIs; Mantle showed how an FX 8350 matched an Intel 5930 in gaming.

The FX CPUs were very forward thinking, designed for software that simply didn't exist at the time, ignoring the fact that such software would take some time to be developed. Multi-threading is very hard to do right - I know, I do it professionally, and I can spend days debugging the most innocuous-looking code. Strangely, AMD made the same mistake with Hawaii and Fiji: hardware that in theory should be incredibly powerful, but is hammered by the current APIs and some frontend bottlenecks. This is something of a theme for AMD - tomorrow's technology today. Sounds great, forward thinking, but in reality it always comes back to bite them. HBM: great stuff, but utterly pointless on a Fury X. Pioneering tessellation, then totally ignoring it once it becomes practical.

Intel made a similar error with the Itanium. The x86 architecture sucks and is far too antiquated - built on crumbling, bloated foundations. Itanium (IA-64) was a fantastic improvement in theory. In practice they never made decent compilers - making decent compilers was incredibly tough - and even when they did, it was a failure, because no software developers would code specifically for that architecture.

Ironically, AMD choosing the cheaper and simpler 64-bit extension of x86 was a fantastically successful strategic move that really knocked Intel down a peg.
 
Why are we now comparing it to a 380?

It's gone from a 390X to a Fury/Nano to a 380.

What happened in the last 3 hours that it's now compared to a current-gen £160 card?

People need reminding that this benchmark exists, http://www.3dmark.com/3dm11/11263084

If you want to pass off some numbers and comparisons, accompany them with some facts, instead of just making blanket statements and claiming to be more intelligent than everyone else.

The 480 is certainly not comparable to the 380 in the slightest.
 
The FX CPUs were very forward thinking, designed for software that simply didn't exist at the time, ignoring the fact that such software would take some time to be developed. Multi-threading is very hard to do right - I know, I do it professionally, and I can spend days debugging the most innocuous-looking code. Strangely, AMD made the same mistake with Hawaii and Fiji: hardware that in theory should be incredibly powerful, but is hammered by the current APIs and some frontend bottlenecks. This is something of a theme for AMD - tomorrow's technology today. Sounds great, forward thinking, but in reality it always comes back to bite them. HBM: great stuff, but utterly pointless on a Fury X. Pioneering tessellation, then totally ignoring it once it becomes practical.

Intel made a similar error with the Itanium. The x86 architecture sucks and is far too antiquated - built on crumbling, bloated foundations. Itanium (IA-64) was a fantastic improvement in theory. In practice they never made decent compilers - making decent compilers was incredibly tough - and even when they did, it was a failure, because no software developers would code specifically for that architecture.

Ironically, AMD choosing the cheaper and simpler 64-bit extension of x86 was a fantastically successful strategic move that really knocked Intel down a peg.

It's a bit more pragmatic than that: AMD needed to run 64-bit on the architecture that was the standard at the time, Intel's x86, so you ended up with x86_64.

The standalone architecture (AMD64) is very successful independently of Intel - in fact Intel runs it too, in servers.
 
The FX CPUs were very forward thinking, designed for software that simply didn't exist at the time, ignoring the fact that such software would take some time to be developed. Multi-threading is very hard to do right - I know, I do it professionally, and I can spend days debugging the most innocuous-looking code. Strangely, AMD made the same mistake with Hawaii and Fiji: hardware that in theory should be incredibly powerful, but is hammered by the current APIs and some frontend bottlenecks. This is something of a theme for AMD - tomorrow's technology today. Sounds great, forward thinking, but in reality it always comes back to bite them. HBM: great stuff, but utterly pointless on a Fury X. Pioneering tessellation, then totally ignoring it once it becomes practical.

Intel made a similar error with the Itanium. The x86 architecture sucks and is far too antiquated - built on crumbling, bloated foundations. Itanium (IA-64) was a fantastic improvement in theory. In practice they never made decent compilers - making decent compilers was incredibly tough - and even when they did, it was a failure, because no software developers would code specifically for that architecture.

Ironically, AMD choosing the cheaper and simpler 64-bit extension of x86 was a fantastically successful strategic move that really knocked Intel down a peg.

Again, without HBM and the 30-40W it saves, then without a major update to the architecture, a Fury at that size with GDDR5 would have had far less power available for the GPU and would have been significantly slower. That is the primary goal of HBM, and it was in absolutely no way pointless for Fury. Fury consistently beats the 980 Ti in DX12, yet without HBM it would not have done so... HBM was anything but pointless.

As for tessellation, again, nope: they pioneered it, and now it's used in just about every game for more realistic looks - better rounding on character details, better and more realistic mountains, more realistic stone shapes. What you mean to say is, once Nvidia started randomly over-tessellating stuff to win benchmarks... AMD didn't play that game and didn't waste the transistor budget on something that brought zero IQ benefit to their users. They provided ample tessellation performance for the maximum IQ benefit without over-providing just to win a urinating contest with Nvidia that didn't need winning and only hurt performance in games for no reason.
 
Maybe I'm being melodramatic, but the current R9 380 is ~40% slower than the R9 390. If the RX 480 (its replacement) is only ~40% faster than the R9 380 and barely beats the R9 390, then it's simply not a good improvement. Compare that to the ~60-70% improvement (on average) we see from the GTX970 to the GTX1070.

It is actually far more than that:

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1080/images/perfrel_2560_1440.png


An R9 390 is 60% faster than an R9 380 and an R9 390X is 74% faster than an R9 380. An R9 390 is 81% faster than a GTX960 and an R9 390X is 97% faster than a GTX960.

TPU places the R9 280X and R9 380X as having the same performance in recent reviews:

http://tpucdn.com/reviews/ASUS/GTX_950/images/perfrel_2560_1440.png

So that makes an R9 390 37% faster than an R9 380X, and an R9 390X 49% faster than an R9 380X.

So an RX480 with either R9 390 or R9 390X level performance will be a massive upgrade in the segment, with no price increase in the UK over the cards it is replacing. The GTX1070 is not comparable, since you are looking at £320 at least for the cheapest model, which is a good £70 increase over the £250 GTX970 it is replacing.

Even if you take dollar amounts into consideration, it does not look good for the GTX1070 - the performance increase still comes at a price increase.

Also, since the GTX1070 is meant to be around Titan X level, it means it is around 50% faster than a GTX970, looking at the TPU figures.
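The "X% faster" figures above fall straight out of TPU's relative-performance charts. A quick sketch of the arithmetic - note the chart values below are illustrative stand-ins normalised so the R9 380 = 100 (only the resulting ratios come from the post, not TPU's exact numbers):

```python
# Relative-performance scores, normalised to R9 380 = 100.
# Illustrative values chosen to reproduce the ratios quoted above.
scores = {
    "R9 380": 100.0,
    "R9 390": 160.0,
    "R9 390X": 174.0,
    "GTX 960": 88.4,
}

def pct_faster(a: str, b: str) -> float:
    """How much faster card a is than card b, in percent."""
    return (scores[a] / scores[b] - 1.0) * 100.0

print(round(pct_faster("R9 390", "R9 380")))   # 60
print(round(pct_faster("R9 390X", "R9 380")))  # 74
print(round(pct_faster("R9 390", "GTX 960")))  # 81
print(round(pct_faster("R9 390X", "GTX 960"))) # 97
```

The point being that "X% faster" is a ratio of chart scores, not a difference of percentages, which is why the gaps look bigger than a casual read of the chart suggests.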
 
Last edited:
Surely the Maths is as simple as 62.5/1.83 = 34.153...

It is. That is the correct result.

That result however can't be right, because that would be more like R9 380X performance.

That test is certainly janky, and we can't really draw any conclusions from it, annoyingly.

Bring on the independent reviews!

It is, unless your maths skills are worse than a 5-year-old's, or you are trolling. :D

It isn't that simple, as the GPU utilization was 51% for the single batch, 71.9% for the medium batch, and 92.3% for the heavy batch. Source

This is closer to the truth, but that was run on an older version, so performance could have improved in the newest one. This would put it between a Titan X and a Fury X.
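For what it's worth, here is the naive division from above plus a crude utilisation adjustment - a sketch only; the adjustment is my illustration of why the utilisation figures matter, not the benchmark's actual methodology:

```python
# Naive reading of the AotS numbers quoted earlier in the thread:
naive_fps = 62.5 / 1.83
print(round(naive_fps, 2))  # 34.15 - which would be ~R9 380X territory

# Per-batch GPU utilisation reported for that run (from the post above).
# If the GPU is only partly busy, the observed fps understates what the
# card could do when fully loaded; dividing by utilisation gives a very
# rough GPU-bound ceiling. Illustration only, not AotS methodology.
utilisation = {"single": 0.510, "medium": 0.719, "heavy": 0.923}
ceiling = {batch: naive_fps / u for batch, u in utilisation.items()}
for batch, fps in ceiling.items():
    print(f"{batch}: ~{fps:.1f} fps if fully GPU-bound")
```

In other words, the 51%-utilised single batch drags the simple average down hard, which is why the plain 62.5/1.83 figure undersells the card.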
 
As for tessellation, again, nope, they pioneered it and now it's used in just about every game for more realistic looks, better rounding on character details, better and more realistic mountains, more realistic stone shapes. What you mean to say is, once Nvidia started randomly over tessellating stuff to win benchmarks... AMD didnt' play that game and didn't waste the transistor budget on something that brought zero IQ benefit to their users. They provided ample tessellation performance for max IQ benefit of tessellation without over providing just to win a urinating contest with Nvidia that didn't need winning and only hurt performance in games for no reason.

Nice rewrite of history there - ATI/AMD dropped the ball on tessellation; it was Nvidia who pushed it into actual usage (when the time was right), even if subsequently they may have used it politically.
 
AMD have simply got to get beyond Hawaii performance with Polaris, or it's underwhelming as hell. We already saw brand-new Hawaii-based cards for £200 ages ago.

 