Luckily I dodged the bullet that was the FX and bought my first ATI card, the 9500 PRO, and that was a great card!!
I actually wanted a card which could run DX9 OK!
Yes, the 9500 was a great card, and hacking it to run as a 9700 was a plus.

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
I don't think it's right to say they ****** up if they don't reach a certain performance. That's quite a negative statement tbf.
For all we know these might be cut-down chips while Apple and/or Sony get the best-binned chips. Who knows? There are plenty of potential variables.
We are expecting 390/980-level performance at a great price. Anything more is pure speculation, so let's wait to find out.
Maybe I'm being melodramatic, but the current R9 380 is ~40% slower than the R9 390. If the RX 480 (its replacement) is only ~40% faster than the R9 380 and barely beats the R9 390, then it's simply not a good improvement. Compare that to the ~60-70% improvement (on average) we see from the GTX 970 to the GTX 1070.
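One thing worth being careful with here is that "X% slower" and "X% faster" are not symmetric, which is part of why the figures in this thread don't quite line up. A minimal sketch of that arithmetic, using only the rough percentages quoted above (these are the thread's estimates, not benchmark data):

```python
# Rough relative-performance arithmetic for the figures quoted above.
# These numbers are the thread's estimates, not measured results.

r9_390 = 100.0                   # take the R9 390 as the baseline
r9_380 = r9_390 * (1 - 0.40)     # "~40% slower" than the 390 -> 60
rx_480 = r9_380 * (1 + 0.40)     # "~40% faster" than the 380 -> 84

print(f"R9 380 relative to R9 390: {r9_380 / r9_390:.0%}")  # 60%
print(f"RX 480 relative to R9 390: {rx_480 / r9_390:.0%}")  # 84%

# The asymmetry: if the 380 is 40% slower, the 390 is ~67% faster than
# the 380 (100/60), so "+40% over the 380" still lands short of the 390
# under these assumptions.
print(f"R9 390 over R9 380: {r9_390 / r9_380 - 1:.0%}")     # ~67%
```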
Interesting comments from one of the AMD technical marketing guys on Reddit regarding why they only really showed the mGPU comparison:
https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/
You are just relieved, or hoping, that the 480 is not Fury-level performance. I think it will come close when overclocked, though.
In any case, let's see what the 480X brings, shall we?
Right, so they don't want to take away from the reviewers; OK, I can understand that.
I don't buy that for a second. It just looks better if every publication shows a screenshot with AMD winning and then spouts the same line about it only costing $500 and beating a $700 card. Reviewers get their clicks either way; hardly anyone watches these press events, so everyone goes to review sites to get the event updates. Later they go to review sites for benchmarks, because Nvidia's/AMD's benchmarks in a press event are 100% legit, right?
FX are great CPUs stuck in a time filled with crappy APIs; Mantle showed how an FX-8350 matched an Intel 5930K in gaming.
Why are we now comparing it to a 380?
It's gone from a 390X to a Fury Nano to a 380.
What happened in the last 3 hours that it's now compared to a current-gen £160 card?
People need reminding that this benchmark exists, http://www.3dmark.com/3dm11/11263084
If you want to pass off some numbers and comparisons, accompany them with some facts, instead of just making blanket statements and claiming to be more intelligent than everyone else.
The FX CPUs were very forward-thinking, designed for software that simply didn't exist at the time, while ignoring the fact that such software would take years to be developed. Multi-threading is very hard to do right; I know, I do it professionally, and I can spend days debugging the most innocuous-looking code. Strangely, AMD made the same mistake with Hawaii and Fiji: hardware that in theory should be incredibly powerful but is hammered by the current APIs and some front-end bottlenecks. This is something of a theme for AMD - tomorrow's technology today. It sounds great and forward-thinking, but in reality it always comes back to bite them. HBM is great stuff, but utterly pointless on a Fury X. They pioneered tessellation, then totally ignored it when it became practical.
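On the "innocuous-looking code" point, here is a tiny, hedged illustration of why threaded code is so easy to get subtly wrong. It's a generic lost-update example, not anything from AMD's or anyone else's codebase, and whether the race actually shows up depends on the interpreter and scheduling - which is exactly what makes this class of bug so painful:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_worker(n):
    global counter
    for _ in range(n):
        # Read-modify-write without a lock: two threads can read the same
        # old value and one update gets lost. It looks harmless and may
        # pass most test runs, which is what makes it hard to debug.
        counter += 1

def safe_worker(n):
    global counter
    for _ in range(n):
        with lock:          # the fix: make the read-modify-write atomic
            counter += 1

def run(worker, n=100_000, threads=4):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

print("unsafe:", run(unsafe_worker), "expected:", 400_000)  # may come up short
print("safe:  ", run(safe_worker),   "expected:", 400_000)  # always exact
```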
Intel made a similar error with the Itanium. The x86 architecture sucks and is far too antiquated, built on crumbling, bloated foundations. Itanium's IA-64 was a fantastic improvement in theory. In practice, making decent compilers was incredibly tough, and even when they managed it, the platform failed because no software developers would code specifically for that architecture.
Ironically, AMD choosing the cheaper and simpler 64-bit extension of x86 was a fantastically successful strategic move that really knocked Intel down a peg.
Surely the maths is as simple as 62.5 / 1.83 = 34.153...
It is. That is the correct result.
That result, however, can't be right, because that would be more like R9 380X performance.
That test is certainly janky, and we can't really draw any conclusions from it, annoyingly.
Bring on the independent reviews!
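For what it's worth, the division itself checks out; the open question is what the two numbers actually represent. A quick sketch, assuming (as the thread seems to) that 62.5 is the demoed dual-card AotS frame rate and 1.83x the quoted mGPU scaling factor - both assumptions here, not confirmed figures:

```python
# Quick sanity check of the arithmetic discussed above.
# Assumption (not an official figure): 62.5 is the dual-RX 480 frame rate
# from the AotS demo and 1.83x is the claimed mGPU scaling.
dual_fps = 62.5
scaling = 1.83

single_fps = dual_fps / scaling
print(f"Implied single-card figure: {single_fps:.3f}")  # ~34.153

# The maths is trivially right; whether those are the right inputs
# is what the independent reviews will have to settle.
```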
It is, unless your maths skills are worse than a 5-year-old's, or you are trolling.
As for tessellation, again, nope: they pioneered it, and now it's used in just about every game for more realistic looks - better rounding on character details, better and more realistic mountains, more realistic stone shapes. What you mean to say is that once Nvidia started randomly over-tessellating stuff to win benchmarks, AMD didn't play that game and didn't waste the transistor budget on something that brought zero IQ benefit to their users. They provided ample tessellation performance for the maximum IQ benefit without over-providing just to win a urinating contest with Nvidia that didn't need winning and only hurt performance in games for no reason.
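To put a rough number on the "zero IQ benefit" point: once subdivision pushes triangles below roughly a pixel on screen, extra tessellation can't add visible detail, it only adds vertex and setup work. A back-of-the-envelope sketch - the patch size and the triangle-count formula are made-up illustrative values, not taken from any real game or driver:

```python
# Back-of-the-envelope: how big are the triangles a tessellated patch
# produces on screen? Illustrative numbers only.
screen_px = 64 * 64              # suppose a patch covers ~64x64 pixels on screen
for factor in (4, 8, 16, 32, 64):
    tris = 2 * factor * factor   # rough triangle count for a uniformly tessellated quad patch
    px_per_tri = screen_px / tris
    note = "  <- sub-pixel triangles: no visible gain, pure overhead" if px_per_tri < 1 else ""
    print(f"factor {factor:>2}: ~{tris:>5} triangles, ~{px_per_tri:7.2f} px/triangle{note}")
```

Under these made-up numbers the jump from factor 32 to 64 quadruples the triangle count while the triangles are already only a couple of pixels each, which is the sense in which cranking the factor further is a benchmark stunt rather than an image-quality feature.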