AMD Polaris architecture – GCN 4.0

Interesting comments on where we might see some decent improvements with Polaris:

http://forums.anandtech.com/showpost.php?p=38044776&postcount=660
http://forums.anandtech.com/showpost.php?p=38049554&postcount=684

So, interestingly, we might see some decent improvements in DX11 performance.

Complete nonsense, really. "New command processor" is all he's read, and he's simply decided that means a longer command buffer. There are a lot of things the command processor does, a LOT. Almost every generation will handle longer buffers by default, and everything should always eventually be made more efficient; an architecture having a new version of something doesn't mean that something was the problem.

He's also diagnosed the problem incorrectly. Nvidia moved hardware scheduling, and thus a lot of their command buffer handling, off the die to be done on the CPU. They have better multithreading in their drivers, which somewhat works around what DX11 wants, and that could very well be the reason Nvidia have had so many more stability issues with DX11 games in the past couple of years since they've been doing this. The AMD bottleneck, shall we call it, for draw calls in DX11 is mostly in the drivers and mostly down to DX11 itself; it's not on the GPU. If it were on the GPU, the same hardware wouldn't be able to handle five times the draw calls through DX12.


So he's seen "command processor", jumped to conclusions about what fixes it will bring, and decided the DX11 problem is in the command processor, even though every piece of evidence points to this not being the case at all.
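
To illustrate the point with a toy model (nothing below is real D3D or driver code; the thread counts and per-call costs are made up): if each draw call carries a fixed CPU cost to encode, a single submission thread, which is effectively the DX11 situation, caps throughput no matter how fast the GPU's front end is, while parallel recording, the DX12 situation, scales with cores until the GPU itself becomes the limit.

Code:
// Toy model: per-call CPU cost, not the GPU, caps draw-call throughput.
// Names and numbers are illustrative, not real D3D API calls.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<long> submitted{0};

// Simulate the fixed CPU cost of validating/encoding one draw call.
void record_calls(long calls) {
    for (long c = 0; c < calls; ++c) {
        volatile long sink = 0;
        for (long i = 0; i < 20000; ++i) sink += i;
        submitted.fetch_add(1, std::memory_order_relaxed);
    }
}

// Run `threads` recorders and return draw calls per second.
double run(int threads, long calls_per_thread) {
    submitted = 0;
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i)
        pool.emplace_back(record_calls, calls_per_thread);
    for (auto& t : pool) t.join();
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
    return submitted.load() / dt.count();
}

int main() {
    double st = run(1, 40000); // "DX11": one thread encodes everything
    double mt = run(4, 40000); // "DX12": four threads record in parallel
    std::printf("1 thread: %.0f calls/s, 4 threads: %.0f calls/s\n", st, mt);
    // If the GPU front end were the limiter, the second number could not
    // approach 4x the first.
}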
 

sensible post

Polaris brighter than ever from AMD :cool:
 
In terms of improvements in the command processor: this thing deals with all interactions between the GPU cores and the outside world. Improvements within it are more than likely to help with making use of more cores within the GPU's structure, as well as with throughput. One of the other improvements they mentioned was "instruction prefetching", which the command processor would mediate. An improvement in the command processor should also raise the top-end number of instructions/draw calls these cards can receive.

Instruction prefetching has been used for years in CPUs, so it's notable to be getting it on a GPU if it has never been in any previous architecture. This should help compute-based performance and hopefully general rendering performance.

But we still know little about how FinFET and the core improvements have affected performance. Considering that 14nm + FinFET alone allows roughly twice the logic in the same area at nearly a third of the power usage, we could expect the mentioned 232mm^2 part to have performance around 390X to Fury levels. With improvements in gate switching and the general architecture on top, this part's performance could go beyond an overclocked 980 Ti if the GPU's frequency scales to 1.5GHz or even beyond.
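
As rough napkin maths on that claim (the 2x density and 232mm^2 figures are the post's speculation, the Hawaii/Fiji numbers are the known 28nm parts, and raw ALU throughput is of course not the same thing as game performance):

Code:
// Back-of-the-envelope only: scales Hawaii's shader count by die-area
// ratio under an assumed 2x density gain, then compares ALUs x clock
// against Fury X. Real performance will not scale this cleanly.
#include <cstdio>
#include <initializer_list>

int main() {
    // Known 28nm reference points (public specs).
    const double hawaii_mm2 = 438, hawaii_sps = 2816;  // R9 390X die / shaders
    const double fiji_sps = 4096, fiji_mhz = 1050;     // Fury X shaders / clock

    // The post's assumptions: ~2x density from 14nm FinFET, 232mm^2 die.
    const double polaris_mm2 = 232, density_gain = 2.0;
    const double equiv_28nm_area = polaris_mm2 * density_gain;  // ~464 mm^2

    // Crude shader-count estimate by area ratio against Hawaii.
    const double est_sps = hawaii_sps * equiv_28nm_area / hawaii_mm2;  // ~2980

    for (double mhz : {1000.0, 1266.0, 1500.0}) {
        double vs_fury = (est_sps * mhz) / (fiji_sps * fiji_mhz);
        std::printf("at %4.0f MHz: ~%.0f SPs, %.2fx Fury X raw ALU rate\n",
                    mhz, est_sps, vs_fury);
    }
    // At ~1.5 GHz the napkin maths lands around Fury X's raw throughput,
    // which is roughly the post's "390X to Fury, maybe beyond" range.
}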
 
DM just said that AMD's drivers will continue to be a problem... so yes it was a sensible post... but I am surprised at you agreeing with that

No I didn't. I said Nvidia's continual breaking of their own drivers from working around DX11 is an issue, and that AMD not breaking DX11, and having had significantly more stable drivers for the past two years, would continue.

How many Nvidia users had trouble with Witcher 3 again? How many Game Ready drivers were there before it worked? How well did that game run on the drivers I had installed some 4 months before the game launched? How much did my 290X utterly spank a 780 Ti (which cost a hell of a lot more) in that game, stable and from the moment of launch?

What do you think will happen to DX11 performance when Nvidia brings their scheduler back on die with Pascal?
 

If you had seen his original post, which he removed, I'm pretty sure you would see this in a somewhat different light. There is a reason AMD got negative performance out of DX11 command lists and decided not to use them at all. Polaris will feature upgrades similar to Nvidia's GigaThread engine, which should help with this. And if you actually watch the video of Raja Koduri on Polaris, he pretty much says this as well.

Edit: it wasn't Koduri, but Mike Mantor. "Bigger instructions buffer for better single-threaded performance"

I'm not gonna start an argument here, as the truth is we have one theory against another, and nobody can prove either yet. We just have to wait and see what happens. I personally hope he's wrong, because if he's right and the bottleneck can only be removed by new hardware, we Fury users will be stuck with our cards under-utilised in DX11 draw-call-heavy scenes.

AMD also has very talented driver engineers. No matter how much we blame them, the truth is that if proper multithreading were possible in the driver, they would have found a solution by now. They get multithreading benefits in smaller applications and loads, but negative performance under huge load (like 3DMark's API overhead test, ST vs MT).

Link to a test where multithreading benefits AMD:

http://forums.anandtech.com/showpost.php?p=38042407&postcount=628

Immediate -> 8-9 fps
ST Def / Scene -> 8.5-9.5 fps
MT Def / Scene -> 23-24 fps
ST Def / Chunk -> 8-9 fps
MT Def / Chunk -> 19-20 fps
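
A toy illustration of that "negative scaling under load" pattern (this is not AMD's driver, just a generic model of work funnelled through one shared lock): when the per-item work is tiny and every thread contends on the same lock, adding threads can make the same total work slower.

Code:
// Toy model: threads funnelling work through one shared lock. With a
// tiny critical section, lock handoff and cache-line bouncing dominate,
// so the multithreaded run can be slower than the single-threaded one.
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

std::mutex driver_lock;   // stand-in for a serialised driver path
long shared_counter = 0;

void worker(long ops) {
    for (long i = 0; i < ops; ++i) {
        std::lock_guard<std::mutex> g(driver_lock);
        ++shared_counter;  // the "work" is trivially small on purpose
    }
}

// Run the same total work on `threads` threads; return elapsed seconds.
double run(int threads, long total_ops) {
    shared_counter = 0;
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i)
        pool.emplace_back(worker, total_ops / threads);
    for (auto& t : pool) t.join();
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
    return dt.count();
}

int main() {
    const long ops = 20000000;
    std::printf("1 thread: %.2fs, 8 threads: %.2fs (same total work)\n",
                run(1, ops), run(8, ops));
}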
 
+1

I said this ages ago. Fiji was a first-steps experiment with HBM technology, originally aimed at beating the best Nvidia card out at the time (the GTX 980), and it did so. However, Nvidia had a sneaky trick up its sleeve: it took the Titan X, nibbled a little bit off it, and chucked it out at a price/performance point that was just too good to be true, right before the Fiji cards surfaced.

In truth it was a masterstroke of marketing and shrewdness... even if it did annoy those who bought a Titan X only a few months before.

I hope the Fiji cards' life is not too short... as I bought one :p Mind you, it runs brilliantly and does what I want it to do. Very excited about the Polaris/Pascal stuff coming in 2016/17 though.
:D

There is no such thing as a short life for an AMD card. Look at the 290X and the kind of boost it received last summer. I am sure Fiji will still be going strong for a few years with new drivers.
 
I'm not gonna start an argument here, as the truth is we have one theory against another, and nobody can prove either yet. We just have to wait and see what happens.



Pretty much this. We all know (or at least hope) that the new GPUs will improve on the old ones; exactly how, we will have to wait and see. All we have been told is a few (sometimes vague) tweets about how things will improve, and I wouldn't take any of that seriously until the cards are out and available for testing, so we can see what has actually changed.

A PR man tweets that things are gonna improve 100 times; do we take that as gospel or just wait and see? The cynical part of me wants to say: if it's AMD saying it, it has to be true; if it's Nvidia, of course it's rubbish. ;)
 
How many Nvidia users had trouble with Witcher 3 again? How many Game Ready drivers were there before it worked? How well did that game run on the drivers I had installed some 4 months before the game launched? How much did my 290X utterly spank a 780 Ti (which cost a hell of a lot more) in that game, stable and from the moment of launch?

I am genuinely surprised that you think Witcher 3 caused so many problems for anybody, let alone Nvidia users. HairWorks stuff aside, I thought the game was well received and pretty bug-free from a technical standpoint. Certainly there were issues, but the vast majority of the patches from CDPR addressed gameplay bugs, not technical ones.

Anyway, the 290X spanking the 780 Ti is not something I remember, not at 1080p or 4K (which was unplayable on any single card at ultra settings). Certainly, looking back at reviews such as TechSpot's, the 780 Ti was either just in front or just behind (at unplayable framerates).

So, care to clarify either of those statements? You know, with some proof or something.
 
I have to say though, I do find it very, very odd that the 980 is apparently 30% faster than the 970 with only 23% more CUDA cores. That's impossible; it's way over 100% scaling. One CUDA core does not do the work of more than one CUDA core, and we know that at best core scaling is 0.7 to 1.
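
For what it's worth, the arithmetic behind that objection (the 0.7 scaling factor is the rule of thumb from the post above, not a measured constant; the clocks are Nvidia's reference base clocks):

Code:
// Checking the 980-vs-970 scaling claim with the 0.7-per-core rule
// of thumb from the post above.
#include <cstdio>

int main() {
    const double cores_980 = 2048, cores_970 = 1664;  // CUDA core counts
    const double base_980 = 1126, base_970 = 1050;    // reference base MHz

    const double core_ratio = cores_980 / cores_970;              // ~1.23
    const double scaling = 0.7;                                   // rule of thumb
    const double from_cores = 1.0 + scaling * (core_ratio - 1.0); // ~1.16
    const double from_clocks = base_980 / base_970;               // ~1.07

    std::printf("cores alone: +%.0f%%, cores + clocks: +%.0f%%, claimed gap: +30%%\n",
                (from_cores - 1.0) * 100.0,
                (from_cores * from_clocks - 1.0) * 100.0);
    // ~16% and ~25%: even allowing for the clock gap, a 30% difference
    // needs something extra, e.g. the 970's segmented 3.5GB+0.5GB memory.
}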
 

Indeed, the 970's memory situation is clearly crippling performance there.
 
Oh well, I'll just stick with my 290s then. That's a bit disappointing; I thought it might have been a bit sooner :(

Well, your 290s should tide you over till something higher-end from AMD/Nvidia comes along. AMD doesn't abandon previous-gen cards in their driver support, so you won't be left out ;)
 