AMD Polaris architecture – GCN 4.0

Computer science is basically the answer to how and why they can cull more triangles. With every iteration of hardware and software you learn more about the algorithms in use, their strengths and their weaknesses, and you find new and better algorithms, or just find a way to implement an existing algorithm better in hardware.
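
To make that concrete, here is a minimal sketch (illustrative C++, not AMD's actual hardware logic) of one test a culling stage can perform: a triangle that faces away from the camera or covers zero area contributes nothing to the image, so discarding it before rasterisation saves all the downstream work. The counter-clockwise-is-front convention here is an assumption for the example.

    #include <cstdio>

    // A vertex in screen space, after projection. A real GPU does this in
    // fixed-function hardware; this just writes the logic out.
    struct Vec2 { float x, y; };

    // Twice the signed area of triangle (a, b, c) via the 2D cross product.
    // Positive means counter-clockwise winding, i.e. front-facing by our
    // assumed convention.
    float signedArea2(Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    }

    // Cull back-facing (negative area) and degenerate (zero area) triangles.
    bool shouldCull(Vec2 a, Vec2 b, Vec2 c) {
        return signedArea2(a, b, c) <= 0.0f;
    }

    int main() {
        Vec2 f0{0, 0}, f1{1, 0}, f2{0, 1};  // counter-clockwise: keep
        Vec2 b0{0, 0}, b1{0, 1}, b2{1, 0};  // clockwise: cull
        std::printf("front culled: %d\n", shouldCull(f0, f1, f2));
        std::printf("back  culled: %d\n", shouldCull(b0, b1, b2));
    }

A better culling unit is simply one that catches more of these useless triangles, earlier in the pipeline, for less area and power.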

In exactly the same way that TressFX 2.0 dramatically improved speed over TressFX 1.0, and quality for that matter, you can usually find a way to improve things.

You also get more die area and more transistors to play with. A hardware culling unit that needs 1 billion transistors to improve efficiency by 15% is efficient and worthwhile when you have 15+ billion transistors in the processor, but in a 5-8 billion transistor GPU it is simply too big. The same goes for memory compression: the compression in Maxwell, and whatever the heck the last GCN version was that added it, was neither the first nor the last improvement in memory compression. Better, more efficient ways of compressing memory will be found in the future, and again the die area of the hardware needed to implement them is a factor. Bigger dies with higher transistor counts mean being able to add more features to a new generation without the same kind of area/power hit those features would cost on the previous process, so what is not feasible at, for instance, 28nm often fits within the area and power budget at 14nm.
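
As a toy illustration of the principle behind that kind of memory compression (a made-up scheme for this post, not Maxwell's or GCN's actual format): neighbouring pixels are usually similar, so storing one anchor pixel in full plus small per-pixel deltas takes fewer bits than storing every pixel raw, with a fallback to uncompressed storage when the deltas don't fit.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy delta compression of one 8-pixel block of 8-bit values.
    struct CompressedBlock {
        uint8_t anchor;              // first pixel, stored in full
        std::vector<int8_t> deltas;  // remaining pixels as differences
    };

    // Returns false if any delta won't fit in 4 bits, in which case a real
    // scheme would store the block uncompressed instead.
    bool compress(const std::vector<uint8_t>& block, CompressedBlock& out) {
        out.anchor = block.at(0);
        out.deltas.clear();
        for (size_t i = 1; i < block.size(); ++i) {
            int d = static_cast<int>(block[i]) - static_cast<int>(block[i - 1]);
            if (d < -8 || d > 7) return false;   // delta too big for 4 bits
            out.deltas.push_back(static_cast<int8_t>(d));
        }
        return true;
    }

    int main() {
        std::vector<uint8_t> pixels = {200, 201, 203, 202, 200, 199, 199, 198};
        CompressedBlock c;
        if (compress(pixels, c))
            std::printf("8-bit anchor + %zu x 4-bit deltas = %zu bits, "
                        "vs %zu bits raw\n",
                        c.deltas.size(), 8 + 4 * c.deltas.size(),
                        8 * pixels.size());
    }

The catch is exactly the one above: the compressor and decompressor cost die area, so a scheme like this only pays for itself once the transistor budget makes that logic cheap relative to the bandwidth it saves.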

There is also the simple human cost of improving parts of a GPU. With a limited amount of manpower you can't research every single part of a GPU every generation, and you can't dedicate unlimited money or time to every part, so you take an educated guess at which parts of the GPU can be improved to provide the highest efficiency boost for the time and money spent. So say that for the 290X they estimated culling or memory compression would bring 9% gains, but a change in ROP efficiency could bring 12%, so they chose to improve the ROPs. Now this generation, with the ROPs having been improved, culling can bring about, say, an 11% improvement (because removing the previous ROP bottleneck made culling a bigger bottleneck), while the ROPs have less room left and can only improve performance another guesstimated 5%, so this time you focus on culling improvements.
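
As a back-of-envelope illustration using only the made-up percentages from the paragraph above (not real AMD figures): each generation you pick the bigger estimated gain, and the chosen fixes compound multiplicatively across generations.

    #include <cstdio>

    int main() {
        // Guesstimated speedups from the example above (not real figures).
        double cull_last = 0.09, rops_last = 0.12;  // 290X era: ROPs win
        double cull_now  = 0.11, rops_now  = 0.05;  // ROPs fixed: culling wins

        std::printf("last gen: culling +%.0f%% vs ROPs +%.0f%% -> do ROPs\n",
                    100 * cull_last, 100 * rops_last);
        std::printf("this gen: culling +%.0f%% vs ROPs +%.0f%% -> do culling\n",
                    100 * cull_now, 100 * rops_now);

        // The two chosen fixes compound multiplicatively, not additively.
        double total = (1 + rops_last) * (1 + cull_now);
        std::printf("ROPs then culling: x%.3f overall (+%.1f%%)\n",
                    total, 100 * (total - 1));
    }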

CPU or GPU, it's all balancing and, frankly, gambling on where the biggest gains will come from and where to focus time. This generation they improve A, D, E and F; next generation they focus on B, C, G and H. There are teams doing ongoing research, teams deciding which way the next GPU architecture should go based on that research, and teams implementing that in a shipping GPU.
 
Anywho... Kaap, what do you mean about HBM being unreliable? Can you give some examples? On a separate note, did you ever get your watercooled Fury X rig together for benching?

To me, new tech in some areas is quite often a backwards step until it matures a little; that's just life.

Personal experience, and it has also been mentioned on the net by other people.

Out of the 4 cards I started with, I had to RMA 2 of them for HBM memory problems when running at stock.

As for getting my 4 watercooled Fury Xs running: at stock volts they can each do 1150/500 individually, and 1140/500 with all 4 together.

With overvolting they can run 1170/500 with all 4, and even higher individually. Unfortunately, as other people have also found, with overvolting the performance tanks and ends up worse than at stock volts.
 
I've mostly stayed out of this forum for ages because every single time anything is posted about AMD, Kaap and others are there to bash everything, ask silly questions and take a silly stance. I said something half sensible in defence of Nvidia in the other thread, and Kaap and Gregster instantly jumped on my post as being negative for absolutely no reason at all.

Then, in defending my point of view, Greg decides to call me rude or angry while ignoring that both he and Kaap were the ones being rude in both threads. This is their standard behaviour: hound anyone not talking up Nvidia into leaving the forum. And didn't we see an image captured of him calling for backup? It's clear as day that a clique of them hounds anyone remotely positive about AMD or negative about Nvidia off this section of the forum. It's painful, and it's the reason why I barely stay around here any more.

It's pitiful how certain people on here act; they've destroyed any ability to talk about technology. The HBM1 threads were trashed by Kaap talking utter nonsense, just as every other thread I see on here gets trashed, and the instigators are always the same group of people.

EDIT: the longer some people go on consistently using the "I bought a card from XXX, thus I can't be biased" line, the more I'm convinced that Nvidia fanboys, after years of insults and attacks, bought some AMD GPUs just to have that excuse.

I have to say I always find DM's posts interesting, and they seem to me to be pretty well supported. And I don't find their posts negative. Often I would like to chime in in support of what DM said, except it's been so complete there's really nothing I can add.

I don't want to pour more oil on this fire. I just don't want to see DM get dog-packed because, quite frankly, I find DM's posts informative and interesting.
 

The only problem is, DM's posts could be made without any insults thrown in, yet the vast majority of the time he feels he has to start off by insulting whoever he is replying to before actually pitching in with his opinion.

The other thing is that he gets stuff wrong just as much as anyone else, then says that he never said something. Even when you find the quote and show him what he actually said, he just disappears off the site for a week and hopes no one remembers by the next time he pops in to insult someone.

I can't think of the last time DM made a post that couldn't have been at least half the length and still contained the same information.
 
Can we keep the thread on topic? If everyone gets into who they think is right and who is wrong, he said, she said, it will just go further off topic.

If people have an issue with posts, use the RTM function. It will achieve a lot more and helps keep things in line :)
 

:rolleyes:

 
No performance figures for Polaris. We don't need them, do we, because we know the answer already:

Within +/- 5% of the equivalent Nvidia card.

We won't need two cards from the same vendor for DX12 SLI either; one Nvidia card and one AMD card of similar spec in DX12 will be quicker this generation than two cards from the same vendor.

Coming to an end, the fanboi wars are.
 
I think DM got out the wrong side of the bed yesterday :D

Anyways, I have to agree with Besty that both will be within spitting distance of each other. Not so sure about the cross-vendor "CrossSLI" working, or, if it does, whether it's worth it, and I am not sold on SLI or CF being viable now that getting it working is in the hands of the devs, but I am sure one of the bigger tech review sites will be on hand to give it a go.

It is all about release timing, and wouldn't it be great to see them both release on the same day, top end first?
 
Agree with Gregster: real-world performance will be within ~5-10% of each other, with price and features also influencing which card to go for.

Probably gonna go AMD this time around, then Nvidia, then AMD again, then back to Nvidia and AMD :p:D
 

Tbh Pascal had better be bloody good, I just bought a G-Sync screen :D ;)
 

Based on the little we know, it'll have to be AMD, as they are looking like first to market. In a world first, AMD have info and Nvidia have nada, so going off previous releases, when AMD let nothing out of the bag, they have to be first to market, right? Right?

Would love to see some bubblegum and kick-ass from AMD; it would be hilarious.

The first person that disagrees is getting the "calm down dear, you're getting personal" tone from me! :D
 
Same for me. Whoever brings out their mid-range parts first, that's what I'll buy. My current card is dying (artifacts).

I thought that would be nV, but now AMD could be back in the race. My last two cards have been AMD, so I was swinging back to nV, but it's all about who comes out first (unless the price is ball-busting :p).
 
I hope AMD have a reasonably priced card that will play games at 60+ FPS at 1440p with all settings on Ultra. I want to get a FreeSync monitor, as I don't believe in paying a premium for G-Sync.

I'm willing to spend £250ish for something like that. I bought my 970 on launch day for £235, and at that price I was really happy.
 

Same here.
 

Not sure you'll get that. Not even a 980/980 Ti will do 60 FPS consistently at 1080p in all games at max settings. You still get drops in some games (e.g. Fallout 4, where a 980 Ti can drop to 30-40 FPS at 1080p).

My budget is also £250ish, and I'm just hoping for rock-steady 60 FPS at 1080p (finally).
 