
AMD Polaris architecture – GCN 4.0

The launch of Polaris would be a perfect time to do it. Let's hope that's what happens. I like Crimson, but it's made some things harder to do. I used to be able to duplicate my monitor to my TV easily; now I can't, or haven't figured out how to. My mate has had the same problem since Crimson. It's just a few niggles that should be easily sorted out when they update the rest of the CCC.

I never used AMD's desktop manager; the Windows built-in one does it well enough.
 
What about pre-orders, and the illusion of always being sold out to make people think the product is selling so well it's never available?

Not many people pre-order, and if they pre-order and then find the competing product is better they just cancel the pre-order.
 
It's been a while since AMD have had any involvement with the community.

It was good when they had the AMA on here; there's probably a lot we'd like to ask about, like async compute and HDR.
 
There are no problems with AMD drivers for the majority of people.

Only CrossFire is struggling, and going by some comments, SLI doesn't seem to be an awful lot better. Not to mention that mGPU just seems to be getting worse in general, e.g. Just Cause 3 and Batman: Arkham Knight had no support at all. With DX12, from what I understand, it will be more or less entirely down to developers, which is not a good thing in my eyes given how broken most games are on release, as well as missing basic features/options. If developers aren't concerned about that stuff, which affects everyone, then why would they be concerned with something that a relatively small number of people have?

I would dare say that AMD's drivers have been better than Nvidia's over the last several months, tbh, given the issues that a lot of Nvidia users have had with crashing even in desktop use with Chrome, blue screens etc., as well as damaged PC components.

https://forums.overclockers.co.uk/showpost.php?p=29450092&postcount=5977

Even on nvidia's forums, there are huge threads on the stability/crashing issues.

Give me stability over more features every day of the week.

AMD certainly need to get a better alternative to ShadowPlay etc. out, though.

It's all about UWP now Patricia :) maybe there is some hope for crossfire yet. At least things are moving forward nicely with UWP, did not see this update coming at all.

https://blogs.msdn.microsoft.com/di...cked-frame-rate-and-more-now-enabled-for-uwp/
 
AMD have no control over what devs do with the code inside a game. If game X is optimised for Nvidia's shadow tech and not AMD's, AMD can't do anything about it.
If the rendering is more optimised for Nvidia than AMD, again there's nothing AMD can do.
DX11 was, and is, in the hands of the devs as far as how they optimise the code for each GPU architecture.

So can you please explain how the 290X is now competing with the GTX 980 in NVIDIA partnership games, and older games where it was barely able to match the Titan?

The developers did not go back and optimise the games for Hawaii and GCN specifically, the drivers however have been improved. Developers have had just as much time with Kepler as Hawaii, yet the latter is pulling ahead in older and new titles.

Now if it was simply due to devs spending more time with GCN it wouldn't benefit the older titles as much, but it does so for old and new.

A nice example is the new Far Cry Primal at 1440p, where the 290 gets an average of 48 fps compared to the GTX 980's 49 fps, with the 290X at 50 fps.

Yet the GTX 780 Ti, a better card than the original Titan, only gets 40 fps.
Note this is the 290, not the 390X/290X; the 290X originally barely competed with the Titan.

http://www.guru3d.com/articles_pages/far_cry_primal_pc_graphics_performance_benchmark_review,7.html

Then look at The Division, an Nvidia-partnered game. According to you, once a game is developed for a certain render path they cannot really change it, and drivers do not help. Yet once again the GTX 780 Ti loses to the 290X: 42 fps vs 37.
http://www.guru3d.com/articles_pages/the_division_pc_graphics_performance_benchmark_review,6.html

Once again, in this case the developers have had as much time, if not more, with Kepler as with Hawaii in their development cycle.

Also, not once did I say it was ALL down to drivers, or that they are the be-all and end-all. They are crucially important though, and it shows time and time again as older cards get performance improvements from driver updates.

Now if AMD can work directly with developers more, especially for future games it'll do even more for the performance of GCN based cards.
 
Maybe it's more down to the fact that console code is so much like PC code now, so optimisation of the console code is benefiting the PC platform too? At least for AMD GCN?
The PS4 and Xbox One both use the same hardware found in PCs.
 

It certainly could be, although even in that regard some developers barely put any effort into optimising for PC. Such a shame really, as they're all x86 and GCN.

Hopefully with DX12 and Vulkan we might see things shape up better, but it's certainly going to take a while, especially with UWP hamstringing DX12.
 
Wouldn't that be true for that timing? What about all the extra work devs now have with DX11 and DX12? Optimising the code for the GPU is surely in the hands of the devs and not AMD.

True.

When the PS4 was being released, Sony said they expected async compute to pick up halfway through the product's life cycle (so around 2016-2017).
Here we are in 2016, and async compute has started appearing in games, trickling down to PC.

Both the Xbone and PS4 support it, and of course so do all AMD cards released since the R9 290.

And let's not forget a big game takes roughly 3-4 years to make. They cannot go halfway through the development cycle and change the game engine on consoles.

Of course, if you are Creative Assembly, with the money they have, you postpone your next big title for another month just two months before release, and ask AMD for assistance to fully implement DX12 and async compute in the engine for launch. :D
Ending the biggest issue hampering the TW games since forever with one stroke.
 
Another alternative is that P10 will be close enough to Fury/X speeds that AMD can discontinue a large, expensive and frankly poorly selling GPU. With Nvidia having such great success with the 980 Ti, they could not afford to discontinue a £500 GPU that was selling well unless they had something to slot right in at that price; hence the 1080 will continue a very successful price slot for Nvidia.

Fiji is a similar size to the Maxwell 980 Ti but with more expensive memory (HBM), and is not selling as well, so AMD would be keen to drop it entirely if possible, IMHO.

Remember, P10 allegedly has the same number of shaders as the R9 390, and with clock increases and architectural improvements, ~30% increased performance is not a fantasy. That would put P10 at roughly 980 Ti/Fury X speeds IMHO, but with a much smaller and cheaper chip/VRAM package.

We will know soon enough.
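The estimate above can be put as back-of-envelope arithmetic. Every figure here is a rumour or a placeholder from the thread, not a confirmed spec, so treat it as a sketch only:

```python
# Hypothetical check of the "~30% over the 390" speculation.
# All numbers are assumptions from the thread, not confirmed specs.
r9_390_perf = 1.00        # baseline: R9 390 relative performance
rumoured_uplift = 1.30    # ~30% from clocks + architectural improvements
fury_x_perf = 1.30        # assumed rough standing of Fury X vs the 390

p10_perf = r9_390_perf * rumoured_uplift
in_range = "~Fury X class" if p10_perf >= fury_x_perf else "below Fury X"
print(f"P10 estimate: {p10_perf:.2f}x a 390 ({in_range})")
```

If the real uplift lands lower, say 20%, the same arithmetic drops P10 below Fury X class, which is why the "we will know soon enough" caveat matters.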

Actually, as things are now, P10 could do both. Let me explain:
The news is it will be around 390(X) performance.
Again, the news is that the TDP is 175W, but the actual cards will consume much less, around 110-130W.
Also, we only know a few things about the cut-down version of P10.

It could be that the cut-down, low-TDP and low-clocked (1000-1100MHz) version would be around 390(X) performance.
Then they have the full chip, and 35% more power to feed it, to bring out something bigger. The new node should in theory let them run higher clocks, so this could go 10-15% higher on clocks, with another ~10% from the full shader count.
This would put it somewhere around Fury X/980 Ti levels, maybe between the two.

OFC this is just brainstorming.
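The compounding in that brainstorm can be written out. Both gains are guesses from the post, not known figures:

```python
# Compounding the guessed gains for the full chip over the cut-down part.
# Neither figure is confirmed; both come straight from the speculation above.
clock_gain = 1.125    # midpoint of the guessed 10-15% clock bump
shader_gain = 1.10    # ~10% more from enabling the full shader count

full_vs_cut = clock_gain * shader_gain
print(f"Full P10 over the cut-down part: ~{(full_vs_cut - 1) * 100:.0f}%")
```

Note the gains multiply rather than add, so 12.5% + 10% comes out near 24%, not 22.5%, and that's still an upper bound since performance rarely scales perfectly with either clocks or shader count.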
 
Mtom has a point: you could have two cards that are almost identical, one with power delivery limited to 100W and so underclocked to 1000MHz, and then the full-fat 275W version running at 2GHz (though I doubt AMD will get that high?). A 2GHz, 2560-shader chip would be mega fast, would it not?

/end dreamworld :(
 
I still have cautious optimism.

Nvidia appear to have achieved 1.6x performance per watt with the GTX 1080 (compared to the 980).

So if AMD manage at least 2x (and they claimed 2.5x), then a ~150W card from them should be more powerful than expected.

Still depends what AMD were comparing against though. The R9 390X has DIRE performance per watt, but the Fury has very good performance per watt.
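The arithmetic behind that caveat is easy to show. Which card AMD measured their claim against is unknown, so the baseline below (a 390X at an assumed 275W board power and roughly GTX 980-class performance) is an assumption, and the result is illustration only:

```python
# Rough perf/watt arithmetic for the post above. The baseline card and its
# figures are assumptions; AMD never said what their claim was measured against.
amd_ppw_gain = 2.0       # conservative end of AMD's claimed 2.5x
new_card_watts = 150     # the ~150W card being discussed

# Assumed baseline: R9 390X, ~GTX 980-class performance at ~275W board power.
r9_390x_perf, r9_390x_watts = 1.00, 275

new_ppw = amd_ppw_gain * (r9_390x_perf / r9_390x_watts)
new_perf = new_ppw * new_card_watts
print(f"~150W card vs a 390X-like baseline: ~{new_perf:.2f}x a GTX 980")
```

Starting from a card with dire perf/watt, even a 2x gain only lands the 150W part at roughly GTX 980 level; measure the same 2x against a Fury-like baseline with much better perf/watt and the estimate climbs proportionally, which is exactly why the baseline matters.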
 