Have AMD stopped competing?

AMD should build the Radeon RX 590 based on two Polaris 10 chips and rely on DirectX 12 to present them as one GPU.

I can only think of 2 DX12 titles with multi-GPU (GOW4 and Deus Ex). Multi-GPU is pretty dead; developers are unwilling to put in the work for little gain (from their POV), and vendors, in particular Nvidia, who used to send their own people to help implement multi-GPU, are no longer willing to do so.
 
I can only think of 2 DX12 titles with multi-GPU (GOW4 and Deus Ex). Multi-GPU is pretty dead; developers are unwilling to put in the work for little gain (from their POV), and vendors, in particular Nvidia, who used to send their own people to help implement multi-GPU, are no longer willing to do so.

Either AMD gives developers the opportunity to support many small GPUs working together (and with mining booming, such configurations are plentiful), or developers push AMD to build large, expensive and power-hungry chips.

Which way is better is up to you to judge!
 
WTH? :D :confused: You've just repeated what I said and what I linked.


No, you are claiming DX12 will magically solve SLI/XF and make "2 GPUs appear as one".

The exact same thing can be said about DX11, DX10 and DX9.


What DX12 actually allows you to do is make 2 GPUs appear as 2, which is the exact opposite of what you are trying to say.
 
DX12 linked GPU is exactly AFR.
DX12 unlinked GPU is whatever the developers want it to be. Unlinked is referred to as explicit MGPU, because the developer has explicit control over multiple independent GPUs (see the sketch at the end of this post).
These posts are on topic, refuting 4K8KW10's inaccurate and ridiculous comments.
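To make the linked/unlinked distinction concrete, here is a minimal, hypothetical probe of my own (the file name, build line and comments are my assumptions, not anything from these games or from AMD/Nvidia): unlinked GPUs each enumerate as their own adapter that the application has to drive explicitly, while an SLI/CrossFire group appears as a single linked-node adapter whose GetNodeCount() is greater than 1.

```cpp
// mgpu_probe.cpp -- hypothetical sketch; build with: cl /EHsc mgpu_probe.cpp dxgi.lib d3d12.lib
#include <windows.h>
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Unlinked ("explicit") multi-adapter: every GPU shows up as its own
    // adapter, and the application drives each resulting ID3D12Device itself.
    for (UINT i = 0; ; ++i)
    {
        ComPtr<IDXGIAdapter1> adapter;
        if (factory->EnumAdapters1(i, &adapter) == DXGI_ERROR_NOT_FOUND)
            break;

        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);

        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            continue;

        // Linked-node adapter: the driver exposes an SLI/CrossFire group as a
        // single adapter; GetNodeCount() > 1 reveals the physical GPUs behind it.
        wprintf(L"Adapter %u: %ls, linked nodes: %u\n",
                i, desc.Description, device->GetNodeCount());
    }
    return 0;
}
```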

What makes you think it's the same technique? By all means keep digging if you want to.
 
Because that is what AMD and Nvidia both stipulate. By all means, keep speaking verbal diarrhea if you want.

AFR implementation in explicit MGPU is not the same as classic AFR. Classic AFR has far less control over what happens between each frame, whereas the DX12 implementation has libraries which help combat issues present in the former, as the conventional technique is too reliant on the previous frame, or what is known as inter-frame dependencies (see the toy sketch at the end of this post).

They are not one and the same. Best not to derail the thread, as both methods are practically non-existent at this point...
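To show why those inter-frame dependencies bite, here is a deliberately toy, API-free sketch of my own (purely illustrative, no D3D12 in it): with two GPUs alternating frames AFR-style, any effect that samples the previous frame's history always finds that history sitting on the other GPU, so it has to be copied across before rendering can start; explicit MGPU at least lets the developer decide how and when that transfer happens.

```cpp
// afr_toy.cpp -- illustrative only: shows classic AFR frame assignment and the
// cross-GPU copy forced by an inter-frame dependency (e.g. TAA frame history).
#include <cstdio>

int main()
{
    const int gpuCount = 2;

    for (int frame = 0; frame < 8; ++frame)
    {
        // AFR: frames are handed out round-robin, one whole frame per GPU.
        int renderGpu   = frame % gpuCount;
        int previousGpu = (frame - 1 + gpuCount) % gpuCount;

        // Inter-frame dependency: this frame needs data produced by the
        // previous frame, which (with 2 GPUs) always lives on the other GPU.
        bool needsCrossGpuCopy = (frame > 0) && (previousGpu != renderGpu);

        std::printf("frame %d -> GPU %d%s\n", frame, renderGpu,
                    needsCrossGpuCopy
                        ? " (copy frame history over from the other GPU first)"
                        : "");
    }
    return 0;
}
```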
 
Either AMD gives developers the opportunity to support many small GPUs working together (and with mining booming, such configurations are plentiful), or developers push AMD to build large, expensive and power-hungry chips.

Which way is better is up to you to judge!

Developers will do F all if it means more work for little gain; making multi-GPU work takes time and experience for a small market share. Time = money, which is why things like multi-GPU are very much vendor supported. This is on top of the extra time and resources needed to do DX12/Vulkan well over DX9/DX11.

AMD has had a long-running issue of making some new/cool tech (Xfire, TressFX, H3D, GPUOpen, Mantle, TrueAudio) and then saying "Here you go" with little support or further development.

I worked in games tech middleware for the best part of a decade; no developer would use extra or new technology without 1. developer support to implement it, and 2. meaningful results and/or a cost reduction over trying to do it themselves.
 
If you remember, starting with the HD 3870 / 3870 X2, HD 4870 / 4870 X2 and HD 5870 / HD 5970, it was AMD's strategy every generation to build one relatively small, sweet-spot chip and, based on it, a dual-GPU card.

AMD never intended to build the HD 5970 or, for that matter, the HD 7990; both were released after AIB partners released their own dual-GPU X2 cards and proved a demand AMD didn't believe was large enough to warrant such cards.
 
What does "stopped competing" mean?

Do you mean, are their GPU products competitive? - currently no.
Or do you mean, have they consciously stopped competing with Nvidia? - Again, no.
 
The hype came and went. As usual for an AMD card, the hype train was pretty massive.

Sadly for AMD they don't control the hype. And double sadly for AMD, 90% of the hype is driven by Nvidia fanboys hoping for a decent product from AMD so they can get cheaper Nvidia cards.
 
Sadly for AMD they don't control the hype. And double sadly for AMD, 90% of the hype is driven by Nvidia fanboys hoping for a decent product from AMD so they can get cheaper Nvidia cards.

“Poor Volta”, that marketing went well, didn’t it? From statements like that I thought they may have had something special coming.

AMD created the hype.
 
I can only think of 2 DX12 titles with multi-GPU (GOW4 and Deus Ex). Multi-GPU is pretty dead; developers are unwilling to put in the work for little gain (from their POV), and vendors, in particular Nvidia, who used to send their own people to help implement multi-GPU, are no longer willing to do so.

DX12 has changed the situation. If developers want to push the PC as a gaming platform they can and should; otherwise the other PC-based gaming platforms will take over.
 
What does "stopped competing" mean?

Do you mean, are their GPU products competitive? - currently no.
Or do you mean, have they consciously stopped competing with Nvidia? - Again, no.

Exactly.

AMD have done many things wrong of late, but you cannot fault certain aspects of their marketing.

Let's get one thing straight right now: Nvidia will never adopt FreeSync. They may very well adopt Adaptive-Sync, which is the underlying technology behind variable refresh rate technology, but FreeSync is AMD's own trademarked name for the technology, and as such Nvidia will never be allowed, or want, to use it.

More worrying for me would be wondering what AMD will concentrate on next: spending the R&D budget on the next Ryzen chip, or targeting a new graphics chip to replace Vega. I don't think AMD will have the research budget to do both successfully.
If Nvidia's next xx60 card is knocking on the door of the 1070's performance, then AMD could have another year of not competing at the top end, just like the 480/580 all over again.
 
Exactly.

AMD have done many things wrong of late, but you cannot fault certain aspects of their marketing.

Let's get one thing straight right now: Nvidia will never adopt FreeSync. They may very well adopt Adaptive-Sync, which is the underlying technology behind variable refresh rate technology, but FreeSync is AMD's own trademarked name for the technology, and as such Nvidia will never be allowed, or want, to use it.

More worrying for me would be wondering what AMD will concentrate on next: spending the R&D budget on the next Ryzen chip, or targeting a new graphics chip to replace Vega. I don't think AMD will have the research budget to do both successfully.
If Nvidia's next xx60 card is knocking on the door of the 1070's performance, then AMD could have another year of not competing at the top end, just like the 480/580 all over again.

I thought FreeSync was open source and available to all? Nvidia won't adopt it because they have licence deals with monitor manufacturers for G-Sync. Doesn't help the consumer, but extracts more money.
 
I thought FreeSync was open source and available to all? Nvidia won't adopt it because they have licence deals with monitor manufacturers for G-Sync. Doesn't help the consumer, but extracts more money.

FreeSync is AMD's implementation of the VESA standard, Adaptive-Sync.

There's nothing stopping Nvidia supporting Adaptive-Sync.
 
There's nothing stopping Nvidia supporting Adaptive-Sync.

You mean, apart from Greed. :p

Joking aside, surely it would be more lucrative for Nvidia to support Adaptive Sync.

I am sure this would make more money for them, as AMD users with a FreeSync screen could then buy an Nvidia card (if they so wished) and keep their current monitor.

Currently, if you have an AMD GFX card and a FreeSync screen, you would need to change both of them to get an Nvidia-based VRR setup. If Nvidia supported Adaptive-Sync, you would only need to change the card, which is obviously cheaper to do but would make the user jump from one ship to the other. That has to be more lucrative for Nvidia and earn them more than just the £150 - £200 they get from each G-Sync module (I have no idea how much Nvidia actually charges for the module and I am just going on differences in monitor pricing; if what Nvidia charges for the G-Sync module is less than the overall price difference, then that gives my argument even more weight). :)
 