AMD Radeon GFX & DirectX 12

If DX12 can bring two-card latency and performance that are better than one card in every game, then I may actually consider getting two cards, but until the issues are fixed completely I'll be staying single-card.
 
You will never get 100% compatibility for all games. It doesn't matter whether Crossfire support comes from AMD or the developers, there's always that one game that is missing support. DX12 just makes the developers responsible for it.
 
Proof? From what I understand, AMD's GCN fully supports DX12.

"Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture"
http://www.amd.com/en-us/press-releases/Pages/amd-demonstrates-2014mar20.aspx

http://en.wikipedia.org/wiki/Direct3D
 
On this slide [image not shown]:

When it says "interesting multi-GPU use cases beyond AFR or SFR", does this mean it may be possible to just use one card as a VRAM slave?

I.e. your 7970 GHz is feeling the limits of its 3GB but has more power to give, so you add a cheap 2GB R7-240, giving the 7970 a total of 5GB to run with.

I don't mean with those exact cards of course, just an example of something that would be awesome if possible with future cards.
 
I would imagine you would still need the same architecture to run Crossfire, and that the GPUs would work more independently on different tasks rather than pooling RAM and processing power like multi-CPU PCs.
 
If it's possible, PCI-E bandwidth will be a massive bottleneck for making that useful, Ubersonic.

PCI-E 2.0 x16 = 8 GB/s
PCI-E 3.0 x16 = 16 GB/s
290X VRAM = 320 GB/s

Maybe it could be used similarly to the GTX 970, where non-vital data/textures can be cached in the 'slow' memory, but system memory can already be used for that.
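A rough back-of-envelope illustration of that gap, using the figures above (the 2 GB "remote" working set is just an assumed example, not a measured number):

```python
# Compare the quoted PCI-E link rates against local VRAM bandwidth by
# asking how long a hypothetical 2 GB chunk of data takes to move.

LINKS_GB_PER_S = {
    "PCI-E 2.0 x16": 8.0,
    "PCI-E 3.0 x16": 16.0,
    "290X local VRAM": 320.0,
}

REMOTE_DATA_GB = 2.0           # assumed amount of data sitting on the other card
FRAME_BUDGET_MS = 1000 / 60    # 60 fps frame budget

for name, bw_gb_s in LINKS_GB_PER_S.items():
    transfer_ms = REMOTE_DATA_GB / bw_gb_s * 1000
    frames = transfer_ms / FRAME_BUDGET_MS
    print(f"{name:16s}: {transfer_ms:6.1f} ms to move {REMOTE_DATA_GB:.0f} GB "
          f"(~{frames:.0f} frames at 60 fps)")
```

Even on PCI-E 3.0 that is roughly 125 ms, several whole frames, versus a few milliseconds out of local VRAM.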
 
In that case, though, how is it supposed to work with any pair of GPUs?

If Bazza is rendering a frame and needs some of the data Gazza is holding, it doesn't matter whether Gazza is an R7-240 there for storage or a 7970 working in CF with Bazza; it's still going to have to go down the PCI-E lanes :S

If they bring back CF connectors on cards, that may help alleviate some of the bottleneck. Or maybe they could set Bazza to only render things held in his VRAM and have Gazza render everything held in his. But then that is going to get extremely complicated, isn't it?
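To put a number on that remote-data worry, here is a toy per-frame model (every figure below is an assumption for illustration, not a measurement):

```python
# If some fraction of the data a GPU touches each frame lives in the
# other card's VRAM, the fetch has to cross PCI-E and eats frame budget.

PCIE_GB_PER_S = 16.0           # assumed PCI-E 3.0 x16 link, as quoted above
FRAME_WORKING_SET_MB = 512     # assumed data touched per frame
FRAME_BUDGET_MS = 1000 / 60    # 60 fps target

for remote_fraction in (0.0, 0.1, 0.25, 0.5):
    remote_gb = FRAME_WORKING_SET_MB * remote_fraction / 1024
    fetch_ms = remote_gb / PCIE_GB_PER_S * 1000
    print(f"{remote_fraction:4.0%} of the working set remote -> "
          f"{fetch_ms:5.2f} ms of a {FRAME_BUDGET_MS:.1f} ms frame on transfers")
```

So keeping each GPU rendering mostly out of its own VRAM, as suggested above, looks like the only way the numbers work, which is exactly where the complexity comes from.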
 
SFR (or, as AMD calls it, Asynchronous Crossfire) won't be a preferable method for quite some time after DX12 hits the scene. There is practically no scaling with this method. Even DICE used a clever heterogeneous AFR method. And no, CIV:BE does not support VRAM scaling despite using split-frame rendering under Mantle. As for the PCI-E bottleneck, it depends a lot on how it's implemented. Gen 3.0 could well be able to cope, or then again it might not. I don't really think anyone is in a position to comment right now.
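A quick toy comparison of why SFR scales so poorly next to AFR (the 8 ms single-GPU frame time, the splittable fraction, and the sync cost are all assumed numbers, not benchmarks):

```python
# AFR vs SFR with two identical GPUs, Amdahl-style.

single_gpu_ms = 8.0        # assumed time for one GPU to render a full frame
sfr_split_fraction = 0.7   # assumed portion of the frame that actually splits
sfr_sync_ms = 1.0          # assumed per-frame sync/composition overhead

# AFR: GPUs alternate frames, so throughput roughly doubles, but each
# individual frame still takes a full single-GPU render (latency unchanged).
afr_fps = 2 * 1000 / single_gpu_ms
afr_latency_ms = single_gpu_ms

# SFR: only the splittable part of the frame halves, plus sync overhead.
sfr_frame_ms = (single_gpu_ms * (1 - sfr_split_fraction)
                + single_gpu_ms * sfr_split_fraction / 2
                + sfr_sync_ms)
sfr_fps = 1000 / sfr_frame_ms
sfr_latency_ms = sfr_frame_ms

print(f"AFR: {afr_fps:.0f} fps throughput, {afr_latency_ms:.1f} ms per frame")
print(f"SFR: {sfr_fps:.0f} fps throughput, {sfr_latency_ms:.1f} ms per frame")
```

With those assumptions AFR lands near 2x throughput while SFR manages roughly 1.3x, which lines up with the "practically no scaling" complaint, though SFR does shave the per-frame latency a little.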
 
Now that the VRAM will be combined, do ya think multi-GPU setups will become the norm?

I find at the moment with my setup that two cards on a low-voltage profile are way more powerful than my single card on its max overclock and use a lot fewer watts, so I guess once ya get over the initial cost of buying two cards it's gonna be all good, and since you can buy two mid-range cards for less than the price of a top-end card it might not even be much of a hit.
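That perf-per-watt point follows from the usual dynamic-power rule of thumb, P roughly proportional to V² × f. A minimal sketch with made-up voltages and clocks (and optimistic, perfect two-card scaling):

```python
# Rough perf/watt comparison: one heavily overclocked card vs two cards
# on a low-voltage profile. All numbers are assumptions, not measurements.

def relative_power(voltage, clock, base_voltage=1.20, base_clock=1.0):
    """Power relative to a stock card, assuming P ~ V^2 * f."""
    return (voltage / base_voltage) ** 2 * (clock / base_clock)

configs = {
    "1x overclocked": {"cards": 1, "voltage": 1.30, "clock": 1.15},
    "2x undervolted": {"cards": 2, "voltage": 1.05, "clock": 0.95},
}

for name, cfg in configs.items():
    perf = cfg["cards"] * cfg["clock"]   # assumes perfect multi-GPU scaling
    power = cfg["cards"] * relative_power(cfg["voltage"], cfg["clock"])
    print(f"{name}: ~{perf:.2f}x performance at ~{power:.2f}x power "
          f"({perf / power:.2f} relative perf/watt)")
```

Real scaling is rarely perfect, so treat the two-card line as a best case rather than a given.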
 
Don't know about the norm, but if DX12 multi-GPU does offer more for less, it will presumably have an impact on GPU product tiers and price points.

And then there's the iGPU factor, which might also play into this...
 
If I can use the iGPU to boost the main GPU I might actually consider getting an APU, though that APU would need a much stronger CPU part than the current ones have.
 
When comparing the graphics capability of Intel/AMD integrated GPUs, is all the talk of iGPU/APU fluff? Does it come down to the same question of performance/3DMark scores/fps in games to set them apart, or is there something more subtle that makes an 'APU' something special?
 
Performance in games. An iGPU might not be that powerful on its own, but if it can add another 20% of performance to one's discrete GPU, it might mean the difference between 1440p and 1800p resolution.
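For context, a quick pixel-count comparison of common 16:9 resolutions (simple arithmetic only; it ignores settings, memory pressure, and how well an iGPU contribution actually scales):

```python
# How much more work each resolution is, relative to 1440p.

RESOLUTIONS = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "1800p": (3200, 1800),
    "2160p": (3840, 2160),
}

base_pixels = RESOLUTIONS["1440p"][0] * RESOLUTIONS["1440p"][1]

for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:4.1f} MP ({pixels / base_pixels:.2f}x vs 1440p)")
```

1800p pushes roughly 1.56x the pixels of 1440p, so whether ~20% of extra throughput is enough for that jump will depend heavily on settings and where the bottleneck sits.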
 
Let's just pretend for a second that DX12 delivers on all of its promises.

You're forgetting one important thing: game design is going to change. With the increased draw calls (or whatever else DX12 brings to the table), games are going to get more complex and demanding, meaning it's going to be business as usual.

You're still going to need to buy powerful GPUs to drive performance in these ever more demanding games.
 