
Pioneering The Multi-GPU Future With Oxide Games

A little over a month ago, the tech community was abuzz over an article about using an AMD GPU and an Nvidia GPU in the same system running Ashes of the Singularity from Oxide Games in DirectX 12. That early evaluation covered the use of a number of top tier graphics cards as well as a pairing of some older top tier graphics cards from a few years ago. Although the test served as an example of the benefits of such a configuration, it generated a number of questions from the community, such as why integrated graphics weren’t tested out and what happens when you use two very different graphics cards, such as a brand new GPU paired with an older one from a previous generation, or a lower tier card paired with a higher tier card.

I recently had the opportunity to speak with Dan Baker, one of the co-founders of Oxide Games and the creator of the Nitrous Engine, to discover these answers and learn much more about the engine and DirectX 12.

Read more
http://www.tomshardware.com/news/oxide-games-dan-baker-interview,30665.html
 
I like this idea. You could buy a top-shelf card and in 2 years add a mid-level one to boost its performance, or use your CPU's iGPU to give a bit of a nudge to your main GPU. In theory it's awesome, although we know of some in the tech community who apparently think innovation in this regard is pointless :rolleyes:
 
Seems totally worthwhile to me if it pans out correctly. Even if my iGPU would only get 10 fps in a game, tacking that on to my discrete card to boost my frame rate by 10 fps would be a free win, and it would finally give the iGPU some use other than as a failsafe for once every 4-5 years when a graphics card kicks the bucket.
 
One step at a time, guys. DX12 hasn't even happened yet.

I imagine this sort of tech won't actually be implemented and usable by us end users until after DX12's life cycle.
 
One step at a time, guys. DX12 hasn't even happened yet.

I imagine this sort of tech won't actually be implemented and usable by us end users until after DX12's life cycle.

+1

I can also see AMD and Nvidia coming out with drivers to prevent this and to, err, fix some problem or other that I'm sure they will find.

The alternative for them is fewer hardware sales, as people won't have to upgrade as often.
 
+1

I can also see AMD and Nvidia coming out with drivers to prevent this and to, err, fix some problem or other that I'm sure they will find.

The alternative for them is fewer hardware sales, as people won't have to upgrade as often.

I do not believe they could, with the way DX12 drivers work, considering the application talks directly with each GPU. Unless they disabled DX12 when another brand's drivers are detected in the system, but then that would cause a lot of complaints, considering iGPU drivers etc.
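
Roughly what I mean by the application talking to each GPU: with DX12 explicit multi-adapter the game itself enumerates every adapter and creates its own device on each one. A rough sketch using the standard DXGI/D3D12 enumeration calls (nothing Oxide-specific, just the plain API):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Create an independent D3D12 device on every hardware adapter in the box
// (discrete cards and the iGPU alike). The application then owns all of them
// and decides for itself how the rendering work is shared between them.
std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasteriser

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```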
 
One step at a time, guys. DX12 hasn't even happened yet.

I imagine this sort of tech won't actually be implemented and usable by us end users until after DX12's life cycle.

Oh, I've no doubt that aside from some pioneering edge cases it'll largely be ignored for quite a long time. Still, as iGPUs get more powerful (but still not powerful enough by themselves), that's free performance for games developers, letting a wider crowd of lower-spec machines play their game. I'd expect to see it first in things like MMOs that rely on large user bases. As you said, though, it probably won't be seen in a mature implementation until towards the end of DX12, but that won't mean it's not worthwhile (how many new games still come out using DX9 features only?).
 
I really do hope we get this kind of knowledge shared between developers, so that not everyone has to do multi-GPU from scratch for their games and engines. Maybe I'm living in a dream world.
 
I do not believe they could, with the way DX12 drivers work, considering the application talks directly with each GPU. Unless they disabled DX12 when another brand's drivers are detected in the system, but then that would cause a lot of complaints, considering iGPU drivers etc.

DX12 applications don't talk to the GPU directly at all; they talk to the DX12 driver, just like DX11 applications talked to DX11 drivers.

So much nonsense gets spouted about DX12. It is really not so different to DX11; it just does less automation of things like memory management, and it has been rewritten to be properly parallel.
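
To give an idea of what "less automation" means in practice: in DX11 the driver decides where a buffer lives and moves it around for you, whereas in DX12 the application spells it out. A minimal sketch (assumes you already have an ID3D12Device*; not tied to any particular engine):

```cpp
#include <d3d12.h>

// Explicitly place a 64 KB upload buffer in CPU-writable memory -- the kind
// of placement decision the DX11 driver used to make for the application.
ID3D12Resource* CreateUploadBuffer(ID3D12Device* device)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_UPLOAD;          // CPU write, GPU read

    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width = 64 * 1024;
    bufDesc.Height = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;  // buffers must be row-major

    ID3D12Resource* buffer = nullptr;
    device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &bufDesc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}
```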
 
That was just iGPU and dGPU... This is allowing AMD + Nvidia to work together.

No, you could already run Nvidia and AMD together with some of the Lucid Hydra variants. It sucked balls.

http://hothardware.com/reviews/lucid-hydra-200-multigpu-performance-revealed?page=2

And the iGPU doesn't give you 10 free fps; it doubles your frame latency for a small fps boost. A terrible idea.

All this stuff is dependent on devs to implement, so it will end up in two tech demos and a half-finished game while everyone else uses normal setups.
 
This is just one of those things that is hyped and believed by gullible people, then never happens. Because it won't work in reality. But it's a way to get into the news cycle and gain mindshare.

Kinda like the perennial "PC in a fridge" or "add two cards' VRAM together" ideas.
 
This is just one of those things that is hyped and believed by gullible people, then never happens. Because it won't work in reality. But it's a way to get into the news cycle and gain mindshare.

Kinda like the perennial "PC in a fridge" or "add two cards' VRAM together" ideas.
Too true, and it even explains it in the article:
The game renders alternating frames to each GPU somewhat like SLI or Crossfire, which works best with GPUs that offer similar performance. This mode doesn’t work well when two GPUs have a large performance delta between them, though, so Baker and his team are working on ways to do asymmetric multi-GPU rendering. Rather than alternate frame rendering, the team is working on ways to offload sections of the work to the slower GPU and then upload that rendered data back to the faster GPU to put the scene together.
The article is badly written IMO, but it sounds like what they have running now is basically AFR, which means the frame rate is going to be held back by the performance of the slowest GPU. That's how SLI works now.

What they need to do is split the rendering work in such a way that both GPUs can participate in every frame, with the load divided between them, but that is a really difficult proposition. Then on top of that you have to come up with some system that can predict how much work to offload to each GPU; if you get it wrong, you either get no benefit or horrible frame delays while the main GPU waits for the slower one.
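
Just to illustrate the "predict how much work to offload" part, here is a toy heuristic (entirely hypothetical, not anything Oxide have described): give each GPU a share of the offloadable work in proportion to how fast it got through its share last frame, and clamp it so the slow card can never end up stalling the fast one.

```cpp
#include <algorithm>
#include <cstdio>

// Per-GPU timing from the previous frame: the fraction of the offloadable
// work it was given, and how long its part of the frame took on the GPU.
struct GpuTiming { double workShare; double gpuTimeMs; };

// Fraction of next frame's offloadable work to hand to the faster GPU.
double RebalanceShare(const GpuTiming& fast, const GpuTiming& slow)
{
    // Effective throughput = work completed per millisecond of GPU time.
    double fastRate = fast.workShare / fast.gpuTimeMs;
    double slowRate = slow.workShare / slow.gpuTimeMs;
    double share = fastRate / (fastRate + slowRate);
    // Never hand the slower GPU more than half the frame, but always leave it
    // a little work so the timing measurements keep updating.
    return std::clamp(share, 0.5, 0.95);
}

int main()
{
    // e.g. the discrete card did 80% of the work in 10 ms, the iGPU 20% in 14 ms
    GpuTiming dgpu{0.80, 10.0}, igpu{0.20, 14.0};
    std::printf("dGPU share next frame: %.2f\n", RebalanceShare(dgpu, igpu));
}
```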

The whole thing is so complicated and prone to failure that it's difficult to imagine it ever working very well. But Oxide seem committed to the idea and they are probably smarter than me :)
 
Kaapstad, if anybody is going to block it, it will be NV. Do you remember them blocking off Ageia PhysX cards in their drivers?? Mother****ers...
 