Theoretically, if they both support a standard feature in a standard way, there is no reason why an abstraction layer couldn't be utilised to tie their capabilities into one pool.
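As a very rough sketch of what that kind of abstraction layer could look like, purely to illustrate the idea of pooling capabilities; the types and names here are hypothetical and not from any real SDK:

#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical wrapper: every physical GPU, whatever the brand, sits behind
// the same interface and contributes its capabilities to one shared pool.
struct WorkChunk { int firstRow; int rowCount; };

struct GpuDevice {
    virtual ~GpuDevice() = default;
    virtual std::size_t memoryBytes() const = 0;      // capability exposed uniformly
    virtual void submit(const WorkChunk& chunk) = 0;  // brand-agnostic work submission
};

class GpuPool {
public:
    void add(std::unique_ptr<GpuDevice> dev) { devices_.push_back(std::move(dev)); }
    std::size_t totalMemory() const {
        std::size_t total = 0;
        for (const auto& d : devices_) total += d->memoryBytes();
        return total;                                 // the pool, not any single card
    }
private:
    std::vector<std::unique_ptr<GpuDevice>> devices_;
};

Whether real drivers would ever feed a pool like that cleanly is, of course, the whole argument of this thread.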
Neevvveeeer goinnnnggg to happppeeeen.
At most you will get mixed-brand multi-GPU support for desktop/2D work. For gaming... if it happens I will eat EVERYBODY'S hats.
Theory goes out the window quick though.
Both Radeon and GeForce are DX compliant, yet both vendors have to rewrite shaders for every major game, on top of all kinds of other vendor-specific tweaks we have come to demand as "minimum acceptable effort". I fully expect this to continue until 5-6 years from now, when the next API hype train comes along and this conversation kicks up again.
Nvidia don't even let GPUs with different memory amounts run in SLI; I don't think we will see AMD and Nvidia working on the same scene.
From what I understand, DX12 will not care what GPU it sees. All it will see is a GPU; the brand is irrelevant.
Nvidia and AMD don't have to do anything to make this work; that's down to Microsoft.
Obviously over the years they've been collecting ideas for it, and Hydra was a damn good idea; it just never caught on because it was proprietary and you had to buy a motherboard with a Lucid chip on it.
But it did work. IIRC TTL tested it with four GPUs, two Nvidia and two ATI, and it worked fine when it was supported properly.
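For anyone curious what the "brand irrelevant" part looks like from the programming side, here is a minimal sketch of enumerating adapters and creating a D3D12 device on each one, assuming a Windows 10 machine with the D3D12/DXGI headers; error handling is trimmed and nothing in it is specific to either vendor:

#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

// Link against d3d12.lib and dxgi.lib.
int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP/software

        // Identical call whatever the VendorId; the API only cares that the
        // adapter meets the minimum feature level.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Created D3D12 device on: %ls\n", desc.Description);
            devices.push_back(device);
        }
    }
    wprintf(L"Usable D3D12 devices: %u\n", static_cast<unsigned>(devices.size()));
    return 0;
}

Whether those devices can then be made to share a frame efficiently is a separate question, but the enumeration itself really doesn't care whose badge is on the card.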
DX12 was described as having a very low level of abstraction, for best performance... The idea of a single pool of GPU resources built from any and all GPU hardware inserted in a system is a very high level of abstraction, which seems at odds with the whole premise of DX12.
It does sound plausible that you could use split-frame rendering and send each GPU only as small a chunk as it was capable of handling. The performance gain probably wouldn't be anywhere near the maximum you could otherwise get from that particular GPU, but it shouldn't be worse than not having it.
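A toy illustration of that split-frame idea, with made-up performance weights just to show the arithmetic of handing each GPU only as big a slice as it can cope with; a real engine would have to measure or estimate the weights itself:

#include <cstdio>
#include <vector>

struct Slice { int firstRow; int rowCount; };

// Divide the frame's rows between GPUs in proportion to their weights.
std::vector<Slice> splitFrame(int frameHeight, const std::vector<double>& weights) {
    double total = 0;
    for (double w : weights) total += w;

    std::vector<Slice> slices;
    int row = 0;
    for (std::size_t i = 0; i < weights.size(); ++i) {
        int rows = (i + 1 == weights.size())
                       ? frameHeight - row  // last GPU takes the remainder
                       : static_cast<int>(frameHeight * weights[i] / total);
        slices.push_back({row, rows});
        row += rows;
    }
    return slices;
}

int main() {
    // e.g. a fast card paired with a much slower one, weighted 2.5 : 1.0
    auto slices = splitFrame(1080, {2.5, 1.0});
    for (std::size_t i = 0; i < slices.size(); ++i)
        std::printf("GPU %zu renders rows %d..%d\n", i, slices[i].firstRow,
                    slices[i].firstRow + slices[i].rowCount - 1);
    return 0;
}

With those numbers the fast card gets rows 0-770 and the slow one rows 771-1079, which matches the point above: the slow card contributes something, just nowhere near what the fast card could manage on its own.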
Any GPU that runs DX12 should be seen no differently when it comes to the majority of DX12 commands, regardless of the underlying hardware, because of the API. So one set of code should run equally well on both cards, with only the performance of the underlying hardware determining execution time. It's similar to an x86 application using the same supported extensions on an AMD or Intel processor: the underlying hardware determines the performance while running the same code.
The reason that DX11-and-under cards are not like this is the different implementations of memory management and rendering pathways within the driver, which differ between the two manufacturers. This is why one set of code might run better on one manufacturer's hardware than the other, and why they have to implement fixes and even completely replace shader code to make it run better with their drivers.
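To make the "one set of code" point concrete: a D3D12 recording routine only ever talks to the API interfaces, so the same function can record onto a command list that ends up executing on either vendor's hardware. This is only a fragment for illustration; pipeline state, root signature and render target setup are assumed to have been done already:

#include <d3d12.h>

// Records the same commands regardless of which vendor's driver sits
// underneath the command list; only the execution time differs.
void recordSimplePass(ID3D12GraphicsCommandList* cmd,
                      D3D12_CPU_DESCRIPTOR_HANDLE rtv) {
    const float clearColour[4] = {0.0f, 0.0f, 0.0f, 1.0f};
    cmd->ClearRenderTargetView(rtv, clearColour, 0, nullptr);
    cmd->DrawInstanced(3, 1, 0, 0);  // assumes the PSO and root signature are already set
}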
Do you have direct experience of this? Or is this "Glide is hardcoded on the GPU" all over again?
That isn't quite right.
DX12 is low level, which means it lies directly in the developers' hands what commands to use and when... It won't be down to Nvidia, or AMD, or even Microsoft to get it working; it will be squarely in the developers' hands, which is why some people are being sceptical.
Correct me if I am wrong, but if it's hard-coded into the API, that means nothing has to be done, right?
I.e. it will instinctively know what to do with more than one GPU?
Well, as long as the hardware supports the feature set and the driver converts the calls from the API into calls that the hardware understands (and back again) flawlessly, then the calls coming from the driver of Company A should be identical to the calls coming from the driver of Company B. The software developer should only see the API interface and any extra hardware capabilities, such as the Tier 1-3 stuff, whether the hardware is a direct renderer or tile-based, and maybe some relative performance metrics as well.
But besides that, if the hardware is running to the spec, then a call sent to either card should work in an identical way, and the rest is just the performance of the hardware, as with x86 CPUs. The situation is identical in this regard to DirectX 11 and under, because that is how a specification and an API work. The only reason the situation is skewed and broken with DirectX 11 as an API is the differences in how the abstraction is implemented, which is independent of the API spec.
But as long as the cards are running to spec, and the developer sees the cards in terms of a few extra capabilities and relative performance metrics, then they should be able to produce a multi-GPU renderer that can use underlying hardware from any manufacturer, no differently to using cards from the same manufacturer, as long as the manufacturer has not artificially placed locks in the driver.
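For reference, the "Tier 1-3 stuff" mentioned above is queried through the API itself, so the developer asks the device what it can do rather than asking who made it. A minimal sketch, assuming `device` is an ID3D12Device created on whichever adapter happens to be installed:

#include <d3d12.h>
#include <cstdio>

// Ask the device for its resource binding tier, tiled resources tier and
// whether it is a tile-based renderer, without caring about the vendor.
void reportCapabilities(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options)))) {
        std::printf("Resource binding tier: %d\n",
                    static_cast<int>(options.ResourceBindingTier));
        std::printf("Tiled resources tier:  %d\n",
                    static_cast<int>(options.TiledResourcesTier));
    }

    D3D12_FEATURE_DATA_ARCHITECTURE arch = {};
    arch.NodeIndex = 0;  // first GPU node on this device
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_ARCHITECTURE,
                                              &arch, sizeof(arch)))) {
        std::printf("Tile-based renderer:   %s\n", arch.TileBasedRenderer ? "yes" : "no");
        std::printf("Unified memory (UMA):  %s\n", arch.UMA ? "yes" : "no");
    }
}

If two cards report the same tiers and run to spec, the renderer's code path doesn't need to know whose logo is on the box; the remaining differences are performance and whatever the drivers choose to allow.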