AMD Polaris architecture – GCN 4.0

mGPU is dead.

As an example: Quantum Break is a DX12 game, but it has no mGPU support because the development team didn't want to spend the time implementing it.

They even admitted it was too much work.

So yeah, I expect more devs to follow that road.

mGPU is far from dead. Multiple GPUs on a single interposer will be how GPUs grow; the cost of creating massive GPUs like the 980 Ti/Titan/Fury X is going to keep climbing to the point where it's no longer viable.

Having multiple small GPUs on a single interposer is how they will grow in power, much like CPUs have progressed over the past decade.
 
Having multiple small GPUs on a single interposer is how they will grow in power, much like CPUs have progressed over the past decade.

It's certainly a direction AMD hasn't abandoned, with their attempts to get mGPU development dialled in at a grassroots level - they're going to need to put more effort into supporting that than they traditionally have, though, or it'll all fizzle out.
 
The whole idea behind DX12 is that they won't have to do it - developers will. And with the likelihood that the next wave of consoles will once again use AMD hardware, it's almost certainly going to be based on an mGPU setup.
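
Roughly what "developers doing it" means in practice: under DX12 each card is just another adapter the engine has to enumerate and feed itself, rather than a driver-side SLI/CrossFire profile doing it behind your back. Something like this for the enumeration side (only a sketch, Windows-only, error handling stripped out):

```cpp
// Rough sketch: listing the adapters a DX12 engine could drive explicitly.
// Link against dxgi.lib and d3d12.lib.
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // ignore the WARP software rasteriser

        // Passing nullptr for the device just asks "would a DX12 device work here?"
        const bool dx12 = SUCCEEDED(D3D12CreateDevice(
            adapter.Get(), D3D_FEATURE_LEVEL_11_0, __uuidof(ID3D12Device), nullptr));

        wprintf(L"Adapter %u: %s - %s\n", i, desc.Description,
                dx12 ? L"DX12 capable" : L"no DX12 support");
    }
    return 0;
}
```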

Yeah, I don't think we are going to be any better off with them handling mGPU, considering how broken the majority of games are these days :p

Until consoles have mGPU, mGPU is just going to get worse and worse for PC.
 
mGPU is far from dead. Multiple GPUs on a single interposer will be how GPUs grow; the cost of creating massive GPUs like the 980 Ti/Titan/Fury X is going to keep climbing to the point where it's no longer viable.

Having multiple small GPUs on a single interposer is how they will grow in power, much like CPUs have progressed over the past decade.

Not convinced - it's a significant burden to code for, and without people coding for it, what's the use? To be clear, the games industry isn't a place of speculative development and great quality; it's a commercial industry where margins matter, costs can spiral massively, and there remains a significant problem of poor coding standards and reluctance to change processes. (The same is true of software development in general, though the games industry as a whole seems especially poor in my experience. There are of course some exceptions.)

Also, the comparison with CPUs is somewhat flawed. In the consumer space, CPUs remain single chips with parallel processing via cores; GPUs have long since gone down this route and are massively more parallel than CPUs. Going multi-GPU is more like going dual-socket than like adding cores/threads, and dual-socket remains the realm of servers because it's hard to utilise effectively for many workloads.

Not saying it'll never happen, but for now I think buying mGPU because 'it's how the tech is headed' is either premature or wrong.
 
Bloody hell, I actually feel sorry for Nvidia now... I feel like someone has just revealed all the spoilers for future seasons of House of Cards.

With the money from China coming in, they will have the funds to make sure this comes to fruition.
Ha, don't feel bad for Nvidia based on this video.

This guy is making lots of leaps of logic and also continues to use only ONE game as his proof of how games are being developed 'for AMD hardware' - Quantum Break. A game that is a massive outlier and frankly runs terribly on both AMD and Nvidia when you look at how the XB1 runs it. It definitely does not look like a high-effort PC port.

The guy makes some good points and I even like *some* of his predictions. I do think AMD is being forward-thinking and has the opportunity to regain some market share over the next couple of years. But seriously, stop short of thinking this is in any way guaranteed or even 'highly likely'.

For one, his whole talk about how AMD are going to start putting dual GPUs in consoles and just blindly expecting developers to be happy with that and to 'get everything out of the hardware' - this is highly naive. I can definitely see first-party studios getting the support and time to get more out of a much more difficult configuration and API to work with, but multiplatform developers? That's a big ask. They always lag behind quite a bit. They will improve, especially as general console APIs and internal tools improve, but they will not be happy about it and it will take a good while for them to get on top of it. And then on the PC side, his assumption that, because of this, every game will run terribly on Nvidia hardware because developers don't care what hardware users have - wrong. Just plain wrong. Developers do not want to alienate a massive portion of their potential market by catering solely to one hardware manufacturer.

When it comes down to it, AMD have had hardware in consoles for a while now and it has not resulted in this huge advantage for AMD on the PC side like many predicted.

And we've yet to see how much DX12/Vulkan are really going to take off. It IS a lot more work for developers, and it has the added disadvantage of potentially having the game run a lot worse on one hardware manufacturer or the other because of the nature of the lower-level optimizations necessary. I still think that DX11.3 will likely be the favored graphics API on PC for quite a while. It saves on time and effort and is comparatively more hardware-agnostic.

Lastly, I'm no Nvidia fan, nor am I a 'fan' of hardware manufacturers in general. But I do notice that AMD fans can often be a bit overly optimistic at times. Always lots of theories about how they're going to turn things around, about how awesome their next line of 'x' is going to be, blah blah blah, and it just often doesn't pan out. Maybe sometimes they're thinking a little too far ahead and make decisions that simply don't pay off in the short term, and then in the long term it becomes irrelevant anyway as different paradigms take shape or competitors catch up. Whatever the case, the point is that I think a little caution is probably needed when making predictions about what is going to happen.
 
mGPU is far from dead. Multiple GPUs on a single interposer will be how GPUs grow; the cost of creating massive GPUs like the 980 Ti/Titan/Fury X is going to keep climbing to the point where it's no longer viable.

Having multiple small GPUs on a single interposer is how they will grow in power, much like CPUs have progressed over the past decade.

Why do two 2048-shader dies and put them on an interposer when you can do one 4096-shader die?

Also, with that method the OS would surely see it as one 4096-shader die, so it's still not mGPU in the sense of CrossFire or SLI, which are basically useless these days for anything other than benchmarking.
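
Worth noting DX12 does let software tell those two situations apart: a CrossFire/SLI-style linked pair shows up as one device reporting more than one 'node', while a single big die reports just one. Something like this on an already-created device (only a sketch):

```cpp
// Sketch: distinguishing a linked CrossFire/SLI pair from a single die in DX12.
// A linked pair reports NodeCount > 1 on the same device; work is then aimed
// at a particular GPU through the NodeMask fields on queues, heaps, etc.
#include <d3d12.h>
#include <cstdio>

void ReportNodes(ID3D12Device* device)
{
    const UINT nodes = device->GetNodeCount();
    if (nodes > 1)
        printf("Linked-node adapter: %u physical GPUs behind one device\n", nodes);
    else
        printf("One GPU (however big the die) behind this device\n");
}
```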
 
Ha, don't feel bad for Nvidia based on this video.

This guy is making lots of leaps of logic and also continues to use only ONE game as his proof of how games are being developed 'for AMD hardware' - Quantum Break. A game that is a massive outlier and frankly runs terribly on both AMD and Nvidia when you look at how the XB1 runs it. It definitely does not look like a high-effort PC port.

The guy makes some good points and I even like *some* of his predictions. I do think AMD is being forward-thinking and has the opportunity to regain some market share over the next couple of years. But seriously, stop short of thinking this is in any way guaranteed or even 'highly likely'.

For one, his whole talk about how AMD are going to start putting dual GPUs in consoles and just blindly expecting developers to be happy with that and to 'get everything out of the hardware' - this is highly naive. I can definitely see first-party studios getting the support and time to get more out of a much more difficult configuration and API to work with, but multiplatform developers? That's a big ask. They always lag behind quite a bit. They will improve, especially as general console APIs and internal tools improve, but they will not be happy about it and it will take a good while for them to get on top of it. And then on the PC side, his assumption that, because of this, every game will run terribly on Nvidia hardware because developers don't care what hardware users have - wrong. Just plain wrong. Developers do not want to alienate a massive portion of their potential market by catering solely to one hardware manufacturer.

When it comes down to it, AMD have had hardware in consoles for a while now and it has not resulted in this huge advantage for AMD on the PC side like many predicted.

And we've yet to see how much DX12/Vulkan are really going to take off. It IS a lot more work for developers, and it has the added disadvantage of potentially having the game run a lot worse on one hardware manufacturer or the other because of the nature of the lower-level optimizations necessary. I still think that DX11.3 will likely be the favored graphics API on PC for quite a while. It saves on time and effort and is comparatively more hardware-agnostic.

Lastly, I'm no Nvidia fan, nor am I a 'fan' of hardware manufacturers in general. But I do notice that AMD fans can often be a bit overly optimistic at times. Always lots of theories about how they're going to turn things around, about how awesome their next line of 'x' is going to be, blah blah blah, and it just often doesn't pan out. Maybe sometimes they're thinking a little too far ahead and make decisions that simply don't pay off in the short term, and then in the long term it becomes irrelevant anyway as different paradigms take shape or competitors catch up. Whatever the case, the point is that I think a little caution is probably needed when making predictions about what is going to happen.

Glad you posted that - much better articulated than I could have put it - I just facepalmed at him using the Hitman stuff as proof and couldn't put into words how utterly stupid his leaps to conclusions were.
 
AMD will more than likely add an AFR mGPU implementation to GPUOpen in the near future. It is the most basic form of mGPU.

AFR is far from the most effective way to utilise multiple GPUs - the biggest potential comes from developers having the ability to farm out the workload and then composite the results into a single frame, kind of like what Lucid were trying to do, but without being able to do that at the development level their gains were always limited.
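
The "farm out and composite" idea is basically split-frame rendering done by the engine rather than the driver. Rough shape of it below - the Gpu/Image types are completely made up for illustration, not any real API:

```cpp
// Illustration only: each GPU renders a horizontal band of the *same* frame
// and the bands are composited into one image. The Gpu type is a stand-in
// for whatever the engine uses to submit work to a particular adapter.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Rect  { int x, y, width, height; };
struct Image { int width, height; std::vector<unsigned char> rgba; };

struct Gpu
{
    unsigned char shade; // stand-in so the sketch produces visible output
    Image RenderRegion(const Rect& r) const
    {
        // Pretend to render: in a real engine this would be a command list
        // with a viewport/scissor limited to the band.
        return Image{r.width, r.height,
                     std::vector<unsigned char>(std::size_t(r.width) * r.height * 4, shade)};
    }
};

Image RenderSplitFrame(const std::vector<Gpu>& gpus, int width, int height)
{
    Image frame{width, height, std::vector<unsigned char>(std::size_t(width) * height * 4)};
    const int band = height / int(gpus.size());

    for (std::size_t i = 0; i < gpus.size(); ++i)
    {
        // Last GPU picks up any leftover rows.
        const int y = int(i) * band;
        const int h = (i + 1 == gpus.size()) ? height - y : band;

        Image part = gpus[i].RenderRegion(Rect{0, y, width, h});

        // Composite the band back into the final frame at its y offset.
        std::copy(part.rgba.begin(), part.rgba.end(),
                  frame.rgba.begin() + std::size_t(y) * width * 4);
    }
    return frame;
}
```

The catch, of course, is that the engine then has to balance the bands and pay for the composite step itself - exactly the kind of work most studios don't want to take on.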

Why do two 2048-shader dies and put them on an interposer when you can do one 4096-shader die?

There are quite a lot of challenges to just scaling up an architecture indefinitely - which is why every now and again GPUs drop back a bit on SP count rather than having something like 10K of them by now - all kinds of potential utilisation and efficiency problems with pipeline/queue latency, caching, etc. At a point it actually becomes more efficient to spread the load over two distinct GPUs or redesign your architecture.
 
mGPU is far from dead. Multiple GPUs on a single interposer will be how GPUs grow; the cost of creating massive GPUs like the 980 Ti/Titan/Fury X is going to keep climbing to the point where it's no longer viable.

Having multiple small GPUs on a single interposer is how they will grow in power, much like CPUs have progressed over the past decade.

If this were to be done, I don't think it's likely to be multiple traditional GPUs on a single interposer. I could see multiple dies of mostly shader hardware/engines on an interposer, but controlled through a single command processor/work distributor so that all the OS/driver/game sees is one GPU. Thinking about it, though, I imagine latency would be abysmal.
 
But the interposer isn't much easier to create and adds costs too.

AMD and Nvidia are already moving to using interposers with things like HBM anyway. So, putting smaller GPUs on the interposer as opposed to one large one is not going to change much.

However, it will massively improve yields and cut GPU R&D costs, as you can scale up one basic GPU "unit".

People, do you realise that JHH said GP100 cost $2 to $3 billion to develop and they hoped to make some money on it (or something to that effect)??

People trying to dismiss what the chap is saying are also forgetting the massively increased cost of each node, and the fact that software is changing the way multi-GPU is handled.

Even if people were to argue that more software would need to be in place for all of this to happen and that devs won't bother - that is making assumptions too.

Why??

Costs. Being able to drop hardware costs long-term will be a major impetus for Sony, MS and Nintendo to push forward - that is where most of the risk for them comes from. The other aspect that is a massive impetus for Sony, MS and Nintendo is the fact they can update the consoles more easily.

Making the consoles x86 was also about cost - people were pooh-poohing that notion when the first leaks about the consoles being x86 came out.

Also, until Nvidia gets a decent CPU in-house it is far cheaper to deal with one company - so realistically Intel is the only other company in the fray, unless they both form an alliance of sorts.

Remember we are talking a few years from now.

Have people not even seen how more and more of the GameWorks titles are doing OK on AMD cards?? That indicates we are starting to see an effect of AMD being in the consoles.

Look at something like The Division - a total reversal of what we saw with Watch Dogs two years ago. This is from Ubisoft, who are very close to Nvidia.

Remember - most of the titles in the first few years of the consoles were developed with the older consoles in mind too, titles like Watch Dogs being examples of this. These will have been years in development.

As time progresses more and more titles will be developed solely with the new consoles in mind, and many of those are now starting to be released.

The problem is too many enthusiasts expect these things to happen in a short period.

PS:

I also expect Nvidia to start putting multiple GPUs on an interposer too.

Far cheaper to develop one smaller GPU and scale it up.

Do people really think MS would put all this effort into new ways of doing multi-GPU in DX12 without consulting AMD or Nvidia? Very doubtful indeed IMHO.

AMD will get there first IMHO, but it'll probably be at least 2018 or 2019 till we start to see this. I expect Navi to be more like end of 2018 if anything.

This will take a few years to come to fruition but it is going to happen IMHO.

Process tech is getting more and more difficult to implement - even the move to 450mm wafers: look at how many years it has been delayed due to the enormous costs?? Even Intel is finding it hard to afford!!

Hence, companies are going to have to think in more creative ways to make sure costs don't spiral out of control.
 
Hence, companies are going to have to think in more creative ways to make sure costs don't spiral out of control.


The issue with linking two GPUs is latency, and even if DX12 gives the developer better options to control (fix) latency, it isn't something that is easy to implement yet.

GCN has shown that AMD has better technology that lasts longer and thrives on DX12, while Nvidia has to spend more money to fix their hardware, as they lack the equivalent support in hardware.
 
I also expect Nvidia to start putting multiple GPUs on an interposer too.

Far cheaper to develop one smaller GPU and scale it up.


AMD will get there first IMHO, but it'll probably be at least 2018 or 2019 till we start to see this. I expect Navi to be more like end of 2018 if anything.

This will take a few years to come to fruition but it is going to happen IMHO.

Maybe, maybe not. ;)

Pascal-play-the-future.jpg
 
Maybe, maybe not. ;)

Pascal-play-the-future.jpg

Forgot about that, but you might be correct that Nvidia gets there first - although AMD has been running test samples of chips on interposers for nearly 8 years now.

But it just shows you, if that picture is legit, that MS including all that extra stuff isn't a mistake.

Volta and Navi might be the most interesting GPUs in a long time!!
 
2+ full GPUs on an interposer might be feasible in the near term for HPC, but I think it lacks wind for gaming unless devs really are keen on supporting mGPU in their games in DX12/Vulkan.

AMD's head start on R&D into high-throughput communication between two dies (while developing HBM) puts them in a better position for scalable multi-die interposer GPUs than the example above, which looks like the relatively simpler approach of sticking two GPU dies closer together.
 
2+ full GPUs on an interposer might be feasible in the near term for HPC, but I think it lacks wind for gaming unless devs really are keen on supporting mGPU in their games in DX12/Vulkan.

AMD's head start on R&D into high-throughput communication between two dies (while developing HBM) puts them in a better position for scalable multi-die interposer GPUs than the example above, which looks like the relatively simpler approach of sticking two GPU dies closer together.

I think AMD's argument about scalable APUs got Microsoft and Sony all excited, probably because of the cost and the opportunity to sell more - like different tiers of console: a cheap one like the regular consoles at 720p/30fps, a higher tier at 1080p/60fps, and maybe a luxury one with 4K. All they have to do is double the cores and try to stay within thermal limits. Even in development this wouldn't be that much of a problem for devs, and people might actually buy multiples of the same console.

This is why they are both talking about an Xbox One 1.5 and a PS4K. I am pretty sure those upgraded consoles don't use Polaris but rather a second GPU to double the compute units. Since Vulkan/DX12 etc. allow the workload to be handled as one GPU, most of the problems with multi-GPU will vanish, and all of this will translate to the PC ports. It will require more driver work from Nvidia, but that shouldn't bother them - they always bragged about having the best driver team - so this will work out well.

The arguments in the video might seem far-fetched to some people, but in reality they aren't. Most people in threads discussing the latest console ports being better optimised for AMD concluded that it was probably because of the GCN architecture shared between them. Now that they no longer develop for the old gen, things will probably get better for AMD, especially in VR.
 
AFR is far from the most effective way to utilise multiple GPUs - the biggest potential comes from developers having the ability to farm out the workload and then composite the results into a single frame, kind of like what Lucid were trying to do, but without being able to do that at the development level their gains were always limited.

It might not be the most efficient in terms of latency, among other things, but explicit mGPU using AFR is still miles ahead of abstracted AFR in DX11.

It is also the simplest to implement; it only becomes more complex when scaling beyond dual GPUs.
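
To illustrate why dual-GPU AFR is the easy case: frame N just goes to GPU N mod the GPU count. Made-up interface again, purely to show the shape of it:

```cpp
// Illustration only: alternate-frame rendering just round-robins whole frames
// across GPUs. The Gpu type is a stand-in for a per-adapter queue + fence.
#include <cstdint>
#include <vector>

struct Gpu
{
    void WaitForPreviousFrame() { /* stand-in for a fence wait */ }
    void RenderFrame(std::uint64_t frame) { /* stand-in for submitting the frame */ }
};

void RunAfr(std::vector<Gpu>& gpus, std::uint64_t frameCount)
{
    for (std::uint64_t frame = 0; frame < frameCount; ++frame)
    {
        // Whole frame goes to one GPU; with two GPUs each one only has to
        // finish every other frame, which is why dual-GPU scaling is decent.
        Gpu& target = gpus[frame % gpus.size()];

        target.WaitForPreviousFrame(); // don't outrun that GPU's last frame
        target.RenderFrame(frame);     // kick off this frame on that GPU

        // Presenting these in order at an even pace is the part that gets
        // awkward once you go past two GPUs - hence the usual pacing issues.
    }
}
```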
 