
Rumour: AMD Cancels RX Vega Primitive Shaders From Drivers

Ryan Smith from Anandtech wrote it best:

Moving to Maxwell, Maxwell 1 was a repeat of Big Kepler, offering HyperQ without any way to mix it with graphics. It was only with Maxwell 2 that NVIDIA finally gained the ability to mix compute queues with graphics mode, allowing for the single graphics queue to be joined with up to 31 compute queues, for a total of 32 queues. This from a technical perspective is all that you need to offer a basic level of asynchronous compute support: expose multiple queues so that asynchronous jobs can be submitted. Past that, it's up to the driver/hardware to handle the situation as it sees fit; true async execution is not guaranteed.

But then, after AOTS, the entire community became obsessed with the idea that async compute, when implemented the way AMD does it, increases gaming performance. Which is fine, but remember the whole complaint was that Nvidia had numerous whitepapers, slides and statements saying Maxwell supported it (which again is technically true, and even useful for some applications). Because it didn't work specifically in games, everyone continues to believe to this day that Maxwell never supported it, which is untrue.
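As a toy illustration of what "exposing multiple queues" means at the API level, the distinction Ryan draws can be sketched in a few lines. This is a simplified Python model, not real driver or Vulkan/D3D12 code: `ToyGPU`, its queue names, and the serializing `drain` scheduler are all made up for the example. The point it captures is that accepting asynchronous submissions on 32 queues is a separate promise from executing them concurrently.

```python
from collections import deque

class ToyGPU:
    """Toy model of Maxwell 2-style queue exposure: one graphics queue
    plus up to 31 compute queues (32 total). Exposing the queues is the
    API-level requirement; whether work actually overlaps is left to
    the 'driver' (here, a deliberately conservative scheduler)."""

    def __init__(self, compute_queues=31):
        self.queues = {"graphics": deque()}
        for i in range(compute_queues):
            self.queues[f"compute{i}"] = deque()

    def submit(self, queue_name, job):
        # Asynchronous submission: callers never block on other queues.
        self.queues[queue_name].append(job)

    def drain(self):
        # A conservative driver may simply serialize everything; true
        # concurrent execution is an implementation detail, not a promise.
        executed = []
        for name, q in self.queues.items():
            while q:
                executed.append((name, q.popleft()))
        return executed

gpu = ToyGPU()
gpu.submit("graphics", "draw_frame")
gpu.submit("compute0", "physics_pass")
gpu.submit("compute1", "postfx_pass")
print(len(gpu.queues))  # 32 queues exposed
print(gpu.drain())
```

Here `drain` happens to serialize everything, which is still a valid implementation of the interface; that is exactly why "supports async compute" and "games get faster" turned out to be different claims.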


Primitive Shaders have never been implemented in any game. They are not yet even enabled for RX Vega.

Great to hear from Nvidia PR on this. It didn’t work well enough to be worth using. As far as games are concerned it’s worthless.
 

Async is a solution to a problem Nvidia never had, so it is hardly surprising. If AMD's GPUs could attain similarly efficient, balanced loads without mountains of compute sitting underutilised, their situation would be similar.
 
I really don't get what AMD is/was trying to do with Vega. Get it on something equivalent to TSMC's 16nm, or better yet a proper 12FF, streamline it a bit with a narrower but higher-clocked implementation of the architecture, forget the stuff that needs developers to specifically program for it, and it would compete with anything nVidia has got.

That's probably because it's what Mr Koduri was trying to do, and when he left his masterplan went with him. I'd bet that AMD are trying to push on towards Navi. If they do do a die-shrunk Vega that doesn't come out until the end of this year or early next, it'll mean Navi's going to have its work cut out, considering the performance increases we should be seeing from the next generations of Nvidia tech. That said, it'd be no surprise if Nvidia decide to sit on their laurels and milk the market as much as they can while waiting for AMD to catch up, rather than pushing ahead regardless.

I'm hoping AMD are developing a dual+ core architecture using Infinity Fabric with Navi. There's a lot of talk about it on the internet, but I haven't seen anything from AMD on the matter, so it may be nothing more than the usual guff the rumour mongers like to spout. But if they are working on it and they can pull it off, it may be RTG's ace in the hole. As it stands, I can't see AMD worrying Nvidia over the next few years.
 
They'd better have something a LOT better than Infinity Fabric up their sleeve if the GPUs are going to communicate directly with one another. The current iteration just won't be good enough imo.

The Crossfire drivers would have to make a gargantuan leap before multi-GPU comes into the picture. Even then, with modern game engine design, it is just not obvious there is any viability here.

Unless people are mixing this up with potential multi-chip designs, but we are a long way from being able to engineer that yet. Not feasible on 7nm, probably not on 5nm either. (Hint: Ryzen only has 8-16 cores and can already run into issues on CPU tasks; Vega is already at over 4,000 cores, and graphics have much tighter latency requirements.)
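The "tighter latency requirements" point can be put into rough numbers. The figures below are purely illustrative assumptions (hop latency and sync counts are invented for the example, not measurements of Infinity Fabric or any real GPU); they just show how quickly cross-die traffic could eat a frame-time budget if synchronization crossed the link often.

```python
# Illustrative (not measured) numbers: how fast inter-die hops could
# consume a frame budget if rendering synchronized across the link often.
refresh_hz = 144
frame_budget_ms = 1000.0 / refresh_hz       # ~6.94 ms per frame at 144 Hz

hop_latency_us = 0.5                        # assumed one-way hop cost
syncs_per_frame = 5000                      # assumed cross-die syncs

overhead_ms = hop_latency_us * syncs_per_frame / 1000.0
print(f"frame budget:       {frame_budget_ms:.2f} ms")
print(f"cross-die overhead: {overhead_ms:.2f} ms "
      f"({100 * overhead_ms / frame_budget_ms:.0f}% of budget)")
```

A CPU task that loses a few microseconds to a die-to-die hop rarely notices; a renderer with a single-digit-millisecond budget and thousands of dependent operations per frame is far less forgiving.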
 

If they do go down that route, I can see it turning into a complete disaster at launch, just due to the lack of games that support multi-GPU well.

Even if Navi turns out to be a true multi-chip design, AMD had better have decent software support across multiple popular game engines at launch; otherwise, again, I can see it ending very badly for AMD. If you end up with a situation like Ryzen CPUs faced in PUBG, the card will be a very hard sell to most people, and the butt of jokes to most others.
 

Both AMD and nVidia are looking at moving away from monolithic cores in the future, but the earliest that is likely to happen would be the second generation of their next-gen architectures, after the current ones. It won't be a dual-core style setup, but rather the ability to use a larger total area by implementing the parts that make up the traditional GPU core as multiple discrete packages.
 

I was getting it wrong if that's the case. I was thinking multi-GPU cores without the need for Crossfire drivers, similar to the way they connected the cores on the CPUs that had more than 8 cores.
 

That sounds more like what I was thinking of. The sooner the better for AMD.
 

Recent advances in substrate technology make it feasible at 7nm using specialised links. The current general-purpose nature of Infinity Fabric wouldn't cut it, but they could implement "dual mode" type functionality in the future that allowed it to be switched into a specialised state where required.


I can't see it happening with the first generation of new releases. It's likely both companies will implement architecture changes towards supporting it, but still use a monolithic core for the first round and work towards MCM-type setups with revisions of that. Depending on who is involved, etc., all the recent staff changes at RTG might have an impact on the timeframe for that.
 

But that is just my point. You can't connect GPU cores the same way AMD connects Ryzen cores.

There isn't the technology to do that yet, at least not in a way that wouldn't be incredibly slow.
 
It sounds good, apart from the timeframe. AMD needed this yesterday.

AMD have much more serious problems to solve. MCM doesn't solve the problems they are facing with GPU efficiency; in fact, efficiency will likely decline.

The entire goal of MCM would be to reduce manufacturing costs in the future. As a consumer, you simply won't notice.
 
http://research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs

For those that haven't seen it yet.

Recent advances in substrate technology should make this feasible in some form at 7nm. It is not the same as current multi-GPU systems and won't employ something like SLI or CF. It's pretty much impossible to tie together current-architecture (or even any near-future) monolithic cores via something like Infinity Fabric and get better results than CF/SLI; it just won't happen. The only way to get better results is with MCM-type designs, which have the advantage of reducing cost, as you are implementing multiple smaller dies, as well as better potential performance.
 
Worth highlighting that their heavily optimised MCM design was "only" 10% slower than a monolithic design. The performance gain only comes about because you can have a far larger footprint than a monolithic chip, and far better yields, reducing costs. There is no issue building the current chip sizes. If Vega was magically transformed into an MCM design, it would be 10% slower. Production costs may be more or less, depending on current yields and the costs of the additional substrate technology and memory bandwidth required.

GV100 and professional computing is where MCM will make more sense. Instead of an 800mm^2 chip with bad yields, 2-4 smaller chips could be packaged. Or Nvidia could have even gone for something like 1,000mm^2 of die area across 4 dies. A 10% reduction in performance over a theoretical monolithic die wouldn't be a problem, since there would be an extra 200mm^2 of die space for performance gains.


At the consumer level this only becomes interesting when 400mm^2-sized chips become prohibitive. A 10% slower Vega that draws even more power and needs even more complex drivers won't be helping anyone.
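The yield-versus-die-size argument above can be put into rough numbers with a first-order Poisson die-yield model, Y = exp(-A * D0). The defect density used here is an assumed illustrative value, not foundry data, and the model ignores wafer-edge effects and packaging cost, so treat it as a back-of-the-envelope sketch only.

```python
import math

def poisson_yield(area_mm2, defects_per_cm2):
    """First-order Poisson die-yield model: Y = exp(-A * D0).
    A rough approximation; real foundry models are more elaborate."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.2  # assumed defect density, defects per cm^2 (illustrative)

big = poisson_yield(800, D0)    # one monolithic 800 mm^2 die
small = poisson_yield(200, D0)  # one 200 mm^2 chiplet

print(f"800 mm^2 monolithic yield: {big:.1%}")
print(f"200 mm^2 chiplet yield:    {small:.1%}")
# Silicon cost per good package scales roughly with area / yield:
print(f"relative silicon cost, monolithic vs 4 chiplets: "
      f"{(800 / big) / (4 * 200 / small):.2f}x")
```

With these assumed numbers the four small dies come out severalfold cheaper in raw silicon per good package, which is exactly why an 800mm^2-class professional part is the natural first home for MCM, while the 10% performance loss and extra driver complexity make it a harder sell for consumer cards.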
 