
Poll: Do you think AMD will be able to compete with Nvidia again during the next few years?

  • Total voters
    213
  • Poll closed.
Permabanned
Joined
2 Sep 2017
Posts
10,490
Apparently, it's very easy to stall GCN's geometry engines by context switching. Interestingly, Nvidia's geometry engines are designed to context switch rapidly. So the intelligent workgroup distributor in Vega (and GCN generally) will group work as efficiently as possible to reduce context switching. It'll also hold work until there's enough of it to fill a wavefront (64 threads). Nvidia operates with 32-thread warps, and each partition of 32 cores inside an SM has its own warp scheduler. Max resident threads is 2048 for Pascal, i.e. 64 warps per SM (likely virtualized).
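To put rough numbers on those warp/wavefront sizes, here's a quick Python sketch (just the commonly quoted 32/64 widths and Pascal's 2048-thread SM limit, nothing vendor-official beyond that):

```python
# Back-of-envelope numbers for the warp vs wavefront sizes mentioned above.
WARP_SIZE = 32          # Nvidia: threads per warp
WAVEFRONT_SIZE = 64     # GCN: threads per wavefront
PASCAL_MAX_THREADS_PER_SM = 2048

# 2048 resident threads / 32-wide warps = the 64 warps per SM mentioned above.
print("Pascal resident warps per SM:", PASCAL_MAX_THREADS_PER_SM // WARP_SIZE)

# Odd-sized work batches can leave more lanes idle on a 64-wide machine.
threads = 96
idle_wave = (-threads) % WAVEFRONT_SIZE   # 32 idle lanes (96 threads -> 2 wavefronts)
idle_warp = (-threads) % WARP_SIZE        # 0 idle lanes (96 threads -> 3 full warps)
print(f"{threads} threads -> idle lanes: wavefront {idle_wave}, warp {idle_warp}")
```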

It seems easier for Nvidia to fill and task its hardware effectively. GCN has typically struggled with underutilization of its CUs, maybe from the workgroup distributor holding work back when a 64-thread wavefront can't be filled, or from the geometry engines stalling a bit. I'm really not sure. There is definitely pressure on vGPRs in gaming.
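On the vGPR pressure point, a quick back-of-envelope sketch (assuming the commonly quoted 64 KB vector register file per SIMD and the 10-wavefront cap, and ignoring allocation granularity) shows how register usage caps occupancy:

```python
# How vGPR usage limits GCN occupancy (rough model only).
VGPR_FILE_BYTES_PER_SIMD = 64 * 1024   # 64 KB vector register file per SIMD
WAVEFRONT_SIZE = 64                    # lanes per wavefront
BYTES_PER_VGPR = 4                     # one 32-bit register per lane
MAX_WAVES_PER_SIMD = 10                # architectural cap

def max_waves(vgprs_per_thread):
    bytes_per_wave = WAVEFRONT_SIZE * BYTES_PER_VGPR * vgprs_per_thread
    return min(MAX_WAVES_PER_SIMD, VGPR_FILE_BYTES_PER_SIMD // bytes_per_wave)

for vgprs in (24, 32, 64, 128, 256):
    print(f"{vgprs:>3} vGPRs/thread -> {max_waves(vgprs)} wavefronts per SIMD")
# 24 -> 10, 32 -> 8, 64 -> 4, 128 -> 2, 256 -> 1
```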

GCN is primarily in-order execution, with async compute tasks prioritized to the front of the processing queue, while Nvidia's architectures based on the GPC/SM design are out-of-order execution, which is why every 32 cores has its own scheduler. Their PolyMorph engines can also work out of order: if geometry data isn't ready for something in the pipeline, they can context switch and render something that is ready, then go back and finish the previous work.
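Just to illustrate that "skip what's stalled, come back later" idea, here's a toy sketch (nothing to do with real driver or hardware code, purely conceptual):

```python
from collections import deque

# Toy model: a scheduler that defers stalled work instead of blocking on it.
# Each job is (name, ready). An in-order queue would sit waiting on geo_A;
# here we push it to the back and keep the units busy with what's ready.
jobs = deque([("geo_A", False), ("draw_B", True), ("draw_C", True)])

completed = []
while jobs:
    name, ready = jobs.popleft()
    if ready:
        completed.append(name)        # run it now
    else:
        jobs.append((name, True))     # defer; assume its data arrives later
print(completed)                      # ['draw_B', 'draw_C', 'geo_A']
```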


MSAA and SSAA hit Vega pretty hard, which suggests that AMD didn't do any work to improve these ROP-intensive anti-aliasing techniques. The market is moving away from them anyway, towards shader-based techniques.

Vega is maxed out on ROPs. One raster engine can have a max of 16 ROPs, and there are 4 raster engines in Vega 64, as it is still a 4-shader-engine design.
AMD uses 4 physical ROP blocks per shader engine, each capable of processing 4 colors. You can see this design choice in die shots of each shader engine. That gives a total of 16 ROPs per shader engine, which suits GCN's 4-wide SIMD structure well.
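Putting those per-engine numbers together (reference Vega 64 boost clock assumed at roughly 1546 MHz):

```python
# Vega 64 ROP total and theoretical pixel fill rate from the figures above.
SHADER_ENGINES = 4
ROP_BLOCKS_PER_ENGINE = 4      # physical ROP blocks per shader engine
PIXELS_PER_BLOCK = 4           # color samples each block handles per clock
BOOST_CLOCK_MHZ = 1546         # reference Vega 64 boost clock (approximate)

rops = SHADER_ENGINES * ROP_BLOCKS_PER_ENGINE * PIXELS_PER_BLOCK
fill_rate_gpix = rops * BOOST_CLOCK_MHZ / 1000   # Gpixels/s
print(f"{rops} ROPs, ~{fill_rate_gpix:.1f} Gpixel/s peak fill rate")
# -> 64 ROPs, ~98.9 Gpixel/s peak fill rate
```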

To add more ROPs, you'd need more shader engines, which means redesigning and rebalancing the entire architecture. There's also no guarantee of extra performance from added ROPs; there's a point of diminishing returns, which is why Nvidia is very aggressive with compression in their ROPs. The entire architecture is intrinsically linked, so if you had, say, 96 ROPs but they were all underutilized, you'd draw even more power (from the extra hardware units) for little to no gain. Everything must be thought out and designed carefully.

Doubling the L2 cache and vGPR (vector general-purpose register) sizes would probably help more, but that's expensive in terms of die area, so architects and engineers work towards more efficient rendering pipelines and extensive reuse of data.

  • Though the recent patent filing for AMD's Super SIMD also drastically increases vGPR efficiency and reduces register pressure overall (a weak point for GCN).


I swear I saw an image on the Beyond3D forums where a proposed design showed a 96-ROP Vega, with 6 ACEs and a reduced number of SPs per grouping under each ACE, at around 768 SPs each (4608 SPs total on the GPU).

If I recall the design correctly (it had some images), the proposal would address the way even Vega fails to fully load all of its SPs, based on data collected from Vega 56 as well as Polaris on how they performed, plus what appears to be a bottleneck in the ACEs themselves, and it would also fix up the back-end ROP count. They figured that with a 7nm shrink and these changes, the overall die size would still be about 20% smaller than the current one, and it could be a straight 50% faster than current Vega, at least in the areas where the ROPs and ACEs were lacking. But this presumed that the ACEs were the root cause of the SPs not loading fully at all times, and that the lack of ROPs was causing the drop in performance in tasks specifically suited to them. They also suggested it would probably be extremely helpful for ray tracing... but I can't for the life of me find the post. Perhaps it wasn't on Beyond3D, but I thought it was.
https://www.reddit.com/r/Amd/comments/9lltgj/64_rops_is_a_bottleneck_for_vega/
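For reference, here are that proposal's figures (as I remember them, so treat them as illustrative only) laid out against stock Vega 64:

```python
# The hypothetical 96-ROP layout from that proposal, next to stock Vega 64.
vega64 = {"sps": 4096, "rops": 64}
proposal = {"groupings": 6, "sps_per_group": 768, "rops": 96}

total_sps = proposal["groupings"] * proposal["sps_per_group"]   # 4608
print("Proposed SPs :", total_sps)
print("SP increase  : {:+.0%}".format(total_sps / vega64["sps"] - 1))          # +12%
print("ROP increase : {:+.0%}".format(proposal["rops"] / vega64["rops"] - 1))  # +50%
```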
 
Soldato
Joined
15 Jun 2005
Posts
2,751
Location
Edinburgh
There is something else: for 4K gaming, CrossFire behaves well:
Wow, another of your epically misleading posts. From the conclusion of the article you linked:

With some difficulty, we managed to find four games in which adding a second Radeon RX Vega 64 works well.
Of the games that came out the past year, only a small number support multi-GPU setup.
Now, even more so than ever before, we advise you to buy one fast video card rather than several slower cards.
Multi-GPU setups behave well if they have developer support. Unfortunately, for the majority of games that is not the case. You would be a fool to buy a CrossFire setup as a strategy for 4K gaming.
 
Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
With some difficulty, we managed to find four games in which adding a second Radeon RX Vega 64 works well.

Of the games that came out the past year, only a small number support multi-GPU setup.

Now, even more so than ever before, we advise you to buy one fast video card rather than several slower cards.

These quotes are bang on. Crossfire is all but dead unless you're a gamer who doesn't have time to play that often & only plays games that do scale well, such as BF1 or GTA5. If that's you it may be a good option, but if you tend to jump from game to game & usually grab new releases, you'll be pulling your hair out over how bad the support is.

Five or so years ago I tried Crossfire out with a pair of 7850s. I spent a week or so messing around with it before giving up & selling the second card. When Crossfire moved to XDMA I had high hopes for its future, but nothing improved enough to make it a viable option. Since then it seems things have only gotten worse.
 
Soldato
Joined
14 Sep 2008
Posts
2,616
Location
Lincoln
Anyone promoting xfire/SLI likely hasn't used them... I was xfired on 7970s and then Fury Xs because I was an early 4K adopter and there wasn't a single-GPU solution that came even close to powering that, and even that experience wasn't exactly great... to the point where I'm never using a multi-GPU setup again.
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
But it is still easier to make that work than to design a new GPU that succeeds Vega 64 and is up to 95-100% more powerful. It is either one or the other.

It is actually easier for AMD to design a brand new GPU as this is totally under their control.

With DX12 it is impossible for AMD to get mGPU working as this has to be done by the game devs.
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
It is actually easier for AMD to design a brand new GPU as this is totally under their control.

With DX12 it is impossible for AMD to get mGPU working as this has to be done by the game devs.

All GPU makers should send people or funding to the game developers to do it.
I'm beginning to think about going Crossfire with two Vega 56s or two Navi GPUs and a 1000W power supply.

This 4K monitor has to be powered by something after all, and new games will only get heavier, so more powerful GPUs are always handy.
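For what it's worth, a quick sanity check on that 1000W figure (reference board power of ~210 W per Vega 56 assumed, plus a generous allowance for the rest of the system):

```python
# Rough PSU headroom estimate for a two-card Vega 56 Crossfire build.
VEGA56_BOARD_POWER_W = 210   # reference board power (assumed; AIB cards draw more)
CPU_AND_REST_W = 250         # CPU, board, drives, fans -- generous assumption
PSU_W = 1000

load = 2 * VEGA56_BOARD_POWER_W + CPU_AND_REST_W
print(f"Estimated load: {load} W, headroom: {PSU_W - load} W "
      f"({load / PSU_W:.0%} of the PSU)")
# -> Estimated load: 670 W, headroom: 330 W (67% of the PSU)
```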
 
Soldato
Joined
16 Nov 2013
Posts
2,723
All GPU makers should send people or funding to the game developers to do it.
I'm beginning to think about going Crossfire with two Vega 56s or two Navi GPUs and a 1000W power supply.

This 4K monitor has to be powered by something after all, and new games will only get heavier, so more powerful GPUs are always handy.
Sending money/devs to every game developer to make this work would be a terrible cost for the GPU maker. That money is far better spent making a newer, better GPU.
 
Soldato
Joined
3 Oct 2013
Posts
3,622
All GPU makers should send people or funding to the game developers to do it.
I'm beginning to think about going Crossfire with two Vega 56s or two Navi GPUs and a 1000W power supply.

This 4K monitor has to be powered by something after all, and new games will only get heavier, so more powerful GPUs are always handy.

NO!

Finding a way for multiple GPUs to be added while remaining transparent to the host OS/game would be better.
 
Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
I'm not convinced. Lisa Su has injected a hell of a lot of cash into R&D, and we all know they're trying a "Zen" approach in the GPU side of the business.

What's your source?
I've read plenty of posts where people suggest they should do it, but I can't remember ever seeing anything confirming that AMD actually are trying to do it.
 
Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
I think 1080 Ti is a bit too much to ask for. I'm personally expecting 1080 non-Ti perf for ~£300.

Just don't see AMD being able to get that much extra perf from GCN on 7nm, to see a mid-range chip hitting 1080 Ti levels.

Definitely. Navi replaces Polaris, and with Polaris we saw a die shrink & new architecture that was still slower than Fiji. With Navi I think we'll see Vega performance for less.

Also isn't the chiplet thing just for Zen 2 at this time?

That's what I thought.
 
Associate
Joined
29 Jun 2016
Posts
529
What's your source?
I've read plenty of posts where people suggest they should do it, but I can't remember ever seeing anything confirming that AMD actually are trying to do it.

It was something that bounced around Reddit for a bit. I'm struggling to find a solid reference, but I did turn this up:
https://wccftech.com/amds-secret-ra...locks-efficiency-in-2018-navi-in-2019-beyond/

One of the most significant changes that have occurred after Raja’s departure is the assembly of a “Zen team” of engineers at the Radeon Technologies Group whose sole purpose is to drastically improve the performance of AMD’s GPU designs, by working alongside the company’s established engineering teams at RTG

 
Permabanned
Joined
12 Sep 2013
Posts
9,221
Location
Knowhere
It was something that bounced around Reddit for a bit. I'm struggling to find a solid reference, but I did turn this up:
https://wccftech.com/amds-secret-ra...locks-efficiency-in-2018-navi-in-2019-beyond/

One of the most significant changes that have occurred after Raja’s departure is the assembly of a “Zen team” of engineers at the Radeon Technologies Group whose sole purpose is to drastically improve the performance of AMD’s GPU designs, by working alongside the company’s established engineering teams at RTG


Hi,
First off, Merry Christmas. Thanks for the links, but I wouldn't put my faith in a word from these sources if I were you. WCCF built their reader base by either making news up or repeating what others have made up, knowing full well it had no real merit. As for the second site, sentences like "AMD execs too to the YouTubes yesterday to discuss how the last twelve months of Zen has been ahead of the imminent launch of Ryzen 2" just don't read right. I presume they meant "AMD execs took to YouTube yesterday to discuss blah blah", but does that sound like something an exec would be doing? I'd put money on it being fake news. I'm not saying AMD aren't exploring the possibility, just that sites like these use guesswork more than facts from genuine sources.
I hope AMD's bigwigs have got their eggheads working along those lines, as it could potentially allow AMD to leapfrog Nvidia at some point. But if they are working on something like that, it'll be very early days, and it'll require a completely new architecture to take advantage of it and use it properly, not another iteration of GCN, which is what Navi is meant to be. The idea sounds like it has promise, but I'd guess they're still working on the basics, if they really are working on it at all.
It's why so many sites use the cliché "take it with a grain of salt" so often. Like what I've written here, it's just an opinion or idea, not a fact.
 
Soldato
Joined
26 Sep 2010
Posts
7,157
Location
Stoke-on-Trent
Navi is the last GCN iteration so that's going to be a single die as always. Arcturus is the brand new architecture in 2020, but I don't think we'll see a chiplet design then either. But given how EPYC Rome performs, you just know AMD will want to crack that approach for GPUs too.
 
Soldato
Joined
19 May 2012
Posts
3,633
AMD's pricing over the past generation of cards has hardly been competitive on price/performance (apart from the 20 series).

I hold out no hope of them competing at the 4K end for a long time. By the time they are competitive, we'll probably see 4K/60fps consoles.
 