Is it likely that high end Navi will be a redesign of Vega 64?

Associate
Joined
23 May 2016
Posts
834
Location
Lurking over a keyboard
Nobody would love that to be true more than me, but given what we actually know right now that's extremely speculative.

All we really have to base that assumption on right now are some unsubstantiated leaks from a few niche YouTubers claiming optimism within the RTG.

I mean, I kind of agree with you, but I'm trying not to get my hopes up :D

Would agree with this. Not gonna count the chickens until they've hatched :D
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
I think this says everything about AMD's recent strategy/approach to very high end GPUs:

[Image: kbgqYQt.jpg]

I.e. they can make the most profit by selling them to Apple, and even more if they are exclusive to Mac Pro workstations, presumably because they are expensive and difficult to produce.

Until now, I thought the Radeon VIIs were the highest-spec GPUs AMD had. It does explain why the VII 'only' has 3,840 Shader Processors, rather than 4,096 like the Vega 64 (and Vega II).

£1,855 for a Vega II anyone? Weirdly, the dual-GPU Vega II Duo costs more than twice this amount at ~£4,020, because reasons

And this workstation comes with a 256GB SSD by default, despite the $5,999.00 price :eek:
 
Associate
Joined
12 Jan 2012
Posts
120
Location
Aberdeenshire
I think this says everything about AMD's approach to very high end GPUs:

I.e. they can make the most profit by selling them to Apple, and even more if they are exclusive to Macs and the supply is kept low. I thought the Radeon VIIs were the best AMD had. It does explain why the VII 'only' has 3,840 Shader Processors, rather than 4,096 like the Vega 64.

Apple are not the reason.

Vega was a compute architecture right from the start. AMD make money selling MI-series accelerators; the VII existed only to recoup losses on dies that didn't make the grade for the MI50.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
If the RDNA 2 GPUs end up performing at least a third better than the RX 5700 XT for £350-£450, I'll be quite happy with that. They would be excellent GPUs for 1080p and QHD, even for demanding games like Red Dead Redemption 2, and would deliver more than double the FPS I get from my R9 390.

If you look at the last few generations of high end AMD cards, none of them have been more than 30% faster than the previous generation (at 1080p):

https://tpucdn.com/review/powercolo...vil/images/relative-performance_1920-1080.png

The only exception to this was the Vega 64 vs the RX 590, which increased performance by 34% (on average) at 4K resolution:

https://tpucdn.com/review/powercolo...vil/images/relative-performance_3840-2160.png
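
For anyone who wants to sanity-check those percentages, the uplift is just a ratio of the relative-performance scores in the charts (the scores below are rounded for illustration, not exact TPU figures):

    # Percentage uplift between two relative-performance scores
    def uplift(new_score, old_score):
        return (new_score / old_score - 1) * 100

    # e.g. Vega 64 vs RX 590 at 4K, using rounded chart scores
    print(f"{uplift(134, 100):.0f}%")  # -> 34%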

I rest my case ;)

Not hard to imagine a 40-50% increase at 4K for the next flagship Nvidia GPU, however:

https://tpucdn.com/review/nvidia-geforce-gtx-1080-ti/images/perfrel_3840_2160.png
 
Associate
Joined
29 Aug 2013
Posts
1,176
30% over a 5700xt would be 2080 Ti performance :D That's what I'm trying to say, they're not far off.

The RX 590 released over a year after the Vega 64, and it was a bad card at a bad price. I'm sure Vega was 40-50% over the Fury it replaced.
 
Soldato
Joined
6 Feb 2019
Posts
17,562
AMD's cards typically haven't gained much between releases in the last few years, but remember that they were all just reworks of each other. The last few years of releases were all just GCN cards.
Now we have the new RDNA architecture, where what should be a low-to-mid-range GPU like the 5700xt can be built for higher yields and then overclocked to get decent performance.

If the rumours are true, full-fat RDNA 2 can house ~8,100 shader cores; compare that to the ~2,560 found in the 5700xt and yeah, the 5700xt is actually a replacement for the RX580. Not that it should come as a surprise to anyone: prior to CES, AMD's own team leaked slides that showed the 5700xt with an RX680 badge on it. AMD overclocked the crap out of the card so they could jack up the price and improve margins, then at the last minute renamed it to 5700xt so it didn't sound like the low-end GPU it actually is.
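
Quick ratio maths on that rumour (the 8,100 figure is only the rumour, nothing official):

    # Raw shader-count ratio: rumoured big RDNA 2 vs the 5700xt
    rumoured_shaders = 8100   # rumoured full-fat RDNA 2
    navi10_shaders = 2560     # 5700xt: 40 CUs x 64 shaders
    print(rumoured_shaders / navi10_shaders)  # ~3.16x the shaders on paper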
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
I think the only way AMD can compete with Nvidia at the very high end is to produce dual GPU cards like the Vega II Duo, built with RDNA 2. And hopefully without the insane price tag.

The problem there is (as some of you pointed out) that virtually no games support dual GPUs; AMD would have to find a way to make the OS detect the graphics card as a single GPU.

If they could create a dual-GPU card with performance equivalent to approx. 2x RTX 2080 Supers though, this would beat any of Nvidia's current / next gen. single cards.
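
Even in games that do support it, dual-GPU scaling is well short of a perfect 2x, so the sums would look something like this (the 85% scaling figure is my guess, not a measured number):

    # Effective dual-GPU performance with imperfect scaling
    def dual_gpu_fps(single_gpu_fps, scaling_efficiency=0.85):
        return single_gpu_fps * (1 + scaling_efficiency)

    print(dual_gpu_fps(100))  # 100 fps on one GPU -> ~185 fps, not 200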

Melmac - As it turns out, the next Xbox will use a GPU that has more in common with the RX Vega 56 in terms of Shader Processors. It seems very likely though that there will be two or more graphics card variants of RDNA 2 (as with the 5700 and 5700 XT), and that one of these will have more SPs than the Xbox GPU, but maybe not as many as the Vega 64.

Ofc, I know there are big differences between Vega and Navi; my question was intentionally quite simple...
 
Associate
Joined
29 Jun 2016
Posts
2,152
Location
Up Norf
But AMD never tries to compete with Nvidia's top end (they struggle enough in the mid-high end); people with that kind of money tend to buy Nvidia graphics cards. AMD might release a higher-end card later and charge a premium for it, like they did with the Radeon VII.

Why would next gen. consoles be designed for 300W max, considering the top end models will be nearly as powerful as current top end PCs with higher power requirements?

I think you are right about the clock speed though; the 1700MHz is apparently the boost clock, so the 'game clock' / real spec is probably 100-200MHz less than this.

To be fair, they have been very competitive in the mid-tier cards.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
I think the only way AMD can compete with Nvidia at the very high end is to produce dual GPU cards like the Vega II Duo, built with RDNA 2. And hopefully without the insane price tag.
Not true, on paper at least. Take the 5700 XT to 80 CUs and it'll beat the 2080 Ti. And that's RDNA 1. Granted, in the real world that's probably not possible, but given the power optimisations from the Renoir APUs that are being added to RDNA 2, I really don't think AMD would need to Crossfire multiple GPUs to match and beat top-end Turing.
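
The paper maths, for what it's worth (clocks assumed equal to the 5700 XT's boost, which a real 80 CU part probably couldn't hold):

    # Theoretical FP32 throughput: CUs x 64 shaders x 2 ops/clock x clock (GHz)
    def rdna_tflops(cus, clock_ghz):
        return cus * 64 * 2 * clock_ghz / 1000

    print(rdna_tflops(40, 1.905))  # 5700 XT: ~9.75 TFLOPS
    print(rdna_tflops(80, 1.905))  # hypothetical 80 CU RDNA: ~19.5 TFLOPS
    # The 2080 Ti is ~13.45 TFLOPS FP32, so on paper 80 CUs clears it easily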
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
I think they already took it as far as they could with RDNA v1 at 40 CUs (or at least, as far as was considered worthwhile and competitive). On release, I remember an AMD engineer stating that there were power issues with Navi, implying that customers would have to wait for more powerful GPUs.

I really doubt RDNA v2 would allow a 100% increase in the number of CUs. It would have to be much more power efficient than 7nm Navi to keep TDP at 300W or less.
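
Rough numbers on the power problem (board power is approximate, and I'm assuming power scales linearly with CU count, which it won't exactly):

    # Perf-per-watt gain needed to double CUs within a 300 W budget
    board_power_5700xt = 225                    # watts, approximate board power
    naive_80cu_power = board_power_5700xt * 2   # ~450 W if efficiency stood still
    required_gain = naive_80cu_power / 300 - 1
    print(f"{required_gain:.0%}")               # -> 50% better perf/W needed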
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
I think the best info we have to go on at the moment is the spec of the Xbox Series X RDNA 2 GPU, because it gives us an idea of the performance per watt that RDNA 2 can achieve, which does seem to be a significant improvement on RDNA v1.

I think another important question is: how much will performance scale up, relative to the Xbox Series X GPU?

Do you guys think the rumoured specifications of the console GPUs are close to the hardware they will actually have on launch?

EastCoastHandle - I've just realised what you mean now... that the PS5 GPU is based on 'Navi 10', which would technically make it RDNA v1 only (probably 7nm too, not 7nm EUV), presumably combined with hardware-based raytracing.

Tbh, I ignored this info before as I thought it was an error... If true, it does partly explain why the PS5 GPU appears to be significantly weaker than the Series X GPU.
 
Soldato
Joined
6 Feb 2019
Posts
17,562
I think the best info we have to go on at the moment is the spec of the Xbox Series X RDNA 2 GPU, because it gives us an idea of the performance per watt that RDNA 2 can achieve, which does seem to be a significant improvement on RDNA v1.

I think another important question is: how much will performance scale up, relative to the Xbox Series X GPU?

Do you guys think the rumoured specifications of the console GPUs are close to the hardware they will actually have on launch?

EastCoastHandle - I've just realised what you mean now... that the PS5 GPU is based on 'Navi 10', which would technically make it RDNA v1 only (probably 7nm too, not 7nm EUV), presumably combined with hardware-based raytracing.

Tbh, I ignored this info before as I thought it was an error... If true, it does partly explain why the PS5 GPU appears to be significantly weaker than the Series X GPU.

So the PS5 has 10.3TF of RDNA, I don't know if it's v1 or v2. And that was enough to give it a Firestrike graphics score on par with an RTX 2080.

But this scaling doesn't seem that good? Per teraflop it's on par with the 5700xt; the PS5 seems to score better because it must have more cores and therefore higher TFLOP output.

Also worth saying, before someone says Sony can't afford a GPU better than the 5700xt: y'all like to forget that the 5700xt is literally a rebranded RX680, a cheap low-to-mid-end GPU; there were RX680-branded cards ready for CES when it was shown off at the time. AMD are making good profit off the 5700xt when in normal times it would be selling for $200 USD less than its RRP.
 
Soldato
Joined
25 Sep 2009
Posts
9,627
Location
Billericay, UK
High end Navi won't be GCN but RDNA 2. Right now Navi10 is a hybrid of GCN and RDNA1.
I'm not sure it's fair to call it a hybrid. Sure, the instruction set is the same, but the underlying architecture is brand new. Navi is very streamlined compared to GCN and doesn't have the same limitations (such as not being able to keep all the stream processors fed with instructions).
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
Grim - Not sure I have much trust in GPU teraflops as a metric to determine performance. The Vega 64 offers 12.66 TFLOPS, but if this represented actual performance, it would overtake the RTX 2080 Super (11.15 TFLOPS).

For my last few GPU upgrades (HD 4870 > HD 7870 > R9 390), I've always tried to get at least a doubling of the texture and pixel rates, which I think is a reasonably effective way to guarantee a significant boost in GPU performance.

Another useful stat is memory bandwidth, but I believe this matters less.

Assuming the Xbox Series X GPU spec is correct, and a PC equivalent was created, it would offer more than double my R9 390's pixel rate (64,000 vs 165,600 MPixels/s) and texture rate (160,000 vs 386,400 MTexels/s), but 'just' a 75% improvement in memory bandwidth.
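
For reference, those rates fall straight out of the ROP/TMU counts and the core clock. My R9 390's numbers below are the stock spec; the Xbox figures are only what the rumoured spec implies (96 ROPs / 224 TMUs at 1725MHz), so treat them as provisional:

    # Fill rates: units x core clock (MHz) gives MPixels/s and MTexels/s
    def pixel_rate(rops, clock_mhz):
        return rops * clock_mhz

    def texture_rate(tmus, clock_mhz):
        return tmus * clock_mhz

    print(pixel_rate(64, 1000))     # R9 390: 64,000 MPixels/s
    print(texture_rate(160, 1000))  # R9 390: 160,000 MTexels/s
    print(pixel_rate(96, 1725))     # rumoured Series X: 165,600 MPixels/s
    print(texture_rate(224, 1725))  # rumoured Series X: 386,400 MTexels/s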

I think it's worth pointing out that the Xbox Series X GPU apparently has a higher pixel rate and memory bandwidth than even the ~£1,000 RTX 2080 Ti :rolleyes:

The pixel rate should certainly help it perform well at 4K resolution.

It's conceivable that an RDNA 2 graphics card variant could equal the RTX 2080 Ti on texture rate too (the Radeon VII already has 420.0 GTexel/s).
 