
The RTX 3080 TI isn't going to be- EDIT - No We Were All Wrong!

Associate
Joined
27 Jun 2009
Posts
256
A +20% 4K performance improvement, low-impact RTX, cooler, quieter & cheaper; say £899-949 for the FE, £849-899 base. I think nVidia fans would be happy.

But if they push the price higher on relatively low gains I could see some, myself included, waiting to see what AMD have to offer.

I'm still disappointed by nVidia after the total lack of support for SLI with the 20 series.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
One of the main differences I notice between the Turing (TU102) and Ampere (A100) architectures is the transistor density, which has improved from 24.7M/mm² to 65.6M/mm², an increase of 165.6%.

Unfortunately, it doesn't look like AMD has been able to make similar improvements from RDNA v1 to RDNA v2. On the plus side though, they're nowhere near reaching the ~800 mm² die size limit, unlike NV.

Since the Xbox series X RDNA 2 GPU has a TDP of 200w, AMD could probably produce a 300w desktop RDNA 2 gpu, with 50% more transistors, with a corresponding 50% larger die.

So that would be 22.1 billion transistors, with a die size of ~540 mm².

That would be more than twice the transistor count of Navi 10 (10.3 billion). That won't mean 2x the performance though.

Ampere (A100) has more than 2x the transistors of Turing (2080 Ti), but the perf. improvement is more like 50% in benchmarks (302 pts vs 446 pts in Octanebench). So I think a 50% improvement in performance from RDNA v1 to v2 is entirely possible.
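A quick back-of-the-envelope check of the scaling above, using only the figures quoted in this thread (the 14.75 billion transistor / ~360 mm² Series X numbers are Techpowerup estimates, not official):

```python
# Scale the estimated Series X GPU up by 50%, as suggested above.
xbox_transistors_bn = 14.75     # Techpowerup estimate, billions
xbox_die_mm2 = 360              # approximate Series X GPU die size

big_navi_transistors = xbox_transistors_bn * 1.5
big_navi_die = xbox_die_mm2 * 1.5

navi10_transistors_bn = 10.3
ratio_vs_navi10 = big_navi_transistors / navi10_transistors_bn

# Ampere vs Turing in Octanebench: more transistors != proportional performance
octane_gain = 446 / 302 - 1

print(big_navi_transistors)             # 22.125 -> ~22.1 billion
print(big_navi_die)                     # 540.0 mm²
print(round(ratio_vs_navi10, 2))        # ~2.15x Navi 10
print(round(octane_gain * 100))         # ~48% faster
```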

The RRP of the RX 5700 XT was 400 dollars at launch, so if the high-spec RDNA 2 GPU performs 50% better, it could cost 600 dollars, which is about £460... I'd expect prices of £500 in the UK (because reasons), especially if they are initially in short supply.
 
Last edited:
Soldato
Joined
20 Apr 2004
Posts
4,365
Location
Oxford
The RRP of the RX 5700 XT was 400 dollars at launch, so if the high-spec RDNA 2 GPU performs 50% better, it could cost 600 dollars, which is about £460... I'd expect prices of £500 in the UK (because reasons), especially if they are initially in short supply.

Just the USD-to-GBP conversion alone would make it ~£500; add 20% VAT and that's £600, and with the other usual UK rip-off markup on top you are looking at closer to £650.
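Roughly, that price chain looks like this (the exchange rate here is an assumption for illustration, not a quoted figure):

```python
# Rough UK pricing chain (exchange rate assumed for illustration)
usd_price = 600
usd_to_gbp = 0.83                       # assumed rate, not a quoted figure
pre_tax_gbp = usd_price * usd_to_gbp    # ~£500
with_vat = pre_tax_gbp * 1.20           # +20% VAT -> ~£600
print(round(pre_tax_gbp), round(with_vat))  # 498 598
```

Add retailer margin and the usual UK markup on top of that and ~£650 isn't a stretch.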
 
Soldato
Joined
20 Apr 2004
Posts
4,365
Location
Oxford
One of the main differences I notice between the Turing (TU102) and Ampere (A100) architectures is the transistor density, which has improved from 24.7M/mm² to 65.6M/mm², an increase of 165.6%.

Yeah, going from 12nm (which is an enhanced 16nm node) to 7/8nm will do that.

Unfortunately, it doesn't look like AMD has been able to make similar improvements from RDNA v1 to RDNA v2. On the plus side though, they're no where near reaching the ~800 mm² die size limit, unlike NV.

Both are on pretty much the same node, so improvements in density won't be that great.

Since the Xbox series X RDNA 2 GPU has a TDP of 200w, AMD could probably produce a 300w desktop RDNA 2 gpu, with 50% more transistors, with a corresponding 50% larger die.

Well, we don't have the TDP of the Xbox SoC, just some specs, and not enough data to make a call on its TDP. So any estimate for a desktop RDNA2 GPU is a massive guess at best.

So that would be 22.1 billion transistors, with a die size of ~540 mm².

Based on what? Rumoured specs, which have been wrong in the past, and the assumption that shaders/cache/memory controllers will be the same size? Even if it were just a scaled-up RDNA1, it doesn't scale up like that.

That would be more than twice the transistor count of Navi 10 (10.3 billion). That won't mean 2x the performance though.

Again, based on what? All we know for sure is that there'll be +50% performance per watt, and that's from AMD; always take vendor figures with a pinch of salt. Everything else is guessing at this point.

Ampere (a100) has more than 2x the transistors of Turing (2080 Ti), but the perf. improvement is more like 50% in benchmarks (302 pts vs 446 pts in Octanebench). So I think a 50% improvement in performance from RDNA v1 to v2 is entirely possible.

  • Closer to 3 times the transistors (18.6 vs 54.2)
  • One synthetic benchmark, with no sources, of a high-end compute product doesn't give the full view of things. What about gaming, different types of compute loads, RTRT etc.?
  • It's possible, but unlikely without a lot of compromises. RDNA2 will be faster than RDNA1, but it will need a massive uptick in clocks and shaders to do so, sucking up even more power. The PPW difference between Vega and Navi/RDNA1 was quite small: only about 7% when comparing the 7nm VII and the 7nm Navi, and not much more vs the 14nm.

So you and a lot of peeps think there'll be a massive uptick in overall performance and reduced power consumption on the same node with a similar architecture, when AMD couldn't manage those sorts of gains with a brand "new" architecture and a new node.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
The TDP, die size and transistor count of the Xbox series X GPU are all from Techpowerup specs:

https://www.techpowerup.com/gpu-specs/xbox-series-x-gpu.c3482

The official specs from Microsoft are also listed here:

https://news.xbox.com/en-us/2020/03/16/xbox-series-x-tech/

So, the only thing we know for sure is the die size. I think the transistor count and TDP on Techpowerup are just rough estimates; I simply added 50% to the figures listed there.

Still, I think a doubling of transistors from Navi 10 to RDNA 2 wouldn't be that far-fetched.

If the Xbox series X GPU TDP is actually more than 200w, that could limit how much of a performance increase we will see from a desktop RDNA 2 GPU.
 
Last edited:
Soldato
Joined
20 Apr 2004
Posts
4,365
Location
Oxford
The TDP, die size and transistor count of the Xbox series X GPU are all from Techpowerup specs:

https://www.techpowerup.com/gpu-specs/xbox-series-x-gpu.c3482

The official specs from Microsoft are also listed here:

https://news.xbox.com/en-us/2020/03/16/xbox-series-x-tech/

So, the only thing we know for sure is the die size, I think the transistor count and TDP on Techpowerup are just rough estimates, I simply added 50% to the figures listed there.

Still, I think a doubling of transistors from Navi10 to RDNA 2 wouldn't be that far fetched.


I'll give you that on the die size, didn't know the official SoC specs listed it.

Still, the TL;DR of my post is that there have been LOTS of stretches when it comes to the power/performance/spec hopes and dreams of big Navi/RDNA2 from a lot of people, fuelled by rumour YouTubers throwing mud at the wall and seeing what sticks. Especially given what information we do have, and AMD's track record from the Fury X right up to Navi/RDNA1.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
tbh, if you believe AMD, an RDNA v2 GPU with the same TDP (225 W) as the 5700 XT would have +50% performance.

I think +50% performance even with a higher TDP of 300w is more likely, because I don't think AMD can do it without doubling the transistor count from Navi 10.

I think I know how the transistor count for the Xbox series X GPU was estimated, on Techpowerup.

The die size of the Xbox series X GPU is 43.2% larger than the Radeon 5700 XT.

143.2% of 10.3 billion transistors is 14.749 billion transistors, rounded up = 14.75 billion transistors.

Seems quite a reasonable estimate to me. I'm not sure how the 200w TDP was arrived at though.
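That estimate can be reproduced in a couple of lines, using the ~43.2% die-area difference quoted above:

```python
# Scale Navi 10's transistor count by the die-area ratio
navi10_transistors_bn = 10.3
area_ratio = 1.432   # Series X GPU die is ~43.2% larger than Navi 10
estimate = navi10_transistors_bn * area_ratio
print(round(estimate, 2))   # 14.75 -> matches Techpowerup's figure
```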

RDNA v2 will apparently use an improved 7nm fabrication process, which could be either N7P or N7+. It will probably be N7P rather than the N7+ EUV process, because AMD removed "7nm+" from their 2020 marketing slides; now it's just "7nm".

Link here: https://www.anandtech.com/show/1558...7nm-7nm-for-future-products-euv-not-specified

Perhaps that could lead to improvements in power consumption?
 
Last edited:
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
The other thing about the Xbox Series X GPU is that it has 30% more Compute Units than the 5700 XT (40 vs 52). It also has about 25% greater processing power (9.75 TFLOPS vs 12.15 TFLOPS). So, that should result in at least +25% performance vs the 5700 XT. We don't know the other specs, like ROPs, shading units etc., for sure.

I don't think it would be difficult to gain another 25% performance increase for the desktop RDNA 2 GPU if the TDP is 200w (plus or minus 10%). The TDP of the Xbox One GPU was 95w, and the TDP of the Xbox One X GPU was 150w, so another 50w increase would be in line with previous generations. Also, more power always means more heat (more expensive cooling needed, or an increase in console RMAs!), so I doubt they would go for a massive increase in TDP.

The 7nm 5700 XT (RDNA v1) has 23.7% lower power consumption (TDP) than the 7nm Radeon VII (Vega 20), for almost the same level of performance (225w vs 295w). The Vega 64 has a TDP of 295w also, and performs about 17% worse than the 5700 XT.

Perhaps we will see a similar reduction in energy usage for RDNA v2?
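For reference, the efficiency gap quoted above works out like this:

```python
# TDP comparison: RX 5700 XT (RDNA v1) vs Radeon VII (Vega 20)
rdna1_tdp = 225
vega20_tdp = 295
saving = 1 - rdna1_tdp / vega20_tdp
print(round(saving * 100, 1))   # 23.7 (% lower TDP for similar performance)
```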
 
Last edited:
Soldato
Joined
21 Jul 2005
Posts
20,061
Location
Officially least sunny location -Ronskistats
The 7nm 5700 XT (RDNA v1) has 23.7% lower power consumption (TDP) than the 7nm Radeon VII (Vega 20), for almost the same level of performance (225w vs 295w). The Vega 64 has a TDP of 295w also, and performs about 17% worse than the 5700 XT.

Perhaps we will see a similar reduction in energy usage for RDNA v2?

Correct. I think it's only going to be 300w if AMD push it to compete beyond its intended price point (like the 5700 XT). It is likely then to be more than 50% better, so let's see what materialises.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
Yeah, maybe it will be more like +50% extra performance compared to the Radeon VII, with a similar memory bandwidth due to GDDR6 memory + improved memory bus.
 
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
The Radeon VII also has 50% more shaders than the 5700 XT. So, a 5700 XT-like GPU with the same number of shaders (3840) and a corresponding performance increase must be possible with a TDP of less than 295w. So, more than 3840 shaders could be possible.

The obvious move for AMD is to create a more power efficient and powerful Radeon VII, with less expensive hardware.
 
Last edited:
Soldato
Joined
6 Feb 2019
Posts
17,600
The other thing about the Xbox Series X GPU is that it has 30 % more Compute Units than the 5700 XT (40 vs 52). It also has about 25% greater processing power (9.75TFs vs 12.15TFs). So, that should result in at least +25% performance vs the 5700 XT. We don't know the other specs for sure like ROPs, Shading units etc.

I don't think it would be difficult to gain another 25% performance increase for the desktop RDNA 2 GPU if the TDP is 200w (plus or minus 10%). The TDP of the Xbox One GPU was 95w, the TDP of the Xbox One X GPU was 150w, another 50w increase would be in line with previous generations. Also, more power always means more heat (more expensive cooling needed, or an increase in console RMAs!), so doubt they would go for a massive increase in TDP.

The 7nm 5700 XT (RDNA v1) has 23.7% lower power consumption (TDP) than the 7nm Radeon VII (Vega 20), for almost the same level of performance (225w vs 295w). The Vega 64 has a TDP of 295w also, and performs about 17% worse than the 5700 XT.

Perhaps we will see a similar reduction in energy usage for RDNA v2?

The Xbox Series X, which uses a 52 CU RDNA2 GPU clocked at 1800MHz, has a 200w TDP. In total the Series X uses a 350w PSU, so the remaining 150w is shared between the 4800HS CPU, memory, SSD etc.

Pretty impressive considering the Series X is on par with an RTX 2080, which has a 225w TDP. For a long time AMD was multiple generations behind Nvidia in efficiency, but they have finally caught up and slightly exceeded it.

While Nvidia's top-end GPU this September will have a 350w TDP, I don't think AMD's 82 CU big Navi will go over 300w.
 
Last edited:
Soldato
OP
Joined
30 Jun 2019
Posts
7,875
If the Xbox series X GPU really has a TDP of 200w, then that is very promising as RDNA 2 should scale up very well. Do you have a link that proves that? I've seen a few websites suggest a total TDP of 350w for the console.

Assuming a TDP of 200w is true, I think up to 22.1 billion transistors (a 50% increase in transistor count and die size vs the Xbox Series X GPU) for an approx. 300w desktop RDNA 2 GPU is entirely possible (yup, I'm repeating myself :rolleyes:).

If AMD can double the transistor count of Navi 10 (10.3 billion transistors), I'd expect a minimum of a 50% performance increase vs the 5700 XT, putting it ahead of the RTX 2080 Ti.

It could be potentially even more than that, but it's virtually impossible to predict the exact shader count at this point for the top end RDNA 2 GPU.

Although I like to guess, so: if you increase the shader count of the console GPU by 50%, that would be 4992 shader units. I think that's too high, so I think it will fall in a range between 4096 (the Vega 64 amount) and 4992 shaders.
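For what it's worth, that shader-count guesswork is just this (assuming the usual 64 shaders per CU):

```python
# Speculative shader counts, assuming 64 shaders per CU
series_x_shaders = 52 * 64               # 3328
scaled_up = int(series_x_shaders * 1.5)  # +50% -> 4992
vega64_shaders = 4096                    # lower bound of the guessed range
print(series_x_shaders, scaled_up)       # 3328 4992
```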

Another thing that occurs to me is that AMD's statement of +50% performance per watt vs RDNA v1 can't be true, simply because the Xbox Series X GPU does not perform 50% better than the 5700 XT (with its 225w TDP), even if you scale the console GPU's power consumption and performance up a bit, from 200w to 225w (assuming an optimistic 12.5% performance increase). Here's what you would actually get:

Texture rate: 12.4% increase
Pixel rate: 34.7% increase
TFLOPS: 40.1% increase
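Checking the TFLOPS line above (the 12.5% scale factor is the optimistic 200w-to-225w assumption from this post; the texture and pixel rates depend on clock/ROP figures not listed here):

```python
# Scale the console GPU's TFLOPS by the assumed +12.5%, compare to the 5700 XT
rx5700xt_tf = 9.75
series_x_tf = 12.15
scaled_tf = series_x_tf * 1.125
increase = scaled_tf / rx5700xt_tf - 1
print(round(increase * 100))   # ~40%, close to the 40.1% quoted above
```

All three figures come in below AMD's claimed +50% per watt, which is the point being made.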
 
Last edited:
Soldato
Joined
26 Jan 2004
Posts
6,277
Location
Scotland
Anyone have any more information regarding the 3080 Ti, as I'm holding out for one with itchy fingers lol
I hope it comes out before Cyberpunk gets released :/
 
Soldato
Joined
6 Feb 2019
Posts
17,600
Anyone have any more information regarding the 3080TI as im holding out for one with itchy fingers lol
i hope it comes out before CyberPunk gets released :/

I heard it has multiple HDMI 2.1 ports, so you can get 4K 120Hz with G-Sync on an LG OLED TV. Best TVs by the way: unmatched colour, contrast, responsiveness and pixel refresh. Does anyone else have an OLED and is excited about HDMI 2.1?
 
Associate
Joined
26 Mar 2016
Posts
150
Anyone have any more information regarding the 3080TI as im holding out for one with itchy fingers lol
i hope it comes out before CyberPunk gets released :/

Everything points to a release (or at least an announcement) in September, with availability in October at the latest. All other speculation out there is coming from two Twitter accounts and it's impossible to know how good their info is. According to them there is no 3080 Ti, but a 3090 with 5248 shaders, 24 GB RAM, 50% faster than a 2080 Ti in Time Spy Extreme, and a 350W TDP. Take it with a grain of salt.
 