
The RTX 3080 TI isn't going to be- EDIT - No We Were All Wrong!

I do think this round will be much, much closer than the last few years though. AMD cannot release another half-baked high-end card, and Nvidia cannot allow them to take the top spot on the tables, so the rumours of wild TDPs from both sides could well be, er... semi-accurate :p
 
This architecture doesn't scale linearly with TDP like AMD fans claim Navi 2 does.

A 300W TDP will not cost as many TFLOPS as you think it does, OP.
 
I wouldn't be surprised if this was in the ballpark, considering there was that leaked benchmark of a GPU at around 18 TF, which would be about right. Plus, Nvidia are more interested in pushing prices these days over performance. Also, going onto Samsung 8nm is not going to be as big a jump as TSMC 7nm.

Nvidia certainly doesn't have any competition for top-end graphics cards, except their own last gen... I think this means they can 'get away with' another incremental upgrade over the last gen.

Is that still a rumour, or will it definitely be 8nm rather than 7nm? Seems odd considering the Ampere Tesla GPU was built on 7nm (TSMC, according to the specs). If true, maybe they are struggling with yields / production?

I suspect this info is incorrect though; AMD managed to mass produce 7nm GPUs in 2019, but with TSMC rather than Samsung.
 
Yeah, I'm sure you will see some high-clock custom models with decent cooling (very expensive). The GTX 1080 TI had some impressive overclocked models...

I'm talking about the base model though; there will probably be an improvement in clock speed, but not a drastic one.

If you have the cooling, like a custom water loop, then all you need is a firmware flash and you get the higher clocks, as long as the ASIC quality is good enough (70%+).
 
Not really hard to work out the max performance of the RTX 3080 TI, considering the 400W TDP of the already-released Ampere A100 SXM4. Performance should be around 25% lower if the TDP is 300W (assuming roughly linear scaling with TDP), so that would be approximately:

Pixel rate:
169.2 GPixel/s

Texture Rate:
456.8 GTexel/s

14.6 TFLOPS

A100 SXM4 Specs:
https://www.techpowerup.com/gpu-specs/a100-sxm4.c3506

If we assume a similar boost clock to the RTX 2080 TI (1545 MHz), the figures would be:

Pixel rate:
120 ROPs x 1545 MHz = 185.4 GPixel/s

Texture Rate:
324 TMUs x 1545 MHz = 500.5 GTexel/s

5184 Shading Units x 1545 MHz x 2 = 16.018 TFLOPS (maybe 17 TFLOPS if the GPU clock ends up higher than the RTX 2080 TI's)

So, if that ends up being correct, the raw compute performance (TFLOPS) of the RTX 3080 TI will only be about 19% higher than the RTX 2080 TI, and about 31% higher than the Xbox Series X GPU.
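The arithmetic above can be sketched in a few lines of Python. The ROP/TMU/shader counts and the 1545 MHz clock are the speculative numbers from this post, not confirmed specs:

```python
# Back-of-the-envelope GPU throughput from unit counts and clock speed.

def gpixels(rops, clock_mhz):
    """Pixel fill rate in GPixel/s: ROPs x clock (GHz)."""
    return rops * clock_mhz / 1000

def gtexels(tmus, clock_mhz):
    """Texture fill rate in GTexel/s: TMUs x clock (GHz)."""
    return tmus * clock_mhz / 1000

def tflops(shaders, clock_mhz):
    """FP32 TFLOPS: shaders x 2 ops per cycle (FMA) x clock (GHz)."""
    return shaders * 2 * clock_mhz / 1e6

clock = 1545  # assumed boost clock, MHz
print(gpixels(120, clock))   # 185.4 GPixel/s
print(gtexels(324, clock))   # ~500.6 GTexel/s
print(tflops(5184, clock))   # ~16.0 TFLOPS
```

Swap in a higher clock (e.g. 1860 MHz) to see how far the numbers move.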

The rumours of the RTX 3080 TI being a 20/21 TF GPU are almost certainly complete crap; Nvidia couldn't reach that with their ultra-expensive Tesla / Ampere based GPU. It's not that surprising really: the Titan RTX and previous Tesla GPU were only 13-18% faster than the RTX 2080 TI overall, link here: https://www.techpowerup.com/gpu-specs/titan-rtx.c3311


EDIT - The PCIe version of the Ampere A100 has a lower TDP of 250W, but under a "sustained load will provide 10 to 50% lower performance than SXM4 based variant".
https://videocardz.com/newz/nvidia-announces-a100-pcie-accelerator

Let me know if you agree / disagree and why
I disagree on clock speed. Even without a hidden water chiller I think it will be running at 2000 MHz.
 
> I disagree on clock speed. Even without a hidden water chiller I think it will be running at 2000 MHz.

The stock 2080 Ti has a boost clock of 1540 MHz or thereabouts. We've already seen Ampere cards show up with 1750 MHz boost clocks, so yeah, it's clocked higher than Turing. A 1750 MHz boost clock means it runs at 2000 to 2100 MHz under load, assuming good temps, which = 21 TFLOPS before overclocking. Add DLSS and you get game performance as if it had 30 TFLOPS.
 
Sustained 2200 MHz out of the box on the reference model/FE. Clocks have consistently gone up for the last three generations; that's not going backwards, and wtf is 1545 MHz anyway? Pascal and Turing didn't run at that speed; it's like AMD CPU speeds in reverse. Nvidia cards have consistently and very easily exceeded their box base and boost clocks in my experience, and I expect that trend to continue, so real-world speeds will be 1900-2100 MHz I'd guess.
 
> Sustained 2200 MHz out of the box on the reference model/FE. Clocks have consistently gone up for the last three generations; that's not going backwards, and wtf is 1545 MHz anyway? Pascal and Turing didn't run at that speed; it's like AMD CPU speeds in reverse. Nvidia cards have consistently and very easily exceeded their box base and boost clocks in my experience, and I expect that trend to continue, so real-world speeds will be 1900-2100 MHz I'd guess.

1545 MHz is the official 2080 Ti non-A (300) GPU clock; it's reserved for the lowest-quality 2080 Ti silicon. The EVGA Black 2080 Ti has this spec; it's about the cheapest one you can buy. The reference Nvidia 2080 Ti is technically overclocked out of the box.

When you put these cards under load, GPU Boost overclocks them even further if temps and power allow. That's why you can buy a 2080 Ti that says 1650 MHz boost clock but runs at 2000 MHz when you play a game, and that's also why trying to calculate TFLOPS using the boost clock is almost pointless: it gives you the bare minimum TFLOPS. JayzTwoCents tested this; you have to remove the heatsink and point a hair dryer at the GPU to make a 2080 Ti run at its rated spec.
 
> 1545 MHz is the official 2080 Ti non-A (300) GPU clock; it's reserved for the lowest-quality 2080 Ti silicon. The EVGA Black 2080 Ti has this spec; it's about the cheapest one you can buy. The reference Nvidia 2080 Ti is technically overclocked out of the box.
It may well be 'technically overclocked', but they all do it with plenty of room to spare afaik. Which means the box specs are ultra conservative and don't represent real-world performance, unless you're in a super small form factor case in Dubai, in an airing cupboard, without air con, in the middle of summer.
 
> It may well be 'technically overclocked', but they all do it with plenty of room to spare afaik. Which means the box specs are ultra conservative and don't represent real-world performance, unless you're in a super small form factor case in Dubai, in an airing cupboard, without air con, in the middle of summer.

I know; like I said, the specs you find on the box / TechPowerUp database are almost impossible to achieve. You have to blow hot air from a hair dryer onto the GPU to get a clock speed that low.
 
The problem is that you pay a lot more for just a few hundred MHz: the MSI RTX 2080 Ti @ 1755 MHz boost clock goes for ~£1,295 at the moment, but the Zotac GeForce RTX 2080 Ti @ 1545 MHz boost clock goes for ~£1,050. I can't see a reason why it would be different for the RTX 3080 TI; there's lots of money to be made out of 10-20% overclocked models.

But like I said, the base model will probably have a slightly higher boost clock than the RTX 2080 TI.

I hope they do a POS edition, I'd be tempted :D

BTW, the Ampere Wikipedia page says the fabrication process is 7nm (TSMC), so I'm going with that for now. Link:
https://en.wikipedia.org/wiki/Ampere_(microarchitecture)#cite_note-verge-A100-1
 
> The problem is that you pay a lot more for just a few hundred MHz: the MSI RTX 2080 Ti @ 1755 MHz boost clock goes for ~£1,295 at the moment, but the Zotac GeForce RTX 2080 Ti @ 1545 MHz boost clock goes for ~£1,050. I can't see a reason why it would be different for the RTX 3080 TI; there's lots of money to be made out of 10-20% overclocked models.
>
> But like I said, the base model will probably have a slightly higher boost clock than the RTX 2080 TI.
>
> I hope they do a POS edition, I'd be tempted :D
>
> BTW, the Ampere Wikipedia page says the fabrication process is 7nm (TSMC), so I'm going with that for now. Link:
> https://en.wikipedia.org/wiki/Ampere_(microarchitecture)#cite_note-verge-A100-1
They all operate at more or less the same speed. What it says on the box vs the reality is pretty consistent: they all boost to somewhere between 1900 and 2000 MHz, assuming a well-ventilated case. The 'overclocked' models simply have a higher number on the box and a slightly better cooler, but the difference in effective boost clocks doesn't vary remotely in proportion to the price variation. Manufacturers make their margins from people like you who imagine there's a wide variation that doesn't exist in practice.
 
When are we going to hit a regression in clock speeds? I thought it wasn't a given that smaller processes would clock higher, and that it could actually be worse with the higher density of transistors?
 
The speed of light in a vacuum = 299,792,458 metres per second, but it's easier to round it to 300,000,000 m/s.
At 5 GHz, how far does light travel per clock cycle?
300 million divided by 5 billion = 0.06 m, or 6 cm, per clock cycle (thanks Participant).
Electrical signals through silicon travel at about half that speed.
Now you begin to see the problem with clock speeds: light, or electrons, aren't fast enough.
The problem is resistance, which is why cooling with liquid nitrogen or liquid helium allows for even higher clocks.
Even with a supercooled, near-absolute-zero CPU you still have the fundamental law of nature above as an absolute limit. Quantum computing research is where all the action is, because silicon reached its clock speed limit around 15 years ago.
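That back-of-the-envelope number is easy to verify with a quick sketch. The 0.5c figure for signal speed in silicon is the rough estimate from this post, not a measured constant:

```python
# Distance covered by a signal in one clock period.
C = 299_792_458  # speed of light in a vacuum, m/s

def distance_per_cycle(freq_hz, speed=C):
    """Metres travelled in one clock cycle at the given signal speed."""
    return speed / freq_hz

light = distance_per_cycle(5e9)          # light at 5 GHz: ~0.06 m (6 cm)
silicon = distance_per_cycle(5e9, C / 2) # ~0.5c in silicon: ~3 cm
print(f"light: {light * 100:.1f} cm, silicon: {silicon * 100:.1f} cm")
```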

edit: oh yeah, we're talking about GPUs... :) I'll forward that to Nvidia and AMD, they'll release a patch and we can all enjoy double performance. Thank me later.
 
Well, there's no doubt that the GPU models with better coolers + higher-quality silicon will OC higher.

Even if we use the super duper RTX 2080 TI Founders Edition seen here:
https://www.tomshardware.com/uk/reviews/nvidia-geforce-rtx-2080-ti-founders-edition,5805-11.html

they still only got 1860 MHz (sustained) in a closed case. The same OC on an RTX 3080 TI would get you to 19 TFLOPS (assuming 5120 shader units). So it could be anywhere between 16-19 TFLOPS depending on GPU clocks, but I really think this is scraping the barrel; you will be paying well over £1,000 for the privilege of a higher-end model.
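As a quick sanity check on that 19 TFLOPS figure (the 5120 shader count is an assumption, and the x2 is the usual doubling for FMA instructions):

```python
# FP32 TFLOPS = shaders x 2 ops per cycle (FMA) x clock in MHz / 1e6.
shaders = 5120    # assumed RTX 3080 TI shader count (rumour, not a spec)
clock_mhz = 1860  # sustained clock from the 2080 TI FE review linked above
tf = shaders * 2 * clock_mhz / 1e6
print(round(tf, 2))  # 19.05
```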

It's hard to gauge the quality and max clocks of the cheapest models; for some reason the reviewers don't like doing it!

The GPU clock also affects the total pixel rate and texture rate, so it absolutely does matter. Arguably these two measurements are more important than the teraflop stat we see thrown around in marketing slides.

Keyser, that stuff about the speed of electrons / light when transferring information through a material is very interesting. I would've thought light / lasers would speed things up if it were even possible; wouldn't it be a good idea to try that first before trying to build some illogical, hugely complex quantum computing hardware?

EDIT - they have already built experimental processors, made out of glass (chalcogenides), that transfer information via light:
link: https://www.sciencealert.com/comput...icity-will-apparently-be-here-within-10-years

But, what is the airspeed velocity of an unladen swallow?
 
> You mean 6cm per clock cycle :/
>
> The rest of your post is mainly BS too because it ignores IPC.
In what way do I ignore Instructions Per Cycle? All instructions boil down to a voltage through a tiny circuit; the length of that circuit is fundamentally limited by the quality of the silicon, the process node (nm), and finally by the speed of electrons through silicon.

Edit: I should probably say the circuit length and clock speed (GHz) are linked, so the circuits can be longer as long as the clock speed goes down.
 
It's worth noting that the boost clock of the RTX 2080 TI was very similar to the previous generation's TITAN X Pascal (1545 MHz vs 1531 MHz). Based on this, I'd estimate that the base model RTX 3080 TI will have a very similar boost clock to the current-gen Titan RTX (1770 MHz), just to guarantee an overall advantage in benchmarks.

Another thing to take into account is that Nvidia will want to leave a significant performance gap between the RTX 3080 TI and the Ampere-based Titan, just like the Titan RTX did with the current generation. So, the Ampere-based Titan model will probably get an extra couple of hundred MHz of boost clock and around 256 extra shaders, just like the current gen. Maybe, if production yields are good, it will be called the RTX 3090 instead? I also think that if any card goes over a 300W TDP, it would be this one, not the 3080 TI.
 