NVIDIA 4000 Series

Soldato
Joined
15 Oct 2019
Posts
11,656
Location
UK
Interesting that the 4080 has a dedicated 103 die now rather than using the 104 die that the xx80 class used prior to the 3080.
Seems like AMD won't be catching Nvidia on the back foot this time. The difference in SM count between the 102 and 103 is huge, though. Unless there is a significant reduction in clock speed, that is going to suck energy, but it isn't really translating into performance: 60% more SMs yet only 26% faster, based on whatever those performance numbers are supposed to represent.

Based on the 256-bit and 128-bit buses of the 4080 and 4070 respectively, it seems like Nvidia will have their own version of Infinity Cache. I wonder if people will go around saying these cards are being held back by their memory bandwidth and that Nvidia should just use a bigger bus ;).
Maybe the 4080 will be clocked much higher than the 4090 to compensate for having fewer SMs; the rumoured TDP difference between them is only 30W, which, if true, would back this up.

The 4080 may escape this time, but I'm sure there will be a '10GB is not enough' thread for the 4070.
 
Soldato
Joined
6 Feb 2019
Posts
17,464
Mock all you like, but if it's such a big thing (or not), why was there no thread back when the 1080 Ti was current (AFAIR) about 11GB being overkill?
Surely, with the 4080 rumoured to be around three times faster than the 1080 Ti, 6GB would have been plenty back then and any more a total waste of money!

We already had games asking for 10GB in the 1080 Ti days; Shadow of Mordor with the 4K texture pack would use 10GB of VRAM on my 1080 Ti.

It feels like games haven't really needed more VRAM than that since then, even though graphics have improved, so the only logical answer is that compression has improved.

And as we've found with CPUs, the more cache you have, the less you have to hit system RAM, so if we can give GPUs enough cache you won't need much VRAM at all.
 
Soldato
Joined
28 Oct 2009
Posts
5,260
Location
Earth
So I just did the maths.
The 4070 with a 160-bit bus means 5 or 10GB of VRAM.
The 4080 with a 256-bit bus means 8 or 16GB of VRAM.

You seriously think 5GB or 8GB? I can't see that; they will have something similar to what AMD has with Infinity Cache, which will increase performance even with a narrower bus.
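
For anyone wanting to sanity-check the quoted maths: each GDDR6/GDDR6X chip sits on its own 32-bit channel, and chips currently ship in 1GB or 2GB densities, so the bus width pins down the possible capacities. A minimal sketch of the arithmetic, assuming the rumoured bus widths are right:

```python
# Bus width -> possible VRAM capacities, assuming one GDDR6/6X chip per
# 32-bit channel and the common 1GB (8Gb) or 2GB (16Gb) chip densities.
def vram_options(bus_width_bits: int) -> list[int]:
    chips = bus_width_bits // 32                    # one chip per 32-bit channel
    return [chips * density for density in (1, 2)]  # totals in GB

for card, bus in (("4070", 160), ("4080", 256)):
    low, high = vram_options(bus)
    print(f"{card}: {bus}-bit bus -> {low}GB or {high}GB")
# 4070: 160-bit bus -> 5GB or 10GB
# 4080: 256-bit bus -> 8GB or 16GB
```

(Clamshell mode can double these again by putting two chips on each channel, which is how the 3090 got 24GB on a 384-bit bus.)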
 
Soldato
Joined
12 May 2014
Posts
5,225
You plan on going Nvidia this time Chuky? Or just getting warmed up for the '10/12GB is not enough' thread? :p :cry: ;)
I'm getting warmed up for the thread :D.

You seriously think 5GB or 8GB? I can't see that; they will have something similar to what AMD has with Infinity Cache, which will increase performance even with a narrower bus.
I only listed both so that people can see the options available. I don't think Nvidia will put 5 or 8GB in those cards.
 
Associate
Joined
31 Dec 2010
Posts
2,427
Location
Sussex
I think 10-12GB for the 4070
Well, if the 160-bit bus rumour is correct, it can only be 10GB.
Looks like Nvidia is going to go heavy on not-Infinity-Cache (they might have to come up with their own marketing name for it), so narrow buses. And hence the demand for 2GB (16Gbit) GDDR chips is going to be insane.
 
Associate
Joined
26 Jun 2015
Posts
668
Wow, very surprising considering that it's such an advanced engine and a perfect example of next-gen gaming for upcoming GPUs.
Actually, WoW has an incredibly comprehensive options menu for graphics and effects that puts most next-gen games' settings menus to shame.

[screenshots: WoW's graphics and effects settings menus]

What you are really judging is art style, which is fair, but WoW's engine is pretty advanced now.
lol imagine thinking world of boring craft will outsell Elden Ring, yeah right
Just because you don't like WoW doesn't mean it's not still a monster in the industry. As someone who quoted you pointed out, Shadowlands, probably WoW's most ill-received expansion, still outsold Elden Ring on PC, if only slightly, and that's WoW at its worst. WoW expansions have historically been among the best-selling PC releases ever; they make most PC launches look like niche titles. Elden Ring and perhaps a few others are outliers, but WoW is consistent. So yes, a WoW expansion release is a major event for the PC platform.
Well yeah... if you add barely any ray tracing then of course performance on AMD will be fine :cry:
Ray-traced shadows in WoW are actually well implemented; they work very well for the type of game it is. It's done a better job of them than most of the niche games you play, which still don't implement them properly.
 
Associate
Joined
29 Jan 2015
Posts
357
What if AMD come out with double the VRAM and the 4080 comes with 16GB... bet you we'll have an 'is 16GB enough' thread :cry:

They won't.

The only real question is whether AMD are going to make the 7800XT a cut-down N31 die with 20GB of VRAM or use the top N32 chip with 16GB of VRAM.
 
Soldato
Joined
6 Aug 2009
Posts
7,070
They won't.

The only real question is whether AMD are going to make the 7800XT a cut-down N31 die with 20GB of VRAM or use the top N32 chip with 16GB of VRAM.
I'd hope they're going to give us a reason to buy it, but I suspect they'll position it so the choice is either a 7700XT or a jump up to the 7900XT.
 
Associate
Joined
15 Sep 2009
Posts
1,414
Location
London
I'm very suspicious of these rumours:


I hope (for consumers' sake) they prove to be true, *but* the specs for the '4070Ti' show a card with 3072 fewer CUDA cores, a 192-bit bus (vs. 384-bit) and 50W less power consumption matching a 3090Ti? Ada is going to have higher clocks and a generational advantage, but even so I'd be amazed if any of these rumours match the reality at launch. Going to be fun to find out though (although this gen I'm actually more curious to see what RDNA3 brings to the table).
 
Associate
Joined
15 Sep 2009
Posts
1,414
Location
London
Look at the previous generation as an example:

RTX 2080 Ti = 1545 MHz boost, 4352 CUDA cores, 544 Tensor Cores, 68 RT Cores, 352-bit bus, 250W TDP.
RTX 3070 = 1725 MHz boost, 5888 CUDA cores, 184 Tensor Cores, 46 RT Cores, 256-bit bus, 220W TDP.

So with a new process, 35% more CUDA cores, ~66% fewer Tensor Cores and ~32% fewer RT Cores, plus a 180 MHz bump to boost clocks, on a narrower bus at a lower TDP (also double the L1 cache and 1.5MB less L2), nVidia made a card that in most tests matches the 2080 Ti.
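
The percentage deltas quoted above are easy to verify; a quick throwaway check, using the spec figures as listed in this post:

```python
# Generational deltas between the two cards, from the figures above.
specs = {
    "RTX 2080 Ti": {"boost MHz": 1545, "CUDA": 4352, "Tensor": 544, "RT": 68},
    "RTX 3070":    {"boost MHz": 1725, "CUDA": 5888, "Tensor": 184, "RT": 46},
}

old, new = specs["RTX 2080 Ti"], specs["RTX 3070"]
for key in old:
    print(f"{key}: {(new[key] - old[key]) / old[key] * 100:+.0f}%")
# boost MHz: +12%
# CUDA: +35%
# Tensor: -66%
# RT: -32%
```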

So this '4070Ti' has 30% more CUDA cores than the 3070 (though the 3090Ti still has 40% more, and it's a slightly smaller jump than the 35% the 3070 gained over the 2080 Ti), a 192-bit bus instead of the 3070's 256-bit (or the 3090Ti's 384-bit), and GDDR6X instead of the 3070's GDDR6.

Unless they're using a magic foundry, this looks more like a 3080 -> 4070Ti to me (still fewer CUDA cores and a narrower bus than a 3080, though). Still, they are supposedly pumping an extra 180W into it, so I guess that's gotta do something, right?
 
Associate
Joined
15 Sep 2009
Posts
1,414
Location
London
The 2070->3070 makes an interesting comparison too as a generational jump.

The 3070 is ~50% faster with 2.5x the number of CUDA cores (5888 vs. 2304), the same max clock and bus bandwidth, and TDPs of 175W for the 2070 vs. 220W for the 3070.
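
As a back-of-the-envelope illustration of how imperfectly raw core counts translate into frames, here's the same comparison in numbers (using the figures above and the ~50% uplift stated; bear in mind Ampere also changed how FP32 'cores' are counted, which inflates the ratio):

```python
# How much of the raw CUDA-core increase showed up as performance?
cuda_2070, cuda_3070 = 2304, 5888
perf_uplift = 1.50                    # 3070 ~50% faster than the 2070, as above

core_ratio = cuda_3070 / cuda_2070    # ~2.56x the cores
per_core = perf_uplift / core_ratio   # ~0.59x the performance per core
print(f"cores: {core_ratio:.2f}x, perf/core: {per_core:.2f}x")
```

Per-'core' throughput roughly halved, which lines up with Ampere counting both FP32 datapaths in each SM partition as separate CUDA cores.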
 