
NVIDIA 4000 Series

4 cores was overkill for gaming in 2008, I remember those days. I owned one of the first quad core CPUs and people told me I should have bought a dual core because they overclocked better
Overclocked CPUs didn't make much of a difference in gaming though, and that's still the case today!

GPUs, however…
 
@Purgatory

Again, see below :p

I didn't even mention Intel and I'm not disputing any of the points about Intel either, so I'm not sure why you went on a spiel about that thinking I'm defending them?

Also, we're talking purely about gaming here, not professional workloads. We've all agreed that the needs of someone with professional workloads, and their willingness to spend more, will be completely different from those of someone who mostly just wants to game (especially for CPU core count, which, no denying, is extremely important there).

4 cores was overkill for gaming in 2008, I remember those days. I owned one of the first quad core CPUs and people told me I should have bought a dual core because they overclocked better

Whilst on the topic of core count, I remember the same being said for 4 vs 6 cores when 6-core CPUs started to come out. The i7 980X was great but the benefit of the extra 2 cores wasn't there in "gaming", not to mention the price difference... The i5 750 was a beast and in some cases was better than the i7 980X. It took years for that difference to become noticeable, and by that time both CPUs were vastly outdated and crap compared to the new tech.

Overclocked CPUs didn't make much of a difference in gaming though, and that's still the case today!

GPUs, however…

Not entirely true, overclocking made a huge difference, especially back in the older days. It gave my i5 750 a huge boost and allowed the CPU to last a couple of years longer (the 0.1% and 1% low jump from OC'ing the CPU was massive). Sandy Bridge was also a killer for OC.

Agree it is less important these days though, more so on the current-gen Ryzens.
 
There is no way I can see these GPUs being 60-70 percent faster than the current gen. Nvidia hasn't made a jump like that for a very, very long time.
Nvidia haven't had much competition for a while. It appears that they now think they have some, so you can be sure they will push it as much as they can to win. Look how Intel responded to increased core counts, no-one expected that either. Everyone should be hoping AMD are very competitive this generation.
 
Overclocked CPUs didn't make much of a difference in gaming though, and that's still the case today!

GPUs, however…

I had a £50 2-core Intel 5300 overclocked from 2.5GHz to 3.8GHz. Huge difference in games. I replaced it with a Q6600 because it was everyone's fav CPU on this forum... which made no difference in any game that I could tell. Tbh the only things the overrated Q6600 was good at were video editing, sucking power and making noise.

Edit: In an effort to get back on topic, I'm going to be more interested in what efficiency gains the 4000 series has once you drag its power down to sensible levels. I might be interested in a 4090's rendering performance at, say, 275W in 2.5 years' time, second hand.
 
There is no way I can see these GPUs being 60-70 percent faster than the current gen. Nvidia hasn't made a jump like that for a very, very long time.

We will soon see, I guess. I can't see why they could not achieve 50-60% from the 3090 to the 4090. As I mentioned, they will be jumping to a much better manufacturing node and a new architecture, and based on rumours much higher power usage - all signs that this is possible.

We do not know if the architecture is a big leap or rubbish like Turing was.
 
No, because all Nvidia have to do is use 2GB memory ICs instead of the 1GB ones they currently use. Don't quote me on it, but I remember someone from AMD commenting that these days the 2GB ICs don't actually cost meaningfully more than the 1GB ones.

AMD's lacking RT performance is architectural; if I had the equipment I could turn my 8GB 2070S into a 16GB one by literally swapping the 1GB ICs for 2GB ones. I can't do anything about AMD's RT performance...
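
For anyone wondering why swapping the ICs doubles the capacity without touching anything else, the back-of-envelope maths is simple. A minimal sketch, assuming the commonly quoted figures of a 256-bit bus on the 2070S and 32-bit-wide GDDR6 packages (so only the per-package density changes):

```python
# Rough sketch of the IC-swap maths (assumed figures: 256-bit bus, 32-bit GDDR6 packages).
bus_width_bits = 256
bits_per_chip = 32
chip_count = bus_width_bits // bits_per_chip  # 8 packages either way

for density_gb in (1, 2):  # 1GB (8Gb) vs 2GB (16Gb) ICs
    print(f"{density_gb}GB ICs -> {chip_count * density_gb}GB total VRAM")
# 1GB ICs -> 8GB total VRAM
# 2GB ICs -> 16GB total VRAM
```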


Different types of memory though. GDDR6X costs more than GDDR6. 6X has twice the throughput of 6, though in actual performance terms it doesn't seem to have made any impact compared to good ol' 6. Never seen any quantifiable evidence that 6X does anything over 6, which is a shame. Dunno if devs didn't make use of it or it just doesn't equate to any difference. Will probably die the same way HBM did for AMD.
 
It's getting late so I've skimmed the above post, but "crippling 2080 to make the 3080 look better than it really is"... Can I have some of what you're smoking please :cry:
So are we just going to pretend like Nvidia didn't come on stage and tell everyone the 3080 is twice as fast as the 2080, and used Doom Eternal as one of the benchmarks to prove it? Or did you misread the post you quoted because you were too busy thinking up a joke about smoking weed?

But but but, the people who don't even own one have been insisting that there would be loads of games by now! :cry:
Nobody said that, maybe you need to lay off the ganja ;). People said that as time goes on, more games with higher VRAM requirements will come out. Stop acting like 2 years is a long time in the context of AAA game development.
Obviously when those games come out you will start saying that everyone who owns a 3080 should have upgraded to the 4000 series anyway, thereby proving what everyone said was right. It is an artificial limitation to get people to upgrade GPUs.
 
Different types of memory though. GDDR6X costs more than GDDR6. 6X has twice the throughput of 6, though in actual performance terms it doesn't seem to have made any impact compared to good ol' 6. Never seen any quantifiable evidence that 6X does anything over 6, which is a shame. Dunno if devs didn't make use of it or it just doesn't equate to any difference. Will probably die the same way HBM did for AMD.
Agreed. Nearly twice the throughput per pin but running at nearly half the rate, so the modules on the 3080 only ended up at 19Gbps compared to 16Gbps for GDDR6 - with higher cost and power draw to boot. It looks like GDDR6 is coming on nicely, with both cheaper 2GB chips and speeds of up to 24Gbps due next year or so. GDDR6X has also improved and will probably just about hold the performance lead, but I bet it still remains costly.
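
To put rough numbers on that: peak bandwidth is just the per-pin rate times the bus width, so on a 320-bit bus the gap between 19Gbps GDDR6X and 16Gbps GDDR6 is more modest than the naming suggests, while 24Gbps GDDR6 would leapfrog it. A quick sketch using the commonly quoted figures (illustrative only, not a datasheet):

```python
# Peak memory bandwidth in GB/s = per-pin data rate (Gbps) * bus width (bits) / 8.
def bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    return per_pin_gbps * bus_width_bits / 8

print(bandwidth_gb_s(19, 320))  # 3080-style 19Gbps GDDR6X, 320-bit -> 760.0 GB/s
print(bandwidth_gb_s(16, 320))  # 16Gbps GDDR6 on the same bus      -> 640.0 GB/s
print(bandwidth_gb_s(24, 320))  # future 24Gbps GDDR6 on that bus   -> 960.0 GB/s
```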
 
I think the argument that extra VRAM is wasted money goes out the window when everyone buying an RTX card is forced to buy RT cores which 95% of people won't be using.

I have yet to play a game that utilises ray tracing.

With VRAM it will be utilised at some point unless you only play a very narrow range of games and just keep the card short term, whilst with RT cores there is a big chance lots of people will never ever use them.

Would I swap an RTX 3080 10GB for a GTX 3080 12/16GB? 100%, any day of the week. Likewise I would take more GDDR6 over less GDDR6X. The grunt vs VRAM balance was too lopsided.

What I've noticed with Nvidia is that they release software innovations to sell hardware: they release variable refresh rate to sell G-Sync modules, AI scaling to sell tensor cores, RT to sell RT cores - they simply do everything the expensive way. Historically, on average every 2 gens they tend to under-spec VRAM and usually fix it the gen after. It's done often enough that I now believe it to be planned obsolescence. I generally stuck with them as AMD were always behind and had worse drivers, but I've noticed AMD driver complaints have now fallen off a cliff, and on rendering performance they have got very close. Hence my earlier comment about considering them for my next GPU purchase. All of this makes the 'wasted VRAM' argument look small in comparison.
 
So are we just going to pretend like Nvidia didn't come on stage and tell everyone the 3080 is twice as fast as the 2080, and used Doom Eternal as one of the benchmarks to prove it? Or did you misread the post you quoted because you were too busy thinking up a joke about smoking weed?


Nobody said that, maybe you need to lay off the ganja ;). People said that as time goes on, more games with higher VRAM requirements will come out. Stop acting like 2 years is a long time in the context of AAA game development.
Obviously when those games come out you will start saying that everyone who owns a 3080 should have upgraded to the 4000 series anyway, thereby proving what everyone said was right. It is an artificial limitation to get people to upgrade GPUs.
The VRAM debate has been hotly contested over one or two games. You can load up mods to make even a 3090 fall over. So yes, whilst over time games will use more VRAM, let's take FC6 as an example. On release it was widely reported that the textures were terrible - PS2-era, I think some game reviewers said. Then they issued a fix (called an HD texture pack) that required loads more VRAM and, aside from RT, it looked no different to FC5, which incidentally also had an HD texture pack. Cards with >10GB of VRAM were £850+, so in effect people are saying that to game you need to spend £850 on a card. Bye bye PC gaming, get a console.

You make the game for the cards people actually have, and most still play on 8GB cards, though enthusiast forums like this one will have more people with higher-end cards (they are enthusiasts, after all), so the picture here will seem skewed.

Load up any mod (which is usually a better version of the original) and you can run out of VRAM. The point is that vanilla games are often released in shockingly unplayable states. Claiming that buying a card with more VRAM is the way to go is madness. If the graphics were glaringly better then I could understand it; aside from RT, they ain't. Crysis was the first game not made for the cards of the day, as it made everything fall over - but the graphics were glaringly better.

To play the latest games you WILL get a better experience on the most expensive cards. If you wanna pay MSRP to be an alpha tester, and 2x the cost of a card to get the best experience but only 15% extra raster perf, then crack on. But it was more to do with supply than the games themselves.
 
I think the argument that extra VRAM is wasted money goes out the window when everyone buying an RTX card is forced to buy RT cores which 95% of people won't be using.

I have yet to play a game that utilises ray tracing.

With VRAM it will be utilised at some point unless you only play a very narrow range of games and just keep the card short term, whilst with RT cores there is a big chance lots of people will never ever use them.
DLSS is good though. :D
 
Once RT doesn't require upscaling to max out my monitor's refresh rate then I'll be interested. I can see that those who like single-player games and don't mind lower frame rates might want it on. DLSS etc. only exists because the cards aren't fast enough to run RT at native. I see it as more useful for the low end, where it's a choice of DLSS or not playing at a sensible framerate.
 
DLSS is good though. :D
I do think DLSS is a much more useful feature than RT, and ironically it makes the rasterization performance vs VRAM balance even more lopsided as DLSS reduces the need for pure grunt. DLSS doesn't need RT cores ;) it uses tensor cores. :)
The problem early on, of course, was that DLSS required a game to support it, so the tech was limited to specific new titles only (most likely AAA titles). DLDSR of course changed that and now the DLSS tech is properly useful. What they need, though, is for AI upscaling to work without needing to be baked into the game, so it works on every game.

Ideally, for me, Nvidia would have two variants of each card, priced the same: one with more VRAM and no RT cores, the other with RT cores but less VRAM. I expect the non-RT variant would sell more. Both would have tensor cores.

Interestingly, Nvidia did make a version of DLSS that doesn't use tensor cores; it shipped with an older version of Control.
 
Different types of memory though. GDDR6X costs more than GDDR6. 6X has twice the throughput of 6, though in actual performance terms it doesn't seem to have made any impact compared to good ol' 6. Never seen any quantifiable evidence that 6X does anything over 6, which is a shame. Dunno if devs didn't make use of it or it just doesn't equate to any difference. Will probably die the same way HBM did for AMD.
Higher memory bandwidth can help with performance. Gamers Nexus touched upon it in their 3080 review when comparing it to the 2080 Ti:


Good article explaining the difference too:


Will be interesting to see if DirectStorage does better with, say, GDDR6 vs GDDR6X.

So are we just going to pretend like Nvidia didn't come on stage and tell everyone the 3080 is twice as fast as the 2080, and used Doom Eternal as one of the benchmarks to prove it? Or did you misread the post you quoted because you were too busy thinking up a joke about smoking weed?


Nobody said that, maybe you need to lay off the ganja ;). People said that as time goes on, more games with higher VRAM requirements will come out. Stop acting like 2 years is a long time in the context of AAA game development.
Obviously when those games come out you will start saying that everyone who owns a 3080 should have upgraded to the 4000 series anyway, thereby proving what everyone said was right. It is an artificial limitation to get people to upgrade GPUs.

I can't remember the exact claims nor the "conditions" of said claims, but where am I disputing this?

pretend like Nvidia didn't come on stage and tell everyone the 3080 is twice as fast as the 2080

The bit I am disputing is this:

crippling 2080 to make the 3080 look better than it really is

Shock horror, brands might lie in their PR slides/footage to try and promote their new products :eek:

There are plenty of benchmarks out there for you to look at. Let me know what you think of the 3080 vs 2080 perf. when you have checked them out, especially in RT workloads, and again, don't forget the key point: price...

You're kidding me, right? Go and look in the "is 10GB enough" thread and you will see plenty of posts from the usual suspects making out like we would be seeing loads of games suffering because of a lack of VRAM, and even cherry-picking results to suit their narrative, i.e. HZD, Forza 5, RE Village, Halo, Godfall, Dishonored, Deathloop etc., all of which were "debunked".

As per my earlier post, do you then consider RDNA 2 RT performance to also be an "artificial limitation"?

And nope, I won't be. If people don't want to upgrade so soon to get a better experience "overall", I'll be telling them to reduce one setting, the same as we tell RDNA 2 users to reduce RT settings or turn them off entirely if they want more performance. I've stated why I am upgrading to a 40xx series card or RDNA 3 (if the RT perf is there), and that's because I need/want more "grunt" for 2 reasons:

- to get as close as possible to 175Hz/fps @ 3440x1440
- better RT performance; even an extra 25/30fps will be a sizeable jump as it'll bring, for example, DL 2 from 70-80fps to my ideal of 100/110+fps

My 3080 is just fine for my 4K gaming as my 4K TV is only 60Hz, so it's easy to hit a locked 60 with a 3080, especially with DLSS. Obviously if I had a 120+Hz 4K screen then I would also want more grunt there too.

Everyone will have different needs/wants and what they deem acceptable.

I think the argument that extra VRAM is wasted money goes out the window when everyone buying an RTX card is forced to buy RT cores which 95% of people won't be using.

I have yet to play a game that utilises ray tracing.

With VRAM it will be utilised at some point unless you only play a very narrow range of games and just keep the card short term, whilst with RT cores there is a big chance lots of people will never ever use them.

Would I swap an RTX 3080 10GB for a GTX 3080 12/16GB? 100%, any day of the week. Likewise I would take more GDDR6 over less GDDR6X. The grunt vs VRAM balance was too lopsided.

Yup, that's what we have all been saying: if you don't care for RT then obviously this won't be a factor when it comes to buying said hardware. At the time of my 3080 purchase I didn't care "that" much for RT nor DLSS tbf; it was only really CP2077 and Control which had worthy RT and really "needed" DLSS, but a 3080 FE worked out cheaper and easier than trying to get a 6800 XT at MSRP, not to mention the 3080 matches a 6800 XT in non-RT workloads anyway, so the RT/tensor cores + DLSS were just a bonus. Given that I ended up playing more RT titles than pure rasterization ones, along with DLSS, it worked out very well and was a much better buy than I was expecting. If I had picked up a 6800 XT, I personally would have regretted it massively and probably tried to switch over to a 3080 ASAP after seeing all the games I am interested in getting RT and DLSS.

The VRAM debate has been hotly contested over one or two games. You can load up mods to make even a 3090 fall over. So yes, whilst over time games will use more VRAM, let's take FC6 as an example. On release it was widely reported that the textures were terrible - PS2-era, I think some game reviewers said. Then they issued a fix (called an HD texture pack) that required loads more VRAM and, aside from RT, it looked no different to FC5, which incidentally also had an HD texture pack. Cards with >10GB of VRAM were £850+, so in effect people are saying that to game you need to spend £850 on a card. Bye bye PC gaming, get a console.

You make the game for the cards people actually have, and most still play on 8GB cards, though enthusiast forums like this one will have more people with higher-end cards (they are enthusiasts, after all), so the picture here will seem skewed.

Load up any mod (which is usually a better version of the original) and you can run out of VRAM. The point is that vanilla games are often released in shockingly unplayable states. Claiming that buying a card with more VRAM is the way to go is madness. If the graphics were glaringly better then I could understand it; aside from RT, they ain't. Crysis was the first game not made for the cards of the day, as it made everything fall over - but the graphics were glaringly better.

To play the latest games you WILL get a better experience on the most expensive cards. If you wanna pay MSRP to be an alpha tester, and 2x the cost of a card to get the best experience but only 15% extra raster perf, then crack on. But it was more to do with supply than the games themselves.

Insert the Joker meme about "everyone losing their minds" when it comes to potentially having to sacrifice one setting in a VRAM-heavy game at 4K without FSR, yet when it comes to sacrificing RT settings almost entirely (which makes a much bigger difference to visuals) across numerous games from day 1, no one complains ;)

I do think DLSS is a much more useful feature than RT, and ironically it makes the rasterization performance vs VRAM balance even more lopsided as DLSS reduces the need for pure grunt.
The problem early on, of course, was that DLSS required a game to support it, so it was limited to specific new titles only (most likely AAA titles). DLDSR of course changed that and now DLSS is properly useful.
DLSS/FSR also reduces VRAM usage.

Once RT doesn't require upscaling to max out my monitor's refresh rate then I'll be interested. I can see that those who like single-player games and don't mind lower frame rates might want it on. DLSS etc. only exists because the cards aren't fast enough to run RT at native. I see it as more useful for the low end, where it's a choice of DLSS or not playing at a sensible framerate.

There are rasterization titles where my 3080 isn't even fast enough for 3440x1440 at 175Hz. I use DLSS all the time as it produces better IQ overall compared to native/TAA, especially where temporal stability and motion clarity are concerned.
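
As a rough illustration of why DLSS frees up so much grunt at 3440x1440 (the per-axis scale factors below are the commonly cited DLSS 2.x preset values, so treat them as approximate):

```python
# Approximate internal render resolutions for DLSS presets at a 3440x1440 output.
presets = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}  # per-axis scale factors
target_w, target_h = 3440, 1440

for name, scale in presets.items():
    w, h = round(target_w * scale), round(target_h * scale)
    saved = 1 - (w * h) / (target_w * target_h)
    print(f"{name}: renders at {w}x{h}, ~{saved:.0%} fewer pixels shaded")
# Quality: renders at 2294x960, ~56% fewer pixels shaded
# Balanced: renders at 1995x835, ~66% fewer pixels shaded
# Performance: renders at 1720x720, ~75% fewer pixels shaded
```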
 
I do think DLSS is a much more useful feature than RT, and ironically it makes the rasterization performance vs VRAM balance even more lopsided as DLSS reduces the need for pure grunt. DLSS doesn't need RT cores ;) it uses tensor cores. :)
The problem early on, of course, was that DLSS required a game to support it, so the tech was limited to specific new titles only (most likely AAA titles). DLDSR of course changed that and now the DLSS tech is properly useful. What they need, though, is for AI upscaling to work without needing to be baked into the game, so it works on every game.

Ideally, for me, Nvidia would have two variants of each card, priced the same: one with more VRAM and no RT cores, the other with RT cores but less VRAM. I expect the non-RT variant would sell more. Both would have tensor cores.

Interestingly, Nvidia did make a version of DLSS that doesn't use tensor cores; it shipped with an older version of Control.
DLSS is IMO one of the most important developments of the last decade; the performance benefit in supported games is similar to moving to a new generation of GPUs!

If only Skyrim had DLSS. :(
 
I had a £50 2-core Intel 5300 overclocked from 2.5GHz to 3.8GHz. Huge difference in games. I replaced it with a Q6600 because it was everyone's fav CPU on this forum... which made no difference in any game that I could tell. Tbh the only things the overrated Q6600 was good at were video editing, sucking power and making noise.
You wash your mouth out, young man! The Q6600 was a beast! I only got rid of mine on my main rig 2 years ago - was still doing a fair job. Was definitely starting to creak though :)
 
DLSS is IMO one of the most important developments of the last decade; the performance benefit in supported games is similar to moving to a new generation of GPUs!

If only Skyrim had DLSS. :(
Yep, this is why I think they need to get it baked into the driver. It might happen: in the past, other Nvidia techs originally required per-game support but they then figured out how to do them driver-side (MFAA, for example), although it would probably be limited to DX12 games.

However, AMD's version will apparently work on Nvidia hardware, so we might eventually see that working in older games via custom drivers and mods.
 
4 cores was overkill for gaming in 2008, I remember those days. I owned one of the first quad core CPUs and people told me I should have bought a dual core because they overclocked better


Nah, it was those i7s that you needed to play BF4 because the alpha ran better on them, so everyone ran out and bought i7s and there was a shortage. It ran fine on an i5 at release.

The emphasis on early game performance is funny, particularly these days when even on release they aren't even alpha quality. BF 2042 is still awful; they took folks' money and never made a game of it. The free BF3 mod is way better than 2042. At least you can shoot people in it.
 
The bit I am disputing is this:


Shock horror, brands might lie in their PR slides/footage to try and promote their new products :eek:
So you dispute his statement that the 2080 was crippled by its 8GB of VRAM in Doom Eternal, and that Nvidia knew this and used it to try and sell everyone on the 3080 being twice as fast as the 2080? Okay.

You're kidding me, right? Go and look in the "is 10GB enough" thread and you will see plenty of posts from the usual suspects making out like we would be seeing loads of games suffering because of a lack of VRAM, and even cherry-picking results to suit their narrative, i.e. HZD, Forza 5, RE Village, Halo, Godfall, Dishonored, Deathloop etc., all of which were "debunked".
I don't remember anyone giving a timeline, but it could have been after I checked out of the thread.

However, I do remember some of your examples. For example, the HZD issue was about someone going from a 5700 XT to a 6800 XT. I don't remember anyone using it as an example of the 3080 VRAM issue, but it was used to raise the question of game engines automatically trying to account for VRAM issues by lowering textures, even if in this instance it was a bug of the system downgrading the textures on the wrong object.
Godfall did use more than 10GB when you activated all the bells and whistles including the AMD bolt-ons. Did this change?
Regarding the Deathloop problem, I didn't pay close attention, but I remember in that thread it was like watching you have an argument with yourself because you kept bringing up points that people were not talking about. TBF I could have just missed those points.

I don't know about the rest of them, but something tells me you've stretched the truth like Nvidia's marketing did when talking about the 3080.

As per my earlier post, do you then consider RDNA 2 RT performance to also be an "artificial limitation"?
Now you are being disingenuous. Or do you really believe that a feature that needs to be designed and baked into the GPU core was intentionally designed to make them look bad in benchmarks (and potentially damage sales) because it would be more important for selling a product 2 years down the line? Wait, hold on, is this you secretly admitting that RT didn't actually matter back in 2020? :eek:
 