
NVIDIA 4000 Series

Second-hand prices for the 2080 Ti were lower than a new 3070. The only issue I'd have been careful about when buying a second-hand 2080 Ti is the 'space invaders' VRAM artifacting issue they had, but from what I remember any decent AIB card didn't suffer from it; it was the FE models that had the failures back then. A second-hand 2080 Ti was the better buy, especially if it came with a warranty, and then you had people silly enough to let them go for peanuts.
The 3070 forced those second-hand 2080 Ti prices down; otherwise it wouldn't have been that cheap even used. Like a potential 5070 selling for $500 while offering 4090-level performance. :)
 
I wonder if even faster NVMe drives (PCIe 5.0) might do a better job with games like this.

Especially if they ever get around to supporting DirectStorage 1.1, which is still very new.

PCIe 5.0 x4 slots max out at about 15.7 GB/s of throughput, which does seem like a hell of a lot.

Presumably the console version streams in lower-quality textures, so the already impressive console NVMe drives handle this game comfortably?

I'm just wondering when it might be worth getting an NVMe drive for games...

The finalised PCIe 6.0 specification was released in January 2022, so I suppose we'll see even faster NVMe drives in 2-3 years' time, potentially up to around 30 GB/s.
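For anyone curious where those figures come from, here's a quick back-of-the-envelope check (my own arithmetic, not from the thread): 32 GT/s per lane with 128b/130b encoding for PCIe 5.0, and 64 GT/s per lane for PCIe 6.0 with FLIT/FEC overhead ignored, so the 6.0 number is a ceiling.

```python
# Rough PCIe x4 throughput arithmetic behind the 15.7 GB/s and ~30 GB/s figures above.
# Assumes 32 GT/s per lane with 128b/130b encoding for PCIe 5.0, and 64 GT/s per lane
# for PCIe 6.0 (FLIT/FEC overhead ignored, so the 6.0 number is an upper bound).

def pcie_throughput_gb_s(gt_per_s: float, lanes: int, encoding_efficiency: float) -> float:
    """Approximate usable throughput of a PCIe link in GB/s."""
    return gt_per_s * encoding_efficiency / 8 * lanes  # Gbit/s per lane -> GB/s, times lanes

print(f"PCIe 5.0 x4: {pcie_throughput_gb_s(32, 4, 128 / 130):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 6.0 x4: {pcie_throughput_gb_s(64, 4, 1.0):.1f} GB/s")        # ~32 GB/s ceiling
```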

Maybe someone could check Task Manager to see if there's a lot of drive activity when playing Hogwarts Legacy?
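If anyone would rather log it than eyeball Task Manager, here's a minimal sketch using Python's psutil (assuming it's installed) that samples system-wide disk throughput once a second, which is roughly what the Task Manager drive graph shows:

```python
# Minimal sketch: log system-wide disk read/write throughput once per second,
# roughly what Task Manager's drive activity graph shows. Requires psutil
# (pip install psutil); run it while the game is loading or streaming, Ctrl+C to stop.
import time
import psutil

prev = psutil.disk_io_counters()
while True:
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1_000_000
    write_mb = (cur.write_bytes - prev.write_bytes) / 1_000_000
    print(f"read: {read_mb:7.1f} MB/s   write: {write_mb:7.1f} MB/s")
    prev = cur
```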
I think NVMe by itself, without DirectStorage, creates CPU bottlenecks.

Run CrystalDiskMark on your NVMe drive and watch the CPU usage. It's an eye opener.
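A crude stand-in for that benchmark-and-watch experiment, if anyone wants to try it without CrystalDiskMark: sequentially read a large file while sampling overall CPU usage with psutil. The file path is a placeholder, not anything from the thread; point it at any multi-GB file on the NVMe drive.

```python
# Crude stand-in for "run a disk benchmark and watch the CPU": sequentially read a
# large file as fast as Python allows while sampling overall CPU usage via psutil.
import time
import psutil

TEST_FILE = r"D:\testfile.bin"   # placeholder path, assumed to be a multi-GB file
CHUNK = 4 * 1024 * 1024          # 4 MiB sequential reads

psutil.cpu_percent(interval=None)            # prime the CPU counter
start = time.perf_counter()
total = 0
with open(TEST_FILE, "rb", buffering=0) as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start

print(f"read {total / 1e9:.1f} GB in {elapsed:.1f} s "
      f"({total / 1e9 / elapsed:.2f} GB/s), "
      f"CPU over the run: {psutil.cpu_percent(interval=None):.0f}%")
```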

Now imagine playing a CPU-demanding game that suddenly streams in textures while you're playing (as modern games mostly do now), so the CPU suddenly has to spend cycles loading those textures at NVMe speeds, and we might understand why stutters are happening. NVMe has effectively moved the bottleneck from storage to the CPU's I/O path.

The proper version of DirectStorage, 1.1 if I'm right, bypasses the CPU for this, and on supported games it should kill the problem like it's now killed on consoles (hopefully).

On the DirectStorage thread someone showed the high CPU usage with the 1.0 version.
 
GDDR has higher latency because GPUs need bandwidth, while system RAM has lower latency and less throughput, as it's optimised for the kinds of tasks a CPU does. I think the first Ryzens had a performance problem due to memory latency, which obviously translated into games as well.

They're probably happy with the performance loss because they gain something in other areas, but a console is designed to do one very specific task (gaming), while a PC does a lot more.

Moreover, that unified memory puts everything in an SoC, so it lacks the modularity needed and welcomed in the PC world. By the way, we already have faster storage drives thanks to that...

There is nothing wrong with the concept of PC hardware; the problem is the price. Xbox and PS4 make money through games, so their interest is in selling the hardware at the lowest price possible, while AMD and NVIDIA go for stupidly high margins and prices and make zero from gaming itself (online streaming sh**t not included).

Nothing is stopping game devs from properly optimising their POS software other than "high margins" (as in, invest as few resources as possible while making the best profit possible, **** quality!). Look at Just Cause, Arma or even Star Citizen, which have no VRAM-related problems streaming stuff.

Personally, I'm not gonna buy into and support these practices. At best I'll throw a fiver at it in a sale.

They still have huge patch downloads, poor performance, and image-quality issues that you're stuck with (dropping the resolution to 720p without DLSS is probably quite bad), and so on. They were also scarce and scalped, so I wouldn't count that as a clear win. It works OK for a niche, while some aspects are marketed as better rather than actually being significantly better. :)
If I decided not to buy games that have problems, I would have to quit gaming; most games I play have some kind of optimisation issue.

As much as the utopia of devs doing a better job is the dream, the reality is it won't happen. Looking at the wider picture of software development, most changes now happen to make developers' lives easier. The problem is that VRAM in the current system has no modularity anyway; if you want more of it, you've got to buy an entirely new GPU. System RAM is effectively being phased out for gaming now.

What do you think happens when two sides keep pointing fingers at each other, saying "it's your fault", with neither deciding to grasp the problem and address it themselves? Nothing gets fixed. Devs won't care as they still get paid; hardware companies won't care as the problem sells hardware.

But I get your point: if implemented, the solution would likely not be modular, so it wouldn't be much of a solution unless the board that integrates the VRAM is cheaper to replace than a GPU.
 
Oof, so much jelly here...
It takes a lot of effort to build it up, you've gotta work at it.

I like how NVIDIA only release the super-duper, maxed-out versions of GPU dies long after the actual launch of the series.

You can get away with this when you've been the market leader for such a long time.

They are trying very hard not to release cards at launch for $2,000; if they release them later, they'll always get less s*** from the press and customers. It just becomes the new normal.

I have a question about the RTX 4080: if someone has the money and wishes to buy one, why buy any model other than the RTX 4080 FE, which appears to be readily available due to the high price?

Some of the AIB cards are priced £300-£400 higher than the FE for basically the same product.

Apparently they are cutting the price of the FE model, presumably because they don't want it to remain readily available (and undercut the AIB models, if we accept the notion that FE models exist purely for promotional purposes and to boost the reputation of their products).
 
I like how NVIDIA only release the super-duper, maxed-out versions of GPU dies long after the actual launch of the series.

You can get away with this when you've been the market leader for such a long time.

They are trying very hard not to release cards at launch for $2,000; if they release them later, they'll always get less s*** from the press and customers. It just becomes the new normal.

I rarely have cause to defend Nvidia these days, but in this case there is at least a tenuous reason why they release mildly cut-down versions first: when you start producing a large chip on a new node, yields can be less than perfect, so they can build up stocks of fully enabled chips while selling the imperfect dies, and even then they often release the full die as a professional product first.

All chip makers do this because, in a way, they have to.
 
Last edited:
The RX 6900 XT and 7900 XTX were the fully fledged versions of Navi 21 and Navi 31, and were available within a month of the initial desktop series launch.
https://www.techpowerup.com/gpu-specs/radeon-rx-6900-xt.c3481
https://www.techpowerup.com/gpu-specs/radeon-rx-7900-xtx.c3941

No smoke and mirrors here.

EDIT: The same was true for RDNA1/Navi 10 too:
https://www.techpowerup.com/gpu-specs/radeon-rx-5700-xt.c3339

You get what you get...

The 6950 XT was a refresh, wasn't it?
 
I have a question about the RTX 4080: if someone has the money and wishes to buy one, why buy any model other than the RTX 4080 FE, which appears to be readily available due to the high price?

Some of the AIB cards are priced £300-£400 higher than the FE for basically the same product.

Apparently they are cutting the price of the FE model, presumably because they don't want it to remain readily available (and undercut the AIB models, if we accept the notion that FE models exist purely for promotional purposes and to boost the reputation of their products).

I went for an FE because, as you say, the AIB cards cost more and don't really offer anything extra. The cooler on the FE is really well put together, unlike years ago when the FE coolers were ****.
 
I think NVMe by itself, without DirectStorage, creates CPU bottlenecks.

Run CrystalDiskMark on your NVMe drive and watch the CPU usage. It's an eye opener.

Now imagine playing a CPU-demanding game that suddenly streams in textures while you're playing (as modern games mostly do now), so the CPU suddenly has to spend cycles loading those textures at NVMe speeds, and we might understand why stutters are happening. NVMe has effectively moved the bottleneck from storage to the CPU's I/O path.

The proper version of DirectStorage, 1.1 if I'm right, bypasses the CPU for this, and on supported games it should kill the problem like it's now killed on consoles (hopefully).

On the DirectStorage thread someone showed the high CPU usage with the 1.0 version.
CPU bottlenecks are usually due to less-than-ideal thread optimisation, even under Vulkan/DX12. There is still headroom for such jobs, as you won't need to stream at full speed all the time and the CPU isn't fully loaded.
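To illustrate the threading point in a toy way (a sketch only, not how this game or any real engine does it): blocking file reads on the main loop stall the frame, while handing them to a worker thread lets the loop keep ticking and collect the data when it's ready. The file name is a placeholder.

```python
# Toy sketch of off-main-thread streaming: the "game loop" keeps ticking while a
# worker thread does the blocking file read, and the loop picks the data up later.
import time
from concurrent.futures import ThreadPoolExecutor

def load_asset(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

executor = ThreadPoolExecutor(max_workers=2)                  # background I/O workers
pending = executor.submit(load_asset, "big_texture.bin")      # placeholder file name

for frame in range(600):                  # pretend: 10 seconds at 60 fps
    if pending is not None and pending.done():
        texture = pending.result()        # collect the loaded bytes without blocking
        pending = None
    # ... the rest of the frame's simulation/render work would go here ...
    time.sleep(1 / 60)
```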

If I decided not to buy games that have problems, I would have to quit gaming; most games I play have some kind of optimisation issue.

As much as the utopia of devs doing a better job is the dream, the reality is it won't happen. Looking at the wider picture of software development, most changes now happen to make developers' lives easier. The problem is that VRAM in the current system has no modularity anyway; if you want more of it, you've got to buy an entirely new GPU. System RAM is effectively being phased out for gaming now.

What do you think happens when two sides keep pointing fingers at each other, saying "it's your fault", with neither deciding to grasp the problem and address it themselves? Nothing gets fixed. Devs won't care as they still get paid; hardware companies won't care as the problem sells hardware.

But I get your point: if implemented, the solution would likely not be modular, so it wouldn't be much of a solution unless the board that integrates the VRAM is cheaper to replace than a GPU.
It's much cheaper to replace the GPU alone (or any other component) than the whole package: CPU, GPU, RAM, motherboard.
 