NVIDIA 4000 Series

This was really interesting; it shows that 4090s in general are essentially not thermally or current limited. Going from stock to a 3GHz+ core OC yields some gains, but not a huge difference. It also explains why, even on the quiet-mode VBIOS switch on my Trinity, an 80-81°C core temp made no impact on the boost clock, which always stayed at 2700MHz or above.

Makes you wonder what Nvidia can do with a 4090 Ti in terms of clocks; they'd have to add more shader units instead to make it a meaningful bump over a 4090. Or maybe they won't bother with a 4090 Ti at all and will just beef up the 4080 into a Ti, which would still fall quite a bit below a 4090 and let the 4090 stay at its price point.

I think the 4090 is memory limited. OCing the core nets me about a 2-2.5% increase in performance; the memory gets me 9-10%.
 
OCing just the VRAM saw my Cyberpunk benchmark drop by 1-3fps, I found. Could be variance of course, but it showed me that even a 1GHz VRAM OC didn't do anything on my card. The fps is so high anyway that even 10fps doesn't mean much on these cards.
 
I got a 9% performance bump from stock all the way to +1900.
 
If performance goes down with a VRAM overclock, it means your overclock is unstable. Remember, GDDR6X has ECC enabled, so the VRAM will correct errors to prevent crashing, and that error correction lowers framerate.
 
I got a 9% performance bump from stock all the way to +1900.
Likewise - I get a nice bump from VRAM OC'ing - tested it extensively with Superposition.

Here's my OC in Cyberpunk (Suprim X):

[Attached screenshot: Untitled.png]
 
ECC is off by default though:

[Screenshot: LJ1ry5X.png]

I will re-run the bench with the VRAM OCd and check, as I only did the one run before; I suspect it was just a variance thing.
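If anyone wants to sanity-check what the driver actually reports for ECC from the command line rather than a GUI, here's a minimal sketch (assumes Python and that nvidia-smi is on the PATH; it just prints the ECC section of the standard query):

Code:
import subprocess

# Print only the ECC section of the nvidia-smi query output
# ("-q -d ECC" restricts the report to ECC mode and error counters).
print(subprocess.run(["nvidia-smi", "-q", "-d", "ECC"],
                     capture_output=True, text=True).stdout)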
 
Reran Cyberpunk, this time with frame gen off to rule out FG faffery. Path tracing still on.


Code:
Cyberpunk2077.exe benchmark completed, 4321 frames rendered in 62.422 s
GPU Stock
                     Average framerate  :   69.2 FPS
                     Minimum framerate  :   53.7 FPS
                     Maximum framerate  :   83.4 FPS
                     1% low framerate   :   51.0 FPS
                     0.1% low framerate :   45.7 FPS

Cyberpunk2077.exe benchmark completed, 4505 frames rendered in 63.796 s
GPU VRAM +1100
                     Average framerate  :   70.6 FPS
                     Minimum framerate  :   62.1 FPS
                     Maximum framerate  :   82.8 FPS
                     1% low framerate   :   53.7 FPS
                     0.1% low framerate :   46.3 FPS

I've since found out that my VRAM crashes Cyberpunk when set to +1200, but +1100 is fine. I think the lower minimum fps on the stock run was just a split-second hitch, since the other numbers are basically the same.
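For anyone curious, here's a quick Python sketch of the percentage deltas between those two runs (figures copied straight from the output above):

Code:
# Percentage deltas between the stock and VRAM +1100 runs above.
stock = {"avg": 69.2, "min": 53.7, "1% low": 51.0, "0.1% low": 45.7}
vram  = {"avg": 70.6, "min": 62.1, "1% low": 53.7, "0.1% low": 46.3}

for key in stock:
    delta = (vram[key] - stock[key]) / stock[key] * 100
    print(f"{key:>8}: {stock[key]:5.1f} -> {vram[key]:5.1f} FPS ({delta:+.1f}%)")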
 
I've since found out that my VRAM crashes Cyberpunk when set to +1200, but +1100 is fine.
Sadly you got some mediocre RAM chips; mine crash at +2000. On the other hand, my core is pretty horrible: 2970MHz is the absolute limit and then kaboom, it crashes.
 
Saying it's the memory correcting errors is perhaps a bit misleading. Having said that, I'm not sure whether unstable overclocks on the memory are handled in the same way as on the GPU itself.

From what I understand, the reason you can see lower scores with an unstable OC is that the GPU detects errors/issues and pulls back on the clock speed for a microsecond, so you end up with the clock speed bouncing up and down faster than most software that polls these things can detect. The same could be happening with the memory, but I don't know enough about that to say one way or the other.
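To illustrate the polling-rate point, here's a rough Python sketch (assumes nvidia-smi is on the PATH) that samples the reported SM clock for 30 seconds while a benchmark runs; even sampled as fast as this, microsecond-scale dips will still fall between readings, which is exactly the problem:

Code:
import subprocess, time, statistics

# Sample the reported SM clock repeatedly for ~30 seconds and summarise
# the spread. nvidia-smi itself only refreshes so quickly, so very brief
# clock pull-backs from error handling can still slip between samples.
samples = []
end = time.time() + 30
while time.time() < end:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.sm", "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    ).stdout.strip()
    samples.append(int(out))

print(f"{len(samples)} samples | min {min(samples)} MHz | "
      f"max {max(samples)} MHz | mean {statistics.mean(samples):.0f} MHz")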
 
Well, the 4090 is now 6 months old. That means it's out of date, subpar garbage that needs replacing ASAP, but seeing as Nvidia aren't launching the 5000 series until Q3 next year, we're stuck with garbage.
 
Is the 288GB/s bus really going to be good enough?

Seems like it's going to fail hard at some games/resolutions.
Well, if they get this to work in time...

According to the paper, neural textures can be rendered in real time with up to 16x more texels than the BC approach. The cost of a 4K render is 1.15ms, which is higher than 0.49ms (measured on an RTX 4090). More information will be presented at SIGGRAPH 2023 on August 6.

Or maybe it will arrive just "in time" with the 5xxx series...
 
Nvidia is so anti-VRAM, they're willing to come up with new texture compression algorithms.

It does look kinda cool: you can have a texture that looks closer to native 4K but only takes up the space in VRAM that a native 1080p texture would. The main issue seems to be the processing time, since it runs on the TMU: it takes the GPU 3 times longer to render these compressed textures.
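To put rough numbers on the VRAM side of that, here's a small sketch of the arithmetic (the texture sizes are purely illustrative; BC7 stores 1 byte per texel, and the 16x-texels figure is the one quoted from the paper above):

Code:
# Rough illustration of the quoted "up to 16x more texels" claim.
# BC7 block compression stores 128 bits per 4x4 block = 1 byte per texel.
bc7_bytes_per_texel = 1

low_res  = 1024            # illustrative low-res texture side length
high_res = low_res * 4     # 4x per axis = 16x the texels

low_texels  = low_res ** 2
high_texels = high_res ** 2

print(f"texel ratio: {high_texels // low_texels}x")
print(f"BC7 footprint {low_res}x{low_res}:  {low_texels * bc7_bytes_per_texel / 2**20:.1f} MiB")
print(f"BC7 footprint {high_res}x{high_res}: {high_texels * bc7_bytes_per_texel / 2**20:.1f} MiB")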
 
Nvidia is so anti-VRAM, they're willing to come up with new texture compression algorithms.

They are like Apple, wanting to skimp on hardware as much as possible to increase margins.
 