
NVIDIA 4000 Series

Tempted to switch to a 13600K from my 12700K then. A drop in P cores, but overall for gaming it looks like it may be the stronger CPU (and possibly clocks higher).

Definitely clocks higher - though I'd stretch to the i7 if you can, as some games still want those 8 P cores. The 4090 will thank you for it!

Something special about being able to do a drop-in CPU upgrade on the same motherboard as well, something I've not done since the Athlon XP days :D
 
Question - what is the purpose of the frame cap? Surely anything over the refresh rate of the monitor is superfluous anyway?
Input latency. It only matters in certain games; in solo RPGs it makes no difference.

Say you want 3-4 ms of frame latency: then you are looking at 250-333 FPS, so the game draws a frame quickly enough for your input latency to match.
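The arithmetic behind those numbers is just frame time = 1000 / FPS; a quick sketch (the specific FPS values below are only examples):

```python
# Frame time in milliseconds is 1000 / FPS, so a frame cap puts a floor
# on how quickly a newly rendered frame can reflect your input.
def frame_time_ms(fps: float) -> float:
    """Time taken to render one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (60, 144, 250, 333):
    print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms per frame")
```

So 250 FPS works out to 4 ms per frame and 333 FPS to roughly 3 ms, which is where the 3-4 ms figure comes from.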
 
But the hardest hit will be mainstream and entry-level gamers, because the RTX 4050 and RTX 4060 will be based on relatively worse dGPUs. As a result, the generational increase (compared to the RTX 4090) is most likely to be smaller, and the dGPUs will cost much more too.
Totally this. My Dad needs a UEFI / Windows 11 graphics card, and you still get terrible value for money right now. We're still coming down from the mining / Covid issues.
 

Also, these companies are all going on about RT this and RT that, but the RT performance in entry-level and mainstream mass-market dGPUs is not great. With a global recession happening, rising energy costs, rising food costs and rising interest rates, these companies are trying to charge more when they should be trying to charge less. Instead they think that, after losing billions of USD in revenue, they need to be jacking up pricing to increase margins. They deserve more revenue losses, because they care more about the accountants/shareholders than their customers. But it is also the fault of PCMR, who defended the general dGPU and CPU price increases and then "had" to pay over the odds for dGPUs and CPUs because of FOMO, which added to the miners who did so. PCMR forgot what happened last time with Turing, when gamers "had" to pay more for dGPUs during the second mining boom. Hence Nvidia thought Pascal was too cheap.

The only way for prices to go down is for people not to pay these elevated prices. But knowing how well that worked with microtransactions in games, gamers will eventually cave in because of FOMO.
 

Absolutely spot on.
 

TBF, that is largely down to the developers. We have seen that when RT is done from the ground up, as in Metro Exodus Enhanced Edition, it is perfectly acceptable on entry-level dGPUs (I highly doubt anyone buying an entry-level dGPU will be targeting 4K; more likely 1080p) and even on current-gen consoles, as confirmed by developers. The big performance hit we usually see from "added on" RT is largely because of the way games are coded for a hybrid model of raster + RT.

To see proper adoption and implementation of RT, developers first need to drop support for the old-gen consoles, which would also mean dropping support for any dGPU before Turing. Obviously that is still a good couple/few years away, so for now we need brute force and upscaling to tackle the current RT methods we are accustomed to.
 

But getting more RT effects is still dependent on large generational jumps. Nvidia quietly slotted in an AD103 this generation, so the AD103 is really a 104-series dGPU and the AD104 is more a 106-series dGPU. Everything from the sub-300mm² die size to the memory bus and the shader count relative to the top die indicates the RTX 4080 12GB is more an RTX 3060/RTX 3060 Ti successor. Instead we will probably get the equivalent of a massively cut-down AD104, or even a 107-series type die, for an RTX 4060. So there will be a jump in RT performance, but nowhere near as big as if each tier had been a straight replacement with equivalent chips.
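The "shaders relative to the top die" point can be sanity-checked with a quick ratio calculation. A rough sketch; the shader counts below are approximate public figures, not from the post, and may be off:

```python
# Compare each card's shader count to the full flagship die of its
# generation (full GA102 for Ampere, full AD102 for Ada), to see which
# tier the RTX 4080 12GB (AD104) actually lands in.
CARDS = {
    # name: (shaders, flagship-die shaders)
    "RTX 3060 (GA106)":      (3584, 10752),
    "RTX 3060 Ti (GA104)":   (4864, 10752),
    "RTX 3080 (GA102)":      (8704, 10752),
    "RTX 4080 12GB (AD104)": (7680, 18432),
}

for name, (shaders, flagship) in CARDS.items():
    print(f"{name}: {shaders / flagship:.0%} of the flagship die")
```

By this metric the 4080 12GB sits at roughly 42% of the flagship die, between the 3060's ~33% and the 3060 Ti's ~45%, far below the 3080's ~81%, which is the tier its name implies.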
 
Same here, except 3900x to 5800x3D.....

In the same boat. Got a 5600X, which does the job pretty well TBH, but the 1% and 0.1% lows could be better, especially when I'm pushing for high-refresh-rate gaming. I could drop a 5800X3D into my B450, but I fancy a new motherboard (the current one is four years old). What I do will largely depend on how RDNA 3 does: if it's great I'll stick with AMD, but if it's poor I think I might go back to Intel with the 13600 (and continue to use DDR4 RAM until DDR5 prices come down), as Nvidia seems to prefer Intel CPUs again now, at least the 4090 does. I suspect it will be the same when the inevitable 4080 Ti arrives...



Not in any rush though.


True that. I think Portal RTX will be the new main title for the most RT effects, so it will be interesting to see how it does, but it still seems a bit tacked on compared to Metro EE.
 
The limiting factor in the world of graphics cards now is a decent 4K monitor. I want OLED-level colours now after seeing them on my LG TV, and I want it in a sensible 32-inch format. So I might upgrade in reverse like I did last time: monitor first, then the PC to power it.
I think those are coming out sometime next year; I think I heard LG is doing 165 Hz 27-inch 4K OLEDs. I used to have a 4K LG-GN950 but went down in resolution to the Alienware QD-OLED for that reason. The colour and contrast are in a different league and I love it for that, but I do miss the resolution. Wish a 4K ultrawide existed with OLED.
 
Just asked my brother, who is in HK, to see how much the 4090s are, and blimey, the price difference and stock availability are night and day!

He could walk into a shop now and pick one up off the shelf for around HKD$13999. I'm tempted to ask him to bring one back for me. The best bit is, you can even barter the price a bit :cry:, goddamnit!
 
Is this not down to Nvidia drip-feeding the UK market?
 
You'd think they would drip-feed everywhere, but why just the UK? US markets seem unaffected as well; people on Reddit seem to be able to walk in and grab one quite easily.

Dunno about the EU either, if anyone has any feedback.
 

Because of manufactured fake shortages in regions that still have high 30-series stock, plus the issues with the economy in those regions. They are worried that if they have lots of 40-series stock available, all the prices will plummet. You have to keep those high margins going somehow :cry:;).

It's coming back to bite them soon, starting now with AMD and the 7000-series CPUs and AM4 motherboards. Intel will soon be starving the market of CPUs and motherboards, and AMD has now admitted to cutting back on manufacturing of 7000-series CPUs because demand is low. Low because the pricing of the whole system needed (CPU, motherboard and DDR5 RAM) is still too expensive.
 

AD103 is not a new GPU SKU; we had it with Ampere too (GA103), and those ended up as 3080 Ti laptop GPUs. Of course, they were originally planned to be 3080s, and then AMD threw a spanner into the plans with the 6000 series, so every SKU had to be moved up a tier to compete. Things have just gone back to plan now, as Nvidia clearly knows AMD has nothing to compete this time, or AMD has done a better job of securing its data on the 7000 series; we have to wait and see.

If AMD has something and Nvidia didn't know it, then just wait and see how quickly the AD102 becomes a slightly higher-end 4080 again, and in time a 4080 Ti and a 4090 Ti.
 
That's at the normal 100% power target; this card doesn't go over 100%, and it's at stock clocks too.

Memory temps are very good compared to what the 30-series were, but for some reason I can't help but feel Inno may have used stiff memory pads, hence better memory temps than core temps. I know from the 3080 I had the issues of trying to get the right balance of viscosity in memory thermal pads: if they were too hard, the core wouldn't make full, proper contact, and it looks like that is what I am seeing here.

I mean, it's not the worst considering it's the stock fan profile, which is low on this Inno, but I do think these delta readings should be a little closer than what I am getting.
Got the Inno3D iChill X3, and memory is at 1500, stress-tested and stable, which is pretty good considering. Makes sense what you are saying about the stiff thermal pads and better cooling.

I've got a 12700K and have been considering whether to bother going for 13th gen, but I'm sticking with this; it runs 5 GHz P cores and 4 GHz E cores, with a 4 GHz ring, at 1.22 Vcore.
 