Nvidia gimmicks / features pushing up Graphics card prices?

I can watch all the videos I'd like, but it doesn't replace trying it where latency is a potential issue. Reserving judgement until I try it myself, which at these prices will be when I'm old, grey, and my vision needs upscaling, not my GPU!

If the 4080 and 4070 Ti are the cards best suited to FG, they should be running things at a high base frame rate anyway, so FG conversely becomes less important, in theory. It's like DLSS or FSR: no need to turn it on if you're already getting huge frame rates.
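Purely as a back-of-envelope illustration of that point (the doubling and the ~10 ms overhead below are assumptions for the sake of the arithmetic, not measured figures): displayed fps roughly doubles with FG, but felt latency stays tied to the base frame time plus the FG cost, so the higher the base frame rate, the less FG has to offer.

```python
# Illustrative sketch only: assumes FG doubles displayed fps and adds a
# fixed ~10 ms of pipeline overhead (both figures are assumptions).
def fg_estimate(base_fps, fg_overhead_ms=10.0):
    base_frame_time = 1000.0 / base_fps   # ms per rendered frame
    displayed_fps = base_fps * 2          # one generated frame per rendered frame
    approx_latency = base_frame_time + fg_overhead_ms
    return displayed_fps, approx_latency

for fps in (30, 60, 120):
    shown, lat = fg_estimate(fps)
    print(f"base {fps:3d} fps -> ~{shown} fps shown, ~{lat:.0f} ms frame time + FG cost")

# base  30 fps -> ~60 fps shown, ~43 ms   (smoother, but still feels like 30 fps)
# base  60 fps -> ~120 fps shown, ~27 ms
# base 120 fps -> ~240 fps shown, ~18 ms  (already quick; FG adds little)
```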
I saw people crowing on about DLSS 3 latency, but one of the videos (I can't remember which, as it was a month or so back) was showing regular latency cut down to a third with DLSS 2, and with DLSS 3 it was only slightly higher than DLSS 2, something like 55 ms vs 40 ms, but both were massively improved compared to standard rendering.
 
I saw people crowing on about DLSS 3 latency, but one of the videos (I can't remember which, as it was a month or so back) was showing regular latency cut down to a third with DLSS 2, and with DLSS 3 it was only slightly higher than DLSS 2, something like 55 ms vs 40 ms, but both were massively improved compared to standard rendering.

Nvidia seem to have improved/reduced the latency with FG turned on by quite a bit now. IIRC they also fixed the issue with vsync (which is recommended to be turned on in the NVCP when using G-Sync): when FG pushed the fps up to the monitor's refresh rate, vsync would kick in and add a lot of input lag, so the only "fix" was to keep your fps from hitting the monitor's refresh rate (something only DF/Alex highlighted).
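For reference, the usual workaround before that fix was simply to cap the frame rate a few fps below the panel's refresh so vsync never actually engages. A minimal sketch of that rule of thumb (the 3 fps margin is the commonly quoted value, not an official Nvidia figure):

```python
# Rule-of-thumb sketch: with G-Sync + V-Sync enabled, keep the fps cap a few
# frames below the panel's refresh rate so V-Sync never engages.
def suggested_fps_cap(refresh_hz: int, margin: int = 3) -> int:
    return refresh_hz - margin

for hz in (60, 120, 144, 165):
    print(f"{hz} Hz panel -> cap at ~{suggested_fps_cap(hz)} fps")
```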

I know, but without Nvidia pushing it would we have it in games as much as we do now? Probably not.

AMD haven't really caught up on the ray tracing side.

Completely agree. People keep going on about Nvidia-sponsored games intentionally harming AMD's RT performance, when the reality is that AMD simply aren't as good in either the hardware or the driver space for RT yet. E.g. Lego (free on the Epic store) is fully sponsored by Nvidia and pretty heavy on the RT, yet it runs relatively well on AMD hardware:

[benchmark screenshot]

Now, if we're talking about Portal RTX, then Nvidia 100% have sabotaged AMD there :cry: That will be entirely down to them using their own RTX Remix tool to implement RT in Portal, though :p
 
I saw people crowing on about DLSS 3 latency, but one of the videos (I can't remember which, as it was a month or so back) was showing regular latency cut down to a third with DLSS 2, and with DLSS 3 it was only slightly higher than DLSS 2, something like 55 ms vs 40 ms, but both were massively improved compared to standard rendering.
Yeah, it's why I'm not making a final decision until I've tried it. I know that personally I'm quite sensitive to it, which is why I'm concerned. 55 ms to 40 ms is a fair chunk of difference.
 
Yeah, it's why I'm not making a final decision until I've tried it. I know that personally I'm quite sensitive to it, which is why I'm concerned. 55 ms to 40 ms is a fair chunk of difference.
So how would you cope with the standard latency, which was massively reduced by DLSS 2/3?
 
When was it reduced? I haven't really used DLSS for the last month or so, so can't comment. I only really use it on F1 2022 I think, but haven't had a go on that for a while. Like I said, it may well be fine, but I have spent ages with guitar modelling where you want latency in the single digits - 15ms is noticeable, and 40ms is a whole different world.
 
When was it reduced? I haven't really used DLSS for the last month or so, so can't comment. I only really use it on F1 2022 I think, but haven't had a go on that for a while. Like I said, it may well be fine, but I have spent ages with guitar modelling where you want latency in the single digits - 15ms is noticeable, and 40ms is a whole different world.
In the rest of my comment, when I said DLSS 2/3 brought latency down to a third of the base latency in the video, that's what the 40/55 ms figures were much lower than. Can't remember which video I saw it in, though, as there have been quite a few I've watched over the last month or so.
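Putting rough numbers on that (only the 40/55 ms pair comes from the discussion above; the implied base figure just follows from the "down to a third" claim):

```python
# Rough arithmetic behind the claim: if DLSS 2 cut latency to about a third of
# the standard figure and measured ~40 ms, the standard figure would have been
# around 120 ms, with DLSS 3 frame generation landing at ~55 ms.
dlss2_ms = 40
dlss3_ms = 55
standard_ms = dlss2_ms * 3   # implied by the "down to a third" claim
print(f"implied standard latency: ~{standard_ms} ms")
print(f"DLSS 3 vs DLSS 2 penalty: +{dlss3_ms - dlss2_ms} ms")
print(f"DLSS 3 vs standard saving: -{standard_ms - dlss3_ms} ms")
```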
 
Mine still looks like the left one after RTX on, what gives?

Lies. We are still waiting for you to smash your 3090 with a hammer, which has not been forthcoming...

And no cheating by buying a broken 3090 off eBay to smash. We want to see it run a game/benchmark in your PC, followed by it being removed and smashed to pieces in one shot! :cry:;)
 
Speaking of FG/DLSS 3, good timing: HUB were overly critical/negative in their initial review, but there's a more "balanced" viewpoint from them here now:


So, as we all already knew, it comes entirely down to the base fps and what the individual finds "playable/good", at least in terms of base latency. For me, that is definitely 60 fps, ideally 70/80.
Translated: they slated it before, how dare they. Now they're less critical, so it's a more "balanced" view? :D
 
The original HUB review of the 4090 did make me think the DLSS 3 was going to be a bit of a gimmick, and I bought my 4090 thinking I wouldn't really use it.

I have to say though, in the three games I've tried it in (MSFS, Plague Tale, Witcher III), I've been very impressed by it. The UI glitches etc. that HUB mentioned in their original review have largely been dealt with, and I'm not noticing any latency issue in any of the three games.

There is perhaps a slight image quality degradation in flight sim, but you really have to be looking for it. I'm playing a lot of the Witcher just now and I can't see any loss of image quality at all in that game when I enable it. I imagine that's down to the Witcher's base frame rate being a little higher than in MSFS, but I don't know.

It clearly isn't suitable for every scenario, but in a single player game it's genuinely really good in my experience.

I was a bit worried about what the new Cyberpunk Overdrive update might be like, but based on what I've seen with the Witcher I'm looking forward to it.
 
If you look at the RT core and SM counts, I think they are always equal on RTX GPUs (one RT core per SM). The problem is that this doesn't deliver nearly enough performance.


Couldn't the architecture have been designed (or optimised for the RTX 4000 series) to double the number per SM, or quadruple it? You should be able to turn on RT without worrying about crippling your framerate; otherwise it will remain a premium feature that keeps pushing up the price of the high-end/flagship cards.

It can't really be claimed that ray tracing is still a new technology, so I certainly think they could have gone a lot further.

Nvidia could have decided to increase the number of RT cores much more across the whole generation, but they presumably felt that they already had such a strong RT advantage with the RTX 3000 series that this simply wasn't necessary - that the RTX 4000 series would sell with little effort anyway.

What you got was approximately a 52% increase in RT cores going from the RTX 3090 Ti to the RTX 4090. For an 'RTX 4090 Ti', this could perhaps be a 60% increase.
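That ~52% lines up with the published SM/RT core counts (one RT core per SM): 84 on the RTX 3090 Ti versus 128 on the RTX 4090. A quick sanity check:

```python
# Quick check of the ~52% RT core increase using the published core counts.
rt_cores_3090_ti = 84
rt_cores_4090 = 128
increase_pct = (rt_cores_4090 / rt_cores_3090_ti - 1) * 100
print(f"RT core increase, 3090 Ti -> 4090: {increase_pct:.1f}%")  # ~52.4%
```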

When you look at the design of the RTX 4000 series, it's really just a scaled up Ampere, with significantly improved cooling, built with a new fabrication technology. The production costs have increased, because of the transition to one of the most advanced TSMC nodes (they have certainly made use of the improved transistor density on the top end models).
 
When you look at the design of the RTX 4000 series, it's really just a scaled up Ampere, with significantly improved cooling, built with a new fabrication technology. The production costs have increased, because of the transition to one of the most advanced TSMC nodes (they have certainly made use of the improved transistor density on the top end models).
I wouldn't be surprised if the 4080 costs the same as, or is even cheaper to make than, a 3080. The die is 40% smaller, and while it has 6 GB more VRAM, it uses 2 GB chips, which are not double the cost of 1 GB chips, and it only needs 8 memory packages instead of 10. Looking at both PCBs side by side, the 4080's is half empty compared to the 3080's.
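For what it's worth, the 40% figure roughly matches the published die sizes: GA102 in the 3080 is around 628 mm², AD103 in the 4080 around 379 mm². A quick check:

```python
# Rough check of the "40% smaller die" claim using the published die sizes.
ga102_mm2 = 628   # RTX 3080 (GA102)
ad103_mm2 = 379   # RTX 4080 (AD103)
shrink_pct = (1 - ad103_mm2 / ga102_mm2) * 100
print(f"AD103 is ~{shrink_pct:.0f}% smaller than GA102")  # ~40%
```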

The coolers are larger, but that's an unnecessary cost, as the card uses less power than the 3080; it was probably done to make the cards appear more premium than they actually are.
 
The coolers are larger, but that's an unnecessary cost, as the card uses less power than the 3080; it was probably done to make the cards appear more premium than they actually are.
Nope, better cooling = definite progress in my mind. Fewer broken/failing GPUs.
 
Don't think we've seen any news about Ampere GPUs failing in large numbers, despite a lot of people also mining with them for an extended period.
I probably should have said graphics cards. My last graphics card (EVGA RTX 3080) was a used card, and the fans were failing within months.

There was evidence of memory controllers being degraded over time by GPU mining, depending on what voltages were used. In some cases, GPUs even failed within a few months.
 
Other than the 4090, AMD have caught up. Raster performance is strong, FSR competes with DLSS and RT is usable. Not sure what else people expect them to do?

They have caught up... with what is technically Nvidia's low-mid tier performance capability.

We have had two tier shifts from Nvidia since AMD fell behind.

In the 680 era, we had Nvidia release their x60 chip as an x80 chip.

This generation, it has happened again.

So - AMD are a long way behind.

It's not what any of us want - for price or performance - but there's no point hiding from the truth of the situation.
 