Nvidia gimmicks / features pushing up Graphics card prices?

I love that nVidia buyers are excited for fake frames. Spending thousands on hardware that comes with software to make it feel faster. Like someone else said, nVidia are great at inventing problems so they can sell you the solution.

Fake frames?

I don't use DLSS3 frame generation, don't need to & don't want to.

In base performance, without frame generation - the 4090 is still far ahead of AMD & we haven't seen the release of the full size chip yet.
 
They have caught up... with what is technically Nvidia's low-mid tier performance capability.

We've had two tier shifts from Nvidia since AMD fell behind.

In the 680 era, we had Nvidia release their x60 chip as an x80 chip.

This generation, it has happened again.

So - AMD are a long way behind.

It's not what any of us want - for price or performance - but there's no point hiding from the truth of the situation.
It doesn't really matter, because Nvidia's latest cards are still significantly slower in supported games, with RT enabled. Both Nvidia and AMD are using upscaling tech to try to make up for this performance deficit.

and they are increasing their prices, rather than reducing them.
 
Isn't the whole RT thing a little bit like Tessellation was a few years ago?
It was a hit on performance for both sides (more of a hit on AMD, at least at the start), but now we don't even think about Tessellation. They didn't stop doing it, did they?
But if we'd gone with the attitude back then that Tessellation was a hit on performance so we should never do it, where would we be now?
If we went with the idea that RT is a performance hit so let's just abandon it and never do it again, how might that affect us in 5 or 10 years' time?
 
Isn't the whole RT thing a little bit like Tessellation was a few years ago?
It was a hit on performance for both sides (more of a hit on AMD, at least at the start), but now we don't even think about Tessellation. They didn't stop doing it, did they?
But if we'd gone with the attitude back then that Tessellation was a hit on performance so we should never do it, where would we be now?
If we went with the idea that RT is a performance hit so let's just abandon it and never do it again, how might that affect us in 5 or 10 years' time?
No, developers as a whole don't use "tessy" like that any more. The problem with tessellation was that it was abused so heavily that objects and background scenes showed little to no visible difference with it vs without it, for an unnecessary performance penalty. Which is why it has faded into obscurity along with PhysX, HairWorks, etc.

Take the Heaven benchmark, for example. Look what that benchmark had to do to make heavy use of tessy. It literally made the scene look cartoonish and out of place. Spikes on the dragon protruded out more than normal, the cobbled flooring protruded out beyond what would look normal to walk on, etc.

Tessy and RT share commonalities that make their overuse obvious. Tessy simply tanked performance beyond a certain point, and so does RT. Yet RT is unique because the game itself is still rasterized. What makes both an equal fail to me is that both tessy and RT were supposed to make the game "lifelike" in their respective use cases, but neither of them accomplished that from what I've seen. The only thing they're remembered for, as you've already pointed out, is that they both tank performance as an optional way to play the (rasterized) game.
 
Isn't the whole RT thing a little bit like Tessellation was a few years ago?
It was a hit on performance for both sides (more of a hit on AMD, at least at the start), but now we don't even think about Tessellation. They didn't stop doing it, did they?
But if we'd gone with the attitude back then that Tessellation was a hit on performance so we should never do it, where would we be now?
If we went with the idea that RT is a performance hit so let's just abandon it and never do it again, how might that affect us in 5 or 10 years' time?
It's nothing like Tessellation. RT is pure junk and has set back performance and visual improvements by years.
 
If the RT hardware was scaled up enough, game developers wouldn't have to design two different visual styles for many modern games. The RT cores in RTX cards aren't slow, it's just that there aren't enough of them yet - Nvidia's raytracing is not being held back by their level of technological development.

It creates a lot of extra work... Limiting the number of RT cores also creates unnecessary tiers of gamers (maybe tears too, when a high end graphics card costs more than all other parts of a PC put together :)).
 
It's nothing like Tessellation. RT is pure junk and has set back performance and visual improvements by years.
Yeah, like those goddamn high settings too. And antialiasing, and SSAO, and Ultra textures. And other visual options!

Damn settings... we had performance back in the day, not things to make games look better. CS:GO is still the pinnacle of graphics!

Pfft…
 
It doesn't really matter, because Nvidia's latest cards are still significantly slower in supported games, with RT enabled. Both Nvidia and AMD are using upscaling tech to try to make up for this performance deficit.

and they are increasing their prices, rather than reducing them.

What do you mean supported games?

There have always been a handful of games / developers where AMD led the way... just like Nvidia have their own funded titles... over the last decade or more, those Nvidia titles have been more prevalent than the AMD ones.

AMD hardware tends to favour things like Assassin's Creed... IIRC.

In anything and everything that's not specifically funded by AMD & coded directly for AMD, Nvidia wins... by quite a big margin.
 
New formula to calculate the RT performance of an Nvidia card at native 1080p:
Max framerate with RT options enabled = number of RT cores

https://www.techpowerup.com/gpu-specs/geforce-rtx-4080.c3888
This has 76 RT cores.

The performance of a system is always determined by its slowest part :D
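
Just to spell that tongue-in-cheek rule of thumb out (it's a joke heuristic, not a measured relationship), here's a minimal sketch in Python. The 4080's RT core count comes from the TechPowerUp link above; the 4090 figure is added purely for illustration.

```python
# Tongue-in-cheek rule of thumb from the post above:
# estimated max framerate at native 1080p with RT enabled ~= number of RT cores.
# This is a joke heuristic, not a real performance model.

def estimated_rt_fps_1080p(rt_cores: int) -> int:
    """Rule of thumb: max native-1080p framerate with RT on ~= RT core count."""
    return rt_cores

# RT core counts: the 4080 figure is from the TechPowerUp link above;
# the 4090 figure is added here for illustration only.
cards = {
    "RTX 4080": 76,
    "RTX 4090": 128,
}

for name, cores in cards.items():
    print(f"{name}: ~{estimated_rt_fps_1080p(cores)} fps (rule of thumb, native 1080p, RT on)")
```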

But... even the 4080 is a 2560x1440 or higher resolution part??

Who's buying a 4080 or 4090 for 1920x1080 gaming?

Heck... even the new AMD 7900xt(x) would be a waste at that resolution unless you're a competitive gamer searching for 300+ fps

To be fair on AMD, given their lower CPU overheads... for 1080p high-fps competitive gaming - they may have the win there...

... as long as the drivers prove stable enough to finish a game without crashing to desktop - a common problem, even today, for AMD - something they should have resolved a decade ago!!!
 
What do you mean supported games?

There have always been a handful of games / developers where AMD led the way... just like Nvidia have their own funded titles... over the last decade or more, those Nvidia titles have been more prevalent than the AMD ones.

AMD hardware tends to favour things like Assassin's Creed... IIRC.

In anything and everything that's not specifically funded by AMD & coded directly for AMD, Nvidia wins... by quite a big margin.
I wasn't making a comparison between AMD and Nvidia.

I was saying that regardless of what Nvidia card you buy to play RT games, turning on RT always has a heavy impact on framerate.

That's because they haven't increased the number of RT cores as much as they need to.

The RTX 4000 series was heavily marketed as a large improvement in RT, but the official Nvidia slides weren't showing an improvement in native-resolution performance with ray tracing enabled - the gains shown came from the latest DLSS technologies instead.

That's pretty crazy when the current flagship card (the RTX 4090) is being sold for £1,600 or more (£1,700 for an AIB card) at present.
 
I wasn't making a comparison between AMD and Nvidia.

I was saying that regardless of what Nvidia card you buy to play RT games, turning on RT always has a heavy impact on framerate.

That's because they haven't increased the RT cores as much as they need to.

The RTX 4000 series was heavily marketed as a large improvement in RT, but the slides weren't showing an improvement in native-resolution performance - the gains shown came from the latest DLSS technologies.

That's pretty crazy when the current flagship card (the RTX 4090) is being sold for £1,600 or more at present.

But... it's still a decent improvement over last gen.

Although, arguably - I will give you this - AMD have made a much bigger stride/progression in their RT performance this generation... relative to themselves and/or Nvidia.

I'd argue that's AMD's biggest win this generation... hopefully next gen they will get closer to Nvidia's relative RT performance.
 
Indeed. In my opinion, it's hard to recommend AMD for ray tracing either :D

But they are making progress.

Yes.

While I may seem like an AMD naysayer... I do want them to succeed.

For us consumers - competition at the higher levels (even if the low-mid range components are the highest volume units) is only good.

I really want things to go back to the times like my teenage years with ATi... where I started getting excited about hardware releases & ATi/Nvidia were trading blows every 9 months or so between releases.

My very first dGPU was a GT460 or something... next it was an X800XT and in the years after I had more ATi cards than Nvidia.

I miss those times... but now, even something like RT performance is a winner for me - I'm in the minority, being someone who's been craving effective real-time raytracing for many years - so even though the current implementation isn't "full" RT... the way it's been implemented is so good that arguing over what we have vs the ideal full/total RT has become irrelevant wrt the user experience, & I'm very happy with what we already have - looking forward to how it develops & hoping AMD can match or better Nvidia's implementation in the coming generations.
 
Maybe if they can double the RT cores for the RTX 5000 series across all cards, that wouldn't be far off the performance required.

Nvidia's roadmap suggests we won't see that until 2024 though:

I dunno - I game at 5120x1440.

I only enable RT for SP games, not MP.

3090/3090ti wasn't quite there.

4090 - I'm well over 60fps in anything I care about playing.

I didn't get interested in CP2077, but I know some find that a bit lacking without DLSS - but from the reviews I've seen, the frame generation of DLSS3 isn't required to enjoy 60+ fps at my resolution or standard 4k.

That's pretty good in my book - looking forward to what the full chip (4090ti or whatever) can do when it's released.

Curious to see how the likes of Atomic Heart will perform, that looks like the main GPU-killer coming soon.

First time in a very long time I've felt content with GPU horsepower.
 
If the RT hardware was scaled up enough, game developers wouldn't have to design two different visual styles for many modern games. The RT cores in RTX cards aren't slow, it's just that there aren't enough of them yet - Nvidia's raytracing is not being held back by their level of technological development.

It creates a lot of extra work... Limiting the number of RT cores also creates unnecessary tiers of gamers (maybe tears too, when a high end graphics card costs more than all other parts of a PC put together :)).

They can always follow the example of Metro EE - it works well even with RT:

[Chart: Metro Exodus Enhanced Edition RT benchmark, 3840x2160]


I wasn't making a comparison between AMD and Nvidia.

I was saying that regardless of what Nvidia card you buy to play RT games, turning on RT always has a heavy impact on framerate.

That's because they haven't increased the number of RT cores as much as they need to.

The RTX 4000 series was heavily marketed as a large improvement in RT, but the official Nvidia slides weren't showing an improvement in native-resolution performance with ray tracing enabled - the gains shown came from the latest DLSS technologies instead.

That's pretty crazy when the current flagship card (the RTX 4090) is being sold for £1,600 or more (£1,700 for an AIB card) at present:
[Chart: Cyberpunk 2077 RT benchmark, 3840x2160]


40 fps native; with DLSS Quality (which is good) it should be more than 60 fps easily. It's twice as fast as the 3090. Performance good, price bad.
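
As a rough sanity check of that claim - assuming DLSS Quality renders internally at roughly two-thirds scale per axis (2560x1440 for a 4K output) and that framerate scales, very crudely, with shaded pixel count - here's a minimal sketch in Python. The numbers and the linear-scaling assumption are mine, not from the chart.

```python
# Crude sanity check of "40 fps native -> 60+ fps with DLSS Quality".
# Assumptions (mine, not from the post): DLSS Quality renders at ~2/3 scale
# per axis (2560x1440 internally for 3840x2160 output), and framerate scales
# roughly with shaded pixel count - ignoring fixed per-frame costs and the
# upscaling overhead, so this is an upper bound, not a prediction.

native_res = (3840, 2160)
internal_res = (2560, 1440)   # DLSS Quality internal resolution at 4K output

pixel_ratio = (native_res[0] * native_res[1]) / (internal_res[0] * internal_res[1])  # = 2.25
native_fps = 40

upper_bound_fps = native_fps * pixel_ratio
print(f"Pixel ratio: {pixel_ratio:.2f}x")
print(f"Naive upper bound with DLSS Quality: ~{upper_bound_fps:.0f} fps")
# Real-world scaling is well below linear, but comfortably clearing 60 fps is plausible.
```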
 
The performance is only there with the RTX 4090, when DLSS is enabled to produce upscaled 4K frames. No doubt there will still be demand for a faster RTX 4090 Ti, or a Titan / super-duper edition.

In the case of the RTX 4080, still an expensive card, they basically need to more than double its performance (e.g. for the 'RTX 5080') when running games with RT on at native resolution. Unfortunately, it's only slightly faster than previous-gen cards (e.g. the RTX 3090 Ti) under these conditions.

The problem is that cards like the RTX 3090, RTX 3090 Ti and now the RTX 4080 are priced at or above £1,000, which is much more than most can afford. I think the RTX 3080 Ti was also sold for over £1,000 when these were still being sold. EDIT: 'Jeff' is still selling these for >£1,000.

Whatever the production (+ R&D) costs may be, Nvidia must be making supernormal profits on these cards.

One thing you notice is that models with over 16GB of VRAM don't seem to receive any special boost to performance.
 
MSRP used to be a target to keep products within a certain price range (true for the RTX 3000 series to some extent); I suppose Nvidia thinks their MSRPs will help to keep prices high this time :cry:

Cards designated x60 or x70 used to be a clue that they would be priced as mid-range, mass-market products - just goes to show that product naming schemes should be ignored.
 