Nvidia gimmicks / features pushing up Graphics card prices?

Soldato | Joined: 30 Jun 2019 | Posts: 8,031
DLSS 2
DLSS 3
Frame generation (lame)
Ray tracing
Tensor cores for AI processing
Floating point performance (e.g. 48.74 TFLOPS)
Huge amounts of VRAM
G-Sync
'Low latency' modes
Support for very high framerates / refresh rates
Fancy new power connectors :D
Ampere +++

Do these things seem familiar?

None of these things are direct indicators of the performance of the GPU itself. They don't indicate a graphics card with more processing cores, or higher pixel rates / texture rates. Nor do they indicate the overall 3D graphics (rasterisation) performance of the card.

We see these features on the box and decide they are must-have features, and that pushes prices up. I admit that DLSS 2 is a very nice thing to have if you have a 1440p/4K monitor. But AMD is now competitive in terms of its resolution upscaling and detail enhancement technologies, so I think we should basically all consider buying AMD this time around...

Of course, I may end up being a big hypocrite if the RTX 4070 / 4070 TI prices seem affordable. Otherwise, based on the prices of the RTX 4080, Nvidia can do one. They have reverted to type, and we are seeing similar prices to the RTX 2080 TI (or even higher) again. And endless product variations and re-releases.
 
Soooo... the 4090 is cheaper than the 3090.
Even if true, the RTX 3090 was only a bit better than the RTX 3080 10GB and was stupidly expensive anyway.

Then some people who bought the RTX 3090 were undoubtedly annoyed when the RTX 3090 TI came along and was even more stupidly expensive.

The same thing will happen with the RTX 4090.
 
I imagine 3090 owners were more annoyed at the fact that the TI model actually got the VRAM temps sorted out. The price was crazy because Nvidia made the new SKU during the crypto boom and gave it a massive MSRP, since it would have sold regardless. Back to the topic though: I don't think the features are pushing the prices up. That's just corporate greed doing what it does, and hoping the crypto prices can be sustained forever.
The gimmicks play a big part. Lots of people will now only buy an Nvidia RTX GPU because of DLSS 2/3. The other one that is a must-have for some is G-Sync.
 
Throwing more and more power behind traditional rasterisation was always going to hit a point of diminishing returns.
I think you've picked up on my main point. If we think back a few years, most gamers just wanted higher framerates in 3D graphics. That (mostly) was what a graphics card was for.

And steady performance (decent minimum framerates).

Not ray tracing, nor the ability to upscale to high resolutions.

Anti-aliasing technology was generally advancing enough to deal with most of the unpleasant artifacts we see from running at lower resolutions.

Because games have these presets (Ultra, Ultra RT, etc.), the PC community has now decided that these are things we should all want.

We didn't get a new architecture for the RTX 4000 series, but we pay top whack for it anyway.
 
It's another strange thread by the OP. He's almost as bad as that Sewerino guy.
You could do what most sensible people do when they read a point of view they disagree with - ignore it :p

The level of debate from some people is basically: I have lots of money and I will do what I want. This is fine, but why subject other people to this bias?

I do not believe I've said anything particularly controversial - if you consider what I've written to be a bit lacking in depth, perhaps that is so. But I'm really looking at it from the point of view of someone who is either trying to buy a new card, or just trying to get into PC gaming in general.

I will buy Nvidia or AMD. Maybe even Intel after a couple more generations... It makes no difference; the only thing that counts is value. Reviewers have measured this almost entirely based on average framerate per dollar, or, in some cases, the minimum framerate, measured across several games.
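As a rough sketch of that kind of value metric (the card names, framerates and prices below are made-up placeholders, not real benchmark or pricing data):

```python
# Back-of-envelope "value" metric: average framerate per pound spent.
# All figures here are hypothetical examples, not benchmark results.
cards = {
    "Card A": {"avg_fps": 120, "price_gbp": 1200},
    "Card B": {"avg_fps": 95, "price_gbp": 650},
}

for name, spec in cards.items():
    value = spec["avg_fps"] / spec["price_gbp"]  # frames per second per pound
    print(f"{name}: {value:.3f} fps/£")
```

On those made-up figures the cheaper card wins on fps per pound, which is basically how the value argument goes.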
 
I will critique my own point of view a little - The problem is that we haven't seen what AMD has to offer in the mid/high end yet.

They aren't rushing to release the '7700XT' and '7800XT', which are likely to be (more) desirable in terms of value. A delay seems quite possible.

The 6700 XT and other cheaper cards didn't arrive until months after the initial RDNA2 launch - unfortunately, this is simply the norm now.

My optimism partly comes from the success of GPUs like the 6700 XT... If AMD can offer similar value again, then we may see some real competition in the graphics market. Apparently, this was one GPU that did appear in reasonable quantities :)

But the shortfall of mass-produced mid-range cards is still a massive problem - it could be that AMD is still focusing on console GPU production, where RDNA2 is still sufficient (and cheaper to produce, of course).

AMD too is fond of marketing and gimmicks in a similar vein - I haven't really discussed that because their marketing simply hasn't had the same effect on customers.
 
We know what AMD are releasing: a 7700XT renamed as a 7800XT, as the real one got renamed to a 7900XT and price-jacked. So don't expect it to be much faster than a 6800XT, as it will likely have just 60 CUs.
I don't. Why would it be?

The 7700XT should be approx. 50% faster than the 6700XT (50% improvement is the target for the whole generation I think). That would put it roughly where the 6800 XT is.

The only way they will manage a >50% performance improvement in 3D graphics is by jacking up the power consumption on the higher-end models. I would guess they will need to clock a lot higher (beyond 3GHz?), which is probably going to be something the refresh GPUs can handle.
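A back-of-the-envelope sketch of that reasoning (the baseline index and the 6800 XT gap below are assumptions on my part, and the 50% figure is just the rumoured generational target):

```python
# Hypothetical performance index: treat the 6700 XT as 100 (arbitrary baseline).
rx_6700_xt_index = 100

# Rumoured ~50% generational uplift target.
gen_uplift = 0.50

projected_7700_xt = rx_6700_xt_index * (1 + gen_uplift)
print(f"Projected '7700 XT' index: {projected_7700_xt:.0f}")  # ~150

# If the 6800 XT sits roughly 45-50% ahead of the 6700 XT at 1440p
# (an assumption), a ~150 index lands it in 6800 XT territory.
```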
 
One thing I will say about the RTX 4070 (non-TI): it may surprise us (many already seem to be writing it off...), just as the RTX 3070's performance surprised people at launch.

It appears to be a cut-down version of the AD104 die, with essentially a similar spec, except for having fewer CUDA cores than the full chip. That could affect performance a fair amount, however...

it could end up being the highest-clocked GPU of the series, which would help to make up for this reduction.

I remember saying on a thread a while ago that higher-clocked GPUs (relative to other cards in the same generation) tend to offer the best value for customers; in my view, it is something to watch out for. We also know Nvidia has improved their cooling a fair bit, something that I definitely approve of, given the short life of my RTX 3080 fans.
 
This all smacks of going back to 1997 and complaining about these fancy new 3d accelerators. Why couldn't they just keep making the VGA graphics cards we'd had for years, rather than driving up the cost with this new 3d acceleration stuff?
You don't need RT to play games; it's a nice-looking cinematic effect. You do need a GPU capable of DX9/DX10/DX12 3D acceleration (or Vulkan/OpenGL) for all but the most basic of games.

The thing is, they don't actually include many RT cores in their bottom/mid-tier GPUs. They could, and it wouldn't cost much more to double the count, but they won't, because it is still a premium feature that enthusiasts will pay for.

I will support this feature, but only when they massively increase the number of RT cores for all but the lowest-tier GPUs.
 
I know, why do they have to make stuff better?
It's a lateral move, because RT at the moment always reduces framerate. Why do they have to go sideways?

It's fine if you can hit your framerate target. Should this be 30 or 60? These are the options we see on many console games with RT on or off.

It could be argued that they haven't gone far enough with the number of RT cores; at a certain point, the RT penalty to framerate is likely to be small, assuming the CPU can keep up. Instead, we got DLSS and upscaling tech. This doesn't actually help at lower resolutions like 1080p, where you lose too much of the detail in frames.
 
how do you think everything else got to where it is?
Some evolutionary dead-ends, a complicated series of events leading to bipedal, furless, talking apes, followed by quite a lot of arbitrary decisions, the invention of currency? Feudalism then led to hordes of console peasants. Then the golden age of mere mortals transcending into members of the PC Master Race? There are too many things to list, really :)
 
And you don't think there was ever a sideways step in any of that that led to something better eventually?
Crabs. Good at sideways stepping. Crabs and crab-like creatures have evolved multiple times in Earth's history - it's a good design, it seems.

Tasty, too.
 
If you look at the RT cores and SM count, I think they are always equal on RTX GPUs. The problem is that this doesn't deliver nearly enough performance.


Couldn't the architecture have been designed (or optimised for the RTX 4000 series) to double the number per SM? Or quadruple it? You should be able to turn on RT without worrying about crippling your framerate; otherwise, it will remain a premium feature that will keep pushing up the price of the high-end/flagship cards.

It can't really be claimed that ray tracing is still a new technology, so I certainly think they could have gone a lot further.

Nvidia could have decided to increase the number of RT cores much more across the whole generation, but they presumably felt that they had such a strong RT advantage already with the RTX 3000 series that this simply wasn't necessary - that the RTX 4000 series would sell with little effort anyway.

What you got was approximately a 52% increase in RT cores, comparing the RTX 3090 TI to the RTX 4090. For the 'RTX 4090 TI', this could perhaps be a 60% increase.
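For anyone who wants to sanity-check that figure: going by the published SM counts (one RT core per SM, as mentioned above), the RTX 3090 TI has 84 SMs and the RTX 4090 has 128, so:

```python
# One RT core per SM on RTX cards, so SM count = RT core count.
rt_cores_3090_ti = 84   # RTX 3090 TI: 84 SMs
rt_cores_4090 = 128     # RTX 4090: 128 SMs

increase_pct = (rt_cores_4090 / rt_cores_3090_ti - 1) * 100
print(f"RT core increase, 3090 TI -> 4090: {increase_pct:.1f}%")  # ~52.4%
```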

When you look at the design of the RTX 4000 series, it's really just a scaled-up Ampere with significantly improved cooling, built on a new fabrication technology. The production costs have increased because of the transition to one of the most advanced TSMC nodes (they have certainly made use of the improved transistor density on the top-end models).
 
The coolers are larger, but that's an unnecessary cost, as the card uses less power than the 3080. That was probably done to make the cards appear more premium than they actually are.
Nope, better cooling = definite progress in my mind. Fewer broken / failing GPUs.
 
Don't think we've seen any news about Ampere GPUs failing in large numbers, despite a lot of people also mining with them for an extended period.
I probably should have said graphics cards. My last graphics card (EVGA RTX 3080) was a used card, and the fans were failing within months.

There was evidence of memory controllers being degraded over time by GPU mining - depending on what voltages were used. In some cases, GPUs even failed within a few months.
 
They have caught up... with what is technically Nvidia's low-mid tier performance capability.

We have had 2x tier shifts from Nvidia since AMD fell behind.

In the 680 era, we had Nvidia release their x60 chip as an x80 chip.

This generation, it has happened again.

So - AMD are a long way behind.

It's not what any of us want - for price or performance - but there's no point hiding from the truth of the situation.
It doesn't really matter, because Nvidia's latest cards are still significantly slower with RT enabled in the games that support it. Both Nvidia and AMD are using upscaling tech to try to make up for this performance deficit.

And they are increasing their prices, rather than reducing them.
 
If the RT hardware were scaled up enough, game developers wouldn't have to design two different visual styles for many modern games. The RT cores in RTX cards aren't slow; it's just that there aren't enough of them yet - Nvidia's ray tracing is not being held back by their level of technological development.

It creates a lot of extra work... Limiting the number of RT cores also creates unnecessary tiers of gamers (maybe tears too, when a high-end graphics card costs more than all the other parts of a PC put together :)).
 
What do you mean supported games?

There have always been a handful of games / developers where AMD lead the way... just like Nvidia have their own funded titles... over the last decade or more, those Nvidia titles have been more prevalent than the AMD ones.

AMD hardware tends to favour things like Assassin's Creed... IIRC.

In anything and everything that's not specifically funded by AMD & coded directly for AMD, Nvidia wins... by quite a big margin.
I wasn't making a comparison between AMD and Nvidia.

I was saying that regardless of what Nvidia card you buy to play RT games, turning on RT always has a heavy impact on framerate.

That's because they haven't increased the number of RT cores as much as they need to.

The RTX 4000 series was heavily marketed as a large improvement in RT, but the official Nvidia slides weren't showing an improvement in native-resolution performance with ray tracing enabled; they were showing gains with the latest DLSS technologies instead.

That's pretty crazy when the current flagship card (the RTX 4090) is being sold for £1,600 or more (£1,700 for an AIB card) at present.
 