
NVIDIA 4000 Series

Not a lot of point in 16GB if the card is nowhere near powerful enough to handle 4K or ray tracing.

It would just add cost.

Also, the spec is probably wrong, because that guy just makes stuff up and frequently changes his mind, then claims 'the information has been updated'.

I guess it gives AMD an opportunity to be a bit more generous with their spec at the mid-range, if Nvidia does go with 8GB for this card.
If you get some "good" optimisation going on, like FC6 and its HD texture pack, 8GB isn't enough even for 1080p...

In a way it's understandable that older cards have less VRAM, but the minimum should be 12GB nowadays...
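As a very rough back-of-envelope sketch of why high-resolution texture packs chew through VRAM so quickly (the compression format and the arithmetic below are illustrative assumptions, not FC6's actual asset budget):

```python
# Back-of-envelope VRAM estimate for 4K BC7-compressed textures.
# BC7 stores 1 byte per texel; a full mip chain adds roughly 1/3 on top.
# Purely illustrative numbers -- not any game's real memory budget.

def texture_mib(size_px: int, bytes_per_texel: float = 1.0, mips: bool = True) -> float:
    """Approximate size in MiB of one square texture, optionally with mips."""
    size = size_px * size_px * bytes_per_texel
    if mips:
        size *= 4 / 3  # the mip chain is a geometric series summing to ~1.333x
    return size / (1024 ** 2)

per_texture = texture_mib(4096)        # ~21.3 MiB each
fit_in_8gb = (8 * 1024) / per_texture  # ~384 textures
print(f"One 4K BC7 texture: {per_texture:.1f} MiB")
print(f"Unique 4K textures fitting in 8 GiB: ~{fit_in_8gb:.0f}")
```

And that ~380-texture ceiling is before the framebuffer, geometry, shadow maps and everything else, so a texture-heavy scene can saturate 8GB even at 1080p.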
 

8GB? Many games saturate that today, even at 1080p. Any more than £300 and it's trash.
Also

Nbidya plz!
 
If they have so much excess 3000 series stock that they're basically making the 4000 equivalent weaker, then they might as well just rebrand the 3000 series as a 4000 series, or a '3600' (like the 1600 before)... it could solve the (self-inflicted) issues for Nvidia, but it would still mean we're likely to get shafted on pricing...
 
These rumours don't make sense. If anything, AMD is likely to be more competitive at the mid-range with RDNA3.

Nvidia would be failing pretty hard if they allowed AMD to take the lead here (with Navi32).

Look at the number of times predicted GPU specs have been mostly or completely wrong, especially from Twitter users. You'd need to be pretty precise to actually get an idea of how these GPUs will perform, but what you get are the widest predictions possible.

Looking at the now-released AD104 laptop GPUs, it looks like something is missing from the predictions:
https://www.techpowerup.com/gpu-specs/nvidia-ad104.g1013

For example, where is the desktop equivalent to the RTX 4080 Mobile?

Why would they choose to make the gap between the RTX 4070 (supposedly 5,888 shaders) and RTX 4080 mobile so large?
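To put a rough number on that gap, here's a quick sketch: the 4080 Mobile and full-die figures are taken from the TechPowerUp AD104 page linked above, while the 4070 figure is only the rumour, so treat the result as speculative:

```python
# Shader gap between the rumoured RTX 4070 and the RTX 4080 Mobile.
# Ada SMs have 128 FP32 shaders each, so shader counts map directly to SMs.
rtx_4070_rumoured = 5888  # 46 SMs -- rumour only, unconfirmed
rtx_4080_mobile = 7424    # 58 SMs -- per TechPowerUp's AD104 listing
ad104_full_die = 7680     # 60 SMs

gap = (rtx_4080_mobile - rtx_4070_rumoured) / rtx_4070_rumoured
print(f"4080 Mobile vs rumoured 4070: {gap:.0%} more shaders")
print(f"SM counts: {rtx_4070_rumoured // 128} vs {rtx_4080_mobile // 128}")
```

That's a ~26% jump with no announced desktop part in between.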

EDIT - It does look like these RTX 4080 mobile laptops won't be available until late Feb / early March.
 
For example, where is the desktop equivalent to the RTX 4080 Mobile?
The 4070 Ti is the desktop equivalent. Not that Nvidia cares about equivalents.
 
If they have so much excess 3000 series stock that they're basically making the 4000 equivalent weaker, then they might as well just rebrand the 3000 series as a 4000 series, or a '3600' (like the 1600 before)... it could solve the (self-inflicted) issues for Nvidia, but it would still mean we're likely to get shafted on pricing...
Once Nvidia knew they were onto a winner with Ada's performance, they decided to downgrade all the dies by a tier and bump all the prices up by one or two tiers, because god forbid their customers get to benefit.
 
I think the rumoured RTX 4070 spec looks more like an RTX 4060 Ti type of spec...

I think it's more likely that they have no idea what they are looking at (if they do somehow manage to get hold of confidential information).

In the 'leaks' that have been covered on tech websites, there's nothing that confirms which models they refer to (for 'leaks' about a specific GPU die). It's usually just guesswork, or speculation by websites like VideoCardz.
 
Can't the actual product designation (such as RTX 4080 12GB or RTX 4070 Ti 12GB) wait until a few weeks before release?

Exactly as we saw with the RTX 4070 Ti. Basically, I think they have no idea which products these GPU dies will end up in (it's not an engineering decision but a marketing one), probably because Nvidia leaves this decision quite late.
 
Probably the next gen of desktop graphics cards will use GDDR7 VRAM anyway:

Improved signalling allows 2x the bandwidth (compared to typical GDDR6 speeds).

HBM has always been expensive, not an ideal solution for consumer products.
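As a rough sketch of what 'improved signalling' means for bandwidth (the per-pin data rates below are typical figures for each standard, not any specific card's spec):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
# Data rates are typical per-standard figures, not a particular product's spec.
def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(f"GDDR6, 256-bit: {bandwidth_gbs(16, 256):.0f} GB/s")  # 512 GB/s
print(f"GDDR7, 256-bit: {bandwidth_gbs(32, 256):.0f} GB/s")  # 1024 GB/s
```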
 
Probably the next gen of desktop graphics cards will use GDDR7 VRAM anyway...
Looking forward to seeing this on the 6090!
 
I'd guess it will be considered desirable for both AMD and Nvidia, because they'll be able to cut down the memory bus further on low and mid-range models, so the new designs might actually end up a bit cheaper.
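Concretely, the same back-of-envelope maths (with the same assumed per-pin rates as the sketch above) shows the trade-off a narrower bus allows:

```python
# Doubling the per-pin rate lets a 128-bit GDDR7 bus match a 256-bit GDDR6 one.
# Fewer memory controllers and PCB traces is where the cost saving would come from.
print(16 * 256 / 8)  # GDDR6, 256-bit -> 512.0 GB/s
print(32 * 128 / 8)  # GDDR7, 128-bit -> 512.0 GB/s, same bandwidth, half the bus
```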
 
HBM prices don't really affect consumer products.

GDDR7 sounds useful, though not world-changing: GDDR6X already uses PAM4 but isn't especially efficient.
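For background on the line codes being compared (GDDR7's use of PAM3 is general background knowledge here, not something stated in this thread): bits per symbol is log2 of the number of voltage levels, as this sketch shows:

```python
import math

# Bits carried per symbol for each line code: log2(number of voltage levels).
# NRZ (plain GDDR6) uses 2 levels, GDDR7's PAM3 uses 3, GDDR6X's PAM4 uses 4.
for name, levels in [("NRZ/PAM2", 2), ("PAM3", 3), ("PAM4", 4)]:
    print(f"{name}: {math.log2(levels):.2f} bits/symbol")
```

PAM4 doubles the data per symbol but shrinks the voltage eyes, which is part of why GDDR6X isn't especially power-efficient; PAM3 gives back a little density for cleaner signalling.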
 