
12GB and 16GB GPUs are NOT "A Gimmick"

If 6GB is good enough then so is 12GB. The extra 6GB equates to about $20 in cost, if that, so if they had a 6GB version for £20 less then that might be a solution.

The reason these same people (not sure about Hardware Unboxed) said "you don't need more than 4 cores" is because Intel said it; they unashamedly regurgitated Intel's crap. I would not be at all surprised if they are indeed cowed by Nvidia, as it's always Nvidia who skimps on VRAM. I'm sure they would have loved the 3060 to be a 6GB card at the same cost, but the 6700, its direct competitor, will be 12GB, and while that is overkill, I'd rather it be 12GB than 6GB.
I share your distaste for many of Nvidia's antics, but the 980 Ti had 6GB of VRAM vs the Fury X's 4GB, and the 6990 and the 295X2 both had less than ideal amounts, hastening their demise, so both parties are guilty of cheaping out in this department. Dual-chip cards especially should have been given more than twice the VRAM of the single-GPU cards, as the 4870X2 did so admirably, but corners have often been cut.
 
Every review back in 2014 said the GTX 780 6GB was useless over the 3GB version unless running multiple cards in SLI.
Fast forward to today and I'm still with the 6GB card, since 6GB is very handy.
So yes, a 12GB card is handy.
 
Just check the VRAM progression between previous generations, then look at the last two Nvidia generations: it has stagnated just as the new consoles are released. I think Nvidia is going to release an Ampere refresh with more VRAM, or the RTX 4000 series will have it, and they will then push for more VRAM usage to show off the advantages of the new GPUs (IMHO).
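
From memory, here's roughly how the xx80-class VRAM has moved generation to generation - treat the figures as my own recollection rather than gospel, but the flattening at the end is the point:

```python
# VRAM on Nvidia's xx80-class cards by generation (from memory,
# so illustrative rather than authoritative).
xx80_vram_gb = {
    "GTX 680": 2, "GTX 780": 3, "GTX 980": 4,
    "GTX 1080": 8, "RTX 2080": 8, "RTX 3080": 10,
}
for card, gb in xx80_vram_gb.items():
    print(f"{card}: {gb}GB")  # steady growth, then nearly flat for two gens
```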

Having said that, the rumours hint that the RX 6700 will be 6GB now.
 
Had they known how AMD was going to play things, they perhaps would have designed the chip with a wider bus to allow 8GB.

A 256-bit bus costs more money to produce than a 192-bit bus, so a 3060 with 8GB would probably be more expensive than one with 12GB.

A game only has to use 6.1GB of VRAM and the decision to go with 12GB is worth it.
 
12GB is arguably not needed on a 3060, but without altering the bus the other option of 6GB would perhaps have been too little... seems like they got caught with their pants down by AMD to me. Had they known how AMD was going to play things, they perhaps would have designed the chip with a wider bus to allow 8GB.

I think this is true for the entire lineup. I also think that for the top-end cards Nvidia are trying to avoid releasing a card that is too good.

Had the 3080 released with 20GB of VRAM, it probably could have lasted at least 4 years before a user would need to upgrade from it.
 
A 256-bit bus costs more money to produce than a 192-bit bus, so a 3060 with 8GB would probably be more expensive than one with 12GB.

A game only has to use 6.1GB of VRAM and the decision to go with 12GB is worth it.

Would love to see the maths tbh. Unfortunately we can only guess at numbers.

Of course adding a wider memory bus takes up die space, but the memory controllers are individually a relatively small component of the total die area, as you can see on this 3070 die shot: https://images.hothardware.com/cont...5/content/small_GeForce-RTX-3070-die-shot.jpg

I've seen estimates of around $5,600 per wafer for Nvidia on Samsung 8nm, so if we assume a full wafer of 3060 dies at 392mm² it'll give 152 dies. Assuming around 90% yield, that's around $41 per die.

Let's be massively over-generous and say it'd add 20% to the die area to accommodate them; that would make it around $54 per die, or about $13 more, vs the $36 for 6GB more VRAM... of course you'd still need to add $12 of extra VRAM to get up to 8GB, so that takes you to $25 extra to go with 8GB on a 256-bit bus vs $36 extra to stick with 192-bit and go to 12GB rather than 6.
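
If anyone wants to poke at the arithmetic, here it is as a quick script. Every input is one of my assumptions above (the $5,600 wafer, 152 dies, the 20% area penalty, the ~$6/GB for GDDR6 implied by $36 for 6GB), and I've had to guess an extra yield hit on the bigger die to land on the $54:

```python
# All inputs are the assumptions from the post above, not confirmed figures.
wafer_cost = 5600        # rumoured $ per Samsung 8nm wafer
dies_per_wafer = 152     # full wafer of 392mm2 dies
base_yield = 0.90

base_die = wafer_cost / (dies_per_wafer * base_yield)        # ~$41

# 20% more area for the 256-bit bus; 0.82 yield is my guess to
# reflect the bigger die yielding a bit worse (lands near $54).
wide_die = wafer_cost / (dies_per_wafer / 1.2 * 0.82)

gddr6_per_gb = 6                                             # implied by $36 for 6GB
extra_8gb_256 = (wide_die - base_die) + 2 * gddr6_per_gb     # ~$25 extra
extra_12gb_192 = 6 * gddr6_per_gb                            # $36 extra

print(f"base die ${base_die:.0f}, wider die ${wide_die:.0f}")
print(f"8GB/256-bit: +${extra_8gb_256:.0f} vs 12GB/192-bit: +${extra_12gb_192:.0f}")
```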

Obviously all of that is based on a multitude of assumptions and might be way off the mark, but it's as good a guess as any!

At worst I'd suggest there's little in it, and a wider bus with 8GB would likely outperform a narrower bus with 12GB at this card's intended performance tier. Of course it isn't as simple as adding a wider bus with a mouse click, so it's not something you can change on a whim - hence why I suspect they got caught out by AMD at an earlier point in the design stage, and to make the best of the situation they've been pushed into making it 12GB when it was probably only ever intended to have 6.
 
If it costs less than £250 I can accept it; otherwise it is an abomination of a card.

I have literally given up now - Dell, Sony and MS can get all the bits and bobs they want at "reasonable" prices, reasonable volumes, etc., and we get second-rate tarted-up rubbish. Even the laptop RTX 3060 is the full GA106 die! Dell also just got the Ryzen 7 5800 non-X, which is the Ryzen 7 5800X replacement. Yet we can't get these cheaper CPUs, because "reasons".

PC enthusiasts get the worst bins of these GPUs and CPUs at top-dollar pricing.

Also, expect the RX 6700 XT to be slightly slower than an RTX 3070, but priced close to one due to its 16GB VRAM. I expect the RX 6700 6GB to be priced at least at £350. OFC, street prices will be higher still.

Then the next generation will have more VRAM, and they will sponsor games to use more of it, forcing you to upgrade quicker unless you have at least a £450 GPU.

It's all about upselling the entry-level and mainstream gamer to higher and higher price points.
 
No, it's not the 3060 - though I think it's better with 12GB (overkill) than 6GB (just enough for now) - it's when they turn their attention to the Radeon 6000 series: is 16GB really too much for a 6800 XT, or even the 6800? How are they better with 8GB, other than reducing the cost for AMD?

I have 8GB on my 2070 Super; it has the physical prowess to run my favourite game at 4K, but it does not have enough VRAM. So I don't want to hear Jay or Steve bang on about GPUs with TWICE the muscle my card has having too much VRAM at 16GB when the alternative is 8GB. If I'm to replace this GPU it might well be the 6700 XT - not because it's much faster than my 2070 Super, as it probably isn't going to be, but because I can turn the resolution up to 4K in my favourite game.
Twice the muscle?? From what I am seeing, the 3060 is so neutered that it only does 5-10% more than a 2070 Super at most.
 
Does a GPU really need a super core (3090 or 6900 XT) to use better textures? When you play a game it tells you that you can't use xx textures if you don't have enough VRAM. I've seen it in some games.
 
Twice the muscle?? From what I am seeing, the 3060 is so neutered that it only does 5-10% more than a 2070 Super at most.

Let's face it, if you got the majority of the 20 series lineup you were mugged. Those that kept the 10 series cards have an upgrade path at least, but I'm not sure why, if you bought a 2070, you would even be thinking of getting a 3060... (unless you had sold it at the scalped second-hand going rate and could get hold of a 3060).
 

The complexity of the PCB can also be a significant contributor. A wider bus = more traces, often meaning more PCB layers and so more expensive production of the board itself. The cost of producing the die also increases more than you'd expect as it gets larger, because a bigger die means a higher defect rate per die and so lower yield.
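
To put a rough number on that yield effect, here's a toy Poisson defect model - the defect density is a pure guess on my part, not a published Samsung 8nm figure, so it's only meant to show that cost grows faster than area:

```python
# Toy Poisson yield model: yield = exp(-die_area * defect_density).
# Defect density (defects/cm^2) is an illustrative guess.
from math import exp, pi

def yielded_die_cost(area_mm2, wafer_cost=5600, defects_per_cm2=0.05,
                     wafer_diameter_mm=300):
    wafer_area = pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area / area_mm2                    # ignores edge loss
    die_yield = exp(-(area_mm2 / 100) * defects_per_cm2)  # area in cm^2
    return wafer_cost / (gross_dies * die_yield)

for area in (392, 470):  # base die vs a ~20% larger die
    print(f"{area}mm2: ${yielded_die_cost(area):.0f} per good die")
# ~20% more area comes out at ~25% more cost per good die
```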
 
A 256-bit bus also uses more power than a 192-bit one.

AMD saw fit to invent an entirely new cache system so they could use a 256-bit bus (rather than a 384-bit one) and with it keep the power under 300 watts.
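
The raw bandwidth numbers behind that trade-off are simple enough - bus width divided by 8, times the per-pin data rate (the rates below are the standard GDDR6 speeds, e.g. 15Gbps on the 3060 and 16Gbps on the 6800 XT):

```python
# Raw memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(192, 15))  # 360.0 - a 3060-style 192-bit config
print(bandwidth_gb_s(256, 16))  # 512.0 - the 6800 XT's 256-bit GDDR6
print(bandwidth_gb_s(384, 16))  # 768.0 - the wider bus AMD avoided
```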
 
The 12GB 3060 looks disappointing to me - it's supposed to be a 1440p/RT card, right? 29 fps in Control? Haven't seen CP2077 but I bet it struggles. The 3060 Ti is better, but I've just seen one go for £910; price/performance data is shot to pieces at the mo, but at 300 quid (LOL) I guess it's an OK card?
 