NVIDIA 'Ampere' 8nm Graphics Cards

So Nvidia is cancelling some unreleased Ampere cards.

I wonder if it's because AMD is too fast and Nvidia will go back to the drawing board to work on shifting Ampere to TSMC 7nm, since it's clearly a much better node than Samsung's garbage 8nm.

There is no "switching" to TSMC 7nm in the way people are talking.

People seem to have quickly forgotten (sorry for linking wccftech, but I can't find the relevant sections quickly in the videos and investor briefs):

https://wccftech.com/nvidia-tsmc-7nm-next-generation-gpu-ampere/
https://wccftech.com/nvidia-we-would-like-to-surprise-everyone-with-7nm-gpu-announcement/

People have equated this with them talking about the 3080 etc., but they're not - Nvidia didn't just suddenly switch to Samsung 8nm after those talks.
 
Are you suggesting they can't have an Ampere refresh on TSMC 7nm by September next year?
 
Probably because they twigged that, with the Supers and higher-memory cards waiting in the wings, the pre-orders in the queue have halved - because people are not stupid.

Would Nvidia put out the rumour to squash the Super and higher-memory card speculation, only for them to suddenly and miraculously appear sometime next year? Could they possibly be that duplicitous? Haha, what do you think?

Apparently Linus from Linus Tech Tips has put in a copyright claim against this video for the minute of footage Jim used from Linus's review of the 3090.

If you don't watch the video to the end, you will miss where Jim says he is giving up on the whole 'tech tuber thing'.

Yes, that was naughty, though the bit where he says Linus said the 3090 wasn't a Titan-class card and went against what he'd been sponsored to say was interesting. I wouldn't be surprised if that was why he was told to hand the 3090s back to Nvidia. Poor old Jim, I'm going to miss him.
 
Agreed. It very likely will be fine for a long while yet. 8K doesn't even push the VRAM on the 3080, so 20GB would IMO be wasted for just gaming.

To actually push over the VRAM limit takes a lot of doing. I don't think I've ever seen it happen - not unless you enjoy playing at single-digit fps anyway :p

When you look at current games and correct the VRAM measurement for "VRAM allocated" vs "VRAM actually used", what you find is that the most demanding games so far (Crysis Remastered, FS2020, Avengers), all at Ultra 4K, actually drop into the sub-20fps range while their VRAM usage is still below 10GB. The idea that 10GB is not enough gets pushed incredibly hard on these forums, but all the tests and experiments so far have shown that's not the case now, and it isn't likely to be the case with future games.
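For anyone who wants to poke at the allocation numbers themselves, here's a rough Python sketch using the pynvml bindings (assumes the pynvml package and an NVIDIA driver are installed). Bear in mind that NVML, like the usual overlays, reports memory allocated to a process, not what the game actually touches each frame - which is exactly the allocated-vs-used distinction above:

```python
# Rough sketch: report GPU memory allocation via NVML (pynvml bindings).
# NOTE: these numbers are *allocations* - they say nothing about how much of
# that memory the game actually reads/writes per frame.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)       # card-wide totals, in bytes
    print(f"Total: {mem.total / 2**30:.1f} GiB, "
          f"allocated: {mem.used / 2**30:.1f} GiB, "
          f"free: {mem.free / 2**30:.1f} GiB")

    # Per-process allocations (a running game will show up here).
    for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
        mib = (proc.usedGpuMemory or 0) / 2**20
        print(f"  pid {proc.pid}: {mib:.0f} MiB allocated")
finally:
    pynvml.nvmlShutdown()
```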
 
Well, that is true, and it would explain why Nvidia's engineers think 10GB is enough. But then it's strange - if that's so clear-cut, why were Nvidia going to put the higher-memory cards into production in the first place?
 
Apparently Linus from Linus Tech Tips has put in a copyright claim against this video for the minute of footage Jim used from Linus's review of the 3090.

If you don't watch the video to the end, you will miss where Jim says he is giving up on the whole 'tech tuber thing'.

It was done by a third-party company and all is sorted now.
 
Apparently Linus from Linus Tech Tips has put in a copyright claim against this video for the minute of footage Jim used from Linus's review of the 3090.

If you don't watch the video to the end, you will miss where Jim says he is giving up on the whole 'tech tuber thing'.

Can't wait for Adored to get drunk again and go on one of his infamous El Trump-style Twitter sprees where he defames and insults people he doesn't agree with.
 
Probably because they twigged that, with the Supers and higher-memory cards waiting in the wings, the pre-orders in the queue have halved - because people are not stupid.

Or because releasing new SKUs while people are still waiting for their orders to arrive would be a PR disaster.

The 16/20GB cards will be Supers on 7nm in about 10 months' time.

As Rroff has previously said, you can't just ring up someone at TSMC, tell them you've messed up and need them to make your chips instead. Nvidia have used more than one foundry in the same architecture generation before. Nvidia say they've been working on Ampere since they released Turing, so having TSMC on board would have been in the books for at least half that time.
 
Well, that is true, and it would explain why Nvidia's engineers think 10GB is enough. But then it's strange - if that's so clear-cut, why were Nvidia going to put the higher-memory cards into production in the first place?

My guess is that the response to 10GB of VRAM has been overwhelming, with lots of people just flat-out claiming it's not enough. When I looked at a decent cross-section of comments on this, what I found was a lot of arguments that went along these lines:

"It's a premium card it should have more"
"It should have more than the last gen"
"Nvidia is greedy for not using more vRAM"
"Nvidia skimped on the vRAM"
"I'm not paying £700 for a card with 10Gb of vRAM"

Just endless comments like this, none of which come from an objective measurement of what games use and need, or from the relative balance of bottlenecks between the VRAM and the GPU. It was all just emotional language about how people thought they deserved some big number. And at the end of the day, if people are going to behave that way and buy a 16GB card because AMD has made one, or for some other reason, then Nvidia are going to follow suit and eat into their margin, even if that VRAM goes unused like last gen's 11GB configs did.
 
More 3070 benchmarks have been leaked - Ashes of the Singularity, of course... https://videocardz.com/newz/nvidia-geforce-rtx-3070-ashes-of-the-singularity-performance-leaks

Seems performance isn't so far behind the 3080, at least in this game: 12% at 1080p, 15% at 1440p and 20% at 4K.

But at £500+ and 8GB of VRAM? The xx70 is traditionally a low-to-mid-range part, now priced into the mid-to-high range.

Basically Nvidia is shifting buyers down the stack: if you're normally an xx70 buyer, what you're used to paying now gets you an xx60; if you were an xx80 buyer, you're now looking at an xx70, etc.

The 3080 price was always too good to be true, especially now we are seeing some xx70 models costing more than the 3080 FE.

The whole thing is ridiculous.
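For what it's worth, here's a trivial sketch of how those gaps translate into frame rates. The 3080 FPS figures below are invented purely for illustration; only the percentage deficits come from the leak, and I'm reading them as "the 3080 is X% faster":

```python
# Toy arithmetic: what 12/15/20% gaps vs a 3080 would look like in FPS.
# The 3080 figures are invented; only the percentage gaps come from the leak,
# read here as "3080 is X% faster than the 3070".
assumed_3080_fps = {"1080p": 120.0, "1440p": 100.0, "4K": 70.0}   # hypothetical
gap_vs_3070 = {"1080p": 0.12, "1440p": 0.15, "4K": 0.20}

for res, fps_3080 in assumed_3080_fps.items():
    fps_3070 = fps_3080 / (1 + gap_vs_3070[res])   # "X% faster" => divide, not subtract
    slower = 100 * (1 - fps_3070 / fps_3080)        # note: not the same number as X
    print(f"{res}: 3080 {fps_3080:.0f} fps -> 3070 ~{fps_3070:.0f} fps ({slower:.0f}% slower)")
```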
 
But at £500+ and 8GB of VRAM? The xx70 is traditionally a low-to-mid-range part, now priced into the mid-to-high range.

We are in a transitional period, moving from raster to RT, which means more silicon is in use. I'm not trying to defend Nvidia here, but let's not lose sight of what you get with these GPUs - raster, RT and Tensor hardware all rolled into one chip. I'd imagine they would still want £350-£400 for a chip solely focused on legacy raster performance.

AMD's hybrid design suits this period better, as they can apparently balance performance dynamically between RT and raster, albeit to the detriment of RT.

Looks like we now have a choice: if RT is more important, pay a premium to Nvidia; otherwise AMD are looking very promising.
 
My guess is that the response to 10GB of VRAM has been overwhelming, with lots of people just flat-out claiming it's not enough. When I looked at a decent cross-section of comments on this, what I found was a lot of arguments that went along these lines:

"It's a premium card it should have more"
"It should have more than the last gen"
"Nvidia is greedy for not using more vRAM"
"Nvidia skimped on the vRAM"
"I'm not paying £700 for a card with 10Gb of vRAM"

Just endless comments like this, none of which come from an objective measurement of what games use and need, or from the relative balance of bottlenecks between the VRAM and the GPU. It was all just emotional language about how people thought they deserved some big number. And at the end of the day, if people are going to behave that way and buy a 16GB card because AMD has made one, or for some other reason, then Nvidia are going to follow suit and eat into their margin, even if that VRAM goes unused like last gen's 11GB configs did.

I have been through that thread (read more than posted).
I believe folks there rely too much on simplified empirical evidence to defend 10GB.
It's very difficult to run scenarios when folks aren't presenting any kind of underlying theory of how VRAM works in most games.
I personally couldn't glean much insight from there other than trivia on how much VRAM is currently being utilised (and the thread just refuses to die).
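For illustration only, here's a completely made-up toy model of the allocated-vs-used distinction from earlier in the thread. Every number below is invented; it's just to show why a raw "VRAM used" readout can overstate what a single frame actually needs:

```python
# Toy model (all numbers invented): why "allocated" overstates what a frame needs.
# An engine typically keeps a large texture pool resident as a cache, but each
# frame only samples the subset of assets that is actually visible.
GIB = 2**30

texture_pool = {f"asset_{i:03d}": 64 * 2**20 for i in range(140)}   # ~8.75 GiB resident pool
visible_this_frame = [f"asset_{i:03d}" for i in range(0, 140, 3)]   # subset in view

allocated = sum(texture_pool.values())
touched = sum(texture_pool[name] for name in visible_this_frame)

print(f"Allocated (what monitoring tools report): {allocated / GIB:.1f} GiB")
print(f"Touched this frame (the working set): {touched / GIB:.1f} GiB")
```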

We are in a transitional period, moving from raster to RT, which means more silicon is in use. I'm not trying to defend Nvidia here, but let's not lose sight of what you get with these GPUs - raster, RT and Tensor hardware all rolled into one chip. I'd imagine they would still want £350-£400 for a chip solely focused on legacy raster performance.

AMD's hybrid design suits this period better, as they can apparently balance performance dynamically between RT and raster, albeit to the detriment of RT.

Looks like we now have a choice: if RT is more important, pay a premium to Nvidia; otherwise AMD are looking very promising.

AMD's RT is a lot more elegant.
I believe RTX stalls the graphics pipeline, effectively bottlenecking it to the extent of ~40% FPS drops,
so it makes sense to load-balance the hardware to accommodate those kinds of drops, IMO.
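Taking that 40% figure at face value (it's an estimate, not a measured number), the frame-time arithmetic looks roughly like this:

```python
# Quick frame-time arithmetic for a 40% FPS drop (the 40% is the estimate from
# the post above, not a measured figure; the base frame rate is hypothetical).
base_fps = 100.0                 # hypothetical raster-only frame rate
fps_drop = 0.40                  # claimed drop when RT is enabled

rt_fps = base_fps * (1 - fps_drop)
base_ms = 1000.0 / base_fps
rt_ms = 1000.0 / rt_fps

print(f"{base_fps:.0f} fps -> {rt_fps:.0f} fps")
print(f"Frame time: {base_ms:.1f} ms -> {rt_ms:.1f} ms "
      f"(+{100 * (rt_ms / base_ms - 1):.0f}% per frame once RT is on)")
```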
 