• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

NVIDIA ‘Ampere’ 8nm Graphics Cards

Don't think anyone in here has noticed yet that the new 3080 is rumoured to only have 10GB. What are your thoughts on this? :D:p

Don't think anyone here has noticed yet that the RTX 3080 has an HDMI 2.1 output. Please, everyone, share your thoughts on how this will help you game on your new LG OLED TV. :D:p
 
Don't think anyone in here has noticed yet that the new 3080 is rumoured to only have 10GB. What are your thoughts on this? :D:p

No comment! :p

Don't think anyone here has noticed yet that the RTX 3080 has an HDMI 2.1 output. Please, everyone, share your thoughts on how this will help you game on your new LG OLED TV. :D:p

That solves the heating for the living room this winter.

:P
 
Are you claiming it's a linear relationship (always)? That there is a linear relationship between VRAM usage and required GPU performance to render a frame?

Such that any increase in VRAM requirement must result in a proportional (linear) increase in GPU stress?

I'm curious.

The fact that there are sometimes multiple VRAM configurations of the same card would seem to call this into question, e.g. the RX 480 4GB and RX 480 8GB. In such cases, would you claim that the 8GB variant is of no value? That the 4GB card must be just as viable as the 8GB card?

I'm not putting words in your mouth here; I'm just asking questions.

I've not stated that the relationship is strictly linear, because I'm honestly not sure. However, the relationship wouldn't need to be linear for you to make predictions about the future demands games will place on both vRAM and the GPU. All that matters is that it's not random: if there's a predictable relationship, even a non-linear one (maybe it's exponential or logarithmic), you can still use it to predict.

But we do know there is a relationship between these two things, and that must necessarily be the case, because the reason we put a model, a texture or any other asset into vRAM is so the GPU can calculate the next frame (or an upcoming frame) using that asset. Any additional unique asset you add to a scene in-game necessarily increases both the load on the GPU and the amount of vRAM used.

And if you take a step back and look at the broader picture of gaming over, say, the last 20 years, comparing rough GPU performance (say, FP32 throughput in TFLOPS) against the amount of memory, it does look like a linear relationship. I've just been faffing with some numbers and added a trend line, and a linear trend line seems to fit best. I've been buying GPUs since the Voodoo days in about 1998, so that's more than 20 years, and the relationship is obvious: as GPUs get faster, vRAM increases.

[Attachment: Capture.png — GPU performance vs vRAM scatter with trend line]
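For illustration, here's a minimal sketch of the kind of trend-fitting I mean, in Python. The TFLOPS/vRAM pairs are rough flagship figures from memory rather than my actual spreadsheet numbers, so treat them as illustrative:

```python
import numpy as np

# Rough flagship figures from memory (illustrative, not exact data):
# FP32 TFLOPS and vRAM in GB, roughly 8800 GTX through RTX 2080 Ti.
tflops = np.array([0.35, 1.6, 4.6, 8.9, 13.4])
vram_gb = np.array([0.75, 1.5, 4.0, 8.0, 11.0])

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear model: vram = a * tflops + b
a, b = np.polyfit(tflops, vram_gb, 1)
print(f"linear R^2: {r_squared(vram_gb, a * tflops + b):.3f}")

# Logarithmic model: vram = a * ln(tflops) + b
a2, b2 = np.polyfit(np.log(tflops), vram_gb, 1)
print(f"log    R^2: {r_squared(vram_gb, a2 * np.log(tflops) + b2):.3f}")

# Whichever model fits, extrapolation is the same move: plug in the
# TFLOPS of a future card and read off the vRAM the trend predicts.
print(f"trend says a 30 TFLOPS card wants ~{a * 30 + b:.0f} GB (linear)")
```

The point isn't which model wins on a handful of points; it's that once any model fits consistently, you can plug in a future card's speed and read a vRAM prediction off the trend.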


I mean, I used to do modding a long time ago, back at uni and before, when we were on GeForce 4- and 5-series Nvidia cards; I had a Ti 4600 at the time. I remember mapping in Unreal Engine back then, watching performance and vRAM usage. You can keep throwing more unique static meshes and textures into the level you're making, and as you get near the ceiling of your vRAM, you notice you're getting into unplayable frame rate territory as well. You can't just worry about one limit and ignore the other; you have to consider both.

With different vRAM configs, I kinda touched on this before. Not all cards are used strictly for gaming; other applications use them as well, like non-real-time rendering and CAD. You see this a lot with the Quadro cards, which are marketed at those people as "workstation" cards. It's why the 3090 is going to have 24GB: it's a halo product. There's no way in hell you'd ever load that card up with 24GB of game assets and still have a playable frame rate; 16GB would be more than sufficient for gaming.

One thing to keep in mind is that multiple vRAM configs of the same video card are rare, and my suspicion is this has something to do with architecture limitations. The way the architectures are built means vRAM configs are only possible in certain multiples; in the 1060's case you could get 3GB, or double that at 6GB. So what happens if the expected useful amount of vRAM is, say, 4GB? You can either under-provision or over-provision. If you under-provision, maybe you only lose a small amount of quality or frame rate, but the card is cheaper with less vRAM. The 6GB has more than enough vRAM but a jacked-up price. If there's no obvious best config, it's probably better to make both and let the consumer decide whether that trade-off is worth it.

I'm reading an article which states the base prices of these cards were £190 for the 3GB and £240 for the 6GB. You seemed cynical before about the cost of vRAM and whether cutting it led to savings, so I think this is a clear example where it does. Source: https://www.eurogamer.net/articles/digitalfoundry-2016-nvidia-geforce-gtx-1060-3gb-vs-6gb-review_14
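To make the "certain multiples" point concrete, here's a rough sketch of the arithmetic as I understand it; the bus width and chip densities are the figures usually quoted for the 1060 and GDDR5 of that era, so take them as illustrative:

```python
# Why vRAM comes in coarse steps: the memory bus splits into 32-bit
# channels with one GDDR5 chip each, and chips come in fixed densities
# (4Gb = 0.5 GB or 8Gb = 1 GB per chip in that era).
BUS_WIDTH_BITS = 192          # GTX 1060's memory bus
CHANNEL_BITS = 32             # one chip per 32-bit channel
CHIP_DENSITIES_GB = [0.5, 1.0]

channels = BUS_WIDTH_BITS // CHANNEL_BITS  # 6 chips on a 1060
for density in CHIP_DENSITIES_GB:
    total = channels * density
    print(f"{channels} chips x {density} GB = {total:g} GB")
# Output: 3 GB and 6 GB. A 4 GB config isn't reachable without mixing
# densities or cutting the bus down, which would cost bandwidth.
```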
 
There is only a 30W difference between the 3080 and 3090, despite a huge 1,000 CUDA core difference.

This tells me two things:

AIBs will just use the same coolers for both, because the TGP is so similar.

And secondly, the 3090 is much higher-binned silicon than the 3080. The 3080 silicon looks like total trash tier compared to the 3090.
 
The RTX 3080 also has a 17% core difference. That tells me Nvidia is leaving room for an RTX 3080 Ti, which will probably use better GA102 dies.
 
Seems likely. One of the things with product lineups like this is that the top-tier bins tend to have quite small yields: you need to make a lot of chips to get the handful you need for the better products, and I'm fairly sure that's why they tend to come a bit later.

I remember with my 5970, a dual-GPU AMD card: I think to keep the card under the PCIe power limit they needed really power-efficient GPUs, but two per card, so the cards were very rare.
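Just to illustrate the small-bin point, here's a toy model with completely made-up numbers; it's only meant to show why the top tail of a distribution takes a lot of wafers to stock:

```python
import random

# Toy model of binning: pretend each die's max stable clock is normally
# distributed (mean/spread/cutoffs below are invented, purely illustrative).
random.seed(0)
N = 100_000
clocks = [random.gauss(1900, 60) for _ in range(N)]  # MHz per die

halo_cutoff, mainstream_cutoff = 2000, 1850
halo = sum(c >= halo_cutoff for c in clocks)
mainstream = sum(mainstream_cutoff <= c < halo_cutoff for c in clocks)

print(f"halo bin:       {halo / N:.1%}")        # thin tail -> slow to stock
print(f"mainstream bin: {mainstream / N:.1%}")  # bulk of the wafer output
```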
 
Same. Any more and I'll just grab a 3080 and pocket the difference.

I'm kinda hoping, with all the £1,400 rumours floating around, that when it comes time for release they say £1,200, so I feel less like I should have been lubed up first and more like I'm actually getting a bargain... even though I'm probably not, lmao.
Why would anyone be happy paying MORE than the 2080 Ti was at launch? Not to mention £600 more than the 1080 Ti was! That's nearly double the price in 4 years, FFS!
 
Anybody notice that the guy who leaked the prices also said the Founders Editions were $100 more expensive? So the 3080 FE will be $900 and the 3090 FE will be $1,500?

Only clown cards will be less.

If Nvidia think I'm paying £900 for a card in 2020 with a life-limiting 10GB, they can ram it!
 
They don't care; there are plenty who will ram their money into Nvidia's coffers to make up for the loss. Don't like it? Wait and buy AMD.
 
Techspot did some tests a few years ago with the GTX 1060 3GB and GTX 1060 6GB. At 1440p, the GTX 1060 6GB system could get away with 8GB of system RAM to reach maximum performance, but the GTX 1060 3GB needed 16GB of RAM to do the same. What you would see with insufficient system RAM was worse 1% lows.
And guaranteed the testers and reviewers will be testing the cards in systems with 64GB+ of system memory.
 
Hello

Nvidia leaking from all sides. I'll take that, thanks. I don't know if it's due to IPC, clocks, core count or improvements to RTX/DLSS, but if ray-traced games are twice as fast as on the 2080 Ti, I'm happy with that!
 
They don't care; there are plenty who will ram their money into Nvidia's coffers to make up for the loss. Don't like it? Wait and buy AMD.
I intend to. I plan to buy a second-hand 2080 Ti, then see AMD's offerings. But as this is the Ampere thread, this is where I will discuss Ampere. Funny that.
 
Already proven to be faked.
He loves spreading fake news lately :p

I have been saying for a long time that Ampere would have at least 2x RT performance, no surprise there. Many were not buying it :D

So glad I waited. None of the games I wanted to upgrade for got released, due to delays, so I did not miss out. The 2000 series really did have rubbish RT.
 