NVIDIA ‘Ampere’ 8nm Graphics Cards

OK. If preorders are live tomorrow, who is actually buying a 3090 FE and what's the most you'll pay?

I will be at a maximum of £1399.
Same. Any more and I'll just grab a 3080 and pocket the difference.

I'm kind of hoping, with all the £1400 rumours floating around, that when it comes time for release they say £1200, so I feel less like I should have been lubed up first and more like I'm actually getting a bargain... even though I'm probably not, lmao.
 
Those lines are the results of 7 different runs in the same part of the map.

VRAM usage was at 7.95GB, and even a slight variation in path could push it just over 8GB as the assets would load in at slightly different times. Even at 7.95GB of VRAM usage, the GPU wasn't at 100% utilisation. At over 8GB of usage, GPU utilisation dropped under 50%.

I have tested VRAM limits on modded games. Fallout 4 was capped at 60FPS, and a GTX 1080 can easily push 60FPS on that engine at qHD with a reasonable CPU; it's nowhere near 100% usage. Even at close to 8GB of VRAM usage, GPU usage wasn't near 100%, and when it breached 8GB, GPU usage plummeted.

So if a GTX 1080 at nowhere near 100% usage can have its performance crash when it goes over 8GB of VRAM usage, do you really think that if I had 11GB of VRAM on my GTX 1080 performance would crash??

No it wouldn't.

Guess what: 4K is 2.25X the number of pixels rendered compared to qHD. So at 4K, 8GB wouldn't have been enough.
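For reference, the raw pixel maths (taking qHD here to mean 2560x1440 and 4K to mean 3840x2160 UHD):

    # Pixel counts: QHD (2560x1440) vs 4K UHD (3840x2160)
    qhd = 2560 * 1440        # 3,686,400 pixels
    uhd = 3840 * 2160        # 8,294,400 pixels
    print(uhd / qhd)         # 2.25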

Edit!!

If that is not enough for you,then put your money where your mouth is and buy a 6GB/8GB card.

I brought up my GTX 560 to illustrate that there is a point where the GPU can't use any more vram because it's not strong enough to make use of it.

How do you calculate where that point is, or do you actually think that the amount of vram any GPU can use is unlimited?
 
The PCIe 4.0 thing also ties in with the memory thing.
ComputerBase did some PCIe 4.0 scaling tests: the RX 5700 showed no difference, the RX 5500 XT did for the 4GB version (100% vs 112%), but the most interesting thing was the RX 5600 XT:
[RX 5600 XT PCIe 3.0 vs 4.0 scaling chart from the ComputerBase review]

https://www.computerbase.de/2020-02/amd-radeon-pcie-3.0-4.0-test/
Okay, it's only 6%, but it looks like 6GB isn't quite enough VRAM and PCIe 4.0 helps to fill it from main memory.
BTW, also regarding memory: the last time TPU had a Tahiti and a GK104 card in the same review, the difference between the two was huge.
https://www.techpowerup.com/review/nvidia-geforce-gtx-1080/26.html
[Relative performance chart from the TPU GTX 1080 review]


The 770 with only 2GB is way behind the 280X. In fact, Tahiti has overtaken the GTX 780 in 1440P and 4K. Pity there's no 4GB 770 or 680 in there, but the 780 indicates that it's not just a VRAM thing.


The 5700 XT is not the fastest card; it's quite fast, but by the time the next generation comes along this difference will magnify.

The 2080 Ti does not have PCIe 4, but PCIe 4 is 2X the bandwidth, so you can get a rough idea by cutting the bandwidth from x16 to x8.

PCIe 4 vs PCIe 3 already has an effect, and it will matter with this coming generation.

5700XT PCIe3 vs PCIe4: 4%

2080TI X8 vs X16: 16%

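As a rough sanity check on that approximation, the one-way link rates work out like this (a quick sketch using nominal transfer rates and 128b/130b encoding; real throughput is a bit lower once protocol overhead is counted):

    # Approximate one-way PCIe bandwidth in GB/s
    def pcie_gbs(gen, lanes):
        gt_per_lane = {3: 8.0, 4: 16.0}[gen]          # GT/s per lane
        return gt_per_lane * 128 / 130 / 8 * lanes    # 128b/130b encoding

    print(round(pcie_gbs(3, 16), 1))   # 15.8 GB/s
    print(round(pcie_gbs(4, 8), 1))    # 15.8 GB/s (same as PCIe 3.0 x16)
    print(round(pcie_gbs(3, 8), 1))    # 7.9 GB/s  (the x8 "simulation")
    print(round(pcie_gbs(4, 16), 1))   # 31.5 GB/s

Cutting a PCIe 3.0 card to x8 reproduces the same 2:1 bandwidth ratio that 4.0 x16 holds over 3.0 x16, which is the approximation being made here.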


 
OK. If preorders are live tomorrow, who is actually buying a 3090 FE and what's the most you'll pay?

I will be at a maximum of £1399.
I'm in for a 3090 at some point but not in any hurry, despite knowing my 1080Ti is gonna struggle if the Reverb G2 shows up first.
 
The 5700 XT is not the fastest card; it's quite fast, but by the time the next generation comes along this difference will magnify.

The 2080 Ti does not have PCIe 4, but PCIe 4 is 2X the bandwidth, so you can get a rough idea by cutting the bandwidth from x16 to x8.

PCIe 4 vs PCIe 3 already has an effect, and it will matter with this coming generation.

5700XT PCIe3 vs PCIe4: 4%

2080TI X8 vs X16: 16%






That's probably just another Ryzen bug, since PCIe 4 is only available on Ryzen right now. Doubt we'll see the same results when Intel's 11th gen (Rocket Lake) launches later on.

Testing the fastest GPU currently released on Intel (fastest gaming platform), the 2080ti, shows only a slight benefit from v3 x8 to x16, meaning it's only just using more bandwidth than v3 x8 provides.
 
Having looked into this further since the other night, it's also a bit of an unknown what vRAM usage is actually necessary. Games will allocate a whole bunch based on what is available and then internally manage what they put in there, and they don't always fill what they actually reserve. It turns out some of the extremely few examples of large vRAM usage, like Resident Evil going over 8GB, is memory that's allocated but not actually being used. Benchmarks with cards with different amounts of vRAM more or less confirm this: it's not an impact on performance, and you can assign way less than 8GB and it's fine.
The point about the 2080Ti stands. The 3070 is supposed to be 2080 Ti performance yet has 3GB less VRAM.

Sure it's cheaper, but your argument is that VRAM is paired scientifically to GPU grunt and nVidia don't give more than you need.

I am assuming you will simply say the 2080 Ti was a halo product so nV charged more and included more VRAM than necessary.

Moving on.

You said in reply to me previously that "lots of (fast) system RAM" and "ultra-fast NVME" would not be necessary, even with less VRAM than we were hoping for (8 GB - 10 GB for 70/80).

So let's throw in a hypothetical system which doesn't have these. We'll align somewhat closely to the new consoles. Say a system with:

8GB DDR3 or DDR4 sys RAM.
A 3070 8GB or 3080 10 GB.
No SSD at all.
Whatever CPU you like.
PCIe 3x at max.

You said previously that software would make up for the shortfall in hardware. That clever software techniques could ensure smooth gameplay when VRAM was full to capacity.

Looking at those specs, do you still conclude that there would be "no difference" between an 8 GB 3070 and a 12+ GB 3070? Or an 11GB 2080 Ti?

No time when the software wouldn't be able to smooth out the lack of VRAM?
 
Of course it could, if there was any software which took advantage of it.

I don't think you understand how VRAM works. 11GB of data isn't being used constantly, it's stored there because it's quicker than accessing it from system RAM. No different than having data in system RAM vs getting it from disk or SSD.

The data stored there but not being used is basically a waste; that's only ever done if the game suspects that something not yet in use will be needed in the near future. Modern games have assets that drive installations up to 100GB or that kind of region, which, once uncompressed into a usable form in vRAM, would probably be way bigger, something like 2x that. So you have maybe 200GB of raw uncompressed assets for the game and maybe 8-10GB of vRAM, although 75% of gamers have 6GB or less as it stands today. Game engines are already heavily optimised for streaming assets from slower storage sources like SSDs in time for them to be needed by the GPU, and that vRAM pool is constantly being churned with new data. I used the example of GTA V: my install is 91GB, yet I can zip about the map in a jet or helicopter and it streams all those assets in and out of vRAM with zero trouble whatsoever. All modern game engines do this, because the amount of unique assets we want in a game exceeds vRAM limitations by something like an order of magnitude.
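To illustrate that churn, a streaming pool behaves very roughly like an LRU cache with a fixed budget. This is only a toy sketch; the class, asset names and sizes below are made up and no real engine is this simple:

    from collections import OrderedDict

    class AssetCache:
        """Toy LRU model of a streaming vRAM pool (sizes in MB)."""
        def __init__(self, budget_mb):
            self.budget = budget_mb
            self.resident = OrderedDict()   # asset name -> size, coldest first
            self.used = 0

        def request(self, name, size_mb):
            if name in self.resident:                # already resident: mark as recently used
                self.resident.move_to_end(name)
                return
            while self.resident and self.used + size_mb > self.budget:
                _, freed = self.resident.popitem(last=False)   # evict the coldest asset
                self.used -= freed
            self.resident[name] = size_mb            # "stream in" from SSD / system RAM
            self.used += size_mb

    vram = AssetCache(budget_mb=8 * 1024)            # hypothetical 8GB card
    vram.request("downtown_textures", 1200)
    vram.request("player_model", 150)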

What actually matters is how much vRAM the GPU needs to render the next frame: which assets are visible and are going to be used in the next calculation. There's also a relationship whereby the more active vRAM you're using, the lower the frame rate. We put things into vRAM so the GPU can do calculations on that data to draw the next frame; if you double the unique objects being rendered by throwing loads of new models and textures into vRAM, you give your GPU more work to do and thus get a lower frame rate. So there's really a ceiling on the maximum useful vRAM any specific GPU needs.

We've seen what happens when you load up a game with over 10GB worth of assets: it takes so long to render the next frame that the game is unplayable. Go look at the FS2020 benchmarks and see what happens at 12.5GB of vRAM usage.
 
Anybody notice the guy who leaked the prices also said the Founders Editions were $100 more expensive? So the 3080 FE will be $900 and the 3090 FE will be $1500?

only clown cards will be less.
 
I brought up my GTX 560 to illustrate that there is a point where the GPU can't use any more vram because it's not strong enough to make use of it.

How do you calculate where that point is, or do you actually think that the amount of vram any GPU can use is unlimited?

Look, you asked about the GTX 1080 Ti, saying you doubted it would run out of VRAM. I ran out of VRAM at qHD on a slower GTX 1080 in those scenarios in 2017, and the GPU wasn't at full utilisation in either case. After that I was more careful about what settings and mods I used, to keep it within the 8GB framebuffer.

If I had a 4K monitor, yes, your GTX 1080 Ti would happily use 11GB of VRAM without even being fully utilised, because people have run out of VRAM in modded games at higher resolutions using one. Once you run out of VRAM, you get caching to system RAM, which causes stuttering, and GPU utilisation falls.

A lot of people keep saying low VRAM is fine - but most don't actually bother to test and see what happens if a game goes over it. If it leads to performance drops, you have run out.
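For anyone who does want to test it, polling nvidia-smi alongside the game shows both numbers at once. A minimal sketch, assuming a single NVIDIA card with nvidia-smi on the PATH:

    import subprocess, time

    # Log VRAM usage and GPU utilisation once a second; watch what happens
    # to utilisation as memory.used approaches the card's limit.
    while True:
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=memory.used,memory.total,utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True).stdout.strip()
        used, total, util = (int(x) for x in out.split(", "))
        print(f"{used}/{total} MiB VRAM, {util}% GPU")
        time.sleep(1)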

The thing is, unlike 10 years ago, VRAM increases have slowed down from generation to generation. It used to be common to get a doubling (or at least a 50% increase) in VRAM quantities each generation.

Between the Nvidia 7800 series/X1900 (512MB) and Kepler/R9 290, we went from 512MB to 4GB/6GB of VRAM. That is a 12X improvement (or 8X if we ignore the Titan), in 8 years. We also went from GDDR3 to 512-bit GDDR5 memory buses.

In the six years since, we have not even seen a doubling (6GB to 11GB), despite GPUs becoming much more powerful (the Titan is now so expensive it's an outlier, and even that is only a 4X increase).
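Putting back-of-envelope numbers on that slowdown, using the figures above:

    # Implied yearly VRAM growth rates
    first_era  = (6144 / 512) ** (1 / 8)   # 512MB -> 6GB over ~8 years
    second_era = (11 / 6) ** (1 / 6)       # 6GB -> 11GB over ~6 years
    print(round(first_era, 2), round(second_era, 2))   # ~1.36x vs ~1.11x per year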

The old consoles only had 8GB or a bit more RAM IIRC, so there was at least some level of trying to manage things, and you see that with textures.

Many games don't use textures as high-resolution as they could due to console limitations, but certain games can, which does push VRAM usage up. Modded games can also move textures to 4K and even 8K - in the past, games used to increase texture resolutions more quickly each generation, as VRAM quantities were rising in dGPUs.

Now we have consoles which have not only increased the VRAM amount, but can use very fast SSDs to cache textures. That means more VRAM usage, and more need for faster storage to cache files.

Most PCs don't even support PCI-E 4.0, so can't do something similar. We also have increased use of DX12/Vulkan, which also seems to increase VRAM usage.
 
As you turn graphical options up, vRAM usage goes up and frame rate goes down.

Except in at least one major case this is not true: texture quality, which is almost solely VRAM dependent.
And the ratio between the performance hit and the VRAM used is not the same for every setting either.

This is what you refuse to understand - you can have a lot of vram be put to use with minimal performance hit.
 
The 5700 XT is not the fastest card; it's quite fast, but by the time the next generation comes along this difference will magnify.

The 2080 Ti does not have PCIe 4, but PCIe 4 is 2X the bandwidth, so you can get a rough idea by cutting the bandwidth from x16 to x8.

PCIe 4 vs PCIe 3 already has an effect, and it will matter with this coming generation.

5700XT PCIe3 vs PCIe4: 4%

2080TI X8 vs X16: 16%



The other major factor is whatever the new consoles do with streaming stuff from their NVMe drives.
In an ideal world both consoles would have launched with more RAM, but the streaming thing will be interesting to watch too.
High end PC rigs can always try going crazy on system RAM even for gaming. 4 x 32GB to allow huge RAM caching?
Wonder if that Radeon Pro with the SSD onboard had any influence on what the consoles are doing?
 
Look, you asked about the GTX 1080 Ti, saying you doubted it would run out of VRAM. I ran out of VRAM at qHD on a slower GTX 1080 in those scenarios in 2017, and the GPU wasn't at full utilisation in either case. After that I was more careful about what settings and mods I used, to keep it within the 8GB framebuffer.

If I had a 4K monitor, yes, your GTX 1080 Ti would happily use 11GB of VRAM without even being fully utilised, because people have run out of VRAM in modded games at higher resolutions using one. Once you run out of VRAM, you get caching to system RAM, which causes stuttering, and GPU utilisation falls.

The thing is, unlike 10 years ago, VRAM increases have slowed down from generation to generation. It used to be common to get a doubling (or at least a 50% increase) in VRAM quantities each generation.

Between the Nvidia 7800 series/X1900 (512MB) and Kepler/R9 290, we went from 512MB to 6GB of VRAM. That is a 12X improvement (or 8X if we ignore the Titan), in 8 years. In the six years since, we have not even seen a doubling (6GB to 11GB), despite GPUs becoming much more powerful (the Titan is now so expensive it's an outlier). Now we have consoles which have not only increased the VRAM amount, but can use very fast SSDs to cache textures, and most PCs don't even support PCI-E 4.0, so can't do something similar.
We also have increased use of DX12/Vulkan, which also seems to increase VRAM usage.

Do you think every GPU is capable of using an unlimited amount of vram?

If not, how do you calculate how much vram a given GPU is capable of using?

How?
 
Do you think every GPU is capable of using an unlimited amount of vram?

If not, how do you calculate how much vram a given GPU is capable of using?

How?

You literally said your GTX 1080 Ti couldn't use 11GB of VRAM.

If a GTX 1080 at under 100% utilisation can use more than 8GB of VRAM at qHD, why are you so adamant your GTX 1080 Ti has too much VRAM??

Look at what others have posted here. The "low VRAM is enough" crowd have been wrong so many times.

They tell people a low-VRAM card is fine, yet time and again it isn't. I also have a GTX 960 to hand.

A card people said couldn't use 4GB of VRAM, so the 2GB version was supposedly fine. Except various websites proved 5 years ago that the 2GB version had worse performance in many games.

This was when 4GB was considered a "lot" of VRAM, and 6GB a "ton". Yet the 6GB GTX 980 Ti still seems to do much better than a 4GB Fury X.
 
Do you think every GPU is capable of using an unlimited amount of vram?

If not, how do you calculate how much vram a given GPU is capable of using?

How?
Nobody has said that an 8800 GT should have come with 20GB of VRAM. Or a 3DFX Voodoo2.

I believe we were talking about the 3070, 3080, 1080 Ti, 2080 Ti, etc. The latter two actually come with 11GB of VRAM, which you said they "couldn't possibly" use.
 
The other major factor is whatever the new consoles do with streaming stuff from their NVMe drives.
In an ideal world both consoles would have launched with more RAM, but the streaming thing will be interesting to watch too.
High end PC rigs can always try going crazy on system RAM even for gaming. 4 x 32GB to allow huge RAM caching?
Wonder if that Radeon Pro with the SSD onboard had any influence on what the consoles are doing?

I bought 32GB of RAM when I built my old 3770K rig. I just retired that rig last year and never got anywhere near using even half of that RAM. (Although I'm sure people could have quoted examples where some computer...somewhere used 32GB of RAM.)

I don't know how much vram a given graphics card can actually make use of. I don't even know how to calculate such a metric. I do know that I would not spend extra money for, say, 32GB of vram on any GPU today.
 
Well I just sold my 2080S for £510 so I'd say the going is quite reasonable. Cost me £20/month to 'rent' it.

Bring on 3000 series!
 
The 2080 Ti does not have PCIe 4, but PCIe 4 is 2X the bandwidth, so you can get a rough idea by cutting the bandwidth from x16 to x8.

No you can't.

If a card uses 80% of the bandwidth afforded by PCIe 3.0 x16, then it would show a difference if reduced to PCIe 3.0 x8, but it wouldn't make any use of the extra bandwidth afforded by PCIe 4.0 x16.
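To put hypothetical numbers on that point (same rough link rates as earlier in the thread; the 80% figure is just an example):

    # A card whose peak transfer need is 80% of PCIe 3.0 x16
    pcie3_x16, pcie3_x8, pcie4_x16 = 15.8, 7.9, 31.5   # GB/s, approximate
    needed = 0.80 * pcie3_x16                          # ~12.6 GB/s

    print(needed > pcie3_x8)    # True  -> dropping to x8 costs performance
    print(needed > pcie3_x16)   # False -> a PCIe 4.0 x16 link adds nothing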
 