
10GB vram enough for the 3080? Discuss..

Status
Not open for further replies.
24GB would put it so close to a 3090, assuming the core would still be cut down etc.

Nvidia have not positioned themselves well at all. I don't think they had any intention of pricing the 3080 as low as they did. It's already too close to the 3090 in performance to warrant the additional cost of the 3090, and any 3080ti would be closer still, so what would they do with the price? Would it be closer to the 3080 or the 3090? Even right in the middle it would be too expensive vs the 6800XT. Any lower and it almost makes the 3090 AND the 3080 pointless cards. AMD spooked them big time. Nvidia knew what was coming and I'd imagine they are head scratching right now lol
 
3080ti coming = pretty much guaranteed. Much more RAM? That's speculation.
There is verrryyy little chance that Nvidia are not going to increase VRAM above 10GB (even if it's only to 12GB, half of the 3090), now that AMD have put 16GB on all of their high-end models. It would be a marketing disaster, and they are already under intense pressure and scrutiny as it is without generating further backlash.

I have to do a lot of reading in my day job - sysadmin and helpdesks etc. I have to absorb key details and weed out the chaff. A lot of posts on here could be more to the point, even link to resources for deeper reading; you don't need reams of paragraphs to get your point across. :)

I acknowledge his statement: having a degree in Computer Science (or similar) will give you enough fundamentals. Not everyone will have this, nor need it, but it's just my black and white response for when dog or anyone is asking me a question, paraphrasing or strawmanning - it works both ways. Happy to discuss.

I just pointed it out as it's valid, regardless of your sidestep there. I'm sure @PrincessFrosty would meet me half way on this one!
Ditto, communication at all levels of hierarchy is a big part of my job. People generally have no time or patience for any blah or fluff; they just want you to say what you have to say, as concisely as you are able, with as few words as possible. It's the same for me here when I am reading the forum.
You're aware that up until Nvidia's 3000 series announcement, all news of the 3070, 3080 and 3090 was also 'rumours'? We already knew most of the specs and details before they launched, and it's the same now with the 3080ti news. It's obvious, even to a small child, that a 3080ti is coming, with much more than the 10GB of VRAM that the 3080 has.

Also if you consider my post a 'personal attack', you have issues.
He is one of the most neurotic posters I have seen in a while and his posts seem to revolve around accepting as gospel whatever he reads in press releases.

It can reduce the VRAM required, yes. It's no replacement for a lack of VRAM though. Remember, the fastest consumer SSDs are what, approaching 8GB/sec read/write? Now look at the bandwidth of a 3080: almost 100 times more. When the 3080 runs out of VRAM - and that's when, not if, as we've been saying from the start - RTX IO isn't going to save it.
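The "almost 100 times" claim above is easy to sanity check with rough back-of-envelope numbers (approximate public specs, not measurements):

```python
# Back-of-envelope check of the ~100x claim above.
# Figures are approximate public specs, not measurements.
ssd_read_gbs = 8.0    # GB/s, top-end consumer NVMe sequential read
vram_bw_gbs = 760.0   # GB/s, RTX 3080 GDDR6X memory bandwidth

ratio = vram_bw_gbs / ssd_read_gbs
print(f"VRAM is roughly {ratio:.0f}x faster than the SSD")  # roughly 95x
```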

Thanks for explaining to him what I could not be arsed to do.
 
By being able to load textures and data into VRAM faster, you don't need to keep everything in VRAM. You can stream data into VRAM as you get close to new areas, then delete the data you don't need. That's the whole point of the RTX IO feature: it saves VRAM space.
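A toy sketch of that streaming idea: keep only the assets for map tiles near the player resident within a fixed "VRAM" budget, loading tiles as you approach and evicting the ones left behind. The tile indices, sizes and budget here are all invented for illustration; real engines do this per-asset with prediction, not per-tile.

```python
# Toy model of texture streaming: only tiles near the player stay "in VRAM".
VRAM_BUDGET_MB = 6144   # hypothetical pool size
TILE_SIZE_MB = 512      # hypothetical per-tile asset cost

resident = {}           # tile index -> size currently resident

def stream(player_tile, world_tiles):
    # Tiles within one tile of the player should be resident.
    wanted = {t for t in world_tiles if abs(t - player_tile) <= 1}
    # Evict tiles the player has moved away from.
    for t in list(resident):
        if t not in wanted:
            del resident[t]
    # Load missing tiles while the budget allows.
    for t in wanted:
        if t not in resident and sum(resident.values()) + TILE_SIZE_MB <= VRAM_BUDGET_MB:
            resident[t] = TILE_SIZE_MB
    return sorted(resident)

print(stream(5, range(10)))   # [4, 5, 6]
print(stream(8, range(10)))   # [7, 8, 9] - old tiles evicted, new ones loaded
```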



This begs the question, do you really know anything about what you are talking about? https://www.nvidia.com/en-gb/geforce/news/rtx-io-gpu-accelerated-storage-technology/

Do you just constantly copy and paste Nvidia marketing material? RTX IO doesn't do anything yet, and it could be 2 years before Microsoft's DirectStorage does anything.
 
Ditto, communication at all levels of hierarchy is a big part of my job. People generally have no time or patience for any blah or fluff; they just want you to say what you have to say, as concisely as you are able, with as few words as possible. It's the same for me here when I am reading the forum.

I have no problem with it at all. :) At least we are similar on some items..
 
I wouldn't bet on bandwidth being a replacement for storage at all. There's a finite amount of data you can fit in VRAM regardless of how fast it is. More bandwidth is always nice, but you still end up having to swap out to RAM and that'll always be the bottleneck.

I'm more confident about this after some testing.

The argument about Far Cry 5 in the parody "is 16GB enough for the 6900XT" thread sparked my interest in this because it was used as another example of a game that requires more than 10GB of vRAM, which of course I was quick to test. On my GTX 1080 it was reporting about 7.5GB allocated, but real world usage fluctuated between 4GB and about 6GB absolute max/peak; this is at 4K Ultra with the HD texture pack. Another user, LtMatt, pointed out how this wasn't "optimal" because in his tests he got stuttering loading assets, which only went away moving from an 8GB to a 16GB Radeon card. I clearly did not have this and I wanted to know why.

So I did a few things. I took video graphing frame times to look for micro hitching that was so small it wasn't easily visible while playing, and honestly the game is extraordinarily stable in terms of frame times. You can jump into a helicopter, take off, spam missiles and explosions, fly across the entire map between different zones, and land in the middle of the more built up areas where there's a lot of detail; while the frame rate is borderline bad simply due to maxing the settings for stress testing, the consistency was very high. Then I watched my own gameplay videos back, and one thing I noted is that the vRAM usage wildly bounces between 4GB and 6GB used; the game will fill about 2GB of vRAM in just a few seconds as you approach a compound. And you've got to think that this is about right: on a drive that is basically at the 4GB/sec limit of PCI-e 3.0, with a fast CPU, filling a mere 2GB of vRAM is going to be trivial. In fact the stress on the SSD itself is even less, because textures are stored compressed and they make up most of the assets by size, so 2GB in vRAM is less on disk.

Games like Far Cry 5, and in fact most modern game engines, have long overcome vRAM limitations by simply streaming textures into a vRAM pool on the card, and doing so predictively so that the textures are there just in time to avoid a cache miss. It's how a game like Far Cry 5 can be a 60GB install on disk, a vast amount of that being textures, yet be an open world game where you can visit any part of the world with no real loading to speak of. And if you ignore the useless allocated vRAM metric, which tells you nothing, and focus on what is really in use, you can see that fluctuating rapidly as you fly around the map, forcing old unused textures out of vRAM as new stuff is streamed in.

I have a fairly unique setup: 2x Samsung 960 Pros in RAID 0, i.e. two fast SSDs, both with sequential read speeds of 3500 MB/sec, set up to be read/written in parallel, which doubles the speed (although not really doubled, because the PCI-e x4 bandwidth is a bottleneck), in addition to an overclocked [email protected] which is no slouch for gaming and, more specifically, for uncompressing texture data on the fly. I suspect this has something to do with my game being hitch free. These are horrible bottlenecks to have to overcome with expensive components however, and I think DirectStorage, RTX IO for Nvidia, and the AMD equivalent (does this have a name?) will make this a thing of the past.
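The "not really doubled" caveat above comes straight out of the link arithmetic: two drives can read in parallel, but the array sits behind a PCIe 3.0 x4 connection that caps the total. Rough numbers (approximate specs, not measurements):

```python
# Why RAID 0 over a PCIe 3.0 x4 link doesn't actually double throughput.
drive_read_gbs = 3.5     # GB/s per Samsung 960 Pro, sequential read
pcie3_x4_gbs = 3.94      # GB/s usable on PCIe 3.0 x4 (~0.985 GB/s per lane)

theoretical = 2 * drive_read_gbs              # 7.0 GB/s if nothing limited it
effective = min(theoretical, pcie3_x4_gbs)    # the link caps the array
print(f"theoretical {theoretical} GB/s, effective ~{effective} GB/s")
```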

This is why I have said in the past that it's a new paradigm: vRAM size increases don't track game size anymore, because vRAM is not being used as a dumb cache that you attempt to cram the entire game into. What is more important now is how much vRAM you need to hold only the assets required for your immediate surroundings.
 

I'm confident that the 3080ti will be faster than the 3090 in games. Perhaps they'll throw some Quadro optimisations the 3090's way to give those poor owners some extra value for compute workloads.
 

Next gen consoles, with super fast PCIe 4.0 SSDs with advanced compression (faster than your 960s in RAID 0), still saw fit to put 16GB of total memory on their consoles, so they obviously disagree with you and still see the need for double the total memory of the last generation of console, despite having such a revolutionary SSD configuration.

Sony and Microsoft would have been the first to scrimp on memory if the SSD loading speed rendered VRAM obsolete. Plus no PC developer in their right mind will develop a game assuming the user has a PCIe 4.0 CPU, motherboard and SSD; that's just total madness for now.
 
Do you just constantly copy and paste Nvidia marketing material? RTX IO doesn't do anything yet, and it could be 2 years before Microsoft's DirectStorage does anything.

Why do I care about your unsourced rant?

Let's look at the 2 years claim. Microsoft to Bring DirectStorage API to Windows in 2021: Speeding Up Gaming With NVMe SSDs https://www.tomshardware.com/uk/new...i-windows-2021-gaming-nvme-ssds-nivida-rtx-io

Microsoft this week said that it would bring a preview of its DirectStorage application programming interface, which powers the company's Xbox Velocity Architecture, to Windows 10 developers in 2021.

This process has already begun for DirectStorage and we’re working with our industry partners right now to finish designing/building the API and its supporting components. We’re targeting getting a development preview of DirectStorage into the hands of game developers next year. https://devblogs.microsoft.com/directx/directstorage-is-coming-to-pc/

Does not look like it's going to be two years. Note streaming of textures does not need DirectStorage; it's already used in games. https://docs.unrealengine.com/en-US/Engine/Content/Types/Textures/Streaming/index.html

Anyway, this feature already exists. Nvidia calls it GPUDirect; it runs on Ubuntu 18.04 and PCIe Gen 3 is supported https://developer.nvidia.com/gpudirect As far as I know you need PCIe P2P support. http://developer.download.nvidia.com/devzone/devcenter/cuda/docs/GPUDirect_Technology_Overview.pdf slide 9/20
 
I just pointed it out as it's valid, regardless of your sidestep there. I'm sure @PrincessFrosty would meet me half way on this one!

I completely agree... in isolation. The objective speed measurement of the absolute very best single consumer SSD we have is 7.5GB/sec on the Samsung 980 Pro, which you need a PCI-e 4.0 motherboard to even use without a bottleneck. You can get much faster than this with some exotic RAID setups, but let's just ignore that for a moment and talk sensibly. Whereas vRAM bandwidth on a modern high performance card like the 3080 is around 760GB/sec. There's no contest; you simply cannot use the SSD as memory for the GPU to do work on in real time, it's literally 100x too slow.

BUT...

Games do predictive loading: they are tiled into regions, they inspect player behaviour, and they predict what assets will be needed and pre-cache them just in time to avoid cache misses. What the vRAM is important for is only the assets being used to render your current frame; the GPU only needs 760GB/sec of memory bandwidth for the assets it's reading, processing and using to draw the current frame.
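One way to see why both numbers can coexist, with purely illustrative figures: the GPU re-reads its resident working set every single frame, which is what needs VRAM bandwidth, while the streaming path only has to deliver the *new* assets as you move, which is a far smaller rate the SSD can handle.

```python
# Illustrative only: the per-frame working set needs VRAM bandwidth,
# but the streaming path only needs SSD bandwidth. All figures invented.
fps = 60
frame_working_set_gb = 4.0   # assets the GPU touches every frame
new_assets_gbs = 2.0         # fresh data streamed in per second while moving

gpu_bw_needed = frame_working_set_gb * fps   # re-read every frame
print(f"{gpu_bw_needed} GB/s from VRAM vs {new_assets_gbs} GB/s from the SSD")
```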
 
Next gen consoles, with super fast PCIe 4.0 SSDs with advanced compression (faster than your 960s in RAID 0), still saw fit to put 16GB of total memory on their consoles, so they obviously disagree with you and still see the need for double the total memory of the last generation of console, despite having such a revolutionary SSD configuration.

Sony and Microsoft would have been the first to scrimp on memory if the SSD loading speed rendered VRAM obsolete. Plus no PC developer in their right mind will develop a game assuming the user has a PCIe 4.0 CPU, motherboard and SSD; that's just total madness for now.

This is total system memory. If you subtract what is reserved for the OS and apps, and subtract the game engine, which typically takes 2-4GB of RAM, you have about 10GB to play with for "vRAM". And in fact Microsoft did actually skimp, because they know the vRAM portion needs to be faster memory than the non-vRAM portion, so they put 10GB of fast memory on there and 6GB of slower (cheaper) memory.

I never said that the SSD makes vRAM obsolete; you still 100% need incredibly fast vRAM. You just don't need so much that you can put the entire 100GB game install on there. How much vRAM do you really need? Well, it looks something like enough to hold the assets needed to render the current frame, plus a little extra headroom for texture swaps, but really not all that much. Testing with Doom Eternal, which lets you set texture pool size in the video options, shows no benefit above "High", which sets the engine CVAR is_poolsize to 2048MB; in other words 2GB of vRAM reserved exclusively for the texture pool. Anything above this does not produce a visual quality increase in the game: setting it to Ultra Nightmare puts the pool size at 4.5GB and it looks no different.

Let's look at the 2 years claim. Microsoft to Bring DirectStorage API to Windows in 2021: Speeding Up Gaming With NVMe SSDs https://www.tomshardware.com/uk/new...i-windows-2021-gaming-nvme-ssds-nivida-rtx-io

Does not look like it's going to be two years. Note streaming of textures does not need DirectStorage; it's already used in games.

Exactly. DirectStorage is integrated into the next Xbox consoles, and from what I've read it will be patched into Windows in Jan 2021; then all that's needed is for game engines to integrate support. Quite frankly, if you go reading about things like Unreal Engine, they've already rewritten the core I/O components of the engine to cope with the PS5's much quicker SSD, and that was 6+ months ago. I fully expect DirectStorage to be supported day 1. Because most big AAA games are multi-platform, it means the PC for once will actually get forced adoption of a new tech via the consoles - imagine that!

And as you said, streaming is commonplace anyway; we've been perfecting that tech for well over a decade now. All DirectStorage does is make SSD access faster and bypass the CPU, freeing up resources.
 
I said "could" be 2 years, and you responded with more copied and pasted content that says a "preview" should be available for developers in 2021.
 

So I cite my sources, and you don't like them. Okay, fine. A preview build is tested before release; it's the full feature implemented but in the testing phase for bugs. I feel dumb saying this as it's very obvious. We plebs will get the preview build last and then MS will release when they feel like it.

It won't be long, because you can download the whole source code for the same type of feature on GitHub for Linux. This feature is not new. https://developer.download.nvidia.com/CUDA/training/cuda_webinars_GPUDirect_uva.pdf It's just reaching gamers for the first time via DX12.
 
Your sources don't back up anything you have claimed. Nowhere in there does it say RTX IO will be game ready in less than 2 years. And that ignores the fact that I used the word "could" in the first place.
 
One of the biggest hurdles for DirectStorage on PC is selecting a minimum specification for storage speed. It works on consoles because they are a standard spec, but what about on PC?

What should a dev mandate as the storage device?
SATA SSD?
NVMe PCIe 3?
NVMe PCIe 4?

Okay now what speed should they target on this storage medium?

Or will they test your device and gate off certain settings because of your storage device? "Sorry sir, you can't use the ultra preset because you need a PCIe 4 NVMe drive", even though your 3090/6900XT is fast enough to run the game.
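That gating idea can be sketched in a few lines: benchmark the install drive once, then cap the streaming preset if it can't keep up. The thresholds and preset names here are entirely invented for illustration; no shipping engine is known to use these exact cut-offs.

```python
# Hypothetical settings gating by measured storage throughput.
# Thresholds and preset names are invented for illustration.
def pick_streaming_preset(measured_read_gbs):
    if measured_read_gbs >= 5.0:    # PCIe 4.0 NVMe class
        return "ultra"
    if measured_read_gbs >= 2.5:    # PCIe 3.0 NVMe class
        return "high"
    if measured_read_gbs >= 0.5:    # SATA SSD class
        return "medium"
    return "low"                    # hard drive

print(pick_streaming_preset(7.0))   # ultra
print(pick_streaming_preset(0.55))  # medium
```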
 
I guess it's conceivable that the 3080ti will use 12x1GB modules on a 384-bit bus, though this would involve expensive re-engineering of the PCB. I think it's more likely we see a 20GB version using 2GB modules (10x2GB), or 24GB (24x1GB).


If the 3080ti has the same CUDA core count as the 3090, are they actually able to cut the bus width in a way that allows them to get 20GB of VRAM?
Or is it even possible to get 20GB on a 384-bit bus?
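The arithmetic behind that question: each GDDR6X chip presents a 32-bit interface, so the bus width fixes the chip count, and clamshell mode (two chips sharing a 32-bit channel, as on the 3090) doubles it. A quick sketch of the capacities that fall out, assuming only 1GB and 2GB chip densities:

```python
# Capacity options implied by a GDDR6X bus width (32 bits per chip).
def vram_options(bus_width_bits, densities_gb=(1, 2)):
    chips = bus_width_bits // 32
    opts = set()
    for d in densities_gb:
        opts.add(chips * d)          # normal: one chip per 32-bit channel
        opts.add(2 * chips * d)      # clamshell: two chips per channel
    return sorted(opts)

print(vram_options(320))  # [10, 20, 40] - a 3080-style bus allows 20GB
print(vram_options(384))  # [12, 24, 48] - a 3090-style bus cannot hit 20GB
```

So 20GB implies a cut-down 320-bit bus; a full 384-bit bus only lands on 12, 24 or 48GB with uniform chips.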
 

They must have gotten wind of AMD's pricing beforehand and dropped them at the same price to make it look like AMD were price matching them rather than the other way around, but then tried to have their cake and eat it by selling only a handful of FEs once a fortnight, unlike AMD's, which will be mass market...

The 6900 really has them in a pickle, I imagine. Do we release a cut-down-VRAM 3090 at the same price, call it a 3080ti, and relaunch the 3090 with Titan drivers and call it such, or do something else?
 
Well, since we are discussing theories: I think the 3080 we got was originally going to be the 3080ti with 20GB of RAM and would have launched next year, and the 3070 we got would have been the 3080 in a 16GB config. The 3090 would have just been a power hungry Titan with 48GB.

They then got word of AMD's performance and decided to rush to market, but they didn't want to give us cards that were "too good" and had to limit them some way.
 
Your sources don't back up anything you have claimed. Nowhere in there does it say RTX IO will be game ready in less than 2 years. And that ignores the fact that I used the word "could" in the first place.
A preview build next year means release next year. This already exists; it just has to be added to DX12. It's been on servers since PCIe Gen 2 P2P, and it was used on RTX 2080 Ti cards. All that needs to happen is DX12 support. Game engines that support the PS5 and Xbox Series X have the feature built in already; both consoles support it.
 
Well, since we are discussing theories: I think the 3080 we got was originally going to be the 3080ti with 20GB of RAM and would have launched next year, and the 3070 we got would have been the 3080 in a 16GB config. The 3090 would have just been a power hungry Titan with 48GB.

They then got word of AMD's performance and decided to rush to market, but they didn't want to give us cards that were "too good" and had to limit them some way.

That could well be true; we don't know. But it is odd that there is no Titan, and a £1.7k card seems to fit the profile.
 