10GB VRAM enough for the 3080? Discuss..

The information available is that the Radeon cards are on par with Turing in RT. That makes them all slower than the 30 series: the RTX 3080 is faster than the 6800 XT and most likely faster than the 6900 XT in RT. This is why there are no RT benchmarks from AMD.





This is with the Microsoft DXR SDK tool called 'Procedural Geometry': https://videocardz.com/newz/amd-ray...dia-rt-core-in-this-dxr-ray-tracing-benchmark




Note with RT & DLSS the 3070 is faster than the 6800. https://videocardz.com/newz/alleged...nd-tomb-raider-with-dxr-performance-leaks-out

If these fake-for-now features are of interest to you, then fine.
Many people don't care about Nvidia's image-quality reduction, or about ray tracing, which isn't even an option in popular games just yet.
 
These many people are outliers. Ray tracing is the next big feature; the consoles have made it that way. The 3080 and 3090 were designed to fight the consoles. Also, some of the new features of DX12U reduce image quality by design.

Take variable rate shading in AMD's showcase game Godfall. That feature is by design meant to reduce quality and thus increase performance.

In the Asteroids tech demo, mesh shaders render and dynamically adjust the detail of up to 350,000 individual asteroids, down to sub-pixel geometric detail, achieving performance that is otherwise impossible. This reduces the render quality of distant objects.

DX12U is also getting its own version of DLSS via DirectML. DirectML can also be used to upscale low-resolution textures to high resolution in real time, which is another reason why having so much VRAM is not needed.

Gwertzman: You were talking about machine learning and content generation. I think that’s going to be interesting. One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it.

https://venturebeat.com/2020/02/03/...ext-generation-of-games-and-game-development/

Gwertzman: Like literally not having to ship massive 2K by 2K textures. You can ship tiny textures.

Gwertzman: The textures are being uprezzed in real time.

Gwertzman: The download is way smaller, but there’s no appreciable difference in game quality. Think of it more like a magical compression technology. That’s really magical. It takes a huge R&D budget. I look at things like that and say — either this is the next hard thing to compete on, hiring data scientists for a game studio, or it’s a product opportunity. We could be providing technologies like this to everyone to level the playing field again.

Gwertzman: In this case, it only works by training the models on very specific sets. One genre of game. There’s no universal texture map. That would be kind of magical. It’s more like, if you train it on specific textures and then you — it works with those, but it wouldn’t work with a whole different set.

Gwertzman: It’s especially good for photorealism, because that adds tons of data. It may not work so well for a fantasy art style. But my point is that I think the fact that that’s a technology now — game development has always been hard in terms of the sheer number of disciplines you have to master. Art, physics, geography, UI, psychology, operant conditioning. All these things we have to master. Then we add backend services and latency and multiplayer, and that’s hard enough. Then we added microtransactions and economy management and running your own retail store inside your game. Now we’re adding data science and machine learning. The barrier seems to be getting higher and higher.

That’s where I come in. At heart, Microsoft is a productivity company. Our employee badge says on the back, the company mission is to help people achieve more. How do we help developers achieve more? That’s what we’re trying to figure out.

This is why huge amounts of texture memory are not really needed. You can upscale the textures in real time using DirectML and then upscale the image from 1440p to 4K using DirectML/DLSS. This makes the memory requirements much lower. You also speed up the streaming of textures from the NVMe drive because the texture files are smaller.
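For a rough sense of the scale of that saving, here is a back-of-the-envelope sketch; the 4K-vs-1K texture sizes and the BC7 compression rate are my own illustrative assumptions, not figures from the interview above:

```python
# Rough estimate of the GPU-memory / disk footprint of a single material texture,
# comparing a shipped 4K x 4K texture with a 1K x 1K texture that would be
# ML-uprezzed at runtime. BC7 block compression is ~1 byte per texel, and a full
# mip chain adds roughly one third on top of the base level. All values are assumptions.

def texture_mib(width, height, bytes_per_texel=1.0, mip_overhead=1.0 / 3.0):
    """Approximate size in MiB of a block-compressed texture with a full mip chain."""
    base = width * height * bytes_per_texel
    return base * (1.0 + mip_overhead) / (1024 ** 2)

full_res = texture_mib(4096, 4096)   # shipped at full resolution
low_res  = texture_mib(1024, 1024)   # shipped low-res, uprezzed on the GPU

print(f"4K x 4K BC7 + mips: ~{full_res:.1f} MiB")   # ~21.3 MiB
print(f"1K x 1K BC7 + mips: ~{low_res:.1f} MiB")    # ~1.3 MiB
print(f"Saving per texture: ~{full_res - low_res:.1f} MiB "
      f"({(1 - low_res / full_res) * 100:.0f}% smaller)")
```

Multiplied across the hundreds of materials resident in a scene, that per-texture difference is where the bulk of the claimed memory and download saving would come from.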

You then speed up access to the NVMe drive using DirectStorage. https://devblogs.microsoft.com/directx/directstorage-is-coming-to-pc/

This allows you to have hugely detailed worlds at high fps and high resolutions. You can then use RT to get better, more photorealistic graphics.
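And to put the streaming point in rough numbers, a quick sketch; the working-set sizes and the 3.5 GB/s read speed are illustrative assumptions, not figures from the DirectStorage post:

```python
# Hypothetical numbers: how long it takes to stream a texture working set from an
# NVMe SSD at a given sequential read speed. Smaller shipped textures mean less
# data to move per scene change. The sizes and drive speed are assumptions.

def stream_time_ms(working_set_gib, read_speed_gb_s=3.5):
    """Time in milliseconds to read `working_set_gib` GiB at `read_speed_gb_s` GB/s."""
    bytes_total = working_set_gib * 1024 ** 3
    return bytes_total / (read_speed_gb_s * 1e9) * 1000

for label, gib in [("full-res textures", 4.0), ("low-res + ML uprez", 0.5)]:
    print(f"{label:>20}: ~{stream_time_ms(gib):.0f} ms to stream")
```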

Demos

Where is AI in graphics going?

Games

https://store.steampowered.com/app/731950/The_Homestead/
 
Well the FE I ordered yesterday turned up today, I did not even pay for Saturday delivery, so well chuffed :D

Here, enjoy the pics guys, oh and I updated my sig too ;)


Congratulations
 
Oh okay, that's good to hear :)
Actually what Dave said isn't accurate and contains a lot of hyperbole. There is a difference in VRAM usage from 1440p to 4k but it's not "huge" in the actual meaning of the word. Often 25% or less depending on the game.

https://www.techpowerup.com/review/horizon-zero-dawn-benchmark-test-performance-analysis/4.html

https://www.techpowerup.com/review/death-stranding-benchmark-test-performance-analysis/5.html

https://www.guru3d.com/articles-pag...-graphics-performance-benchmark-review,4.html

Do your own research and don't just believe what people on forums say without them backing it up with evidence.
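A toy model of why the 1440p-to-4K jump tends to be modest: only the per-pixel buffers scale with resolution, while textures and geometry do not. The GiB split below is an assumption for illustration; the reviews linked above have the measured numbers:

```python
# Toy model of VRAM scaling from 1440p to 4K: only resolution-dependent buffers
# (G-buffer, depth, post-processing targets) grow with pixel count; textures,
# meshes and shaders stay the same size. The 6 GiB / 1 GiB split is hypothetical.

res_independent_gib = 6.0        # textures, geometry, shaders (assumed)
res_dependent_1440p_gib = 1.0    # render targets at 2560x1440 (assumed)

pixel_ratio = (3840 * 2160) / (2560 * 1440)   # 4K has 2.25x the pixels of 1440p

total_1440p = res_independent_gib + res_dependent_1440p_gib
total_4k = res_independent_gib + res_dependent_1440p_gib * pixel_ratio

print(f"1440p: ~{total_1440p:.1f} GiB, 4K: ~{total_4k:.1f} GiB "
      f"(+{(total_4k / total_1440p - 1) * 100:.0f}%)")
```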
 
Agreed. I did my own tests in FC5 when people asked me to use 4K and upsample to 8K using the in-game rendering resolution slider set to x2.0. The point of that test was to stress the game into using more VRAM than my card had (a 1080 with 8GB), and the reality was that the "real" VRAM usage jumped about 1GB, from roughly 5.8GB to 6.8GB. The fact is that in terms of screen buffer the difference is tiny: it's about 64MB for 4K and 256MB for 8K. Any visual effect that is per-pixel and needs its own space in VRAM, for example anti-aliasing, will also need more VRAM at higher resolutions. But again, the total from 4K to 8K was small: a 1GB increase, minus the additional ~200MB for the frame buffer, means about 800MB of additional VRAM for the other effects. And that's from 4K to 8K; the jump from 1440p to 4K is much smaller.
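Those screen-buffer figures roughly check out if you assume two 32-bit-per-pixel surfaces (for example a double-buffered swap chain, or colour plus depth); the two-surface count is my assumption, chosen to match the ~64MB / ~256MB quoted above:

```python
# Screen buffer size at common resolutions, assuming 4 bytes per pixel (RGBA8 or
# D24S8) and two full-screen surfaces. The "two surfaces" count is an assumption
# made to land near the ~64 MB (4K) and ~256 MB (8K) figures quoted above.

def framebuffer_mib(width, height, bytes_per_pixel=4, surfaces=2):
    return width * height * bytes_per_pixel * surfaces / (1024 ** 2)

for name, (w, h) in {"1440p": (2560, 1440),
                     "4K":    (3840, 2160),
                     "8K":    (7680, 4320)}.items():
    print(f"{name:>5}: ~{framebuffer_mib(w, h):.0f} MiB")
```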

This is another example of how what matters is the speed and raw horsepower of the GPU: it's the GPU that has to calculate all those additional pixels, which is the hard part; their space in memory is quite small as a fraction of the total. And keep in mind that metrics such as the ones you've linked to are measured badly: they report VRAM allocated, not what is actually used, so in most cases the headroom of available VRAM will be higher.
 
Right now 10GB is enough, that is a fact, but I can't wait for the Godfall release, which should clear up any doubts in a day or two.
They are, however, mentioning 4K x 4K textures, not just 4K, right? There must be some difference?
It will also be interesting to see Ampere's compression in action, or the 3000 series' downfall, LOL. I have an RTX 3080 too (got it on day one), so I'm not being sarcastic.
 
It will clear up the situation with Godfall's Ultra mode... but similar situations will be springing up next year as more games are released that push the visual envelope.
 
https://www.youtube.com/watch?v=cYXxSwLEalk

If this actually happens, it makes the 3070 and 3080 look really bad..

Maybe they are planning on revamping their whole lineup...

I'll just buy whatever seems best by January; I hope AMD gets a DLSS competitor by January/February.
What revamp? They are not even able to supply the current demand. There’s queues running into the thousands worldwide and AMD’s cards will be facing the same issue. Until the existing problems get fixed, NVIDIA will get a lot of bad rep for launching new cards.

A lot depends on the third party 6800XT benchmarks with Rage and SAM disabled.
 
https://www.youtube.com/watch?v=0LkqqgvMyFI

Sooo without DLSS the 3070 is already out of VRAM in Watch Dogs: Legion with everything on ultra and with the HD texture pack.
Ok wow look at the FPS, that is definite VRAM stutter. I haven't seen that in a long time. Is it running at 4k?

To be honest I wouldn't expect a 3070 to be running anything higher than 1440p at Ultra in new games. Would like to see the same benchmark run with a 3080.
 
Yep, it's with DLSS off; with DLSS on the game uses 7-8GB, so it runs just fine.

DLSS frees up something like 500MB-1GB of VRAM...

This is at 1080p. Now I'm not too sure about my decision; I think I'll wait to see if AMD brings out a DLSS competitor, and if they do, I am likely going AMD this round unless Nvidia can bring out some kind of refresh with more VRAM. I'd definitely buy a 3070 Ti with 16GB for $600, lol
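That sort of saving is plausible from the internal render resolution alone. A rough sketch: DLSS Quality mode renders at roughly 67% scale per axis, and the 1.5 GiB of resolution-dependent buffers at native 1080p is an assumed figure, not a measurement:

```python
# Rough sketch of where a DLSS VRAM saving can come from: the game renders its
# per-pixel buffers at a lower internal resolution and only the final output is
# at native resolution. The 1.5 GiB of resolution-dependent buffers at native
# 1080p is an assumed figure for illustration.

native = (1920, 1080)
internal = (1280, 720)            # DLSS "Quality" renders at ~67% scale per axis

scale = (internal[0] * internal[1]) / (native[0] * native[1])   # ~0.44

res_dependent_native_gib = 1.5    # assumed: G-buffer, RT buffers, post targets
saving_gib = res_dependent_native_gib * (1 - scale)

print(f"Internal pixel count: {scale:.0%} of native")
print(f"Estimated VRAM saving: ~{saving_gib * 1024:.0f} MiB")
```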
 
What revamp? They are not even able to supply the current demand. There’s queues running into the thousands worldwide and AMD’s cards will be facing the same issue. Until the existing problems get fixed, NVIDIA will get a lot of bad rep for launching new cards.

A lot depends on the third party 6800XT benchmarks with Rage and SAM disabled.
True, they need to supply first..

Yeah, the 6800 XT's third-party benchmarks will show us how competitive it really is.
 
Yep, it's with DLSS off; with DLSS on the game uses 7-8GB, so it runs just fine.

DLSS frees up something like 500MB-1GB of VRAM...

This is at 1080p. Now I'm not too sure about my decision; I think I'll wait to see if AMD brings out a DLSS competitor, and if they do, I am likely going AMD this round unless Nvidia can bring out some kind of refresh with more VRAM. I'd definitely buy a 3070 Ti with 16GB for $600, lol
WTF... that was at 1080p? :eek:
 
Yes... I did not know of this issue until now. I had only seen benchmarks with DLSS.

I found this in the "Is 8GB VRAM for the 3070 enough" thread.

Anyone that asks me if 8GB is enough, I'm telling them it's enough if you are fine with lowering textures relatively soon in some games...

But an argument could be made that with DLSS it runs just under 8GB so it's fine, and also you could *not* use the HD texture pack, which would bring the usage down to 5-6GB.

But I think if you are spending $500 on a video card you should be able to max out everything for at least 2 years..
 
I think that if WDL requires DLSS at 1080p on a card with 8GB VRAM then it is an absolutely unoptimised POS. I say that not to defend any lack of VRAM in the 3070, but considering that WDL hardly looks like the most amazing of next-gen games vs something like CP2077, this suggests to me that it probably uses more resources than it needs. I would also be interested to see how it runs without RT at the same settings.

The sad fact is that it's not going to be the only unoptimised POS coming out in 2020/2021, and this will have an effect on cards with lower amounts of VRAM that may not otherwise deserve to run into performance issues. I mean, I am all for accepting more VRAM usage if a developer actually justifies needing it and you see the evidence with your own eyes, but I resent it when companies like Ubisoft make it necessary through bad development and you end up with a mediocre-looking game with bad textures and tons of unnecessary RT reflections that cause performance to drop off a cliff even at 1080p.
 