
10GB vram enough for the 3080? Discuss..

Status
Not open for further replies.
Wat?
[Attachment: relative-performance_1920-1080.png]


What are you doing, mate? We're talking about a game called Godfall and you go and post something random. Try reading a few more posts further up before making assumptions.

[Attachment: Capture.png]
 
Nice one. It would be interesting to run it without ISLC as a control. Interesting to note that it's entirely CPU-bound in the second video; the GPU isn't really breaking 90% usage much.
https://youtu.be/L08cxjmxRXc

In particular, if you look at the 0:55 mark (you will see the video "glitch" as the SBL is flushed), you will see that the CPU usage drops by 15%! That's why filling up the standby list causes stutter, FPS slowdowns, etc. It's not just the memory filling up in the standby list alone; the CPU is also tied up managing it. Once the SBL is flushed, all goes back to normal.

It's a huge problem in games, and I thought that developers had a grip on this, as it's not a simple fix: they have to go back to the source code/game engine to address it.
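As a purely illustrative sketch (a toy model, not the real Windows memory manager), the mechanism being described is roughly this: once the free-page pool is empty, every new allocation first has to repurpose a page off the standby list, which is extra CPU work per allocation; flushing the list (as ISLC does) refills the free pool and that overhead disappears.

```python
# Toy model of standby-list pressure. Not the real Windows memory manager,
# just the shape of the argument: a full standby list means every page
# allocation pays an eviction ("repurpose") cost until the list is flushed.
from collections import OrderedDict

free_pages = []                                         # zeroed pages, cheap to hand out
standby = OrderedDict((n, None) for n in range(1000))   # cached (standby) pages

def alloc_page():
    """Return (page, extra_work) - extra_work models repurposing cost."""
    if free_pages:
        return free_pages.pop(), 0
    # Free list empty: evict the oldest standby page first (extra CPU work).
    page, _ = standby.popitem(last=False)
    return page, 1

def flush_standby():
    """Model an ISLC-style flush: move all standby pages to the free list."""
    while standby:
        page, _ = standby.popitem(last=False)
        free_pages.append(page)

work_before = sum(alloc_page()[1] for _ in range(100))  # every alloc evicts
flush_standby()
work_after = sum(alloc_page()[1] for _ in range(100))   # free list serves all
print(work_before, work_after)  # 100 0
```

In this toy version the per-allocation overhead goes from 100 eviction operations to zero after the flush, which mirrors the CPU-usage drop visible at the 0:55 mark.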

-----------------------
These are some serious memory requirements—good! While cards with 4 GB or less will be seriously challenged at lower resolutions, the extra VRAM usage means crisper textures for the vast majority of gamers who have 8 GB of VRAM or more. Godfall is one of the first games to break through the 8 GB VRAM barrier at 4K.
Either TPU messed up their benchmark results or there are some serious VRAM issues going on here.
1. There is a huge disparity between the 3080 and the 3090. If anything, the 3090 is finally beginning to stretch its legs in this title.
2. The 2080ti either ties or is within spitting distance of a 3080 (not a 3070, a 3080), when the 3080 should be beating a 2080ti by a country mile. I checked a few other reviews from TPU just to make sure, and I've not come across a review where a 2080ti beats or ties a 3080 at those resolutions. Even though the 3080 is technically faster at higher resolutions, it's not enough to make any in-game immersion difference from the 2080ti results.

We know a 2080ti isn't faster than a 3080, nor should it be within the vicinity of one. One thing I do know: this is the first next-gen game title from Sony ported to PC.
:D




At 1080p, the last three cards have VRAM configurations of 10GB, 11GB and 24GB. And at the other two resolutions we are looking at a 4-5 fps difference between a 2080ti and a 3080, when we know from other benchmarked games it's much higher than that.
 

My bet is memory bandwidth; the 3090 has enough of it for its 10k CUDA cores to stretch their legs. For the 3070 and 3080, I hope this isn't a sign of things to come for next-gen exclusive games like this :D
 
My bet is memory bandwidth; the 3090 has enough of it for its 10k CUDA cores to stretch their legs. For the 3070 and 3080, I hope this isn't a sign of things to come for next-gen exclusive games like this :D
Could be the reason tbf.

I'm not sure what display they used, but TPU didn't run with maximum settings. LPM was set to disabled, so I guess they did not test on an HDR display, as they claim there was no difference with it enabled.

EDIT - The results could be because TPU are still using an Intel i9-9900K with PCI-E Gen 3. They really should be using a 5950X with Gen 4 these days. I wonder if Gen 4's extra bandwidth would give more expected results.
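For reference, the back-of-envelope bandwidth numbers behind the Gen 3 vs Gen 4 point (figures taken from the PCIe 3.0/4.0 specs: 8 vs 16 GT/s per lane with 128b/130b encoding):

```python
# Back-of-envelope PCIe link bandwidth: raw signalling rate (GT/s) times
# line-code efficiency gives usable Gbit/s per lane; divide by 8 for GB/s
# and multiply by lane count.
def pcie_gb_per_s(gt_per_s, lanes=16, encoding=128/130):
    return gt_per_s * encoding / 8 * lanes

gen3 = pcie_gb_per_s(8.0)    # PCIe 3.0: 8 GT/s, 128b/130b encoding
gen4 = pcie_gb_per_s(16.0)   # PCIe 4.0: 16 GT/s, 128b/130b encoding
print(f"Gen3 x16: {gen3:.2f} GB/s, Gen4 x16: {gen4:.2f} GB/s")
# roughly 15.75 vs 31.51 GB/s - double the bus for any spill-over traffic
```

So a Gen 4 x16 slot has roughly double the headroom if textures or buffers are being streamed over the bus rather than held in VRAM.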
 


Very dodgy benchmark results.

The 3090 is just not that much faster than the 3080.

Also, if anyone is implying that the poor 1080p results are due to memory on the 3080, why is it holding up at 2160p at all, let alone closing the gap on the 3090?

Total garbage benching.
 
My bet is memory bandwidth, 3090 has enough of it for its 10k cuda cores to stretch its legs. For the 3070 and 3080 I hope this isn't a sign of things to come for next gen exclusive games like this :D

Does the 3080 get more memory bandwidth at 2160p? I don't think so.

Weird how the 3080 performs better at 2160p.

Obviously 10GB is enough here.
 
The results could be because TPU are still using an Intel i9-9900K with PCI-E Gen 3. They really should be using a 5950X with Gen 4 these days. I wonder if Gen 4's extra bandwidth would give more expected results.
Calling this as the reason for the odd results at 1080P.

Just look at the 5500 XT results in the TPU bench and then here. :)
 
Doesn't AMD have some kind of optimised shader path for using lower-precision shaders? Godfall might be using those. The 2080ti has 2:1 FP16 throughput and a separate integer path; that's one possible explanation.
 
The results could be because TPU are still using an Intel i9-9900K with PCI-E Gen 3. They really should be using a 5950X with Gen 4 these days. I wonder if Gen 4's extra bandwidth would give more expected results.

I think Gen 4 could make a nice difference at 1080p where the fps are much higher.

Something is definitely throttling the cards at 1080p, if the 3090 is scoring just under 65fps @2160p it should be pushing over 200fps at 1080p.
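The pixel-count arithmetic behind that expectation, using the roughly 65 fps 2160p figure quoted above:

```python
# 2160p pushes exactly four times the pixels of 1080p, so a purely
# GPU-bound card should land near 4x the fps at 1080p.
px_1080p = 1920 * 1080
px_2160p = 3840 * 2160
scale = px_2160p / px_1080p          # 4.0
expected_1080p = 65 * scale          # ~260 fps if nothing else limits it
print(scale, expected_1080p)         # 4.0 260.0
```

Real scaling is rarely perfectly linear, but landing nowhere near that figure is a strong hint that something other than the GPU (CPU, bus, or the game itself) is the limit at 1080p.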

I think we can rule out drivers as the problem exists for both Red and Green teams.

I wish my 5950X would hurry up and get to me.:D
 
I think Gen 4 could make a nice difference at 1080p where the fps are much higher.

Something is definitely throttling the cards at 1080p, if the 3090 is scoring just under 65fps @2160p it should be pushing over 200fps at 1080p.

I think we can rule out drivers as the problem exists for both Red and Green teams.

I wish my 5950X would hurry up and get to me.:D


What problem is the red team having? To me they all look to be punching above their weight relative to the green team of the same generation. What performance would you be expecting them to have?
 
Very dodgy benchmark results.

The 3090 is just not that much faster than the 3080.

Also, if anyone is implying that the poor 1080p results are due to memory on the 3080, why is it holding up at 2160p at all, let alone closing the gap on the 3090?

Total garbage benching.

Well, I said memory bandwidth, not memory capacity. They are two totally different things.
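To put rough numbers on that distinction: peak VRAM bandwidth is bus width times per-pin data rate, and the published GDDR6X figures for these two cards work out as follows.

```python
# Peak VRAM bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps).
# Published specs: RTX 3080 = 320-bit @ 19 Gbps, RTX 3090 = 384-bit @ 19.5 Gbps.
def vram_bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

rtx_3080 = vram_bandwidth_gb_s(320, 19.0)   # 760.0 GB/s
rtx_3090 = vram_bandwidth_gb_s(384, 19.5)   # 936.0 GB/s
print(rtx_3080, rtx_3090)  # 760.0 936.0
```

So the 3090 has about 23% more bandwidth than the 3080, independent of its 24GB vs 10GB capacity advantage.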
 
What problem is the red team having? To me they all look to be punching above their weight relative to the green team of the same generation. What performance would you be expecting them to have?

You should get about 4x the result at 1080p as you would at 2160p, if there is nothing holding back the cards.

My point has zero to do with Red v Green as both vendors cards are on the same side in my argument.

Something is throttling the cards, it could be because the reviewer used a Gen 3 9900k or it could just be rotten game software.
 
@EastCoastHandle - As I predicted, you will scour the web daily looking for a result that shows 10GB is not enough at max settings, and rush back here to post it with excitement :);)


Well well well the cows have come home to roost. ;)

In B4 but muh vram allocation. :D :p
Lol. I see you are counting your chickens already. Getting all happy before the results are confirmed by multiple sources ;)

Something does not seem right about those results. Will wait until multiple sources confirm it :D
 
@EastCoastHandle - As I predicted, you will scour the web daily looking for a result that shows 10GB is not enough at max settings, and rush back here to post it with excitement :);)



Lol. I see you are counting your chickens already. Getting all happy before the results are confirmed by multiple sources ;)

Something does not seem right about those results. Will wait until multiple sources confirm it :D
Woosh. ;)
 
It would be a nice feature if you could turn off unneeded memory, for example running a 24GB card as an 8GB or 12GB one, as this would reduce heat and power consumption a little bit.

It is very rare that I use more than 8GB for anything.
 
You should get about 4x the result at 1080p as you would at 2160p, if there is nothing holding back the cards.

My point has zero to do with Red v Green as both vendors cards are on the same side in my argument.

Something is throttling the cards, it could be because the reviewer used a Gen 3 9900k or it could just be rotten game software.
I understand what you're talking about now. Good spot.
 