10GB VRAM enough for the 3080? Discuss..

VRAM aside - a 3080 isn't fast enough for 4K max settings at 60+ fps in all new games. The 3080 Ti will be a welcome upgrade, assuming it has a good 15% more performance.

Highly doubt it'll be 15% faster as that's getting into 3090 territory, but I agree that the 3080 isn't targeting high-end 4K - it's a decent budget card for 4K though (assuming realistic pricing).
 
I think it'll be either identical in performance to the 3090, or slightly faster. I think Nvidia care more about beating AMD comfortably on charts than they do about protecting the 3090's status.

The 3090 is aimed at developers/productivity users who need the huge VRAM; gaming isn't its primary aim, so I doubt they'll care too much.

Hopefully we'll see 3080 Ti clocks being really high, through binning/optimisations over the older 3080/3090 silicon.
 
You can "facepalm" all you want, but Quake 2 is creeping up on 25 years old, ffs.

The reasons why Quake 2 was chosen largely come down to:

Full source code availability
Ease of licensing

The original Quake 2 maps use a pre-computed form of ray tracing (radiosity based), stored in a format which allows the original light sources to be quickly and easily removed, without recompiling or editing the level data, and replaced with ray-traced effects with a high level of compatibility with the original design.
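
The separation is easy to see in the file itself. Here's a minimal Python sketch (my own illustration, not NVIDIA's tooling) that reads a Quake 2 .bsp directory and shows the baked lighting is isolated in its own lump, separate from the geometry, with the original light entities still present to re-light from:

```python
# Minimal sketch (my illustration, not NVIDIA's tooling): read a Quake 2 .bsp
# directory and show the baked lighting is isolated in its own lump, separate
# from geometry, with the original light entities still present to re-light
# from. Lump indices are from the released Quake 2 source.
import struct
import sys

NUM_LUMPS = 19      # Quake 2 BSP headers carry 19 lump directory entries
LUMP_ENTITIES = 0   # ASCII entity list, including the original "light" entities
LUMP_LIGHTING = 7   # baked radiosity lightmap samples

def inspect_bsp(path):
    with open(path, "rb") as f:
        magic, version = struct.unpack("<4si", f.read(8))
        assert magic == b"IBSP" and version == 38, "not a Quake 2 BSP"
        # each directory entry is (file offset, length in bytes)
        lumps = [struct.unpack("<ii", f.read(8)) for _ in range(NUM_LUMPS)]
        ent_off, ent_len = lumps[LUMP_ENTITIES]
        f.seek(ent_off)
        entities = f.read(ent_len).decode("ascii", errors="replace")
    # Rough count; assumes the usual '"classname" "light"' key order.
    n_lights = entities.count('"classname" "light"')
    print(f"baked lightmaps: {lumps[LUMP_LIGHTING][1]} bytes, one droppable lump")
    print(f"original point-light entities still in the map: {n_lights}")

if __name__ == "__main__":
    inspect_bsp(sys.argv[1])
```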

The renderer itself is perfectly capable of rendering something like Doom 3 at around the same performance level - maybe a little slower in areas with complex interactions between glass and reflective surfaces, which don't occur in Quake 2, but the difference is single-digit percentages, or a little more if you turn up the bounces or the base ray budget.

A lot of the reason other games weren't chosen, and/or haven't yet been updated, comes down to the availability of source code and complex licensing situations. Even with a source licence, you don't usually have any rights to, or access to, the original game level data. And in games which use other techniques for lighting, you'd have to go in and edit every situation which doesn't play nicely with ray tracing, replace all the light sources by hand, then recompile/repackage the world data and redistribute it - which is both a huge task and, in many cases, a legal nightmare even if the developer/publisher was on board.

EDIT: Given the availability of the level editing tools, etc. for Quake 2, and that it's relatively easy to add at least basic support for high-resolution meshes, it's a bit poor that Nvidia didn't have people go in and re-make at least 1-2 levels to really show off ray tracing at a modern level of detail.
 
I think it'll be either identical in performance to the 3090, or slightly faster.

[facepalm reaction GIF]
 
I mean, that doesn't seem that ridiculous to me. Nvidia have generally made the x80 Ti slightly faster out of the box than the Titan of its generation (usually by giving it higher clocks), and openly marketed it as being so.

https://www.eurogamer.net/articles/digitalfoundry-2017-gtx-1080-ti-finally-revealed

It wouldn't be much of a shock if the 3080 Ti is slightly faster than the 3090 at stock, given it won't be dedicating as much of its power budget to hungry GDDR6X, so will be able to have a higher core clock. Especially since the 3090 FE's rated boost clock is only 1.7GHz. Of course, the 3090 will be on par or slightly better clock for clock, just as the Titans were once you overclocked them (bar the very first Titan).
 
Hopefully we'll see 3080 Ti clocks being really high, through binning/optimisations over the older 3080/3090 silicon.

The only way it could be better is if it was a die shrink, which was talked about a while back, but still on Samsung.

I am not going to repeat what I said in the other thread to the other delusional user, but any 3080/Ti/S is a core that failed to be a 3090; the 3090s have the better silicon, and a lot more of them hit 2100-2200MHz core than 3080s do.
 
It wouldn't be much of a shock if the 3080 Ti is slightly faster than the 3090 at stock, given it won't be dedicating as much of its power budget to hungry GDDR6X, so will be able to have a higher core clock.


GDDR6X is 15% more power-efficient than GDDR6.

"Speaking of power, it is necessary to note that because of considerably increased performance, GDDR6X is 15% more power-efficient than GDDR6 (7.25 pj/bit vs 7.5 pj/bit) at the device level, according to Micro"


Micron Reveals GDDR6X Details: The Future of Memory, or a Proprietary DRAM? | Tom's Hardware (tomshardware.com)
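
Worth putting numbers on that quote - a quick back-of-envelope conversion (my own arithmetic, with rated bandwidths assumed, not from the article) of those pJ/bit figures into watts:

```python
# Back-of-envelope only (my arithmetic, not from the article): turn Micron's
# device-level energy-per-bit figures into watts at each card's rated memory
# bandwidth (3080: 19Gbps x 320-bit = 760 GB/s; 3090: 19.5Gbps x 384-bit =
# 936 GB/s). This excludes the GPU-side memory controller and PHY, so the
# real memory-subsystem draw is higher.
GDDR6X_PJ_PER_BIT = 7.25   # figures quoted above
GDDR6_PJ_PER_BIT = 7.50

def dram_watts(bandwidth_gb_s, pj_per_bit):
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12   # pJ -> J, per second -> W

print(f"3080, 760 GB/s: {dram_watts(760, GDDR6X_PJ_PER_BIT):.0f} W")   # ~44 W
print(f"3090, 936 GB/s: {dram_watts(936, GDDR6X_PJ_PER_BIT):.0f} W")   # ~54 W

# Micron's 15% figure folds in GDDR6X's higher per-device bandwidth; the raw
# per-bit gap (7.25 vs 7.5 pJ) is only ~3%. The 3090 still spends more total
# watts on memory because it moves more bits across far more memory devices.
```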
 
So basically the only VRAM "problem" I've read about was in Horizon Zero Dawn, and that same texture problem ended up being reproduced on 16GB AMD GPUs anyway? Sounds to me like most people are just overthinking this VRAM debate.
 
10GB is not ideal on a new high-end GPU today (I know why they did it - the bus width to VRAM relationship), but that being said, some don't know the difference between VRAM actually being used and VRAM just being cached (see the sketch below).

It's one reason I decided that if I did buy at launch I wouldn't buy a 3080 (I had a 1080 Ti and a Titan Xp, so more VRAM) and would go for the 3090 instead. Like many, I had no luck at launch, then got lucky in mid January and got an FE from Nvidia's partner.
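
On the used-vs-cached point: the figure nvidia-smi (and overlays that read the same counters) reports comes from NVML, and it's allocated VRAM, not the working set a game actively touches - engines will happily cache textures just because the space is there. A minimal sketch, assuming the nvidia-ml-py (pynvml) package is installed; the helper name is mine:

```python
# Minimal sketch: query what NVML reports for VRAM. 'used' here counts every
# allocation - including textures a game has cached "just in case" - not the
# memory it actually needs resident this frame. Assumes the nvidia-ml-py
# package (pip install nvidia-ml-py); report_vram is my own helper name.
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

def report_vram(index=0):
    nvmlInit()
    try:
        info = nvmlDeviceGetMemoryInfo(nvmlDeviceGetHandleByIndex(index))
        print(f"total: {info.total / 2**30:.1f} GiB, "
              f"allocated: {info.used / 2**30:.1f} GiB")
    finally:
        nvmlShutdown()

report_vram()
```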
 
Any 3080/Ti/S is a core that failed to be a 3090; the 3090s have the better silicon, and a lot more of them hit 2100-2200MHz core than 3080s do.

You're assuming a lot there. We have no idea on the binning for the 3080 Ti chips; they could be binned to reach really high clocks. They have such a similar core count and memory bandwidth to the 3090 that it wouldn't take much to outperform one.
 

The 3080 Ti FE will be the one to get: 3090 (or better) gaming performance for £750-800.
 
Again, it will not beat a 3090 and it will not cost £750-800, no way. And a 3090 user who got lucky at launch has been enjoying it since Sep 2020.
 
Nvidia have generally made the x80 Ti slightly faster out of the box than the Titan of its generation (usually by giving it higher clocks), and openly marketed it as being so.


That's because Titans were all 250W, and only the RTX Titan went up to 280W. Titans are aimed at workstations and are always kept at power levels that don't cause cooling or power-delivery issues in workstations.

The 3090 is "Titan class" but not a Titan, and doesn't stick to the 250W power levels. The reason you saw Tis beating Titans in the past is that the Tis had much higher power limits than a Titan. This time, remember, the 3090 FE can use 420W and AIB cards go all the way up to 500W (twice the power any Titan would be allowed to use).

The 3080 Ti will not beat a 3090, as the 3090, again, is not a Titan capped at 250W. If a Titan comes out it will be 250-280W max and slower than a 3090. Also, the clocks on Ampere are not going any higher than they are now on the 3080 and 3090, as we have seen, no matter what cooling you put on them; all better cooling does is make the clocks steadier, it doesn't raise the peaks by much. So all of this points to the Ti being a little faster than a 3080 and a little slower than a 3090 overall.
 
How is it "fake" if you can buy them for MSRP? Most people here who have signed up to the dc etc. alerts have been pretty successful in getting them; I was signed up for about 3 weeks before bagging one in Jan/Feb. AMD on the other hand... good luck even finding one in the first place.

VRAM aside - a 3080 isn't fast enough for 4K max settings at 60+ fps in all new games. The 3080 Ti will be a welcome upgrade, assuming it has a good 15% more performance.

Also, I don't consider DLSS to be worth using in the vast majority of games, as it makes the image look a blurry mess. The only games I'd ever consider using it in would be really fast-paced competitive shooters, where fps matters above all.
For the likes of Cyberpunk, yup, a 3080 and even a 3090/6900 XT isn't enough at 4K unless you turn down a couple of settings, with no ray tracing.
 