
10GB VRAM enough for the 3080? Discuss..

I'm reckoning £850, as it's more of a VRAM bump than anything else.
It's actually closer to the 3090 in CUDA core count, with the same memory bus, so I'd expect the price to reflect that. The 2080 Ti was £1k, so I'm expecting the 3080 Ti to be at least £1k, as it's basically a 3090 with half the VRAM.
 
I mean, that doesn't seem that ridiculous to me. Nvidia have generally made the x80 Ti slightly faster out of the box than the Titan of its generation (usually by giving it higher clocks), and openly marketed it as being so.

https://www.eurogamer.net/articles/digitalfoundry-2017-gtx-1080-ti-finally-revealed

It wouldn't be much of a shock if the 3080 Ti is slightly faster than the 3090 at stock, given it won't be dedicating as much of its power budget to hungry GDDR6X, so will be able to have a higher core clock. Especially since the 3090 FE's rated boost clock is only 1.7GHz. Of course, the 3090 will be on par or slightly better clock for clock, just as the Titans were once you overclocked them (bar the very first Titan).
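For a rough sense of scale on that power-budget point, here's a back-of-envelope sketch in Python. The per-module wattage is an assumption picked purely for illustration (GDDR6X modules are commonly estimated at around 2.5-3W under load), not a measured figure; the module counts come from the 24GB/12GB configurations discussed in this thread.

```python
# Back-of-envelope memory power comparison. ASSUMED_WATTS_PER_MODULE is an
# illustrative estimate, not a measured figure.
ASSUMED_WATTS_PER_MODULE = 2.75

cards = {
    "3090 (24GB, 24 x 1GB modules)": 24,
    "3080 Ti (12GB, 12 x 1GB modules)": 12,  # same 384-bit bus, half the chips
}

for name, modules in cards.items():
    memory_watts = modules * ASSUMED_WATTS_PER_MODULE
    print(f"{name}: ~{memory_watts:.0f} W on memory modules alone")

# On a similar total board power, the 12-module card has roughly 30 W more
# headroom to spend on core clocks at stock.
```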

I get ya. What I'm referring to is that there were a few gents on here slagging the 3090 (and rightly so on price at the time), yet they bought a 3080 and then mentioned 'I only got it to tide me over till a 3080 Ti is out'.

That to me is plain stupid. Either get the 3090 back in October and have all that performance from the get-go (and you'd have had seven months of gaming on it by now), or stick with the 3080 and have the best bang for buck. The 3080 Ti (without an FE offering) will be a complete rip-off and no better than the 3090 FE.

Correct, and you'd be 7 months late to the party and near in price to a 3090 FE. We said this in the other thread dedicated to it. :)

The only way it could be better is if it were a die shrink, which was talked about a while back, but it's still on Samsung.

I am not going to repeat what I said in the other thread to the other delusional user, but any 3080/Ti/S is a core that failed to make the cut as a 3090. The 3090s have the better silicon, and a lot more of them hit 2100-2200MHz on the core than 3080s do.

This guy gets it! :)
 
The 3080 Ti FE will be the one to get: 3090 (or better) gaming performance for £750-800.

I'd imagine the 3080 Ti will be the top mining card if the limiter gets cracked, as it should pull the same hash rate as a 3090 while using quite a bit less power, since it only has half the VRAM to feed.
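A minimal sketch of why the bandwidth argument works out that way, assuming the leaked 384-bit/19Gbps spec for the 3080 Ti and treating Ethash as purely memory-bandwidth-bound, with the 3090's roughly 120MH/s unrestricted rate used as the scaling anchor:

```python
# Bandwidth-bound mining estimate: Ethash hash rate tracks memory bandwidth,
# not VRAM capacity, so a same-bus-width card should land close to the 3090.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

cards = {
    "RTX 3090":             (384, 19.5),
    "RTX 3080":             (320, 19.0),
    "RTX 3080 Ti (leaked)": (384, 19.0),  # assumed spec from the leaks
}

MHS_PER_GB_S = 120 / 936  # scaled from the 3090's ~120 MH/s at ~936 GB/s (unrestricted)

for name, (bus, rate) in cards.items():
    bw = bandwidth_gb_s(bus, rate)
    print(f"{name}: {bw:.0f} GB/s -> ~{bw * MHS_PER_GB_S:.0f} MH/s estimate")
```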
 
Anyone expecting a 3080 Ti to surpass a 3090 is in for a big disappointment, though it should be fairly close going by the leaked specs.

That's realistic, and sure, we all knew it would be near 3090 performance. But to claim it will be faster AND cheap is a fairy tale.

That's one of the big turn-offs with these new mining-gimped versions: they still won't impact the scalpers, and the hardcore miners will unlock them within days, so it's a complete waste of effort on everyone's part.
 
GDDR6X is 15% more power-efficient than GDDR6.

"Speaking of power, it is necessary to note that because of considerably increased performance, GDDR6X is 15% more power-efficient than GDDR6 (7.25 pj/bit vs 7.5 pj/bit) at the device level, according to Micro"


Micron Reveals GDDR6X Details: The Future of Memory, or a Proprietary DRAM? | Tom's Hardware (tomshardware.com)

It's more power-efficient, but it still has higher power consumption because GDDR6X runs at faster speeds. I think Micron said something about GDDR6X at 21Gbps requiring 25% more power than 14Gbps GDDR6.
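Quick arithmetic on that point, using only the pJ/bit figures quoted above and the 14 vs 21Gbps rates, and treating them as the entire memory-interface energy cost (a big simplification, so the wattages are illustrative rather than a power spec):

```python
# Lower energy per bit does not mean lower power once the data rate goes up.
BUS_WIDTH_BITS = 384  # 3090-class bus, for illustration

def interface_watts(pj_per_bit: float, gbps_per_pin: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    bits_per_second = bus_bits * gbps_per_pin * 1e9
    return pj_per_bit * 1e-12 * bits_per_second

gddr6  = interface_watts(pj_per_bit=7.50, gbps_per_pin=14)
gddr6x = interface_watts(pj_per_bit=7.25, gbps_per_pin=21)

print(f"GDDR6  @14Gbps: ~{gddr6:.0f} W")
print(f"GDDR6X @21Gbps: ~{gddr6x:.0f} W ({gddr6x / gddr6 - 1:+.0%} despite the better pJ/bit)")
```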
 
Pardon.

What possible reason could you have for saying that? All modern games out right now run fine in 4K on the 3080. I have a 3080 and a 4K monitor, and I play all of my games in 4K, with no exceptions.

Salient point: 'games out right now'. We have new games pushing the bar constantly, never mind that a new console generation arrived in the same period as these new GPUs. They'll get a library soon enough; those titles will either be planned as cross-platform or demanded as ports, developed for console first and traditionally not always well optimised. Add the fact that few people upgrade each and every generation, and we'll see the 3080 dropping behind before its AMD peers do, RT and DLSS be damned (as neither will be ubiquitous or supported across all games/genres/IPs for a long time yet). Give it a generation or two and both will be much better, but by then AMD will be on par there too... and this is good for all of us (as long as stock goes up and scalpers and miners fade away).
 
G6X is 25% more efficient at the same clocks, right. But G6 ran at 14Gbps on the 2080 Ti and G6X at 21Gbps on the 3090, so yes, it's 25% more efficient, but power consumption still went up because they clocked G6X 50% higher.

Ampere is also more efficient than Turing, but you wouldn't know it at first glance because they pushed the power way past the efficiency curve.
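A toy illustration of the 'pushed past the efficiency curve' point, assuming dynamic power scales roughly with V²·f; the voltage/frequency pairs below are made-up round numbers for the sketch, not real Ampere figures:

```python
# Higher clocks need more voltage, so the last few hundred MHz cost
# disproportionate power and perf/W falls off.
points = [
    # (core clock GHz, core voltage V) -- illustrative values only
    (1.4, 0.80),
    (1.7, 0.90),
    (1.9, 1.00),
    (2.1, 1.10),
]

base_clock, base_volt = points[0]
for clock, volt in points:
    rel_perf  = clock / base_clock                                # perf ~ linear in clock
    rel_power = (volt**2 * clock) / (base_volt**2 * base_clock)   # power ~ V^2 * f
    print(f"{clock:.1f} GHz @ {volt:.2f} V: perf x{rel_perf:.2f}, "
          f"power x{rel_power:.2f}, perf/W x{rel_perf / rel_power:.2f}")
```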
 
G6X is only 15% more power-efficient at the same clocks, according to Micron.

Since die shrinks don't bring the same kind of performance jumps they used to, both AMD and Nvidia have pushed up the power curve.
 
That can't possibly be a sane standard, though? If some hypothetical game in the future won't run at 4K on max settings, it's not 4K-capable? Where do you draw the line, precisely? You could argue that the 3080 isn't even an 800x600-capable card, because in future there will be some game so demanding that even at that resolution the 3080 won't cope.

The new consoles have pretty lackluster GPUs in them; they're something in the order of a 2060, which is significantly slower. And we do know from historic trends that for the first 2-3 years of a console's life, the PC and console versions of the same game tend to be more or less at parity graphics-wise; there just aren't enough people on PC with GPUs faster than the consoles during those years to justify additional work on the PC version above and beyond the console version. We tend to only see developers start taking strides to make the PC versions better once the consoles are in their death throes, 4-6 years into their life cycle, and the average PC gamer has a monster GPU going to waste. By which time we'll be on at least 5080s, if not 6080s, late in the lifespan.
 
The Series X feels like it's performing near the 3070, but that's just me.

I mean, it's not like consoles can be directly equivalent to PCs anyway.

The PS4 is supposedly equal to a 750 Ti. That's not true in 2017+ games, where it performs nearly as well as a GTX 770 (but that may be something specific to Nvidia). I'm sure that if the Series X performs near a 3070, it will outperform the 3080 4-5 years later.

Right now the 6700 XT can beat the 3070 in a lot of titles, and they're mostly on par.

The 6700 XT has 40 RDNA2 CUs at a 2.4GHz clock speed; the Series X has 52 RDNA2 CUs at a lower clock of around 1.8GHz.
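Rough paper-TFLOPs maths from those CU counts and clocks, using the standard RDNA2 arithmetic of 64 shaders per CU and 2 ops per clock for FMA:

```python
# Paper FP32 throughput: TFLOPs = CUs * 64 shaders/CU * 2 ops/clock * clock (GHz) / 1000
def rdna2_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"6700 XT:  {rdna2_tflops(40, 2.4):.1f} TFLOPs")    # ~12.3
print(f"Series X: {rdna2_tflops(52, 1.825):.1f} TFLOPs")  # ~12.1 (official 1.825 GHz clock)
# Near-identical paper throughput is why the two trade blows; real results also
# depend on memory bandwidth, cache and drivers.
```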

I would say the Series X can easily outperform the 3070 once next-gen console optimisations kick in.

(Finally, I don't think Infinity Cache would be needed on a console; it has a superior low-latency interconnect on a single chip anyway.)
 