
NVIDIA ‘Ampere’ 8nm Graphics Cards

I wonder if this new gen is going to be good value for money like the 2000 series was.

What I mean by this is that some people bought a 2080 Ti when it was first released and have now sold it for not much less than they paid for it.

It's not surprising in any generation: if a card is still top dog it sells for top money, otherwise cards lose half or more of their value within two years of release. 300 or thereabouts a year in GPU depreciation is not good value, and that comes from someone who still owns one.
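
To put a number on that depreciation point, here is a rough sketch; the purchase and resale prices below are assumptions for illustration, not figures quoted by anyone in this thread.

    purchase_price = 1100   # assumed price paid for a 2080 Ti at launch
    resale_price = 500      # assumed resale value two years later
    years_owned = 2

    total_loss = purchase_price - resale_price
    loss_per_year = total_loss / years_owned

    print(f"Total depreciation: {total_loss}")            # 600
    print(f"Depreciation per year: {loss_per_year:.0f}")  # ~300 a year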
 
I wonder if this new gen is going to be good value for money like the 2000 series was.

What I mean by this is that some people bought a 2080 Ti when it was first released and have now sold it for not much less than they paid for it.

If Ampere offers tiny performance increases while simultaneously charging huge price increases, your idea of "value" might continue for another generation.

However, if Ampere offers real value, like pretty much any generation before Turing, the emperor will have no clothes.
 
Hard to know, but if the reported power usage is anything to go by it'll be close to 390W per card, plus everything else in your system. 1200W should do, but I suspect 1600-2000W units are going to become the norm for SLI :(

Remember, that 390W is at reference clocks. Apparently they clock like bat ****, and that is why they have that 12-pin connector which can provide 600W. Nvidia may well underclock the crap out of these to make them look better from a power perspective. In fact, I am 90% sure that is exactly what they will do.

Speeds apparently are not an issue at all; it's the power consumption of these Samsung dies once they are overclocked that is.

If the 3080 uses a 12-pin then god help us when it comes to the 3090.
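
Here is a rough sketch of where a ~600W figure for the 12-pin connector could come from, and what a ~390W card means for PSU sizing; the per-contact current rating and the CPU/system wattages are assumptions for illustration, not official specs.

    # Sketch only: per-contact rating and system wattages are assumptions.
    VOLTAGE = 12.0          # volts on the supply contacts
    POWER_CONTACTS = 6      # half of the 12 contacts carry +12 V, half are ground
    AMPS_PER_CONTACT = 8.5  # assumed current rating per contact

    connector_limit = VOLTAGE * POWER_CONTACTS * AMPS_PER_CONTACT
    print(f"12-pin ceiling: ~{connector_limit:.0f} W")  # ~612 W, close to the quoted 600 W

    GPU_WATTS = 390   # reported reference-clock draw per card
    CPU_WATTS = 150   # assumed high-end CPU under load
    REST_WATTS = 100  # assumed fans, drives, RAM, motherboard
    HEADROOM = 1.3    # ~30% margin so the PSU isn't run flat out

    def recommended_psu(num_gpus: int) -> float:
        return (num_gpus * GPU_WATTS + CPU_WATTS + REST_WATTS) * HEADROOM

    print(f"One card:  ~{recommended_psu(1):.0f} W")  # ~830 W, so 1000-1200 W is comfortable
    print(f"Two cards: ~{recommended_psu(2):.0f} W")  # ~1340 W, hence 1600 W+ talk for SLI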
 
If Ampere offers tiny performance increases while simultaneously charging huge price increases, your idea of "value" might continue for another generation.

However, if Ampere offers real value, like pretty much any generation before Turing, the emperor will have no clothes.

They won't offer value. They will never offer value until every single one of them is matched by AMD and the price war begins. Which isn't happening. They may offer value compared to Turing, but that isn't saying much, is it?

They should offer lots of performance, but that remains to be seen. However, this co-processor, which could contain the tensor cores, would explain exactly why they are apparently so much better at RT.

Apparently the 3060 is as good at RT as the 2080 Ti. Remember though, I said as good at RT, not as fast! So any performance increase in RT will only take it so far because of the core itself. However, look at this pic.

[Image: Yy1rX6g.jpg]

Let me explain that. There's a GPU die, and a tensor core co-processor. That would explain why the 2080 Ti die is 772mm², yet apparently either the 3080 or 3090 is "only" 600-odd. Sure, there's some kind of shrink here, but we have no idea how dense Samsung's shrunk cores are (well, we could probably find out, but I will leave that to guys like AdoredTV etc). What I am saying is, if you combined both cores the die would have been so monolithic and big that the failure rates would have been incredibly high, and you would have got, like, two big dies working per wafer.

This way they can make the GPU dies on one wafer and the possible tensor core dies on another. Making the tensor cores smaller in die size means better yields all around. Same goes for the GPU die itself.
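
To see why splitting one huge die into two smaller ones helps, here is a minimal sketch using the standard Poisson yield approximation; the defect density and the split die sizes are assumptions picked purely for illustration.

    import math

    DEFECTS_PER_MM2 = 0.002  # assumed defect density; real 8 nm figures aren't public here

    def die_yield(area_mm2: float) -> float:
        # Poisson yield model: fraction of dies that land with zero defects.
        return math.exp(-DEFECTS_PER_MM2 * area_mm2)

    monolithic = 772                    # 2080 Ti class die, mm^2
    split_gpu, split_coproc = 600, 170  # hypothetical GPU die + tensor co-processor

    print(f"772 mm^2 monolith:     {die_yield(monolithic):.0%}")    # ~21%
    print(f"600 mm^2 GPU die:      {die_yield(split_gpu):.0%}")     # ~30%
    print(f"170 mm^2 co-processor: {die_yield(split_coproc):.0%}")  # ~71%
    # Each wafer now only has to yield one of the two smaller dies, so far
    # more usable chips come off every wafer than with one giant monolith.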

And see, that is why Turing was so expensive. I've said it before but people don't listen: Turing was *really* expensive to produce. The dies were honking great monoliths, the failure rates were really high, and the cost was high as well (being TSMC). So only part of it was Jen doing you up the Gary Glitter; the rest was solely down to costs. The 2080 Ti FE cooler cost them about $60 just to produce.
 
If the 3080 uses a 12-pin then god help us when it comes to the 3090.

I really hope this isn't the case, otherwise I may hold off for AMD's offerings. As I said before, if the 3080 turns out to be the best performance per watt I think it will be the card the masses opt for; hopefully a decent uplift from a 2080 Ti, cheaper :) and using standard ATX cables (i.e. 2x 8-pin PCI Express).
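
For what it's worth, "performance per watt" is just frame rate divided by board power; the numbers below are made up to show the comparison, not benchmarks.

    # Hypothetical figures purely for illustration; not benchmark results.
    cards = {
        "2080 Ti": {"fps": 100, "watts": 260},
        "3080":    {"fps": 130, "watts": 320},
    }

    for name, c in cards.items():
        print(f"{name}: {c['fps'] / c['watts']:.2f} fps per watt")
    # A faster card that draws proportionally more power is no win on this metric.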
 
I really hope this isn't the case, otherwise I may hold off for AMD's offerings. As I said before, if the 3080 turns out to be the best performance per watt I think it will be the card the masses opt for; hopefully a decent uplift from a 2080 Ti, cheaper :) and using standard ATX cables (i.e. 2x 8-pin PCI Express).

AMD will be no better dude. In fact, they could well be worse.

The reason? We are back to tanks. The 670/680 days are long gone now. Nvidia did what DX11 needed, but RT needs heavy hardware to throw it around. Hence the enormous performance drops on Turing, because they could only fit so much on the monolithic die and node.

The 5700 and XT are already quite power hungry, especially when OC, so don't expect dies twice that size to be any better. If you want RT you gotta pay the price for it.
 
BTW, to give you an idea of the VRM? Take that funny-looking board up there, mirror it, and then use the whole damn thing for phases. It looks like they are putting the VRAM, GPU die and co-processor on one board and everything else on the other lol.
 
And see, that is why Turing was so expensive. I've said it before but people don't listen: Turing was *really* expensive to produce. The dies were honking great monoliths, the failure rates were really high, and the cost was high as well (being TSMC). So only part of it was Jen doing you up the Gary Glitter; the rest was solely down to costs. The 2080 Ti FE cooler cost them about $60 just to produce.

Controlling costs is part of generational innovation.

If Intel were to find a manufacturing method which produced GPUs that beat Nvidia's performance by 30% but cost 500% more to manufacture, it would not make it to market. Intel would have to go back to the drawing board and come up with a better idea.

Now, Turing wasn't *that* ^ bad, but Nvidia should have considered that ray tracing might not be ready for prime time if it was going to cost so much and deliver so little.
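
Putting rough numbers on that Intel hypothetical; the baseline values are arbitrary, only the ratios from the paragraph above matter.

    # Arbitrary baseline; only the 30% / 500% ratios from the hypothetical matter.
    base_perf, base_cost = 100.0, 100.0

    new_perf = base_perf * 1.30        # beats the baseline by 30%
    new_cost = base_cost * (1 + 5.00)  # "costs 500% more" -> 6x the manufacturing cost

    print(f"Baseline perf per unit cost: {base_perf / base_cost:.2f}")  # 1.00
    print(f"New part perf per unit cost: {new_perf / new_cost:.2f}")    # ~0.22
    # Nearly 5x worse value per chip produced, which is why it never ships.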
 
Remember, that 390W is at reference clocks. Apparently they clock like bat ****, and that is why they have that 12-pin connector which can provide 600W. Nvidia may well underclock the crap out of these to make them look better from a power perspective. In fact, I am 90% sure that is exactly what they will do.

Speeds apparently are not an issue at all; it's the power consumption of these Samsung dies once they are overclocked that is.

If the 3080 uses a 12-pin then god help us when it comes to the 3090.

Then 6 months down the line they'll release a 3090 Super with the factory overclock.
 
Nah, that's only 3000W max. It will require its own dedicated 32A ring back to your fuse box. Or might it be three-phase?

Look on the bright side: at least you won't need to turn the heating on this winter. Summer is a different issue; maybe the hose off the air con unit could be hooked up to the exhaust fan on the PC case and shoved out the window. Global warming? What global warming?!
 