
NVIDIA ‘Ampere’ 8nm Graphics Cards

8GB, it is pathetic... I said this in another thread: I don't need more GPU muscle necessarily, I have plenty of that; it's my 8GB buffer that's holding me back.

A card that's faster than mine but still has 8GB is frankly useless to me. As I said, I already have the GPU muscle I need; I can't turn the resolution up, because when I do, texture streaming spills over my 8GB buffer and chokes on my NVMe drive.

My understanding may be wrong, and please correct me if it is, but if you have the same amount of RAM yet the bus size and bandwidth are significantly bigger, then surely that is equivalent to a larger quantity of slower RAM? And if it takes advantage of PCIe 4.0, then that potentially increases bandwidth to the CPU, system RAM and NVMe storage as well? The same amount of RAM could be swapped back and forth much faster, couldn't it?
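To put some rough numbers on that, here's a quick back-of-the-envelope sketch. The 2080 Ti figures are the real specs; the 3080 ones are from the leaks, so treat them as assumptions:

Code:
# Peak memory bandwidth = bus width in bytes * per-pin data rate.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return (bus_width_bits / 8) * data_rate_gbps

# RTX 2080 Ti (known spec): 11GB GDDR6, 352-bit bus at 14 Gbps per pin
print(bandwidth_gb_s(352, 14.0))  # ~616 GB/s

# Rumoured RTX 3080: 10GB GDDR6X, 320-bit bus at 19 Gbps per pin
print(bandwidth_gb_s(320, 19.0))  # ~760 GB/s

So on those leaked numbers, the 10GB of faster memory would have roughly 23% more bandwidth than the 2080 Ti's 11GB, which is exactly the quantity-versus-speed trade-off being argued here.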
 

It really is a kick to the balls by Nvidia.

Shows what having a near monopoly does to a company.
 
A 10GB 3080 and then a jump to 24GB for the 3090 seems completely crazy, if this is true.

I think the GA102 must have yield problems. The RTX 3090 uses perfect dies, which cost a lot, so Nvidia is putting 24GB of VRAM on the cards to justify the pricing.
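For anyone curious why perfect dies would cost a lot, here's a toy yield calculation. The die area is the rumoured GA102 size, and the defect density is purely an assumption on my part, just to illustrate the point:

Code:
import math

die_area_cm2 = 6.28      # rumoured GA102 die, ~628 mm^2
defects_per_cm2 = 0.5    # assumed defect density, illustrative only

# Poisson yield model: fraction of dies with zero defects
perfect = math.exp(-defects_per_cm2 * die_area_cm2)
print(f"{perfect:.1%} of dies defect-free")  # ~4.3%

On a die that big, only a few percent come out flawless, which is why the top-end parts are scarce and priced accordingly.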

I would expect the RTX 3080, with 17% fewer shaders and 14GB less VRAM, to consume far less power.

To put that in context, the TU102 GPU in the RTX 2080 Ti was cut down far less compared to the Titan RTX: it only had 5.5% fewer shaders, a less cut-down memory bus and a smaller VRAM reduction.
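The shader maths behind those percentages, using the official Turing counts and the leaked Ampere ones (the GA102 numbers are still rumours at this point):

Code:
titan_rtx, rtx_2080_ti = 4608, 4352   # TU102 CUDA cores (official)
rtx_3090, rtx_3080 = 10496, 8704      # GA102 CUDA cores (leaked)

def cut(full, trimmed):
    return (full - trimmed) / full

print(f"2080 Ti vs Titan RTX: {cut(titan_rtx, rtx_2080_ti):.1%} fewer shaders")  # ~5.6%
print(f"3080 vs 3090: {cut(rtx_3090, rtx_3080):.1%} fewer shaders")              # ~17.1%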
 
It is still Nvidia's poor R&D; they made all the choices.

R&D is research and development. It is done before you go to "print" per se.

Are Samsung poop? Yeah, kinda. However, they are cheaper. Remember Jensen said how much cheaper it was, over a year ago?

Is it all bad? No, not at all. Well, I don't think it is, anyway. Mostly because the 3080 should be faster than the 2080 Ti yet costs £600 less. I really don't see that as bad. As for the 3090? Well, at least it is going to be really, really fast.

That is all anyone cares about. Like I mentioned yesterday, logic doesn't even come into it. It's 99% ego and the rest is pure gaming fun.

And we already knew they were Samsung 8nm. That has been known for months. They were originally meant to be on a 7nm node, but that fell through entirely (note how I keep putting the foundry next to the node?). Samsung 8nm is not the same as TSMC 7nm. At all. It is a very, very different process. Hence the wild power use.

As I said, though, none of that will matter, so long as they are as fast as being touted. 40% or more faster than a 2080 Ti is a win, no matter if you short the national grid. That is herculean power, man. And that isn't even taking into account how much better the RT performance will be, nor the DLSS, because of the many more Tensor cores.
 
If the 10GB variant is $800 then there's no way the 20GB one is less than $1000. Yuck. But the drop to the 3070 is so big... that doesn't feel like any better of a deal.

Nvidia's really got us over a barrel.
 

Yes, absolutely. The 3080 is probably not meant to be an upgrade from the 2080 Ti; I would say it's the upgrade to the 1080 Ti at the $800 price band. And if it equals the 2080 Ti, that is an exceptional jump from the 1080 Ti at that price point.

The 3090 is meant for those who just want the best and can pay for it: previous Titan owners, and possibly 2080 Ti owners, who have shown what they are willing to pay. And that's fine too. From the specs it looks like it will give a performance jump across the board, as it has more cores, higher clocks, more and faster RAM, and a wider, faster memory bus. And you'll pay for it too.



Why do you think you need 20GB of RAM? If 10GB of RAM is faster and on a faster bus than the 11GB currently in the 2080 Ti, then might that not be just fine?

I don't understand this fixation with quantity, and no one seems to want to explain it properly; they just keep regurgitating the same negativity. Surely quantity AND speed are both important factors in determining memory performance and its limitations.
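On the speed side of that, here's a rough sketch of what PCIe 4.0 buys you when a working set spills out of VRAM. The link rates are theoretical x16 maximums, and the 4GB spill is an assumed figure purely for illustration:

Code:
pcie3_gb_s = 15.75   # PCIe 3.0 x16, theoretical ~16 GB/s
pcie4_gb_s = 31.5    # PCIe 4.0 x16, double that

spill_gb = 4.0       # assumed overflow beyond VRAM, illustrative
print(f"PCIe 3.0: {spill_gb / pcie3_gb_s:.2f} s to re-stream")  # ~0.25 s
print(f"PCIe 4.0: {spill_gb / pcie4_gb_s:.2f} s to re-stream")  # ~0.13 s

Real transfers never hit the theoretical rate, but the ratio holds: the swap penalty roughly halves.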
 

If that's true it fits nicely into the range:

[image: table of the rumoured line-up, with the changes highlighted in yellow]
 
Selling your 2080 Ti for £600 or less is nuts. Seriously, don't do it. Sure, the RT performance may be much better, but that really isn't saying much. RT is a visual feature, not a necessity. Unless you insist on 4K ultra beyond 60 FPS (and I can't see why you would, but whatevs), don't bother.

Nvidia have you over no barrel. That is like saying a drug dealer has their addicts over a barrel: they choose to take the drugs and become addicts.
 