
3090 is NOT a gaming card - Gamers Nexus DESTROYS the 3090

I think that the main problem here is that somehow a card that is not meant for gaming is being judged in gaming terms. The 3090 has some awesome value for certain applications, while for others (gaming) it is not so special compared with a 3080. When the card was presented it was made pretty clear that it had a different scope. That said... is it really so important? Some people will definitely need it, some people are rich and do not care about that amount of money (or money at all), and some others are just a bit happier if they buy it, and that is also totally fine :)

It is a gaming card. It really has no other purpose. It is not a Titan. It has no compute.

Nvidia knows 99% of these cards will be sitting in gaming PCs. Nvidia basically sat on stage and lied.

Steve on GN explains it well.
 
No it isn't.

One is second hand and one is new. It's desperate to bring in used values to justify 3090 prices.

2080 Tis have been sold off for half price in a lot of places, so the same stands based on new prices. Why would anybody spend £200 to £300 more on a 3080 for a 5-10% gain? Mental, I tell ya!
 
At 480W you are severely limited by thermals. Once the watercooled 480W Strix 3090 results are out, I think the % differences will widen.

Correct. Both of the Strix 3090s running 480W were losing 5-10% of their performance because the temps hit 80°C. Keep it under 50°C and you should see a constant 2100MHz+ boost, and then they will shine.
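
For anyone wanting to see whether their own card is doing this, one way is to log temperature and SM clock side by side while a benchmark runs. Here's a minimal sketch, assuming `nvidia-smi` is on the PATH; the query fields are standard, but the one-minute duration and single-GPU assumption are just choices for illustration:

```python
import subprocess
import time

# Poll nvidia-smi once per second and print temperature, SM clock and power draw,
# so you can see whether the boost clock sags as the core heats up.
QUERY = "temperature.gpu,clocks.sm,power.draw"

def sample():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    first_gpu = out.splitlines()[0]            # first GPU only, if there are several
    temp, clock, power = (v.strip() for v in first_gpu.split(","))
    return int(temp), int(clock), float(power)

if __name__ == "__main__":
    for _ in range(60):                        # roughly a minute of samples
        temp, clock, power = sample()
        print(f"{temp:3d} C  {clock:4d} MHz  {power:6.1f} W")
        time.sleep(1)
```

If the clock column sags while the temperature climbs, that's the thermal behaviour described above; under water, the clock line should stay flat.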
 
It is a gaming card. It really has no other purpose. It is not a Titan. It has no compute.

Nvidia knows 99% of these cards will be sitting in gaming PCs. Nvidia basically sat on stage and lied.

Steve on GN explains it well.

Well, I need to disagree there. For example, my primary use for that card will be molecular dynamics with big protein complexes. Currently, I have a 2080 Ti doing the job (combined with a poor old 980 Ti) and both are always on the edge regarding RAM. Also, the performance in the apps I use depends exclusively on the number of CUDA cores for a given amount of memory. There is a sweet spot and the 3090 totally hits it. Another great scenario for this GPU will be cryo-EM data analysis, something that has similar requirements and has been done until now with 1080 Tis or 2080 Tis, because they are cheap. A lot of these cards actually sit in universities, even if many others are in gaming PCs, and the 3090's contribution to research is going to be great :)
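
To make the "CUDA cores for a given amount of memory" point concrete, here's a rough back-of-envelope comparison. The core counts, VRAM sizes and prices below are approximate launch figures I've filled in for illustration, not numbers from this thread, so treat them as placeholders:

```python
# Rough value comparison for CUDA-bound, VRAM-hungry workloads.
# Specs/prices are approximate launch figures, used purely as placeholders.
cards = {
    #            CUDA cores, VRAM (GB), approx. price (GBP)
    "2080 Ti": (4352, 11, 1099),
    "3080":    (8704, 10, 649),
    "3090":    (10496, 24, 1399),
}

print(f"{'card':8} {'cores/GB':>9} {'cores/£':>9} {'GB/£':>7}")
for name, (cores, vram, price) in cards.items():
    print(f"{name:8} {cores / vram:9.0f} {cores / price:9.1f} {vram / price:7.3f}")
```

The 3080 wins on cores per pound, but that stops mattering the moment the data set no longer fits in 10-11GB; past that point, the 24GB card is the only consumer option, which is the sweet spot being described above.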
 
Well, I need to disagree there. For example, my primary use for that card will be molecular dynamics with big protein complexes. Currently, I have a 2080 Ti doing the job (combined with a poor old 980 Ti) and both are always on the edge regarding RAM. Also, the performance in the apps I use depends exclusively on the number of CUDA cores for a given amount of memory. There is a sweet spot and the 3090 totally hits it. Another great scenario for this GPU will be cryo-EM data analysis, something that has similar requirements and has been done until now with 1080 Tis or 2080 Tis, because they are cheap. A lot of these cards actually sit in universities, even if many others are in gaming PCs, and the 3090's contribution to research is going to be great :)

Yes, all GPUs are capable of GPU-accelerated tasks. But why would you not buy a Tesla or Quadro? If cost is an issue then, again, the 3080 makes more sense unless you somehow need the VRAM.
 
Well, I need to disagree there. For example, my primary use for that card will be molecular dynamics with big protein complexes. Currently, I have a 2080 Ti doing the job (combined with a poor old 980 Ti) and both are always on the edge regarding RAM. Also, the performance in the apps I use depends exclusively on the number of CUDA cores for a given amount of memory. There is a sweet spot and the 3090 totally hits it. Another great scenario for this GPU will be cryo-EM data analysis, something that has similar requirements and has been done until now with 1080 Tis or 2080 Tis, because they are cheap. A lot of these cards actually sit in universities, even if many others are in gaming PCs, and the 3090's contribution to research is going to be great :)

I remember the University of Nottingham, who work with the Computerphile guy, had a rig with a few high-end GPUs in it, so yeah, it wouldn't surprise me that the clusters being used for science are rather handy, especially for projects. I guess the only downside to this is that they are eating up the numbers when it matters, like the recent 3080 release; not quite fair if a bulk order of 50 units goes through when you've got many OcUK customers on a fat waiting list. :o
 
Yes, all GPUs are capable of GPU-accelerated tasks. But why would you not buy a Tesla or Quadro? If cost is an issue then, again, the 3080 makes more sense unless you somehow need the VRAM.

Because you totally need the VRAM :). VRAM has been the bottleneck for these tasks in academia since forever. Also, academic apps do not tend to use the features offered by Teslas or Quadros in many cases, because they are simply too expensive to buy. It is better to split the workload between the CPU and the GPU if you really need to. For the price of a couple of GV100s you can, for example, get a 72-core machine with four 2080 Tis and 384GB of RAM. This makes much more sense for most labs.

When you move to a company everything changes, and yes, Quadros and Teslas are the norm. In our case we have big fat Supermicros, each one with four GV100s. The problem is that each one of these boxes is more than 70K, and academic labs cannot afford them. In industry you also have a different range of applications, and in many cases Teslas or Quadros are the only GPUs supported (luckily, not always).

In my case, I have been working from home since March, preparing things that will then be run on clusters. A Threadripper with a 3090 and a 2080 Ti gives me a lot of value for little money (compared with a professional GPU!) and is also silent, which is why I welcomed the card so much in this strange year :).
 
Yes, all GPUs are capable of GPU-accelerated tasks. But why would you not buy a Tesla or Quadro? If cost is an issue then, again, the 3080 makes more sense unless you somehow need the VRAM.

Because not all software needs a Quadro card to run; some runs really well on standard desktop parts, and CUDA core counts are king here.

It's true there is a 48GB Quadro version of the 3090 coming, but it's also going to be 50-100% more expensive, so you have to weigh up the cost/benefit, since the Quadro won't have any extra cores, just more VRAM.
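
As a quick sanity check on that cost/benefit point, using the "50-100% more expensive" range and the fact that the core count stays the same: every extra pound only buys VRAM. A tiny sketch, where the £1,400 base price is an assumed round figure for illustration rather than a quoted price:

```python
# Cost per CUDA core if the 48GB workstation card shares the 3090's core count
# but costs 50-100% more. The GBP 1,400 base price is an assumption for illustration.
cores = 10496
base_price = 1400          # assumed 3090 price, GBP
for markup in (1.5, 2.0):
    price = base_price * markup
    print(f"{markup:.1f}x price: £{price:,.0f} -> £{price / cores * 1000:.1f} per 1,000 cores, "
          f"£{(price - base_price) / 24:.0f} per extra GB of VRAM")
```

So the premium only makes sense if the 24GB ceiling is actually what's stopping you, which is the same conclusion as above.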
 
I remember the University of Nottingham, who work with the Computerphile guy, had a rig with a few high-end GPUs in it, so yeah, it wouldn't surprise me that the clusters being used for science are rather handy, especially for projects. I guess the only downside to this is that they are eating up the numbers when it matters, like the recent 3080 release; not quite fair if a bulk order of 50 units goes through when you've got many OcUK customers on a fat waiting list. :o

At uni, at least at Imperial College, ordering those was always a long process. If the computer was more expensive than 10K (I think it was 10K), you had to offer the assembly to a few different companies and get some quotes from them. At the end, you had to choose the most competitive one. We also had to go for reference cards, so, at least in that regard, I am sure we were not taking stock from the normal retailers :D.
 
At uni, at least at Imperial College, ordering those was always a nightmare. If the computer was more expensive than 10K, you had to offer the assembly to a few different companies and get some quotes from them. At the end, you chose the most competitive one. We also had to go for reference cards, so, at least in that regard, I am sure we were not taking stock from the normal retailers :D.

Yeah, but exciting **** all the same. I love that stuff; fair play for putting them to good causes too!
 
Because not all software needs a Quadro card to run; some runs really well on standard desktop parts, and CUDA core counts are king here.

It's true there is a 48GB Quadro version of the 3090 coming, but it's also going to be 50-100% more expensive, so you have to weigh up the cost/benefit, since the Quadro won't have any extra cores, just more VRAM.

Yes, it seems that accuracy is the most important difference between the two. The 3090 has no error detection, because a wrong pixel doesn't affect anything. If you are running math on a consumer card you might be adding errors into the computations. That, and the fact it has double the TFLOPS.

Cost/benefit is exactly why I'm against the 3090.

The ironic thing about the 3090 is that, while it's not a rule, we can clearly generalise here: the knowledge base of 3090 users seems to be such that many don't really have an enthusiast-level understanding of how these cards work, as witnessed by the complete lack of understanding of VRAM and the future-proofness of this card. Many seem to have more money to burn than knowledge of how graphics works. This isn't a slight or an insult, it's just an observation.

What will likely happen is that someone with a 3090 OC Strix will just max out the settings in games, because that's what people tend to do, especially when they feel they have to get the highest fidelity and a return on their investment.

A 3080 OC Strix in the hands of someone who knows what they are doing will massively outperform someone with a 3090 who doesn't know what they are doing.

For example, someone who runs:

An RTX 3080 with an Intel CPU at 5.2GHz and C14 3800MHz RAM, a 1TB Pro NVMe SSD and a clean, optimised Windows install, and who understands what the graphics settings actually do. They will be able to run significantly higher FPS with no loss of visuals and likely less input lag.

I was able to increase the FPS by over 70% just by tweaking the settings, with no loss of visual fidelity, and the massive increase in responsiveness due to the high FPS and lack of input lag made far more difference than the 5-10 frames from more power.

This is like going to the Nürburgring in a 500HP Mustang and seeing a 400HP Cayman wipe the floor with your times.

"All the gear and no idea"
 
I am annoyed that they only managed to get a few built for reviewers before the holiday shutdown, and Asus are saying large quantities won't be here until November, with just odd small shipments before then. Clearly nowhere near enough to clear the backlog of pre-orders.

So it's either wait for the 3090 Strix until perhaps November, or succumb and buy another brand that's in stock before then.
 
Yes, it seems that accuracy is the most important difference between the two. The 3090 has no error detection, because a wrong pixel doesn't affect anything. If you are running math on a consumer card you might be adding errors into the computations. That, and the fact it has double the TFLOPS.

That problem was considered as soon as scientific apps started being developed with gaming cards in mind, yes (back in the days of the 8000 series, if not earlier). You can work around it by writing the proper algorithms and setting up the right controls. From the beginning it was also clear that double precision would not be available in most cases, which is why most academic software splits the calculations between the CPU and the GPU. Then you have the commercial software. In some cases it only supports Quadros and Teslas, but it is not infrequent to be able to run it on consumer GPUs too. It depends on the implementation, though. Personally speaking, I have never found a consumer GPU giving me errors :).
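
To illustrate the kind of "right controls" being described: a common pattern is to do the bulk single-precision work on the GPU and then spot-check (or finish) the numerically sensitive part on the CPU in double precision. A minimal sketch, assuming CuPy and a CUDA-capable GPU are available; the matrix sizes, sample size and tolerance are arbitrary choices for illustration:

```python
import numpy as np
import cupy as cp   # assumes a CUDA-capable GPU with CuPy installed

def checked_matmul(a, b, rtol=1e-4):
    """Do the heavy lifting in float32 on the GPU, then verify a small
    random sample of rows against a float64 CPU reference."""
    gpu_out = cp.asnumpy(
        cp.asarray(a, dtype=cp.float32) @ cp.asarray(b, dtype=cp.float32)
    )

    # Spot-check a handful of rows in double precision on the CPU.
    rows = np.random.default_rng(0).choice(a.shape[0], size=8, replace=False)
    cpu_ref = a[rows].astype(np.float64) @ b.astype(np.float64)
    if not np.allclose(gpu_out[rows], cpu_ref, rtol=rtol):
        raise RuntimeError("GPU result drifted outside tolerance; rerun or fall back to CPU")
    return gpu_out

if __name__ == "__main__":
    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)
    print(checked_matmul(a, b).shape)
```

It's no substitute for ECC, but it catches gross errors, and it mirrors the CPU/GPU split described above.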
 
Do these reviewers get to keep the cards? I mean, in such circumstances you would think they'd have to send them back ASAP so they can be tested and ready to replace RMAs etc. Some of the shillers don't deserve such a card when the general public are paying for them up front and having to wait.

Often they do, yes.
 