GT200 GTX280/260 "official specs"

Alright guys,

Obviously these cards are going to be CPU limited to some extent, but has anyone got any idea how much? Say with a 6420 @ 3.4GHz? Will it be enough lol!
 
I have an [email protected] and I think even that is gonna limit me lol :D

No, the glass is half full :p. Show some optimism :D.

What resolution are you guys talking about? I don't think 1920x1200 will be CPU limited. I can go a little higher than that, so I might not need to upgrade my CPU if I end up getting this over the upcoming ATI cards.
 
There is no simple correlation between CPU speed, GPU power, and "CPU limitation". It depends on the application, the resolution and other settings you're running, and the particular part of the game you're in.

In simplified terms:
When processing a frame, both the CPU and GPU have work to do. If the CPU finishes first, it stays idle until the GPU finishes its workload ('GPU limitation'). If the GPU finishes first, it stays idle until the CPU finishes ('CPU limitation').

Now, the CPU load of a particular game-scene is usually relatively fixed, but will vary massively from one scene to the next (depending on the number of enemies on screen, number of physics objects, other background game logic etc). The GPU-load can be altered through resolution, AA, and the number of graphical effects you enable. So the trick is to find, for a particular game, a good balance between GPU and CPU load.

In short, CPU limitation is a good thing. It means you're already running at the maximum pace of the game. If you're also already running at the maximum resolution of your monitor, with full effects and AA, then you're getting the best possible experience. The only way to improve the framerate would be to get a faster CPU, and we all know that the difference between CPUs is small compared to the difference between GPUs (especially since so few games use more than one CPU core effectively). BUT - it's all massively application dependent. Crysis, running at 1920*1200, will still be hugely GPU-limited, even with these new cards. HL2 at the same resolution, however, will likely be CPU-limited for the vast majority of game scenes, even with full AA applied.
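A rough way to picture that (a toy model only, with made-up numbers; no real engine works exactly like this) is that each frame takes as long as whichever of the CPU or GPU is slower:

def fps(cpu_ms, gpu_ms):
    # Toy model: the faster unit sits idle waiting for the slower one,
    # so the frame takes as long as the slower of the two.
    frame_ms = max(cpu_ms, gpu_ms)
    return 1000.0 / frame_ms

# Made-up numbers, purely for illustration:
print(fps(cpu_ms=16.0, gpu_ms=25.0))  # ~40fps, GPU-limited: lowering settings would help
print(fps(cpu_ms=16.0, gpu_ms=8.0))   # ~62fps, CPU-limited: raising resolution/AA
                                      # (pushing gpu_ms up towards 16) costs nothing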
 
n1 duff-man, I am concerned tho for my measly E6300 @ 3.15GHz!

You'll be okay. Worst case scenario is that you get to up the resolution, graphics and detail settings to max without seeing any drop in performance. Of course, if you're running at a relatively low resolution due to your monitor, then a slightly cheaper card (like one of the ATI ones) might be a better bet.
 
Duff-man, I'm at 1680x1050 on my 226BW, and I like 2x AA if I can get away with it (HL2, COD4). I thought that if you were 'CPU limited' instead of GPU limited, all that would happen is your FPS value would be lower, but you'd still get consistent smoothness regardless.

So instead of 80fps constant I'd get 60fps constant, but still have a great gaming experience?
 
Like I say, unfortunately it's game-dependent, and also changes scene-by-scene.

Let's consider first a highly CPU-limited game scene. You have an assload of physics objects interacting, with a load of enemies on-screen, and you get (say) 40fps at this point. If you have a stupidly powerful GPU (like one of the GT200s will be), then you can likely use whatever resolution, detail and AA settings you like without affecting your framerate. You get 40fps regardless, because the GPU will always finish its workload first.

Now let's consider a highly GPU-limited game scene. Not much is happening with regard to physics and AI, so the CPU isn't doing a lot of work here. If you have a mid-range card, you might see say 30fps here at a set resolution. With a high-end card this might go up to 60fps, and with one of these new beasts you might get 120fps (say). Once again, you could up the detail levels with the more powerful card and still see framerates above the 60Hz threshold that your monitor can display.

This is partially the reason why a card which is *twice as powerful* does not often show twice the *average* framerate in benchmarks. Within that benchmark there will likely be a mix of CPU-limited and GPU-limited scenes. So, while the new card might get double the framerate in the GPU-limited areas, it gets roughly the same framerate as the old card in the CPU-limited areas. So, the overall average framerate is somewhere between 1x and 2x that of the old card, depending on how many CPU-limited scenes there are. This is also the reason that nvidia/ATI marketing benchmarks usually show much bigger improvements than real-world benchmarks. They choose 'flyby' type benchmarks which are highly GPU-limited to show off the potential power of their new GPUs. In the real world, you have to appreciate that in some game scenes, having a faster GPU will do nothing to improve your framerate.
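To put some made-up numbers on that averaging effect (an illustration only, not a real benchmark):

scenes = [
    # (cpu_ms, gpu_ms on the old card) for each scene in the benchmark
    (25.0, 12.0),   # CPU-limited scene: lots of AI and physics
    (8.0,  33.0),   # GPU-limited scene: heavy effects, little game logic
]

def avg_fps(scenes, gpu_speedup=1.0):
    # Each frame takes as long as the slower of CPU and GPU; the average
    # framerate is total frames divided by total time.
    frame_times = [max(cpu, gpu / gpu_speedup) for cpu, gpu in scenes]
    return len(frame_times) / (sum(frame_times) / 1000.0)

print(avg_fps(scenes))                   # old card: ~34fps average
print(avg_fps(scenes, gpu_speedup=2.0))  # "twice as powerful" card: ~48fps,
                                         # well short of double, because the
                                         # CPU-limited scene doesn't speed up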
 
Here comes some big pics of NVIDIA GeForce GTX 280

[Attached images: GTX280a.jpg, GTX280d.jpg, GTX280b.jpg, GTX280c.jpg]
 
When do you guys think games will require more horsepower?

I think the current cards handle everything bar Crysis very nicely, unless you mean at 2560x1600 and so on.

I'm thinking Far Cry 2? I'm not upgrading from my 8800GT just for Crysis, because the game sucks.
 

Okay, from those pics I did a quick estimate of the card-length:

The 8800GTX is 2.95 times the length of the main section of the PCI-e slot.

From that diagram, I get that the GTX280 is 3.09 times the length of the main section of the PCI-e slot.

From this, we can surmise that the GTX280 is just slightly longer than the 8800GTX. Subject to my MS Paint skills :p In any case, it's very similar in size to the 8800GTX, and probably just slightly longer. If someone knows the actual length of the 8800GTX, or of the main section of the PCI-e slot, we can estimate how much longer the card is.
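For what it's worth, plugging in the commonly quoted 8800GTX length of roughly 267mm (10.5in) and the eyeballed ratios above gives something like this (a rough estimate only):

GTX_8800_MM  = 267.0   # widely quoted ~10.5in length of the 8800GTX
RATIO_8800   = 2.95    # 8800GTX length / PCI-e slot section (eyeballed above)
RATIO_GTX280 = 3.09    # GTX280 length / PCI-e slot section (eyeballed above)

slot_mm   = GTX_8800_MM / RATIO_8800   # ~90mm for the main slot section
gtx280_mm = slot_mm * RATIO_GTX280     # ~280mm estimated for the GTX280

print(round(slot_mm), round(gtx280_mm))  # i.e. roughly a centimetre longer than the 8800GTX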
 
This could be a hot product, in several ways. There certainly seem to be a lot more components, and more of the PCB used up, than on the G80.
 
Neither of them has anywhere near the number of capacitors the 9800 GTX has, though:

[Attached image: 9800gtx185274f97an3vz2.jpg]


Edit: Now that I think about it, all that voltage regulation gear on the 9800 GTX is probably there for a single aim: to shove enough voltage through that GDDR3 to keep it at 2.2GHz effective.
 