
Maybe three GTX 280s in Tri-SLI will be able to get a 30fps minimum on maxed-out Crysis...

I'm not taking shots at the GTX 280 here, I know it will be a lot better than the 8800 GTX. More than anything else, I was just surprised by how slowly current tech actually runs Crysis.

It will be better, but I doubt it will be double the speed, never mind three times.

Just look at raw stats vs a 9800GTX.

Depending on the efficiency of the new shader architecture it could comfortably beat the current G92 cards, but not by massive amounts.

For example, according to leaked specs the GTX 280 shader clock is ~1296MHz with 240 shaders.

My overclocked 9800GTX runs at 2000MHz with 128 shaders.


Assuming the GTX 280 uses shaders (stream processors) that can do 1 MADD (2 FLOPs) and 1 MUL (1 FLOP) per clock, the same as the G92, we get:

GTX 280 = (MADD (2 FLOPs) + MUL (1 FLOP)) x 1,296,000,000 x 240 = 933.12 GigaFLOPS (peak)

9800GTX OC = 3 x 2,000,000,000 x 128 = 768 GigaFLOPS (peak)

9800GTX Stock = 3 x 1,688,000,000 x 128 = 648.192 GigaFLOPS (peak)

Which makes the stock GTX 280 ~21% faster than an overclocked 9800GTX, and ~44% faster than a stock 9800GTX :|

Nothing to be sniffed at! But nowhere near 2x the performance, let alone 3x.
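
For anyone who wants to sanity-check the arithmetic, here's a quick Python sketch of the same peak-throughput sum (the 3 FLOPs/clock figure is just the MADD + MUL assumption above, and the GTX 280 numbers are still only the leaked specs):

    # Peak shader throughput = FLOPs per shader per clock * shader clock (Hz) * shader count
    def peak_gflops(flops_per_clock, clock_mhz, shaders):
        return flops_per_clock * clock_mhz * 1e6 * shaders / 1e9

    # 3 FLOPs/clock = dual-issue MADD (2 FLOPs) + MUL (1 FLOP), same assumption as above
    print(peak_gflops(3, 1296, 240))  # GTX 280, leaked specs -> 933.12
    print(peak_gflops(3, 2000, 128))  # 9800GTX overclocked   -> 768.0
    print(peak_gflops(3, 1688, 128))  # 9800GTX stock         -> 648.192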


Of course that is raw stats, and not taking into consideration the effect of the faster memory bus and any shader optimisations nVidia might have done, but even with those I can't see the card being better than 2x the performance of a current 9800GTX, max.
 
What are we going to talk about when these cards are released?? :p

My guess would be a lot of chortling and p***-taking aimed at ATI because they can't produce a card as fast as nVidia's top-end offering, which costs serious $$$.

It's my belief that ATI will settle for second place in the speed stakes and concentrate more on the price/performance ratio and the home cinema/integrated markets. Their integrated GPU on the 780G has no competitor for now and, speaking from experience, is simply the best solution for low-power, HD content playback in HTPCs.

Meanwhile, the laughers and pointers will carry on paying crazy prices for the top end hardware because nVidia know they have no competition for the foreseeable future and can gouge accordingly.

Maybe Intel will shake things up a bit further down the line.
 

Yes and No.

At anything below 1680x1050 they are very similar, or the 9800GTX is faster (due to its far higher shader and core clockspeeds).

At 1920+ or with tons of AA the 8800 is faster (due to more memory and higher memory bandwidth).
 
It will be better, but I doubt it will be double the speed, never mind three times.

Just look at raw stats vs a 9800GTX.

Depending on the efficiency of the new shader architecture it could comfortably beat the current G92 cards, but not by massive amounts.

For example, according to leaked specs the GTX 280 shader clock is ~1296MHz with 240 shaders.

My overclocked 9800GTX runs at 2000MHz with 128 shaders.


Assuming the GTX 280 uses shaders (stream processors) that can do 1 MADD (2 FLOPs) and 1 MUL (1 FLOP) per clock, the same as the G92, we get:

GTX 280 = (MADD (2 FLOPs) + MUL (1 FLOP)) x 1,296,000,000 x 240 = 933.12 GigaFLOPS (peak)

9800GTX OC = 3 x 2,000,000,000 x 128 = 768 GigaFLOPS (peak)

9800GTX Stock = 3 x 1,688,000,000 x 128 = 648.192 GigaFLOPS (peak)

Which makes the stock GTX 280 ~21% faster than an overclocked 9800GTX, and ~44% faster than a stock 9800GTX :|

Nothing to be sniffed at! But nowhere near 2x the performance, let alone 3x.

I think you're forgetting the other advantages of the new core, such as the increased number of raster operators. More rendering pipelines will boost AA performance since fillrate will be increased.
And then there's the doubling of the memory bandwidth; that alone will make a big difference for this card.
 
Who really cares? Just wait for the card to come out and try it. Speculating isn't going to achieve anything. My GTS plays Crysis at medium detail at 1680x1050 and it looks fine to me. Some of you are being a bit silly; spending over £700 on an SLI setup to play one game is mad.
 
I think we're all over Crysis aren't we?

Hands up if you're waiting for a new GFX card to play it? No? That's right, we've all played it, thought "hmm, those leaves look ****, wish I could have AA on", finished it and been a tad underwhelmed?

I know a lot of us have felt that way. I have to say I enjoyed going back to Far Cry and actually getting round to completing it (as I'd got distracted halfway through on release, plus it's a LOT bigger than Crysis) at 1920x1200 with full AA/AF and the paradise mods etc. on my 8800GTX, and I would argue it very nearly looked as good as Crysis, minus the whole environment-destruction thing.

I might go back to Crysis at some point and try it with full AA, but tbh the story is nowhere near as compelling as Far Cry's, so I'm not going to be motivated to buy a gfx card to play a game I didn't especially like in the first place!

Far Cry 2 could be the one to make me upgrade, IF the story is as good and it doesn't rely on the fact you can set fire to stuff to hide the fact it's repetitive, easy and quite dull.
 
I think you're forgetting the other advantages of the new core, such as the increased number of raster operators. More rendering pipelines will boost AA performance since fillrate will be increased.
And then there's the doubling of the memory bandwidth; that alone will make a big difference for this card.

Not forgetting that at all, hence:

Of course that is raw stats, and not taking into consideration the effect of the faster memory bus and any shader optimisations nVidia might have done, but even with those I can't see the card being better than 2x the performance of a current 9800GTX, max.

I did miss out the words 'and other' between 'shader' and 'optimisations'; that was a typo.


And as I said, I can't see the card being more than 2x the speed of a 9800GTX, even with all that taken into account.

But hey, you never know. I'd be delighted if it was; GPUs have been languishing for FAR too long.
 
Assuming the GTX 280 uses shaders (stream processors) that can do 1 MADD (2 FLOPs) and 1 MUL (1 FLOP) per clock, the same as the G92, we get:

GTX 280 = (MADD (2 FLOPs) + MUL (1 FLOP)) x 1,296,000,000 x 240 = 933.12 GigaFLOPS (peak)

9800GTX OC = 3 x 2,000,000,000 x 128 = 768 GigaFLOPS (peak)

9800GTX Stock = 3 x 1,688,000,000 x 128 = 648.192 GigaFLOPS (peak)

Which makes the stock GTX 280 ~21% faster than an overclocked 9800GTX, and ~44% faster than a stock 9800GTX :|

Nothing to be sniffed at! But nowhere near 2x the performance, let alone 3x.

Those approximations are in direct contradiction of nVidia's claim of three times the Folding@home performance of the 3870. Currently the 3870 roughly matches the 8800GTX in raw FPU performance (for scientific computing etc). For the GTX 280 to be pulling out such a lead implies significant efficiency improvements within the stream-processing structure.
 
I'm just going on specs and rumours that are currently floating round.

Those are the clocks of the card (supposedly), we know it has 240 shaders, and it has already been reported that with the clocks floating round nVidia would fall short of the 1TFLOP mark.

So the figures all match what has already been said.

I did say it is just extrapolation from raw data and can't be used to give a direct comparison as we have no idea what nVidia have done to the rest of the card.

It is just to give an idea of performance.
 
I guess it could be a situation like with Centrino/C2D etc., where there are efficiencies that make it perform beyond what you'd expect?
 
I'm just going on specs and rumours that are currently floating round.

Those are the clocks of the card (supposedly), we know it has 240 shaders, and it has already been reported that with the clocks floating round nVidia would fall short of the 1TFLOP mark.

So the figures all match what has already been said.

I did say it is just extrapolation from raw data and can't be used to give a direct comparison as we have no idea what nVidia have done to the rest of the card.

It is just to give an idea of performance.

I'm not doubting the figures, or your calculations of theoretical peak FP performance. They look correct. But there are significant inefficiencies at play within the architecture which prevent the cards from reaching their theoretical maximum performance. I'm just saying that the relatively modest computing-power increase over the 9800GTX flies in the face of the only benchmark we actually have for the GTX 280.

...of course that was an nVidia benchmark, so it could well have been 'cherry picked' to exploit specific deficiencies in the 3870 architecture when running F@H.
 
Of course that is raw stats, and not taking into consideration the effect of the faster memory bus and any shader optimisations nVidia might have done, but even with those I can't see the card being better than 2x the performance of a current 9800GTX, max.

Bandwidth issues aside, the ALU architecture is different. Hence all the fuss about the 'missing MUL': G80/G92 could rarely complete 3 ops/clock. Early reports suggest the G200 has fixed the issue of the partially functional MUL unit.
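
To put a rough number on that: if the G92's extra MUL rarely co-issues in practice (so it behaves more like 2 FLOPs/clock) while the G200 really can sustain all 3, the effective gap is much wider than the peak-vs-peak figures earlier suggest. A quick sketch, with the 2-vs-3 FLOPs/clock split being an assumption rather than a measured figure:

    # Effective throughput if the G92 'missing MUL' rarely issues (~2 FLOPs/clock)
    # versus a G200 that can actually sustain MADD + MUL (3 FLOPs/clock).
    def effective_gflops(flops_per_clock, clock_mhz, shaders):
        return flops_per_clock * clock_mhz * 1e6 * shaders / 1e9

    g92  = effective_gflops(2, 1688, 128)   # stock 9800GTX, MUL mostly idle -> ~432.1
    g200 = effective_gflops(3, 1296, 240)   # GTX 280, fully working MUL     -> 933.12
    print(g200 / g92)                       # ~2.16x, versus ~1.44x peak-vs-peak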
 
Crysis is crap, why bother?

That makes no sense at all.

Even if Crysis were the worst game in the history of the world, if you are able to play it smoothly then other games like Far Cry 2, Alan Wake, and other upcoming titles will be fine, or at the least have a shot at running OK.
 