8800GT or 3870 or wait? Stream processors???

After reading many, many threads on whether to go for the 8800GT or the 3870, I'm leaning towards the 8800.
I'm looking to spend around £150, I do a mix of gaming (UT3) and some video work.
Am I right to look to the 8800GT? Or should I wait, as I believe there are new cards coming, but will there be any in my price range worth waiting on?

A question: stream processors? It looks to me like the 3870 has many more than the 8800GT, so why is the 8800 deemed better? I'd have thought the 3870 should perform better. Can someone enlighten/educate me? lol.

TIA
 
Firstly - the 8800 GT is faster. Not much question about that.

Secondly - the 3870 is based on an architecture that is much less efficient than the 8800 GT's. The 3870's core is 5-way superscalar (VLIW), which just means it has 64 units of 5 shaders each, but code has to be heavily optimised to keep all five shaders busy. Also, the 8 series' shader units are clocked at 2.5x the core clock, giving them a crapload more shader power than first meets the eye.
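A rough way to picture it (a toy sketch with illustrative numbers, not exact hardware behaviour): the 3870's big peak figure only counts when the compiler can fill all 5 slots of every unit each clock, while scalar units never sit half-empty.

Code:
# Toy model of 5-wide VLIW vs scalar shader issue (illustrative numbers only).

def ops_per_clock(units, width, filled_slots):
    """Ops actually issued per clock when the compiler only manages to
    fill 'filled_slots' of each unit's 'width' slots on average."""
    return units * min(filled_slots, width)

print(ops_per_clock(64, 5, 5))   # 320 - ATi-style peak, needs perfectly packed code
print(ops_per_clock(64, 5, 3))   # 192 - more realistic when shaders pack poorly
print(ops_per_clock(128, 1, 1))  # 128 - nVidia-style scalar, always fully used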

Also, the R600/RV670 cards do antialiasing with their shaders, whereas the 8 series does it with its ROPs. What this means to you is that nVidia stuck with the traditional (and, at least for now, better) approach, whereas ATi chose to do AA with the shader units, which is not only less efficient but also eats into shader power that could be used for rendering other parts of a game. Also, nVidia's 8 series cards, especially the newer ones based on G92, have a metric crapload of texturing power (a couple of times that of the ATi cards, IIRC).

Just remember that two architectures made by different companies are never directly comparable.
 
X1950Pro in your rig now? TBH I wouldn't upgrade your card yet, definitely wait, some new cards won't be far away :).
 
Same as with the older 7000 series: Nvidia had 24 physical pipes, 8 vertex units and 16 ROPs. ATI claimed more, but it was simply the way they each counted them that was different.

Add the above numbers together and you get ATI's method of counting, so Nvidia would have 48 lol.

If you can get by for 8 weeks or so, wait on the new gen.
 
The R600/RV670 is a pretty efficient core. It has 64 real stream processors which do 5 calculations at a time.
The G80 has 128 real stream processors doing one calculation at a time.

The G80 makes up for its lower count with a much higher clock speed, 1500+MHz compared to ATi going at around 750MHz.
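Put the clocks into the picture (again using rough figures from this thread, not exact specs) and you can see the raw peak still favours ATi, but only with perfectly packed code:

Code:
# Peak shader ops/second = lanes x clock (rough thread figures, not exact specs).
ati_lanes, ati_clock = 64 * 5, 750e6  # 320 lanes at ~750MHz
nv_lanes, nv_clock = 128, 1500e6      # 128 lanes at ~1500MHz shader clock

print(ati_lanes * ati_clock / 1e9)  # ~240 billion ops/s, only if all 5 slots fill
print(nv_lanes * nv_clock / 1e9)    # ~192 billion ops/s, sustained on any code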

ATi falls down on the ROP front though. It only has 16 ROPs compared to the 24 ROPs that the G80 has.

Also, as said above, ATi uses shader-based AA, which may soon become the norm, while nVidia uses the hardware-based ROPs (which is what many games use now).

Overall the G80 core is better rounded. ATi has huge bandwidth (or did) with its 512-bit bus, but it wasn't needed. ATi just went for a far more radical design, while nVidia went for the basic approach (which works very well).
 
Someone edited their post above lol:

I can't remember 100%, but I saw a table showing it alongside a 7900GTX; in that table it was counted differently and did not have 48 REAL PIPES as some claimed it did.

Are you struggling on that card for now?
 
I can't remember 100%, but I saw a table showing it alongside a 7900GTX; in that table it was counted differently and did not have 48 REAL PIPES as some claimed it did.

Yeah, I just edited. You are right. I think ATi had something like 24 real ones going 2 at a time. Not sure, but ATi always likes to count the instructions or whatever they are, while nVidia just counts the real stuff. :p
 
The R600/RV670 is a pretty efficient core. It has 64 real stream processors which do 5 calculations at a time.
The G80 has 128 real stream processors doing one calculation at a time.

The G80 makes up for its lower count with a much higher clock speed, 1500+MHz compared to ATi going at around 750MHz.

ATi falls down on the ROP front though. It only has 16 ROPs compared to the 24 ROPs that the G80 has.

Also, as said above, ATi uses shader-based AA, which may soon become the norm, while nVidia uses the hardware-based ROPs (which is what many games use now).

Overall the G80 core is better rounded. ATi has huge bandwidth (or did) with its 512-bit bus, but it wasn't needed. ATi just went for a far more radical design, while nVidia went for the basic approach (which works very well).

But the 8800GT and the new GTS only have 16 ROPs and they both beat all of ATi's offerings, and surely it can't just be down to the high speed of the shaders? The original 8800GTS had 20 ROPs whilst the GTX has 24.
 
Are you struggling on that card for now?

It does play UT3 well, but it sometimes feels like it's at its limit. I've also got a bit of the "upgrade" bug, you know: you get the idea in your head, spend some time online looking, and just can't help yourself!

Thanks for the above replies, I'm beginning to understand the different architectures.
 
But the 8800GT and the new GTS only have 16 ROPs and they both beat all of ATi's offerings, and surely it can't just be down to the high speed of the shaders? The original 8800GTS had 20 ROPs whilst the GTX has 24.

This is from a review site. Not sure if it's true.

Comparing the block diagram of the GeForce 8800 GT with that of the 8800 GTX you can also see that two ROP partitions are no longer present, leaving just four total. To help offset this, NVIDIA has come up with more efficient color and z-compression for G92. This enhanced compression should help at high resolutions, particularly once AA is applied, as available memory is used more efficiently, helping to keep memory usage in check.
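A rough back-of-envelope (my own illustrative assumptions, not figures from the review) shows why that compression matters once AA is on at high resolution:

Code:
# Rough sample storage at 1920x1200 with 4xAA, assuming 4 bytes of colour
# plus 4 bytes of Z/stencil per sample (illustrative assumption).
w, h, samples, bytes_per_sample = 1920, 1200, 4, 8
fb_bytes = w * h * samples * bytes_per_sample
print(round(fb_bytes / 2**20), "MiB")  # ~70 MiB touched every frame, pre-compression

# Better colour/Z compression shrinks the traffic to and from this buffer,
# which is how G92 can get away with fewer ROP partitions.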

Though I have no idea why ATi sucks at AA. Maybe it's because they didn't tweak the 16 ROPs well, or the stream processors just don't work well at AA and they had to limit how much could be used for AA. I have no idea really.
 
Think about it - if the shaders are doing AA (which IIRC requires quite a lot of processing power), that's power that normal shader effects and such can't use.
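A toy illustration of that cost (not ATi's actual resolve code): resolving 4xAA in a shader means every pixel spends ALU cycles averaging its sub-samples, work a ROP-based design gets from separate fixed-function hardware.

Code:
# Toy shader-style MSAA resolve: average the sub-samples of one pixel.

def resolve_pixel(subsamples):
    """Average (r, g, b) sub-samples down to one final pixel colour."""
    n = len(subsamples)
    return tuple(sum(channel) / n for channel in zip(*subsamples))

# An edge pixel under 4xAA: two red sub-samples, two black ones.
print(resolve_pixel([(1, 0, 0), (1, 0, 0), (0, 0, 0), (0, 0, 0)]))  # (0.5, 0.0, 0.0)

# Done in shaders, those adds and divides compete with the game's own effects;
# done in ROPs, they come from separate fixed hardware.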
 
Yeah, I just edited. You are right. I think ATi had something like 24 real ones going 2 at a time. Not sure, but ATi always likes to count the instructions or whatever they are, while nVidia just counts the real stuff. :p

16 real pipes (same as the R520) but with 3 shader ALUs per pipe, making 48 shader processors in total (hence the X1900XT being quicker than the X1800XT in shader-heavy games while performing roughly on par in older, less shader-heavy games).
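So the two counts are just pipes versus pipes times ALUs:

Code:
# The two counting conventions for the X1900 series (R580), as described above.
pipes, alus_per_pipe = 16, 3
print(pipes)                  # 16 "real" pixel pipes (nVidia-style count)
print(pipes * alus_per_pipe)  # 48 shader processors (ATi's preferred count)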

IIRC as well, the R600 design doesn't support hardware-based MSAA and has to do it all in shaders.
 
The 3870 is based on an architecture that is much less efficient than the 8800 GT's.

That's not totally true. The architecture is actually very efficient when processing code that takes advantage of VLIW (Very Long Instruction Word).

Although its method of rendering AA is indeed badly inefficient.
 