More confusion about gfx cards, namely....

...the 8800GT 512MB
http://www.overclockers.co.uk/showproduct.php?prodid=GX-071-OK&groupid=701&catid=56&subcat=1008

and the HD3870 512MB
http://www.overclockers.co.uk/showproduct.php?prodid=GX-078-OK&groupid=701&catid=56&subcat=416

OK, they are both OCUK versions.

What I don't get is that most folk are saying the 8800GT is the better performer (ignoring price for a minute). :confused:

But why? According to the specs:
3870 has a faster GPU @ 775MHz vs the 8800GT's 600MHz
3870 has 320 stream processing units, 8800GT has 112
3870 has shader model 4.1, 8800GT has 4.0
3870 has a shader clock speed (I assume that's what it is) of 2250MHz, the 8800GT has 1500MHz.

Am I missing some kind of point here?
I am going to be buying 3 cards of whatever I finally decide upon, so if I'm being persuaded to buy the 8800GT I want to know that the extra £70-ish is worth it.

Each rig has: GA-P35C DS3R, E2180 @ 3GHz, 2GB RAM, Vista HP, 250GB HDD.
All gaming will be done at 1440x900, hopefully allowing us to play at full res with full AA and AF. I assume both cards will give me this?
Games: UT3, BF2/2142, CoD4, Crysis, GoW etc.

So guys, taking into account our gaming resolution and rig specs, please tell me why the 8800GT should be better than the 3870 when the specs say otherwise.

I totally appreciate all good advice, as it's doing my nut in trying to decide.

Cheers :)
 
In terms of performance per pound, they work out roughly the same once you compare the prices of the cards.

With the games listed, you shouldn't have a problem with any of them at that resolution with either card, apart from Crysis, as per usual.
 
Well, unfortunately the shaders of the HD3870 run at the same frequency as the core clock, in this case 775MHz. The 2250MHz figure is the memory speed, which does give the HD3870 a bandwidth advantage over the 8800GT. What you won't see in most specs, though, is the HD3870's rather woeful texturing ability: it only has 16 texture units, while the 8800GT has 56. Add to that that the HD3870 uses shader-based AA, which carries a higher performance hit than the traditional hardware-resolve AA the 8800GT uses, and that's why the 8800GT performs better most of the time, especially once AA (and AF) is applied.
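To put rough numbers on that texturing gap, here's a quick back-of-the-envelope sketch (clock speeds as quoted in the thread; 56 is the texture unit count the G92 die in the 8800GT ships with enabled; these are peak paper figures, not measured performance):

```python
# Peak bilinear texture fillrate (gigatexels/s) = core clock (GHz) * texture units.
# Numbers from the thread; treat them as approximate.

def texture_fillrate(core_clock_mhz, texture_units):
    """Peak texture fillrate in gigatexels per second."""
    return core_clock_mhz / 1000 * texture_units

hd3870 = texture_fillrate(775, 16)   # faster clock, far fewer texture units
gt8800 = texture_fillrate(600, 56)   # slower clock, many more texture units

print(f"HD3870: {hd3870:.1f} GTexel/s")
print(f"8800GT: {gt8800:.1f} GTexel/s")
print(f"8800GT advantage: {gt8800 / hd3870:.1f}x")
```

Despite the 175MHz clock deficit, the GT's extra texture units leave it well ahead on raw texturing, which is exactly the kind of gap that widens once AF is turned on.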

There are fundamental differences in architecture between these two graphics cards, so it's hard to compare them on just numbers alone.

My advice (and I'm an ATi "fanboy") is go with the 8800GT.
 
That's just the type of post I was hoping for, thanks MF.
I may still opt for the 3870s, purely on price and because I'm not using such a high gaming res, but it's nice to know a bit more about why the specs don't tally with the performance charts.

Thanks again :)
 
OK, I have the 3870s. IMO they're great: they do all that I ask of them, and I game at 1600s, and at 1280 with everything on (i.e. AA and AF at max settings etc.) with smooth gameplay. Apart from Crysis, that is, but at 1280 I have the settings at High overall and it looks nice. Some parts of the game I have to lower settings for, but it still looks very nice. :D
 
Drivers are sometimes the drawback, but ATi have a tendency to put out great drivers once a card has been out for a while.

I think the cards are performing similarly in most benchmarks.
You might be best going with one of the stock GTs with the new, larger stock cooling fan on it.

It is quieter and an easier overclocker than the original stock card.
The new ATi cards are great though.
It's a tough choice for anyone in the market for a new card at the moment,
but I'm happy with my GT.
 
ATI has a lot fewer shaders than they claim, one fifth as many I believe, and they aren't good at utilising what they have. ATI seems to think that just because their shaders can do 5 operations per cycle, you can take the amount and multiply by 5. That architecture has been shown not to be as effective as Nvidia's.
 
ATI has a lot fewer shaders than they claim, one fifth as many I believe, and they aren't good at utilising what they have. ATI seems to think that just because their shaders can do 5 operations per cycle, you can take the amount and multiply by 5. That architecture has been shown not to be as effective as Nvidia's.

It can, like many other architectures, do up to 5 shading operations per clock on each shading "unit", which is basically 5 shaders. In the worst possible situation it will only do 1; in all likelihood 3 is easy to hit and 5 is pretty hard to hit.
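The "up to 5 ops per unit" point can be sketched with some illustrative arithmetic (the 64 five-wide units and 775MHz are the HD3870's figures; the slots-filled values are the guesses from the post above, not measurements):

```python
# Illustrative only: effective shader throughput of a VLIW5-style design
# at different average slot utilisations. Not a benchmark.

def vliw5_effective_ops(units, clock_ghz, avg_slots_filled):
    """Ops/sec when the compiler fills avg_slots_filled of the 5 slots per unit."""
    return units * avg_slots_filled * clock_ghz * 1e9

# HD3870: 64 five-wide units (marketed as 320 "stream processors") at 775MHz.
best  = vliw5_effective_ops(64, 0.775, 5)  # every slot filled: the marketing number
typ   = vliw5_effective_ops(64, 0.775, 3)  # the "3 is easy" case
worst = vliw5_effective_ops(64, 0.775, 1)  # pure dependent scalar code

for label, ops in [("best", best), ("typical", typ), ("worst", worst)]:
    print(f"{label}: {ops / 1e9:.1f} G-ops/s")
```

The spread between the best and worst rows is the whole argument: the paper spec quotes the top line, while dependent, scalar-heavy shader code sits much nearer the bottom one.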

Frankly, hardware-wise it doesn't matter all that much. Devs told ATi they wanted software-based AA so they can increase the quality of AA in the future, and ATi, as per usual, do what the devs WANT. Then Nvidia come along, give them millions of pounds towards development, and have them optimise the engine to do the opposite of the direction ATi are moving in.

Frankly, look at Bioshock or UT3: ATi hardware does INCREDIBLY well compared to the GTX. Those are the cases ATi intended the card for; the games ATi are noticeably behind in are all TWIMTBP games, all of them. Without that programme the situation would be completely different.

But you have to remember this: ATi saw what happened with games and what kind of power they wanted. GPUs are 2-3 years in the making/design process; you can't simply change the design around in the final six months when you see what games are coming and that, basically, the engines have been made for Nvidia hardware. So they did the next best thing: they aimed the card at the mid-end, clearly saw that 65nm wasn't working great for GPU cores and skipped it, moved down to 55nm, dropped to a 256-bit bus and offered incredibly cheap cards. If the 3870/3850 weren't out, the GT would be £180/£250 for the different versions, the GTS would be £300 and the GTX still £350. If ATi hadn't done what they did we wouldn't have ANY cheap high-end cards, so maybe we should give ATi credit for that.

Nvidia hadn't planned, by all accounts, to bring out refreshed cards, but who would have bought a GTX for £350 when you could get 2x 3870s for £300? No one. Nvidia had to bring out a fast mid-priced card based on high-end cores.

As for the 5-in-1 shader deal: a C2D can do, I think, 5 instructions per clock in apps such as SuperPi, which is why it's insanely fast at it. More complex programs won't be able to fit in 5 instructions per clock; that's life. Some will go as low as 2-3 a clock, while other things like SSE4 and other registers can take certain instructions and run them way faster. That's how hardware is: efficiency is the best thing.

You have to remember, essentially Nvidia still use a crossbar memory architecture, which is fine; it basically connects to every single part of the core. Ringbus works quite differently, and having to connect to 64 shader "units" instead of all 320 of them individually works massively more efficiently, at a huge saving in die space, logic and power.

As die size and the number of shaders increase, a crossbar memory access style will become too big and cumbersome, and Nvidia will move away from it too; at that point they will most likely also go with a "unit"-style setup for the same reasons ATi have.
 
ATI has a lot fewer shaders than they claim, one fifth as many I believe, and they aren't good at utilising what they have. ATI seems to think that just because their shaders can do 5 operations per cycle, you can take the amount and multiply by 5. That architecture has been shown not to be as effective as Nvidia's.

Just to be clear, their architecture has been shown to be just as efficient as Nvidia's, if not more so; it's their support from devs, versus Nvidia's massive hand-outs to said devs, that has proven to be almost non-existent. The architecture's great; support from game makers is poor. It's their only downfall, but unfortunately a MASSIVE one.

You have to remember that for several years ATi have been asking devs what they WANT, and giving it. Devs wanted more shader power; ATi gave it; Nvidia paid devs to make engines that didn't want all that much shader power, which killed the massive shader advantage the X1900/X1800 had. Devs told ATi they want software AA so they can make higher-quality AA compatible with HDR and everything else they want in the future; ATi made it; Nvidia pay for more than half of the big title releases to go the way they want; cue ATi losing out again.

ATi tend to jump the gun on newer architectures, based on what devs want, by 1-2 generations. It's a catch-22: you might want to make a game one way, but if Nvidia show up with 2 million quid and say "we'll help you optimise for our hardware"... The current time to make a game from start to finish is so long, and without massive sales guaranteed, it's hard to turn down money that can literally fund a large portion of the team for most of the time the game is being made.
 
The X1900/X1950 series has aged significantly better than the equivalent 7900/7950 Nvidia cards, and it may turn out the same with the 3870 and 8800, but Nvidia seem to have influenced many developers... so the 8800GT is faster than the 3870 even though on paper it does not make sense.
 
ATI has a lot fewer shaders than they claim, one fifth as many I believe, and they aren't good at utilising what they have. ATI seems to think that just because their shaders can do 5 operations per cycle, you can take the amount and multiply by 5. That architecture has been shown not to be as effective as Nvidia's.

You're correct on this. ATi seem to have gone down a marketing route of showing big numbers on paper to confuse customers. Any old Joe Bloggs on the high street will hold the two boxes next to each other (8800GT and 3870) and check the specs. When he sees one has fewer than half the shaders the other one has, he will put that box down and go for the one with the bigger numbers. I have seen this happen a few times now at some local PC shops.
 
The X1900/X1950 series has aged significantly better than the equivalent 7900/7950 Nvidia cards, and it may turn out the same with the 3870 and 8800, but Nvidia seem to have influenced many developers... so the 8800GT is faster than the 3870 even though on paper it does not make sense.

Agreed, the X19 series does seem to be doing better than the Nvidia counterparts now in the newer games. But as mentioned above, it all comes down to money: whichever manufacturer has the most will usually do better.
 
Unfortunately, bigger numbers mean faster to a lot of people. I know of someone who thinks an 8400 (for example) is faster than any of ATi's cards based purely on the fact that it's an "8400" and ATi currently only "goes up to" the 3870 (X2).
 
I just bought an 8800GT with a new build and have had to RMA it for replacement with a 3870. Google "8800gt p5k problem" or similar and you will get pages of people referring to a black-screen shutdown of their PC because of the 8800GT. Very odd, but very common it seems; it does look like it is an 8800GT vs Asus motherboard problem, but something to be aware of nonetheless.

I would rather have a working gfx card that's 15% slower than a really quick card that crashes the system :)
 