
8800GT or 3870 or wait? Stream processors???

Netburst was very efficient when people programmed for it rather than for Windows (i.e. making good use of SSE), but you should be able to support what is present as well as what the future holds. Making a smooth transition is a much better way to introduce new technology than to force it upon people.

Edit: Chances are £150 would buy you two 9600 GT cards (assuming they launch at about £75, which is quite a likely price for a -600 series card)
 

Remember the prices the 8600GTS came in at, around £120-£150; assuming there is no GTS on launch, it may well be the same for the 9600GT.
 
While that's a possibility, the chances of it are extremely slim. That sounds more like the price range of the rumoured 8800 GS (y'know, the one with the funky 192-bit bus?). They'd be dooming that product to fail, as they'd be launching it against a card that is not only stronger but also a bit cheaper (come on, a card with 64 shader procs vs. a fully fledged RV670 core? Not gonna be a pretty one for the green team), and who the hell is gonna buy one when they can get a card that's about 1.7x its speed for £30 more?* I'll tell you who: nobody!

Edit: *or for that matter, spend £20 less and get a card about 1.3x its speed.
 

Lots of people will still buy it :p
 
lol, you guys should check out the Beyond3D forum; those smart guys tell you all you need to know about gfx.

Basically, ATi's latest core is harder to optimise for (though thanks to ATi's driver team this doesn't show; they're doing a very good job).

It has a superscalar SIMD shader array of 64×5. SIMD stands for single instruction, multiple data. Basically you feed an instruction into one of your 64 SIMD shaders, and if you're lucky the instruction has 5 operations for the shader to do, so it's being fully utilised. Unfortunately, this is often not the case. Say you only have 3 operations in an instruction: then your shader can only use 3 of its 5 "stream processors", rendering nearly half of your shading power useless. That is ofc not going to happen all the time. With this architecture the worst case scenario is you'll only be able to use 1 stream proc per array, which works out at 64 shader ops per clock cycle. That's the WORST CASE, and it almost never happens. Best case is ofc 320 shader ops per clock.
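The slot-packing arithmetic above is easy to sanity-check. Here's a minimal sketch (my own toy model, not real driver/compiler behaviour) of how ops-per-clock scales with how many of the 5 VLIW slots the compiler manages to fill:

```python
# Toy model of RV670-style VLIW5 shader utilisation, as described above.
# 64 SIMD units, each with 5 "stream processor" slots; a unit only does
# useful work on the slots its compiled instruction actually fills.

UNITS = 64   # SIMD shader units in R600/RV670
SLOTS = 5    # max operations packed per VLIW instruction

def ops_per_clock(packed_ops):
    """packed_ops: list of how many of the 5 slots each unit fills this clock."""
    return sum(min(n, SLOTS) for n in packed_ops)

best = ops_per_clock([5] * UNITS)     # all slots filled
worst = ops_per_clock([1] * UNITS)    # only 1 slot per unit usable
typical = ops_per_clock([3] * UNITS)  # 3-of-5 packing wastes ~40% of the array

print(best, worst, typical)  # 320 64 192
```

The 320/64 endpoints match the best/worst cases in the post; the 3-of-5 case shows how a mediocre instruction mix leaves a big chunk of the array idle.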

Nvidia does things a little more flexibly on the shader front. They have their shader array arranged so that their "stream processors" each do 1 op per shader clock, and the shader clock runs higher than the rest of the core by about 2.5x most of the time, with a max of 128 shaders in their best graphics cards. Now, because they don't use a VLIW (very long instruction word) design like ATi, each one of their shaders gets fed a separate instruction, so the chance it doesn't get "fed" is very small; I think average utilisation is something like 80%+ (not too sure about that). So you have fewer shaders, but they can run much faster (clocked higher, 1300MHz+) and are rarely left with nothing to do.
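Putting rough numbers on the two approaches shows why "fewer but faster and better-fed" competes with "wider but harder to fill". The clocks are period figures and the utilisation percentages are the post's guesses (the 80% and a 3-of-5 VLIW fill rate), so treat this as illustrative only:

```python
# Back-of-envelope shader throughput comparison using the figures above.
# Utilisation numbers are guesses from the post, not official data.

def effective_ops_per_sec(num_alus, clock_mhz, utilisation):
    return num_alus * clock_mhz * 1e6 * utilisation

# G80 (8800 GTX): 128 scalar shaders, ~1350 MHz shader clock, ~80% fed.
g80 = effective_ops_per_sec(128, 1350, 0.80)

# RV670 (HD 3870): 320 stream processors at 775 MHz core clock; assume
# the compiler fills 3 of 5 VLIW slots on average (60% utilisation).
rv670 = effective_ops_per_sec(320, 775, 0.60)

print(f"G80:   {g80 / 1e9:.0f} G shader-ops/s")
print(f"RV670: {rv670 / 1e9:.0f} G shader-ops/s")
```

Under these assumptions the two land in the same ballpark, which is why utilisation (and the driver's shader compiler) matters so much for ATi's design.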

The thing that is killing ATi's architecture is its poor AA performance (not because it's shader based; it should actually be quicker than it is, even though it's done in the shaders), and its rather woeful (compared to Nvidia's) texturing power. I mean, the R600/RV670 texturing power is pretty much identical to R580's (X1900XT) clock for clock; sure, a few tweaks here and there, but pretty much the same. This means that even though its shader array may not be the most efficient, it doesn't matter, because it's more than likely texture limited most of the time. Notice the performance drop when you enable AF is still rather significant; couple this with less than stellar AA performance and the drop when you enable both AA and AF is sizeable.

To its credit, ATi's driver team must be optimising shedloads, because the difference in texturing power between the G80/G92 and R600/RV670 is huge. The G80/G92 really do have an overkill amount of texturing power though.

Where ATi's architecture really shines is in its memory virtualisation, most noticeably shown by the HD3850 256MB versus the 8800GT 256MB. Go look at some reviews and notice the MASSIVE drop the 8800GT takes when AA/AF is used at high resolutions. It's also one of the reasons that the DX10.1 spec was so easy for ATi to implement, as they pretty much used this spec to design their DX10 part (R600).

As for ROPs, yes, the Nvidia solution has more, but remember the ROPs run at core clock speed, and are thus slower than ATi's ROPs, meaning overall fillrate is not too far apart.
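The fillrate claim is just ROP count times the clock the ROPs run at. Using the usual period specs (24 ROPs at 575 MHz for the 8800 GTX, 16 ROPs at 775 MHz for the HD 3870; these numbers are my illustration, not from the post):

```python
# Pixel fillrate sketch supporting the ROP point above: fillrate is just
# ROP count x the clock the ROPs run at (core clock on both designs here).
# Card specs are period figures; treat them as illustrative.

def fillrate_gpix(rops, core_mhz):
    return rops * core_mhz / 1000.0  # Gpixels/s

g80_gtx = fillrate_gpix(24, 575)  # 8800 GTX: more ROPs, lower clock
rv670 = fillrate_gpix(16, 775)    # HD 3870: fewer ROPs, higher clock

print(g80_gtx, rv670)  # 13.8 12.4 -- not far apart, as claimed
```

So more ROPs at a lower clock versus fewer at a higher clock does indeed come out roughly even.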

And I got all that from just reading Beyond3D... and having too much time on my hands. And I'm willing to bet some (most) of it is wrong :P.
 
Thanks for the info, mate. It kind of clarified what I sort of already knew but didn't know the finer details of. Made an interesting read. I think I might have to give a few of their threads a read then.
 