
Unreal Tournament 3 CPU & High End GPU Analysis

Looking at the results, I think it's more a case of ATI's brute-force high-clock-speed approach prevailing over Nvidia's efficiency than a CPU limitation.


lol, most of the Nvidia card is clocked significantly higher than the ATI card; the shader clocks (which do the majority of the work) run at what, 1.6GHz?

ATI's architecture is incredibly good, but it's intended for shader-heavy work and, like many things, needs coding for. Its main weakness is that despite the "320" stream processors, it's actually 64 real stream processors that can each do up to 5 calculations per clock, which means that if you aren't leveraging the full 5 calcs per clock you're effectively running a much lower number of processors. Nvidia's is more the "brute force" method.

Frankly neither approach is bad. ATI have some great tech but have been somewhat renowned for applying it a generation or two early. For instance, the X1800 is mostly an X800 on shader-based steroid injections, something the 7800 lacked somewhat; then Nvidia boosted the shader power quite a bit on the 7900 a gen later, and gave a massive shader boost in the 8800 series. But they are, as you say, brute-force, uncomplicated 1.6GHz+ stream processors at work.

ATI's ring bus is also just a touch early; it's something Nvidia will simply have to head towards architecture-wise. Nvidia will also drop the hardware-style AA of the past, which ATI has already done, again probably a generation too early (or, it seems, mostly an over-ambitious attempt to get the core working on a 60nm process way too early).
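To put rough numbers on that (a back-of-the-envelope sketch of my own, not anything official from ATI; the 64 x 5 split is just the commonly quoted breakdown of the 2900's shader array), here's how the effective shader count falls off when the compiler can only keep some of the 5 slots per unit busy on an average clock:

    # Back-of-the-envelope sketch: effective shader throughput on a 64-unit,
    # 5-slot-per-unit design (the "320 stream processors") when the shader
    # compiler can only fill some of the 5 slots on an average clock.

    def effective_stream_processors(avg_slots_used, vliw_units=64, slots_per_unit=5):
        # Peak is vliw_units * slots_per_unit = 320; utilisation scales it down.
        return vliw_units * min(avg_slots_used, slots_per_unit)

    for used in (5.0, 3.5, 2.0, 1.0):
        print(f"{used:.1f}/5 slots filled -> acts like ~{effective_stream_processors(used):.0f} of 320 SPs")

That's the sense in which driver and compiler updates can "leverage the wasted clocks": better instruction packing raises the average slots used per clock without touching the hardware.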


It's very interesting to see the 2600XT doing so well. I can see Nvidia drivers being able to boost performance, but I can't see it suddenly trashing the 2600XT. If ATI can continue to "leverage" all those wasted clocks into action with driver updates, well, you'll see how good the architecture was.

I'm under the impression that, like the Xbox 360, the R600 was designed to be 60nm and have the 4xAA eDRAM on-die, working to give free AA. It sounds like maybe the new 55nm R600 replacement might well have this included on board, and the removal of the eDRAM could well account for the delay of the R600. If that is the case, had the 2900XT come out on 60nm it would have trounced the GTX.
 

If ATI's tech is so early, then explain why the R600 was so late? :p
 

It was late because they are the first to use a different style of core.
Nvidia were first because they went for a straightforward, easy-to-create core.
ATI went for a design that could easily evolve in the future, as it has lots of technology not currently in use.
 

Doesn't make much difference; it was pointless of them. They should have just used the same sort of architecture as the 8800s, or something similar, instead of making something new. The next ATI cards will make the 2900 obsolete, so new future-proofing features on the 2900 don't make much difference.
 
8800 Ultra, but according to ATI it was something they were looking at fixing; the games apparently use a deferred rendering algorithm, which is what's stopping it from working.
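For anyone wondering why deferred rendering blocks AA, here's a toy sketch of my own (made-up values, not engine or driver code) of a single edge pixel: with forward rendering, hardware MSAA can average the per-sample colours, but a naive deferred renderer only keeps one sample's attributes in the G-buffer, so the lighting pass has nothing left to blend.

    # Toy sketch (my own illustration): one edge pixel half covered by a red
    # triangle and half by black background, with 2x MSAA.

    RED, BLACK = (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)
    samples = [RED, BLACK]            # the two MSAA samples of the edge pixel

    # Forward + MSAA: light each sample, then resolve (average) them.
    forward = tuple(sum(channel) / len(samples) for channel in zip(*samples))

    # Deferred with a 1-sample G-buffer: only one surface reaches the lighting
    # pass, so the per-sample coverage is already gone.
    deferred = samples[0]

    print("forward + 2x MSAA :", forward)   # (0.5, 0.0, 0.0) - blended edge
    print("deferred, no MSAA :", deferred)  # (1.0, 0.0, 0.0) - aliased edge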
 
willhub, what sort of performance are you getting on Vista? I get very bad performance in Vista, but in XP I get at least twice that FPS. Can it be the Vista driver alone?
 

In what game? Twice the FPS? I thought you said you get very bad performance?

Haven't tried UT3 yet because it's crashing while loading levels at the moment; gonna try again when I get Vista 64 installed.

AA gives a bad hit in all games; you need to use 4xAA and no higher, or else performance is really done in.
 