
Rumors of the 4900XT

I think they need to add a lot more ROPs and whatnot, not extra shader processors! Although if they added more of both, that would be good. I'm pretty sure it's the lack of ROPs and TMUs that's doing ATi in compared to Nvidia.
 

ROPs and TMUs aren't the same for every graphics card. For example, ATi has 16 ROPs (well, 'render backends' in ATi terminology) in the 2900, 3800 and 4800 series, but the 4800's are a lot more powerful (I believe they've doubled z-fillrate and introduced more advanced AA techniques), and it really shows.

That said, ATi and Nvidia trade blows on shader-based benchmarks. For example, ATi take a huge lead in Perlin Noise tests, but other tests will favour Nvidia's architecture (I think Nvidia's quite good on the vertex shader front). Don't be fooled by the 'GFLOPS' metric; on its own it means very little, since the architectures are so different.

Edit: Wait, I just saw some 2900 XT synthetic benchmarks: while the R600 core absolutely crushed G80 in vertex shaders, they traded blows on pixel shaders.
http://techreport.com/articles.x/12458/3
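
To put the fillrate side of that into numbers, here's a trivial back-of-the-envelope sketch (host-only code, illustrative figures only: the 16 ROPs / 40 TMUs / 750 MHz are roughly the quoted 4800-series reference specs, and the z-samples-per-ROP value is an assumption, nothing to do with any 4900 XT rumour):

```cuda
#include <cstdio>

// Theoretical fillrate is just units * clock. Doubling the z-rate per ROP
// (as the RV770 render backends reportedly did) doubles z-only fillrate
// without adding a single ROP.
int main() {
    const double clock_ghz = 0.750; // illustrative core clock in GHz
    const int    rops      = 16;    // render backends
    const int    tmus      = 40;    // texture units
    const int    z_per_rop = 4;     // z/stencil samples per ROP per clock (assumed)

    printf("pixel fillrate : %.1f Gpixels/s\n",  rops * clock_ghz);
    printf("texel fillrate : %.1f Gtexels/s\n",  tmus * clock_ghz);
    printf("z-only fillrate: %.1f Gsamples/s\n", rops * z_per_rop * clock_ghz);
    return 0;
}
```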
 
Yeah, ATI have hideous amounts of theoretical peak power, which they only get close to when a workload suits the architecture or is hand-optimised, but in general everyday use they are closer to 0.6-0.65 of that figure. Those theoretical tests aren't really applicable to gaming performance.
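
To illustrate the peak-versus-sustained gap with rough numbers (a sketch only: the ALU counts and clocks are the commonly quoted RV770/GT200 reference specs, and the 0.6-0.65 utilisation factor is just the figure above, not a measurement):

```cuda
#include <cstdio>

// Peak GFLOPS = ALUs * clock (GHz) * FLOPs per ALU per clock. A huge peak
// number says nothing about how often those ALUs can actually be kept fed.
int main() {
    const double rv770_peak = 800 * 0.750 * 2;  // 800 ALUs, 750 MHz, MAD = 2 FLOPs -> 1200 GFLOPS
    const double gt200_peak = 240 * 1.296 * 3;  // 240 ALUs, 1296 MHz shader clock, MAD+MUL -> ~933 GFLOPS

    const double util_lo = 0.60, util_hi = 0.65; // everyday utilisation quoted above

    printf("RV770: %.0f GFLOPS peak, ~%.0f-%.0f sustained at 0.60-0.65 utilisation\n",
           rv770_peak, rv770_peak * util_lo, rv770_peak * util_hi);
    printf("GT200: %.0f GFLOPS peak\n", gt200_peak);
    return 0;
}
```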
 
RV770's stream processors are a lot cheaper to produce: RV770 has 800 of them and its die size is less than half of GT200's. Had they produced a monster chip, it would thrash the GTX 280.

There is a reason they were priced high initially: they are expensive to make.

The PhysX advantage will not be there for long IMO; OpenCL is coming and DX11 will also incorporate some GPGPU capabilities.

G80 was good, but the GTX 280 proved that the G80 architecture does not scale well, and until they change architecture they are going to be at a disadvantage on cost, as the R600 architecture proved to be better at scaling even though R600 itself was flawed.
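
Here's a crude sketch of why "less than half the die size" matters so much for cost. The ~256 mm² vs ~576 mm² areas are the commonly quoted RV770/GT200 figures; the defect density and the simple exponential yield model are assumptions purely for illustration:

```cuda
#include <cstdio>
#include <cmath>

// Crude cost model: candidate dies per 300 mm wafer ~ wafer area / die area
// (edge losses ignored), and yield falls off exponentially with die area
// (simple Poisson defect model). A big die loses on both counts at once.
int main() {
    const double pi = 3.14159265358979;
    const double wafer_area = pi * 150.0 * 150.0;   // 300 mm wafer, in mm^2
    const double defects_per_mm2 = 0.002;           // assumed defect density

    const double die_area[] = { 256.0, 576.0 };     // ~RV770 vs ~GT200, mm^2
    const char*  name[]     = { "RV770-ish", "GT200-ish" };

    for (int i = 0; i < 2; ++i) {
        double candidates = wafer_area / die_area[i];
        double yield      = std::exp(-defects_per_mm2 * die_area[i]);
        printf("%-9s %4.0f mm^2: ~%3.0f candidates/wafer, ~%2.0f%% yield, ~%3.0f good dies\n",
               name[i], die_area[i], candidates, yield * 100.0, candidates * yield);
    }
    return 0;
}
```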
 

Agreed.

I think Nvidia need to concentrate on bringing down the cost of their architecture rather than spending time pushing PhysX, stereo vision etc. I also think ATi need to dispel the somewhat unfair image of them that people have created.

The typical consumer needs more bang for their buck.
 
Developers do need hardware physics tho and it is something that both ATI and nVidia are going to have to spend time on sooner or later, unless CPUs suddenly become 30x faster... as for stereovision I'm not sure wtf nvidia are thinking - although on the plus side it might drive demand for 120Hz TFTs which isn't a bad thing in itself.
 
OpenCL doesn't provide an alternative to PhysX tho, unless someone goes to the trouble of building a highly scalable, flexible, robust and easy-to-integrate physics library on top of it.
 
PhysX is a software library. It runs on CUDA, which is basically Nvidia's GPGPU implementation for G8x and above.

OpenCL is also a GPGPU framework, so in a sense you are right: OpenCL is direct competition for CUDA and only indirect competition for PhysX in its current form. I do not know enough about how much GPGPU capability will be exposed in DX11.

I also don't see PhysX being supported by a lot of developers unless it becomes a standard feature supported by both GPU manufacturers, and eventually Microsoft.

And don't forget that nVidia is willing to license PhysX; ATi doesn't use it yet, and that makes sense really. CUDA will give Nvidia a PhysX performance advantage, so it's not worth the trouble for ATi to help make it a standard now, not with other more neutral implementations coming soon.

Edit: Don't rule out CPUs yet; many-core CPUs are the future, and physics is one of the first things that will benefit from them. That, and the fact that Intel is aware of the GPU/CPU convergence - I am pretty sure they will do something about it.

Look how long it took for DX10 to take off, and that was supported by Nvidia, ATI, Microsoft and Intel (granted, Vista held it back, but the point still stands).
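
For anyone wondering what "GPGPU physics" actually looks like at the bottom, here's a minimal CUDA sketch - just a naive Euler integration step with one thread per particle. It has nothing to do with PhysX or Havok internals; the kernel and numbers are purely illustrative (and it uses managed memory for brevity, which needs a far newer card than G8x):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per particle: thousands of identical, independent updates per
// frame - exactly the shape of work a GPU is built for.
__global__ void integrate(float3* pos, float3* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;        // gravity
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main() {
    const int n = 1 << 16;         // 65,536 particles
    float3 *pos = nullptr, *vel = nullptr;
    cudaMallocManaged(&pos, n * sizeof(float3));
    cudaMallocManaged(&vel, n * sizeof(float3));
    for (int i = 0; i < n; ++i) {
        pos[i] = make_float3(0.0f, 10.0f, 0.0f);
        vel[i] = make_float3(1.0f, 0.0f, 0.0f);
    }

    integrate<<<(n + 255) / 256, 256>>>(pos, vel, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    printf("particle 0 after one step: (%.3f, %.3f, %.3f)\n", pos[0].x, pos[0].y, pos[0].z);
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```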
 
Edit: Don't rule out CPUs yet; many-core CPUs are the future, and physics is one of the first things that will benefit from them. That, and the fact that Intel is aware of the GPU/CPU convergence - I am pretty sure they will do something about it.

Good point, especially as Intel are going to be employing wider and wider SIMD units every few generations along with the increase in core number.
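
As a tiny illustration of why wider SIMD maps so well onto this: the same kind of Euler step laid out structure-of-arrays, so every iteration is independent and a vectorising compiler can pack 4 (SSE), 8 (AVX) or more particles into each instruction, and the loop splits across cores just as easily (plain host-side code, purely illustrative):

```cuda
#include <cstdio>
#include <vector>

// Structure-of-arrays layout: a flat, dependency-free loop over particle
// components. Each iteration touches only index i, so the compiler can
// auto-vectorise it across however many SIMD lanes the CPU offers.
int main() {
    const int   n  = 1 << 16;
    const float dt = 1.0f / 60.0f;
    std::vector<float> px(n, 0.0f), py(n, 10.0f), pz(n, 0.0f);
    std::vector<float> vx(n, 1.0f), vy(n, 0.0f), vz(n, 0.0f);

    for (int i = 0; i < n; ++i) {  // candidate for auto-vectorisation
        vy[i] -= 9.81f * dt;
        px[i] += vx[i] * dt;
        py[i] += vy[i] * dt;
        pz[i] += vz[i] * dt;
    }
    printf("particle 0 after one step: (%.3f, %.3f, %.3f)\n", px[0], py[0], pz[0]);
    return 0;
}
```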
 
I also don't see PhysX being supported by a lot of developers unless it becomes a standard feature supported by both GPU manufacturers, and eventually Microsoft.

Unfortunately, developers are converging on a point where high-end 3D games are going to need hardware-accelerated features to go forward... and right now there is only PhysX...

Edit: Don't rule out CPUs yet; many-core CPUs are the future, and physics is one of the first things that will benefit from them. That, and the fact that Intel is aware of the GPU/CPU convergence - I am pretty sure they will do something about it.

CPUs have a long way to go to be a good alternative to a GPU tho... As an example, a Q6600 @ 3.6GHz is up to 60 times slower than a stock-clocked GTX 260 216 and at best is still over 20 times slower, and that's while the GTX is still doing some rendering. To put this into perspective, a setup with 2x GTX 295s purely doing physics would in some cases be over 250x faster than a Q6600.

Look how long it took for DX10 to take off, and that was supported by Nvidia, ATI, Microsoft and Intel (granted, Vista held it back, but the point still stands).

Developers generally have very little need for the features of DX10 compared to their need for high-speed handling of lots of objects in a realistic fashion. Deferred shading is starting to take off, which might push things towards DX10 territory, but that's about it in general.
 
DX10 would have been so much more successful had Microsoft allowed it in Windows XP, although even then there are a lot of people out there with non-DX10 cards, like the Nvidia 6000 and 7000 series. Had Crysis been built from the ground up for DX10 it might have been a bit more optimised; after all, wasn't the main point of DX10 over DX9 supposed to be efficiency when doing the same things?
DX10.1 is held back again by there only being a few games that take advantage of it, and by Nvidia not wanting to play ball.
Well, at least both ATI and Nvidia will be using DX11, and Windows 7 is shaping up nicely.
 
Developers do need hardware physics tho and it is something that both ATI and nVidia are going to have to spend time on sooner or later, unless CPUs suddenly become 30x faster... as for stereovision I'm not sure wtf nvidia are thinking - although on the plus side it might drive demand for 120Hz TFTs which isn't a bad thing in itself.

Hardware-based physics in its many forms has been around since I coded a game for my dissertation at university some 4 years ago. AMD/ATI simply chooses to optimise their hardware for the Havok API, which was, and still is (even after its acquisition by Intel), the most commonly used physics API among game developers.

Let's be honest: physics has been brought to the forefront as a marketing tactic, being used by both companies to help sell cards while promoting certain titles (Mirror's Edge).

What should be more important here is the price and how it performs in the top titles. I will wait and see the product on launch.
 

But PhysX has some A1 titles like Mirror's Edge and some tech demos :p
 
PhysX on Nvidia is still quite a young technology, so I think it's fairly understandable that it hasn't got the backing of all that many developers just yet. Either way, the physics argument is derailing what this thread is all about.
 
Until I see it officially released or commented upon by ATI, I'll take it with a bucket of salt :)

Eh, yeah, but it's fun to speculate (even if the speculators are now going on about PhysX for some reason). I think it's pretty reasonable to assume ATi should be releasing something fairly soon (i.e. within the next 2 or 3 months), though, just due to their release cycle patterns.
 