
My Test of GRAW running with and without PhysX

SteveOBHave said:
Yeah fully... They are suggesting that 4GB be the new standard for Vista... Looks like we will all be going that way sooner rather than later...

Depends on whether you're running a DDR or DDR2 system :) , but the crux of it is that people will be adopting higher specs, including more RAM, to handle/store/process their ever-increasing workloads faster and more efficiently.

If you compare the specs required to run games last year against this year, many old systems with 256MB of RAM are being forced to upgrade to 512MB/1GB to meet the recommended specification, which is amazing. By next year many games will have a recommended minimum of 1GB of RAM... the rate of growth is not as steep as it is for CPUs, but it is still an area that is growing.

To go back on topic (before we get told off), PhysX would theoretically be a nice way to offload calculations, freeing up the CPU and GPU (which in turn alleviates stress on the RAM), but for the current time it seems to run interference rather than assist the system. In this instance no amount of additional RAM (beyond the required minimum) would really assist the process - the 128MB the PhysX card carries should be enough for the physics calculations.
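The offloading idea being described can be sketched as a toy in Python (illustrative names only - Ageia's real SDK was a C++ API, not anything like this): submit the expensive physics step to a worker standing in for the PPU, let the CPU get on with other per-frame work, then collect the results at a single sync point per frame.

```python
# Toy sketch of the PPU offload idea (hypothetical API, not Ageia's SDK):
# the worker runs the expensive physics step concurrently, so the main
# loop only pays for submitting state and collecting results.
from concurrent.futures import ThreadPoolExecutor

def physics_step(positions, velocities, dt):
    """The expensive step we'd like off the CPU's critical path."""
    return [p + v * dt for p, v in zip(positions, velocities)], velocities

ppu = ThreadPoolExecutor(max_workers=1)   # stand-in for the dedicated card

positions = [0.0, 1.0, 2.0]
velocities = [1.0, 1.0, 1.0]

for frame in range(3):
    future = ppu.submit(physics_step, positions, velocities, 1 / 60)
    # ... CPU is free here for AI, game logic, draw-call submission ...
    positions, velocities = future.result()   # one sync point per frame

ppu.shutdown()
print(positions)   # each position advanced by roughly 3 * (1/60) = 0.05
```

The catch, as the thread keeps finding, is that the sync point is only cheap if transferring state and waiting on the worker costs less than just doing the calculation locally.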

Anyway, this is my last post on this topic until my friend comes back from holiday - that rich lucky Bar steward! He will post up some specs of him using PhysX on his Dell XPS Quad SLi system - he is experiencing the same problems as everyone else who used PhysX in this thread - so until then, or until UT07 comes out - ciao!
 
ihatelag said:
Anyway, this is my last post on this topic until my friend comes back from holiday - that rich lucky Bar steward! He will post up some specs of him using PhysX on his Dell XPS Quad SLi system - he is experiencing the same problems as everyone else who used PhysX in this thread - so until then, or until UT07 comes out - ciao!

Dunno m8, I am really beginning to wonder if the PhysX card is doing anything that either the GPU or the CPU (or both) are not already capable of...
 
SteveOBHave said:
Dunno m8, I am really beginning to wonder if the PhysX card is doing anything that either the GPU or the CPU (or both) are not already capable of...
Physics is something that a CPU or GPU can do, but a piece of hardware designed for a specific task (like the PhysX card is) should usually be able to do it a lot more efficiently. For some reason that doesn't show with the Ageia card, possibly due to other limitations like PCI bandwidth.
 
Dutch Guy said:
Physics is something that a CPU or GPU can do, but a piece of hardware designed for a specific task (like the PhysX card is) should usually be able to do it a lot more efficiently. For some reason that doesn't show with the Ageia card, possibly due to other limitations like PCI bandwidth.

You'd think that if you design a piece of hardware you would design it around the limitations of the interface... you'd think...
 
I thought Havok were working with nVidia to adapt their 'HavokFX' physics engine to utilise nVidia's SM3.0 algorithms.
http://www.havok.com/content/view/289/53/
http://www.gameshark.com/news/17231/Havok-NVIDIA-at-GDC.htm

If that's the case, then only nVidia cards will have hardware support for games using 'HavokFX', and ATi users will still need the Ageia or another third-party PhysX card.

I know ATi cards support SM3.0 too, but their algorithms are completely different to nVidia's, so software designed to specifically use nV's algorithms won't work on any other card.
Free hardware support for 'HavokFX' games will sell a lot of nVidia cards. How will ATi respond to this?
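For context on why physics maps onto shader hardware at all (an illustrative sketch, not how HavokFX was actually implemented): a basic particle integrator updates every particle independently from its own state, which is exactly the data-parallel shape a GPU executes well - the vendor-specific part is only how that per-particle kernel gets expressed through each card's SM3.0 path.

```python
# Why physics suits a GPU: each particle's update depends only on its own
# state, so all updates can run in parallel, one "shader thread" each.
# Illustrative only; this is not HavokFX's actual implementation.

GRAVITY = -9.81
DT = 1 / 60

def update_particle(pos, vel):
    """The per-particle kernel a shader would run for one element."""
    vel = (vel[0], vel[1] + GRAVITY * DT, vel[2])
    pos = tuple(p + v * DT for p, v in zip(pos, vel))
    return pos, vel

# Four particles at (0, 10, 0) moving along +x.
particles = [((0.0, 10.0, 0.0), (1.0, 0.0, 0.0)) for _ in range(4)]

# On a GPU this map would run as thousands of threads in lockstep.
particles = [update_particle(pos, vel) for pos, vel in particles]

print(particles[0])
```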
 
SteveOBHave said:
You'd think that if you design a piece of hardware you would design it around the limitations of the interface... you'd think...
Yes, but there is no other interface they can use; PCI is really the only option, as on most systems the spare PCIe slots are blocked by the video card.

But maybe the limitation isn't the PCI bandwidth but something else. I have not seen an article that really explains why the framerate drops so much in GRAW.
 
Dutch Guy said:
Yes, but there is no other interface they can use; PCI is really the only option, as on most systems the spare PCIe slots are blocked by the video card.

But maybe the limitation isn't the PCI bandwidth but something else. I have not seen an article that really explains why the framerate drops so much in GRAW.

So somewhere between the initial beta testing of the PhysX hardware and the final release, the implementation developed a bottleneck? Bah, these things are supposed to be ironed out before release, not after.
 
SteveOBHave said:
So somewhere between the initial beta testing of the PhysX hardware and the final release, the implementation developed a bottleneck? Bah, these things are supposed to be ironed out before release, not after.
I was only guessing that the PCI bandwidth is the limiting factor; maybe it isn't a limit at all.

As I said in an earlier reply, I would be very surprised if Ageia had continued with the development if they knew from the start that the PCI slot would limit performance.
 
In regards to the PCI bus being limiting, I'm not so sure it is; Ageia have stated in numerous interviews that getting info to and from the PPU needs little speed/bandwidth. Apparently it's what you do with the info that requires massive calculation.

Sending a few thousand coordinates to and from a PPU over a PCI bus shouldn't be too much of an effort, I think; calculating where it all goes and how it interacts is.
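That claim survives a back-of-envelope check (all figures below are illustrative assumptions, not measured numbers): classic 32-bit/33MHz PCI peaks around 133 MB/s in theory, and shipping a few thousand object transforms per frame uses only a few percent of that.

```python
# Back-of-envelope check: is classic PCI bandwidth plausibly the bottleneck?
# All figures are illustrative assumptions, not measurements.

PCI_BANDWIDTH = 133 * 1024 * 1024   # 32-bit/33MHz PCI, ~133 MB/s theoretical peak
OBJECTS = 5000                      # assumed rigid bodies updated per frame
BYTES_PER_OBJECT = 32               # e.g. position (3 floats) + quaternion (4 floats) + padding
FPS = 60

bytes_per_frame = OBJECTS * BYTES_PER_OBJECT   # data returned by the PPU each frame
bytes_per_second = bytes_per_frame * FPS

print(f"per frame:       {bytes_per_frame / 1024:.1f} KiB")
print(f"per second:      {bytes_per_second / (1024 * 1024):.2f} MiB/s")
print(f"PCI budget used: {100 * bytes_per_second / PCI_BANDWIDTH:.1f}%")
```

Even doubling those assumed numbers leaves plenty of headroom, which fits the view that the computation (or driver overhead), not the raw transfer, is where the time goes.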
 
Combat squirrel said:
In regards to the PCI bus being limiting, I'm not so sure it is; Ageia have stated in numerous interviews that getting info to and from the PPU needs little speed/bandwidth. Apparently it's what you do with the info that requires massive calculation.

Sending a few thousand coordinates to and from a PPU over a PCI bus shouldn't be too much of an effort, I think; calculating where it all goes and how it interacts is.
Sounds reasonable; as I said, it would surprise me if the PCI bus was the limit.

Anandtech has an update of the PhysX article up.
There is a bottleneck in the system somewhere near and dear to the PPU. Whether this bottleneck is in the game code, the AGEIA driver, the PCI bus, or on the PhysX card itself, we just can't say at this point. The fact that a driver release did improve the framerates a little implies that at least some of the bottleneck is in the driver.
 
Oh dear !!!

The promo video looks awesome. What kind of hardware could they be running it on? The frame rate seems pretty constant and well above what others are reporting, with none of the pauses and the like that the community has seen.

If the findings in that link are true, that's pretty damning evidence against them.
So much for the added GFX overhead causing the huge performance dips.
It just seems the card is detected and that's about it. I guess on the plus side, if you can get the current level of performance on just GPU+CPU, then once they actually switch hardware acceleration on, that's going to be a massive performance bump.

Their response will make interesting reading that's for sure.
 
Killajaz said:
Blatant con from the developers and manufacturers. I've said it before and I'll say it again: if the game developers weren't so lazy and coded their games properly, you would still be fine with an ancient card like a GeForce 2 GTS or similar.

Totally agree with you on that. There is no requirement for tight code these days; if it's not fast enough, they simply throw more power at it instead.

Think back to the "old days" (if you are old enough to remember) and what they used to be able to get out of a Spectrum or C64 when they put their minds to the challenge. Anyone remember Fairlight on the Spectrum? The music they managed to get out of that little buzzer was amazing...
 
Combjelly said:
The promo video looks awesome. What kind of hardware could they be running it on? The frame rate seems pretty constant and well above what others are reporting, with none of the pauses and the like that the community has seen.

Probably pre-rendered.
 
Magic Man said:
Think back to the "old days" (if you are old enough to remember) and what they used to be able to get out of a Spectrum or C64 when they put their minds to the challenge. Anyone remember Fairlight on the Spectrum? The music they managed to get out of that little buzzer was amazing...
No need to go that far back, just look at what fluid images a 300MHz PlayStation 2 can produce in Gran Turismo 4 :)
 
Dutch Guy said:
No need to go that far back, just look at what fluid images a 300MHz PlayStation 2 can produce in Gran Turismo 4 :)


Yup, original Xbox games like the new Tomb Raider are really quite impressive considering they are running on a 733MHz Celeron and a GF3. If PC developers had the time and money, they could quite easily produce a CellFactor-type game that runs at 50fps without any PPU crap, but time is money nowadays, so they have to go for the easy, slow option.
 
Lanz said:
Yup, original Xbox games like the new Tomb Raider are really quite impressive considering they are running on a 733MHz Celeron and a GF3. If PC developers had the time and money, they could quite easily produce a CellFactor-type game that runs at 50fps without any PPU crap, but time is money nowadays, so they have to go for the easy, slow option.

...quick option? :D EDIT: oh, you meant slow frame rates :) LOL my bad
 