dedicated PhysX GPU...

Hi, I have been thinking about buying myself a 9600GT or similar to use as a dedicated PhysX card, but I thought I would check here first to see if anyone can tell me whether it would be worth doing.

Would I see a big improvement in games that support PhysX, or would this just be a waste of time?

Thanks in advance
 
My own view is that if you have a spare card lying around then do it, but don't buy a card specifically for it.

I would have thought your 280 would easily cope with also dealing with the PhysX stuff.
 
Yeah, it copes fine, but I figured a dedicated PhysX card would give me better performance, no?

I have an 8500GT 512MB lying around, and I also have an 8800GTX 768MB card, but that is in my fiancée's rig...

Would an 8500GT be suitable to use as a PhysX card?
 
Which game or games are you planning to do all this for?

Because as far as I can see there aren't many, surely?

Even the new Ghostbusters uses the CPU for its physics.

I don't think it's worth it personally, but if you have an 8500GT then you could use that, I guess.
 
Even the new Ghostbusters? Why would you expect that to bother using PhysX? I have heard nothing about the game, so forgive me if it's some physics-filled wondrous thing.

There are a few games that use it, like UT3 and GRAW2 (I think), but not many :(

Unless you play a lot of games with it in, I wouldn't bother with a specific card for it, especially since you've got a 280. IIRC someone got worse performance when putting an 8600GT in alongside a decent-ish (worse than a 280) card.
 
Seriously, don't bother unless you have a spare card hanging about. The gains are minimal. It's all a marketing exercise by Nvidia really.
 
how do you "define" any spare card as just a physics card? How do you tell your PC to use it for physics only and not to use your GPU?
 
Even the new Ghostbusters? Why would you expect that to bother using PhysX? I have heard nothing about the game, so forgive me if it's some physics-filled wondrous thing.

Yeah, it has incredible in-game physics. It's meant to be the game that will show the rest of the world that you don't need dedicated GPU physics processors when you can use multi-core CPUs for physics.

Here's a vid of the physics:

http://www.hardocp.com/news.html?news=MzkwMjUsQXByaWwgICAgLDIwMDksaGVudGh1c2lhc3Q=
 
At the moment, there is very little to be gained from having the ability to hardware-accelerate PhysX. The current version only accelerates cloth/water and a few other bits and pieces on the GPU, not the rigid body stuff that physics engines are mostly used for in games.

Modern CPUs can generally power through PhysX calculations faster than the older PhysX cards could, too, though having one in does take load off the CPU, so there is still some benefit.

In short, with SDK 3.0 the rigid body stuff should be accelerated, and if Bullet's CUDA implementation is anything to go by, and NVIDIA have done a good job, we should start to see some very nice speedups. Until then, don't bother :). Unless, as others have said, you have a card lying around, in which case, why not.

The number of games out there at the mo that use the PhysX library is still relatively small, too. This is not because it's bad (it isn't, it's a very good library indeed); it's more that game engines have historically included physics as well, so when a dev house buys the rights to an engine, they tend to use its built-in physics too. Personally I think this will change, as the current crop of physics libraries are quicker, more accurate and just generally better than what the engines themselves provide.
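
If you want to see what I mean about where the acceleration does and doesn't apply, grab the SDK and look at how a scene gets created. From memory (this is the 2.8-era API, so treat it as a sketch rather than gospel), it's along these lines:

    #include "NxPhysics.h"  // PhysX 2.8-era header

    int main() {
        // Create the SDK and a scene. Rigid bodies run on the CPU either
        // way; the simType flag only moves the "hardware" bits (cloth and
        // fluid on a GPU, or most things on an old Ageia PPU) off the CPU.
        NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

        NxSceneDesc sceneDesc;
        sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
        sceneDesc.simType = NX_SIMULATION_SW;  // or NX_SIMULATION_HW
        NxScene* scene = sdk->createScene(sceneDesc);

        // The usual step loop: kick the simulation off, then collect results.
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);

        NxReleasePhysicsSDK(sdk);
        return 0;
    }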
 
Yeah, it has incredible in-game physics. It's meant to be the game that will show the rest of the world that you don't need dedicated GPU physics processors when you can use multi-core CPUs for physics.

Here's a vid of the physics:

http://www.hardocp.com/news.html?news=MzkwMjUsQXByaWwgICAgLDIwMDksaGVudGh1c2lhc3Q=

I'm not 100% sure what that guy is pushing when he says physics shouldn't be done in the realms of stream processing, because he's wrong!

Modern physics calculation techniques (and I can guarantee the VELOCITY stuff in that Infernal engine uses the same techniques as Bullet/PhysX/Havok etc.: a broad/narrow phase contact detection scheme, with a slightly modified variation on a theme in terms of its solver, perhaps LCP-based) are naturally highly parallel in nature. Ramming them all onto a few CISC cores on a modern CPU rather than passing them through 200+ "cores" (stream processors) on a modern GPU makes very little sense if you can do the latter.
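
To see why it parallelises so well, consider the simplest possible broadphase (a toy sketch of the general idea, not any particular engine's code): every AABB-vs-AABB overlap test is independent of every other one, so the whole lot can be farmed out across however many cores or stream processors you happen to have.

    #include <utility>
    #include <vector>

    struct AABB { float mn[3], mx[3]; };  // axis-aligned bounding box

    bool overlaps(const AABB& a, const AABB& b) {
        for (int k = 0; k < 3; ++k)  // separated on any axis = no overlap
            if (a.mx[k] < b.mn[k] || b.mx[k] < a.mn[k]) return false;
        return true;
    }

    // Brute-force broadphase: each (i, j) test touches only its own pair of
    // boxes, so the double loop can be split across threads or GPU cores
    // with no synchronisation until the results are gathered at the end.
    std::vector<std::pair<int, int>> broadphase(const std::vector<AABB>& boxes) {
        std::vector<std::pair<int, int>> pairs;
        for (size_t i = 0; i < boxes.size(); ++i)
            for (size_t j = i + 1; j < boxes.size(); ++j)
                if (overlaps(boxes[i], boxes[j]))
                    pairs.push_back({int(i), int(j)});
        return pairs;
    }

Real engines replace the O(n²) loop with sweep-and-prune or a spatial grid, but the independence of the tests is exactly what maps onto stream processors.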

Got to say, the physics in that video looked pretty ropey too: they had characters literally running through tables and chairs, with the tables and chairs simply going flying! Whoever put together that physical scene had no basic concept of the mass of a heavy wooden table. Try running into a heavy metal-framed chair and see if it goes flying.

What's the point in using realistic physics if it isn't based in reality as we know it? Or were the Ghostbusters enormous metallic robots that weighed tonnes? :p

EDIT: Also interesting is the insinuation that because modern physics engines "use the GPU" for the processing, they take processing time away from rendering, and that's why this engine has such nice shadows etc. What they have failed to take into account is that currently no physics engine can accelerate the physics they are talking about on the GPU at all, so there are literally no examples of the things they claim run better on the CPU in their engine actually being done on the GPU. I'd be very interested to see what level of parallelism they have introduced into their CPU-based physics calculations, how many ms each iteration of their physics takes, and the accuracy of their solver. I'd be willing to bet that all they have done is gone away and YET AGAIN re-implemented physics calculation code when libraries that do it as well or better are available for free.

I'm so glad OpenGL and Direct3D have been accepted as rendering libraries, or we would still be at the stage where every engine insisted on writing its own rendering code too, and every single one would claim to be better than the others (but wouldn't be)... Libraries, people, libraries!
 
Ghostbusters uses its own physics; it looks a lot better than Nvidia's as well, and it's a lot more accessible being on the CPU :D
 
I see what you're saying, it probably is simpler to them. (Havok/PhysX = same result but done differently is, I think, the way to put that; not quoting, just correct me if I'm wrong.)

It looks like it uses the CPU rather than the stream processors of the GPU to me anyway, although they were hinting about Nvidia and physics at the beginning.

Either way, I prefer physics to be on the CPU, particularly physics as advanced as that. Even though chairs and the like go flying, I prefer that to them staying static.
 
Ghostbusters does use its own CPU physics engine. It's quite nice but nothing special; it scales well with multiple CPU threads, but I'd like to see it cope with proper cloth or fluid dynamics. Anyone can chuck 100s or even 1000s of rigid bodies around on the CPU with reasonable performance.
 
TBH I watched that video with my head in my hands.

What I hope is that the person talking in it knows what they are saying is wrong and is trying to pull the wool over the public's eyes. More likely, sadly, they are simply ignorant of how current physics engines work.

Only PhysX and Bullet have attempts at GPU processing in them. Bullet is open source and rarely found outside of the PS3 and small indie games. PhysX only accelerates cloth/liquid and a few other small bits it can calculate on the GPU. Stuff like a Ghostbuster hitting a chair is a rigid body simulation problem, and PhysX DOES NOT simulate this on the GPU at the moment (though it would on an old PPU).

This means they are claiming their physics stuff is better than PhysX because it doesn't clog up the GPU and instead uses the free cycles on the under-used multi-core CPU. However, had they used PhysX (or Havok, or Bullet in software mode), those would all have done exactly the same, possibly with better parallelism, as they have all had a lot of effort spent on getting their parallelism to work well: modern consoles absolutely require it (three cores in a 360, not to mention the inherently parallel Cell CPU in the PS3).

I would hazard a guess that had they given them a go, these engines, which have been in development for a decade-plus in some cases, would in fact do the calculations as quickly as their own code, if not faster, and, if properly utilised, produce more accurate (i.e. realistic) physics.

It's misinformation at its best, and it bugs me.

Download the PhysX SDK (it's free from NV's website); if you have a compiler, have a play with the demos and get a feel for the code. All of the engines operate in a very similar manner.
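
For comparison, here's the equivalent shape in Bullet (a minimal sketch; the class names are Bullet's own, the values just illustrative). Note how it's the same broadphase -> narrowphase -> solver -> world structure all of these engines share:

    #include <btBulletDynamicsCommon.h>

    int main() {
        btDefaultCollisionConfiguration config;
        btCollisionDispatcher dispatcher(&config);   // narrowphase dispatch
        btDbvtBroadphase broadphase;                 // broadphase (dynamic AABB tree)
        btSequentialImpulseConstraintSolver solver;  // constraint/contact solver

        btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
        world.setGravity(btVector3(0.0f, -9.81f, 0.0f));

        // Per-frame step, just like PhysX's simulate/fetchResults pair.
        world.stepSimulation(1.0f / 60.0f);
        return 0;
    }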

I really wish game devs would stop trying to re-invent the wheel whenever they see the chance. Physics calculation is a well-defined problem with mature libraries out there to solve it, and those libraries are only getting better. For the majority of applications they are free to use. So why re-implement physics code every single time? So annoying...

And no, I'm nothing to do with NV, just somebody who has been using physics engines for his own work for the past few years.
 
I'm not 100% sure what that guy is pushing when he says physics shouldn't be done in the realms of stream processing, because he's wrong!

Got to say, the physics in that video looked pretty ropey too: they had characters literally running through tables and chairs, with the tables and chairs simply going flying! Whoever put together that physical scene had no basic concept of the mass of a heavy wooden table.

I 100% agree with pretty much everything you say - nice to see someone with some real insight.

However, what you say about the tables and stuff is a design choice. From my own experience in video game development, it gets very tedious and frustrating when you keep bumping into physics objects that impede your movement, or worse, get elastic-band lag around an item. It might not be realistic, but making objects fly out of the player's way makes for a much smoother gaming experience.
 
Very true. Absolute realism can sometimes be annoying and detract from the gaming experience. It was more of a general rant, given they specifically talked about how realistic it was while their two-tonne Ghostbuster ploughed through the furniture :p

Personally I work in the realm of being as accurate as possible, but the stuff I'm doing is not game-related. I just take my inspiration from the game dev community a lot of the time (as many, many scientists do, but may not admit to it).
 
Ah I see...

I laugh every time I watch that video at them talking down the GPU for physics, when it is by far the more suited device.
 
It's worth doing if your room is getting a bit cold and you want to heat it with your electricity bill.
 
Absolutely,

In fact the original Ageia PPU was about as suited as they come; it was effectively a parallel vector processor. It was a step too far for a small upstart company, though: it simply wasn't good enough while costing too much for an add-on card.

The library they produced, however (which NVIDIA wisely bought, along with some of the devs), was already very refined. It's arguable whether PhysX is better than Havok is better than Bullet etc.; they are each slightly better than the others in certain ways.

But as PhysX heads into proper stream processing of its rigid body stuff with SDK 3.0 (this is the reason you haven't heard much from NVIDIA about PhysX since they released SDK 2.8.1 about a year ago: they have been porting their rigid body code to CUDA), game devs that have had the foresight to use PhysX will be able to improve speed overnight by simply offering a patch (or not even that, if they have programmed it properly). And if at a later stage we see a nice OpenCL port of the engine, then any stream-processor-capable device will be able to get in on the act!

Going by Bullet's rather naive CUDA implementation of its own broadphase, in which they have basically copied NVIDIA's "particles" example from the CUDA SDK directly, I see about a 25% speed-up on this laptop (T7400 CPU and 8800M GTX SLI). On a much more powerful GPU this would be greater still, and I am assuming the PhysX CUDA implementation will be much less naive than Bullet's experiment, as well as doing more than just the broadphase!
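
If you want to reproduce that sort of comparison yourself, the shape of the test is simple (a toy harness I'm sketching here, nothing to do with Bullet's actual benchmarks): time the CPU broadphase over a pile of boxes, then swap the pair loop for a GPU version and time it again.

    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    struct AABB { float mn[3], mx[3]; };

    static bool overlaps(const AABB& a, const AABB& b) {
        for (int k = 0; k < 3; ++k)
            if (a.mx[k] < b.mn[k] || b.mx[k] < a.mn[k]) return false;
        return true;
    }

    int main() {
        // Scatter unit boxes through a 100^3 volume.
        std::mt19937 rng(42);
        std::uniform_real_distribution<float> pos(0.0f, 100.0f);
        std::vector<AABB> boxes(4096);
        for (AABB& b : boxes)
            for (int k = 0; k < 3; ++k) { b.mn[k] = pos(rng); b.mx[k] = b.mn[k] + 1.0f; }

        // Time the brute-force CPU pass; a CUDA broadphase would slot in here.
        auto t0 = std::chrono::steady_clock::now();
        long long hits = 0;
        for (size_t i = 0; i < boxes.size(); ++i)
            for (size_t j = i + 1; j < boxes.size(); ++j)
                if (overlaps(boxes[i], boxes[j])) ++hits;
        auto t1 = std::chrono::steady_clock::now();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("%lld overlapping pairs in %lld ms\n", hits, (long long)ms);
        return 0;
    }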

Think how much time the devs of that engine could have saved if they hadn't re-written physics calculation code for the umpteenth time and had instead used a well-documented code base such as PhysX or Havok, or even Bullet if they want access to the gubbins without paying money... They could have made the shadows even nicer ;)
 