
PhysX in 2010

OK, here's the score: my laptop is a Dell XPS M1730 with two 512MB 8800M GTX graphics cards and an Ageia PhysX card. I read an article today saying that Nvidia have discontinued support for the Ageia card in their newer drivers, so with the newer drivers the PhysX load will instead be moved onto one of the GPUs. Now I may be wrong, but I see that as a problem performance-wise.

I have the following options:

1: Use stock GFX drivers and stock PhysX drivers
2: Use newest GFX drivers and stock PhysX drivers
3: Use newest GFX and PhysX drivers

My plan was to run 3DMark06 for all three configs and stay with the highest-scoring one, but I don't know if 3DMark06 supports PhysX anyway - which benchmark should I use?

Also, if PhysX isn't run on the PPU by the newer drivers, will it be moved onto a GPU (hurting SLI performance) or onto the CPU? Thanks
 
With that kind of setup, I would imagine option 2 would give the best results. Use 3DMark Vantage; the 06 one is CPU-dependent if I remember correctly.
 
Also, if PhysX isn't run on the PPU by the newer drivers, will it be moved onto a GPU (hurting SLI performance) or onto the CPU? Thanks
nVidia killed PPU support and nerfed CPU performance, so it will end up hurting your SLI performance. Your PPU is now a paperweight.
 
I see, so just to clarify: they removed support for PPUs (as they never got any money from their sale), meaning you either have to let a GPU do it (hurting performance) or let the CPU do it (murdering performance). Wow, I think my company will now boycott Nvidia products purely out of spite :P
 
The PPU doesn't have the same capabilities as the GPU - with newer features the old PPU architecture simply can't process them in hardware and the load has to be shifted to the CPU for those features anyway. It's a bit unfortunate, and nVidia probably made no effort to work around it as they'd have no motivation to.
 
The PPU doesn't have the same capabilities as the GPU - with newer features the old PPU architecture simply can't process them in hardware and the load has to be shifted to the CPU for those features anyway.
But that was the whole point of the PPU hardware, which was meant to be highly parallelised to outperform CPUs and GPUs for physics. Unfortunately nVidia killed it off in order to focus on the GPU implementation, whilst also butchering CPU performance to make the advantage look more considerable. Obviously there are risks to being an early adopter, but it doesn't help when a big company like nVidia behaves in such an underhanded way, made all the more transparent by the disabling of hardware PhysX support whenever an ATI/AMD card is detected.
 
nVidia haven't butchered CPU performance - I've tested CPU-only PhysX alongside comparable CPU-based APIs like ODE, Bullet, etc. and performance is broadly similar.

There are some parts that aren't the most optimally implemented on the CPU, but we are talking less than a 5% performance difference, and last time I looked at the SDK the developer could recompile with support for more optimal routines, at the expense of breaking compatibility with some older CPUs.
 
nVidia haven't butchered CPU performance - I've tested CPU-only PhysX alongside comparable CPU-based APIs like ODE, Bullet, etc. and performance is broadly similar.

There are some parts that aren't the most optimally implemented on the CPU, but we are talking less than a 5% performance difference, and last time I looked at the SDK the developer could recompile with support for more optimal routines, at the expense of breaking compatibility with some older CPUs.

They restricted it to running on one core only; that really does look like butchering to most people.
 
It's not restricted to one core as such - that's more lazy developers. PhysXCore can be compiled with full multi-threading if you wish.
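To give a rough idea of what that means (a hypothetical C++ sketch of my own, not the actual PhysX SDK API): rigid-body work naturally breaks into independent groups of bodies, and nothing stops a developer from stepping those groups on separate worker threads.

[code]
#include <functional>
#include <thread>
#include <vector>

// Hypothetical sketch only - not PhysX SDK code. Independent groups of bodies
// ("islands") don't interact with each other, so each one can be stepped on
// its own worker thread; CPU physics isn't inherently tied to a single core.
struct Body   { float pos[3]; float vel[3]; };
struct Island { std::vector<Body> bodies; };

static void stepIsland(Island& island, float dt)
{
    for (Body& b : island.bodies)          // integrate every body in this island
        for (int k = 0; k < 3; ++k)
            b.pos[k] += b.vel[k] * dt;
}

void stepAllIslands(std::vector<Island>& islands, float dt)
{
    std::vector<std::thread> workers;
    workers.reserve(islands.size());
    for (Island& island : islands)         // toy scheme: one worker per island
        workers.emplace_back(stepIsland, std::ref(island), dt);
    for (std::thread& t : workers)
        t.join();                          // wait for the whole step to finish
}
[/code]

A real engine would use a persistent task scheduler rather than spawning threads every step, but the point stands: the work divides up fine if the developer wires it up.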
 
It's not restricted to one core as such - that's more lazy developers. PhysXCore can be compiled with full multi-threading if you wish.

Doesn't that take us back to the argument that a tool is only as good as the one who uses it? That's currently what makes PhysX crap: devs are implementing it really poorly. It really does seem that PhysX is only tacked onto games as part of the TWIMTBP programme.
 
I suspect devs really can't be bothered with something that doesn't work on half the install base - just my two pence
 
I suspect devs really can't be bothered with something that doesn't work on half the install base - just my two pence

Yeah, that's the problem with PhysX... it will never take off until it does; in the meantime we see half-arsed efforts that don't really justify using it.
 
I suspect devs really can't be bothered with something that doesn't work on half the install base - just my two pence

Even worse, it's less than half now with the recent change in market share. But yeah, as you said, it really won't take off until it's manufacturer-agnostic, and I can't see that in PhysX's future, so the best hope is for an open standard separate from AMD or nVidia.
 
Even worse, it's less than half now with the recent change in market share. But yeah, as you said, it really won't take off until it's manufacturer-agnostic, and I can't see that in PhysX's future, so the best hope is for an open standard separate from AMD or nVidia.

Nvidia should allow PhysX to be done on the CPU; this would allow more devs to use it...
 
By the time proper physical simulations are required for an immersive game we will have massively parallel computers anyway, rendering the whole thing a moot point. PhysX is cool and fun, but personally I consider it a value-add more than anything.

[edit] It was a bit trashy of nVidia to buy out Ageia and then stop supporting their card though, although I guess you gotta get people to upgrade somehow.
 
Nvidia should allow PhysX to be done on the CPU; this would allow more devs to use it...

It *can* run on the CPU, but the "gimmick" is that it runs "better" on the GPU. That's why it's often run on only one core in comparisons: it makes the difference between the GPU and CPU look a lot bigger than it otherwise would be. They don't want it to run well on the CPU; it's in nVidia's best interests to keep it that way so they have a checkbox feature to put on their graphics card boxes.
 
It *can* run on the CPU, but the "gimmick" is that it runs "better" on the GPU. That's why it's often run on only one core in comparisons: it makes the difference between the GPU and CPU look a lot bigger than it otherwise would be. They don't want it to run well on the CPU; it's in nVidia's best interests to keep it that way so they have a checkbox feature to put on their graphics card boxes.

I see, well it's a catch-22 then, as game devs aren't going to want to spend time and money coding features that won't even be seen by many people... It's a shame it's so restricted, but as you say they want to sell lots of gfx cards regardless of the methods used.
 
Nvidia should allow PhysX to be done on the CPU; this would allow more devs to use it...

Fluid dynamics, softbody effects, etc. run poorly on a CPU - in fact they often struggle to maintain real-time performance - whereas on a massively parallel device they don't take an immense amount of processing power to put out acceptable performance. Sadly we don't really see any of these kinds of effects from PhysX so far, as most developers are too scared of alienating a significant proportion of their customers. Most of the rigid-body effects we've seen so far would run reasonably well on the CPU, and only a very few things need the GPU at all (e.g. Mafia 2 only really needs the GPU in many outdoor scenes because they've put dynamic cloth on each and every NPC even though you don't really notice it - though to be fair the indoor/mission-specific effects are more intensive and do make better use of the GPU if enabled, even if some of them seem to have been implemented by someone with a poor understanding of physics).
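For anyone wondering why those effects favour massively parallel hardware, here's a hypothetical toy sketch (my own names, not from any SDK): a deliberately simplified particle update with no particle-to-particle interaction. Every iteration is independent of every other, so a GPU can run one thread per particle, while a CPU has to chew through the same loop a few elements at a time.

[code]
// Hypothetical toy example - not PhysX code. Each particle update below is
// independent of every other particle, so the loop maps naturally onto
// thousands of GPU threads (one per particle).
struct Particle { float pos[3]; float vel[3]; };

void stepParticles(Particle* p, int count, float dt)
{
    const float gravity = -9.81f;
    for (int i = 0; i < count; ++i)        // no cross-particle dependency
    {
        p[i].vel[1] += gravity * dt;       // apply gravity on the vertical axis
        for (int k = 0; k < 3; ++k)
            p[i].pos[k] += p[i].vel[k] * dt;   // integrate position
    }
}
[/code]

Real fluid and cloth solvers also need neighbour interactions, but even those boil down to thousands of similar, mostly independent calculations per frame, which is exactly what a GPU is built for.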
 
Guys, I've been looking into it more: there is nothing done now that wouldn't work on the old PPUs. It's just that Nvidia stopped supporting them and didn't take the tech further, because they wanted to force people with them to buy an Nvidia card in order to use PhysX.

Here are a few more bullet points on it (with a small sketch of the x87/SSE2 point after the first list):

> The CPU-based PhysX mode mostly uses only the older x87 instruction set instead of SSE2.
> Testing other compilations in the Bullet benchmark shows only a maximum performance increase of 10% to 20% when using SSE2.
> The optimization performance gains would thus only be marginal in a purely single-core application.
> Contrary to many reports, CPU-based PhysX supports multi-threading.
> There are scenarios in which PhysX is better on the CPU than the GPU.
> A game like Metro 2033 shows that CPU-based PhysX could be quite competitive.
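As a rough illustration of the x87 vs SSE2 point (a hypothetical toy example of my own, not PhysX source): the same inner loop written as plain scalar code and with SSE2 intrinsics. The SSE2 path handles four floats per instruction, but loops like this are only a slice of a whole physics frame, which is presumably why the measured overall gain was 10% to 20% rather than anything like 4x.

[code]
#include <emmintrin.h>  // SSE/SSE2 intrinsics

// Plain scalar version - in a 32-bit x87 build each float goes through the
// FPU one value at a time.
void scaleAddScalar(const float* a, const float* b, float s, float* out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + s * b[i];
}

// SSE2 version - processes four floats per instruction.
void scaleAddSSE2(const float* a, const float* b, float s, float* out, int n)
{
    __m128 vs = _mm_set1_ps(s);
    int i = 0;
    for (; i + 4 <= n; i += 4)
    {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, _mm_mul_ps(vb, vs)));
    }
    for (; i < n; ++i)                     // handle the leftover elements
        out[i] = a[i] + s * b[i];
}
[/code]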

Then why is the performance picture so dreary right now?

> With CPU-based PhysX, the game developers are largely responsible for fixing thread allocation and management, while GPU-based PhysX handles this automatically.
> This is a time and money issue for the game developers.
> The current situation is also architected to help promote GPU-based PhysX over CPU-based PhysX.
> With SSE2 optimizations and good threading management for the CPU, modern quad-core processors would be highly competitive compared to GPU PhysX. Predictably, Nvidia’s interest in this is lackluster.
 
There's nothing done now in any current game, no, but the latest versions of the API support features which won't work on the PPU (at least not without considerable effort). I'm not a great fan of the PPU being made obsolete, but tbh it has had its day... such as it was - IIRC even when properly supported it's only equivalent to a 9600GT for hardware physics, which isn't enough for more modern titles (apparently).

While porting the code to fully use SSE2 would increase performance 4-fold in certain areas, it doesn't actually increase overall API performance significantly - I believe they came up with a figure in single-digit percentages from profiling.
 