
PhysX87: Software Deficiency

Interesting, so basically it's a scam by Nvidia to make games run like crap on non-PhysX-compatible hardware, by using outdated, inefficient code that performs badly on today's CPUs.

But both Ageia and Nvidia use PhysX to highlight the advantages of their hardware over the CPU for physics calculations. In Nvidia's case, they are also using PhysX to differentiate from AMD's GPUs. The sole purpose of PhysX is to act as a competitive differentiator that makes Nvidia's hardware look good and sells more GPUs. Part of that is making sure that Nvidia GPUs look a lot better than the CPU, since that is what they claim in their marketing. Using x87 definitely makes the GPU look better, since the CPU will perform worse than if the code were properly generated to use SSE instructions.
 
Are Nvidia still insisting that you need more graphics processing power than CPU power?

This article reminded me of the time they were claiming SLI gives you greater performance than a quad-core, or something along those lines.
 
Interesting, so basically it's a scam by Nvidia to make games run like crap on non-PhysX-compatible hardware, by using outdated, inefficient code that performs badly on today's CPUs.

And you can see how they optimised the CPU PhysX code on consoles: they don't want PhysX to run like crap there, since they have nothing to gain from that and can't sell graphics cards to console players.

Nvidia already has PhysX running on consoles using the AltiVec extensions for PPC, which are very similar to SSE. It would probably take about a day or two to get PhysX to emit modern SSE2 code, and several weeks for compatibility testing. In fact, for backwards compatibility, PhysX could select at install time whether to use an SSE2 version or an x87 version – just in case the elusive gamer with a Pentium Overdrive decides to try it.
 
Well, let's not all act like this is a big surprise; this is Nvidia we're talking about, after all: a company whose policies are driven by its own insecurities and mismanagement.
 
It's weird. People boycott Microsoft products, get them on anti-competitive charges, and even get the European Commission to fine them and force changes upon them.

Yet people continue to buy nVidia products and don't seem fazed by their outright childish approach to outside competition, marketing, and indeed customer relations.

I wish they would really struggle for 18-24 months and thus bring about a shake-up of how the company is run from the ground up.
 
Hasn't this been the industry's worst-kept secret for a while now? Nice to see somebody put together a decent article showing NV's crappy practices.
 
All this would do is marginally increase performance with primitive physics; more advanced soft-body effects would still remain outside the performance realm of the CPU.

The author seems to be ignoring the fact that floating-point performance is only one part of the performance picture for physics processing, and you'd still be bottlenecked in other areas - so it wouldn't even be a 2x speed-up, let alone 4x.
 
PhysX itself is actually fairly good - the solver is robust, meaning you don't have to dedicate as much time to making sure your simulation doesn't "explode" as with other physics APIs. It also handles collisions and reactions better in a gaming context than other libraries, which tend to concentrate on semi-realistic outcomes that can result in the player getting "hung up" on objects or blocked by them.

As far as the library goes, it has the potential to enable much more immersive environments with widespread use of destruction and soft-body effects, but no developer wants to do that due to the dependency on GPU acceleration. These effects simply can't be done on the CPU in the same manner; sure, for limited use of the effects some high-end programmers (not typically found at your average studio) could write a specialised solver, but that's not very useful in a wider context.

I'm not a fan of what they've done with PhysX - it's effectively put physics back in the stone age for the time being - but the library itself is capable of much more than people realise. In my personal projects I've swapped to ODE, and most of the commercial projects I have links with have swapped to Havok.
 
It's weird. People boycott Microsoft products, get them on anti-competitive charges, and even get the European Commission to fine them and force changes upon them.

Yet people continue to buy nVidia products and don't seem fazed by their outright childish approach to outside competition, marketing, and indeed customer relations.

I wish they would really struggle for 18-24 months and thus bring about a shake-up of how the company is run from the ground up.

I don't understand the Nvidia loyalty either; as their customer you certainly get shafted, and if you aren't their customer they still shaft you with PhysX. Time to boycott.
 
I don't understand the Nvidia loyalty either; as their customer you certainly get shafted, and if you aren't their customer they still shaft you with PhysX. Time to boycott.

Because they make a good product. Take the 480, for example: yes it is expensive, yes it is a power hog, and yes it can cook your bacon after a good gaming session, but nobody can deny it is currently the single fastest GPU money can buy.
 
I don't understand the Nvidia loyalty either; as their customer you certainly get shafted, and if you aren't their customer they still shaft you with PhysX. Time to boycott.

It's not nVidia loyalty as such... just some people prefer that choice over ATI, and there's very little other option.
 
Because they make a good product. Take the 480, for example: yes it is expensive, yes it is a power hog, and yes it can cook your bacon after a good gaming session, but nobody can deny it is currently the single fastest GPU money can buy.

Not surprising, seeing as it came out seven months after the competition. When you factor in the delay, power usage, heat, price and the marginal performance lead it has over its direct competitor, the 5870, is it really a good product?
 