
Nvidia PhysX FLEX

I'm not getting in on the "PhysX is dead" bandwagon, but we are talking about GPU-accelerated PhysX, unless you have info that states GPU-accelerated PhysX is on the new consoles rather than the CPU-based version, which has been on the consoles already.

Physics, even this kind of "PhysX", does not need to run on the GPU; there is more than enough horsepower in modern CPUs, even in the new consoles' weak CPUs, because they have 8 cores. Having said that, there is no reason why it can't run on the GPU via OpenCL.

That has long been the issue with PhysX on the CPU: it's gimped to just one thread and seemingly uses a very badly optimised API on top of that.
More recently though, Metro 2033 with full PhysX actually ran better on an overclocked i7 9## of the time than it did on a GTX 480 (when it was still multi-threaded).
Things changed dramatically after that.

I hope Nvidia puts this right again. I want Mantle, I want PhysX, I want to have my cake and eat it; I don't want to have to choose between GPUs to get just one or the other. Moreover, I want to see Mantle and PhysX in more than just one or two games a year.

So, AMD, hold to your open promises on Mantle; Nvidia, please do the right thing. :)
 
PhysX isn't "in" either console. The software implementation can be used, like any physics API. I think it was more a case of "me too" and placating the few developers paid for PhysX titles so far and in the near future, tbh.

It's a shame for gamers, what's happening at the moment; Mantle is also going in the wrong direction (imo), even though DX needs a kick up the rear. It's not going to benefit everyone, and therefore likely no one, in the long run.

Mantle is heading in an open direction according to AMD, so I fail to see how this can be wrong. Where Mantle is heading, it's far too early to say. PhysX has been around a long time now and it's really gone nowhere. I think Nvidia needs to take it somewhere else and put it to better use.
 

Why does everyone keep saying this? CPUs are not designed in a way that can, in any way, shape or form, keep up with a GPU when it comes to physics calculations. CPUs aren't suited to the job and don't handle those instructions as well.

GPUs are parallel by nature, with tons of stream processors available, which are ideal. So my question to you is: why would you even want to run it on the CPU, ever? You say "modern CPUs" as though we've come on leaps and bounds in a few years, but the fact is we haven't. PhysX as an SDK might well be crippled on the CPU due to poor optimisation, but even if Nvidia took the time to optimise it, you just wouldn't get the same performance, and in turn you'd never be able to render the same level of effects.

GPUs are the choice for rendering these effects, anything else is just taking backward steps.
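For what it's worth, the "parallel by nature" point can be shown in miniature. This is a hypothetical sketch (the `Particles` struct, names and numbers are mine): each particle's update reads only its own state, which is exactly the shape of a GPU kernel with one thread per particle; the CPU version is just a serial launch of the same body.

```cpp
// Hypothetical sketch of why particle effects map so well to GPUs.
#include <cassert>
#include <cstddef>
#include <vector>

struct Particles {         // structure-of-arrays layout, GPU-friendly
    std::vector<float> y;  // height
    std::vector<float> vy; // vertical velocity
};

// Per-particle "kernel": no particle touches another particle's data,
// so all n updates could run simultaneously on n GPU threads.
inline void step_one(Particles& p, std::size_t i, float dt, float g) {
    p.vy[i] -= g * dt;        // apply gravity
    p.y[i]  += p.vy[i] * dt;  // semi-implicit Euler position update
}

// The CPU version is just a serial loop over the same kernel body.
void step_all(Particles& p, float dt, float g = 9.81f) {
    for (std::size_t i = 0; i < p.y.size(); ++i)
        step_one(p, i, dt, g);
}
```

In CUDA or OpenCL the loop disappears and `i` becomes the thread index; with thousands of stream processors the whole update finishes in a handful of lockstep passes, which is the gap being argued about here.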
 

Yes, the GPU has more grunt to run PhysX.

You get a definite performance drop by running it on the GPU; the power you use to compute PhysX on the GPU is taken away from its graphics processing, and the only way around that is to run a second, dedicated PhysX GPU.

When properly optimised, the same PhysX can be offloaded to the CPU to use up its spare capacity and take nothing away from the GPU's performance.

You can also use a combination of both to implement better, more advanced PhysX. A well-written API would offload as much as possible to the CPU, leaving the GPU free to do the job of graphics rendering, or push more onto the GPU if the CPU is weak, or ration the GPU to increase the PhysX effects. This would work for both CUDA and OpenCL.

At the moment Nvidia's own PhysX engine is gimped; it's capable of so much more, and if they allow better CPU optimisation, more and better things can be done with it.
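The rationing idea in this post can be sketched as a tiny scheduler. Everything here is hypothetical — `split_work`, `retune` and the 5% step are my names and numbers, not any real API: give a share of the particles to the CPU, the rest to the GPU, and nudge the share each frame based on how the GPU is coping.

```cpp
// Hypothetical load-splitting sketch: how a hybrid CPU/GPU physics path
// might ration work. A real build would hand the GPU range to a CUDA or
// OpenCL kernel; here we only compute the split.
#include <cassert>
#include <cstddef>
#include <utility>

// How many particles each side gets for a given CPU share in [0, 1].
std::pair<std::size_t, std::size_t> split_work(std::size_t n, float cpu_share) {
    if (cpu_share < 0.0f) cpu_share = 0.0f;
    if (cpu_share > 1.0f) cpu_share = 1.0f;
    const std::size_t cpu_n = static_cast<std::size_t>(n * cpu_share);
    return {cpu_n, n - cpu_n};  // {CPU particles, GPU particles}
}

// Nudge the share each frame: if the GPU blew its frame budget, push
// more physics onto the CPU; if it had slack, hand work back to it.
float retune(float cpu_share, float gpu_ms, float budget_ms) {
    const float step = 0.05f;  // arbitrary 5% adjustment per frame
    if (gpu_ms > budget_ms) cpu_share += step;
    else                    cpu_share -= step;
    return cpu_share < 0.0f ? 0.0f : (cpu_share > 1.0f ? 1.0f : cpu_share);
}
```

A weak-CPU machine would settle near a share of 0 and a strong one higher, which matches the "use whichever side has spare capacity" argument above.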
 
So essentially you think it's better for the CPU to do most of the work, and it's Nvidia crippling PhysX that's the problem.

I can't argue if that's honestly what you think, because it remains to be seen by anyone. But why do you think we've not seen Havok develop their engine to produce similar effects? Or any SDK, for that matter?

The facts are, CPUs are not designed in a way that translates the instructions required for PhysX/physics effects.

GPUs have the necessary parallel stream processors which do.

You'd rather cripple your CPU and take the back road by offloading most of these effects to it than have them running on the GPU - which could eventually be better optimised, with less overhead.

Knock yourself out Winston, by all means! I'll passionately kiss a llama when this happens.

Look at how multithreading in games has only just made advancements; it was only a few years ago that multiple threads were only really used for low-load processes like networking etc. If you think someone is going to just come along and optimise a physics SDK that runs on a CPU with little to no overhead, you're really, really optimistic.
 
It's fairly obvious how gimped PhysX is if you look at Kaap's benchmarks with Titans in the latest Batman - I mean, come on, taking that kind of hit (20fps @ 1080p on minimums) on a freaking Titan just for reactive smoke and a swishy cape - get real :rolleyes:
 

Well, if you think you can do better... I'm not saying it couldn't be better optimised, but it's certainly better than running it on the CPU. I wouldn't use Arkham Origins as a prime example either, as the game falls well short of being well optimised.

And Havok doesn't have anything near PhysX within games, Humbug. Never has a Havok game thrown particles around like PhysX has. I don't think some people appreciate just how difficult it is to translate simple effects like "reactive smoke" into real time and put them in an actual game.

It's all so easy apparently.
 
There is no way in hell a modern CPU could run the level of physics in this demo, no matter how well optimised it was. They just aren't designed to do it, as has been said. A 780 Ti has what, 2880 cores, isn't it? Compared to 8 at most on desktop CPUs.
 

I guess I still need convincing of that; I haven't seen anything in a PhysX game that's not in games that use Havok.
 

I would like to see Kaap's benches now. Every time I have run a PhysX game, it has had quite a big impact on frame rates when the PhysX is in full flow. Oranges hit the nail on the head by picking out 60fps with nothing going on in the background. Do you think that moving a cape or smoke in a realistic way will not impact performance at all?

I am a manual worker and know a little bit about computers, but it seems the forum truly has some experts who know better than Nvidia/AMD and all the game devs.

As for the physics on the CPU in consoles, how you can compare that to GPU PhysX is a joke, Humbug.
 



I see

 
I refer you back to what I originally said:

I would like to see what the difference is between this and the physics used in games where it's not called Nvidia PhysX.

Borderlands 2 had some interesting effects, but that was a game crying out for the imagination to run wild given its cartoon nature, and it delivered on that. I'm not convinced the same thing could not have been done with Havok or Bullet.

Other games like the Metro series, PlanetSide 2 etc... I don't see anything that's not in BF4, Crysis 3 and the like.

The question for me is: does it have to be Nvidia, or AMD? Don't we already have what they have, minus the brand ties?

 
I'm not saying it's easy - what I am saying is that people seem to accept a 20fps hit on minimums at a fairly basic 1080p res (God knows what kind of hit you take at 1440p+) for effects that, frankly, don't warrant the performance drop.

I don't really care how complex it actually is - if that's the kind of hit I have to accept, I'd rather have a more basic Havok version and higher frame rates. The point is, I pay this premium to Nvidia to have these extra features - and PhysX is still a bitter pill to swallow, imo.
 